Dataset columns:
content: string (lengths 85 to 101k)
title: string (lengths 0 to 150)
question: string (lengths 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (lengths 35 to 137)
Q: Replacing the last row value of a specific column value I have a dataframe df which looks something like this: key id x 0.6 x 0.5 x 0.43 x 0.56 y 13 y 14 y 0.4 y 0.1 I'd like to replace the last value for every key value with 0, so that the df looks like this: key id x 0.6 x 0.5 x 0.43 x 0 y 13 y 14 y 0.4 y 0 I've tried the following: for i in df['key'].unique(): df.loc[df['key'] == i, 'id'].iat[-1] = 0 The problem is that it does not replace the actual value in the df. What am I missing? And perhaps there's an even better (performing) way to tackle this problem. A: Use Series.duplicated to get the last value per key and set 0 with DataFrame.loc: df.loc[~df['key'].duplicated(keep='last'), 'id'] = 0 print (df) key id 0 x 0.60 1 x 0.50 2 x 0.43 3 x 0.00 4 y 13.00 5 y 14.00 6 y 0.40 7 y 0.00 How it works: print (df.assign(mask=df['key'].duplicated(keep='last'), invert_mask=~df['key'].duplicated(keep='last'))) key id mask invert_mask 0 x 0.60 True False 1 x 0.50 True False 2 x 0.43 True False 3 x 0.00 False True 4 y 13.00 True False 5 y 14.00 True False 6 y 0.40 True False 7 y 0.00 False True Another solution is to simply multiply the id column by the boolean mask: df['id'] = df['key'].duplicated(keep='last').mul(df['id']) print (df) key id 0 x 0.60 1 x 0.50 2 x 0.43 3 x 0.00 4 y 13.00 5 y 14.00 6 y 0.40 7 y 0.00 A: You can use groupby.cumcount to access the nth row per group from the end (with ascending=False), and boolean indexing: df.loc[df.groupby('key').cumcount(ascending=False).eq(0), 'id'] = 0 output: key id 0 x 0.60 1 x 0.50 2 x 0.43 3 x 0.00 4 y 13.00 5 y 14.00 6 y 0.40 7 y 0.00 Intermediate: key id cumcount eq(0) 0 x 0.60 3 False 1 x 0.50 2 False 2 x 0.43 1 False 3 x 0.56 0 True 4 y 13.00 3 False 5 y 14.00 2 False 6 y 0.40 1 False 7 y 0.10 0 True You can easily adapt this to any row; for example, for the second-to-last row per group: df.loc[df.groupby('key').cumcount(ascending=False).eq(1), 'id'] = 0 For the third row per group: df.loc[df.groupby('key').cumcount().eq(2), 'id'] = 0
Replacing the last row value of a specific column value
I have a dataframe df which looks something like this: key id x 0.6 x 0.5 x 0.43 x 0.56 y 13 y 14 y 0.4 y 0.1 I'd like to replace the Last value for every key value with 0, so that the df looks like this: key id x 0.6 x 0.5 x 0.43 x 0 y 13 y 14 y 0.4 y 0 I've tried the following: for i in df['key'].unique(): df.loc[df['key'] == i, 'id'].iat[-1] = 0 the problem is it does not replace the actual value in the df. What am I missing? and perhaps there's an even better (performing) way to tackle this problem.
[ "Use Series.duplicated for get last value per key and set 0 in DataFrame.loc:\ndf.loc[~df['key'].duplicated(keep='last'), 'id'] = 0\n\nprint (df)\n key id\n0 x 0.60\n1 x 0.50\n2 x 0.43\n3 x 0.00\n4 y 13.00\n5 y 14.00\n6 y 0.40\n7 y 0.00\n\nHow it working:\nprint (df.assign(mask=df['key'].duplicated(keep='last'),\n invert_mask=~df['key'].duplicated(keep='last')))\n key id mask invert_mask\n0 x 0.60 True False\n1 x 0.50 True False\n2 x 0.43 True False\n3 x 0.00 False True\n4 y 13.00 True False\n5 y 14.00 True False\n6 y 0.40 True False\n7 y 0.00 False True\n\nAnother solution is simply multiple id column with boolean mask:\ndf['id'] = df['key'].duplicated(keep='last').mul(df['id'])\nprint (df)\n key id\n0 x 0.60\n1 x 0.50\n2 x 0.43\n3 x 0.00\n4 y 13.00\n5 y 14.00\n6 y 0.40\n7 y 0.00\n\n", "You can use groupby.cumcount to access the nth row per group from the end (with ascending=False), and boolean indexing:\ndf.loc[df.groupby('key').cumcount(ascending=False).eq(0), 'id'] = 0\n\noutput:\n key id\n0 x 0.60\n1 x 0.50\n2 x 0.43\n3 x 0.00\n4 y 13.00\n5 y 14.00\n6 y 0.40\n7 y 0.00\n\nIntermediate:\n key id cumcount eq(0)\n0 x 0.60 3 False\n1 x 0.50 2 False\n2 x 0.43 1 False\n3 x 0.56 0 True\n4 y 13.00 3 False\n5 y 14.00 2 False\n6 y 0.40 1 False\n7 y 0.10 0 True\n\nYou can easily adapt to any row, example for the second to last row per group:\ndf.loc[df.groupby('key').cumcount(ascending=False).eq(1), 'id'] = 0\n\nFor the third row per group:\ndf.loc[df.groupby('key').cumcount().eq(2), 'id'] = 0\n\n" ]
[ 3, 2 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074532302_pandas_python.txt
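The loop in this question fails because df.loc[df['key'] == i, 'id'] returns a copy, so the .iat[-1] = 0 assignment writes to that copy rather than to df. Besides the duplicated/cumcount answers above, a groupby-based variant is possible; the sketch below rebuilds the sample data from the question.

import pandas as pd

# Sample frame reconstructed from the question.
df = pd.DataFrame({"key": list("xxxxyyyy"),
                   "id": [0.6, 0.5, 0.43, 0.56, 13, 14, 0.4, 0.1]})

# groupby().tail(1) returns the last row of each group; assigning through the
# original frame's .loc with that index modifies df in place.
df.loc[df.groupby("key").tail(1).index, "id"] = 0
print(df)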
Q: I have an interval of integers that comprises some inner intervals. Given these intervals I want to compute a list including the intervals between Inner intervals are always inside the global one. All intervals are integer, left-closed, right-open intervals. Let's take this example. The "global" interval is [0, 22[. "Inner" intervals are [3, 6[ and [12, 15[. For this example I expect : [0, 3[ U [3, 6[ U [6, 12[ U [12, 15[ U [15, 22[ I've tried to define a function but then messed up with indices while iterating over intervals. def allspans(r, spans): pass allspans((0, 22), [(3,6), (12,15)]) # expected : [(0, 3), (3, 6), (6, 12), (12, 15), (15, 22)] A: Yes you have to iterate over your spans but take care of maintaining a position to correctly fill the spaces between. from typing import Generator def allspans(r, spans) -> Generator: pos = 0 for lower, upper in spans: if pos < lower: yield pos, lower yield lower, upper pos = upper if pos <= r[1]: yield pos, r[1] I find it easier to use a Generator. Just use list() to convert to a List. list(allspans((0, 22), [(3,6), (12,15)])) # [(0, 3), (3, 6), (6, 12), (12, 15), (15, 22)] A: Using a normal loop: def allspans(r, spans): intervals = [] intervals.append((r[0], spans[0][0])) for i in range(len(spans)): current_span = spans[i] if i != 0: intervals.append((spans[i - 1][1], current_span[0])) intervals.append((current_span[0], current_span[1])) intervals.append((spans[-1][1], r[1])) return intervals print(allspans((0, 22), [(3, 6), (12, 15)])) # [(0, 3), (3, 6), (6, 12), (12, 15), (15, 22)] A: Using itertools.chain and itertools.pairwise (Python 3.10+): from itertools import chain, pairwise def all_spans(r, spans): start, end = r it = chain((start,), chain.from_iterable(spans), (end,)) return [t for t in pairwise(it) if t[0] != t[1]] First, we construct an iterator it over all of the interval endpoints in order; then the sub-ranges are all the pairs of consecutive endpoints, excluding the empty sub-ranges where two consecutive endpoints are equal.
I have an interval of integers that comprises some inner intervals. Given these intervals I want to compute a list including the intervals between
Inner intervals are always inside the global one. All intervals are integer, left-closed, right-open intervals. Let's take this example. The "global" interval is [0, 22[. "Inner" intervals are [3, 6[ and [12, 15[. For this example I expect : [0, 3[ U [3, 6[ U [6, 12[ U [12, 15[ U [15, 22[ I've tried to define a function but then messed up with indices while iterating over intervals. def allspans(r, spans): pass allspans((0, 22), [(3,6), (12,15)]) # expected : [(0, 3), (3, 6), (6, 12), (12, 15), (15, 22)]
[ "Yes you have to iterate over your spans but take care of maintaining a position to correctly fill the spaces between.\nfrom typing import Generator\n\ndef allspans(r, spans) -> Generator:\n pos = 0\n for lower, upper in spans:\n if pos < lower:\n yield pos, lower\n yield lower, upper\n pos = upper\n if pos <= r[1]:\n yield pos, r[1]\n\nI find it easier to use a Generator.\nJust use list() to convert to a List.\nlist(allspans((0, 22), [(3,6), (12,15)])) # [(0, 3), (3, 6), (6, 12), (12, 15), (15, 22)]\n\n", "Using a normal loop:\ndef allspans(r, spans):\n intervals = []\n intervals.append((r[0], spans[0][0]))\n\n for i in range(len(spans)):\n current_span = spans[i]\n if i != 0:\n intervals.append((spans[i - 1][1], current_span[0]))\n intervals.append((current_span[0], current_span[1]))\n\n intervals.append((spans[-1][1], r[1]))\n\n return intervals\n\n\nprint(allspans((0, 22), [(3, 6), (12, 15)]))\n# [(0, 3), (3, 6), (6, 12), (12, 15), (15, 22)]\n\n", "Using itertools.chain and itertools.pairwise (Python 3.10+):\nfrom itertools import chain, pairwise\n\ndef all_spans(r, spans):\n start, end = r\n it = chain((start,), chain.from_iterable(spans), (end,))\n return [t for t in pairwise(it) if t[0] != t[1]] \n\nFirst, we construct an iterator it over all of the interval endpoints in order; then the sub-ranges are all the pairs of consecutive endpoints, excluding the empty sub-ranges where two consecutive endpoints are equal.\n" ]
[ 1, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0074531960_python.txt
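One note on the itertools answer above: itertools.pairwise only exists on Python 3.10+. On older interpreters the same idea can be expressed with zip, as in this sketch.

from itertools import chain

def all_spans(r, spans):
    # All endpoints in order: global start, every inner endpoint, global end.
    points = list(chain((r[0],), chain.from_iterable(spans), (r[1],)))
    # Pair consecutive endpoints and drop empty sub-intervals.
    return [(a, b) for a, b in zip(points, points[1:]) if a != b]

print(all_spans((0, 22), [(3, 6), (12, 15)]))
# [(0, 3), (3, 6), (6, 12), (12, 15), (15, 22)]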
Q: Python - Summing values and number of duplicates I have csv file looking like this: part of the data. X and Y are my coordinates of pixel. I need to filter column ADC only for TDC values (in this column are also 0 values), and after this I need to sum up the energy value for every unique value of pixel, so for every x=0 y=0, x=0 y=1, x=0 y=2... until x=127 y=127. And in another column I need the number of duplicates for the pixel coordinates that occurs (so the number of places/rows from which I need to make a summation in the Energy column). I don't know how to write the appropriate conditions for this kind of task. I will appreciate any type of help. A: The following StackOverflow question and answers might help you out: Group dataframe and get sum AND count? But here is some code for your case which might be useful, too: # import the pandas package, for doing data analysis and manipulation import pandas as pd # create a dummy dataframe using data of the type you are using (I hope) df = pd.DataFrame( data = { "X": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "Y": [0, 0, 1, 1, 1, 1, 1, 1, 2, 2], "ADC": ["TDC", "TDC", "TDC", "TDC", "TDC", 0, 0, 0, "TDC", "TDC"], "Energy": [0, 0, 1, 1, 1, 2, 2, 2, 3, 3], "Time": [1.2, 1.2, 2.3, 2.3, 3.6, 3.61, 3.62, 0.66, 0.67, 0.68], } ) # use pandas' groupby method and aggregation methods to get the sum of the energy in every unique combination of X and Y, and the number of times those combinations appear df[df["ADC"] == "TDC"].groupby(by=["X","Y"]).agg({"Energy": ['sum','count']}).reset_index() The result I get from this in my dummy example is: X Y Energy sum count 0 0 0 0 2 1 0 1 3 3 2 0 2 6 2
Python - Summing values and number of duplicates
I have csv file looking like this: part of the data. X and Y are my coordinates of pixel. I need to filter column ADC only for TDC values (in this column are also 0 values), and after this I need to sum up the energy value for every unique value of pixel, so for every x=0 y=0, x=0 y=1, x=0 y=2... until x=127 y=127. And in another column I need the number of duplicates for the pixel coordinates that occurs (so the number of places/rows from which I need to make a summation in the Energy column). I don't know how to write the appropriate conditions for this kind of task. I will appreciate any type of help.
[ "The following StackOverflow question and answers might help you out:\nGroup dataframe and get sum AND count?\nBut here is some code for your case which might be useful, too:\n# import the pandas package, for doing data analysis and manipulation\nimport pandas as pd\n\n# create a dummy dataframe using data of the type you are using (I hope)\ndf = pd.DataFrame(\n data = {\n \"X\": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n \"Y\": [0, 0, 1, 1, 1, 1, 1, 1, 2, 2],\n \"ADC\": [\"TDC\", \"TDC\", \"TDC\", \"TDC\", \"TDC\", 0, 0, 0, \"TDC\", \"TDC\"],\n \"Energy\": [0, 0, 1, 1, 1, 2, 2, 2, 3, 3],\n \"Time\": [1.2, 1.2, 2.3, 2.3, 3.6, 3.61, 3.62, 0.66, 0.67, 0.68],\n }\n)\n\n# use pandas' groupby method and aggregation methods to get the sum of the energy in every unique combination of X and Y, and the number of times those combinations appear\ndf[df[\"ADC\"] == \"TDC\"].groupby(by=[\"X\",\"Y\"]).agg({\"Energy\": ['sum','count']}).reset_index()\n\nThe result I get from this in my dummy example is:\n X Y Energy \n sum count\n0 0 0 0 2\n1 0 1 3 3\n2 0 2 6 2\n\n" ]
[ 0 ]
[]
[]
[ "data_analysis", "duplicates", "python", "sum" ]
stackoverflow_0074532074_data_analysis_duplicates_python_sum.txt
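The MultiIndex columns produced by .agg({"Energy": ['sum','count']}) in the answer above can be avoided with named aggregation, which gives flat column names. A sketch follows; the file name and column names are assumed from the question.

import pandas as pd

df = pd.read_csv("data.csv")  # assumed file with X, Y, ADC and Energy columns

result = (df[df["ADC"] == "TDC"]
          .groupby(["X", "Y"], as_index=False)
          .agg(energy_sum=("Energy", "sum"),     # summed energy per pixel
               n_rows=("Energy", "count")))      # number of contributing rows
print(result)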
Q: Is it possible to access keyword arguments passed to a Field in a Pydantic BaseModel? I need to access my_key in a Pydantic Field, as shown below: class MyModel(BaseModel): x: str = Field(default=None, my_key=7) def print_field_objects(self): for obj in self.something_something: # What do I use here print(obj.my_key) # ... so that i can use my_key? I tried to see what self contains, like self.__dict__ but I wasn't able to find it. Is it even possible to access my_key? I need it for my FastAPI endpoint. A: Field doesn't take arbitrary arguments, what exactly are you trying to achieve, perhaps there's a more appropriate solution. Per your other question, x is a class attribute, whose definition can be found in self.__class__.__fields__, while its instance value can be found by calling self.x A: You can generate the model's JSON Schema representation using the BaseModel's .schema() method, and then rely on this functionality of Field customization that: ** any other keyword arguments (e.g. examples) will be added verbatim to the field's schema In other words, any other arbitrary keyword arguments passed to Field that isn't consumed or used by Pydantic (or by any custom creation/instantiation) would be present in that field's JSON Schema representation. So, in this case, your my_key should be present on the model's schema: In [4]: class MyModel(BaseModel): ...: x: str = Field(default=None, my_key=7) ...: In [5]: MyModel.schema() Out[5]: {'title': 'MyModel', 'type': 'object', 'properties': {'x': {'title': 'X', 'my_key': 7, 'type': 'string'}}} ^^^^^^^^^^^^ |||||||||||| You can then have an instance method that looks like this: In [21]: class MyModel(BaseModel): ...: x: str = Field(default=None, my_key=7) ...: y: int = Field(default=1, my_key=42) ...: ...: def print_field_objects(self): ...: for field_name, field in self.schema()["properties"].items(): ...: print(field["my_key"]) ...: In [22]: m1 = MyModel() In [23]: m1.print_field_objects() 7 42 But, since a model's schema and Field definitions are tied to the class, not the instance, multiple instances would have the same value: In [28]: m1 = MyModel() In [29]: m1.print_field_objects() 7 42 In [30]: m2 = MyModel() In [31]: m2.print_field_objects() 7 42 So, it would be more accurate to make it a class method instead, since my_key won't change anyway with different values of the field x or with different instances: In [35]: class MyModel(BaseModel): ...: x: str = Field(default=None, my_key=7) ...: y: int = Field(default=1, my_key=42) ...: ...: @classmethod ...: def print_field_objects(cls): ...: for field_name, field in cls.schema()["properties"].items(): ...: print(field_name, field.get("my_key")) ...: In [36]: MyModel.print_field_objects() x 7 y 42
Is it possible to access keyword arguments passed to a Field in a Pydantic BaseModel?
I need to access my_key in a Pydantic Field, as shown below: class MyModel(BaseModel): x: str = Field(default=None, my_key=7) def print_field_objects(self): for obj in self.something_something: # What do I use here print(obj.my_key) # ... so that i can use my_key? I tried to see what self contains, like self.__dict__ but I wasn't able to find it. Is it even possible to access my_key? I need it for my FastAPI endpoint.
[ "Field doesn't take arbitrary arguments, what exactly are you trying to achieve, perhaps there's a more appropriate solution.\nPer your other question, x is a class attribute, whose definition can be found in self.__class__.__fields__, while its instance value can be found by calling self.x\n", "You can generate the model's JSON Schema representation using the BaseModel's .schema() method, and then rely on this functionality of Field customization that:\n\n** any other keyword arguments (e.g. examples) will be added verbatim to the field's schema\n\nIn other words, any other arbitrary keyword arguments passed to Field that isn't consumed or used by Pydantic (or by any custom creation/instantiation) would be present in that field's JSON Schema representation.\nSo, in this case, your my_key should be present on the model's schema:\nIn [4]: class MyModel(BaseModel):\n ...: x: str = Field(default=None, my_key=7)\n ...: \n\nIn [5]: MyModel.schema()\nOut[5]: \n{'title': 'MyModel',\n 'type': 'object',\n 'properties': {'x': {'title': 'X', 'my_key': 7, 'type': 'string'}}}\n ^^^^^^^^^^^^\n ||||||||||||\n\nYou can then have an instance method that looks like this:\nIn [21]: class MyModel(BaseModel):\n ...: x: str = Field(default=None, my_key=7)\n ...: y: int = Field(default=1, my_key=42)\n ...: \n ...: def print_field_objects(self):\n ...: for field_name, field in self.schema()[\"properties\"].items():\n ...: print(field[\"my_key\"])\n ...: \n\nIn [22]: m1 = MyModel()\n\nIn [23]: m1.print_field_objects()\n7\n42\n\nBut, since a model's schema and Field definitions are tied to the class, not the instance, multiple instances would have the same value:\nIn [28]: m1 = MyModel()\n\nIn [29]: m1.print_field_objects()\n7\n42\n\nIn [30]: m2 = MyModel()\n\nIn [31]: m2.print_field_objects()\n7\n42\n\nSo, it would be more accurate to make it a class method instead, since my_key won't change anyway with different values of the field x or with different instances:\nIn [35]: class MyModel(BaseModel):\n ...: x: str = Field(default=None, my_key=7)\n ...: y: int = Field(default=1, my_key=42)\n ...: \n ...: @classmethod\n ...: def print_field_objects(cls):\n ...: for field_name, field in cls.schema()[\"properties\"].items():\n ...: print(field_name, field.get(\"my_key\"))\n ...: \n\nIn [36]: MyModel.print_field_objects()\nx 7\ny 42\n\n" ]
[ 2, 1 ]
[]
[]
[ "pydantic", "python" ]
stackoverflow_0074525003_pydantic_python.txt
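Besides the .schema() route shown above, Pydantic v1 also keeps unknown Field() keyword arguments in each field's field_info.extra, which avoids building the whole JSON Schema. A minimal sketch, assuming Pydantic v1:

from pydantic import BaseModel, Field

class MyModel(BaseModel):
    x: str = Field(default=None, my_key=7)
    y: int = Field(default=1, my_key=42)

# Extra keyword arguments passed to Field() live on the class, not the
# instance, in __fields__[...].field_info.extra (Pydantic v1 behaviour).
for name, model_field in MyModel.__fields__.items():
    print(name, model_field.field_info.extra.get("my_key"))
# x 7
# y 42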
Q: Convert data into same unit in a dataframe [screenshot: part of the data] There are different units for size: like k for 1,000 and M for mega. I want to convert all the data into the same unit (bytes). May I know how to do it? The expected result is to update the size column into bytes, e.g. 9k will become 9,000. A: def convert_unit(value): if value in "kb": #convert to bytes return bytes elif value in "mb": # convert to bytes return bytes # the above function is just an example df['column'].map(convert_unit) You can map all the column values using the function. Redefine the function as per your need. A: I think this should work if your data is consistent: def func(val): # get the string suffix ('K'/'M') val_char = val[-1:].lower() if val_char == 'k': return int(val[:-1]) * 1_000 elif val_char == 'm': return int(val[:-1]) * 1_000_000 else: return 0 df['size_bytes'] = df['size'].apply(lambda x: func(x))
Convert data into same unit in a dataframe
enter image description here there are different unit for size : like k for 1,000, M for mega. I want to convert all data into same unit - bytes may i know how to make it? The expected result is update the size column into bytes like 9k will be 9,000
[ "def convert_unit(value):\n if value in \"kb\":\n #convert to bytes \n return bytes\n elif value in \"mb\":\n # convert to bytes\n return bytes\n\n# the above function is just an example\n\ndf['column'].map(convert_unit)\n\nYou can map all the column values using the function. Redefine the function as per your need.\n", "i think this should work if your data is consitent:\ndef func(val):\n # get the string suffix ('K'/'M')\n val_char = val[-1:].lower()\n if val_char == 'k':\n return int(val[:-1]) * 1_000\n elif val_char == 'm':\n return int(val[:-1]) * 1_000_000\n else:\n return 0\n\ndf['size_bytes'] = df['size'].apply(lambda x: func(x))\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "python" ]
stackoverflow_0074532217_dataframe_python.txt
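A vectorised alternative to the .apply answers in this record uses a regular expression to split the number from an optional k/M suffix; the column name and sample values below are assumptions, since the original data is only shown as a screenshot.

import pandas as pd

df = pd.DataFrame({"size": ["9k", "1.5M", "512"]})  # made-up sample

multiplier = {"k": 1_000, "m": 1_000_000, "": 1}

# Split "<number><optional unit>" into two columns, then scale.
parts = df["size"].str.extract(r"(?P<num>[\d.]+)\s*(?P<unit>[kKmM]?)")
df["size_bytes"] = (parts["num"].astype(float)
                    * parts["unit"].str.lower().map(multiplier)).astype(int)
print(df)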
Q: pyinstaller doesn't change python executable window icon I am trying to change the default python icon in my executable using pyinstaller. I'm trying this on Windows 10 and the GUI framework is pyqt5. I have only managed to change the icon of the application (as seen in a file) but not the icons when you open the application (on the app's window). These are the commands I used after a bit of online searching: pyinstaller SSL_Configurator.py pyinstaller --onefile -w --icon="favicon.ico" SSL_Configurator.py pyinstaller --onefile -w --icon="favicon.ico" --paths=<C:\Users\Haylee\Desktop\python>\Lib\site-packages SSL_Configurator.py What else should I include in order to get the icon to be displayed on the window as well? Thanks. A: After Alexander's comment, I found this answer that explains how to fix this. Basically you need to compile the image with the code. The new pyinstaller command would look like this (after following the linked answer): pyinstaller --onefile -w --add-data "favicon.ico;." --icon="favicon.ico" --paths=<C:\Users\Haylee\Desktop\python>\Lib\site-packages SSL_Configurator.py
pyinstaller doesn't change python executable window icon
I am trying to change the default python icon in my executable using pyinstaller. I'm trying this on Windows 10 and the GUI framework is pyqt5. I have only managed to change the icon of the application (as seen in a file) but not the icons when you open the application (on the app's window). These are the commands I used after a bit of online searching: pyinstaller SSL_Configurator.py pyinstaller --onefile -w --icon="favicon.ico" SSL_Configurator.py pyinstaller --onefile -w --icon="favicon.ico" --paths=<C:\Users\Haylee\Desktop\python>\Lib\site-packages SSL_Configurator.py What else should I include in order to get the icon to be displayed on the window as well? thanks
[ "After Alexander's comment, I found this answer that explains how to fix this. Basicaly you need to compile the image with the code.\nnew pyinstaller command would look like this (after following the answer linked):\n\npyinstaller --onefile -w --add-data \"favicon.ico;.\" --icon=\"favicon.ico\"\n--paths=<C:\\Users\\Haylee\\Desktop\\python>\\Lib\\site-packages SSL_Configurator.py\n\n" ]
[ 0 ]
[]
[]
[ "pyinstaller", "python" ]
stackoverflow_0074517925_pyinstaller_python.txt
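The --add-data fix above only bundles favicon.ico; the window icon still has to be set from Python, and in a one-file build the bundled file is unpacked into a temporary directory. A common helper for locating it is sketched below; the PyQt5 call in the comment is an assumption based on the question's framework.

import os
import sys

def resource_path(relative_path):
    # PyInstaller one-file builds unpack bundled data into a temp folder and
    # expose it as sys._MEIPASS; fall back to the script directory otherwise.
    base = getattr(sys, "_MEIPASS", os.path.dirname(os.path.abspath(__file__)))
    return os.path.join(base, relative_path)

# e.g. inside the PyQt5 window setup:
# self.setWindowIcon(QtGui.QIcon(resource_path("favicon.ico")))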
Q: How to run python function in laravel with symfony process? I have a python function which returns string data, code runs fine after run import mysql.connector mydb = mysql.connector.connect( host="localhost", user="root", passwd="", database="db_absensi" ) mycursor = mydb.cursor() def example(): mycursor.execute("SELECT * FROM examples) data = mycursor.fetchall() return data this is my symfony code public function test() { $process = new Process(['python ../../../app/data.py']); $process->setTimeout(3600); $process->run(); if(!$process->isSuccessful()) { throw new ProcessFailedException($process); } dd ($process->getOutput()); return view("testView"); } and also I have another function that does not return data but a function, I plan to call this function with a procedure like in flask def face_recognition(): # generate frame by frame from camera def draw_boundary(img, classifier, scaleFactor, minNeighbors, color, text, clf): gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) features = classifier.detectMultiScale(gray_image, scaleFactor, minNeighbors) global justscanned global pause_cnt pause_cnt += 1 coords = [] for (x, y, w, h) in features: cv2.rectangle(img, (x, y), (x + w, y + h), color, 2) id, pred = clf.predict(gray_image[y:y + h, x:x + w]) confidence = int(100 * (1 - pred / 300)) if confidence > 70 and not justscanned: global cnt cnt += 1 n = (100 / 30) * cnt w_filled = (cnt / 30) * w cv2.putText(img, str(int(n))+' %', (x + 20, y + h + 28), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255,255,255), 2, cv2.LINE_AA) cv2.rectangle(img, (x, y + h + 40), (x + w, y + h + 50), color, 2) cv2.rectangle(img, (x, y + h + 40), (x + int(w_filled), y + h + 50), (255,255,255), cv2.FILLED) mycursor.execute("SELECT a.img_person, b.nama, b.kelas, b.tanggal_lahir " " FROM images a " " LEFT JOIN data_person b ON a.img_person = b.id_master " " WHERE img_id = " + str(id)) row = mycursor.fetchone() pnbr = row[0] pname = row[1] pkelas = row[2] if int(cnt) == 30: cnt = 0 mycursor.execute("INSERT INTO attendance_datamaster (attendance_date, attendance_person) VALUES('"+str(date.today())+"', '" + pnbr + "')") mydb.commit() cv2.putText(img, pname + ' | ' + pkelas, (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255,255,255), 2, cv2.LINE_AA) time.sleep(4) # speech.say(pname + "successfully processed") # speech.runAndWait() justscanned = True pause_cnt = 0 else: if not justscanned: cv2.putText(img, 'UNKNOWN', (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2, cv2.LINE_AA) else: cv2.putText(img, ' ', (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2,cv2.LINE_AA) if pause_cnt > 80: justscanned = False coords = [x, y, w, h] return coords def recognize(img, clf, faceCascade): coords = draw_boundary(img, faceCascade, 1.1, 10, (255, 255, 255), "Face", clf) return img faceCascade = cv2.CascadeClassifier("resources/haarcascade_frontalface_default.xml") clf = cv2.face.LBPHFaceRecognizer_create() clf.read("classifier.xml") wCam, hCam = 400, 400 cap = cv2.VideoCapture(0) cap.set(3, wCam) cap.set(4, hCam) while True: ret, img = cap.read() img = recognize(img, clf, faceCascade) frame = cv2.imencode('.jpg', img)[1].tobytes() yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n') key = cv2.waitKey(1) if key == 27: break def video_feed(): return Response(face_recognition(), mimetype='multipart/x-mixed-replace; boundary=frame') I wanna put this function on src attribute of image (opencv function). 
This is what I usually do in flask <div class="col-md-8 " style="margin-top: 10%;"> <img src="{{ url_for('video_feed') }}" width="100%" class="img-thumbnail"> </div> Is there a similar way, or a way that is possible to run a python function that doesn't return string data inside laravel environment? A: For your first python script I would suggest you to simply recreate that mysql select within php. For def video_feed: Like Christoph mentioned in the comments, this return value looks like a http response. So you mixing something up. Probably simply return face_recognition() as json and use it with python process or start your python program as a http server and send a http request from laravel to python. And to use symphony/process you better use multiple arguments and full path like new Process(['/usr/bin/python3.6', '/var/www/app/data.py']);.
How to run python function in laravel with symfony process?
I have a python function which returns string data, code runs fine after run import mysql.connector mydb = mysql.connector.connect( host="localhost", user="root", passwd="", database="db_absensi" ) mycursor = mydb.cursor() def example(): mycursor.execute("SELECT * FROM examples) data = mycursor.fetchall() return data this is my symfony code public function test() { $process = new Process(['python ../../../app/data.py']); $process->setTimeout(3600); $process->run(); if(!$process->isSuccessful()) { throw new ProcessFailedException($process); } dd ($process->getOutput()); return view("testView"); } and also I have another function that does not return data but a function, I plan to call this function with a procedure like in flask def face_recognition(): # generate frame by frame from camera def draw_boundary(img, classifier, scaleFactor, minNeighbors, color, text, clf): gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) features = classifier.detectMultiScale(gray_image, scaleFactor, minNeighbors) global justscanned global pause_cnt pause_cnt += 1 coords = [] for (x, y, w, h) in features: cv2.rectangle(img, (x, y), (x + w, y + h), color, 2) id, pred = clf.predict(gray_image[y:y + h, x:x + w]) confidence = int(100 * (1 - pred / 300)) if confidence > 70 and not justscanned: global cnt cnt += 1 n = (100 / 30) * cnt w_filled = (cnt / 30) * w cv2.putText(img, str(int(n))+' %', (x + 20, y + h + 28), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255,255,255), 2, cv2.LINE_AA) cv2.rectangle(img, (x, y + h + 40), (x + w, y + h + 50), color, 2) cv2.rectangle(img, (x, y + h + 40), (x + int(w_filled), y + h + 50), (255,255,255), cv2.FILLED) mycursor.execute("SELECT a.img_person, b.nama, b.kelas, b.tanggal_lahir " " FROM images a " " LEFT JOIN data_person b ON a.img_person = b.id_master " " WHERE img_id = " + str(id)) row = mycursor.fetchone() pnbr = row[0] pname = row[1] pkelas = row[2] if int(cnt) == 30: cnt = 0 mycursor.execute("INSERT INTO attendance_datamaster (attendance_date, attendance_person) VALUES('"+str(date.today())+"', '" + pnbr + "')") mydb.commit() cv2.putText(img, pname + ' | ' + pkelas, (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255,255,255), 2, cv2.LINE_AA) time.sleep(4) # speech.say(pname + "successfully processed") # speech.runAndWait() justscanned = True pause_cnt = 0 else: if not justscanned: cv2.putText(img, 'UNKNOWN', (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2, cv2.LINE_AA) else: cv2.putText(img, ' ', (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2,cv2.LINE_AA) if pause_cnt > 80: justscanned = False coords = [x, y, w, h] return coords def recognize(img, clf, faceCascade): coords = draw_boundary(img, faceCascade, 1.1, 10, (255, 255, 255), "Face", clf) return img faceCascade = cv2.CascadeClassifier("resources/haarcascade_frontalface_default.xml") clf = cv2.face.LBPHFaceRecognizer_create() clf.read("classifier.xml") wCam, hCam = 400, 400 cap = cv2.VideoCapture(0) cap.set(3, wCam) cap.set(4, hCam) while True: ret, img = cap.read() img = recognize(img, clf, faceCascade) frame = cv2.imencode('.jpg', img)[1].tobytes() yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n') key = cv2.waitKey(1) if key == 27: break def video_feed(): return Response(face_recognition(), mimetype='multipart/x-mixed-replace; boundary=frame') I wanna put this function on src attribute of image (opencv function). 
This is what I usually do in flask <div class="col-md-8 " style="margin-top: 10%;"> <img src="{{ url_for('video_feed') }}" width="100%" class="img-thumbnail"> </div> Is there a similar way, or a way that is possible to run a python function that doesn't return string data inside laravel environment?
[ "For your first python script I would suggest you to simply recreate that mysql select within php.\nFor def video_feed: Like Christoph mentioned in the comments, this return value looks like a http response. So you mixing something up. Probably simply return face_recognition() as json and use it with python process or start your python program as a http server and send a http request from laravel to python.\nAnd to use symphony/process you better use multiple arguments and full path like new Process(['/usr/bin/python3.6', '/var/www/app/data.py']);.\n" ]
[ 0 ]
[]
[]
[ "laravel", "php", "python", "symfony_process" ]
stackoverflow_0074528415_laravel_php_python_symfony_process.txt
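Following the answer's suggestion to return JSON from the script, the Python side can simply print its result to stdout so that Laravel reads it via $process->getOutput(). A sketch, with the database and table names taken from the question:

import json
import mysql.connector

mydb = mysql.connector.connect(host="localhost", user="root",
                               passwd="", database="db_absensi")
mycursor = mydb.cursor()
mycursor.execute("SELECT * FROM examples")

# Print JSON to stdout; the PHP side decodes $process->getOutput().
print(json.dumps(mycursor.fetchall(), default=str))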
Q: manipulate tuple into a list of tuples I have the following variable for class label in my dataset: y = np.array([3, 3, 3, 2, 3, 1, 3, 2, 3, 3, 3, 2, 2, 3, 2]) To determine the number of each class, I do: np.unique(y, return_counts=True) (array([1, 2, 3]), array([1, 5, 9])) How then do I manipulate this into a list of tuples for (label, n_samples)? So that I have: [ (1,1), (2,5), (3,9) ] A: If you want a simple list, use zip: out = list(zip(*np.unique(y, return_counts=True))) Output: [(1, 1), (2, 5), (3, 9)] Alternatively, you can create an array with: np.vstack(np.unique(y, return_counts=True)).T Output: array([[1, 1], [2, 5], [3, 9]]) A: list_1 = ['a', 'b', 'c'] list_2 = [1, 2, 3] # option 1 list_of_tuples = list( map( lambda x, y: (x, y), list_1, list_2 ) ) #option 2 list_of_tuples = [ (list_1[index], list_2[index]) for index in range(len(list_1)) ] # option 3 list_of_tuples = list(zip(list_1, list_2)) print(list_of_tuples) # output is [('a', 1), ('b', 2), ('c', 3)]
manipulate tuple into a list of tuples
I have the following variable for class label in my dataset: y = np.array([3, 3, 3, 2, 3, 1, 3, 2, 3, 3, 3, 2, 2, 3, 2]) To determine the number of each class, I do: np.unique(y, return_counts=True) (array([1, 2, 3]), array([1, 5, 9])) How then do I manipulate this into a list of tuples for (label, n_samples)? So that I have: [ (1,1), (2,5), (3,9) ]
[ "If you want a simple list, use zip:\nout = list(zip(*np.unique(y, return_counts=True)))\n\nOutput: [(1, 1), (2, 5), (3, 9)]\nAlternatively, you can create an array with:\nnp.vstack(np.unique(y, return_counts=True)).T\n\nOutput:\narray([[1, 1],\n [2, 5],\n [3, 9]])\n\n", "list_1 = ['a', 'b', 'c']\nlist_2 = [1, 2, 3]\n\n# option 1\nlist_of_tuples = list(\n map(\n lambda x, y: (x, y),\n list_1,\n list_2\n )\n)\n\n#option 2\nlist_of_tuples = [\n (list_1[index], list_2[index]) for index in range(len(list_1))\n]\n\n# option 3\nlist_of_tuples = list(zip(list_1, list_2))\n\nprint(list_of_tuples)\n# output is [('a', 1), ('b', 2), ('c', 3)]\n\n" ]
[ 2, 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074532304_numpy_python.txt
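One small detail about the zip answer above: the resulting tuples contain NumPy scalars (numpy.int64), which is usually harmless but can matter for things like JSON serialisation. Converting first gives plain Python ints, and a dict may be more convenient as a label-to-count lookup.

import numpy as np

y = np.array([3, 3, 3, 2, 3, 1, 3, 2, 3, 3, 3, 2, 2, 3, 2])

labels, counts = np.unique(y, return_counts=True)
pairs = list(zip(labels.tolist(), counts.tolist()))
print(pairs)        # [(1, 1), (2, 5), (3, 9)] with plain Python ints
print(dict(pairs))  # {1: 1, 2: 5, 3: 9}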
Q: ModuleNotFoundError while using geodesic in udf pyspark function We have pyspark dataframe like: df = spark.createDataFrame([(['target'], [2], [2], [3], [3]), (['NJ'],[3],[3], [4], [4]), (['target', 'target'],[4,5], [4,5], [6,7], [6,7]), (['CA'],[5],[5], [6], [6]), ], ('group_name', 'long', 'lat','com_long','com_lat')) Schema We want to extract the data at the position of target and use it to perform a distance calculation with a udf function. First we want to get the index of the target position in the group_name column. df = df.withColumn("target-1a-idx", (F.array_position(df.group_name, "target") -1 )) df = df.withColumn("target-1a-idx",F.when(F.col("target-1a-idx")!=-1,F.col("target-1a-idx"))) Now we create the helper columns with the target index. columns = ['long', 'lat','com_long','com_lat'] for col in columns: df = df.withColumn( prefix + col, F.col(col)[F.col("target-1a-idx")]) DF with helper columns Filtering the Null values is optional. df_filtered = df.filter(F.col("target-1a-idx").isNotNull()) Finally we defined a udf function to calculate distance, and called it import geopy from geopy.distance import geodesic @F.udf(returnType=T.FloatType()) def geodesic_udf(a, b): if (a is None) | (b is None): return 1.0 else: return geodesic(a, b).meters df_filtered = df_filtered.withColumn( "distance_to_station", geodesic_udf( F.array("target_long", "target_lat"), F.array( "target_com_long", "target_com_lat", ), ), ) ` ERROR MESSAGE While we are absolutely sure that we installed and imported geopy and geodesic correctly, we recieved ModuleNotFoundError. We guess the problem is actually not with the module. ModuleNotFoundError: No module named 'geopy' This is the error message: Could you help us with the answer. Thank you! Checked imports and installed packages (pip list). And filtered Nulls. A: The problem is with the use of the nodes. The library is not installed in the node. Using a udf does not use sparklogik but python and would need the library on each node. -> If possible, do not use a udf but a pyspark/spark native function. def calc_distance(df, suffix, lat1, lat2, lon1, lon2): #Haversine formula to calculate the distance between two gps coordinates and return the calculated result as Spark dataframe definition. df = df.withColumn('haversine_d{sf}'.format(sf=suffix), (F.pow(F.sin(F.radians(F.col(lat2) - F.col(lat1)) / 2), 2) + F.cos(F.radians(F.col(lat1))) * F.cos(F.radians(F.col(lat2))) * F.pow(F.sin(F.radians(F.col(lon2) - F.col(lon1)) / 2), 2))) df = df.withColumn('distance_in_m{sf}'.format(sf=suffix), F.atan2(F.sqrt(F.col('haversine_d{sf}'.format(sf=suffix))), F.sqrt(-F.col('haversine_d{sf}'.format(sf=suffix)) + 1)) * 12742000) df = df.drop('haversine_d{sf}'.format(sf=suffix)) return df or -> run the environment on each node. Description here p.s. I am part of the team asking the question.
ModuleNotFoundError while using geodesic in udf pyspark function
We have pyspark dataframe like: df = spark.createDataFrame([(['target'], [2], [2], [3], [3]), (['NJ'],[3],[3], [4], [4]), (['target', 'target'],[4,5], [4,5], [6,7], [6,7]), (['CA'],[5],[5], [6], [6]), ], ('group_name', 'long', 'lat','com_long','com_lat')) Schema We want to extract the data at the position of target and use it to perform a distance calculation with a udf function. First we want to get the index of the target position in the group_name column. df = df.withColumn("target-1a-idx", (F.array_position(df.group_name, "target") -1 )) df = df.withColumn("target-1a-idx",F.when(F.col("target-1a-idx")!=-1,F.col("target-1a-idx"))) Now we create the helper columns with the target index. columns = ['long', 'lat','com_long','com_lat'] for col in columns: df = df.withColumn( prefix + col, F.col(col)[F.col("target-1a-idx")]) DF with helper columns Filtering the Null values is optional. df_filtered = df.filter(F.col("target-1a-idx").isNotNull()) Finally we defined a udf function to calculate distance, and called it import geopy from geopy.distance import geodesic @F.udf(returnType=T.FloatType()) def geodesic_udf(a, b): if (a is None) | (b is None): return 1.0 else: return geodesic(a, b).meters df_filtered = df_filtered.withColumn( "distance_to_station", geodesic_udf( F.array("target_long", "target_lat"), F.array( "target_com_long", "target_com_lat", ), ), ) ` ERROR MESSAGE While we are absolutely sure that we installed and imported geopy and geodesic correctly, we recieved ModuleNotFoundError. We guess the problem is actually not with the module. ModuleNotFoundError: No module named 'geopy' This is the error message: Could you help us with the answer. Thank you! Checked imports and installed packages (pip list). And filtered Nulls.
[ "The problem is with the use of the nodes. The library is not installed in the node. Using a udf does not use sparklogik but python and would need the library on each node.\n-> If possible, do not use a udf but a pyspark/spark native function.\ndef calc_distance(df, suffix, lat1, lat2, lon1, lon2):\n#Haversine formula to calculate the distance between two gps coordinates and return the calculated result as Spark dataframe definition.\n\ndf = df.withColumn('haversine_d{sf}'.format(sf=suffix), (F.pow(F.sin(F.radians(F.col(lat2) - F.col(lat1)) / 2), 2) +\n F.cos(F.radians(F.col(lat1))) * F.cos(F.radians(F.col(lat2))) *\n F.pow(F.sin(F.radians(F.col(lon2) - F.col(lon1)) / 2), 2)))\ndf = df.withColumn('distance_in_m{sf}'.format(sf=suffix), F.atan2(F.sqrt(F.col('haversine_d{sf}'.format(sf=suffix))), F.sqrt(-F.col('haversine_d{sf}'.format(sf=suffix)) + 1)) * 12742000)\ndf = df.drop('haversine_d{sf}'.format(sf=suffix))\nreturn df\n\nor\n-> run the environment on each node. Description here\n\np.s. I am part of the team asking the question.\n" ]
[ 0 ]
[]
[]
[ "geopy", "module", "pyspark", "python" ]
stackoverflow_0074521514_geopy_module_pyspark_python.txt
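To connect the answer's native-Spark advice back to the question, the same haversine expression can be applied directly to the target_* helper columns (column names and df_filtered as defined in the question), so no Python UDF, and therefore no geopy, is needed on the executors. This is a sketch under those assumptions, not a tested drop-in snippet.

import pyspark.sql.functions as F

# Haversine term built from column expressions only.
a = (
    F.pow(F.sin(F.radians(F.col("target_com_lat") - F.col("target_lat")) / 2), 2)
    + F.cos(F.radians(F.col("target_lat")))
    * F.cos(F.radians(F.col("target_com_lat")))
    * F.pow(F.sin(F.radians(F.col("target_com_long") - F.col("target_long")) / 2), 2)
)

df_filtered = df_filtered.withColumn(
    "distance_to_station",
    F.atan2(F.sqrt(a), F.sqrt(1 - a)) * 12_742_000  # 2 * Earth radius in metres
)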
Q: Create a matrix using a certain vector in Python I have this vector m = [1,0.8,0.6,0.4,0.2,0] and I have to create the following matrix in Python: I create a matrix of zeros and use a double loop: mm = np.zeros((6, 6)) for j in list(range(0,6,1)): for i in list(range(0,6,1)): ind = abs(i-j) mm[j,i] = m[ind] With this, I got the following output: array([[1. , 0.8, 0.6, 0.4, 0.2, 0. ], [0.8, 1. , 0.8, 0.6, 0.4, 0.2], [0.6, 0.8, 1. , 0.8, 0.6, 0.4], [0.4, 0.6, 0.8, 1. , 0.8, 0.6], [0.2, 0.4, 0.6, 0.8, 1. , 0.8], [0. , 0.2, 0.4, 0.6, 0.8, 1. ]]) That is what I wanted! Thanks anyway.
Create a matrix using a certain vector in Python
I have this vector m = [1,0.8,0.6,0.4,0.2,0] and I have to create the following matrix in Python: I create a matrix of zeros and a double mm = np.zeros((6, 6)) for j in list(range(0,6,1)): for i in list(range(0,6,1)): ind = abs(i-j) m[j,i] = mm[ind] But, I got the following output: array([[1. , 0.8, 0.6, 0.4, 0.2, 0. ], [0.8, 1. , 0.8, 0.6, 0.4, 0.2], [0.6, 0.8, 1. , 0.8, 0.6, 0.4], [0.4, 0.6, 0.8, 1. , 0.8, 0.6], [0.2, 0.4, 0.6, 0.8, 1. , 0.8], [0. , 0.2, 0.4, 0.6, 0.8, 1. ]]) That is what I wanted! Thanks anyway.
[ "This could be written by comprehension if you do not want to use numpy,\n[m[i::-1] + m[1:len(m)-i] for i in range(len(m))]\n\n", "Here is a way to implement what you want with only numpy functions, without loops (m is your numpy array):\nx = np.tile(np.hstack([np.flip(m[1:]), m]), (m.size, 1))\nrows, column_indices = np.ogrid[:x.shape[0], :x.shape[1]]\ncolumn_indices = column_indices - np.arange(m.size)[:, np.newaxis]\nresult = x[rows, column_indices][:, -m.size:]\n\nExample:\n>>> result\narray([[1. , 0.8, 0.6, 0.4, 0.2, 0. ],\n [0.8, 1. , 0.8, 0.6, 0.4, 0.2],\n [0.6, 0.8, 1. , 0.8, 0.6, 0.4],\n [0.4, 0.6, 0.8, 1. , 0.8, 0.6],\n [0.2, 0.4, 0.6, 0.8, 1. , 0.8],\n [0. , 0.2, 0.4, 0.6, 0.8, 1. ]])\n\nThis approach is much faster than using a list comprehension when m is large.\n" ]
[ 1, 1 ]
[]
[]
[ "matrix", "python", "vector" ]
stackoverflow_0074531389_matrix_python_vector.txt
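The matrix in this record, whose (i, j) entry is m[|i - j|], is a symmetric Toeplitz matrix, so SciPy can build it directly from the vector:

from scipy.linalg import toeplitz

m = [1, 0.8, 0.6, 0.4, 0.2, 0]

# toeplitz(c) uses c as the first column (and, for a real vector, as the
# first row), which is exactly the m[|i - j|] structure wanted here.
print(toeplitz(m))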
Q: ValueError: Could not interpret value for parameter load "bmi.csv" into the Dataframe and create a scatter plot of the data using relplot() with height on x-axis and weight on y-axis and color the plot points based on Gender and vary the size of the points by BMI index. My code is: import pandas as pd import seaborn as sns df = pd.read_csv('bmi.csv') BMI = pd.DataFrame(df) g = sns.relplot(x = 'Height', y = 'Weight', data=df);b I get: Traceback (most recent call last): File "<pyshell#4>", line 1, in <module> g = sns.relplot(x = 'Height', y = 'Weight', data=df);b File "/Users/aleksikurunsaari/Library/Python/3.10/lib/python/site-packages/seaborn/relational.py", line 862, in relplot p = plotter( File "/Users/aleksikurunsaari/Library/Python/3.10/lib/python/site-packages/seaborn/relational.py", line 538, in __init__ super().__init__(data=data, variables=variables) File "/Users/aleksikurunsaari/Library/Python/3.10/lib/python/site-packages/seaborn/_oldcore.py", line 640, in __init__ self.assign_variables(data, variables) File "/Users/aleksikurunsaari/Library/Python/3.10/lib/python/site-packages/seaborn/_oldcore.py", line 701, in assign_variables plot_data, variables = self._assign_variables_longform( File "/Users/aleksikurunsaari/Library/Python/3.10/lib/python/site-packages/seaborn/_oldcore.py", line 938, in _assign_variables_longform raise ValueError(err) ValueError: Could not interpret value `Height` for parameter `x` A: Besides the error, why are you constructing a dataframe from a dataframe and also you're not using it ? I'm talking about BMI here : df = pd.read_csv('bmi.csv') BMI = pd.DataFrame(df) And regarding the error, this one has occured because Height is not one of the columns of df. I suggest you to check the content/shape/columns of this dataframe before plotting with seaborn. It may be a problem with the separator of your .csv. sns.relplot(x = 'Height', y = 'Weight', data=df) Dataset: https://github.com/aniketsoni1/BMI-Data-Insight-using-SVM/blob/master/bmi.csv
ValueError: Could not interpret value for parameter
load "bmi.csv" into the Dataframe and create a scatter plot of the data using relplot() with height on x-axis and weight on y-axis and color the plot points based on Gender and vary the size of the points by BMI index. My code is: import pandas as pd import seaborn as sns df = pd.read_csv('bmi.csv') BMI = pd.DataFrame(df) g = sns.relplot(x = 'Height', y = 'Weight', data=df);b I get: Traceback (most recent call last): File "<pyshell#4>", line 1, in <module> g = sns.relplot(x = 'Height', y = 'Weight', data=df);b File "/Users/aleksikurunsaari/Library/Python/3.10/lib/python/site-packages/seaborn/relational.py", line 862, in relplot p = plotter( File "/Users/aleksikurunsaari/Library/Python/3.10/lib/python/site-packages/seaborn/relational.py", line 538, in __init__ super().__init__(data=data, variables=variables) File "/Users/aleksikurunsaari/Library/Python/3.10/lib/python/site-packages/seaborn/_oldcore.py", line 640, in __init__ self.assign_variables(data, variables) File "/Users/aleksikurunsaari/Library/Python/3.10/lib/python/site-packages/seaborn/_oldcore.py", line 701, in assign_variables plot_data, variables = self._assign_variables_longform( File "/Users/aleksikurunsaari/Library/Python/3.10/lib/python/site-packages/seaborn/_oldcore.py", line 938, in _assign_variables_longform raise ValueError(err) ValueError: Could not interpret value `Height` for parameter `x`
[ "Besides the error, why are you constructing a dataframe from a dataframe and also you're not using it ? I'm talking about BMI here :\ndf = pd.read_csv('bmi.csv')\nBMI = pd.DataFrame(df)\n\nAnd regarding the error, this one has occured because Height is not one of the columns of df. I suggest you to check the content/shape/columns of this dataframe before plotting with seaborn. It may be a problem with the separator of your .csv.\nsns.relplot(x = 'Height', y = 'Weight', data=df)\n\n\nDataset: https://github.com/aniketsoni1/BMI-Data-Insight-using-SVM/blob/master/bmi.csv\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python", "relplot", "seaborn" ]
stackoverflow_0074531969_dataframe_pandas_python_relplot_seaborn.txt
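Two quick checks usually resolve this error, and the full relplot call the assignment asks for is sketched below; the Gender and Index column names are assumptions based on the linked BMI dataset.

import pandas as pd
import seaborn as sns

df = pd.read_csv("bmi.csv")
print(df.columns.tolist())   # confirm 'Height' really is a column name

# If everything came back as one merged column, the separator is wrong, e.g.:
# df = pd.read_csv("bmi.csv", sep=";")

g = sns.relplot(x="Height", y="Weight", hue="Gender", size="Index", data=df)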
Q: Knapsack with SPECIFIC AMOUNT of items from different groups So this is a variation of the Knapsack Problem I came with the other day. It is like a 0-1 Knapsack Problem where there are multiple groups and each item belongs to only one group. The goal is to maximize the profits subject to the constraints. In this case, a fixed number of items from each group have to be chosen for each group. It is similar to the Multiple Choice Knapsack Problem, but in that case you only pick 1 of item of each group, in this one you want to pick x amount of items of each group So, each item has: value, weight and group Each group has an item count (Ex: if group A (or 0) has 2, the final solution needs to have 2 items of group A, no more no less) And and you also have a maximum capacity (not related to the groups) This translates into: values[i] = The value of the ith element weights[i] = The weigth of the ith element groups[i] = The group of the ith element C = Capacity n = Amount of elements m = Amount of groups count[j] = Amount of items of group j I'm attempting a Recursive solution first and then I will try a Dynamic approach. Any solution would be appreciated (preferably Python, but anything will do :) ). Usefull links I found: Theorical solution of a similar problem First approach to the Multiple Choice Knapsack Problem Multiple Choice Knapsack Problem solved in Python Knapsack with count constraint A: Full code also in: https://github.com/pabloroldan98/knapsack-football-formations Explanation after the code. This code is for an example where you have a Fantasy League with a playersDB where each player has price (weight), points (value) and position (group); there is a list of possible_formations (group variations); and a budget (W) you can't go over. Full code: main.py: from group_knapsack import best_full_teams playersDB = [ Player(name="Keylor Navas", price=16, points=7.5, position="GK"), Player(name="Laporte", price=23, points=7.2, position="DEF"), Player(name="Modric", price=22, points=7.3, position="MID"), Player(name="Messi", price=51, points=8.2, position="ATT"), ... 
] possible_formations = [ [3, 4, 3], [3, 5, 2], [4, 3, 3], [4, 4, 2], [4, 5, 1], [5, 3, 2], [5, 4, 1], ] budget = 300 best_full_teams(playersDB, possible_formations, budget) group_knapsack.py: import itertools from MCKP import knapsack_multichoice_onepick def best_full_teams(players_list, formations, budget): formation_score_players = [] for formation in formations: players_points, players_prices, players_comb_indexes = players_preproc( players_list, formation) score, comb_result_indexes = knapsack_multichoice_onepick( players_prices, players_points, budget) result_indexes = [] for comb_index in comb_result_indexes: for winning_i in players_comb_indexes[comb_index[0]][comb_index[1]]: result_indexes.append(winning_i) result_players = [] for res_index in result_indexes: result_players.append(players_list[res_index]) formation_score_players.append((formation, score, result_players)) print("With formation " + str(formation) + ": " + str(score)) for best_player in result_players: print(best_player) print() print() formation_score_players_by_score = sorted(formation_score_players, key=lambda tup: tup[1], reverse=True) for final_formation_score in formation_score_players_by_score: print((final_formation_score[0], final_formation_score[1])) return formation_score_players def players_preproc(players_list, formation): max_gk = 1 max_def = formation[0] max_mid = formation[1] max_att = formation[2] gk_values, gk_weights, gk_indexes = generate_group(players_list, "GK") gk_comb_values, gk_comb_weights, gk_comb_indexes = group_preproc(gk_values, gk_weights, gk_indexes, max_gk) def_values, def_weights, def_indexes = generate_group(players_list, "DEF") def_comb_values, def_comb_weights, def_comb_indexes = group_preproc( def_values, def_weights, def_indexes, max_def) mid_values, mid_weights, mid_indexes = generate_group(players_list, "MID") mid_comb_values, mid_comb_weights, mid_comb_indexes = group_preproc( mid_values, mid_weights, mid_indexes, max_mid) att_values, att_weights, att_indexes = generate_group(players_list, "ATT") att_comb_values, att_comb_weights, att_comb_indexes = group_preproc( att_values, att_weights, att_indexes, max_att) result_comb_values = [gk_comb_values, def_comb_values, mid_comb_values, att_comb_values] result_comb_weights = [gk_comb_weights, def_comb_weights, mid_comb_weights, att_comb_weights] result_comb_indexes = [gk_comb_indexes, def_comb_indexes, mid_comb_indexes, att_comb_indexes] return result_comb_values, result_comb_weights, result_comb_indexes def generate_group(full_list, group): group_values = [] group_weights = [] group_indexes = [] for i, item in enumerate(full_list): if item.position == group: group_values.append(item.points) group_weights.append(item.price) group_indexes.append(i) return group_values, group_weights, group_indexes def group_preproc(group_values, group_weights, initial_indexes, r): comb_values = list(itertools.combinations(group_values, r)) comb_weights = list(itertools.combinations(group_weights, r)) comb_indexes = list(itertools.combinations(initial_indexes, r)) group_comb_values = [] for value_combinations in comb_values: values_added = sum(list(value_combinations)) group_comb_values.append(values_added) group_comb_weights = [] for weight_combinations in comb_weights: weights_added = sum(list(weight_combinations)) group_comb_weights.append(weights_added) return group_comb_values, group_comb_weights, comb_indexes MCKP.py: import copy def knapsack_multichoice_onepick(weights, values, max_weight): if len(weights) == 0: return 0 last_array = [-1 for _ 
in range(max_weight + 1)] last_path = [[] for _ in range(max_weight + 1)] for i in range(len(weights[0])): if weights[0][i] < max_weight: if last_array[weights[0][i]] < values[0][i]: last_array[weights[0][i]] = values[0][i] last_path[weights[0][i]] = [(0, i)] for i in range(1, len(weights)): current_array = [-1 for _ in range(max_weight + 1)] current_path = [[] for _ in range(max_weight + 1)] for j in range(len(weights[i])): for k in range(weights[i][j], max_weight + 1): if last_array[k - weights[i][j]] > 0: if current_array[k] < last_array[k - weights[i][j]] + \ values[i][j]: current_array[k] = last_array[k - weights[i][j]] + \ values[i][j] current_path[k] = copy.deepcopy( last_path[k - weights[i][j]]) current_path[k].append((i, j)) last_array = current_array last_path = current_path solution, index_path = get_onepick_solution(last_array, last_path) return solution, index_path def get_onepick_solution(scores, paths): scores_paths = list(zip(scores, paths)) scores_paths_by_score = sorted(scores_paths, key=lambda tup: tup[0], reverse=True) return scores_paths_by_score[0][0], scores_paths_by_score[0][1] player.py: class Player: def __init__( self, name: str, price: float, points: float, position: str ): self.name = name self.price = price self.points = points self.position = position def __str__(self): return f"({self.name}, {self.price}, {self.points}, {self.position})" @property def position(self): return self._position @position.setter def position(self, pos): if pos not in ["GK", "DEF", "MID", "ATT"]: raise ValueError("Sorry, that's not a valid position") self._position = pos def get_group(self): if self.position == "GK": group = 0 elif self.position == "DEF": group = 1 elif self.position == "MID": group = 2 else: group = 3 return group Explanation: Okay,so I managed to find a solution translating what was here: Solving the Multiple Choice Knapsack Problem from C++ to Python. My solution also gives the path that got you to that solution. It uses Dynamic Programming and it's very fast. The input data, instead of having groups[i], has the weights and the values as a list of lists, where every list inside represent the values of each group: weights[i] = [weights_group_0, weights_group_1, ...] values[i] = [values_group_0, values_group_1, ...] Where: weights_group_i[j] = The weigth of the jth element of the ith group values_group_i[j] = The value of the jth element of the ith group Those would be the inputs of knapsack_multichoice_onepick. Here is an example: # Example values = [[6, 10], [12, 2], [2, 3]] weights = [[1, 2], [6, 2], [3, 2]] W = 7 print(knapsack_multichoice_onepick(weights, values, W)) # (15, [(0, 1), (1, 1), (2, 1)]) After that I followed @user3386109 's suggestion and did the combinations with the indexes. The group preprocesing methods are players_preproc, generate_group and group_preproc. Again, this code is for an example where you have a Fantasy League with a playersDB where each player has price (weight), points (value) and position (group); there is a list of possible_formations (group variations); and a budget (W) you can't go over. The best_full_teams method prints everything and uses all the previous ones.
Knapsack with SPECIFIC AMOUNT of items from different groups
So this is a variation of the Knapsack Problem I came with the other day. It is like a 0-1 Knapsack Problem where there are multiple groups and each item belongs to only one group. The goal is to maximize the profits subject to the constraints. In this case, a fixed number of items from each group have to be chosen for each group. It is similar to the Multiple Choice Knapsack Problem, but in that case you only pick 1 of item of each group, in this one you want to pick x amount of items of each group So, each item has: value, weight and group Each group has an item count (Ex: if group A (or 0) has 2, the final solution needs to have 2 items of group A, no more no less) And and you also have a maximum capacity (not related to the groups) This translates into: values[i] = The value of the ith element weights[i] = The weigth of the ith element groups[i] = The group of the ith element C = Capacity n = Amount of elements m = Amount of groups count[j] = Amount of items of group j I'm attempting a Recursive solution first and then I will try a Dynamic approach. Any solution would be appreciated (preferably Python, but anything will do :) ). Usefull links I found: Theorical solution of a similar problem First approach to the Multiple Choice Knapsack Problem Multiple Choice Knapsack Problem solved in Python Knapsack with count constraint
[ "Full code also in: https://github.com/pabloroldan98/knapsack-football-formations\nExplanation after the code.\nThis code is for an example where you have a Fantasy League with a playersDB where each player has price (weight), points (value) and position (group); there is a list of possible_formations (group variations); and a budget (W) you can't go over.\nFull code:\n\nmain.py:\n from group_knapsack import best_full_teams\n\n playersDB = [\n Player(name=\"Keylor Navas\", price=16, points=7.5, position=\"GK\"),\n Player(name=\"Laporte\", price=23, points=7.2, position=\"DEF\"),\n Player(name=\"Modric\", price=22, points=7.3, position=\"MID\"),\n Player(name=\"Messi\", price=51, points=8.2, position=\"ATT\"),\n ...\n ]\n\n possible_formations = [\n [3, 4, 3],\n [3, 5, 2],\n [4, 3, 3],\n [4, 4, 2],\n [4, 5, 1],\n [5, 3, 2],\n [5, 4, 1],\n ]\n\n budget = 300\n\n\n best_full_teams(playersDB, possible_formations, budget)\n\n\ngroup_knapsack.py:\n import itertools\n\n from MCKP import knapsack_multichoice_onepick\n\n\n def best_full_teams(players_list, formations, budget):\n formation_score_players = []\n\n for formation in formations:\n players_points, players_prices, players_comb_indexes = players_preproc(\n players_list, formation)\n\n score, comb_result_indexes = knapsack_multichoice_onepick(\n players_prices, players_points, budget)\n\n result_indexes = []\n for comb_index in comb_result_indexes:\n for winning_i in players_comb_indexes[comb_index[0]][comb_index[1]]:\n result_indexes.append(winning_i)\n\n result_players = []\n for res_index in result_indexes:\n result_players.append(players_list[res_index])\n\n formation_score_players.append((formation, score, result_players))\n\n print(\"With formation \" + str(formation) + \": \" + str(score))\n for best_player in result_players:\n print(best_player)\n print()\n print()\n\n formation_score_players_by_score = sorted(formation_score_players,\n key=lambda tup: tup[1],\n reverse=True)\n for final_formation_score in formation_score_players_by_score:\n print((final_formation_score[0], final_formation_score[1]))\n\n return formation_score_players\n\n\n def players_preproc(players_list, formation):\n max_gk = 1\n max_def = formation[0]\n max_mid = formation[1]\n max_att = formation[2]\n\n gk_values, gk_weights, gk_indexes = generate_group(players_list, \"GK\")\n gk_comb_values, gk_comb_weights, gk_comb_indexes = group_preproc(gk_values,\n gk_weights,\n gk_indexes,\n max_gk)\n\n def_values, def_weights, def_indexes = generate_group(players_list, \"DEF\")\n def_comb_values, def_comb_weights, def_comb_indexes = group_preproc(\n def_values, def_weights, def_indexes, max_def)\n\n mid_values, mid_weights, mid_indexes = generate_group(players_list, \"MID\")\n mid_comb_values, mid_comb_weights, mid_comb_indexes = group_preproc(\n mid_values, mid_weights, mid_indexes, max_mid)\n\n att_values, att_weights, att_indexes = generate_group(players_list, \"ATT\")\n att_comb_values, att_comb_weights, att_comb_indexes = group_preproc(\n att_values, att_weights, att_indexes, max_att)\n\n result_comb_values = [gk_comb_values, def_comb_values, mid_comb_values,\n att_comb_values]\n result_comb_weights = [gk_comb_weights, def_comb_weights, mid_comb_weights,\n att_comb_weights]\n result_comb_indexes = [gk_comb_indexes, def_comb_indexes, mid_comb_indexes,\n att_comb_indexes]\n\n return result_comb_values, result_comb_weights, result_comb_indexes\n\n\n def generate_group(full_list, group):\n group_values = []\n group_weights = []\n group_indexes = []\n for i, item in 
enumerate(full_list):\n if item.position == group:\n group_values.append(item.points)\n group_weights.append(item.price)\n group_indexes.append(i)\n return group_values, group_weights, group_indexes\n\n\n def group_preproc(group_values, group_weights, initial_indexes, r):\n comb_values = list(itertools.combinations(group_values, r))\n comb_weights = list(itertools.combinations(group_weights, r))\n comb_indexes = list(itertools.combinations(initial_indexes, r))\n\n group_comb_values = []\n for value_combinations in comb_values:\n values_added = sum(list(value_combinations))\n group_comb_values.append(values_added)\n\n group_comb_weights = []\n for weight_combinations in comb_weights:\n weights_added = sum(list(weight_combinations))\n group_comb_weights.append(weights_added)\n\n return group_comb_values, group_comb_weights, comb_indexes\n\n\nMCKP.py:\n import copy\n\n\n def knapsack_multichoice_onepick(weights, values, max_weight):\n if len(weights) == 0:\n return 0\n\n last_array = [-1 for _ in range(max_weight + 1)]\n last_path = [[] for _ in range(max_weight + 1)]\n for i in range(len(weights[0])):\n if weights[0][i] < max_weight:\n if last_array[weights[0][i]] < values[0][i]:\n last_array[weights[0][i]] = values[0][i]\n last_path[weights[0][i]] = [(0, i)]\n\n for i in range(1, len(weights)):\n current_array = [-1 for _ in range(max_weight + 1)]\n current_path = [[] for _ in range(max_weight + 1)]\n for j in range(len(weights[i])):\n for k in range(weights[i][j], max_weight + 1):\n if last_array[k - weights[i][j]] > 0:\n if current_array[k] < last_array[k - weights[i][j]] + \\\n values[i][j]:\n current_array[k] = last_array[k - weights[i][j]] + \\\n values[i][j]\n current_path[k] = copy.deepcopy(\n last_path[k - weights[i][j]])\n current_path[k].append((i, j))\n last_array = current_array\n last_path = current_path\n\n solution, index_path = get_onepick_solution(last_array, last_path)\n\n return solution, index_path\n\n\n def get_onepick_solution(scores, paths):\n scores_paths = list(zip(scores, paths))\n scores_paths_by_score = sorted(scores_paths, key=lambda tup: tup[0],\n reverse=True)\n\n return scores_paths_by_score[0][0], scores_paths_by_score[0][1]\n\n\nplayer.py:\n class Player:\n def __init__(\n self,\n name: str,\n price: float,\n points: float,\n position: str\n ):\n self.name = name\n self.price = price\n self.points = points\n self.position = position\n\n def __str__(self):\n return f\"({self.name}, {self.price}, {self.points}, {self.position})\"\n\n @property\n def position(self):\n return self._position\n\n @position.setter\n def position(self, pos):\n if pos not in [\"GK\", \"DEF\", \"MID\", \"ATT\"]:\n raise ValueError(\"Sorry, that's not a valid position\")\n self._position = pos\n\n def get_group(self):\n if self.position == \"GK\":\n group = 0\n elif self.position == \"DEF\":\n group = 1\n elif self.position == \"MID\":\n group = 2\n else:\n group = 3\n return group\n\n\n\nExplanation:\nOkay,so I managed to find a solution translating what was here: Solving the Multiple Choice Knapsack Problem from C++ to Python. My solution also gives the path that got you to that solution. 
It uses Dynamic Programming and it's very fast.\nThe input data, instead of having groups[i], has the weights and the values as a list of lists, where every list inside represent the values of each group:\n\nweights[i] = [weights_group_0, weights_group_1, ...]\nvalues[i] = [values_group_0, values_group_1, ...]\n\nWhere:\n\nweights_group_i[j] = The weigth of the jth element of the ith group\nvalues_group_i[j] = The value of the jth element of the ith group\n\nThose would be the inputs of knapsack_multichoice_onepick. Here is an example:\n# Example\nvalues = [[6, 10], [12, 2], [2, 3]]\nweights = [[1, 2], [6, 2], [3, 2]]\nW = 7\n\nprint(knapsack_multichoice_onepick(weights, values, W)) # (15, [(0, 1), (1, 1), (2, 1)])\n\nAfter that I followed @user3386109 's suggestion and did the combinations with the indexes. The group preprocesing methods are players_preproc, generate_group and group_preproc.\nAgain, this code is for an example where you have a Fantasy League with a playersDB where each player has price (weight), points (value) and position (group); there is a list of possible_formations (group variations); and a budget (W) you can't go over.\nThe best_full_teams method prints everything and uses all the previous ones.\n" ]
[ 0 ]
[]
[]
[ "algorithm", "dynamic_programming", "knapsack_problem", "python", "recursion" ]
stackoverflow_0074503207_algorithm_dynamic_programming_knapsack_problem_python_recursion.txt
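As a complement to the DP answer above, here is a minimal recursive sketch of the grouped knapsack stated in the question (pick exactly count[j] items from group j, keep the total weight within C, maximize value). It is only a brute-force illustration with memoisation, not the answer's algorithm; the toy data below is the same instance as the example in the answer, so it should print 15.

from functools import lru_cache

values  = [6, 10, 12, 2, 2, 3]
weights = [1, 2, 6, 2, 3, 2]
groups  = [0, 0, 1, 1, 2, 2]
count   = [1, 1, 1]   # pick exactly one item from each group
C = 7
n = len(values)

@lru_cache(maxsize=None)
def best(i, cap, remaining):
    # remaining = tuple of how many items each group still needs
    if all(r == 0 for r in remaining):
        return 0                      # every group quota is met
    if i == n:
        return float("-inf")          # ran out of items: infeasible branch
    result = best(i + 1, cap, remaining)           # option 1: skip item i
    g = groups[i]
    if remaining[g] > 0 and weights[i] <= cap:     # option 2: take item i
        taken = list(remaining)
        taken[g] -= 1
        result = max(result, values[i] + best(i + 1, cap - weights[i], tuple(taken)))
    return result

print(best(0, C, tuple(count)))   # 15 for this toy instance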
Q: Difference between *3 in String to make Each Characters Triple I have code that answers the question; the code is like this: def three_words(text): result = '' for letter in text: result += letter*3 return print(result) The function is returning three copies of each letter; for example, Ab will return AAAbbb. My question is why it is not returning AbAbAb, like it does with the code below: str = 'Ab'*3 print(str) I'm just confused, someone please help me. A: basically when you loop through a string you get each separate character per loop: test = '123' for c in test: print(c) output: '1' '2' '3'
Difference between *3 in String to make Each Characters Triple
I have code that answers the question; the code is like this: def three_words(text): result = '' for letter in text: result += letter*3 return print(result) The function is returning three copies of each letter; for example, Ab will return AAAbbb. My question is why it is not returning AbAbAb, like it does with the code below: str = 'Ab'*3 print(str) I'm just confused, someone please help me.
[ "basically when you loop through a string you get each seperate character per loop:\ntest = '123'\n\nfor c in test:\n print(c)\n\noutput:\n'1'\n'2'\n'3'\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074532225_python.txt
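A short side-by-side sketch of the two behaviours discussed above, repeating the whole string versus repeating each character:

text = 'Ab'

print(text * 3)                                  # AbAbAb  (repeats the whole string)
print(''.join(letter * 3 for letter in text))    # AAAbbb  (repeats each character)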
Q: run my .feature files using multiple userdata possibilities I'm running my .feature files with userdata. What I'm trying to do is to add multiple values in userdata and loop the execution over every value; for example: running the login test many times with a different username and password on every try, but with one command line. Feature: login Scenario Outline : authentification Given open application When enter user data |username | password | And click on button Log In Then user connected [behave.userdata] username1= test password1= test username2= automation password2= automation A: Why not use it like this: Feature: login Scenario Outline : authentification Given open application When enter user email and password And click on button Log In Then user connected Examples: |email | password | |test | test | |automation | automation|
run my .feature files using multiple userdata possibilities
I'm running my .feature files with userdata. What I'm trying to do is to add multiple values in userdata and loop the execution over every value; for example: running the login test many times with a different username and password on every try, but with one command line. Feature: login Scenario Outline : authentification Given open application When enter user data |username | password | And click on button Log In Then user connected [behave.userdata] username1= test password1= test username2= automation password2= automation
[ "Why not using it like this:\n\nFeature: login \n\n Scenario Outline : authentification \n Given open application\n When enter user email and password\n And click on button Log In\n Then user connected\nExamples:\n |email | passsword |\n |test | test |\n |automation | automation|\n\n" ]
[ 0 ]
[]
[]
[ "automated_tests", "python", "python_behave", "selenium", "testing" ]
stackoverflow_0074531920_automated_tests_python_python_behave_selenium_testing.txt
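For the Examples table in the answer to reach Python code, the outline step has to carry placeholders, e.g. When enter user <email> and password <password>. A hypothetical behave step definition could then receive each row roughly like this sketch (the step wording and attribute names are illustrative, not from the original post):

from behave import when

@when('enter user {email} and password {password}')
def step_enter_credentials(context, email, password):
    # behave substitutes each Examples row into the outline, so this step
    # runs once per row with email/password bound to that row's values
    context.email = email
    context.password = password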
Q: simpler way to Concatenate string and int Here's the code I got so far: x = 2 y = 3 print('hi' + str(x) + 'hello' + str(y)) Is there any simpler way to concatenate strings and ints? I would like some examples. A: you should use formatted strings (fstrings): x = 2 y = 3 print(f'hi {x} hello {y}')
simpler way to Concatenate string and int
Here's the code I got so far: x = 2 y = 3 print('hi' + str(x) + 'hello' + str(y)) Is there any simpler way to concatenate strings and ints? I would like some examples.
[ "you should use formatted strings (fstrings):\nx = 2\ny = 3\n\nprint(f'hi {x} hello {y}')\n\n" ]
[ 1 ]
[ "you have multiple way to do that ! :)\ni found a article with multiple one:\nhttps://datagy.io/python-concatenate-string-int/\nthis one should be a good fit in your case:\n# Concatenating a String and an Int in Python with f-strings\nword = 'datagy'\ninteger = 2022\nnew_word = f'{word}{integer}'\nprint(new_word)\n# Returns: datagy2022\n\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074532441_python.txt
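For comparison, a few equivalent ways to write the same output, all standard Python:

x, y = 2, 3

print(f'hi {x} hello {y}')              # f-string (Python 3.6+)
print('hi {} hello {}'.format(x, y))    # str.format
print('hi %d hello %d' % (x, y))        # printf-style formatting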
Q: connecting mysql using sqlalchemy & Docker compose I tried to make mysql connect with docker below is my docker compose file: version: "3.9" services: db: # build: ./mysql image: mysql:8 hostname: localhost environment: MYSQL_DATABASE: finops MYSQL_USER: root MYSQL_ALLOW_EMPTY_PASSWORD: 1 MYSQL_PASSWORD: Roh1t#mishra # MYSQL_ROOT_PASSWORD: 'Roh1t#mishra' # MYSQL_TCP_PORT: '3306' ports: - 3307:3307 expose: - 3307 api: build: ./cost-controller-engine ports: - 8023:8023 environment: WAIT_HOSTS: db:3307 depends_on: - db links: - db expose: - 8023 # Names our volume and here is my code to create engine using sqlalchemy:- engine = create_engine("mysql+mysqlconnector://root:Roh1t#mishra@127.0.0.1:3306/finops", echo=True) but getting error like _mysql_connector.MySQLInterfaceError: Can't connect to MySQL server on '127.0.0.1:3306' (111) Help me to connect MySQL using sqlalchemy & docker. A: The default database port is 3306, by commenting the env line it still remains 3306 (uncommenting it and setting a different value will change the port). If you don't need to connect to the database externally (outside of the containers) then there is no need for expose/ports, which is currently set to port 3307 and therefore not used. Note: the ports setting also acts as an expose If we use docker compose and therefore its easy to connect multiple containers, we use the container names for their network calls, in this case if from the container api I want to connect to the database in the container db I use db:3306, in your code it is called 127.0.0.1 which is localhost and it always calls only itself. So if I call 127.0.0.1 from the api container I call api. Try modifying it to: create_engine("mysql+mysqlconnector://root:Roh1t#mishra@db:3306/finops") Update: Remove db.hostname: localhost in docker-compose. I am attaching sample test files when I successfully connect to the database. from sqlalchemy import create_engine, MetaData from time import sleep engine = create_engine("mysql://root:toor@db:3306/my_db") con = engine.connect() con.execute("CREATE TABLE IF NOT EXISTS tbl (id INT, flag INT);") metadata = MetaData(bind=engine) metadata.reflect(only=['tbl']) print(metadata.tables) version: '3.9' services: api: build: context: . depends_on: - db db: image: mysql:8 environment: MYSQL_DATABASE: my_db MYSQL_ROOT_PASSWORD: toor Output from docker-compose logs: api_1 | FacadeDict({'tbl': Table('tbl', MetaData(bind=Engine(mysql://root:***@db:3306/my_db)), Column('id', INTEGER(), table=<tbl>, primary_key=True, nullable=False), Column('key', VARCHAR(length=20), table=<tbl>), Column('val', VARCHAR(length=20), table=<tbl>), schema=None)})
connecting mysql using sqlalchemy & Docker compose
I tried to make MySQL connect with Docker; below is my docker-compose file: version: "3.9" services: db: # build: ./mysql image: mysql:8 hostname: localhost environment: MYSQL_DATABASE: finops MYSQL_USER: root MYSQL_ALLOW_EMPTY_PASSWORD: 1 MYSQL_PASSWORD: Roh1t#mishra # MYSQL_ROOT_PASSWORD: 'Roh1t#mishra' # MYSQL_TCP_PORT: '3306' ports: - 3307:3307 expose: - 3307 api: build: ./cost-controller-engine ports: - 8023:8023 environment: WAIT_HOSTS: db:3307 depends_on: - db links: - db expose: - 8023 # Names our volume and here is my code to create the engine using SQLAlchemy: engine = create_engine("mysql+mysqlconnector://root:Roh1t#mishra@127.0.0.1:3306/finops", echo=True) but I am getting an error like _mysql_connector.MySQLInterfaceError: Can't connect to MySQL server on '127.0.0.1:3306' (111) Help me to connect MySQL using SQLAlchemy & Docker.
[ "The default database port is 3306, by commenting the env line it still remains 3306 (uncommenting it and setting a different value will change the port).\nIf you don't need to connect to the database externally (outside of the containers) then there is no need for expose/ports, which is currently set to port 3307 and therefore not used.\nNote: the ports setting also acts as an expose\nIf we use docker compose and therefore its easy to connect multiple containers, we use the container names for their network calls, in this case if from the container api I want to connect to the database in the container db I use db:3306, in your code it is called 127.0.0.1 which is localhost and it always calls only itself. So if I call 127.0.0.1 from the api container I call api.\nTry modifying it to:\ncreate_engine(\"mysql+mysqlconnector://root:Roh1t#mishra@db:3306/finops\")\nUpdate:\nRemove db.hostname: localhost in docker-compose.\nI am attaching sample test files when I successfully connect to the database.\nfrom sqlalchemy import create_engine, MetaData\nfrom time import sleep\n\n\nengine = create_engine(\"mysql://root:toor@db:3306/my_db\")\ncon = engine.connect()\n\ncon.execute(\"CREATE TABLE IF NOT EXISTS tbl (id INT, flag INT);\")\n\nmetadata = MetaData(bind=engine)\nmetadata.reflect(only=['tbl'])\n\nprint(metadata.tables)\n\nversion: '3.9'\n\nservices:\n api:\n build:\n context: .\n depends_on:\n - db\n db:\n image: mysql:8\n environment:\n MYSQL_DATABASE: my_db\n MYSQL_ROOT_PASSWORD: toor\n\nOutput from docker-compose logs:\napi_1 | FacadeDict({'tbl': Table('tbl', MetaData(bind=Engine(mysql://root:***@db:3306/my_db)), Column('id', INTEGER(), table=<tbl>, primary_key=True, nullable=False), Column('key', VARCHAR(length=20), table=<tbl>), Column('val', VARCHAR(length=20), table=<tbl>), schema=None)})\n\n" ]
[ 0 ]
[]
[]
[ "docker", "mysql", "python", "sqlalchemy" ]
stackoverflow_0074532444_docker_mysql_python_sqlalchemy.txt
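Two further points worth hedging here: depends_on only orders container startup and does not wait for MySQL to accept connections, and the # character in the password may need percent-encoding inside a SQLAlchemy URL. A rough retry sketch along those lines (the host name db and the credentials come from the compose file above; the encoding and the retry count are assumptions):

import time
from sqlalchemy import create_engine
from sqlalchemy.exc import DBAPIError

# '#' is a reserved character in URLs, so the password is percent-encoded as %23
engine = create_engine("mysql+mysqlconnector://root:Roh1t%23mishra@db:3306/finops")

for attempt in range(30):
    try:
        with engine.connect():
            print("connected")
            break
    except DBAPIError:
        time.sleep(2)          # MySQL may still be initialising
else:
    raise RuntimeError("database never became reachable")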
Q: create new column group by values of other column I Have the following dataframe df1 = pd.DataFrame({'sentence': ['A', "A", "A", "A", 'A', 'B', "B", 'B'], 'entity': ['Stay home', "Stay home", "WAY", "WAY", "Stay home", 'Go outside', "Go outside", "purpose"], 'token' : ['Severe weather', "raining", "smt", "SMT0", "Windy", 'Sunny', "Good weather", "smt"] }) sentence entity token 0 A Stay home Severe weather 1 A Stay home raining 2 A Way smt 3 A Way SMT0 4 A Stay home Windy 5 B Go outside Sunny 6 B Go outside Good weather 7 B Purpose smt I want to group by the values of sentences and create new columns when Way and Purpose exists in entity columns Expected outcome: sentence entity token Way Purpose 0 A Stay home Severe weather, raining, Windy smt, SMTO Nan 1 B Go outside Sunny, Good weather Nan smt A: Filter rows for non matched rows by Series.isin in boolean indexing with ~ for invert mask, aggregate join and use DataFrame.join for filter rows matched list with DataFrame.pivot_table: vals = ['WAY','purpose'] m = df1['entity'].isin(vals) df2 = df1[m].pivot_table(index='sentence',columns='entity',values='token', aggfunc=','.join) df3 = df1[~m].groupby(['sentence','entity'])['token'].agg(', '.join).reset_index() df = df3.join(df2, on='sentence') print (df) sentence entity token WAY purpose 0 A Stay home Severe weather, raining, Windy smt,SMT0 NaN 1 B Go outside Sunny, Good weather NaN smt
create new column group by values of other column
I have the following dataframe: df1 = pd.DataFrame({'sentence': ['A', "A", "A", "A", 'A', 'B', "B", 'B'], 'entity': ['Stay home', "Stay home", "WAY", "WAY", "Stay home", 'Go outside', "Go outside", "purpose"], 'token' : ['Severe weather', "raining", "smt", "SMT0", "Windy", 'Sunny', "Good weather", "smt"] }) sentence entity token 0 A Stay home Severe weather 1 A Stay home raining 2 A Way smt 3 A Way SMT0 4 A Stay home Windy 5 B Go outside Sunny 6 B Go outside Good weather 7 B Purpose smt I want to group by the values of sentence and create new columns when Way and Purpose exist in the entity column. Expected outcome: sentence entity token Way Purpose 0 A Stay home Severe weather, raining, Windy smt, SMTO NaN 1 B Go outside Sunny, Good weather NaN smt
[ "Filter rows for non matched rows by Series.isin in boolean indexing with ~ for invert mask, aggregate join and use DataFrame.join for filter rows matched list with DataFrame.pivot_table:\nvals = ['WAY','purpose']\n\nm = df1['entity'].isin(vals)\n\ndf2 = df1[m].pivot_table(index='sentence',columns='entity',values='token', aggfunc=','.join)\ndf3 = df1[~m].groupby(['sentence','entity'])['token'].agg(', '.join).reset_index()\n\ndf = df3.join(df2, on='sentence')\nprint (df)\n sentence entity token WAY purpose\n0 A Stay home Severe weather, raining, Windy smt,SMT0 NaN\n1 B Go outside Sunny, Good weather NaN smt\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "group_by", "python", "python_3.x" ]
stackoverflow_0074532513_dataframe_group_by_python_python_3.x.txt
Q: How do I set the area to 0 after the loop has run 1 time? a = 0 b = 2 n = 1 delta_x = (b-a) / n x = 0 area = 0 def f(x): return 1/2*x**2 + 4 while area < 9.333: for i in range (0, n): area += f(x) * delta_x x += delta_x n += 1 # i want to set the area to 0 here so that i can check for what n value area < 9.333 print(n) I tried to set area = 0 in different places, but it did not work. This code is for checking the area under a function using the left-square method. I want to find out for what n value area < 9.333 in the function f(x) = 1/2*x**2+4. A: As suziex has said you need to assign area as 0 between while and for this way area is reset every time for i in range(0 n) is run while area < 9.333: area = 0 for i in range (0, n): area += f(x) * delta_x x += delta_x n += 1 print(n)
How do I set the area to 0 after the loop has run 1 time?
a = 0 b = 2 n = 1 delta_x = (b-a) / n x = 0 area = 0 def f(x): return 1/2*x**2 + 4 while area < 9.333: for i in range (0, n): area += f(x) * delta_x x += delta_x n += 1 # i want to set the area to 0 here so that i can check for what n value area < 9.333 print(n) I tried to set area = 0 in different places, but it did not work. This code is for checking the area under a function using the left-square method. I want to find out for what n value area < 9.333 in the function f(x) = 1/2*x**2+4.
[ "As suziex has said you need to assign area as 0 between while and for this way area is reset every time for i in range(0 n) is run\nwhile area < 9.333:\n area = 0\n for i in range (0, n):\n area += f(x) * delta_x \n x += delta_x\n n += 1\nprint(n)\n \n\n" ]
[ 0 ]
[]
[]
[ "for_loop", "python", "while_loop" ]
stackoverflow_0074532492_for_loop_python_while_loop.txt
Q: django getting all objects from select I also need the field (commentGroupDesc) from the foreign keys objects. models.py class commentGroup (models.Model): commentGroup = models.CharField(_("commentGroup"), primary_key=True, max_length=255) commentGroupDesc = models.CharField(_("commentGroupDesc"),null=True, blank=True, max_length=255) def __str__(self): return str(self.commentGroup) class Meta: ordering = ['commentGroup'] class Comment (models.Model): commentID = models.AutoField(_("commentID"),primary_key=True) commentUser = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) commentGroup = models.ForeignKey(commentGroup, on_delete=models.CASCADE, null=True) commentCI = models.ForeignKey(Servicenow, on_delete=models.CASCADE, null=True) commentText = RichTextField(_("commentText"), null=True, blank=True) commentTableUpdated = models.CharField(_("commentTableUpdated"), null=True, blank=True, max_length=25) def __str__(self): return str(self.commentGroup) class Meta: ordering = ['commentGroup'] views.py comment = Comment.objects.get(pk=commentID) Here I get the commentGroup fine but I also need commentGroupDesc to put into my form. A: At first, it's not a good thing to name same your model field as model name which is commentGroup kindly change field name, and run migration commands. You can simply use chaining to get commentGroupDesc, also it's better to use get_object_or_404() so: comment = get_object_or_404(Comment,pk=commentID) group_desc = comment.commentGroup.commentGroupDesc Remember to change field and model name first.
django getting all objects from select
I also need the field (commentGroupDesc) from the foreign keys objects. models.py class commentGroup (models.Model): commentGroup = models.CharField(_("commentGroup"), primary_key=True, max_length=255) commentGroupDesc = models.CharField(_("commentGroupDesc"),null=True, blank=True, max_length=255) def __str__(self): return str(self.commentGroup) class Meta: ordering = ['commentGroup'] class Comment (models.Model): commentID = models.AutoField(_("commentID"),primary_key=True) commentUser = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) commentGroup = models.ForeignKey(commentGroup, on_delete=models.CASCADE, null=True) commentCI = models.ForeignKey(Servicenow, on_delete=models.CASCADE, null=True) commentText = RichTextField(_("commentText"), null=True, blank=True) commentTableUpdated = models.CharField(_("commentTableUpdated"), null=True, blank=True, max_length=25) def __str__(self): return str(self.commentGroup) class Meta: ordering = ['commentGroup'] views.py comment = Comment.objects.get(pk=commentID) Here I get the commentGroup fine but I also need commentGroupDesc to put into my form.
[ "At first, it's not a good thing to name same your model field as model name which is commentGroup kindly change field name, and run migration commands.\nYou can simply use chaining to get commentGroupDesc, also it's better to use get_object_or_404() so:\ncomment = get_object_or_404(Comment,pk=commentID)\n\ngroup_desc = comment.commentGroup.commentGroupDesc\n\nRemember to change field and model name first.\n" ]
[ 2 ]
[]
[]
[ "django", "django_forms", "django_models", "django_queryset", "python" ]
stackoverflow_0074532381_django_django_forms_django_models_django_queryset_python.txt
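A small follow-up to the accepted pattern: the attribute hop comment.commentGroup.commentGroupDesc triggers a second query, which select_related can avoid by joining the related row up front. A sketch assuming the models and the commentID view argument from the question:

from django.shortcuts import get_object_or_404

comment = get_object_or_404(
    Comment.objects.select_related("commentGroup"),  # fetches the group in the same query
    pk=commentID,
)
group_desc = comment.commentGroup.commentGroupDesc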
Q: Assign index number after every two consecutive row within a group after pandas groupby I have a dataframe like below: TileDesc ReportDesc UrlLink 'AA' 'New Report-1' 'link-1' 'AA' 'New Report-2' 'link-2' 'AA' 'New Report-1' 'link-1' 'AA' 'New Report-1' 'link-1' 'AA' 'New Report-1' 'link-1' 'BB' 'New Report-4' 'link-4' 'BB' 'New Report-2' 'link-2' 'BB' 'New Report-4' 'link-4' 'BB' 'New Report-6' 'link-6' Now I want to add a column to this that will maintain a sequence of integer which would change after every 2 consecutive times. So the resultant dataframe would look like: TileDesc ReportDesc UrlLink Group 'AA' 'New Report-1' 'link-1' 1 'AA' 'New Report-2' 'link-2' 1 'AA' 'New Report-1' 'link-1' 2 'AA' 'New Report-4' 'link-4' 2 'AA' 'New Report-6' 'link-1' 3 'BB' 'New Report-4' 'link-4' 1 'BB' 'New Report-2' 'link-2' 1 'BB' 'New Report-4' 'link-4' 2 'BB' 'New Report-6' 'link-6' 2 I am following the ngroup() approach but not able to get through. df['Group'] = df.groupby(['TileDesc']).ngroup() The above code snippet is giving me same Group Number for each Group. I.e. for AA for all three I am getting 0, and then for all BB I am getting 1 and so on. My second approach was more like: df['Index'] = df.index + 1 df['Group'] = df['Index'].apply(lambda x : math.ceil(x/4)) But this doesn't consider TileDesc What I am missing here? Edit The group value ONLY changes after each two consecutive row within a TileDesc group. A: IIUC, you could group by and use cumcount(). The added trick is that you can replace the initial 0 ( cumcount starts from 0) with blank and replace with 1 (i.e. bfill): df['Group'] = df.groupby('TileDesc').cumcount().replace(0,np.nan).bfill().astype(int) result: TileDesc ReportDesc UrlLink Group 0 'AA' 'New Report-1' 'link-1' 1 1 'AA' 'New Report-2' 'link-2' 1 2 'AA' 'New Report-1' 'link-1' 2 3 'BB' 'New Report-4' 'link-4' 1 4 'BB' 'New Report-2' 'link-2' 1 5 'BB' 'New Report-4' 'link-4' 2 6 'CC' 'New Report-4' 'link-4' 1 7 'CC' 'New Report-2' 'link-2' 1 8 'CC' 'New Report-4' 'link-4' 2 9 'CC' 'New Report-4' 'link-4' 3 10 'CC' 'New Report-2' 'link-2' 4 11 'CC' 'New Report-4' 'link-4' 5 Added an extra 'CC' section to demonstrate. A: Use cumsum, but //2 +1 to increment only every second line (sorry, my copy-paste came out a bit broken, but it works) In [38]: df Out[38]: TileDesc ReportDesc UrlLink 0 'AA' Report-1' 'link-1' 1 'AA' Report-2' 'link-2' 2 'AA' Report-1' 'link-1' 3 'AA' Report-1' 'link-1' 4 'AA' Report-1' 'link-1' 5 'BB' Report-4' 'link-4' 6 'BB' Report-2' 'link-2' 7 'BB' Report-4' 'link-4' 8 'BB' Report-6' 'link-6' In [39]: df['Group'] = df.groupby('TileDesc').cumcount() // 2 + 1 In [40]: df Out[40]: TileDesc ReportDesc UrlLink Group 0 'AA' Report-1' 'link-1' 1 1 'AA' Report-2' 'link-2' 1 2 'AA' Report-1' 'link-1' 2 3 'AA' Report-1' 'link-1' 2 4 'AA' Report-1' 'link-1' 3 5 'BB' Report-4' 'link-4' 1 6 'BB' Report-2' 'link-2' 1 7 'BB' Report-4' 'link-4' 2 8 'BB' Report-6' 'link-6' 2
Assign index number after every two consecutive row within a group after pandas groupby
I have a dataframe like below: TileDesc ReportDesc UrlLink 'AA' 'New Report-1' 'link-1' 'AA' 'New Report-2' 'link-2' 'AA' 'New Report-1' 'link-1' 'AA' 'New Report-1' 'link-1' 'AA' 'New Report-1' 'link-1' 'BB' 'New Report-4' 'link-4' 'BB' 'New Report-2' 'link-2' 'BB' 'New Report-4' 'link-4' 'BB' 'New Report-6' 'link-6' Now I want to add a column to this that will maintain a sequence of integer which would change after every 2 consecutive times. So the resultant dataframe would look like: TileDesc ReportDesc UrlLink Group 'AA' 'New Report-1' 'link-1' 1 'AA' 'New Report-2' 'link-2' 1 'AA' 'New Report-1' 'link-1' 2 'AA' 'New Report-4' 'link-4' 2 'AA' 'New Report-6' 'link-1' 3 'BB' 'New Report-4' 'link-4' 1 'BB' 'New Report-2' 'link-2' 1 'BB' 'New Report-4' 'link-4' 2 'BB' 'New Report-6' 'link-6' 2 I am following the ngroup() approach but not able to get through. df['Group'] = df.groupby(['TileDesc']).ngroup() The above code snippet is giving me same Group Number for each Group. I.e. for AA for all three I am getting 0, and then for all BB I am getting 1 and so on. My second approach was more like: df['Index'] = df.index + 1 df['Group'] = df['Index'].apply(lambda x : math.ceil(x/4)) But this doesn't consider TileDesc What I am missing here? Edit The group value ONLY changes after each two consecutive row within a TileDesc group.
[ "IIUC, you could group by and use cumcount(). The added trick is that you can replace the initial 0 ( cumcount starts from 0) with blank and replace with 1 (i.e. bfill):\ndf['Group'] = df.groupby('TileDesc').cumcount().replace(0,np.nan).bfill().astype(int)\n\nresult:\n TileDesc ReportDesc UrlLink Group\n0 'AA' 'New Report-1' 'link-1' 1\n1 'AA' 'New Report-2' 'link-2' 1\n2 'AA' 'New Report-1' 'link-1' 2\n3 'BB' 'New Report-4' 'link-4' 1\n4 'BB' 'New Report-2' 'link-2' 1\n5 'BB' 'New Report-4' 'link-4' 2\n6 'CC' 'New Report-4' 'link-4' 1\n7 'CC' 'New Report-2' 'link-2' 1\n8 'CC' 'New Report-4' 'link-4' 2\n9 'CC' 'New Report-4' 'link-4' 3\n10 'CC' 'New Report-2' 'link-2' 4\n11 'CC' 'New Report-4' 'link-4' 5\n\nAdded an extra 'CC' section to demonstrate.\n", "Use cumsum, but //2 +1 to increment only every second line\n(sorry, my copy-paste came out a bit broken, but it works)\nIn [38]: df\nOut[38]:\n TileDesc ReportDesc UrlLink\n0 'AA' Report-1' 'link-1'\n1 'AA' Report-2' 'link-2'\n2 'AA' Report-1' 'link-1'\n3 'AA' Report-1' 'link-1'\n4 'AA' Report-1' 'link-1'\n5 'BB' Report-4' 'link-4'\n6 'BB' Report-2' 'link-2'\n7 'BB' Report-4' 'link-4'\n8 'BB' Report-6' 'link-6'\n\nIn [39]: df['Group'] = df.groupby('TileDesc').cumcount() // 2 + 1\n\nIn [40]: df\nOut[40]:\n TileDesc ReportDesc UrlLink Group\n0 'AA' Report-1' 'link-1' 1\n1 'AA' Report-2' 'link-2' 1\n2 'AA' Report-1' 'link-1' 2\n3 'AA' Report-1' 'link-1' 2\n4 'AA' Report-1' 'link-1' 3\n5 'BB' Report-4' 'link-4' 1\n6 'BB' Report-2' 'link-2' 1\n7 'BB' Report-4' 'link-4' 2\n8 'BB' Report-6' 'link-6' 2\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074531796_python.txt
Q: Check if any string in a list of strings is in a pandas row and return bool result I want to return bool column based on a condition: column with sentences list = ['foo', 'box'] if any from list in row -> return True, else return False My code does not work and I can't find the mistake: clean_df['to_process'] = clean_df['sentence'].apply( lambda x: True if any(st in x for st in ['foo','box']) else False) A: Use Series.str.contains with join list for regex OR: L = ['foo','box'] clean_df['to_process'] = clean_df['sentence'].str.contains('|'.join(L))
Check if any string in a list of strings is in a pandas row and return bool result
I want to return bool column based on a condition: column with sentences list = ['foo', 'box'] if any from list in row -> return True, else return False My code does not work and I can't find the mistake: clean_df['to_process'] = clean_df['sentence'].apply( lambda x: True if any(st in x for st in ['foo','box']) else False)
[ "Use Series.str.contains with join list for regex OR:\nL = ['foo','box']\nclean_df['to_process'] = clean_df['sentence'].str.contains('|'.join(L))\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "python", "string" ]
stackoverflow_0074532647_pandas_python_string.txt
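One caveat on the '|'.join approach above: str.contains treats the joined string as a regular expression, so terms containing regex metacharacters need escaping first. A small sketch (the sample frame is made up for illustration):

import re
import pandas as pd

clean_df = pd.DataFrame({'sentence': ['foo bar', 'a box here', 'nothing', 'C++ code']})
L = ['foo', 'box', 'C++']              # 'C++' would break a plain '|'.join pattern

pattern = '|'.join(map(re.escape, L))  # escape metacharacters before joining
clean_df['to_process'] = clean_df['sentence'].str.contains(pattern)
print(clean_df)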
Q: Replace and overwrite instead of appending I have the following code: import re #open the xml file for reading: file = open('path/test.xml','r+') #convert to string: data = file.read() file.write(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>",r"<xyz>ABC</xyz>\1<xyz>\2</xyz>",data)) file.close() where I'd like to replace the old content that's in the file with the new content. However, when I execute my code, the file "test.xml" is appended, i.e. I have the old content follwed by the new "replaced" content. What can I do in order to delete the old stuff and only keep the new? A: You need seek to the beginning of the file before writing and then use file.truncate() if you want to do inplace replace: import re myfile = "path/test.xml" with open(myfile, "r+") as f: data = f.read() f.seek(0) f.write(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>", r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", data)) f.truncate() The other way is to read the file then open it again with open(myfile, 'w'): with open(myfile, "r") as f: data = f.read() with open(myfile, "w") as f: f.write(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>", r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", data)) Neither truncate nor open(..., 'w') will change the inode number of the file (I tested twice, once with Ubuntu 12.04 NFS and once with ext4). By the way, this is not really related to Python. The interpreter calls the corresponding low level API. The method truncate() works the same in the C programming language: See http://man7.org/linux/man-pages/man2/truncate.2.html A: file='path/test.xml' with open(file, 'w') as filetowrite: filetowrite.write('new content') Open the file in 'w' mode, you will be able to replace its current text save the file with new contents. A: Using truncate(), the solution could be import re #open the xml file for reading: with open('path/test.xml','r+') as f: #convert to string: data = f.read() f.seek(0) f.write(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>",r"<xyz>ABC</xyz>\1<xyz>\2</xyz>",data)) f.truncate() A: import os#must import this library if os.path.exists('TwitterDB.csv'): os.remove('TwitterDB.csv') #this deletes the file else: print("The file does not exist")#add this to prevent errors I had a similar problem, and instead of overwriting my existing file using the different 'modes', I just deleted the file before using it again, so that it would be as if I was appending to a new file on each run of my code. A: See from How to Replace String in File works in a simple way and is an answer that works with replace fin = open("data.txt", "rt") fout = open("out.txt", "wt") for line in fin: fout.write(line.replace('pyton', 'python')) fin.close() fout.close() A: Using python3 pathlib library: import re from pathlib import Path import shutil shutil.copy2("/tmp/test.xml", "/tmp/test.xml.bak") # create backup filepath = Path("/tmp/test.xml") content = filepath.read_text() filepath.write_text(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>",r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", content)) Similar method using different approach to backups: from pathlib import Path filepath = Path("/tmp/test.xml") filepath.rename(filepath.with_suffix('.bak')) # different approach to backups content = filepath.read_text() filepath.write_text(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>",r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", content)) A: in my case the following code did the trick with open("output.json", "w+") as outfile: #using w+ mode to create file if it not exists. 
and overwrite the existing content json.dump(result_plot, outfile)
Replace and overwrite instead of appending
I have the following code: import re #open the xml file for reading: file = open('path/test.xml','r+') #convert to string: data = file.read() file.write(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>",r"<xyz>ABC</xyz>\1<xyz>\2</xyz>",data)) file.close() where I'd like to replace the old content that's in the file with the new content. However, when I execute my code, the file "test.xml" is appended to, i.e. I have the old content followed by the new "replaced" content. What can I do in order to delete the old stuff and only keep the new?
[ "You need seek to the beginning of the file before writing and then use file.truncate() if you want to do inplace replace:\nimport re\n\nmyfile = \"path/test.xml\"\n\nwith open(myfile, \"r+\") as f:\n data = f.read()\n f.seek(0)\n f.write(re.sub(r\"<string>ABC</string>(\\s+)<string>(.*)</string>\", r\"<xyz>ABC</xyz>\\1<xyz>\\2</xyz>\", data))\n f.truncate()\n\nThe other way is to read the file then open it again with open(myfile, 'w'):\nwith open(myfile, \"r\") as f:\n data = f.read()\n\nwith open(myfile, \"w\") as f:\n f.write(re.sub(r\"<string>ABC</string>(\\s+)<string>(.*)</string>\", r\"<xyz>ABC</xyz>\\1<xyz>\\2</xyz>\", data))\n\nNeither truncate nor open(..., 'w') will change the inode number of the file (I tested twice, once with Ubuntu 12.04 NFS and once with ext4).\nBy the way, this is not really related to Python. The interpreter calls the corresponding low level API. The method truncate() works the same in the C programming language: See http://man7.org/linux/man-pages/man2/truncate.2.html\n", "file='path/test.xml' \nwith open(file, 'w') as filetowrite:\n filetowrite.write('new content')\n\nOpen the file in 'w' mode, you will be able to replace its current text save the file with new contents.\n", "Using truncate(), the solution could be\nimport re\n#open the xml file for reading:\nwith open('path/test.xml','r+') as f:\n #convert to string:\n data = f.read()\n f.seek(0)\n f.write(re.sub(r\"<string>ABC</string>(\\s+)<string>(.*)</string>\",r\"<xyz>ABC</xyz>\\1<xyz>\\2</xyz>\",data))\n f.truncate()\n\n", "import os#must import this library\nif os.path.exists('TwitterDB.csv'):\n os.remove('TwitterDB.csv') #this deletes the file\nelse:\n print(\"The file does not exist\")#add this to prevent errors\n\nI had a similar problem, and instead of overwriting my existing file using the different 'modes', I just deleted the file before using it again, so that it would be as if I was appending to a new file on each run of my code. \n", "See from How to Replace String in File works in a simple way and is an answer that works with replace\nfin = open(\"data.txt\", \"rt\")\nfout = open(\"out.txt\", \"wt\")\n\nfor line in fin:\n fout.write(line.replace('pyton', 'python'))\n\nfin.close()\nfout.close()\n\n", "Using python3 pathlib library:\nimport re\nfrom pathlib import Path\nimport shutil\n\nshutil.copy2(\"/tmp/test.xml\", \"/tmp/test.xml.bak\") # create backup\nfilepath = Path(\"/tmp/test.xml\")\ncontent = filepath.read_text()\nfilepath.write_text(re.sub(r\"<string>ABC</string>(\\s+)<string>(.*)</string>\",r\"<xyz>ABC</xyz>\\1<xyz>\\2</xyz>\", content))\n\nSimilar method using different approach to backups:\nfrom pathlib import Path\n\nfilepath = Path(\"/tmp/test.xml\")\nfilepath.rename(filepath.with_suffix('.bak')) # different approach to backups\ncontent = filepath.read_text()\nfilepath.write_text(re.sub(r\"<string>ABC</string>(\\s+)<string>(.*)</string>\",r\"<xyz>ABC</xyz>\\1<xyz>\\2</xyz>\", content))\n\n", "in my case the following code did the trick\nwith open(\"output.json\", \"w+\") as outfile: #using w+ mode to create file if it not exists. and overwrite the existing content\n json.dump(result_plot, outfile)\n\n" ]
[ 157, 124, 21, 3, 3, 0, 0 ]
[]
[]
[ "python", "replace" ]
stackoverflow_0011469228_python_replace.txt
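Besides seek/truncate and rewriting in 'w' mode, another pattern worth sketching is writing the new content to a temporary file and swapping it into place, so a crash mid-write never leaves a half-written test.xml. The paths reuse the example above; the temp-file handling is illustrative:

import os
import re
import tempfile

myfile = "path/test.xml"

with open(myfile, "r") as f:
    data = f.read()

new_data = re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>",
                  r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", data)

# write the new content next to the original, then swap it in
fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(myfile) or ".")
with os.fdopen(fd, "w") as tmp:
    tmp.write(new_data)
os.replace(tmp_path, myfile)   # atomic rename on POSIX; replaces the target on Windows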
Q: How to configure mypy to ignore a stub file for a specific module? I installed a "dnspython" package with "pip install dnspython" under Ubuntu 22.10 and made a following short script: #!/usr/bin/env python3 import dns.zone import dns.query zone = dns.zone.Zone("example.net") dns.query.inbound_xfr("10.0.0.1", zone) for (name, ttl, rdata) in zone.iterate_rdatas("SOA"): serial_nr = rdata.serial When I check this code snippet with mypy(version 0.990), then it reports an error: Module has no attribute "inbound_xfr" [attr-defined] for line number 7. According to mypy documentation, if a Python file and a stub file are both present in the same directory on the search path, then only the stub file is used. In case of "dnspython", the stub file query.pyi is present in the dns package and the stub file indeed has no attribute "inbound_xfr". When I rename or remove the stub file, then the query.py Python file is used instead of the stub file and mypy no longer complains about missing attribute. I guess this is a "dnspython" bug? Is there a way to tell to mypy that for query module, the stub file should be ignored? A: I would recommend ignoring only the specific wrong line, not the whole module. dns.query.inbound_xfr("10.0.0.1", zone) # type: ignore[attr-defined] This will suppress attr-defined error message that is generated on that line. If you're going to take this approach, I'd also recommend running mypy with the --warn-unused-ignores flag, which will report any redundant and unused # type: ignore statements (for example, after updating the library). A: Is there a way to tell to mypy that for query module, the stub file should be ignored? No. Stub files have precedence over modules. Even if you pass the entire path of the stub file to --exclude, it will still see it. You want to disable a language construct created specifically for definitions, which doesn't seem very logical. I guess this is a "dnspython" bug? Yes. A: First of all, there is a option --exclude PATTERN to ignore files or directory to check. According that doc, you should use --follow-imports option to skip the import module checked by mypy: In particular, --exclude does not affect mypy’s import following. You can use a per-module follow_imports config option to additionally avoid mypy from following imports and checking code you do not wish to be checked. Another way, you could configure the Stub files in a specific directory, and using it by export MYPYPATH.
How to configure mypy to ignore a stub file for a specific module?
I installed a "dnspython" package with "pip install dnspython" under Ubuntu 22.10 and made a following short script: #!/usr/bin/env python3 import dns.zone import dns.query zone = dns.zone.Zone("example.net") dns.query.inbound_xfr("10.0.0.1", zone) for (name, ttl, rdata) in zone.iterate_rdatas("SOA"): serial_nr = rdata.serial When I check this code snippet with mypy(version 0.990), then it reports an error: Module has no attribute "inbound_xfr" [attr-defined] for line number 7. According to mypy documentation, if a Python file and a stub file are both present in the same directory on the search path, then only the stub file is used. In case of "dnspython", the stub file query.pyi is present in the dns package and the stub file indeed has no attribute "inbound_xfr". When I rename or remove the stub file, then the query.py Python file is used instead of the stub file and mypy no longer complains about missing attribute. I guess this is a "dnspython" bug? Is there a way to tell to mypy that for query module, the stub file should be ignored?
[ "I would recommend ignoring only the specific wrong line, not the whole module.\ndns.query.inbound_xfr(\"10.0.0.1\", zone) # type: ignore[attr-defined]\n\nThis will suppress attr-defined error message that is generated on that line. If you're going to take this approach, I'd also recommend running mypy with the --warn-unused-ignores flag, which will report any redundant and unused # type: ignore statements (for example, after updating the library).\n", "\nIs there a way to tell to mypy that for query module, the stub file should be ignored?\n\nNo. Stub files have precedence over modules. Even if you pass the entire path of the stub file to --exclude, it will still see it.\nYou want to disable a language construct created specifically for definitions, which doesn't seem very logical.\n\nI guess this is a \"dnspython\" bug?\n\nYes.\n", "First of all, there is a option --exclude PATTERN to ignore files or directory to check.\nAccording that doc, you should use --follow-imports option to skip the import module checked by mypy:\n\nIn particular, --exclude does not affect mypy’s import following.\n\n\nYou can use a per-module follow_imports config option to additionally avoid mypy from following imports and checking code you do not wish to be checked.\n\nAnother way, you could configure the Stub files in a specific directory, and using it by export MYPYPATH.\n" ]
[ 5, 3, 2 ]
[]
[]
[ "dnspython", "mypy", "python" ]
stackoverflow_0074425218_dnspython_mypy_python.txt
Q: FastAPI does not throw exception I continue writing my first project on the FastAPI. My final method is delete. It deletes record. But after deleting record i post the same "delete" request and the FastAPI says that it was deleted instead of throwing the exception. I checked version_instance and it is None. I checked db and there is no such version. You can post any query parameters and the status code will be 204. So I'm confused. Has somebody any ideas about it? main.py: @app.delete('/', status_code=status.HTTP_204_NO_CONTENT) async def delete_config(service: str, version: str, db: Session = Depends(get_db)): service_instance = db.query(models.Service).filter( models.Service.name == service ).first() if service_instance is None: return HTTPException(status_code=400, detail='Service not found') version_instance = db.query(models.ServiceVersion).filter( models.ServiceVersion.service_id == service_instance.id ).filter(models.ServiceVersion.version == version).first() print(version_instance) if version_instance is None: return HTTPException( status_code=400, detail='Version of service not found' ) if version_instance.is_used == True: return HTTPException(status_code=400, detail='Config is in use') db.query(models.ServiceKey).filter( models.ServiceKey.service_id == service_instance.id ).filter(models.ServiceKey.version_id == version_instance.id).delete() db.query(models.ServiceVersion).filter( models.ServiceVersion.service_id == service_instance.id ).filter(models.ServiceVersion.version == version).delete() db.commit() return 'deleted' A: You have to raise the exception instead of returning it: Below is the example: @app.delete('/', status_code=status.HTTP_204_NO_CONTENT) async def delete_config(service: str, version: str, db: Session = Depends(get_db)): service_instance = db.query(models.Service).filter( models.Service.name == service ).first() if service_instance is None: raise HTTPException(status_code=400, detail='Service not found') version_instance = db.query(models.ServiceVersion).filter( models.ServiceVersion.service_id == service_instance.id ).filter(models.ServiceVersion.version == version).first() print(version_instance) if version_instance is None: raise HTTPException( status_code=400, detail='Version of service not found' ) if version_instance.is_used == True: raise HTTPException(status_code=400, detail='Config is in use') db.query(models.ServiceKey).filter( models.ServiceKey.service_id == service_instance.id ).filter(models.ServiceKey.version_id == version_instance.id).delete() db.query(models.ServiceVersion).filter( models.ServiceVersion.service_id == service_instance.id ).filter(models.ServiceVersion.version == version).delete() db.commit() return 'deleted'
FastAPI does not throw exception
I am continuing to write my first project on FastAPI. My final method is delete. It deletes a record. But after deleting the record I post the same "delete" request and FastAPI says that it was deleted instead of throwing the exception. I checked version_instance and it is None. I checked the db and there is no such version. You can post any query parameters and the status code will be 204. So I'm confused. Does anybody have any ideas about it? main.py: @app.delete('/', status_code=status.HTTP_204_NO_CONTENT) async def delete_config(service: str, version: str, db: Session = Depends(get_db)): service_instance = db.query(models.Service).filter( models.Service.name == service ).first() if service_instance is None: return HTTPException(status_code=400, detail='Service not found') version_instance = db.query(models.ServiceVersion).filter( models.ServiceVersion.service_id == service_instance.id ).filter(models.ServiceVersion.version == version).first() print(version_instance) if version_instance is None: return HTTPException( status_code=400, detail='Version of service not found' ) if version_instance.is_used == True: return HTTPException(status_code=400, detail='Config is in use') db.query(models.ServiceKey).filter( models.ServiceKey.service_id == service_instance.id ).filter(models.ServiceKey.version_id == version_instance.id).delete() db.query(models.ServiceVersion).filter( models.ServiceVersion.service_id == service_instance.id ).filter(models.ServiceVersion.version == version).delete() db.commit() return 'deleted'
[ "You have to raise the exception instead of returning it:\nBelow is the example:\n@app.delete('/', status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_config(service: str, version: str, db: Session = Depends(get_db)):\n service_instance = db.query(models.Service).filter(\n models.Service.name == service\n ).first()\n if service_instance is None:\n raise HTTPException(status_code=400, detail='Service not found')\n version_instance = db.query(models.ServiceVersion).filter(\n models.ServiceVersion.service_id == service_instance.id\n ).filter(models.ServiceVersion.version == version).first()\n print(version_instance)\n if version_instance is None:\n raise HTTPException(\n status_code=400, detail='Version of service not found'\n )\n if version_instance.is_used == True:\n raise HTTPException(status_code=400, detail='Config is in use')\n db.query(models.ServiceKey).filter(\n models.ServiceKey.service_id == service_instance.id\n ).filter(models.ServiceKey.version_id == version_instance.id).delete()\n db.query(models.ServiceVersion).filter(\n models.ServiceVersion.service_id == service_instance.id\n ).filter(models.ServiceVersion.version == version).delete()\n db.commit()\n return 'deleted'\n\n" ]
[ 1 ]
[]
[]
[ "fastapi", "python" ]
stackoverflow_0074519795_fastapi_python.txt
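The return-versus-raise difference can be seen without any database code; a minimal sketch (the endpoint paths are invented for illustration, and TestClient needs its test dependency, httpx or requests, installed). Returning the exception object is treated like any other return value and serialised into the body, while raising it produces the real error response:

from fastapi import FastAPI, HTTPException
from fastapi.testclient import TestClient

app = FastAPI()

@app.delete("/returned")
def returned():
    # the exception object is just serialised into an ordinary (200) response body
    return HTTPException(status_code=400, detail="not found")

@app.delete("/raised")
def raised():
    # raising lets FastAPI turn it into a real 400 response
    raise HTTPException(status_code=400, detail="not found")

client = TestClient(app)
print(client.delete("/returned").status_code)   # 200
print(client.delete("/raised").status_code)     # 400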
Q: Is there a way to use Pathlib to traverse parents folders until a name matches? I was discussing with a colleague if there is a built-in (or clean) way to use Pathlib to traverse through an arbitrary Path to find a given parent folder, for example the root of your repository (which may differ per user that has a local copy of said repo). I simulated the desired behaviour below: from pathlib import Path def find_parent(path: Path, target_parent: str) -> Path: for part in path.parts[::-1]: if part != target_parent: path = path.parent else: break return path path = Path("/some/arbitrarily/long/path/ROOT_FOLDER/subfolder1/subfolder2/file.py") root = find_parent(path, "ROOT_FOLDER") assert root == Path("/some/arbitrarily/long/path/ROOT_FOLDER") Is there an easier way to achieve this? A: You could iterate over path.parents (plural) directly, which makes this a bit cleaner: def find_parent(path: Path, target_parent: str) -> Path | None: # `path.parents` does not include `path`, so we need to prepend it if it is # to be considered for parent in [path] + list(path.parents): if parent.name == target_parent: return parent (No need for the else clause.) A: Based on @Chris's answer, I found the following one-liner is what I am after: root = [parent for parent in path.parents if parent.name == "ROOT_FOLDER"][0] Updated to root = next((parent for parent in path.parents if parent.name == "ROOT_FOLDER"), None) based on @SUTerliakov's suggestion.
Is there a way to use Pathlib to traverse parents folders until a name matches?
I was discussing with a colleague if there is a built-in (or clean) way to use Pathlib to traverse through an arbitrary Path to find a given parent folder, for example the root of your repository (which may differ per user that has a local copy of said repo). I simulated the desired behaviour below: from pathlib import Path def find_parent(path: Path, target_parent: str) -> Path: for part in path.parts[::-1]: if part != target_parent: path = path.parent else: break return path path = Path("/some/arbitrarily/long/path/ROOT_FOLDER/subfolder1/subfolder2/file.py") root = find_parent(path, "ROOT_FOLDER") assert root == Path("/some/arbitrarily/long/path/ROOT_FOLDER") Is there an easier way to achieve this?
[ "You could iterate over path.parents (plural) directly, which makes this a bit cleaner:\ndef find_parent(path: Path, target_parent: str) -> Path | None:\n # `path.parents` does not include `path`, so we need to prepend it if it is\n # to be considered\n for parent in [path] + list(path.parents):\n if parent.name == target_parent:\n return parent\n\n(No need for the else clause.)\n", "Based on @Chris's answer, I found the following one-liner is what I am after:\nroot = [parent for parent in path.parents if parent.name == \"ROOT_FOLDER\"][0]\n\nUpdated to root = next((parent for parent in path.parents if parent.name == \"ROOT_FOLDER\"), None) based on @SUTerliakov's suggestion.\n" ]
[ 3, 0 ]
[]
[]
[ "pathlib", "python" ]
stackoverflow_0074532372_pathlib_python.txt
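Since the question mentions locating a repository root, a related sketch keys the same parents walk off a marker file instead of a fixed folder name (the .git marker is an assumption, not part of the original question):

from pathlib import Path

def find_repo_root(path: Path, marker: str = ".git") -> Path | None:
    # walk from the path itself upward until a directory containing the marker is found
    for candidate in [path, *path.parents]:
        if (candidate / marker).exists():
            return candidate
    return None

print(find_repo_root(Path.cwd()))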
Q: How to update DataTable interactively with a callback function in dash? I feel like this is a basic problem and I`ve looked through all relevant topics on SO but still can't manage to update a simple table in dash with interactive input. Basically I have a table that contains data and want to be able to change that data depending on manual user inputs. I feel like this should be possible with a simple @callback. However no matter what I do the table always remains the same. In the following example I am trying to filter the data by a category depending on a Input checklist. But I am not looking for a solution where I can filter but rather actually alter the table's data, like multiplying the price by a Input factor etc. # import dash and standard packages import dash import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output import pandas as pd # creating the base table df = pd.DataFrame(columns=['car', 'category', 'price'], data=[[1, 'SUV', 27000], [2, 'Sports', 90000],[3, 'SUV', 47000]]) # launch the app app = dash.Dash() app.layout = html.Div(children=[ dcc.Checklist(id='category-list', options=[{'label':s ,'value':s} for s in df['category'].unique()], value=[s for s in df['category'].unique()], labelStyle={"display": "block"}), html.Div([dash.dash_table.DataTable( id='scorecard-table', data=df.to_dict('records'), columns=[{"name": i, "id": i} for i in df.columns], fixed_rows={'headers': True, 'data': 0}, fixed_columns={'headers': True, 'data': 1}, export_columns='visible', export_format='xlsx') ])], style={'border':'2px grey solid'}) if __name__ == '__main__': app.run_server() # update the table @app.callback( Output('scorecard-table', 'data'), Input('category-list', 'value'), ) def update_output(value): global df # not sure if that's helpful return df[df['category'].isin(value)].to_dict('records') But if I change the checkboxes, no matter what way, nothing happens to the DataTable it always stays the same A: I found the problem in your code. I have changed the order of the code and I have also set the debug mode which helps to debug your code. Below is the code with few modifications and fully functional # import dash and standard packages import dash from dash import html, dcc, Input, Output import pandas as pd # creating the base table df = pd.DataFrame(columns=['car', 'category', 'price'], data=[[1, 'SUV', 27000], [2, 'Sports', 90000],[3, 'SUV', 47000]]) # launch the app app = dash.Dash() app.layout = html.Div(children=[ dcc.Checklist(id='category-list', options=[{'label':s ,'value':s} for s in df['category'].unique()], value=[s for s in df['category'].unique()], labelStyle={"display": "block"}), html.Div([dash.dash_table.DataTable( id='scorecard-table', data=df.to_dict('records'), columns=[{"name": i, "id": i} for i in df.columns], fixed_rows={'headers': True, 'data': 0}, fixed_columns={'headers': True, 'data': 1}, export_columns='visible', export_format='xlsx') ])], style={'border':'2px grey solid'}) # update the table @app.callback( Output('scorecard-table', 'data'), Input('category-list', 'value'), ) def update_output(value): if value == []: return df.to_dict('records') return df[df['category'].isin(value)].to_dict('records') if __name__ == '__main__': app.run_server(debug=True) I hope it can help you to better understand how to use Dash appropriatelly and feel free to ask me any question you have about the modifications. Also, if this answers helps, don't forget to upvote it and set it as the answer. 
Best Regards, Leonardo
How to update DataTable interactively with a callback function in dash?
I feel like this is a basic problem and I`ve looked through all relevant topics on SO but still can't manage to update a simple table in dash with interactive input. Basically I have a table that contains data and want to be able to change that data depending on manual user inputs. I feel like this should be possible with a simple @callback. However no matter what I do the table always remains the same. In the following example I am trying to filter the data by a category depending on a Input checklist. But I am not looking for a solution where I can filter but rather actually alter the table's data, like multiplying the price by a Input factor etc. # import dash and standard packages import dash import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output import pandas as pd # creating the base table df = pd.DataFrame(columns=['car', 'category', 'price'], data=[[1, 'SUV', 27000], [2, 'Sports', 90000],[3, 'SUV', 47000]]) # launch the app app = dash.Dash() app.layout = html.Div(children=[ dcc.Checklist(id='category-list', options=[{'label':s ,'value':s} for s in df['category'].unique()], value=[s for s in df['category'].unique()], labelStyle={"display": "block"}), html.Div([dash.dash_table.DataTable( id='scorecard-table', data=df.to_dict('records'), columns=[{"name": i, "id": i} for i in df.columns], fixed_rows={'headers': True, 'data': 0}, fixed_columns={'headers': True, 'data': 1}, export_columns='visible', export_format='xlsx') ])], style={'border':'2px grey solid'}) if __name__ == '__main__': app.run_server() # update the table @app.callback( Output('scorecard-table', 'data'), Input('category-list', 'value'), ) def update_output(value): global df # not sure if that's helpful return df[df['category'].isin(value)].to_dict('records') But if I change the checkboxes, no matter what way, nothing happens to the DataTable it always stays the same
[ "I found the problem in your code.\n\nI have changed the order of the code and I have also set the debug mode which helps to debug your code.\nBelow is the code with few modifications and fully functional\n# import dash and standard packages\nimport dash\nfrom dash import html, dcc, Input, Output\nimport pandas as pd\n\n# creating the base table\ndf = pd.DataFrame(columns=['car', 'category', 'price'], data=[[1, 'SUV', 27000], [2, 'Sports', 90000],[3, 'SUV', 47000]])\n\n# launch the app\napp = dash.Dash()\napp.layout = html.Div(children=[\n dcc.Checklist(id='category-list',\n options=[{'label':s ,'value':s} for s in df['category'].unique()],\n value=[s for s in df['category'].unique()],\n labelStyle={\"display\": \"block\"}),\n html.Div([dash.dash_table.DataTable(\n id='scorecard-table',\n data=df.to_dict('records'),\n columns=[{\"name\": i, \"id\": i} for i in df.columns],\n fixed_rows={'headers': True, 'data': 0},\n fixed_columns={'headers': True, 'data': 1},\n export_columns='visible',\n export_format='xlsx')\n ])], style={'border':'2px grey solid'})\n\n\n# update the table\n@app.callback(\n Output('scorecard-table', 'data'),\n Input('category-list', 'value'),\n)\ndef update_output(value):\n if value == []:\n return df.to_dict('records')\n return df[df['category'].isin(value)].to_dict('records')\n\nif __name__ == '__main__':\n app.run_server(debug=True)\n\nI hope it can help you to better understand how to use Dash appropriatelly and feel free to ask me any question you have about the modifications.\nAlso, if this answers helps, don't forget to upvote it and set it as the answer.\nBest Regards,\nLeonardo\n" ]
[ 1 ]
[]
[]
[ "callback", "dashboard", "interactive", "plotly_dash", "python" ]
stackoverflow_0074531568_callback_dashboard_interactive_plotly_dash_python.txt
Q: Custom standard input for python subprocess I'm running an SSH process like this: sshproc = subprocess.Popen([command], shell=True) exit = os.waitpid(sshproc.pid, 0)[1] This works and opens an interactive terminal. Based on the documentation for subprocess, sshproc is using the script's sys.stdin. The question is: how can I print to stderr or a file what input is being received to this child process? I am creating a logging API, and currently lose the ability to record what commands are run over this SSH session. I don't need the answer, just a nudge in the right direction. Thanks everyone! EDIT: It is important that I start the process as shown above so that I can have a interactive SSH session with my user. E.g. I cannot use communicate() as far as I know. A: sshproc = subprocess.Popen([command], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) stdout_value, stderr_value = sshproc.communicate('through stdin to stdout') print repr(stdout_value) print repr(stderr_value) Ah since you said nudge in right direction, I thought I should point you to good readups: http://www.doughellmann.com/PyMOTW/subprocess/ capture stderr from python subprocess.Popen(command, stderr=subprocess.PIPE, stdout=subprocess.PIPE) - A: correct link to @pyfunc's answer, http://pymotw.com/2/subprocess/ Very informative source, subprocess, allows calling any number of other processes or bash commands, with all piping customizations.
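The pipe-based answers here trade away the interactive terminal the question wants to keep. On a Unix-like host, one hedged alternative is the standard-library pty module: pty.spawn() keeps the SSH session fully interactive and accepts two read callbacks, one for the child's output and one for the user's keystrokes, which is exactly the hook needed to log what is typed. A rough sketch (the log path and host are illustrative, and note that this records every keystroke, including passwords):

import os
import pty

log = open('/tmp/ssh_session.log', 'ab')

def read_child(fd):
    return os.read(fd, 1024)          # child's output, echoed to the user as usual

def read_user(fd):
    data = os.read(fd, 1024)          # what the user types, forwarded to the child
    log.write(data)
    return data

pty.spawn(['ssh', 'user@host'], read_child, read_user)   # blocks until the session ends
log.close()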
Custom standard input for python subprocess
I'm running an SSH process like this: sshproc = subprocess.Popen([command], shell=True) exit = os.waitpid(sshproc.pid, 0)[1] This works and opens an interactive terminal. Based on the documentation for subprocess, sshproc is using the script's sys.stdin. The question is: how can I print to stderr or a file what input is being received to this child process? I am creating a logging API, and currently lose the ability to record what commands are run over this SSH session. I don't need the answer, just a nudge in the right direction. Thanks everyone! EDIT: It is important that I start the process as shown above so that I can have a interactive SSH session with my user. E.g. I cannot use communicate() as far as I know.
[ "sshproc = subprocess.Popen([command],\n shell=True,\n stdin=subprocess.PIPE,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n )\n\nstdout_value, stderr_value = sshproc.communicate('through stdin to stdout')\nprint repr(stdout_value)\nprint repr(stderr_value)\n\nAh since you said nudge in right direction, I thought I should point you to good readups:\n\nhttp://www.doughellmann.com/PyMOTW/subprocess/\ncapture stderr from python subprocess.Popen(command, stderr=subprocess.PIPE, stdout=subprocess.PIPE)\n\n-\n", "correct link to @pyfunc's answer, http://pymotw.com/2/subprocess/\nVery informative source, subprocess, allows calling any number of other processes or bash commands, with all piping customizations.\n" ]
[ 8, 0 ]
[]
[]
[ "python", "stdin", "subprocess" ]
stackoverflow_0003729366_python_stdin_subprocess.txt
Q: Creating a list of n numbers between x and y who sum up to z I am trying to create a random set of 25 numbers, which are between 2 and 25, and sum up to 100 in python. This Question gives an answer, but it seems that the maximum number never ends up being close to 25. I've tried creating a list, dividing each number, and recreating the list, but it essentially nullifies my min and max values since they end up getting divided by a number larger than 1 almost all of the time: numbers = np.random.randint(low = 2, high = 25, size = 100, dtype = int) scale = 100 / sum(numbers) #We want weights to add up to 100% #Scale values for value in numbers: nums.append(value * scale) Is there any way to do this? Thanks A: You haven't specified what probability distribution the numbers should have so this could be an easy valid way although very unlikely to yield numbers close to 25: import numpy as np numbers = np.full(25,2) while numbers.sum() < 100: i = np.random.randint(25) if numbers[i] < 25: # almost guaranteed... numbers[i] += 1 A: Suppose you want a random list of 25 numbers (not necessarily integers) which add up to 100, with the constraint that each number is at least 2 and no more than 25. First, note that we can change that into an equivalent problem where the numbers are only required to be non-negative, by generating 25 numbers between 0 and 23 which add up to 100-25*2, which is 50. Once we have that list, we just add two to each number; the new list will be between 2 and 25, and its sum will be 100 (because we added 2 to each of 25 numbers). The second thing to note is that the probability of finding a number close to 25 in that list is pretty small, since that would require one number to have attracted almost half of the available total. (That claim is clearer if you look at the alternative formulation, 25 numbers between 0 and 23 which add up to 50. If one of those numbers is, say, 20, then the other 24 numbers add up to 30, which means that you have a distribution which looks more like the distribution of wealth in an unregulated market than a uniform distribution.) Since we're going to generate a uniform sample, we can handle the maximum value by ignoring it until we generate the random list, and then checking to see if the incredibly unlikely biased sample showed up; if it did, we just toss out the result and try again. (That's called "rejection sampling", and it's a pretty common technique, which works adequately even if half the samples are rejected. The advantage of rejection sampling is that it does not introduce bias.) So let's get back to the question of how to generate a uniformly distributed list of non-negative numbers with a given sum. As long as the numbers are from a very large universe of possible values (like all double-precision floating point numbers within the range), that's quite easy. Say we need N numbers which add up to k. We start by randomly generating N-1 numbers, each in the range (0, k). Then we sort that set of numbers, and put 0 at one end of the sorted list and k at the other end. Finally, we compute the adjacent differences between successive elements. That gives us a list of N numbers which add up to k, and it turns out that the random lists so generated are an almost uniform sample of the possibilities. (There is a tiny bias introduced by the fact that it is possible for the same random number to have been generated twice, leading to a zero in the final list of differences. 
The zero is not a problem; the problem is that the zero shows up slightly too often. But the probability of getting an exact zero is less than one in a hundred million, and it's only exact zeros whose frequency is biased.) In summary: from random import uniform def gen_random_list(N=25, k=100, min=2, max=25): assert(N * min <= k <= N * max) adjusted_k = k - min * N while True: endpoints = sorted(uniform(0, adjusted_k) for i in range(N - 1)) values = [*(end - begin + min for begin, end in zip([0] + endpoints, endpoints + [adjusted_k]))] if all(v <= max for v in values): return values OK, what if we need a list of integers? In that case, it's much more likely that the above procedure will produce a zero, and the bias will be noticeable. To avoid that, we make two changes: Instead of adjusting the range so that the minimum is 0, we adjust it so that the minimum is 1. (Which works for integers, because there are no integers between 0 and 1.) Now, the adjusted sum will be k' = k - N * (min - 1). Second, instead of generating N - 1 independent random values, we generate a random selection of N - 1 different values from the half-open range [1, k') (using random.sample) Other than that, the algorithm is the same. Sort the generated list, compute adjacent differences, and verify that the maximum wasn't exceeded: from random import sample def gen_random_list_of_ints(N=25, k=100, min=2, max=25): assert(N * min <= k <= N * max) adjusted_k = k - (min - 1) * N while True: endpoints = sorted(sample(range(1, adjusted_k), N - 1)) values = [*(end - begin + min - 1 for begin, end in zip([0] + endpoints, endpoints + [adjusted_k]))] if all(v <= max for v in values): return values A: Simplest way to do that for integers is to use Multinomial distribution, which has nice property to sum up to desired number. First, take out minimum value to get range [0...s], and then just use multinomial and reject sample exceeding max value. You could play with probabilities array p to get desired behavior. As already noted, mean value would be 4. Code, Python 3.10, Windows x64 import numpy as np N = 25 minv = 2 maxv = 25 summ = 100 def sampling(N, minv, maxv, summ): summa = summ - N*minv # fix range to [0...] p = np.full(N, 1.0/N) # probabilities, uniform while True: t = np.random.multinomial(summa, p, size=1) + minv # back to desired range if np.any(t > maxv): # check continue # and reject return t q = sampling(N, minv, maxv, summ) print(np.sum(q)) UPDATE Mean value of Xi for multinomial is E(Xi)=n pi. In your case n=(100-25⎈2)=50. pi=1/25, so E(Xi)=50/25=2, and you have to add back 2, so what you see as mean would be 4. But! You could change pi such that it is not equiprobable anymore. E.g. 5⎈[0.1] + 20⎈[0.5/20] will produce first five rv with mean 50⎈0.1+2=7 and last 20 with mean 50⎈0.5/20+2=1.25+2=3.25
Creating a list of n numbers between x and y who sum up to z
I am trying to create a random set of 25 numbers, which are between 2 and 25, and sum up to 100 in python. This Question gives an answer, but it seems that the maximum number never ends up being close to 25. I've tried creating a list, dividing each number, and recreating the list, but it essentially nullifies my min and max values since they end up getting divided by a number larger than 1 almost all of the time: numbers = np.random.randint(low = 2, high = 25, size = 100, dtype = int) scale = 100 / sum(numbers) #We want weights to add up to 100% #Scale values for value in numbers: nums.append(value * scale) Is there any way to do this? Thanks
[ "You haven't specified what probability distribution the numbers should have so this could be an easy valid way although very unlikely to yield numbers close to 25:\nimport numpy as np \nnumbers = np.full(25,2)\nwhile numbers.sum() < 100:\n i = np.random.randint(25)\n if numbers[i] < 25: # almost guaranteed...\n numbers[i] += 1\n\n", "Suppose you want a random list of 25 numbers (not necessarily integers) which add up to 100, with the constraint that each number is at least 2 and no more than 25.\nFirst, note that we can change that into an equivalent problem where the numbers are only required to be non-negative, by generating 25 numbers between 0 and 23 which add up to 100-25*2, which is 50. Once we have that list, we just add two to each number; the new list will be between 2 and 25, and its sum will be 100 (because we added 2 to each of 25 numbers).\nThe second thing to note is that the probability of finding a number close to 25 in that list is pretty small, since that would require one number to have attracted almost half of the available total. (That claim is clearer if you look at the alternative formulation, 25 numbers between 0 and 23 which add up to 50. If one of those numbers is, say, 20, then the other 24 numbers add up to 30, which means that you have a distribution which looks more like the distribution of wealth in an unregulated market than a uniform distribution.)\nSince we're going to generate a uniform sample, we can handle the maximum value by ignoring it until we generate the random list, and then checking to see if the incredibly unlikely biased sample showed up; if it did, we just toss out the result and try again. (That's called \"rejection sampling\", and it's a pretty common technique, which works adequately even if half the samples are rejected. The advantage of rejection sampling is that it does not introduce bias.)\nSo let's get back to the question of how to generate a uniformly distributed list of non-negative numbers with a given sum. As long as the numbers are from a very large universe of possible values (like all double-precision floating point numbers within the range), that's quite easy. Say we need N numbers which add up to k. We start by randomly generating N-1 numbers, each in the range (0, k). Then we sort that set of numbers, and put 0 at one end of the sorted list and k at the other end. Finally, we compute the adjacent differences between successive elements. That gives us a list of N numbers which add up to k, and it turns out that the random lists so generated are an almost uniform sample of the possibilities. (There is a tiny bias introduced by the fact that it is possible for the same random number to have been generated twice, leading to a zero in the final list of differences. The zero is not a problem; the problem is that the zero shows up slightly too often. But the probability of getting an exact zero is less than one in a hundred million, and it's only exact zeros whose frequency is biased.)\nIn summary:\nfrom random import uniform\ndef gen_random_list(N=25, k=100, min=2, max=25):\n assert(N * min <= k <= N * max)\n adjusted_k = k - min * N\n while True:\n endpoints = sorted(uniform(0, adjusted_k) for i in range(N - 1))\n values = [*(end - begin + min\n for begin, end in zip([0] + endpoints,\n endpoints + [adjusted_k]))]\n if all(v <= max for v in values):\n return values\n\nOK, what if we need a list of integers? In that case, it's much more likely that the above procedure will produce a zero, and the bias will be noticeable. 
To avoid that, we make two changes:\n\nInstead of adjusting the range so that the minimum is 0, we adjust it so that the minimum is 1. (Which works for integers, because there are no integers between 0 and 1.) Now, the adjusted sum will be k' = k - N * (min - 1).\nSecond, instead of generating N - 1 independent random values, we generate a random selection of N - 1 different values from the half-open range [1, k') (using random.sample)\nOther than that, the algorithm is the same. Sort the generated list, compute adjacent differences, and verify that the maximum wasn't exceeded:\n\nfrom random import sample\ndef gen_random_list_of_ints(N=25, k=100, min=2, max=25):\n assert(N * min <= k <= N * max)\n adjusted_k = k - (min - 1) * N\n while True:\n endpoints = sorted(sample(range(1, adjusted_k), N - 1))\n values = [*(end - begin + min - 1\n for begin, end in zip([0] + endpoints,\n endpoints + [adjusted_k]))]\n if all(v <= max for v in values):\n return values\n\n", "Simplest way to do that for integers is to use Multinomial distribution, which has nice property to sum up to desired number. First, take out minimum value to get range [0...s], and then just use multinomial and reject sample exceeding max value. You could play with probabilities array p to get desired behavior.\nAs already noted, mean value would be 4.\nCode, Python 3.10, Windows x64\nimport numpy as np\n\nN = 25\n\nminv = 2\nmaxv = 25\nsumm = 100\n\ndef sampling(N, minv, maxv, summ):\n\n summa = summ - N*minv # fix range to [0...]\n p = np.full(N, 1.0/N) # probabilities, uniform\n\n while True:\n t = np.random.multinomial(summa, p, size=1) + minv # back to desired range\n if np.any(t > maxv): # check\n continue # and reject\n return t\n\nq = sampling(N, minv, maxv, summ)\n\nprint(np.sum(q))\n\nUPDATE\nMean value of Xi for multinomial is E(Xi)=n pi. In your case n=(100-25⎈2)=50. pi=1/25, so\nE(Xi)=50/25=2, and you have to add back 2, so what you see as mean would be 4.\nBut! You could change pi such that it is not equiprobable anymore. E.g. 5⎈[0.1] + 20⎈[0.5/20] will produce first five rv with mean 50⎈0.1+2=7 and last 20 with mean 50⎈0.5/20+2=1.25+2=3.25\n" ]
[ 0, 0, 0 ]
[]
[]
[ "numpy", "pandas", "python", "random" ]
stackoverflow_0074527506_numpy_pandas_python_random.txt
Q: How to make Python recognize installed SQLite? My Linux machine has sqlite3 installed: [root@airflow-xxxxx bin]# which sqlite3 /bin/sqlite3 [root@airflow-xxxxx bin]# However there are two versions of Python on my machine; 3.6.8 and 3.9.10: [root@airflow-xxxxx bin]# python3 Python 3.6.8 (default, Aug 13 2020, 07:46:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import sqlite3 >>> exit() [root@airflow-xxxxx bin]# python Python 3.9.10 (main, Nov 21 2022, 14:02:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import sqlite3 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.9/sqlite3/__init__.py", line 57, in <module> from sqlite3.dbapi2 import * File "/usr/local/lib/python3.9/sqlite3/dbapi2.py", line 27, in <module> from _sqlite3 import * ModuleNotFoundError: No module named '_sqlite3' >>> But only the 3.6 version recognizes installed sqlite3. I tried installing sqlite-devel but got "module not found" from Nexus. As I understand, SQLite comes bundled with Python. How do I get Python 3.9 to recognize the SQLite installed? A: i actually had this problem recently, the issue is the order of installation, the _sqlite3 problem only seems to happen for versions of python 3.8 and above when python is built from source. how were the two installations of python put on the machine? one solution would be to uninstall the 3.9 completely, (i used this answer when i had this problem https://unix.stackexchange.com/questions/190794/uninstall-python-installed-by-compiling-source) then yum install sqlite-devel and only after this is completed build python from source again as an altinstall with: ./configure --enable-optimizations --enable-loadable-sqlite-extensions if you are unsure how to build python from source you can use this tutorial: https://docs.posit.co/resources/install-python-source/ note that you will need to add the --enable-loadable-sqlite-extensions flag you could also use an asnwer such as this one: https://superuser.com/questions/686980/how-to-install-alternative-version-of-python-beside-distro-supplied to create an alternate instalation in any case, it's a real pain, i hope this helps
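Before rebuilding, it can help to confirm from the Python side that the 3.9 build really skipped the C extension and to see which configure flags it was built with; a small sanity check, run with the failing /usr/local interpreter:

import importlib.util, sysconfig

print(importlib.util.find_spec('_sqlite3'))        # None means the extension was never built
print(sysconfig.get_config_var('CONFIG_ARGS'))     # the ./configure options used for this build

The extension is compiled (or silently skipped) while Python itself is being built, which is why sqlite-devel has to be installed before the rebuild described in the answer.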
How to make Python recognize installed SQLite?
My Linux machine has sqlite3 installed: [root@airflow-xxxxx bin]# which sqlite3 /bin/sqlite3 [root@airflow-xxxxx bin]# However there are two versions of Python on my machine; 3.6.8 and 3.9.10: [root@airflow-xxxxx bin]# python3 Python 3.6.8 (default, Aug 13 2020, 07:46:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import sqlite3 >>> exit() [root@airflow-xxxxx bin]# python Python 3.9.10 (main, Nov 21 2022, 14:02:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import sqlite3 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.9/sqlite3/__init__.py", line 57, in <module> from sqlite3.dbapi2 import * File "/usr/local/lib/python3.9/sqlite3/dbapi2.py", line 27, in <module> from _sqlite3 import * ModuleNotFoundError: No module named '_sqlite3' >>> But only the 3.6 version recognizes installed sqlite3. I tried installing sqlite-devel but got "module not found" from Nexus. As I understand, SQLite comes bundled with Python. How do I get Python 3.9 to recognize the SQLite installed?
[ "i actually had this problem recently, the issue is the order of installation, the _sqlite3 problem only seems to happen for versions of python 3.8 and above\nwhen python is built from source.\nhow were the two installations of python put on the machine? one solution would be to uninstall the 3.9 completely, (i used this answer when i had this problem https://unix.stackexchange.com/questions/190794/uninstall-python-installed-by-compiling-source) then yum install sqlite-devel and only after this is completed build python from source again as an altinstall with:\n./configure --enable-optimizations --enable-loadable-sqlite-extensions\n\nif you are unsure how to build python from source you can use this tutorial:\nhttps://docs.posit.co/resources/install-python-source/\nnote that you will need to add the --enable-loadable-sqlite-extensions flag\nyou could also use an asnwer such as this one: https://superuser.com/questions/686980/how-to-install-alternative-version-of-python-beside-distro-supplied to create an alternate instalation\nin any case, it's a real pain, i hope this helps\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "sqlite" ]
stackoverflow_0074532551_python_python_3.x_sqlite.txt
Q: How to find a value up to some decimal point? I have a csv file with thousand of rows like: name,post x1,25.84 x2,51.0634699001 x3,73.01 x4,72.0 x5,79.0 x6,75.9 x7,95.29 x8,93.55 x9,93.7 x10,10.0 x11,93.99 I am trying to write a python code, possibly something with pandas maybe that will pick up only the post values ending with .0 The desired output in this case is name,post x4,72.0 x5,79.0 x10,10.0 not showing x2 and x3 because after 0 other numbers exist. I tried this but not working: df['zeros'] = df['post'].str.extract('([.0]*[,.][.0]*)') A: If you have floats, this is not possible. Assuming you have strings, use str.endswith: out = df[df['post'].str.endswith('0')] If you want to ensure matching a decimal ending in 0 (again, with a string as input), use: To allow integers without decimal part (10) out = df[df['post'].str.fullmatch(r'\d+(\.\d*0)?')] To forbid integers without decimal part: out = df[df['post'].str.fullmatch(r'\d+(\.\d*0)')] Output: name post 3 x4 72.0 4 x5 79.0 9 x10 10.0
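As the answer notes, the regex route assumes the post column was read as text. If the column has already been parsed as float, the trailing-zero information is gone and the closest equivalent is keeping the integer-valued rows; both variants are sketched here, with an illustrative file name:

import pandas as pd

df = pd.read_csv('data.csv')                     # post parsed as float by default
out = df[df['post'] % 1 == 0]                    # rows whose value has no fractional part

df = pd.read_csv('data.csv', dtype={'post': str})
out = df[df['post'].str.endswith('.0')]          # rows whose text literally ends in .0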
How to find a value up to some decimal point?
I have a csv file with thousands of rows like:

name,post
x1,25.84
x2,51.0634699001
x3,73.01
x4,72.0
x5,79.0
x6,75.9
x7,95.29
x8,93.55
x9,93.7
x10,10.0
x11,93.99

I am trying to write Python code, possibly with pandas, that will pick up only the post values ending in .0. The desired output in this case is

name,post
x4,72.0
x5,79.0
x10,10.0

x2 and x3 are not shown because other digits follow the 0. I tried this, but it is not working:

df['zeros'] = df['post'].str.extract('([.0]*[,.][.0]*)')
[ "If you have floats, this is not possible.\nAssuming you have strings, use str.endswith:\nout = df[df['post'].str.endswith('0')]\n\nIf you want to ensure matching a decimal ending in 0 (again, with a string as input), use:\nTo allow integers without decimal part (10)\nout = df[df['post'].str.fullmatch(r'\\d+(\\.\\d*0)?')]\n\nTo forbid integers without decimal part:\nout = df[df['post'].str.fullmatch(r'\\d+(\\.\\d*0)')]\n\nOutput:\n name post\n3 x4 72.0\n4 x5 79.0\n9 x10 10.0\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074532786_dataframe_pandas_python.txt
Q: How to change view in pycharm in SciView I have an example to multiple each row and column and I am getting good results but I want to change view, when I multiply column I want to have result in n-columns in one row, and when I multiply rows I want to have one column and n-rows. Now it shows in both cases one row and multiple columns, and it is difficult to understand. array2 = np.array([[2,3,5,1], [5,1,2,8], [5,1,6,-1]]) multiply_columns_array2 = array2.prod(axis=0) multiply_rows_array2 = array2.prod(axis=1) A: You can use reshape or vstack. The best way in my experience is to use reshape(-1, 1) because you don't have to specify the size of the array. It works like this: >>> a=np.arange(1,4) >>> a array([1, 2, 3]) >>> a.reshape(3,1) array([[1], [2], [3]]) >>> np.vstack(a) array([[1], [2], [3]]) Also, you can use broadcasting in order to reshape your array: In [32]: a = np.arange(10) In [33]: a Out[33]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) In [34]: a[:,None] Out[34]: array([[0], [1], [2], [3], [4], [5], [6], [7], [8], [9]])
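Applied to the arrays from the question, the reshaping suggested in the answer would look roughly like this, after which SciView shows the row products as a single column while the column products stay as one row:

import numpy as np

array2 = np.array([[2, 3, 5, 1],
                   [5, 1, 2, 8],
                   [5, 1, 6, -1]])

multiply_columns_array2 = array2.prod(axis=0)               # shape (4,), displayed as one row
multiply_rows_array2 = array2.prod(axis=1).reshape(-1, 1)   # shape (3, 1), displayed as one column

np.prod also accepts keepdims=True, so array2.prod(axis=1, keepdims=True) gives the (3, 1) shape directly.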
How to change view in pycharm in SciView
I have an example that multiplies each row and column, and I am getting correct results, but I want to change the view: when I multiply down the columns I want the result shown as n columns in one row, and when I multiply across the rows I want one column with n rows. Right now both results are shown as one row with multiple columns, which is difficult to read.

array2 = np.array([[2,3,5,1],
                   [5,1,2,8],
                   [5,1,6,-1]])

multiply_columns_array2 = array2.prod(axis=0)
multiply_rows_array2 = array2.prod(axis=1)
[ "You can use reshape or vstack.\nThe best way in my experience is to use reshape(-1, 1) because you don't have to specify the size of the array. It works like this:\n>>> a=np.arange(1,4)\n>>> a\narray([1, 2, 3])\n>>> a.reshape(3,1)\narray([[1],\n [2],\n [3]])\n>>> np.vstack(a)\narray([[1],\n [2],\n [3]])\n\nAlso, you can use broadcasting in order to reshape your array:\nIn [32]: a = np.arange(10)\nIn [33]: a\nOut[33]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\nIn [34]: a[:,None]\nOut[34]: \narray([[0],\n [1],\n [2],\n [3],\n [4],\n [5],\n [6],\n [7],\n [8],\n [9]])\n\n" ]
[ 0 ]
[]
[]
[ "numpy_ndarray", "pycharm", "python" ]
stackoverflow_0074531120_numpy_ndarray_pycharm_python.txt
Q: How to modify values in xml using python? I am trying to modify the values of xml files using python. Here is a sample xml file I wrote a code for adding the text to the name with iteration. If given a set of inputs in an array, how can we check the values name example:"Belgian Waffles" and add 2$ more price to it ? example : array=[Strawberry Belgian Waffles,Belgian Waffles] If "Belgian Waffles" is present add 2$ to price modify the price in the elements where the name is exactly the same as the array member <breakfast_menu> <food> <name itemid="11">Belgian Waffles</name> <price>5.95</price> <description>Two of our famous Belgian Waffles with plenty of real maple syrup</description> <calories>650</calories> </food> <food> <name itemid="21">Strawberry Belgian Waffles</name> <price>7.95</price> <description>Light Belgian waffles covered with strawberries and whipped cream</description> <calories>900</calories> </food> <food> <name itemid="31">Berry-Berry Belgian Waffles</name> <price>8.95</price> <description>Light Belgian waffles covered with an assortment of fresh berries and whipped cream</description> <calories>900</calories> </food> <food> <name itemid="41">French Toast</name> <price>4.50</price> <description>Thick slices made from our homemade sourdough bread</description> <calories>600</calories> </food> </breakfast_menu> import xml.etree.ElementTree as ET mytree = ET.parse('t.xml') myroot = mytree.getroot() print(myroot[0][1]) print(myroot[0].food['name'].value) for names in myroot.iter('name'): names.text = names.text + ' <br> testdrive' A: Try it this way: waffles = ["Strawberry Belgian Waffles", "Belgian Waffles"] for food in myroot.findall('.//food'): item = food.find('./name').text if item in waffles: cur_price = food.find('.//price').text #next one is a little tricky - the price is a string on which you need #to perform a math function; so it needs to be converted to float and #then back to string for insertion in the element food.find('.//price').text= str(float(cur_price)+2) print(ET.tostring(myroot).decode()) Output should be what you're looking gor.
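One detail the snippet above leaves implicit: ElementTree only edits the in-memory tree, so nothing changes on disk until the tree is written back, and str(float(...) + 2) can occasionally produce long decimal expansions. A small variation of the same loop that formats the price and saves the file (file name as in the question):

for food in myroot.findall('.//food'):
    if food.find('./name').text in waffles:
        price_el = food.find('.//price')
        price_el.text = f"{float(price_el.text) + 2:.2f}"   # keep two decimals

mytree.write('t.xml')   # persist the changes; add encoding='utf-8' if non-ASCII text is involved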
How to modify values in xml using python?
I am trying to modify the values of xml files using python. Here is a sample xml file I wrote a code for adding the text to the name with iteration. If given a set of inputs in an array, how can we check the values name example:"Belgian Waffles" and add 2$ more price to it ? example : array=[Strawberry Belgian Waffles,Belgian Waffles] If "Belgian Waffles" is present add 2$ to price modify the price in the elements where the name is exactly the same as the array member <breakfast_menu> <food> <name itemid="11">Belgian Waffles</name> <price>5.95</price> <description>Two of our famous Belgian Waffles with plenty of real maple syrup</description> <calories>650</calories> </food> <food> <name itemid="21">Strawberry Belgian Waffles</name> <price>7.95</price> <description>Light Belgian waffles covered with strawberries and whipped cream</description> <calories>900</calories> </food> <food> <name itemid="31">Berry-Berry Belgian Waffles</name> <price>8.95</price> <description>Light Belgian waffles covered with an assortment of fresh berries and whipped cream</description> <calories>900</calories> </food> <food> <name itemid="41">French Toast</name> <price>4.50</price> <description>Thick slices made from our homemade sourdough bread</description> <calories>600</calories> </food> </breakfast_menu> import xml.etree.ElementTree as ET mytree = ET.parse('t.xml') myroot = mytree.getroot() print(myroot[0][1]) print(myroot[0].food['name'].value) for names in myroot.iter('name'): names.text = names.text + ' <br> testdrive'
[ "Try it this way:\nwaffles = [\"Strawberry Belgian Waffles\", \"Belgian Waffles\"]\n\nfor food in myroot.findall('.//food'):\n item = food.find('./name').text\n if item in waffles:\n cur_price = food.find('.//price').text\n\n #next one is a little tricky - the price is a string on which you need\n #to perform a math function; so it needs to be converted to float and \n #then back to string for insertion in the element\n\n food.find('.//price').text= str(float(cur_price)+2)\nprint(ET.tostring(myroot).decode())\n\nOutput should be what you're looking gor.\n" ]
[ 0 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0074532158_python_xml.txt
Q: Store indexes of a Series into an array My idea is to apply linear regression to draw a line on a time series dataset to approximate the direction it is evolving in (first I draw the line, then I calculate the slope and I see if my plot is increasing decreasing, or constant). For that, I relied on this code def estimate_coef(x, y): # number of observations/points n = np.size(x) # mean of x and y vector m_x = np.mean(x) m_y = np.mean(y) # calculating cross-deviation and deviation about x SS_xy = np.sum(y*x) - n*m_y*m_x SS_xx = np.sum(x*x) - n*m_x*m_x # calculating regression coefficients b_1 = SS_xy / SS_xx b_0 = m_y - b_1*m_x return (b_0, b_1) def plot_regression_line(x, y, b): # plotting the actual points as scatter plot plt.scatter(x, y, color = "m", marker = "o", s = 30) # predicted response vector y_pred = b[0] + b[1]*x # plotting the regression line plt.plot(x, y_pred, color = "g") # putting labels plt.xlabel('x') plt.ylabel('y') # function to show plot plt.show() For that I need an X and Y array. The data I extracted had an index in the format of a date "Y-M-D". enter image description here As you may know for linear regression it does not make sense to have the "date" as index, hence I used the A.reset_index() to get numeric indexes enter image description here Now that I got my data I need to extract the indexes to put them in an array "X" and the data to be plotted in an array "Y". Therefore my question would be how to extract these new indexes and put them in the array X. A: You can do: x=[i + 1 for i in A.index] # to make data x starts with 1 instead of 0 y=A['lift'] And you apply your functions on those x and y
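Since reset_index() leaves A with a plain 0..n-1 RangeIndex, the x values can also be taken straight from the frame; a short sketch reusing estimate_coef and plot_regression_line from the question (the column name 'lift' is taken from the answer and may differ in the real data):

import numpy as np

x = A.index.to_numpy()        # 0, 1, 2, ... after A.reset_index()
y = A['lift'].to_numpy()

b = estimate_coef(x, y)
plot_regression_line(x, y, b)
print('slope:', b[1])         # the sign of the slope gives the trend direction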
Store indexes of a Series into an array
My idea is to apply linear regression to draw a line on a time series dataset to approximate the direction it is evolving in (first I draw the line, then I calculate the slope, and I see whether my plot is increasing, decreasing, or constant). For that, I relied on this code:

def estimate_coef(x, y):
    # number of observations/points
    n = np.size(x)
    # mean of x and y vector
    m_x = np.mean(x)
    m_y = np.mean(y)
    # calculating cross-deviation and deviation about x
    SS_xy = np.sum(y*x) - n*m_y*m_x
    SS_xx = np.sum(x*x) - n*m_x*m_x
    # calculating regression coefficients
    b_1 = SS_xy / SS_xx
    b_0 = m_y - b_1*m_x
    return (b_0, b_1)

def plot_regression_line(x, y, b):
    # plotting the actual points as scatter plot
    plt.scatter(x, y, color = "m", marker = "o", s = 30)
    # predicted response vector
    y_pred = b[0] + b[1]*x
    # plotting the regression line
    plt.plot(x, y_pred, color = "g")
    # putting labels
    plt.xlabel('x')
    plt.ylabel('y')
    # function to show plot
    plt.show()

For that I need an X and a Y array. The data I extracted has an index in the date format "Y-M-D" [screenshot of the series with its date index]. As you may know, for linear regression it does not make sense to have the date as index, hence I used A.reset_index() to get numeric indexes [screenshot of the series after reset_index]. Now that I have my data, I need to extract the indexes and put them in an array X, and the data to be plotted in an array Y. Therefore my question is how to extract these new indexes and put them in the array X.
[ "You can do:\nx=[i + 1 for i in A.index] # to make data x starts with 1 instead of 0\ny=A['lift']\n\nAnd you apply your functions on those x and y\n" ]
[ 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074532795_numpy_python.txt
Q: Error when installing prophet package in python I am trying to install the prophet package in python, but it gives the following error. Can you please help? It is required for the darts package. Actually, the main goal is to install the darts package but it gives an error when it comes to installing prophet as a sub-package. So, when I try installing prophet separately, I cannot. In Anaconda prompt in windows, I do: pip install prophet The error is: Using cached https://files.pythonhosted.org/packages/f0/fa/c382f0ac5abe9f0a4df9d874a5e8843db035fe2f071b5c00a545b1e3c10b/prophet-1.0.1.tar.gz Requirement already satisfied: Cython>=0.22 in c:\programdata\anaconda3\lib\site-packages (from prophet) (0.29.12) Requirement already satisfied: cmdstanpy==0.9.68 in c:\programdata\anaconda3\lib\site-packages (from prophet) (0.9.68) Requirement already satisfied: pystan~=2.19.1.1 in c:\programdata\anaconda3\lib\site-packages (from prophet) (2.19.1.1) Requirement already satisfied: numpy>=1.15.4 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from prophet) (1.21.6) Requirement already satisfied: pandas>=1.0.4 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from prophet) (1.3.5) Requirement already satisfied: matplotlib>=2.0.0 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from prophet) (3.5.1) Requirement already satisfied: LunarCalendar>=0.0.9 in c:\programdata\anaconda3\lib\site-packages (from prophet) (0.0.9) Requirement already satisfied: convertdate>=2.1.2 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from prophet) (2.4.0) Requirement already satisfied: holidays>=0.10.2 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from prophet) (0.13) Requirement already satisfied: setuptools-git>=1.2 in c:\programdata\anaconda3\lib\site-packages (from prophet) (1.2) Requirement already satisfied: python-dateutil>=2.8.0 in c:\programdata\anaconda3\lib\site-packages (from prophet) (2.8.0) Requirement already satisfied: tqdm>=4.36.1 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from prophet) (4.64.0) Requirement already satisfied: ujson in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from cmdstanpy==0.9.68->prophet) (5.2.0) Requirement already satisfied: pytz>=2017.3 in c:\programdata\anaconda3\lib\site-packages (from pandas>=1.0.4->prophet) (2019.1) Requirement already satisfied: fonttools>=4.22.0 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from matplotlib>=2.0.0->prophet) (4.33.2) Requirement already satisfied: packaging>=20.0 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from matplotlib>=2.0.0->prophet) (21.3) Requirement already satisfied: cycler>=0.10 in c:\programdata\anaconda3\lib\site-packages (from matplotlib>=2.0.0->prophet) (0.10.0) Requirement already satisfied: kiwisolver>=1.0.1 in c:\programdata\anaconda3\lib\site-packages (from matplotlib>=2.0.0->prophet) (1.1.0) Requirement already satisfied: pillow>=6.2.0 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from matplotlib>=2.0.0->prophet) (9.1.0) Requirement already satisfied: pyparsing>=2.2.1 in c:\programdata\anaconda3\lib\site-packages (from matplotlib>=2.0.0->prophet) (2.4.0) Requirement already satisfied: ephem>=3.7.5.3 in c:\programdata\anaconda3\lib\site-packages (from LunarCalendar>=0.0.9->prophet) (4.1.3) Requirement already satisfied: pymeeus<=1,>=0.3.13 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages 
(from convertdate>=2.1.2->prophet) (0.5.11) Requirement already satisfied: hijri-converter in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from holidays>=0.10.2->prophet) (2.2.3) Requirement already satisfied: korean-lunar-calendar in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from holidays>=0.10.2->prophet) (0.2.1) Requirement already satisfied: six>=1.5 in c:\programdata\anaconda3\lib\site-packages (from python-dateutil>=2.8.0->prophet) (1.12.0) Requirement already satisfied: colorama; platform_system == "Windows" in c:\programdata\anaconda3\lib\site-packages (from tqdm>=4.36.1->prophet) (0.4.1) Requirement already satisfied: setuptools in c:\programdata\anaconda3\lib\site-packages (from kiwisolver>=1.0.1->matplotlib>=2.0.0->prophet) (41.0.1) Building wheels for collected packages: prophet Building wheel for prophet (setup.py) ... error ERROR: Complete output from command 'C:\ProgramData\Anaconda3\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\REDD2~1.SOL\\AppData\\Local\\Temp\\1\\pip-install-lmtfq4_i\\prophet\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\REDD2~1.SOL\AppData\Local\Temp\1\pip-wheel-8b3_ugik' --python-tag cp37: ERROR: running bdist_wheel running build running build_py creating build creating build\lib creating build\lib\prophet creating build\lib\prophet\stan_model C:\Users\r.soltani\AppData\Roaming\Python\Python37\site-packages\pandas\compat\_optional.py:138: UserWarning: Pandas requires version '2.7.0' or newer of 'numexpr' (version '2.6.9' currently installed). warnings.warn(msg, UserWarning) INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_f5236004a3fd5b8429270d00efcc0cf9 NOW. WARNING:pystan:MSVC compiler is not supported stanfit4anon_model_f5236004a3fd5b8429270d00efcc0cf9_8617278733964175527.cpp C:\Users\r.soltani\AppData\Roaming\Python\Python37\site-packages\numpy\core\include\numpy\npy_1_7_deprecated_api.h(14) : Warning Msg: Using deprecated NumPy API, disable it with #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION C:\ProgramData\Anaconda3\lib\site-packages\pystan\stan\lib\stan_math\stan/math/prim/mat/meta/seq_view.hpp(145): warning C4267: 'return': conversion from 'size_t' to 'int', possible loss of data C:\ProgramData\Anaconda3\lib\site-packages\pystan\stan\lib\stan_math\stan/math/prim/mat/fun/multiply_lower_tri_self_transpose.hpp(19): warning C4244: 'initializing': conversion from 'Eigen::EigenBase<Derived>::Index' to 'int', possible loss of data with [ Derived=Eigen::Matrix<double,-1,-1,0,-1,-1> ] C:\ProgramData\Anaconda3\lib\site-packages\pystan\stan\lib\stan_math\stan/math/prim/mat/fun/multiply_lower_tri_self_transpose.hpp(27): warning C4244: 'initializing': conversion from 'Eigen::EigenBase<Derived>::Index' to 'int', possible loss of data with [ Derived=Eigen::Matrix<double,-1,-1,0,-1,-1> ]``` I cannot share the whole error because of the character limit here. A: This is used for installing prophet from conda -> conda install -c conda-forge prophet using pip -> pip install prophet or install pystan and fbprophet -> pip install pystan~=2.14 pip install fbprophet
Error when installing prophet package in python
I am trying to install the prophet package in python, but it gives the following error. Can you please help? It is required for the darts package. Actually, the main goal is to install the darts package but it gives an error when it comes to installing prophet as a sub-package. So, when I try installing prophet separately, I cannot. In Anaconda prompt in windows, I do: pip install prophet The error is: Using cached https://files.pythonhosted.org/packages/f0/fa/c382f0ac5abe9f0a4df9d874a5e8843db035fe2f071b5c00a545b1e3c10b/prophet-1.0.1.tar.gz Requirement already satisfied: Cython>=0.22 in c:\programdata\anaconda3\lib\site-packages (from prophet) (0.29.12) Requirement already satisfied: cmdstanpy==0.9.68 in c:\programdata\anaconda3\lib\site-packages (from prophet) (0.9.68) Requirement already satisfied: pystan~=2.19.1.1 in c:\programdata\anaconda3\lib\site-packages (from prophet) (2.19.1.1) Requirement already satisfied: numpy>=1.15.4 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from prophet) (1.21.6) Requirement already satisfied: pandas>=1.0.4 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from prophet) (1.3.5) Requirement already satisfied: matplotlib>=2.0.0 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from prophet) (3.5.1) Requirement already satisfied: LunarCalendar>=0.0.9 in c:\programdata\anaconda3\lib\site-packages (from prophet) (0.0.9) Requirement already satisfied: convertdate>=2.1.2 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from prophet) (2.4.0) Requirement already satisfied: holidays>=0.10.2 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from prophet) (0.13) Requirement already satisfied: setuptools-git>=1.2 in c:\programdata\anaconda3\lib\site-packages (from prophet) (1.2) Requirement already satisfied: python-dateutil>=2.8.0 in c:\programdata\anaconda3\lib\site-packages (from prophet) (2.8.0) Requirement already satisfied: tqdm>=4.36.1 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from prophet) (4.64.0) Requirement already satisfied: ujson in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from cmdstanpy==0.9.68->prophet) (5.2.0) Requirement already satisfied: pytz>=2017.3 in c:\programdata\anaconda3\lib\site-packages (from pandas>=1.0.4->prophet) (2019.1) Requirement already satisfied: fonttools>=4.22.0 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from matplotlib>=2.0.0->prophet) (4.33.2) Requirement already satisfied: packaging>=20.0 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from matplotlib>=2.0.0->prophet) (21.3) Requirement already satisfied: cycler>=0.10 in c:\programdata\anaconda3\lib\site-packages (from matplotlib>=2.0.0->prophet) (0.10.0) Requirement already satisfied: kiwisolver>=1.0.1 in c:\programdata\anaconda3\lib\site-packages (from matplotlib>=2.0.0->prophet) (1.1.0) Requirement already satisfied: pillow>=6.2.0 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from matplotlib>=2.0.0->prophet) (9.1.0) Requirement already satisfied: pyparsing>=2.2.1 in c:\programdata\anaconda3\lib\site-packages (from matplotlib>=2.0.0->prophet) (2.4.0) Requirement already satisfied: ephem>=3.7.5.3 in c:\programdata\anaconda3\lib\site-packages (from LunarCalendar>=0.0.9->prophet) (4.1.3) Requirement already satisfied: pymeeus<=1,>=0.3.13 in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from convertdate>=2.1.2->prophet) (0.5.11) 
Requirement already satisfied: hijri-converter in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from holidays>=0.10.2->prophet) (2.2.3) Requirement already satisfied: korean-lunar-calendar in c:\users\r.soltani\appdata\roaming\python\python37\site-packages (from holidays>=0.10.2->prophet) (0.2.1) Requirement already satisfied: six>=1.5 in c:\programdata\anaconda3\lib\site-packages (from python-dateutil>=2.8.0->prophet) (1.12.0) Requirement already satisfied: colorama; platform_system == "Windows" in c:\programdata\anaconda3\lib\site-packages (from tqdm>=4.36.1->prophet) (0.4.1) Requirement already satisfied: setuptools in c:\programdata\anaconda3\lib\site-packages (from kiwisolver>=1.0.1->matplotlib>=2.0.0->prophet) (41.0.1) Building wheels for collected packages: prophet Building wheel for prophet (setup.py) ... error ERROR: Complete output from command 'C:\ProgramData\Anaconda3\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\REDD2~1.SOL\\AppData\\Local\\Temp\\1\\pip-install-lmtfq4_i\\prophet\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\REDD2~1.SOL\AppData\Local\Temp\1\pip-wheel-8b3_ugik' --python-tag cp37: ERROR: running bdist_wheel running build running build_py creating build creating build\lib creating build\lib\prophet creating build\lib\prophet\stan_model C:\Users\r.soltani\AppData\Roaming\Python\Python37\site-packages\pandas\compat\_optional.py:138: UserWarning: Pandas requires version '2.7.0' or newer of 'numexpr' (version '2.6.9' currently installed). warnings.warn(msg, UserWarning) INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_f5236004a3fd5b8429270d00efcc0cf9 NOW. WARNING:pystan:MSVC compiler is not supported stanfit4anon_model_f5236004a3fd5b8429270d00efcc0cf9_8617278733964175527.cpp C:\Users\r.soltani\AppData\Roaming\Python\Python37\site-packages\numpy\core\include\numpy\npy_1_7_deprecated_api.h(14) : Warning Msg: Using deprecated NumPy API, disable it with #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION C:\ProgramData\Anaconda3\lib\site-packages\pystan\stan\lib\stan_math\stan/math/prim/mat/meta/seq_view.hpp(145): warning C4267: 'return': conversion from 'size_t' to 'int', possible loss of data C:\ProgramData\Anaconda3\lib\site-packages\pystan\stan\lib\stan_math\stan/math/prim/mat/fun/multiply_lower_tri_self_transpose.hpp(19): warning C4244: 'initializing': conversion from 'Eigen::EigenBase<Derived>::Index' to 'int', possible loss of data with [ Derived=Eigen::Matrix<double,-1,-1,0,-1,-1> ] C:\ProgramData\Anaconda3\lib\site-packages\pystan\stan\lib\stan_math\stan/math/prim/mat/fun/multiply_lower_tri_self_transpose.hpp(27): warning C4244: 'initializing': conversion from 'Eigen::EigenBase<Derived>::Index' to 'int', possible loss of data with [ Derived=Eigen::Matrix<double,-1,-1,0,-1,-1> ]``` I cannot share the whole error because of the character limit here.
[ "This is used for installing prophet from conda ->\n\nconda install -c conda-forge prophet\n\nusing pip ->\n\npip install prophet\n\nor\ninstall pystan and fbprophet ->\npip install pystan~=2.14\npip install fbprophet\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0072132548_python_python_3.x.txt
Q: How create a string by another one? | Selenium Python First of all, I would like to apologize for my question and for my English, it's my first time here on the forum and I'm noob in python, I'm still learning. So, in my code, I imported a module that contains some strings for example: users.py: user1 = Jeremy user2 = John user3 = Alana user4 = Bella ... and in my code, I would like, with each loop, the string "next_user" change according to each repetition... But for some reason, the code I wrote results in a value, and not in the string itself. from users import * next_user = str("user" + str(start_user_num)) print("The user is"+str(next_user)) result: The user is user1 and i want it to be "Jeremy" Sorry if you guys can't understand, I can try to explain better. o___o' A: Firstly your users.py since it is not a python script should be changed to users.txt So the contents of the text file is: user1 = Jeremy user2 = John user3 = Alana user4 = Bella ... Then you can read everything within the text file, convert it into an array and call upon elements within the array, here is the code for that: with open('users.txt', 'r') as fp: data = fp.readlines() users = [] for element in data: element = element.replace('\n', '') split = element.split(' = ') users.append(split[1]) for i in range(len(users)): next_user = str(users[i]) print(f'The user is {next_user}') Which creates the output when run: The user is Jeremy The user is John The user is Alana I hope this helps
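If users.py has to stay a Python module rather than the text file used in the answer above, the lookup the question describes is usually done with getattr or by collecting the values into a list, instead of building a variable name as a string; a small sketch, assuming the module really defines user1, user2, ... as strings and that start_user_num is the loop counter:

import users

start_user_num = 1                                     # illustrative counter
next_user = getattr(users, f'user{start_user_num}')    # 'Jeremy' for user1
print('The user is ' + next_user)

# or gather them once and iterate normally:
all_users = [getattr(users, name) for name in dir(users) if name.startswith('user')]
for next_user in all_users:
    print('The user is ' + next_user)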
How create a string by another one? | Selenium Python
First of all, I would like to apologize for my question and for my English; it's my first time here on the forum and I'm still learning Python. In my code I imported a module that contains some strings, for example:

users.py:
user1 = "Jeremy"
user2 = "John"
user3 = "Alana"
user4 = "Bella"
...

In my code I would like the string next_user to change with each loop iteration. But for some reason, the code I wrote results in the constructed variable name, not in the string it refers to:

from users import *
next_user = str("user" + str(start_user_num))
print("The user is " + str(next_user))

Result: The user is user1, and I want it to be "Jeremy". Sorry if this isn't clear; I can try to explain better.
[ "Firstly your users.py since it is not a python script should be changed to users.txt\nSo the contents of the text file is:\n\nuser1 = Jeremy\nuser2 = John\nuser3 = Alana\nuser4 = Bella\n...\n\nThen you can read everything within the text file, convert it into an array and call upon elements within the array, here is the code for that:\nwith open('users.txt', 'r') as fp:\ndata = fp.readlines()\n\nusers = []\n\nfor element in data:\n element = element.replace('\\n', '')\n split = element.split(' = ')\n users.append(split[1])\n\n\nfor i in range(len(users)):\n next_user = str(users[i])\n print(f'The user is {next_user}')\n\nWhich creates the output when run:\n\nThe user is Jeremy\nThe user is John\nThe user is Alana\n\nI hope this helps\n" ]
[ 0 ]
[]
[]
[ "python", "selenium", "string" ]
stackoverflow_0074512386_python_selenium_string.txt
Q: convert series of dates to int number of dates I have a pandas Series that is of the following format dates = [Nov 2022, Dec 2022, Jan 2023, Feb 2023 ..] I want to create a dataframe that takes these values and has the number of days. I have to consider of course the case if it is a leap year I have created a small function that splits the dates into 2 dataframes and 2 lists of months depending if they have 30 or 31 days like the following month = [Nov, Dec, Jan, Feb ..] and year = [2022, 2022, 2023, 2023 ..] and then use the isin function in a sense if the month is in listA then insert 31 days etc. I also check for the leap years. However, I was wondering if there is a way to automate this whole proces with the pd.datetime A: If you want the number of days in this month: dates = pd.Series(['Nov 2022', 'Dec 2022', 'Jan 2023', 'Feb 2023']) out = (pd.to_datetime(dates, format='%b %Y') .dt.days_in_month ) # Or out = (pd.to_datetime(dates, format='%b %Y') .add(pd.offsets.MonthEnd(0)) .dt.day ) Output: 0 30 1 31 2 31 3 28 dtype: int64 previous interpretation If I understand correctly, you want the day of year? Assuming: dates = pd.Series(['Nov 2022', 'Dec 2022', 'Jan 2023', 'Feb 2023']) You can use: pd.to_datetime(dates, format='%b %Y').dt.dayofyear NB. The reference is the start of each month. Output: 0 305 1 335 2 1 3 32 dtype: int64
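The leap-year case the question worries about is handled automatically by days_in_month from the answer; a quick check:

import pandas as pd

dates = pd.Series(['Feb 2023', 'Feb 2024'])
days = pd.to_datetime(dates, format='%b %Y').dt.days_in_month
print(days.tolist())   # [28, 29] because 2024 is a leap year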
convert series of dates to int number of dates
I have a pandas Series in the following format:

dates = [Nov 2022, Dec 2022, Jan 2023, Feb 2023, ...]

I want to create a dataframe that takes these values and holds the number of days in each month, and of course I have to consider the case of a leap year.
I have written a small function that splits the dates into 2 dataframes and 2 lists of months, depending on whether they have 30 or 31 days, like the following:

month = [Nov, Dec, Jan, Feb, ...] and year = [2022, 2022, 2023, 2023, ...]

and then I use the isin function, in the sense that if the month is in listA I insert 31 days, etc. I also check for the leap years. However, I was wondering if there is a way to automate this whole process with pd.datetime.
[ "If you want the number of days in this month:\ndates = pd.Series(['Nov 2022', 'Dec 2022', 'Jan 2023', 'Feb 2023'])\n\nout = (pd.to_datetime(dates, format='%b %Y')\n .dt.days_in_month\n )\n\n# Or\n\nout = (pd.to_datetime(dates, format='%b %Y')\n .add(pd.offsets.MonthEnd(0))\n .dt.day\n )\n\nOutput:\n0 30\n1 31\n2 31\n3 28\ndtype: int64\n\nprevious interpretation\nIf I understand correctly, you want the day of year?\nAssuming:\ndates = pd.Series(['Nov 2022', 'Dec 2022', 'Jan 2023', 'Feb 2023'])\n\nYou can use:\npd.to_datetime(dates, format='%b %Y').dt.dayofyear\n\nNB. The reference is the start of each month.\nOutput:\n0 305\n1 335\n2 1\n3 32\ndtype: int64\n\n" ]
[ 0 ]
[]
[]
[ "datetime", "pandas", "python" ]
stackoverflow_0074532851_datetime_pandas_python.txt
Q: How to change the y tick label in matplotlib The below code generates a scatter plot. #KNNClassifier_weighted import numpy as np import matplotlib.pyplot as plt plt.figure(figsize=(100, 30)) xy = np.array([ (x, y) for x, lst in df_param.items() for sublst in lst for y in sublst ]) plt.scatter(*xy.T, s=500, edgecolors='black', linewidth=3) plt.title("KNNClassifier: weighted",fontsize=80) # Setting the x and y labels plt.xlabel("Iteration",fontsize=80) plt.ylabel("value",fontsize=80) #labels=["True", "False"] # Setting the number of ticks plt.xticks(np.arange(0, len(df_param)+1, 10),fontsize=34, rotation=90) plt.yticks(fontsize=45) plt.xlim(xmin=0) plt.show() A sample of the dataframe that is used to generate the plot is {0: [[True], [False], [True], [False], [False], [False]], 1: [[False], [True], [False], [False], [False]], 2: [[False], [True], [False], [False]], 3: [[False], [False], [False]], 4: [[False], [False]], 5: [[False]], 6: [], 7: [], 8: [[False]], 9: [], 10: []} When I try putting the labels in an array and set it as yticks. labels=["True", "False"] # Setting the number of ticks plt.xticks(np.arange(0, len(df_param)+1, 10),fontsize=34, rotation=90) plt.yticks(labels, fontsize=45) I get the conversion error. ConversionError: Failed to convert value(s) to axis units: ['True', 'False'] I want the values in the dataframe to be used as labels.
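For reference, the ConversionError comes from passing the label strings where matplotlib expects tick positions: plt.yticks takes the positions first and the labels second. Since the scatter's y values are booleans, they end up at 0 and 1 on the axis, so a working call would be along these lines (font size kept from the question):

plt.yticks([0, 1], ["False", "True"], fontsize=45)
plt.ylim(-0.5, 1.5)   # optional, keeps the two rows of points away from the plot edges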
How to change the y tick label in matplotlib
The below code generates a scatter plot. #KNNClassifier_weighted import numpy as np import matplotlib.pyplot as plt plt.figure(figsize=(100, 30)) xy = np.array([ (x, y) for x, lst in df_param.items() for sublst in lst for y in sublst ]) plt.scatter(*xy.T, s=500, edgecolors='black', linewidth=3) plt.title("KNNClassifier: weighted",fontsize=80) # Setting the x and y labels plt.xlabel("Iteration",fontsize=80) plt.ylabel("value",fontsize=80) #labels=["True", "False"] # Setting the number of ticks plt.xticks(np.arange(0, len(df_param)+1, 10),fontsize=34, rotation=90) plt.yticks(fontsize=45) plt.xlim(xmin=0) plt.show() A sample of the dataframe that is used to generate the plot is {0: [[True], [False], [True], [False], [False], [False]], 1: [[False], [True], [False], [False], [False]], 2: [[False], [True], [False], [False]], 3: [[False], [False], [False]], 4: [[False], [False]], 5: [[False]], 6: [], 7: [], 8: [[False]], 9: [], 10: []} When I try putting the labels in an array and set it as yticks. labels=["True", "False"] # Setting the number of ticks plt.xticks(np.arange(0, len(df_param)+1, 10),fontsize=34, rotation=90) plt.yticks(labels, fontsize=45) I get the conversion error. ConversionError: Failed to convert value(s) to axis units: ['True', 'False'] I want the values in the dataframe to be used as labels.
[]
[]
[ "I haven't tried this myself but something like this may help:\nplt.yticks([1.0, 0.0], labels, fontsize=45)\n\n" ]
[ -1 ]
[ "matplotlib", "python", "visualization" ]
stackoverflow_0074531458_matplotlib_python_visualization.txt
Q: Delete specific parts in a txt file I am working on a txt file which and in between the data that I need there are also information that I want to delete. For instance the txt file is built like this: |important|data|that|I|need|to|keep| ------------------------------- --------------- ---------------- info|I|dont|need| ---------------- --------------- ------------------------------ |important|data|that|I|need|to|keep |I|want|to|keep|this|info| ------------------------------- --------------- ---------------- info|I|dont|need| ---------------- --------------- ------------------------------ how can I delete everything between the dashes? When I read the file I would like to have just something like this: |important|data|that|I|need|to|keep| |important|data|that|I|need|to|keep |I|want|to|keep|this|info| update: is it possible to just delete everything in between the dashes? the format of the info between them can be different so I would like to find a one fits all solution A: Type str comes with a feature startswith to check if the string starts with a specific user-defined character. More information can be found in the following documentation - python startswith with open("<file_name>.txt", "r") as f: for line in f: if line.startswith("|"): print(line) Result: |important|data|that|I|need|to|keep| |important|data|that|I|need|to|keep |I|want|to|keep|this|info| A: Read each line to list and apply comprehnsion accordingly as follows. Add keywords you want to keep inside List B and play with it. with open('file.txt') as f: lines = [line.rstrip('\n') for line in f] B = ['important','want'] mask = [word for word in lines if any(want in word for want in B)] mask = [w.replace('-', '') for w in mask] mask ='\n'.join(mask) print(mask) output # |important|data|that|I|need|to|keep| |important|data|that|I|need|to|keep |I|want|to|keep|this|info|
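Since the update says the format of the unwanted lines can vary, keying on their content (as the answers above do) can be fragile. A content-agnostic alternative is to key on the dash separators themselves: drop every dash-only line, and drop any line that sits directly between two dash-only lines, which is true of every unwanted line in the example. A sketch, with an illustrative file name:

def is_dashes(line):
    s = line.strip()
    return bool(s) and set(s) == {'-'}

with open('file.txt') as f:
    lines = f.read().splitlines()

kept = []
for i, line in enumerate(lines):
    if is_dashes(line):
        continue                                  # the separator lines themselves
    prev_dash = i > 0 and is_dashes(lines[i - 1])
    next_dash = i + 1 < len(lines) and is_dashes(lines[i + 1])
    if prev_dash and next_dash:
        continue                                  # a line sandwiched between dash lines
    kept.append(line)

print('\n'.join(kept))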
Delete specific parts in a txt file
I am working on a txt file, and in between the data that I need there is also information that I want to delete. For instance, the txt file is built like this:
|important|data|that|I|need|to|keep|
-------------------------------
---------------
----------------
info|I|dont|need|
----------------
---------------
------------------------------
|important|data|that|I|need|to|keep
|I|want|to|keep|this|info|
-------------------------------
---------------
----------------
info|I|dont|need|
----------------
---------------
------------------------------
How can I delete everything between the dashes? When I read the file I would like to have just something like this:
|important|data|that|I|need|to|keep|
|important|data|that|I|need|to|keep
|I|want|to|keep|this|info|
Update: is it possible to just delete everything in between the dashes? The format of the info between them can be different, so I would like to find a one-fits-all solution.
[ "Type str comes with a feature startswith to check if the string starts with a specific user-defined character.\nMore information can be found in the following documentation - python startswith\nwith open(\"<file_name>.txt\", \"r\") as f:\n for line in f:\n if line.startswith(\"|\"):\n print(line)\n\nResult:\n|important|data|that|I|need|to|keep|\n|important|data|that|I|need|to|keep\n|I|want|to|keep|this|info|\n\n", "Read each line to list and apply comprehnsion accordingly as follows. Add keywords you want to keep inside List B and play with it.\nwith open('file.txt') as f:\n lines = [line.rstrip('\\n') for line in f]\n\n\nB = ['important','want']\n\nmask = [word for word in lines if any(want in word for want in B)]\nmask = [w.replace('-', '') for w in mask]\nmask ='\\n'.join(mask)\nprint(mask)\n\noutput #\n|important|data|that|I|need|to|keep|\n|important|data|that|I|need|to|keep\n|I|want|to|keep|this|info|\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "split", "txt" ]
stackoverflow_0074532678_python_split_txt.txt
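For the "one fits all" update in the question above, here is a sketch that literally removes everything between pairs of dash-only lines, assuming the unwanted blocks always start and end with a line made up only of dashes and that real data lines never sit between two such lines. "file.txt" is a placeholder name.

import re

with open("file.txt", encoding="utf-8") as f:
    text = f.read()

# remove each span that starts and ends with a dashes-only line,
# together with whatever sits between those two lines
cleaned = re.sub(r"(?m)^-+\n(?:.*\n)*?-+\n", "", text)
# sweep up any stray separator line that had no partner
cleaned = re.sub(r"(?m)^-+$\n?", "", cleaned)
print(cleaned)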
Q: Why is declaring `size_x` and `size_y` different from delcaring both in `size` in kivy? Why do these two blocks yield different results in kivy? size size: [50,50] size_x and size_y size_x: 50 size_y: 50 Example For example, the following code does not render the same looking app size Using just size has more padding around the label #!/usr/bin/env python3 from kivy.uix.button import Button from kivy.lang import Builder from kivy.app import App KV = """ StackLayout: orientation: 'lr-tb' Label: text: "Hello" size: [50,50] size_hint: None, None Label: text: "World" size: self.texture_size size_hint: None, None """ class MyApp(App): def build(self): return Builder.load_string( KV ) size_x and size_y Using both size_x and size_y has less padding around the label #!/usr/bin/env python3 from kivy.uix.button import Button from kivy.lang import Builder from kivy.app import App KV = """ StackLayout: orientation: 'lr-tb' Label: text: "Hello" size_x: 50 size_y: 50 size_hint: None, None Label: text: "World" size: self.texture_size size_hint: None, None """ class MyApp(App): def build(self): return Builder.load_string( KV ) My understanding is that size is merely a python list of [size_x, size_y]. Because of this, I'd expect that declaring them separately would yield the same results. Why does declaring size as distinct size_x and size_y variables differ from declaring it just once with size? A: size_x and size_y doesn't exist at all in the Kivy's API, and in the Kivy widget attributes. size is a reference to a list of [width, height]. Theses code are identicals: size: 100, 100 size_hint: None, None is equal to: width: 100 height: 100 size_hint: None, None Your example have a different behavior because the default size of a unconstrained Widget is 100, 100. So assigning size_x and size_y are meaningless.
Why is declaring `size_x` and `size_y` different from declaring both in `size` in kivy?
Why do these two blocks yield different results in kivy? size size: [50,50] size_x and size_y size_x: 50 size_y: 50 Example For example, the following code does not render the same looking app size Using just size has more padding around the label #!/usr/bin/env python3 from kivy.uix.button import Button from kivy.lang import Builder from kivy.app import App KV = """ StackLayout: orientation: 'lr-tb' Label: text: "Hello" size: [50,50] size_hint: None, None Label: text: "World" size: self.texture_size size_hint: None, None """ class MyApp(App): def build(self): return Builder.load_string( KV ) size_x and size_y Using both size_x and size_y has less padding around the label #!/usr/bin/env python3 from kivy.uix.button import Button from kivy.lang import Builder from kivy.app import App KV = """ StackLayout: orientation: 'lr-tb' Label: text: "Hello" size_x: 50 size_y: 50 size_hint: None, None Label: text: "World" size: self.texture_size size_hint: None, None """ class MyApp(App): def build(self): return Builder.load_string( KV ) My understanding is that size is merely a python list of [size_x, size_y]. Because of this, I'd expect that declaring them separately would yield the same results. Why does declaring size as distinct size_x and size_y variables differ from declaring it just once with size?
[ "size_x and size_y doesn't exist at all in the Kivy's API, and in the Kivy widget attributes.\nsize is a reference to a list of [width, height]. Theses code are identicals:\nsize: 100, 100\nsize_hint: None, None\n\nis equal to:\nwidth: 100\nheight: 100\nsize_hint: None, None\n\nYour example have a different behavior because the default size of a unconstrained Widget is 100, 100. So assigning size_x and size_y are meaningless.\n" ]
[ 0 ]
[]
[]
[ "kivy", "kivy_language", "python", "stacklayout" ]
stackoverflow_0074482717_kivy_kivy_language_python_stacklayout.txt
Q: Better way to get multi lines input from console on python 3? I want to know how to handle multi lines input on python 3. When the input is 10 1 6 8 5 4 7 3 2 9 0 , and the code is numbers=[] n = int(input()) # Get n numbers for i in range(n): # Add n numbers in list numbers.append(int(input())) I cannot input the text by copy & paste whole text block, cause python console gave me ValueError. I have to type line by line using Enter Key on keyboard. My solution looks like below. sample_input=input().splitlines() n = int(sample_input[0]) # Get n numbers data=[] for i in range(1, n+1): # Add n numbers in list data.append(int(sample_input[i])) But I think this is messy code. What can be a better way for this one? A: One solution: sample_input=input().splitlines() sample_input_as_int = [int(value) for value in sample_input] n, *data = sample_input_as_int if len(data) != n: raise ValueError("wrong number of data provided") Do you really need to ask the user how many numbers they are going to enter? If they enter only the numbers, you can simplify the code: sample_input=input().splitlines() data = [int(value) for value in sample_input]
Better way to get multi-line input from the console in Python 3?
I want to know how to handle multi-line input on Python 3. When the input is
10
1
6
8
5
4
7
3
2
9
0
and the code is
numbers=[]
n = int(input()) # Get n numbers
for i in range(n): # Add n numbers in list
    numbers.append(int(input()))
I cannot input the text by copy & pasting the whole text block, because the Python console gives me a ValueError. I have to type it line by line using the Enter key.
My solution looks like below.
sample_input=input().splitlines()
n = int(sample_input[0]) # Get n numbers
data=[]
for i in range(1, n+1): # Add n numbers in list
    data.append(int(sample_input[i]))
But I think this is messy code. What can be a better way for this one?
[ "One solution:\nsample_input=input().splitlines()\nsample_input_as_int = [int(value) for value in sample_input]\nn, *data = sample_input_as_int\n\nif len(data) != n:\n raise ValueError(\"wrong number of data provided\")\n\nDo you really need to ask the user how many numbers they are going to enter?\nIf they enter only the numbers, you can simplify the code:\nsample_input=input().splitlines()\ndata = [int(value) for value in sample_input]\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074531288_python_python_3.x.txt
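If the numbers are pasted as one block, reading everything from sys.stdin avoids calling input() once per line; a sketch, assuming (as in the question) that the first value is the count of numbers that follow:

import sys

def read_numbers():
    tokens = sys.stdin.read().split()        # whole pasted block, line breaks or spaces both fine
    n, *values = (int(tok) for tok in tokens)
    if len(values) != n:
        raise ValueError(f"expected {n} numbers, got {len(values)}")
    return values

if __name__ == "__main__":
    print(read_numbers())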
Q: How to translate this small part of TensorFlow code into pyTorch? How to translate this small part of TensorFlow code into pyTorch? def transforms(x): # stft returns spectogram for each sample and each eeg # input X contains 3 signals, apply stft for each # and get array with shape [samples, num_of_eeg, time_stamps, freq] # change dims and return [samples, time_stamps, freq, num_of_eeg] spectrograms = tf.signal.stft(x, frame_length=32, frame_step=4, fft_length=64) spectrograms = tf.abs(spectrograms) return tf.einsum("...ijk->...jki", spectrograms) A: You can find the doc for STFT pytorch implementation here. The rest is fast-forward. It should be: def transforms(x: torch.Tensor) -> torch.Tensor: """Return Fourrier spectrogram.""" spectrograms = torch.stft(x, win_length=32, n_fft=4, hop_length=64) spectrograms = torch.abs(spectrograms) return torch.einsum("...ijk->...jki", spectrograms)
How to translate this small part of TensorFlow code into PyTorch?
How to translate this small part of TensorFlow code into PyTorch?
def transforms(x):
    # stft returns a spectrogram for each sample and each eeg
    # input X contains 3 signals, apply stft for each
    # and get an array with shape [samples, num_of_eeg, time_stamps, freq]
    # change dims and return [samples, time_stamps, freq, num_of_eeg]
    spectrograms = tf.signal.stft(x, frame_length=32, frame_step=4, fft_length=64)
    spectrograms = tf.abs(spectrograms)
    return tf.einsum("...ijk->...jki", spectrograms)
[ "You can find the doc for STFT pytorch implementation here. The rest is fast-forward. It should be:\ndef transforms(x: torch.Tensor) -> torch.Tensor:\n \"\"\"Return Fourrier spectrogram.\"\"\"\n spectrograms = torch.stft(x, win_length=32, n_fft=4, hop_length=64)\n spectrograms = torch.abs(spectrograms)\n return torch.einsum(\"...ijk->...jki\", spectrograms)\n\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "machine_learning", "python", "pytorch", "tensorflow" ]
stackoverflow_0074523337_deep_learning_machine_learning_python_pytorch_tensorflow.txt
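An alternative sketch that maps the TensorFlow arguments one-to-one onto torch.stft (frame_length → win_length, frame_step → hop_length, fft_length → n_fft). The input-shape handling and the window/padding choices below are assumptions to verify against the real data, not a drop-in replacement.

import torch

def transforms(x: torch.Tensor) -> torch.Tensor:
    # x is assumed to be (num_of_eeg, time); torch.stft only takes 1-D or 2-D input,
    # so fold any leading sample dimension into the first axis before calling.
    spec = torch.stft(
        x,
        n_fft=64,                      # tf fft_length
        hop_length=4,                  # tf frame_step
        win_length=32,                 # tf frame_length
        window=torch.hann_window(32),  # tf.signal.stft uses a Hann window by default
        center=False,                  # tf.signal.stft does not pad/centre frames
        return_complex=True,
    )                                  # -> (num_of_eeg, freq, time_stamps)
    spec = torch.abs(spec)
    # tf yields (..., eeg, time, freq) and the einsum moves it to (..., time, freq, eeg);
    # torch yields (eeg, freq, time), so the equivalent reordering is a permute
    return spec.permute(2, 1, 0)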
Q: appending xml node with subnodes of sam name I have key-value pairs: task_vars = '{"BNS_DT": "20220831","DWH_BD": "dwh_bd=2022-08-31","LAYR_CD": "STG"}' with which I would like to generate subnodes variable: tsk_var = ET.fromstring("""<variable><name></name><value></value></variable>""") and then append variables node in: payload = ET.fromstring("""<task-launch><variables></variables><name>{}</name></task-launch>""".format(task_name)) I loop though the key-values and try to append variables node: tsk_vars = json.loads(task_vars) for name, value in tsk_vars.items(): for nm in tsk_var.iter('name'): nm.text = name for vl in tsk_var.iter('value'): vl.text = value print('to be added') print(ET.tostring(tsk_var)) payload.find('variables').append(tsk_var) When I print out the subnode by which variables should be appended I got correct values on each iteration. But in final result I get correct number of subnodes but all are filled with last key value: <task-launch> <variables> <variable> <name>LAYR_CD</name> <value>STG</value> </variable> <variable> <name>LAYR_CD</name> <value>STG</value> </variable> <variable> <name>LAYR_CD</name> <value>STG</value> </variable> </variables> Please hot do I get correct values for every variable subnode? A: Try it this way: destination = payload.find('.//variables') # use f-strings to insert the values into the <variable> children for name, value in tsk_vars.items(): new_childs = ET.fromstring(f"""<variable><name>{name}</name><value>{value}</value></variable>""") destination.insert(0,new_childs) #the line below requires python 3.9+ ET.indent(payload, space=' ', level=0) print(ET.tostring(payload).decode()) Output: <task-launch> <variables> <variable> <name>LAYR_CD</name> <value>STG</value> </variable> <variable> <name>DWH_BD</name> <value>dwh_bd=2022-08-31</value> </variable> <variable> <name>BNS_DT</name> <value>20220831</value> </variable> </variables> </task-launch>
appending xml node with subnodes of the same name
I have key-value pairs: task_vars = '{"BNS_DT": "20220831","DWH_BD": "dwh_bd=2022-08-31","LAYR_CD": "STG"}' with which I would like to generate subnodes variable: tsk_var = ET.fromstring("""<variable><name></name><value></value></variable>""") and then append variables node in: payload = ET.fromstring("""<task-launch><variables></variables><name>{}</name></task-launch>""".format(task_name)) I loop though the key-values and try to append variables node: tsk_vars = json.loads(task_vars) for name, value in tsk_vars.items(): for nm in tsk_var.iter('name'): nm.text = name for vl in tsk_var.iter('value'): vl.text = value print('to be added') print(ET.tostring(tsk_var)) payload.find('variables').append(tsk_var) When I print out the subnode by which variables should be appended I got correct values on each iteration. But in final result I get correct number of subnodes but all are filled with last key value: <task-launch> <variables> <variable> <name>LAYR_CD</name> <value>STG</value> </variable> <variable> <name>LAYR_CD</name> <value>STG</value> </variable> <variable> <name>LAYR_CD</name> <value>STG</value> </variable> </variables> Please hot do I get correct values for every variable subnode?
[ "Try it this way:\ndestination = payload.find('.//variables')\n\n# use f-strings to insert the values into the <variable> children\nfor name, value in tsk_vars.items():\n new_childs = ET.fromstring(f\"\"\"<variable><name>{name}</name><value>{value}</value></variable>\"\"\")\n destination.insert(0,new_childs)\n\n#the line below requires python 3.9+\nET.indent(payload, space=' ', level=0)\n\nprint(ET.tostring(payload).decode())\n\nOutput:\n<task-launch>\n <variables>\n <variable>\n <name>LAYR_CD</name>\n <value>STG</value>\n </variable>\n <variable>\n <name>DWH_BD</name>\n <value>dwh_bd=2022-08-31</value>\n </variable>\n <variable>\n <name>BNS_DT</name>\n <value>20220831</value>\n </variable>\n </variables>\n</task-launch>\n\n" ]
[ 1 ]
[]
[]
[ "elementtree", "python", "xml" ]
stackoverflow_0074532082_elementtree_python_xml.txt
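The original loop failed because the same tsk_var element object was appended every iteration, so all three appends pointed at one shared node. Another way to avoid that, besides the f-string approach above, is to build a fresh element per pair with ET.SubElement; a sketch (task_name is a placeholder):

import json
import xml.etree.ElementTree as ET

task_vars = '{"BNS_DT": "20220831","DWH_BD": "dwh_bd=2022-08-31","LAYR_CD": "STG"}'
task_name = "demo-task"   # placeholder

payload = ET.fromstring(
    "<task-launch><variables></variables><name>{}</name></task-launch>".format(task_name)
)
variables = payload.find("variables")

for name, value in json.loads(task_vars).items():
    var = ET.SubElement(variables, "variable")      # new element on every iteration
    ET.SubElement(var, "name").text = name
    ET.SubElement(var, "value").text = value

print(ET.tostring(payload).decode())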
Q: python multiprocessing write data to the same list I want to write data to the same list via python multiprocessing, I do interprocess data sharing via mp.manager.list. The code is shown below, this is just a demo, I want to add the same numbers to the same list. However, counter can be increased, but grp remains the same. Where is the problem? import multiprocessing as mp import random import time import numpy as np class A: def __init__(self): self.raw = [random.randint(1, 4) for _ in range(100)] self.manager = mp.Manager() self.grp = self.manager.list([[1], [2], [3], [4]]) self.use_cpu_num = 2 self.counter = self.manager.Value('i', 0) def run(self): subsets = np.array_split(self.raw, self.use_cpu_num) subsets = [each.tolist() for each in subsets] process = [] for i in range(self.use_cpu_num): process.append(mp.Process(target=self.process, args=(subsets[i], ))) for each in process: each.start() for each in process: each.join() each.close() print(self.grp) def process(self, subset): for each in subset: for i in range(len(self.grp)): each_grp = self.grp[i] if each in each_grp: self.counter.set(self.counter.value + 1) self.grp[i].append(each) print(self.counter.value) if __name__ == '__main__': a = A() a.run() I tried using mp.Lock(), but that doesn't share data between different processes. A: Put it this way, self.grp is a managed object, any change on it using self.grp.append or self.grp[i] = x will be transferred to the manager process. The objects inside self.grp are not managed, any change to them will not be transferred to the manager, you only get a copy of them when you use self.grp[i]. In order to allow modifications to the lists inside self.grp to propagate, those lists must themselves be manager.list object, and nesting managed objects is not supported for versions of python below 3.6 self.grp = self.manager.list([self.manager.list(x) for x in ([1], [2], [3], [4])]) If you are only storing numbers you can pass multiprocessing.Array which can be wrapped as a numpy ndarray for convenience, but you cannot append to it, and must know the size beforehand. Edit: on windows you will get an error when trying to pickle the self.manager object, so i modified it out of the class in the example below. import multiprocessing as mp import random import time import numpy as np class A: def __init__(self): self.raw = [random.randint(1, 4) for _ in range(100)] self.grp = manager.list([manager.list(x) for x in ([1], [2], [3], [4])]) self.use_cpu_num = 2 self.counter = manager.Value('i', 0) def run(self): subsets = np.array_split(self.raw, self.use_cpu_num) subsets = [each.tolist() for each in subsets] process = [] for i in range(self.use_cpu_num): process.append(mp.Process(target=self.process, args=(subsets[i], ))) for each in process: each.start() for each in process: each.join() each.close() print([list(x) for x in self.grp]) def process(self, subset): for each in subset: for i in range(len(self.grp)): each_grp = self.grp[i] if each in each_grp: self.counter.set(self.counter.value + 1) self.grp[i].append(each) print(self.counter.value) if __name__ == '__main__': manager = mp.Manager() a = A() a.run()
python multiprocessing write data to the same list
I want to write data to the same list via python multiprocessing, I do interprocess data sharing via mp.manager.list. The code is shown below, this is just a demo, I want to add the same numbers to the same list. However, counter can be increased, but grp remains the same. Where is the problem? import multiprocessing as mp import random import time import numpy as np class A: def __init__(self): self.raw = [random.randint(1, 4) for _ in range(100)] self.manager = mp.Manager() self.grp = self.manager.list([[1], [2], [3], [4]]) self.use_cpu_num = 2 self.counter = self.manager.Value('i', 0) def run(self): subsets = np.array_split(self.raw, self.use_cpu_num) subsets = [each.tolist() for each in subsets] process = [] for i in range(self.use_cpu_num): process.append(mp.Process(target=self.process, args=(subsets[i], ))) for each in process: each.start() for each in process: each.join() each.close() print(self.grp) def process(self, subset): for each in subset: for i in range(len(self.grp)): each_grp = self.grp[i] if each in each_grp: self.counter.set(self.counter.value + 1) self.grp[i].append(each) print(self.counter.value) if __name__ == '__main__': a = A() a.run() I tried using mp.Lock(), but that doesn't share data between different processes.
[ "Put it this way, self.grp is a managed object, any change on it using self.grp.append or self.grp[i] = x will be transferred to the manager process.\nThe objects inside self.grp are not managed, any change to them will not be transferred to the manager, you only get a copy of them when you use self.grp[i].\nIn order to allow modifications to the lists inside self.grp to propagate, those lists must themselves be manager.list object, and nesting managed objects is not supported for versions of python below 3.6\nself.grp = self.manager.list([self.manager.list(x) for x in ([1], [2], [3], [4])])\n\nIf you are only storing numbers you can pass multiprocessing.Array which can be wrapped as a numpy ndarray for convenience, but you cannot append to it, and must know the size beforehand.\nEdit: on windows you will get an error when trying to pickle the self.manager object, so i modified it out of the class in the example below.\nimport multiprocessing as mp\nimport random\nimport time\n\nimport numpy as np\nclass A:\n def __init__(self):\n self.raw = [random.randint(1, 4) for _ in range(100)]\n self.grp = manager.list([manager.list(x) for x in ([1], [2], [3], [4])])\n self.use_cpu_num = 2\n self.counter = manager.Value('i', 0)\n\n def run(self):\n subsets = np.array_split(self.raw, self.use_cpu_num)\n subsets = [each.tolist() for each in subsets]\n process = []\n for i in range(self.use_cpu_num):\n process.append(mp.Process(target=self.process, args=(subsets[i], )))\n for each in process:\n each.start()\n for each in process:\n each.join()\n each.close()\n print([list(x) for x in self.grp])\n\n def process(self, subset):\n for each in subset:\n for i in range(len(self.grp)):\n each_grp = self.grp[i]\n if each in each_grp:\n self.counter.set(self.counter.value + 1)\n self.grp[i].append(each)\n print(self.counter.value)\n\n\n\nif __name__ == '__main__':\n manager = mp.Manager()\n a = A()\n a.run()\n\n" ]
[ 1 ]
[]
[]
[ "concurrency", "multiprocessing", "python" ]
stackoverflow_0074532214_concurrency_multiprocessing_python.txt
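A stripped-down illustration of the nesting point made in the answer, assuming Python 3.6+ (which supports nested manager proxies): only changes made through a managed proxy reach the manager process, so appends to an inner plain list are silently lost.

import multiprocessing as mp

def worker(grp):
    grp[0].append(99)   # inner list is itself a manager.list proxy: the append is shared
    grp[1].append(99)   # inner plain list: grp[1] hands back a local copy, the append is lost

if __name__ == "__main__":
    with mp.Manager() as manager:
        grp = manager.list([manager.list([1]), [2]])
        p = mp.Process(target=worker, args=(grp,))
        p.start()
        p.join()
        print(list(grp[0]), grp[1])   # -> [1, 99] [2]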
Q: Order the sub-lists in a nested list I have a series of lists, and I want to combine them in a larger nested list. However, I want to order them in a certain way. I want the first sub-list to be the one whose first element is zero. Then i want the second sub-list to be the one whose first element is the same as the LAST element of the previous list. For example, here's four sub-lists; [0, 3], [7, 0], [3, 8], [8, 7] I want to end up with this; [[0, 3], [3, 8], [8, 7], [7,0]] I can't for the life of me see the code logic in my head that would achieve this for me. Can anyone help please? UPDATE Solved! Many thanks to all who contributed! A: I think of your list as being a collection of links which are to be arranged into a chain. Here is an approach which uses @quanrama 's idea of a dictionary keyed by the first element of that link: links = [[0, 3], [7, 0], [3, 8], [8, 7]] d = {link[0]:link for link in links} chain = [] i = min(d) while d: link = d[i] chain.append(link) del d[i] i = link[1] print(chain) #[[0, 3], [3, 8], [8, 7], [7, 0]] A: Another approach with a generator function: links = [[0, 3], [7, 0], [3, 8], [8, 7]] def get_path(links, *, start=0, end=0): linkmap = dict(links) key = start while True: link = linkmap[key] yield [key,link] key = link if link == end: break print(list(get_path(links))) print(list(get_path(links,start=3,end=3))) # [[0, 3], [3, 8], [8, 7], [7, 0]] # [[3, 8], [8, 7], [7, 0], [0, 3]] A: You can try something like this: source = [[0, 3], [7, 0], [3, 8], [8, 7]] # Start at 0 last_val = 0 # this will be the output l = [] while len(l)==0 or last_val!=0: # Find the first value where the first element is last_val l.append(next(i for i in source if i[0]==last_val)) # set last val to the second element of the list last_val = l[-1][1] print(l)
Order the sub-lists in a nested list
I have a series of lists, and I want to combine them in a larger nested list. However, I want to order them in a certain way. I want the first sub-list to be the one whose first element is zero. Then i want the second sub-list to be the one whose first element is the same as the LAST element of the previous list. For example, here's four sub-lists; [0, 3], [7, 0], [3, 8], [8, 7] I want to end up with this; [[0, 3], [3, 8], [8, 7], [7,0]] I can't for the life of me see the code logic in my head that would achieve this for me. Can anyone help please? UPDATE Solved! Many thanks to all who contributed!
[ "I think of your list as being a collection of links which are to be arranged into a chain. Here is an approach which uses @quanrama 's idea of a dictionary keyed by the first element of that link:\nlinks = [[0, 3], [7, 0], [3, 8], [8, 7]]\n\nd = {link[0]:link for link in links}\nchain = []\ni = min(d)\nwhile d:\n link = d[i]\n chain.append(link)\n del d[i]\n i = link[1]\n\nprint(chain) #[[0, 3], [3, 8], [8, 7], [7, 0]]\n\n", "Another approach with a generator function:\nlinks = [[0, 3], [7, 0], [3, 8], [8, 7]]\n\ndef get_path(links, *, start=0, end=0):\n linkmap = dict(links)\n key = start\n while True:\n link = linkmap[key]\n yield [key,link]\n key = link\n if link == end:\n break\n \nprint(list(get_path(links)))\nprint(list(get_path(links,start=3,end=3)))\n\n# [[0, 3], [3, 8], [8, 7], [7, 0]]\n# [[3, 8], [8, 7], [7, 0], [0, 3]]\n\n", "You can try something like this:\nsource = [[0, 3], [7, 0], [3, 8], [8, 7]]\n\n# Start at 0\nlast_val = 0\n# this will be the output\nl = []\nwhile len(l)==0 or last_val!=0:\n # Find the first value where the first element is last_val\n l.append(next(i for i in source if i[0]==last_val))\n # set last val to the second element of the list\n last_val = l[-1][1]\n\nprint(l)\n \n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074532216_list_python.txt
Q: convert the usual file.txt format into dictionaty format, nested dictionary python I am opening a cook-book 'recipes.txt' and it reads like this: f = open('recipes.txt', 'r', encoding='utf-8') for x in f: print(x) result: Omelet 3 Egg | 2 | PCS Milk | 100 | ml Tomato | 2 | PCS Peking Duck 4 Duck | 1 | PCS Water | 2 | l Honey | 3 | t.sp Soy sauce | 60 | ml I need to read / convert it into a nested dictionary format, like this: cook_book = { 'Omelet': [ {'ingredient_name': 'Egg', 'quantity': 2, 'measure': 'PCS'}, {'ingredient_name': 'Milk', 'quantity': 100, 'measure': 'ml'}, {'ingredient_name': 'Tomato', 'quantity': 2, 'measure': 'PCS'} ], 'Peking Duck': [ {'ingredient_name': 'Duck', 'quantity': 1, 'measure': 'PCS'}, {'ingredient_name': 'Water', 'quantity': 2, 'measure': 'l'}, {'ingredient_name': 'Honey', 'quantity': 3, 'measure': 't.sp'}, {'ingredient_name': 'Soy sauce', 'quantity': 60, 'measure': 'ml'} ] } I cannot get on my own how to get exactly desired format. Would appreciate any suggestions. A: Hey maybe a little shorter then @Hunters solution with open('recipes.txt', 'r', encoding='utf-8') as recipes: cook_book = {} for line in recipes: if (line.replace("\n", "")).isnumeric() or line == "\n": # Ignore unwanted lines continue elif "|" not in line: # Initialize individual recipes current_recipe = line.replace("\n", "") cook_book[current_recipe] = [] elif len(line.strip("|")) > 2: # Add ingredients ingredient = {} ingredient_lst = line.split("|") ingredient["ingredient_name"] = ingredient_lst[0].strip() ingredient["quantity"] = ingredient_lst[1].strip() ingredient["measure"] = ingredient_lst[2].replace("\n", "").strip() cook_book[current_recipe].append(ingredient) print(cook_book) If u load any file and read from it you should always use a with block - python will then handle closing the file etc on its own. EDIT: If you want to dump the cook_book to a json file you could add this import json with open('cook_book.json', 'w', encoding='utf-8') as f: json.dump(cook_book, f, ensure_ascii=False, indent=4) Results in this: { "Omelet": [ { "ingredient_name": "Egg ", "quantity": " 2 ", "measure": " PCS" }, { "ingredient_name": "Milk ", "quantity": " 100 ", "measure": " ml" }, { "ingredient_name": "Tomato ", "quantity": " 2 ", "measure": " PCS" } ], "Peking Duck": [ { "ingredient_name": "Duck ", "quantity": " 1 ", "measure": " PCS" }, { "ingredient_name": "Water ", "quantity": " 2 ", "measure": " l" }, { "ingredient_name": "Honey ", "quantity": " 3 ", "measure": " t.sp" }, { "ingredient_name": "Soy sauce ", "quantity": " 60 ", "measure": " ml" } ] } A: What you have done is just read from the file, whereas you should be taking values from it and assigning it within the dictionaries. 
cook_book = {} indexes = [] with open('recipes.txt', 'r', encoding='utf-8') as fp: data = fp.read() linedData = data.split('\n') i = 0 for line in data.split('\n'): try: amount = int(line) indexes.append(i) indexes.append(amount) except: pass i += 1 for x in range(len(indexes)): if (x % 2) == 0: mealName = linedData[indexes[x]-1] focusedData = linedData[indexes[x] + 1:] focusedData = focusedData[:(indexes[x+1])] totalIngredients = [] for line in focusedData: ingredients = {} try: name, amm, ref = line.split(' | ') ingredients['ingredient_name'] = name ingredients['quantity'] = int(amm) ingredients['measure'] = ref except: pass totalIngredients.append(ingredients) cook_book[mealName] = totalIngredients print(cook_book) creates an output of: {'Omelette': [{'ingredient_name': 'Egg', 'quantity': 2, 'measure': 'PCS'}, {'ingredient_name': 'Milk', 'quantity': 100, 'measure': 'ml'}, {'ingredient_name': 'Tomato', 'quantity': 2, 'measure': 'PCS'}], 'Peking duck': [{'ingredient_name': 'Duck', 'quantity': 1, 'measure': 'PCS'}, {'ingredient_name': 'Water', 'quantity': 2, 'measure': 'l'}, {'ingredient_name': 'Honey', 'quantity': 3, 'measure': 'tbsp'}, {'ingredient_name': 'Soy sauce', 'quantity': 60, 'measure': 'ml'}], 'Baked potatoes': [{'ingredient_name': 'Potatoes', 'quantity': 1, 'measure': 'kg'}, {'ingredient_name': 'Garlic', 'quantity': 3, 'measure': 'tooth'}, {'ingredient_name': 'Gouda cheese', 'quantity': 100, 'measure': 'G'}], 'Fajitos': [{'ingredient_name': 'Beef', 'quantity': 500, 'measure': 'G'}, {'ingredient_name': 'Sweet pepper', 'quantity': 1, 'measure': 'PCS'}, {'ingredient_name': 'Lavash', 'quantity': 2, 'measure': 'state'}, {'ingredient_name': 'Wine vinegar', 'quantity': 1, 'measure': 'tbsp'}, {'ingredient_name': 'Tomato', 'quantity': 2, 'measure': 'state'}]} But when formatted it creates your desired output of: { 'Omelette': [ {'ingredient_name': 'Egg', 'quantity': 2, 'measure': 'PCS'}, {'ingredient_name': 'Milk', 'quantity': 100, 'measure': 'ml'}, {'ingredient_name': 'Tomato', 'quantity': 2, 'measure': 'PCS'} ], 'Peking duck': [ {'ingredient_name': 'Duck', 'quantity': 1, 'measure': 'PCS'}, {'ingredient_name': 'Water', 'quantity': 2, 'measure': 'l'}, {'ingredient_name': 'Honey', 'quantity': 3, 'measure': 'tbsp'}, {'ingredient_name': 'Soy sauce', 'quantity': 60, 'measure': 'ml'} ], 'Baked potatoes': [ {'ingredient_name': 'Potatoes', 'quantity': 1, 'measure': 'kg'}, {'ingredient_name': 'Garlic', 'quantity': 3, 'measure': 'tooth'}, {'ingredient_name': 'Gouda cheese', 'quantity': 100, 'measure': 'G'} ], 'Fajitos': [ {'ingredient_name': 'Beef', 'quantity': 500, 'measure': 'G'}, {'ingredient_name': 'Sweet pepper', 'quantity': 1, 'measure': 'PCS'}, {'ingredient_name': 'Lavash', 'quantity': 2, 'measure': 'state'}, {'ingredient_name': 'Wine vinegar', 'quantity': 1, 'measure': 'tbsp'}, {'ingredient_name': 'Tomato', 'quantity': 2, 'measure': 'state'} ] } Please bear in mind there are probably more efficient ways of achieving the desired output, and there will be ways to refactor my code, but this is my attempt at the question. Hope this helps.
convert the usual file.txt format into dictionary format, nested dictionary python
I am opening a cook-book 'recipes.txt' and it reads like this: f = open('recipes.txt', 'r', encoding='utf-8') for x in f: print(x) result: Omelet 3 Egg | 2 | PCS Milk | 100 | ml Tomato | 2 | PCS Peking Duck 4 Duck | 1 | PCS Water | 2 | l Honey | 3 | t.sp Soy sauce | 60 | ml I need to read / convert it into a nested dictionary format, like this: cook_book = { 'Omelet': [ {'ingredient_name': 'Egg', 'quantity': 2, 'measure': 'PCS'}, {'ingredient_name': 'Milk', 'quantity': 100, 'measure': 'ml'}, {'ingredient_name': 'Tomato', 'quantity': 2, 'measure': 'PCS'} ], 'Peking Duck': [ {'ingredient_name': 'Duck', 'quantity': 1, 'measure': 'PCS'}, {'ingredient_name': 'Water', 'quantity': 2, 'measure': 'l'}, {'ingredient_name': 'Honey', 'quantity': 3, 'measure': 't.sp'}, {'ingredient_name': 'Soy sauce', 'quantity': 60, 'measure': 'ml'} ] } I cannot get on my own how to get exactly desired format. Would appreciate any suggestions.
[ "Hey maybe a little shorter then @Hunters solution\nwith open('recipes.txt', 'r', encoding='utf-8') as recipes:\n cook_book = {}\n for line in recipes:\n if (line.replace(\"\\n\", \"\")).isnumeric() or line == \"\\n\": # Ignore unwanted lines\n continue\n elif \"|\" not in line: # Initialize individual recipes\n current_recipe = line.replace(\"\\n\", \"\")\n cook_book[current_recipe] = []\n elif len(line.strip(\"|\")) > 2: # Add ingredients\n ingredient = {}\n ingredient_lst = line.split(\"|\")\n ingredient[\"ingredient_name\"] = ingredient_lst[0].strip()\n ingredient[\"quantity\"] = ingredient_lst[1].strip()\n ingredient[\"measure\"] = ingredient_lst[2].replace(\"\\n\", \"\").strip()\n\n cook_book[current_recipe].append(ingredient)\n\nprint(cook_book)\n\nIf u load any file and read from it you should always use a with block - python will then handle closing the file etc on its own.\nEDIT:\nIf you want to dump the cook_book to a json file you could add this\nimport json\n\nwith open('cook_book.json', 'w', encoding='utf-8') as f:\n json.dump(cook_book, f, ensure_ascii=False, indent=4)\n\nResults in this:\n{\n \"Omelet\": [\n {\n \"ingredient_name\": \"Egg \",\n \"quantity\": \" 2 \",\n \"measure\": \" PCS\"\n },\n {\n \"ingredient_name\": \"Milk \",\n \"quantity\": \" 100 \",\n \"measure\": \" ml\"\n },\n {\n \"ingredient_name\": \"Tomato \",\n \"quantity\": \" 2 \",\n \"measure\": \" PCS\"\n }\n ],\n \"Peking Duck\": [\n {\n \"ingredient_name\": \"Duck \",\n \"quantity\": \" 1 \",\n \"measure\": \" PCS\"\n },\n {\n \"ingredient_name\": \"Water \",\n \"quantity\": \" 2 \",\n \"measure\": \" l\"\n },\n {\n \"ingredient_name\": \"Honey \",\n \"quantity\": \" 3 \",\n \"measure\": \" t.sp\"\n },\n {\n \"ingredient_name\": \"Soy sauce \",\n \"quantity\": \" 60 \",\n \"measure\": \" ml\"\n }\n ]\n}\n\n", "What you have done is just read from the file, whereas you should be taking values from it and assigning it within the dictionaries.\ncook_book = {}\n\nindexes = []\n\nwith open('recipes.txt', 'r', encoding='utf-8') as fp:\n data = fp.read()\n linedData = data.split('\\n')\n i = 0\n for line in data.split('\\n'):\n try:\n amount = int(line)\n indexes.append(i)\n indexes.append(amount)\n except:\n pass\n i += 1\n for x in range(len(indexes)):\n if (x % 2) == 0:\n mealName = linedData[indexes[x]-1]\n focusedData = linedData[indexes[x] + 1:]\n focusedData = focusedData[:(indexes[x+1])]\n totalIngredients = []\n for line in focusedData:\n ingredients = {}\n try:\n name, amm, ref = line.split(' | ')\n ingredients['ingredient_name'] = name\n ingredients['quantity'] = int(amm)\n ingredients['measure'] = ref\n except:\n pass\n totalIngredients.append(ingredients)\n cook_book[mealName] = totalIngredients\n\n print(cook_book)\n\ncreates an output of:\n{'Omelette': [{'ingredient_name': 'Egg', 'quantity': 2, 'measure': 'PCS'}, {'ingredient_name': 'Milk', 'quantity': 100, 'measure': 'ml'}, {'ingredient_name': 'Tomato', 'quantity': 2, 'measure': 'PCS'}], 'Peking duck': [{'ingredient_name': 'Duck', 'quantity': 1, 'measure': 'PCS'}, {'ingredient_name': 'Water', 'quantity': 2, 'measure': 'l'}, {'ingredient_name': 'Honey', 'quantity': 3, 'measure': 'tbsp'}, {'ingredient_name': 'Soy sauce', 'quantity': 60, 'measure': 'ml'}], 'Baked potatoes': [{'ingredient_name': 'Potatoes', 'quantity': 1, 'measure': 'kg'}, {'ingredient_name': 'Garlic', 'quantity': 3, 'measure': 'tooth'}, {'ingredient_name': 'Gouda cheese', 'quantity': 100, 'measure': 'G'}], 'Fajitos': [{'ingredient_name': 'Beef', 'quantity': 500, 'measure': 
'G'}, {'ingredient_name': 'Sweet pepper', 'quantity': 1, 'measure': 'PCS'}, {'ingredient_name': 'Lavash', 'quantity': 2, 'measure': 'state'}, {'ingredient_name': 'Wine vinegar', 'quantity': 1, 'measure': 'tbsp'}, {'ingredient_name': 'Tomato', 'quantity': 2, 'measure': 'state'}]}\n\nBut when formatted it creates your desired output of:\n{\n'Omelette': [\n {'ingredient_name': 'Egg', 'quantity': 2, 'measure': 'PCS'}, \n {'ingredient_name': 'Milk', 'quantity': 100, 'measure': 'ml'}, \n {'ingredient_name': 'Tomato', 'quantity': 2, 'measure': 'PCS'}\n ],\n'Peking duck': [\n {'ingredient_name': 'Duck', 'quantity': 1, 'measure': 'PCS'}, \n {'ingredient_name': 'Water', 'quantity': 2, 'measure': 'l'}, \n {'ingredient_name': 'Honey', 'quantity': 3, 'measure': 'tbsp'}, \n {'ingredient_name': 'Soy sauce', 'quantity': 60, 'measure': 'ml'}\n ], \n'Baked potatoes': [\n {'ingredient_name': 'Potatoes', 'quantity': 1, 'measure': 'kg'}, \n {'ingredient_name': 'Garlic', 'quantity': 3, 'measure': 'tooth'}, \n {'ingredient_name': 'Gouda cheese', 'quantity': 100, 'measure': 'G'}\n ], \n'Fajitos': [\n {'ingredient_name': 'Beef', 'quantity': 500, 'measure': 'G'}, \n {'ingredient_name': 'Sweet pepper', 'quantity': 1, 'measure': 'PCS'}, \n {'ingredient_name': 'Lavash', 'quantity': 2, 'measure': 'state'}, \n {'ingredient_name': 'Wine vinegar', 'quantity': 1, 'measure': 'tbsp'}, \n {'ingredient_name': 'Tomato', 'quantity': 2, 'measure': 'state'}\n ]\n}\n\nPlease bear in mind there are probably more efficient ways of achieving the desired output, and there will be ways to refactor my code, but this is my attempt at the question.\nHope this helps.\n" ]
[ 2, 1 ]
[]
[]
[ "dictionary", "file_read", "nested_lists", "python", "readline" ]
stackoverflow_0074530101_dictionary_file_read_nested_lists_python_readline.txt
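A more compact sketch that leans on the count line in the file, assuming every recipe block is "name, count, then exactly that many ingredient lines", with blank lines allowed in between:

def parse_cook_book(path):
    cook_book = {}
    with open(path, encoding="utf-8") as f:
        lines = (line.strip() for line in f if line.strip())   # skip blank lines
        for dish in lines:
            count = int(next(lines))
            cook_book[dish] = []
            for _ in range(count):
                name, qty, measure = (part.strip() for part in next(lines).split("|"))
                cook_book[dish].append(
                    {"ingredient_name": name, "quantity": int(qty), "measure": measure}
                )
    return cook_book

# cook_book = parse_cook_book("recipes.txt")   # uses the file from the question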
Q: ModuleNotFoundError: No module named 'kivymd' i installed kivy and kivymd. now i try to use it and it seems like i've never installed any of it. # importing all necessary modules # like MDApp, MDLabel Screen, MDTextField # and MDRectangleFlatButton from kivymd.app import MDApp from kivymd.uix.label import MDLabel from kivymd.uix.screen import Screen from kivymd.uix.textfield import MDTextField from kivymd.uix.button import MDRectangleFlatButton # creating Demo Class(base class) class Demo(MDApp): def build(self): screen = Screen() # defining label with all the parameters l = MDLabel(text="HI PEOPLE!", halign='center', theme_text_color="Custom", text_color=(0.5, 0, 0.5, 1), font_style='Caption') # defining Text field with all the parameters name = MDTextField(text="Enter name", pos_hint={ 'center_x': 0.8, 'center_y': 0.8}, size_hint_x=None, width=100) # defining Button with all the parameters btn = MDRectangleFlatButton(text="Submit", pos_hint={ 'center_x': 0.5, 'center_y': 0.3}, on_release=self.btnfunc) # adding widgets to screen screen.add_widget(name) screen.add_widget(btn) screen.add_widget(l) # returning the screen return screen # defining a btnfun() for the button to # call when clicked on it def btnfunc(self, obj): print("button is pressed!!") if __name__ == "__main__": Demo().run() the code above is just a example code which i used to test it it gives to following error: runcell(0, 'C:/Users/niekl/OneDrive/Bureaublad/Rad/Nieuwe map/untitled0.py') Traceback (most recent call last): File "C:\Users\niekl\OneDrive\Bureaublad\Rad\Nieuwe map\untitled0.py", line 4, in <module> from kivymd.app import MDApp ModuleNotFoundError: No module named 'kivymd' Kivy KivyMD I installed these packages A: Please try to install it again with pip install kivymd if above solution won't work please do: pip install --force-reinstall https://github.com/kivymd/KivyMD/archive/master.zip Edit: Another option: git clone https://github.com/kivymd/KivyMD.git --depth 1 cd KivyMD pip install . Read more about the installation options on kivyMD docs
ModuleNotFoundError: No module named 'kivymd'
i installed kivy and kivymd. now i try to use it and it seems like i've never installed any of it. # importing all necessary modules # like MDApp, MDLabel Screen, MDTextField # and MDRectangleFlatButton from kivymd.app import MDApp from kivymd.uix.label import MDLabel from kivymd.uix.screen import Screen from kivymd.uix.textfield import MDTextField from kivymd.uix.button import MDRectangleFlatButton # creating Demo Class(base class) class Demo(MDApp): def build(self): screen = Screen() # defining label with all the parameters l = MDLabel(text="HI PEOPLE!", halign='center', theme_text_color="Custom", text_color=(0.5, 0, 0.5, 1), font_style='Caption') # defining Text field with all the parameters name = MDTextField(text="Enter name", pos_hint={ 'center_x': 0.8, 'center_y': 0.8}, size_hint_x=None, width=100) # defining Button with all the parameters btn = MDRectangleFlatButton(text="Submit", pos_hint={ 'center_x': 0.5, 'center_y': 0.3}, on_release=self.btnfunc) # adding widgets to screen screen.add_widget(name) screen.add_widget(btn) screen.add_widget(l) # returning the screen return screen # defining a btnfun() for the button to # call when clicked on it def btnfunc(self, obj): print("button is pressed!!") if __name__ == "__main__": Demo().run() the code above is just a example code which i used to test it it gives to following error: runcell(0, 'C:/Users/niekl/OneDrive/Bureaublad/Rad/Nieuwe map/untitled0.py') Traceback (most recent call last): File "C:\Users\niekl\OneDrive\Bureaublad\Rad\Nieuwe map\untitled0.py", line 4, in <module> from kivymd.app import MDApp ModuleNotFoundError: No module named 'kivymd' Kivy KivyMD I installed these packages
[ "Please try to install it again with\npip install kivymd\n\nif above solution won't work please do:\npip install --force-reinstall https://github.com/kivymd/KivyMD/archive/master.zip\n\nEdit:\nAnother option:\ngit clone https://github.com/kivymd/KivyMD.git --depth 1\ncd KivyMD\npip install .\n\nRead more about the installation options on kivyMD docs\n" ]
[ 1 ]
[]
[]
[ "kivy", "kivymd", "modulenotfounderror", "python" ]
stackoverflow_0074533016_kivy_kivymd_modulenotfounderror_python.txt
Q: Why is the str() data type not making the input into a string variable? cement = str(input("Do you want premium cement or standard cement? ")) print(cement) It works for the choice of cement but also for a number. When I try an input with numbers the program doesn't close and tells me that an integer is wrong. Instead, it takes the number as a string but I don't want it to. Is there any way I can fix this? A: This should work for what you need: cement = str(input("Do you want premium cement or standard cement? ")) if (any(char.isdigit() for char in cement)) == True: exit() else: print(cement) When you enter any sentence containing a number such as 1a or 1 it exits the program. Hope this helps
Why is the str() data type not making the input into a string variable?
cement = str(input("Do you want premium cement or standard cement? ")) print(cement) It works for the choice of cement but also for a number. When I try an input with numbers the program doesn't close and tells me that an integer is wrong. Instead, it takes the number as a string but I don't want it to. Is there any way I can fix this?
[ "This should work for what you need:\ncement = str(input(\"Do you want premium cement or standard cement? \"))\nif (any(char.isdigit() for char in cement)) == True:\n exit()\nelse:\n print(cement)\n\nWhen you enter any sentence containing a number such as 1a or 1 it exits the program.\nHope this helps\n" ]
[ 0 ]
[]
[]
[ "python", "string" ]
stackoverflow_0074494767_python_string.txt
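If the real goal is to reject anything that is not one of the two cement choices (rather than exiting when a digit appears), a re-prompting sketch:

valid = {"premium", "standard"}

while True:
    cement = input("Do you want premium cement or standard cement? ").strip().lower()
    if cement in valid:
        break
    print("Please answer 'premium' or 'standard'.")

print(cement)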
Q: Isinstance slice in Numba jitclass __getitem__ I am using a numba jitclass and would like to make a transformation on the key whenever it is not a slice (but I want to keep the slice functionality). Question: How can I? To give a little context, I would rather write tensor[coord] than tensor[tensor_to_formalseries(coord, tensor.dim)] and I also like the condensed tensor[:key] more than tensor.formal_series[:key]. Below are 3 examples that work in pure python and don't as jitclasses. import numpy as np from numba import njit from numba.experimental import jitclass from numba.core.types import int64, SliceType @njit def tensor_to_formalseries(coordinate: int, dim=2): key = coordinate * 2 # some crazy stuff with coordinate return key @jitclass(spec={"formal_series": int64[:], "dim": int64}) class Tensor1: def __init__(self, dim=2): self.dim = dim self.formal_series = np.arange(10) def __getitem__(self, key): if isinstance(key, (slice, SliceType)): # tried with SliceLiteral, slice2_type, slice3_type from numba.core.types print("key is a slice") return self.formal_series[key] else: print("key is not a slice") return self.formal_series[tensor_to_formalseries(key, self.dim)] tensor = Tensor1() print(tensor[:3]) print(tensor[2]) """ Rejected as the implementation raised a specific error: NumbaTypeError: isinstance() does not support variables of type "slice<a:b>". """ @jitclass(spec={"formal_series": int64[:], "dim": int64}) class Tensor2: def __init__(self, dim=2): self.dim = dim self.formal_series = np.arange(10) def __getitem__(self, key): if isinstance(self.formal_series[key], np.ndarray): print("key is a slice") return self.formal_series[key] else: print("key is not a slice") return self.formal_series[tensor_to_formalseries(key, self.dim)] tensor = Tensor2() print(tensor[:3]) print(tensor[2]) """ Use of unsupported NumPy function 'numpy.ndarray' or unsupported use of the function. """ @jitclass(spec={"formal_series": int64[:], "dim": int64}) class Tensor3: def __init__(self, dim=2): self.dim = dim self.formal_series = np.arange(10) def __getitem__(self, key): try: len(self.formal_series[key]) print("key is a slice") return self.formal_series[key] except Exception: # should be a TypeError, but jitclass doesn't like it: UnsupportedError: Exception matching is limited to <class 'Exception'> print("key is not a slice") return self.formal_series[tensor_to_formalseries(key, self.dim)] tensor = Tensor3() print(tensor[:3]) """ Overload of function 'mul': File: <numerous>: Line N/A. With argument(s): '(slice<a:b>, int64)': """ print(tensor[2]) """ Overload of function 'len': File: <numerous>: Line N/A. With argument(s): '(int64)': No match. """ A: As of numba 0.56, isinstance() is not supported inside a numba class. Source: numba jitclass documentation Is it really needed to numba compile the "is this a slice" check? Most likely your code spent the most time on the transformation part, which means that the transformation part is where you need to focus your optimisation efforts. What i would do, is to detect the type of the key with regular python code, then use a njit compiled numba function to do the actual transformation. I would also refrain from using the experimental numba class type. You are probably better off using a regular python class that simply calls a numba compiled function.
Isinstance slice in Numba jitclass __getitem__
I am using a numba jitclass and would like to make a transformation on the key whenever it is not a slice (but I want to keep the slice functionality). Question: How can I? To give a little context, I would rather write tensor[coord] than tensor[tensor_to_formalseries(coord, tensor.dim)] and I also like the condensed tensor[:key] more than tensor.formal_series[:key]. Below are 3 examples that work in pure python and don't as jitclasses. import numpy as np from numba import njit from numba.experimental import jitclass from numba.core.types import int64, SliceType @njit def tensor_to_formalseries(coordinate: int, dim=2): key = coordinate * 2 # some crazy stuff with coordinate return key @jitclass(spec={"formal_series": int64[:], "dim": int64}) class Tensor1: def __init__(self, dim=2): self.dim = dim self.formal_series = np.arange(10) def __getitem__(self, key): if isinstance(key, (slice, SliceType)): # tried with SliceLiteral, slice2_type, slice3_type from numba.core.types print("key is a slice") return self.formal_series[key] else: print("key is not a slice") return self.formal_series[tensor_to_formalseries(key, self.dim)] tensor = Tensor1() print(tensor[:3]) print(tensor[2]) """ Rejected as the implementation raised a specific error: NumbaTypeError: isinstance() does not support variables of type "slice<a:b>". """ @jitclass(spec={"formal_series": int64[:], "dim": int64}) class Tensor2: def __init__(self, dim=2): self.dim = dim self.formal_series = np.arange(10) def __getitem__(self, key): if isinstance(self.formal_series[key], np.ndarray): print("key is a slice") return self.formal_series[key] else: print("key is not a slice") return self.formal_series[tensor_to_formalseries(key, self.dim)] tensor = Tensor2() print(tensor[:3]) print(tensor[2]) """ Use of unsupported NumPy function 'numpy.ndarray' or unsupported use of the function. """ @jitclass(spec={"formal_series": int64[:], "dim": int64}) class Tensor3: def __init__(self, dim=2): self.dim = dim self.formal_series = np.arange(10) def __getitem__(self, key): try: len(self.formal_series[key]) print("key is a slice") return self.formal_series[key] except Exception: # should be a TypeError, but jitclass doesn't like it: UnsupportedError: Exception matching is limited to <class 'Exception'> print("key is not a slice") return self.formal_series[tensor_to_formalseries(key, self.dim)] tensor = Tensor3() print(tensor[:3]) """ Overload of function 'mul': File: <numerous>: Line N/A. With argument(s): '(slice<a:b>, int64)': """ print(tensor[2]) """ Overload of function 'len': File: <numerous>: Line N/A. With argument(s): '(int64)': No match. """
[ "As of numba 0.56, isinstance() is not supported inside a numba class. Source: numba jitclass documentation\nIs it really needed to numba compile the \"is this a slice\" check? Most likely your code spent the most time on the transformation part, which means that the transformation part is where you need to focus your optimisation efforts.\nWhat i would do, is to detect the type of the key with regular python code, then use a njit compiled numba function to do the actual transformation.\nI would also refrain from using the experimental numba class type. You are probably better off using a regular python class that simply calls a numba compiled function.\n" ]
[ 1 ]
[]
[]
[ "isinstance", "numba", "python", "slice" ]
stackoverflow_0074532282_isinstance_numba_python_slice.txt
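A sketch of the answer's last suggestion: keep the class as plain Python so isinstance(key, slice) works normally, and compile only the numeric transform with numba. tensor_to_formalseries below is the same placeholder transform used in the question.

import numpy as np
from numba import njit

@njit
def tensor_to_formalseries(coordinate, dim=2):
    return coordinate * 2          # stand-in for the real coordinate mapping

class Tensor:
    def __init__(self, dim=2):
        self.dim = dim
        self.formal_series = np.arange(10)

    def __getitem__(self, key):
        if isinstance(key, slice):                 # plain-Python check, no jit needed
            return self.formal_series[key]
        return self.formal_series[tensor_to_formalseries(key, self.dim)]

tensor = Tensor()
print(tensor[:3])   # [0 1 2]
print(tensor[2])    # 4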
Q: Problem "EXCEPTION NOT FOUND" in Cryptography (Python) CODE start_time1 = time.time() ec = EC(a, b, num) g, _ = ec.at(at) assert ec.order(g) <= ec.q # ElGamal enc/dec usage eg = ElGamal(ec, g) # mapping value to ec point # "masking": value k to point ec.mul(g, k) # ("imbedding" on proper n:use a point of x as 0 <= n*v <= x < n*(v+1) < q) mapping = [ec.mul(g, i) for i in range(eg.n)] plain = mapping[at] pub = eg.gen(priv) cipher = eg.enc(plain, pub, r) decoded = eg.dec(cipher, priv) assert decoded == plain assert cipher != pub average_time1 = time.time() - start_time1 ERROR TRACEBACK Exception Traceback (most recent call last) <ipython-input-2-77934393a2f8> in <module> 256 257 ec = EC(a, b, num) --> 258 g, _ = ec.at(at) 259 assert ec.order(g) <= ec.q 260 1 frames <ipython-input-2-77934393a2f8> in sqrt(n, q) 85 return (i, q - i) 86 pass ---> 87 raise Exception("not found") 88 89 Exception: not found Donot know what to do with this error.This is basically an ECC Cryptography Code in Python. I found this on the stack overflow - use \Exception as Exception; But Error. A: The exception appear to originate in this code for calculating (or really, just brute-forcing by trying all integer possibilities) the square root of a number n, modulo q. def sqrt(n, q): """sqrt on PN modulo: returns two numbers or exception if not exist >>> assert (sqrt(n, q)[0] ** 2) % q == n >>> assert (sqrt(n, q)[1] ** 2) % q == n """ assert n < q for i in range(1, q): if i * i % q == n: return (i, q - i) pass raise Exception("not found") Not all whole numbers are squares of other whole numbers, so the function will throw an exception with the message "not found" for that case. This in turn is happening because on the line g, _ = ec.at(at) you are asking for a point on the elliptic curve which does not exist.
Problem "EXCEPTION NOT FOUND" in Cryptography (Python)
CODE start_time1 = time.time() ec = EC(a, b, num) g, _ = ec.at(at) assert ec.order(g) <= ec.q # ElGamal enc/dec usage eg = ElGamal(ec, g) # mapping value to ec point # "masking": value k to point ec.mul(g, k) # ("imbedding" on proper n:use a point of x as 0 <= n*v <= x < n*(v+1) < q) mapping = [ec.mul(g, i) for i in range(eg.n)] plain = mapping[at] pub = eg.gen(priv) cipher = eg.enc(plain, pub, r) decoded = eg.dec(cipher, priv) assert decoded == plain assert cipher != pub average_time1 = time.time() - start_time1 ERROR TRACEBACK Exception Traceback (most recent call last) <ipython-input-2-77934393a2f8> in <module> 256 257 ec = EC(a, b, num) --> 258 g, _ = ec.at(at) 259 assert ec.order(g) <= ec.q 260 1 frames <ipython-input-2-77934393a2f8> in sqrt(n, q) 85 return (i, q - i) 86 pass ---> 87 raise Exception("not found") 88 89 Exception: not found Donot know what to do with this error.This is basically an ECC Cryptography Code in Python. I found this on the stack overflow - use \Exception as Exception; But Error.
[ "The exception appear to originate in this code for calculating (or really, just brute-forcing by trying all integer possibilities) the square root of a number n, modulo q.\ndef sqrt(n, q):\n \"\"\"sqrt on PN modulo: returns two numbers or exception if not exist\n >>> assert (sqrt(n, q)[0] ** 2) % q == n\n >>> assert (sqrt(n, q)[1] ** 2) % q == n\n \"\"\"\n assert n < q\n for i in range(1, q):\n if i * i % q == n:\n return (i, q - i)\n pass\n raise Exception(\"not found\")\n\nNot all whole numbers are squares of other whole numbers, so the function will throw an exception with the message \"not found\" for that case.\nThis in turn is happening because on the line\ng, _ = ec.at(at)\n\nyou are asking for a point on the elliptic curve which does not exist.\n" ]
[ 0 ]
[]
[]
[ "cryptography", "elliptic_curve", "exception", "python", "python_cryptography" ]
stackoverflow_0074522426_cryptography_elliptic_curve_exception_python_python_cryptography.txt
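One way to avoid hitting the exception is to test whether a modular square root exists before asking the curve for the point. A sketch using Euler's criterion, assuming q is an odd prime as in the usual curve setup (the function name is made up):

def is_quadratic_residue(n, q):
    # Euler's criterion: for an odd prime q, n has a square root mod q
    # exactly when n % q == 0 or pow(n, (q - 1) // 2, q) == 1
    n %= q
    return n == 0 or pow(n, (q - 1) // 2, q) == 1

# inside EC.at(x) one could check is_quadratic_residue(ysq, self.q)
# before calling sqrt(), and pick a different x when it returns False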
Q: Only Owner of the Profile able to Update the data Using class Based (APIView) in Django rest framework for Getting and Patch (Updating) UserInfo data. views.py class getUserInfo(APIView): permission_classes = [permissions.IsAuthenticated] def get(self, request, format=None): user = request.user userinfos = user.userinfo_set.all() serializer = UserInfoSerializers(userinfos, many=True) return Response(serializer.data) def patch(self, request, pk, format=None): user = UserInfo.objects.get(id=pk) serializer = UserInfoSerializers(instance=user, data=request.data, partial=True) if serializer.is_valid(): serializer.save() return Response(serializer.data, status=status.HTTP_201_CREATED) return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) serializers.py from django.contrib.auth.models import User from .models import UserInfo class UserSerializer(serializers.ModelSerializer): class Meta: model = User fields = ('id', 'first_name', 'username') class UserInfoSerializers(serializers.ModelSerializer): user = UserSerializer(many=False, required=True) class Meta: model = UserInfo fields = ('id', 'picture', 'profession', 'user') Everything is working so far so good. Able to GET and PATCH (Update) logged-in user data. While Testing the API in Postman, I found out that if User1 is logged in he can change the data of User2 by only using the pk of User2. urls.py urlpatterns = [ path('userinfo/', views.getUserInfo.as_view(), name="UserInfo"), path('userinfo/<str:pk>/', views.getUserInfo.as_view()), path('api/token/', views.MyTokenObtainPairView.as_view(), name='token_obtain_pair'), path('api/token/refresh/', TokenRefreshView.as_view(), name='token_refresh'), path('register/', views.RegisterView.as_view(), name='auth_register'), ] Using rest_framework_simplejwt for Auth models.py from django.contrib.auth.models import User class UserInfo(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE, null=True) picture = models.ImageField(upload_to="profile_pics", null=True) profession = models.CharField(max_length=200, null=True) def __str__(self): return "%s's Profile Picture" % self.user Any help would be appreciated A: Don't use the primary key to get the user.You are using user = request.user to get the user on get method, use the same mechanism also on update. Then the login user can only update his/her info not others info or another way you can check the user = UserInfo.objects.get(id=pk) is same as the current user request.user . If not you can show an exception. A: For Retrieving and Updating an object, you can use RetrieveUpdateAPIView class GetUserInfo(generics.RetrieveUpdateAPIView): permission_classes = [IsAuthenticated] queryset = UserInfo.objects.all() serializer_class = UserInfoSerializers def get_object(self): return self.request.user Here we are getting an object, it will be called from get_object method. Instead of getting user using PK, we get the current user. You can use same url for getting and updating the user, just change the method in postman while you hit the api. GET for retrieving and PATCH for partial update. path('userinfo/', views.GetUserInfo.as_view(), name="UserInfo"),
Only the owner of the profile should be able to update the data
Using class Based (APIView) in Django rest framework for Getting and Patch (Updating) UserInfo data. views.py class getUserInfo(APIView): permission_classes = [permissions.IsAuthenticated] def get(self, request, format=None): user = request.user userinfos = user.userinfo_set.all() serializer = UserInfoSerializers(userinfos, many=True) return Response(serializer.data) def patch(self, request, pk, format=None): user = UserInfo.objects.get(id=pk) serializer = UserInfoSerializers(instance=user, data=request.data, partial=True) if serializer.is_valid(): serializer.save() return Response(serializer.data, status=status.HTTP_201_CREATED) return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) serializers.py from django.contrib.auth.models import User from .models import UserInfo class UserSerializer(serializers.ModelSerializer): class Meta: model = User fields = ('id', 'first_name', 'username') class UserInfoSerializers(serializers.ModelSerializer): user = UserSerializer(many=False, required=True) class Meta: model = UserInfo fields = ('id', 'picture', 'profession', 'user') Everything is working so far so good. Able to GET and PATCH (Update) logged-in user data. While Testing the API in Postman, I found out that if User1 is logged in he can change the data of User2 by only using the pk of User2. urls.py urlpatterns = [ path('userinfo/', views.getUserInfo.as_view(), name="UserInfo"), path('userinfo/<str:pk>/', views.getUserInfo.as_view()), path('api/token/', views.MyTokenObtainPairView.as_view(), name='token_obtain_pair'), path('api/token/refresh/', TokenRefreshView.as_view(), name='token_refresh'), path('register/', views.RegisterView.as_view(), name='auth_register'), ] Using rest_framework_simplejwt for Auth models.py from django.contrib.auth.models import User class UserInfo(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE, null=True) picture = models.ImageField(upload_to="profile_pics", null=True) profession = models.CharField(max_length=200, null=True) def __str__(self): return "%s's Profile Picture" % self.user Any help would be appreciated
[ "Don't use the primary key to get the user.You are using user = request.user to get the user on get method, use the same mechanism also on update. Then the login user can only update his/her info not others info or another way you can check the user = UserInfo.objects.get(id=pk) is same as the current user request.user . If not you can show an exception.\n", "For Retrieving and Updating an object, you can use RetrieveUpdateAPIView\nclass GetUserInfo(generics.RetrieveUpdateAPIView):\n permission_classes = [IsAuthenticated]\n queryset = UserInfo.objects.all()\n serializer_class = UserInfoSerializers\n \n def get_object(self):\n return self.request.user\n\nHere we are getting an object, it will be called from get_object method. Instead of getting user using PK, we get the current user.\nYou can use same url for getting and updating the user, just change the method in postman while you hit the api. GET for retrieving and PATCH for partial update.\npath('userinfo/', views.GetUserInfo.as_view(), name=\"UserInfo\"),\n" ]
[ 0, 0 ]
[]
[]
[ "django", "django_rest_framework", "django_views", "python", "serialization" ]
stackoverflow_0074527821_django_django_rest_framework_django_views_python_serialization.txt
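A minimal sketch of the ownership check suggested in the first answer above, applied to the patch method from the question. The 403 message text is illustrative, and the import paths assume the single-app layout implied by the question.

    from rest_framework import permissions, status
    from rest_framework.response import Response
    from rest_framework.views import APIView
    from .models import UserInfo
    from .serializers import UserInfoSerializers

    class getUserInfo(APIView):
        permission_classes = [permissions.IsAuthenticated]

        def patch(self, request, pk, format=None):
            userinfo = UserInfo.objects.get(id=pk)
            # Reject the update when the profile does not belong to the caller
            if userinfo.user != request.user:
                return Response({"detail": "You can only update your own profile."},
                                status=status.HTTP_403_FORBIDDEN)
            serializer = UserInfoSerializers(instance=userinfo, data=request.data, partial=True)
            if serializer.is_valid():
                serializer.save()
                return Response(serializer.data)
            return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)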
Q: Python: how to replace the characters between fixed format of a column with another column in DataFrame? for example, how to replace <Isis/> with twins in the first row in the whole table? I try to use the following codes, but Python indicates:"TypeError: replace() argument 1 must be str, not None" import pandas as pd import re df = pd.read_csv('train.csv') p = re.compile('<\w+/>') df['original'] = df.apply(lambda x: x['original'].replace( p.match(x['original']), str(x['edit'])), axis = 1) print(df.head()) I hope powerful friends help me, very anxious, thank you! I expect the code can return the DataFrame format, and "France is ‘ hunting down its citizens who joined ’ without trial in Iraq" can be changed to "France is ‘ hunting down its citizens who joined twins ’ without trial in Iraq". A: can you try: import re df['original'] = df.apply(lambda x: re.sub("<.*?>", x['edit'], x['original']),axis=1)
Python: how to replace the characters between fixed format of a column with another column in DataFrame?
for example, how to replace <Isis/> with twins in the first row of the whole table? I tried to use the following code, but Python reports: "TypeError: replace() argument 1 must be str, not None" import pandas as pd import re df = pd.read_csv('train.csv') p = re.compile('<\w+/>') df['original'] = df.apply(lambda x: x['original'].replace( p.match(x['original']), str(x['edit'])), axis = 1) print(df.head()) I hope someone can help me, thank you! I expect the code to return the DataFrame, with "France is ‘ hunting down its citizens who joined <Isis/> ’ without trial in Iraq" changed to "France is ‘ hunting down its citizens who joined twins ’ without trial in Iraq".
[ "can you try:\nimport re\ndf['original'] = df.apply(lambda x: re.sub(\"<.*?>\", x['edit'], x['original']),axis=1)\n\n" ]
[ 0 ]
[]
[]
[ "data_processing", "dataframe", "python", "replace" ]
stackoverflow_0074533087_data_processing_dataframe_python_replace.txt
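A quick self-contained check of the approach from the answer above, using the example sentence from the question; the one-row DataFrame built here is an assumption for illustration, since train.csv is not available.

    import re
    import pandas as pd

    df = pd.DataFrame({
        "original": ["France is ‘ hunting down its citizens who joined <Isis/> ’ without trial in Iraq"],
        "edit": ["twins"],
    })
    # Replace the <...> placeholder in each row with that row's edit value
    df["original"] = df.apply(lambda x: re.sub("<.*?>", x["edit"], x["original"]), axis=1)
    print(df.loc[0, "original"])
    # France is ‘ hunting down its citizens who joined twins ’ without trial in Iraq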
Q: No module named cv cv2 No matching distribution found for mediapipe I am using windows import cv2 ModuleNotFoundError: No module named 'cv2' how to fix it? I tried pip install opencv-contrib-python pip3 install opencv-python pip install opencv-python etc etc, still did not work update: cv2 is fixed, but I am having a problem on mediapipe. it's showing like this: ERROR: Could not find a version that satisfies the requirement mediapipe (from versions: none) ERROR: No matching distribution found for mediapipe WARNING: You are using pip version 21.3.1; however, version 22.3.1 is available. You should consider upgrading via the 'E:\python\Scripts\python.exe -m pip install --upgrade pip' command. my python version is 3.11.0 A: It seems the install of cv2 goes nicer with system install: apt install python3-opencv A: I think, with Python 3.11.0 we can't install mediapipe. What I suggest you is to try lowering your Python version to 3.7.0 and install mediapipe. If you face the same issue then try installing mediapipe==0.8.9
No module named cv cv2 No matching distribution found for mediapipe
I am using windows import cv2 ModuleNotFoundError: No module named 'cv2' how to fix it? I tried pip install opencv-contrib-python pip3 install opencv-python pip install opencv-python etc etc, still did not work update: cv2 is fixed, but I am having a problem on mediapipe. it's showing like this: ERROR: Could not find a version that satisfies the requirement mediapipe (from versions: none) ERROR: No matching distribution found for mediapipe WARNING: You are using pip version 21.3.1; however, version 22.3.1 is available. You should consider upgrading via the 'E:\python\Scripts\python.exe -m pip install --upgrade pip' command. my python version is 3.11.0
[ "It seems the install of cv2 goes nicer with system install: apt install python3-opencv\n", "I think, with Python 3.11.0 we can't install mediapipe. What I suggest you is to try lowering your Python version to 3.7.0 and install mediapipe. If you face the same issue then try installing mediapipe==0.8.9\n" ]
[ 0, 0 ]
[]
[]
[ "artificial_intelligence", "mediapipe", "python" ]
stackoverflow_0074525008_artificial_intelligence_mediapipe_python.txt
Q: Taskset cmd in Python (Windows) Hi all, I am just wondering how I can use the taskset command on Windows. Here's part of the code, which is written in Python; when I run it on Windows it gives the error 'taskset' is not recognized as an internal or external command. Here's the code below: event_list = df.to_records(index=False) event_list = list(event_list) os.system("taskset -p 0xff %d" % os.getpid()) p = Pool(processes=60) p.starmap(calc_hazard,event_list) print(time.time()-t_initial) A: You can use the psutil library which implements the taskset command. For example: p = psutil.Process(pid) p.cpu_affinity(cpus) where cpus is a list of integers specifying the new CPUs affinity. The documentation is here.
Taskset cmd in Python (Windows)
Hi all, I am just wondering how I can use the taskset command on Windows. Here's part of the code, which is written in Python; when I run it on Windows it gives the error 'taskset' is not recognized as an internal or external command. Here's the code below: event_list = df.to_records(index=False) event_list = list(event_list) os.system("taskset -p 0xff %d" % os.getpid()) p = Pool(processes=60) p.starmap(calc_hazard,event_list) print(time.time()-t_initial)
[ "You can use the psutil library which implements the taskset command. For example:\np = psutil.Process(pid)\np.cpu_affinity(cpus)\n\nwhere cpus is a list of integers specifying the new CPUs affinity. The documentation is here.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0072433504_python.txt
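Since taskset does not exist on Windows, here is a minimal sketch of the psutil-based equivalent applied to the snippet from the question; the worker function, the event list, and the assumption that the machine has at least 8 logical CPUs are all illustrative.

    import os
    import psutil
    from multiprocessing import Pool


    def calc_hazard(a, b):
        # Stand-in for the real worker from the question
        return a + b


    if __name__ == "__main__":
        # Windows equivalent of `taskset -p 0xff <pid>`: pin this process to CPUs 0-7
        proc = psutil.Process(os.getpid())
        proc.cpu_affinity(list(range(8)))
        print("affinity:", proc.cpu_affinity())

        event_list = [(1, 2), (3, 4), (5, 6)]
        with Pool(processes=2) as pool:
            print(pool.starmap(calc_hazard, event_list))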
Q: Using `to_string` with formatters removes one space between columns of a Pandas DataFrame I am using formatters to display a column of a Pandas DataFrame in a certain way: import pandas df = pandas.DataFrame({"id": [0, 10, 288, 1], "value": [38.8, 88.3, 15, 19.8], "percent": [0.55, 0.05, 0.008, 0.12]}) print(df.to_string(formatters={"percent": "{:}".format})) which outputs as: id value percent 0 0 38.8 0.55 1 10 88.3 0.05 2 288 15.0 0.008 3 1 19.8 0.12 As you can see, there is only one space between the "values" column and the "percent" column, whereas there should be two spaces (as between "id" and "values"). This is a nitty gritty detail, but how to have the two spaces back? A: import pandas df = pandas.DataFrame({"id": [0, 10, 288, 1], "value": [38.8, 88.3, 15, 19.8], "percent": [0.55, 0.05, 0.008, 0.12]}) print(df.to_string(formatters={"id":"{:4}".format, "value": "{:6}".format,"percent": "{:8}".format})) Output: id value percent 0 0 38.8 0.55 1 10 88.3 0.05 2 288 15.0 0.008 3 1 19.8 0.12
Using `to_string` with formatters removes one space between columns of a Pandas DataFrame
I am using formatters to display a column of a Pandas DataFrame in a certain way: import pandas df = pandas.DataFrame({"id": [0, 10, 288, 1], "value": [38.8, 88.3, 15, 19.8], "percent": [0.55, 0.05, 0.008, 0.12]}) print(df.to_string(formatters={"percent": "{:}".format})) which outputs as: id value percent 0 0 38.8 0.55 1 10 88.3 0.05 2 288 15.0 0.008 3 1 19.8 0.12 As you can see, there is only one space between the "values" column and the "percent" column, whereas there should be two spaces (as between "id" and "values"). This is a nitty gritty detail, but how to have the two spaces back?
[ "import pandas\ndf = pandas.DataFrame({\"id\": [0, 10, 288, 1], \"value\": [38.8, 88.3, 15, 19.8], \"percent\": [0.55, 0.05, 0.008, 0.12]})\nprint(df.to_string(formatters={\"id\":\"{:4}\".format, \"value\": \"{:6}\".format,\"percent\": \"{:8}\".format}))\n\nOutput:\n id value percent\n0 0 38.8 0.55\n1 10 88.3 0.05\n2 288 15.0 0.008\n3 1 19.8 0.12\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "format", "pandas", "python", "python_3.x" ]
stackoverflow_0065772452_dataframe_format_pandas_python_python_3.x.txt
Q: How to use a counter in a program, or how to use looping in simple code I have a pre-defined invited guest list. I ask a user for their name and check if the name is in the list. If it is, we simply print welcome. If not, we print the statement in the else condition. After that I want to add looping of name. What should I add in this? The program should work repeatedly when run once. guest_list = ['abhishek olkha' , 'monika' , 'chanchal' , 'daisy' , 'mayank'] name= input('enter your name please ') if name in guest_list: print( "welcome sir/ma'am") else: print('sorry you are not invited') A: guest_list = ['abhishek olkha' , 'monika' , 'chanchal' , 'daisy' , 'mayank'] #infinite loop while True: name= input('enter your name please ') if name in guest_list: print( "welcome sir/ma'am") else: print('sorry you are not invited') A: If you want to indefinitely loop by giving a new name and checking the result you should wrap everything in a while(true) loop. If you want to exit the loop and the program when the name is not in the list you should use a boolean variable set to True at first and that variable is set to False if the name is not in the list guest_list = ['abhishek olkha' , 'monika' , 'chanchal' , 'daisy' , 'mayank'] condition=True while(condition): name= input('enter your name please ') if name in guest_list: print( "welcome sir/ma'am") else: print('sorry you are not invited') condition=False A: Use a for loop and specify how many time you want it check guest_list = ['abhishek olkha' , 'monika' , 'chanchal' , 'daisy' , 'mayank'] name= input('enter your name please ') for i in range(10): #the loop would run for 10 times starting from 0 to 9 if name in guest_list: print( "welcome sir/ma'am") else: print('sorry you are not invited')
How to use a counter in a program, or how to use looping in simple code
I have a pre-defined invited guest list. I ask a user for their name and check if the name is in the list. If it is, we simply print welcome. If not, we print the statement in the else condition. After that I want to add looping of name. What should I add in this? The program should work repeatedly when run once. guest_list = ['abhishek olkha' , 'monika' , 'chanchal' , 'daisy' , 'mayank'] name= input('enter your name please ') if name in guest_list: print( "welcome sir/ma'am") else: print('sorry you are not invited')
[ "guest_list = ['abhishek olkha' , 'monika' , 'chanchal' , 'daisy' , 'mayank']\n#infinite loop\nwhile True:\n name= input('enter your name please ')\n if name in guest_list:\n print( \"welcome sir/ma'am\")\n else:\n print('sorry you are not invited')\n\n", "If you want to indefinitely loop by giving a new name and checking the result you should wrap everything in a while(true) loop.\nIf you want to exit the loop and the program when the name is not in the list you should use a boolean variable set to True at first and that variable is set to False if the name is not in the list\nguest_list = ['abhishek olkha' , 'monika' , 'chanchal' , 'daisy' , 'mayank']\ncondition=True\nwhile(condition):\n name= input('enter your name please ')\n if name in guest_list:\n print( \"welcome sir/ma'am\")\n else:\n print('sorry you are not invited')\n condition=False\n\n", "Use a for loop and specify how many time you want it check\nguest_list = ['abhishek olkha' , 'monika' , 'chanchal' , 'daisy' , 'mayank']\nname= input('enter your name please ')\nfor i in range(10): #the loop would run for 10 times starting from 0 to 9\n if name in guest_list:\n print( \"welcome sir/ma'am\")\n else:\n print('sorry you are not invited')\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "counter", "if_statement", "list", "loops", "python" ]
stackoverflow_0074533169_counter_if_statement_list_loops_python.txt
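A small variation on the first answer above, sketching one way to let the program run repeatedly but still stop cleanly; the 'quit' sentinel word is an assumption, not something the question specifies.

    guest_list = ['abhishek olkha', 'monika', 'chanchal', 'daisy', 'mayank']

    while True:
        # Keep asking until the user types quit
        name = input("enter your name please (or 'quit' to stop) ")
        if name == 'quit':
            break
        if name in guest_list:
            print("welcome sir/ma'am")
        else:
            print('sorry you are not invited')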
Q: Adding a hyperlink to Tkinter Treeview Values I'm putting together a decision tree tool using Tkinter. I would like to turn the values in the hyperlink column into clickable hyperlinks. How do i do this? Here is the relevant code. root=Tk() root.title('Decision Tree') root.geometry("600x600") my_tree = ttk.Treeview(root) #Define the columns my_tree['columns'] = ("Decision", "Hyperlinks", "ID") #format the columns my_tree.column("#0", width=250, minwidth=100) my_tree.column("#1", width=0, stretch="No") my_tree.column("Hyperlinks", anchor=W, width=200) my_tree.column("ID", anchor=CENTER, width=80) #Create Headings my_tree.heading("#0", text="Decision", anchor=W) my_tree.heading("#1", text="", anchor=W) my_tree.heading("Hyperlinks", text="Hyperlinks", anchor=W) my_tree.heading("ID", text="ID", anchor=CENTER) #Add Data my_tree.insert(parent='', index='1', iid=0, text="Problem 1", values=("", "", "1")) my_tree.insert(parent='', index='1', iid=2, text="Problem 2", values=("", "", "3")) my_tree.insert(parent='', index='1', iid=1, text="Problem 3", values=("", "", "2")) #Add child level 1 my_tree.insert(parent='0', index='end', iid=6, text="Prob 1 level 2", values=("", "Hyperlink 1", "1.1")) my_tree.insert(parent='1', index='end', iid=7, text="Prob 3 level 2", values=("", "Hyperlink 2", "3.1")) my_tree.insert(parent='2', index='end', iid=8, text="Prob 2 level 2", values=("", "Hyperlink 2", "2.1")) #Add child level 2 my_tree.insert(parent='6', index='end', iid=9, text="Prob 1 level 3", values=("", "", "1.11")) my_tree.insert(parent='7', index='end', iid=10, text="Prob 2 level 3", values=("", "", "2.21")) my_tree.pack(pady=20) root.mainloop() A: It should be fairly straightforward to grab the hyperlink from the treeview selection and open it in a browser import webbrowser as wb def open_link(event): tree = event.widget # get the treeview widget item = tree.item(tree.focus()) # get the treeview selection link = item['values'][1] # get the link from the selected row wb.open_new_tab(link) # open the link in a browser tab # bind the selection event to 'open_link' my_tree.bind('<<TreeviewSelect>>', open_link) Note that this will trigger when you select an item from the treeview, i.e when you click on a row of the table., rather than clicking specifically on a hyperlink in the 2nd column. If you want to do that, you have to be more particular... import webbrowser as wb def open_link(event): tree = event.widget # get the treeview widget region = tree.identify_region(event.x, event.y) col = tree.identify_column(event.x) iid = tree.identify('item', event.x, event.y) if region == 'cell' and col == '#2': link = tree.item(iid)['values'][1] # get the link from the selected row wb.open_new_tab(link) # open the link in a browser tab # bind left-click to 'open_link' my_tree.bind('<Button-1>', open_link) Now the link should only open when the user clicks on a link in the "Hyperlinks" column
Adding a hyperlink to Tkinter Treeview Values
I'm putting together a decision tree tool using Tkinter. I would like to turn the values in the hyperlink column into clickable hyperlinks. How do i do this? Here is the relevant code. root=Tk() root.title('Decision Tree') root.geometry("600x600") my_tree = ttk.Treeview(root) #Define the columns my_tree['columns'] = ("Decision", "Hyperlinks", "ID") #format the columns my_tree.column("#0", width=250, minwidth=100) my_tree.column("#1", width=0, stretch="No") my_tree.column("Hyperlinks", anchor=W, width=200) my_tree.column("ID", anchor=CENTER, width=80) #Create Headings my_tree.heading("#0", text="Decision", anchor=W) my_tree.heading("#1", text="", anchor=W) my_tree.heading("Hyperlinks", text="Hyperlinks", anchor=W) my_tree.heading("ID", text="ID", anchor=CENTER) #Add Data my_tree.insert(parent='', index='1', iid=0, text="Problem 1", values=("", "", "1")) my_tree.insert(parent='', index='1', iid=2, text="Problem 2", values=("", "", "3")) my_tree.insert(parent='', index='1', iid=1, text="Problem 3", values=("", "", "2")) #Add child level 1 my_tree.insert(parent='0', index='end', iid=6, text="Prob 1 level 2", values=("", "Hyperlink 1", "1.1")) my_tree.insert(parent='1', index='end', iid=7, text="Prob 3 level 2", values=("", "Hyperlink 2", "3.1")) my_tree.insert(parent='2', index='end', iid=8, text="Prob 2 level 2", values=("", "Hyperlink 2", "2.1")) #Add child level 2 my_tree.insert(parent='6', index='end', iid=9, text="Prob 1 level 3", values=("", "", "1.11")) my_tree.insert(parent='7', index='end', iid=10, text="Prob 2 level 3", values=("", "", "2.21")) my_tree.pack(pady=20) root.mainloop()
[ "It should be fairly straightforward to grab the hyperlink from the treeview selection and open it in a browser\nimport webbrowser as wb\n\n\ndef open_link(event):\n tree = event.widget # get the treeview widget\n item = tree.item(tree.focus()) # get the treeview selection\n link = item['values'][1] # get the link from the selected row\n wb.open_new_tab(link) # open the link in a browser tab\n\n\n# bind the selection event to 'open_link'\nmy_tree.bind('<<TreeviewSelect>>', open_link) \n\nNote that this will trigger when you select an item from the treeview, i.e when you click on a row of the table., rather than clicking specifically on a hyperlink in the 2nd column. If you want to do that, you have to be more particular...\nimport webbrowser as wb\n\n\ndef open_link(event):\n tree = event.widget # get the treeview widget\n region = tree.identify_region(event.x, event.y)\n col = tree.identify_column(event.x)\n iid = tree.identify('item', event.x, event.y)\n if region == 'cell' and col == '#2':\n link = tree.item(iid)['values'][1] # get the link from the selected row\n wb.open_new_tab(link) # open the link in a browser tab\n\n\n# bind left-click to 'open_link'\nmy_tree.bind('<Button-1>', open_link)\n\nNow the link should only open when the user clicks on a link in the \"Hyperlinks\" column\n" ]
[ 1 ]
[]
[]
[ "hyperlink", "python", "tkinter" ]
stackoverflow_0074532947_hyperlink_python_tkinter.txt
Q: Remove two first character of line if match (Python) I have a text file large with content format below, i want remove two first character 11, i try to search by dont know how to continue with my code. Looking for help. Thanks file.txt 11112345,67890,12345 115432,a123q,hs1230 11s1a123,qw321,98765321 342342,121sa,12123243 11023456,sa123,d32acas2 My code import re with open('in.txt') as oldfile, open('out.txt', 'w') as newfile: for line in oldfile: removed = re.sub(r'11', '', line[:2]): newfile.write(removed) Result expected: 112345,67890,12345 115432,a123q,hs1230 s1a123,qw321,98765321 342342,121sa,12123243 023456,sa123,d32acas2 A: Here is a suggestion of easy-to-read solution, without using regex that I find a bit cumbersome here (but this is obviously a personal opinion): with open('in.txt', 'r') as oldfile, open('out.txt', 'w') as newfile: for line in oldfile: newfile.write(line[2:] if line.startswith('11') else line) Added note after comments from @kng: you can use the additional condition line[6] != ',' to avoid removing the '11' when there are only 6 characters before the comma. A: The main issue in your code is this line : removed = re.sub(r'11', '', line[:2]): Because in this case : You'll only write the 2 first character of the line in the file You'll replace each "11" by empty char The answer provided above is great, even if it doesn't match you're 2 expectation (115432,a123q,hs1230) However, if you definitely wants to use the regex : removed = re.sub(r'^(11)','', line) This line should work
Remove the first two characters of a line if they match (Python)
I have a large text file with the content format below. I want to remove the first two characters 11; I tried to search but don't know how to continue with my code. Looking for help. Thanks file.txt 11112345,67890,12345 115432,a123q,hs1230 11s1a123,qw321,98765321 342342,121sa,12123243 11023456,sa123,d32acas2 My code import re with open('in.txt') as oldfile, open('out.txt', 'w') as newfile: for line in oldfile: removed = re.sub(r'11', '', line[:2]): newfile.write(removed) Result expected: 112345,67890,12345 115432,a123q,hs1230 s1a123,qw321,98765321 342342,121sa,12123243 023456,sa123,d32acas2
[ "Here is a suggestion of easy-to-read solution, without using regex that I find a bit cumbersome here (but this is obviously a personal opinion):\nwith open('in.txt', 'r') as oldfile, open('out.txt', 'w') as newfile:\n for line in oldfile:\n newfile.write(line[2:] if line.startswith('11') else line)\n\nAdded note after comments from @kng: you can use the additional condition line[6] != ',' to avoid removing the '11' when there are only 6 characters before the comma.\n", "The main issue in your code is this line :\nremoved = re.sub(r'11', '', line[:2]):\n\nBecause in this case :\n\nYou'll only write the 2 first character of the line in the file\nYou'll replace each \"11\" by empty char\n\nThe answer provided above is great, even if it doesn't match you're 2 expectation (115432,a123q,hs1230)\nHowever, if you definitely wants to use the regex :\nremoved = re.sub(r'^(11)','', line)\n\nThis line should work\n" ]
[ 6, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074532767_python_python_3.x.txt
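A sketch combining the first answer with the note about line[6] != ',', so the 115432 row from the expected output keeps its prefix; the rule "strip 11 only when the first comma-separated field is longer than six characters" is inferred from the sample data rather than stated in the question.

    with open('in.txt') as oldfile, open('out.txt', 'w') as newfile:
        for line in oldfile:
            first_field = line.split(',', 1)[0]
            # Strip the leading '11' only when the first field has more than 6 characters,
            # so rows such as 115432,a123q,hs1230 are left untouched
            if line.startswith('11') and len(first_field) > 6:
                line = line[2:]
            newfile.write(line)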
Q: Count Odd Numbers in an Interval Range. Leetcode problem №1523. Python I tried to solve this problem from leetcode and I came up with the following code but the testcase where low = 3 and high = 7 gives 2 as an output and 3 is expected by leetcode. I will be grateful if you explain what is wrong and how it should be done. class Solution: def countOdds(self, low: int, high: int) -> int: count = 0 for i in range(low, high): if i % 2 != 0: count += 1 return count A: range(low, high) will result in a range of (low, low+1, low+2, ..., high -1). Meaning that in your case high will not be considered. If high should also be considered use: range(low, high + 1)
Count Odd Numbers in an Interval Range. Leetcode problem №1523. Python
I tried to solve this problem from leetcode and I came up with the following code but the testcase where low = 3 and high = 7 gives 2 as an output and 3 is expected by leetcode. I will be grateful if you explain what is wrong and how it should be done. class Solution: def countOdds(self, low: int, high: int) -> int: count = 0 for i in range(low, high): if i % 2 != 0: count += 1 return count
[ "range(low, high) will result in a range of (low, low+1, low+2, ..., high -1). Meaning that in your case high will not be considered.\nIf high should also be considered use:\nrange(low, high + 1)\n\n" ]
[ 2 ]
[]
[]
[ "count", "numbers", "python" ]
stackoverflow_0074533366_count_numbers_python.txt
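Beyond fixing the range bound, the same count can be computed without a loop; a constant-time sketch of the same class, kept separate because the exercise itself does not require it.

    class Solution:
        def countOdds(self, low: int, high: int) -> int:
            # Number of odd integers in the inclusive interval [low, high]
            return (high + 1) // 2 - low // 2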
Q: How to assign cpu affinity for Python 3 subprocess? I am very much a novice at Python. I am running a Tkinter GUI on Windows 7 and Windows 10. I have a subprocess running a data logger routine at 1 KHz. I would like to set a cpu affinity for the subprocess, I am building with Python 3.8. A: You can use the psutil library. This answer should help you.
How to assign cpu affinity for Python 3 subprocess?
I am very much a novice at Python. I am running a Tkinter GUI on Windows 7 and Windows 10. I have a subprocess running a data logger routine at 1 KHz. I would like to set a cpu affinity for the subprocess, I am building with Python 3.8.
[ "You can use the psutil library. This answer should help you.\n" ]
[ 0 ]
[]
[]
[ "affinity", "cpu", "python", "subprocess", "tkinter" ]
stackoverflow_0069872036_affinity_cpu_python_subprocess_tkinter.txt
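Since the linked answer is not spelled out here, a minimal sketch of setting the affinity of a spawned subprocess with psutil; the script name and the CPU list are assumptions for illustration.

    import subprocess
    import psutil

    # Start the 1 kHz data-logger script in a child process (the path is illustrative)
    logger = subprocess.Popen(["python", "data_logger.py"])

    # Pin the child process to logical CPUs 2 and 3
    psutil.Process(logger.pid).cpu_affinity([2, 3])
    print(psutil.Process(logger.pid).cpu_affinity())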
Q: PIL - draw multiline text on image I try to add text at the bottom of image and actually I've done it, but in case of my text is longer then image width it is cut from both sides, to simplify I would like text to be in multiple lines if it is longer than image width. Here is my code: FOREGROUND = (255, 255, 255) WIDTH = 375 HEIGHT = 50 TEXT = 'Chyba najwyższy czas zadać to pytanie na śniadanie \n Chyba najwyższy czas zadać to pytanie na śniadanie' font_path = '/Library/Fonts/Arial.ttf' font = ImageFont.truetype(font_path, 14, encoding='unic') text = TEXT.decode('utf-8') (width, height) = font.getsize(text) x = Image.open('media/converty/image.png') y = ImageOps.expand(x,border=2,fill='white') y = ImageOps.expand(y,border=30,fill='black') w, h = y.size bg = Image.new('RGBA', (w, 1000), "#000000") W, H = bg.size xo, yo = (W-w)/2, (H-h)/2 bg.paste(y, (xo, 0, xo+w, h)) draw = ImageDraw.Draw(bg) draw.text(((w - width)/2, w), text, font=font, fill=FOREGROUND) bg.show() bg.save('media/converty/test.png') A: You could use textwrap.wrap to break text into a list of strings, each at most width characters long: import textwrap lines = textwrap.wrap(text, width=40) y_text = h for line in lines: width, height = font.getsize(line) draw.text(((w - width) / 2, y_text), line, font=font, fill=FOREGROUND) y_text += height A: The accepted answer wraps text without measuring the font (max 40 characters, no matter what the font size and box width is), so the results are only approximate and may easily overfill or underfill the box. Here is a simple library which solves the problem correctly: https://gist.github.com/turicas/1455973 A: For a complete working example using unutbu's trick (tested with Python 3.6 and Pillow 5.3.0): from PIL import Image, ImageDraw, ImageFont import textwrap def draw_multiple_line_text(image, text, font, text_color, text_start_height): ''' From unutbu on [python PIL draw multiline text on image](https://stackoverflow.com/a/7698300/395857) ''' draw = ImageDraw.Draw(image) image_width, image_height = image.size y_text = text_start_height lines = textwrap.wrap(text, width=40) for line in lines: line_width, line_height = font.getsize(line) draw.text(((image_width - line_width) / 2, y_text), line, font=font, fill=text_color) y_text += line_height def main(): ''' Testing draw_multiple_line_text ''' #image_width image = Image.new('RGB', (800, 600), color = (0, 0, 0)) fontsize = 40 # starting font size font = ImageFont.truetype("arial.ttf", fontsize) text1 = "I try to add text at the bottom of image and actually I've done it, but in case of my text is longer then image width it is cut from both sides, to simplify I would like text to be in multiple lines if it is longer than image width." text2 = "You could use textwrap.wrap to break text into a list of strings, each at most width characters long" text_color = (200, 200, 200) text_start_height = 0 draw_multiple_line_text(image, text1, font, text_color, text_start_height) draw_multiple_line_text(image, text2, font, text_color, 400) image.save('pil_text.png') if __name__ == "__main__": main() #cProfile.run('main()') # if you want to do some profiling Result: A: All recommendations about textwrap usage fail to determine correct width for non-monospaced fonts (as Arial, used in topic example code). I've wrote simple helper class to wrap text regarding to real font letters sizing: from PIL import Image, ImageDraw class TextWrapper(object): """ Helper class to wrap text in lines, based on given text, font and max allowed line width. 
""" def __init__(self, text, font, max_width): self.text = text self.text_lines = [ ' '.join([w.strip() for w in l.split(' ') if w]) for l in text.split('\n') if l ] self.font = font self.max_width = max_width self.draw = ImageDraw.Draw( Image.new( mode='RGB', size=(100, 100) ) ) self.space_width = self.draw.textsize( text=' ', font=self.font )[0] def get_text_width(self, text): return self.draw.textsize( text=text, font=self.font )[0] def wrapped_text(self): wrapped_lines = [] buf = [] buf_width = 0 for line in self.text_lines: for word in line.split(' '): word_width = self.get_text_width(word) expected_width = word_width if not buf else \ buf_width + self.space_width + word_width if expected_width <= self.max_width: # word fits in line buf_width = expected_width buf.append(word) else: # word doesn't fit in line wrapped_lines.append(' '.join(buf)) buf = [word] buf_width = word_width if buf: wrapped_lines.append(' '.join(buf)) buf = [] buf_width = 0 return '\n'.join(wrapped_lines) Example usage: wrapper = TextWrapper(text, image_font_intance, 800) wrapped_text = wrapper.wrapped_text() It's probably not super-fast, because it renders whole text word by word, to determine words width. But for most cases it should be OK. A: You could use PIL.ImageDraw.Draw.multiline_text(). draw.multiline_text((WIDTH, HEIGHT), TEXT, fill=FOREGROUND, font=font) You even set spacing or align using the same param names. NOTE: You need to wrap the text according to your image size vs desired font size. A: This function will split the text into rows that are at most max length long when made in font font, then it creates a transparent image with the text on it. def split_text(text, font, max) text=text.split(" ") total=0 result=[] line="" for part in text: if total+font.getsize(f"{part} ")[0]<max: line+=f"{part} " total+=font.getsize(part)[0] else: line=line.rstrip() result.append(line) line=f"{part} " total=font.getsize(f"{part} ")[0] line=line.rstrip() result.append(line) image=new("RGBA", (max, font.getsize("gL")[1]*len(result)), (0, 0, 0, 0)) imageDrawable=Draw(image) position=0 for line in result: imageDrawable.text((0, position), line, font) position+=font.getsize("gL")[1] return image A: A minimal example, keep adding words until it exceeds the maximum width limit. The function get_line returns the current line and remaining words, which can again be used in loop, as in draw_lines function below. def get_line(words, width_limit): # get text which can fit in one line, remains is list of words left over line_width = 0 line = '' i = 0 while i < len(words) and (line_width + FONT.getsize(words[i])[0]) < width_limit: if i == 0: line = line + words[i] else: line = line + ' ' + words[i] i = i + 1 line_width = FONT.getsize(line)[0] remains = [] if i < len(words): remains = words[i:len(words)] return line, remains def draw_lines(text, text_box): # add some margin to avoid touching borders box_width = text_box[1][0] - text_box[0][0] - (2*MARGIN) text_x = text_box[0][0] + MARGIN text_y = text_box[0][1] + MARGIN words = text.split(' ') while words: line, words = get_line(words, box_width) width, height = FONT.getsize(line) im_draw.text((text_x, text_y), line, font=FONT, fill=FOREGROUND) text_y += height A: Easiest solution is to use textwrap + multiline_text function from PIL import Image, ImageDraw import textwrap lines = textwrap.wrap("your long text", width=20) draw.multiline_text((x,y), '\n'.join(lines))
PIL - draw multiline text on image
I try to add text at the bottom of image and actually I've done it, but in case of my text is longer then image width it is cut from both sides, to simplify I would like text to be in multiple lines if it is longer than image width. Here is my code: FOREGROUND = (255, 255, 255) WIDTH = 375 HEIGHT = 50 TEXT = 'Chyba najwyższy czas zadać to pytanie na śniadanie \n Chyba najwyższy czas zadać to pytanie na śniadanie' font_path = '/Library/Fonts/Arial.ttf' font = ImageFont.truetype(font_path, 14, encoding='unic') text = TEXT.decode('utf-8') (width, height) = font.getsize(text) x = Image.open('media/converty/image.png') y = ImageOps.expand(x,border=2,fill='white') y = ImageOps.expand(y,border=30,fill='black') w, h = y.size bg = Image.new('RGBA', (w, 1000), "#000000") W, H = bg.size xo, yo = (W-w)/2, (H-h)/2 bg.paste(y, (xo, 0, xo+w, h)) draw = ImageDraw.Draw(bg) draw.text(((w - width)/2, w), text, font=font, fill=FOREGROUND) bg.show() bg.save('media/converty/test.png')
[ "You could use textwrap.wrap to break text into a list of strings, each at most width characters long: \nimport textwrap\nlines = textwrap.wrap(text, width=40)\ny_text = h\nfor line in lines:\n width, height = font.getsize(line)\n draw.text(((w - width) / 2, y_text), line, font=font, fill=FOREGROUND)\n y_text += height\n\n", "The accepted answer wraps text without measuring the font (max 40 characters, no matter what the font size and box width is), so the results are only approximate and may easily overfill or underfill the box. \nHere is a simple library which solves the problem correctly:\nhttps://gist.github.com/turicas/1455973\n", "For a complete working example using unutbu's trick (tested with Python 3.6 and Pillow 5.3.0):\nfrom PIL import Image, ImageDraw, ImageFont\nimport textwrap\n\ndef draw_multiple_line_text(image, text, font, text_color, text_start_height):\n '''\n From unutbu on [python PIL draw multiline text on image](https://stackoverflow.com/a/7698300/395857)\n '''\n draw = ImageDraw.Draw(image)\n image_width, image_height = image.size\n y_text = text_start_height\n lines = textwrap.wrap(text, width=40)\n for line in lines:\n line_width, line_height = font.getsize(line)\n draw.text(((image_width - line_width) / 2, y_text), \n line, font=font, fill=text_color)\n y_text += line_height\n\n\ndef main():\n '''\n Testing draw_multiple_line_text\n '''\n #image_width\n image = Image.new('RGB', (800, 600), color = (0, 0, 0))\n fontsize = 40 # starting font size\n font = ImageFont.truetype(\"arial.ttf\", fontsize)\n text1 = \"I try to add text at the bottom of image and actually I've done it, but in case of my text is longer then image width it is cut from both sides, to simplify I would like text to be in multiple lines if it is longer than image width.\"\n text2 = \"You could use textwrap.wrap to break text into a list of strings, each at most width characters long\"\n\n text_color = (200, 200, 200)\n text_start_height = 0\n draw_multiple_line_text(image, text1, font, text_color, text_start_height)\n draw_multiple_line_text(image, text2, font, text_color, 400)\n image.save('pil_text.png')\n\nif __name__ == \"__main__\":\n main()\n #cProfile.run('main()') # if you want to do some profiling\n\nResult:\n\n", "All recommendations about textwrap usage fail to determine correct width for non-monospaced fonts (as Arial, used in topic example code).\nI've wrote simple helper class to wrap text regarding to real font letters sizing:\nfrom PIL import Image, ImageDraw\n\nclass TextWrapper(object):\n \"\"\" Helper class to wrap text in lines, based on given text, font\n and max allowed line width.\n \"\"\"\n\n def __init__(self, text, font, max_width):\n self.text = text\n self.text_lines = [\n ' '.join([w.strip() for w in l.split(' ') if w])\n for l in text.split('\\n')\n if l\n ]\n self.font = font\n self.max_width = max_width\n\n self.draw = ImageDraw.Draw(\n Image.new(\n mode='RGB',\n size=(100, 100)\n )\n )\n\n self.space_width = self.draw.textsize(\n text=' ',\n font=self.font\n )[0]\n\n def get_text_width(self, text):\n return self.draw.textsize(\n text=text,\n font=self.font\n )[0]\n\n def wrapped_text(self):\n wrapped_lines = []\n buf = []\n buf_width = 0\n\n for line in self.text_lines:\n for word in line.split(' '):\n word_width = self.get_text_width(word)\n\n expected_width = word_width if not buf else \\\n buf_width + self.space_width + word_width\n\n if expected_width <= self.max_width:\n # word fits in line\n buf_width = expected_width\n buf.append(word)\n else:\n # word 
doesn't fit in line\n wrapped_lines.append(' '.join(buf))\n buf = [word]\n buf_width = word_width\n\n if buf:\n wrapped_lines.append(' '.join(buf))\n buf = []\n buf_width = 0\n\n return '\\n'.join(wrapped_lines)\n\nExample usage:\nwrapper = TextWrapper(text, image_font_intance, 800)\nwrapped_text = wrapper.wrapped_text()\n\nIt's probably not super-fast, because it renders whole text word by word, to determine words width. But for most cases it should be OK.\n", "You could use PIL.ImageDraw.Draw.multiline_text().\ndraw.multiline_text((WIDTH, HEIGHT), TEXT, fill=FOREGROUND, font=font)\n\nYou even set spacing or align using the same param names.\nNOTE: You need to wrap the text according to your image size vs desired font size.\n", "This function will split the text into rows that are at most max length long when made in font font, then it creates a transparent image with the text on it.\ndef split_text(text, font, max)\n text=text.split(\" \")\n total=0\n result=[]\n line=\"\"\n for part in text:\n if total+font.getsize(f\"{part} \")[0]<max:\n line+=f\"{part} \"\n total+=font.getsize(part)[0]\n else:\n line=line.rstrip()\n result.append(line)\n line=f\"{part} \"\n total=font.getsize(f\"{part} \")[0]\n line=line.rstrip()\n result.append(line)\n image=new(\"RGBA\", (max, font.getsize(\"gL\")[1]*len(result)), (0, 0, 0, 0))\n imageDrawable=Draw(image)\n position=0\n for line in result:\n imageDrawable.text((0, position), line, font)\n position+=font.getsize(\"gL\")[1]\n return image\n\n", "A minimal example, keep adding words until it exceeds the maximum width limit. The function get_line returns the current line and remaining words, which can again be used in loop, as in draw_lines function below.\ndef get_line(words, width_limit):\n # get text which can fit in one line, remains is list of words left over\n line_width = 0\n line = ''\n i = 0\n while i < len(words) and (line_width + FONT.getsize(words[i])[0]) < width_limit:\n if i == 0:\n line = line + words[i]\n else:\n line = line + ' ' + words[i]\n i = i + 1\n line_width = FONT.getsize(line)[0]\n remains = []\n if i < len(words):\n remains = words[i:len(words)]\n return line, remains\n\n\ndef draw_lines(text, text_box):\n # add some margin to avoid touching borders\n box_width = text_box[1][0] - text_box[0][0] - (2*MARGIN)\n text_x = text_box[0][0] + MARGIN\n text_y = text_box[0][1] + MARGIN\n words = text.split(' ')\n while words:\n line, words = get_line(words, box_width)\n width, height = FONT.getsize(line)\n im_draw.text((text_x, text_y), line, font=FONT, fill=FOREGROUND)\n text_y += height\n\n", "Easiest solution is to use textwrap + multiline_text function\nfrom PIL import Image, ImageDraw\nimport textwrap\n\nlines = textwrap.wrap(\"your long text\", width=20)\ndraw.multiline_text((x,y), '\\n'.join(lines))\n\n" ]
[ 69, 25, 17, 11, 0, 0, 0, 0 ]
[ "text = textwrap.fill(\"test \",width=35)\nself.draw.text((x, y), text, font=font, fill=\"Black\")\n\n" ]
[ -2 ]
[ "image", "python", "python_imaging_library", "text" ]
stackoverflow_0007698231_image_python_python_imaging_library_text.txt
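The wrapping answers above measure text with font.getsize, which recent Pillow releases have removed; a small sketch of the same centred wrapping using ImageDraw.textbbox instead, with the sample sentence, image size, and wrap width chosen only for illustration.

    from PIL import Image, ImageDraw, ImageFont
    import textwrap

    image = Image.new("RGB", (400, 200), "black")
    draw = ImageDraw.Draw(image)
    font = ImageFont.load_default()

    text = "I would like the caption to be split over several lines when it is longer than the image width"
    lines = textwrap.wrap(text, width=30)

    y = 10
    for line in lines:
        # textbbox returns (left, top, right, bottom) for the rendered line
        left, top, right, bottom = draw.textbbox((0, 0), line, font=font)
        line_width, line_height = right - left, bottom - top
        draw.text(((image.width - line_width) / 2, y), line, font=font, fill="white")
        y += line_height + 2

    image.save("wrapped.png")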
Q: Django Ninja API framework Pydantic schema for User model ommits fields Project running Django with Ninja API framework. To serialize native Django's User model I use following Pydantic schema: class UserBase(Schema): """Base user schema for GET method.""" id: int username = str first_name = str last_name = str email = str But, this approach gives me response: { "id": 1 } Where are the rest of fields? Thought this approach gives me a full data response: class UserModel(ModelSchema): class Config: model = User model_fields = ["id", "username", "first_name", "last_name", "email"] Response from ModelSchema: { "id": 1, "username": "aaaa", "first_name": "first", "last_name": "last", "email": "a@aa.aa" } A: Looks like the problem is that you didn't specify type for other fields. Just replace = with : in your schema for all fields: class UserBase(Schema): """Base user schema for GET method.""" id: int username: str # not = first_name: str last_name: str email: str
Django Ninja API framework Pydantic schema for User model ommits fields
Project running Django with Ninja API framework. To serialize native Django's User model I use following Pydantic schema: class UserBase(Schema): """Base user schema for GET method.""" id: int username = str first_name = str last_name = str email = str But, this approach gives me response: { "id": 1 } Where are the rest of fields? Thought this approach gives me a full data response: class UserModel(ModelSchema): class Config: model = User model_fields = ["id", "username", "first_name", "last_name", "email"] Response from ModelSchema: { "id": 1, "username": "aaaa", "first_name": "first", "last_name": "last", "email": "a@aa.aa" }
[ "Looks like the problem is that you didn't specify type for other fields. Just replace = with : in your schema for all fields:\nclass UserBase(Schema):\n \"\"\"Base user schema for GET method.\"\"\"\n\n id: int\n username: str # not =\n first_name: str\n last_name: str\n email: str\n\n" ]
[ 1 ]
[]
[]
[ "django", "pydantic", "python" ]
stackoverflow_0074533382_django_pydantic_python.txt
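A short usage sketch of the corrected schema with a django-ninja endpoint; the /me path and the api instance name are assumptions, not part of the original question.

    from ninja import NinjaAPI, Schema

    api = NinjaAPI()

    class UserBase(Schema):
        id: int
        username: str
        first_name: str
        last_name: str
        email: str

    @api.get("/me", response=UserBase)
    def me(request):
        # ninja reads the matching attributes straight off the Django user object
        return request.user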
Q: Using python AI mnist to recognize my picture, trained accuracy is 97.99%, but accuracy to my img is less than 20% Using python AI mnist to recognize my picture, trained accuracy is 97.99%, but accuracy to my img is less than 20% I'm hoping can use MNIST doing 0~9 number recognition, and trainning accuracy rate reach up to 97% , I thought it will be fine to reconize my pic but predict/recognize my 2 picture as number 7 predict/recognize my 3 picture as number 6 predict/recognize my 5 picture as number 2 here is the share pic link : https://imgur.com/a/yDJ8ujc import keras from keras.datasets import mnist import matplotlib.pyplot as plt import PIL from PIL import Image (train_images,train_labels),(test_images,test_labels) = mnist.load_data() train_images.shape len(train_labels) train_labels test_images.shape len(test_labels) test_labels from keras import models from keras import layers network = models.Sequential() network.add(layers.Dense(512,activation='relu',input_shape=(28*28,))) network.add(layers.Dense(10,activation='softmax')) network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) train_images = train_images.reshape((60000,28*28)) train_images = train_images.astype('float32')/255 test_images = test_images.reshape((10000,28*28)) test_images = test_images.astype('float32')/255 from keras.utils import to_categorical train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) network.fit(train_images,train_labels,epochs= 3 ,batch_size=128) test_loss , test_acc = network.evaluate(test_images,test_labels) print('test_acc:',test_acc) network.save('m_lenet.h5') ######### import numpy as np from keras.models import load_model import matplotlib.pyplot as plt from PIL import Image model = load_model('/content/m_lenet.h5') picPath = '/content/02_a.png' img = Image.open(picPath) reIm = img.resize((28,28),Image.ANTIALIAS) plt.imshow(reIm) plt.savefig('/content/result.png') im1 = np.array(reIm.convert("L")) im1 = im1.reshape((1,28*28)) im1 = im1.astype('float32')/255 # predict = model.predict_classes(im1) predict_x=model.predict(im1) classes_x=np.argmax(predict_x,axis=1) print ("---------------------------------") print ('predict as:') print (predict_x) print ("") print ("") print ('predict number as:') print (classes_x) print ("---------------------------------") print ("Original img : ") what should I do for this? should I also import my img with ans for AI to trainning? add more layers? that all the idea I came up, if there is more, just let me know? 
If that the only two idea to slove, also tell me how to implement (ex:import my img with ans for AI to trainning) tried code suggested by expert: use data augmentation in dataset in Keras with ImageDataGenerator import keras from keras.datasets import mnist import matplotlib.pyplot as plt import PIL from PIL import Image (train_images,train_labels),(test_images,test_labels) = mnist.load_data() train_images.shape len(train_labels) train_labels test_images.shape len(test_labels) test_labels from keras import models from keras import layers network = models.Sequential() network.add(layers.Dense(512,activation='relu',input_shape=(28*28,))) network.add(layers.Dense(10,activation='softmax')) network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) train_images = train_images.reshape((60000,28*28)) train_images = train_images.astype('float32')/255 test_images = test_images.reshape((10000,28*28)) test_images = test_images.astype('float32')/255 from keras.utils import to_categorical train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) network.fit(train_images,train_labels,epochs= 3 ,batch_size=128) # Here is image data augmentation example: from tensorflow.keras.preprocessing.image import ImageDataGenerator data_generator = ImageDataGenerator(rotation_range=10, width_shift_range=8, height_shift_range=8, brightness_range=[0.6,1.1], zoom_range=.15, validation_split=.2, rescale=1./255) train_dataset = data_generator.flow(train_images, train_labels, batch_size=32, subset='training') validation_dataset = data_generator.flow(train_images, train_labels, batch_size=32, subset='validation') # Now it's time to train model with augmented dataset network.fit(train_dataset, validation_data=validation_dataset, epochs=30) test_loss , test_acc = network.evaluate(test_images,test_labels) print('test_acc:',test_acc) network.save('m_lenet.h5') ######### import numpy as np from keras.models import load_model import matplotlib.pyplot as plt from PIL import Image model = load_model('/content/m_lenet.h5') picPath = '/content/02_a.png' img = Image.open(picPath) reIm = img.resize((28,28),Image.ANTIALIAS) plt.imshow(reIm) plt.savefig('/content/result.png') im1 = np.array(reIm.convert("L")) im1 = im1.reshape((1,28*28)) im1 = im1.astype('float32')/255 # predict = model.predict_classes(im1) predict_x=model.predict(im1) classes_x=np.argmax(predict_x,axis=1) print ("---------------------------------") print ('predict as:') print (predict_x) print ("") print ("") print ('predict number as:') print (classes_x) print ("---------------------------------") print ("Original img : ") output: Epoch 1/3 469/469 [==============================] - 10s 15ms/step - loss: 0.2555 - accuracy: 0.9268 Epoch 2/3 469/469 [==============================] - 5s 10ms/step - loss: 0.1023 - accuracy: 0.9695 Epoch 3/3 469/469 [==============================] - 5s 10ms/step - loss: 0.0678 - accuracy: 0.9796 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-11-476f532516e9> in <module> 51 rescale=1./255) 52 ---> 53 train_dataset = data_generator.flow(train_images, train_labels, batch_size=32, subset='training') 54 validation_dataset = data_generator.flow(train_images, train_labels, batch_size=32, subset='validation') 55 1 frames /usr/local/lib/python3.7/dist-packages/keras/preprocessing/image.py in __init__(self, x, y, image_data_generator, batch_size, shuffle, sample_weight, seed, data_format, 
save_to_dir, save_prefix, save_format, subset, ignore_class_split, dtype) 675 'Input data in `NumpyArrayIterator` ' 676 'should have rank 4. You passed an array ' --> 677 'with shape', self.x.shape) 678 channels_axis = 3 if data_format == 'channels_last' else 1 679 if self.x.shape[channels_axis] not in {1, 3, 4}: ValueError: ('Input data in `NumpyArrayIterator` should have rank 4. You passed an array with shape', (48000, 784)) A: As Dr. Snoopy mentioned, MNIST is an academic dataset, the handwritten numbers are in the same size, and all of them are in the center of the image, but we know in the real world this rarely happens. I think the best thing you should do is use data augmentation. With data augmentation, you can train the model with images with different zooms, different brightness and move numbers in different directions, in this situation the model does not get used to a specific zoom, brightness, and location of numbers, and it has more chance to work properly in the real world. You can simply use data augmentation in your image dataset in Keras with ImageDataGenerator. Here is a very simple code that might help you: # Here is image data augmentation example: from tensorflow.keras.preprocessing.image import ImageDataGenerator data_generator = ImageDataGenerator(rotation_range=10, width_shift_range=8, height_shift_range=8, brightness_range=[0.6,1.1], zoom_range=.15, validation_split=.2, rescale=1./255) train_dataset = data_generator.flow(train_images, train_labels, batch_size=32, subset='training') validation_dataset = data_generator.flow(train_images, train_labels, batch_size=32, subset='validation') # Now it's time to train model with augmented dataset network.fit(train_dataset, validation_data=validation_dataset, epochs=10) A: The MNIST dataset is white digit on black background whereas you have provided a black digit on a white background which inverts everything.
Using python AI mnist to recognize my picture, trained accuracy is 97.99%, but accuracy to my img is less than 20%
Using python AI mnist to recognize my picture, trained accuracy is 97.99%, but accuracy to my img is less than 20% I'm hoping can use MNIST doing 0~9 number recognition, and trainning accuracy rate reach up to 97% , I thought it will be fine to reconize my pic but predict/recognize my 2 picture as number 7 predict/recognize my 3 picture as number 6 predict/recognize my 5 picture as number 2 here is the share pic link : https://imgur.com/a/yDJ8ujc import keras from keras.datasets import mnist import matplotlib.pyplot as plt import PIL from PIL import Image (train_images,train_labels),(test_images,test_labels) = mnist.load_data() train_images.shape len(train_labels) train_labels test_images.shape len(test_labels) test_labels from keras import models from keras import layers network = models.Sequential() network.add(layers.Dense(512,activation='relu',input_shape=(28*28,))) network.add(layers.Dense(10,activation='softmax')) network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) train_images = train_images.reshape((60000,28*28)) train_images = train_images.astype('float32')/255 test_images = test_images.reshape((10000,28*28)) test_images = test_images.astype('float32')/255 from keras.utils import to_categorical train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) network.fit(train_images,train_labels,epochs= 3 ,batch_size=128) test_loss , test_acc = network.evaluate(test_images,test_labels) print('test_acc:',test_acc) network.save('m_lenet.h5') ######### import numpy as np from keras.models import load_model import matplotlib.pyplot as plt from PIL import Image model = load_model('/content/m_lenet.h5') picPath = '/content/02_a.png' img = Image.open(picPath) reIm = img.resize((28,28),Image.ANTIALIAS) plt.imshow(reIm) plt.savefig('/content/result.png') im1 = np.array(reIm.convert("L")) im1 = im1.reshape((1,28*28)) im1 = im1.astype('float32')/255 # predict = model.predict_classes(im1) predict_x=model.predict(im1) classes_x=np.argmax(predict_x,axis=1) print ("---------------------------------") print ('predict as:') print (predict_x) print ("") print ("") print ('predict number as:') print (classes_x) print ("---------------------------------") print ("Original img : ") what should I do for this? should I also import my img with ans for AI to trainning? add more layers? that all the idea I came up, if there is more, just let me know? 
If that the only two idea to slove, also tell me how to implement (ex:import my img with ans for AI to trainning) tried code suggested by expert: use data augmentation in dataset in Keras with ImageDataGenerator import keras from keras.datasets import mnist import matplotlib.pyplot as plt import PIL from PIL import Image (train_images,train_labels),(test_images,test_labels) = mnist.load_data() train_images.shape len(train_labels) train_labels test_images.shape len(test_labels) test_labels from keras import models from keras import layers network = models.Sequential() network.add(layers.Dense(512,activation='relu',input_shape=(28*28,))) network.add(layers.Dense(10,activation='softmax')) network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) train_images = train_images.reshape((60000,28*28)) train_images = train_images.astype('float32')/255 test_images = test_images.reshape((10000,28*28)) test_images = test_images.astype('float32')/255 from keras.utils import to_categorical train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) network.fit(train_images,train_labels,epochs= 3 ,batch_size=128) # Here is image data augmentation example: from tensorflow.keras.preprocessing.image import ImageDataGenerator data_generator = ImageDataGenerator(rotation_range=10, width_shift_range=8, height_shift_range=8, brightness_range=[0.6,1.1], zoom_range=.15, validation_split=.2, rescale=1./255) train_dataset = data_generator.flow(train_images, train_labels, batch_size=32, subset='training') validation_dataset = data_generator.flow(train_images, train_labels, batch_size=32, subset='validation') # Now it's time to train model with augmented dataset network.fit(train_dataset, validation_data=validation_dataset, epochs=30) test_loss , test_acc = network.evaluate(test_images,test_labels) print('test_acc:',test_acc) network.save('m_lenet.h5') ######### import numpy as np from keras.models import load_model import matplotlib.pyplot as plt from PIL import Image model = load_model('/content/m_lenet.h5') picPath = '/content/02_a.png' img = Image.open(picPath) reIm = img.resize((28,28),Image.ANTIALIAS) plt.imshow(reIm) plt.savefig('/content/result.png') im1 = np.array(reIm.convert("L")) im1 = im1.reshape((1,28*28)) im1 = im1.astype('float32')/255 # predict = model.predict_classes(im1) predict_x=model.predict(im1) classes_x=np.argmax(predict_x,axis=1) print ("---------------------------------") print ('predict as:') print (predict_x) print ("") print ("") print ('predict number as:') print (classes_x) print ("---------------------------------") print ("Original img : ") output: Epoch 1/3 469/469 [==============================] - 10s 15ms/step - loss: 0.2555 - accuracy: 0.9268 Epoch 2/3 469/469 [==============================] - 5s 10ms/step - loss: 0.1023 - accuracy: 0.9695 Epoch 3/3 469/469 [==============================] - 5s 10ms/step - loss: 0.0678 - accuracy: 0.9796 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-11-476f532516e9> in <module> 51 rescale=1./255) 52 ---> 53 train_dataset = data_generator.flow(train_images, train_labels, batch_size=32, subset='training') 54 validation_dataset = data_generator.flow(train_images, train_labels, batch_size=32, subset='validation') 55 1 frames /usr/local/lib/python3.7/dist-packages/keras/preprocessing/image.py in __init__(self, x, y, image_data_generator, batch_size, shuffle, sample_weight, seed, data_format, 
save_to_dir, save_prefix, save_format, subset, ignore_class_split, dtype) 675 'Input data in `NumpyArrayIterator` ' 676 'should have rank 4. You passed an array ' --> 677 'with shape', self.x.shape) 678 channels_axis = 3 if data_format == 'channels_last' else 1 679 if self.x.shape[channels_axis] not in {1, 3, 4}: ValueError: ('Input data in `NumpyArrayIterator` should have rank 4. You passed an array with shape', (48000, 784))
[ "As Dr. Snoopy mentioned, MNIST is an academic dataset, the handwritten numbers are in the same size, and all of them are in the center of the image, but we know in the real world this rarely happens. I think the best thing you should do is use data augmentation.\nWith data augmentation, you can train the model with images with different zooms, different brightness and move numbers in different directions, in this situation the model does not get used to a specific zoom, brightness, and location of numbers, and it has more chance to work properly in the real world.\nYou can simply use data augmentation in your image dataset in Keras with ImageDataGenerator. Here is a very simple code that might help you:\n# Here is image data augmentation example:\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\n\ndata_generator = ImageDataGenerator(rotation_range=10,\n width_shift_range=8,\n height_shift_range=8,\n brightness_range=[0.6,1.1],\n zoom_range=.15,\n validation_split=.2,\n rescale=1./255)\n\ntrain_dataset = data_generator.flow(train_images, train_labels, batch_size=32, subset='training')\nvalidation_dataset = data_generator.flow(train_images, train_labels, batch_size=32, subset='validation')\n\n\n# Now it's time to train model with augmented dataset\nnetwork.fit(train_dataset, validation_data=validation_dataset, epochs=10)\n\n", "The MNIST dataset is white digit on black background whereas you have provided a black digit on a white background which inverts everything.\n" ]
[ 2, 1 ]
[]
[]
[ "artificial_intelligence", "keras", "python", "python_3.x", "tensorflow" ]
stackoverflow_0074517638_artificial_intelligence_keras_python_python_3.x_tensorflow.txt
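A minimal sketch of how the ValueError in the record above could be avoided: ImageDataGenerator.flow expects rank-4 arrays of shape (samples, height, width, channels), so the flattened (48000, 784) arrays would need to be reshaped back to 28x28x1 before augmentation, and the network would then need a Flatten layer or a reshape step in front of its Dense layers. The variable names follow the question's code and the reshape-based approach is an assumption here, not the accepted answer's exact fix.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# train_images was already scaled to [0, 1] and flattened to (60000, 784) above,
# so reshape it back to rank 4 and skip the extra rescale argument here
train_images_4d = train_images.reshape((-1, 28, 28, 1))

data_generator = ImageDataGenerator(rotation_range=10,
                                    width_shift_range=8,
                                    height_shift_range=8,
                                    zoom_range=.15,
                                    validation_split=.2)

train_dataset = data_generator.flow(train_images_4d, train_labels,
                                    batch_size=32, subset='training')
validation_dataset = data_generator.flow(train_images_4d, train_labels,
                                         batch_size=32, subset='validation')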
Q: BERT word embeddings I'm trying to use BERT in a static word embeddings kind of way to compare to Word2Vec and show the differences and how BERT is not really meant to be used in a contextless manner. This is how (based on many blog posts and tutorials) I am attempting to do that: def get_hidden_states(encoded, model, layers): with torch.no_grad(): output = model(**encoded) states = output.hidden_states # Stack final 4 layers output = torch.stack([states[i] for i in layers]).sum(0).squeeze() ## shape torch.Size([5, 768]) return output.mean(dim=0) ##average def get_word_vector(sent, tokenizer, model, layers): encoded = tokenizer.encode_plus(sent, return_tensors="pt") ##{'input_ids': tensor([[ 101, 9712, 4774, 3408, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1]])} return get_hidden_states(encoded, model, layers) word = "embeddings" layers = [-4, -3, -2, -1] tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased") model = BertModel.from_pretrained("bert-base-cased", output_hidden_states=True) word_embed = get_word_vector(word, tokenizer, model, layers) ## shape torch.Size([768]) My main question is, do I exclude the CLS and SEP embeddings when I average the subtokens to get a whole word representation? Or should these be theoretically included? A: you can just add an add_special_tokens parameter to add or remove special tokens from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') sentence = 'test 1 2 3' features = tokenizer( sentence, padding='do_not_pad', add_special_tokens=False, return_tensors='pt' )
BERT word embeddings
I'm trying to use BERT in a static word embeddings kind of way to compare to Word2Vec and show the differences and how BERT is not really meant to be used in a contextless manner. This is how (based on many blogsposts and tutorials) I am attempting to do that def get_hidden_states(encoded, model, layers): with torch.no_grad(): output = model(**encoded) states = output.hidden_states # Stack final 4 layers output = torch.stack([states[i] for i in layers]).sum(0).squeeze() ## shape torch.Size([5, 768]) return output.mean(dim=0) ##average def get_word_vector(sent, tokenizer, model, layers): encoded = tokenizer.encode_plus(sent, return_tensors="pt") ##{'input_ids': tensor([[ 101, 9712, 4774, 3408, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1]])} return get_hidden_states(encoded, model, layers) word = "embeddings" layers = [-4, -3, -2, -1] tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased") model = BertModel.from_pretrained("bert-base-cased", output_hidden_states=True) word_embed = get_word_vector(word, tokenizer, model, layers) ## shape torch.Size([768]) My main question is, do I exclude the CLS and SEP embeddings when I average the subtokens to get a whole word representation? Or should these be theoretically included?
[ "you can just add a add_special_tokens paramater to add or remove special tokens\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pertrained('bert-base-uncased')\n\nsentence = 'test 1 2 3'\n\nfeatures = tokenizer(\n sentence, padding='do_not_pad', add_special_tokens=False, return_tensors='pt'\n)\n\n" ]
[ 0 ]
[]
[]
[ "bert_language_model", "huggingface_transformers", "python", "pytorch", "word_embedding" ]
stackoverflow_0074531494_bert_language_model_huggingface_transformers_python_pytorch_word_embedding.txt
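A small sketch of one way the [CLS] and [SEP] vectors could be dropped before averaging, continuing from the variables and imports in the question above; filtering through tokenizer.all_special_ids is an assumption, not something stated in the answer.

encoded = tokenizer(word, return_tensors="pt")
with torch.no_grad():
    states = model(**encoded).hidden_states          # needs output_hidden_states=True

# sum the last four layers -> shape (seq_len, 768)
summed = torch.stack([states[i] for i in layers]).sum(0).squeeze(0)

# keep only positions whose token id is not a special token ([CLS], [SEP], [PAD], ...)
special_ids = set(tokenizer.all_special_ids)
keep = [i for i, tok in enumerate(encoded["input_ids"][0].tolist()) if tok not in special_ids]

word_vector = summed[keep].mean(dim=0)               # average over the remaining subtokens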
Q: Python pickle adds first double value instead of single This is the version of rock paper scissors game but I dont seem to find the solution to why it always adds double values to the first score that you get. For example if I play two games in a row it prints out double the amount of the first score and single amount of the second score and stores both of them inside the pickle. import pickle import random rock = ''' _______ ---' ____) (_____) (_____) (____) ---.__(___) ''' paper = ''' _______ ---' ____)____ ______) _______) _______) ---.__________) ''' scissors = ''' _______ ---' ____)____ ______) __________) (____) ---.__(___) ''' CHOICES = ''' 1 - Play 2 - Statistic 3 - Quit''' STATISTIC = ''' 1 - This game results 2 - History of all results 3 - Quit ''' victory, defeat, draw = [0, 0, 0] victory_all, defeat_all, draw_all = [0, 0, 0] gestures = [rock, paper, scissors] while True: choice = input(CHOICES) if choice == "1": try: with open("rezultatai46.pkl", "rb") as file_pickle: victory_all = pickle.load(file_pickle) defeat_all = pickle.load(file_pickle) draw_all = pickle.load(file_pickle) except FileNotFoundError: with open("rezultatai46.pkl", "wb") as file_pickle: storage = victory_all, defeat_all, draw_all pickle.dumps(victory_all) pickle.dumps(defeat_all) pickle.dumps(draw_all) print('\nWelcome to ROCK PAPER SCISSORS game!\n') player_choice = int(input("Your choice:\n0 - Rock\n1 - Paper\n2 - Scissors\n")) computer_choice = random.randint(0, 2) if player_choice >= 3 or computer_choice < 0: print("Character error, you lose!") else: print("Your choice:") print(gestures[player_choice]) print("Computer choice:") print(gestures[computer_choice]) if player_choice == 0 and computer_choice == 2: victory += 1 print("Tu laimėjai!") elif computer_choice == 0 and player_choice == 2: defeat += 1 print("Tu pralaimėjai!") elif computer_choice > player_choice: computer_choice += 1 print("Tu pralaimėjai!") elif player_choice > computer_choice: victory += 1 print("Tu laimėjai!") elif computer_choice == player_choice: draw += 1 print("Lygiosios") with open("rezultatai46.pkl", "wb") as file_pickle: storage = victory_all, defeat_all, draw_all victory_all = (victory_all + victory) defeat_all = (defeat_all + defeat) draw_all = (draw_all + draw) pickle.dump(victory_all, file_pickle) pickle.dump(defeat_all, file_pickle) pickle.dump(draw_all, file_pickle) elif choice == "2": while True: choice = input(STATISTIC) if choice == "1": print(f"This game session:\nWon: {victory}\n" f"Lost: {defeat}\nDraaw: {draw}") elif choice == "2": with open("rezultatai46.pkl", "rb") as file_pickle: victory_all = pickle.load(file_pickle) defeat_all = pickle.load(file_pickle) draw_all = pickle.load(file_pickle) print("Won in total:", victory_all) print("Lost in total:", defeat_all) print("Draw in total:", draw_all) games_all = (victory_all + defeat_all + draw_all) victory_percentage = ((victory_all / games_all) * 100) print("Laimėjimai procentais: ", round(victory_percentage), "%") elif choice == "3": break elif choice == "3": print("Iki susitikimo!") break else: print("Netinkamas pasirinkimas") A: First - you have an error in this part: elif computer_choice > player_choice: computer_choice += 1 print("Tu pralaimėjai!") It should be adding to defeat, not computer_choice. That is the cause of seem different behaviors for wins and losses. 
Second: you are counting the current session score in the variables victory, defeat, draw since program start, and always adding that whole session score to the all-time totals when pickling - but then you add it again when the next round is saved. Bear with me: round 1: total_victory = 0 victory = 0 Player wins: victory = 1, total_victory += victory -> 1 and it is saved round 2 starts at: victory = 1 total_victory = 1 Player wins: victory = 2 total_victory += victory -> 3 Round 3 starts at: victory = 2 total_victory = 3 And after another win goes to: victory = 3 total_victory += victory -> 6 The logic of this program is nice, and it can be seen you are being creative as you learn - which is great. But the fact that you did not follow more usual patterns makes it hard to fix without rewriting a significant part of your code. So, as I have pointed out the error, I will leave it up to you to think about how to fix it. Probably the easiest way is to always add "1" to total_victory, etc., as you add "1" to "victory", and never add "total_victory + victory" (and counterparts).
Python pickle adds first double value instead of single
This is the version of rock paper scissors game but I dont seem to find the solution to why it always adds double values to the first score that you get. For example if I play two games in a row it prints out double the amount of the first score and single amount of the second score and stores both of them inside the pickle. import pickle import random rock = ''' _______ ---' ____) (_____) (_____) (____) ---.__(___) ''' paper = ''' _______ ---' ____)____ ______) _______) _______) ---.__________) ''' scissors = ''' _______ ---' ____)____ ______) __________) (____) ---.__(___) ''' CHOICES = ''' 1 - Play 2 - Statistic 3 - Quit''' STATISTIC = ''' 1 - This game results 2 - History of all results 3 - Quit ''' victory, defeat, draw = [0, 0, 0] victory_all, defeat_all, draw_all = [0, 0, 0] gestures = [rock, paper, scissors] while True: choice = input(CHOICES) if choice == "1": try: with open("rezultatai46.pkl", "rb") as file_pickle: victory_all = pickle.load(file_pickle) defeat_all = pickle.load(file_pickle) draw_all = pickle.load(file_pickle) except FileNotFoundError: with open("rezultatai46.pkl", "wb") as file_pickle: storage = victory_all, defeat_all, draw_all pickle.dumps(victory_all) pickle.dumps(defeat_all) pickle.dumps(draw_all) print('\nWelcome to ROCK PAPER SCISSORS game!\n') player_choice = int(input("Your choice:\n0 - Rock\n1 - Paper\n2 - Scissors\n")) computer_choice = random.randint(0, 2) if player_choice >= 3 or computer_choice < 0: print("Character error, you lose!") else: print("Your choice:") print(gestures[player_choice]) print("Computer choice:") print(gestures[computer_choice]) if player_choice == 0 and computer_choice == 2: victory += 1 print("Tu laimėjai!") elif computer_choice == 0 and player_choice == 2: defeat += 1 print("Tu pralaimėjai!") elif computer_choice > player_choice: computer_choice += 1 print("Tu pralaimėjai!") elif player_choice > computer_choice: victory += 1 print("Tu laimėjai!") elif computer_choice == player_choice: draw += 1 print("Lygiosios") with open("rezultatai46.pkl", "wb") as file_pickle: storage = victory_all, defeat_all, draw_all victory_all = (victory_all + victory) defeat_all = (defeat_all + defeat) draw_all = (draw_all + draw) pickle.dump(victory_all, file_pickle) pickle.dump(defeat_all, file_pickle) pickle.dump(draw_all, file_pickle) elif choice == "2": while True: choice = input(STATISTIC) if choice == "1": print(f"This game session:\nWon: {victory}\n" f"Lost: {defeat}\nDraaw: {draw}") elif choice == "2": with open("rezultatai46.pkl", "rb") as file_pickle: victory_all = pickle.load(file_pickle) defeat_all = pickle.load(file_pickle) draw_all = pickle.load(file_pickle) print("Won in total:", victory_all) print("Lost in total:", defeat_all) print("Draw in total:", draw_all) games_all = (victory_all + defeat_all + draw_all) victory_percentage = ((victory_all / games_all) * 100) print("Laimėjimai procentais: ", round(victory_percentage), "%") elif choice == "3": break elif choice == "3": print("Iki susitikimo!") break else: print("Netinkamas pasirinkimas")
[ "First - you have an error in this part:\n\n elif computer_choice > player_choice:\n computer_choice += 1\n print(\"Tu pralaimėjai!\")\n\n\nIt should be adding to defeat, not computer_choice. That is the cause of seem different behaviors for wins and losses.\nSecond: you are counting the current score to variables victory, defeat, draw, since program start, and always adding the total local score to the total scores when pickling - but then you update again the total score when starting the next round.\nBear with me:\nround 1:\ntotal_victory = 0\nvictory = 0\nPlayer wins:\nvictory = 1, total_victory += victory -> 1 and it is saved\nround 2 starts at:\nvictory = 1\ntotal_victory = 1\nPlayer wins:\nvictory = 2\ntotal_victory += victory -> 3\nRound 3 starts at:\nvictory = 2\ntotal_victory = 3\nAnd after another win goes to:\nvictory = 3\ntotal_victory += victory -> 6\n\nThe logic of this program is nice, and it can be seem you are being creative as you learn - which is great. But the fact you did not follow more usual patterns make it hard to fix without re-writting a significant part of your code.\nSo, as I pointed the error, I will leave it up to you to think how to fix it.\nProbably the easier way is always adding \"1\" to total_victory, etc... as you add \"1\" to \"victory\", and never try to add \"total_victory + victory\" (and counterparts).\n" ]
[ 0 ]
[]
[]
[ "pickle", "python" ]
stackoverflow_0074533201_pickle_python.txt
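A short sketch of the fix the answer above hints at: bump the all-time counters by one at the same moment the session counters change, and then pickle the *_all values as they are instead of adding the session totals to them again. Names follow the question's code; this is only an illustration of the idea.

if player_choice == 0 and computer_choice == 2:
    victory += 1
    victory_all += 1
elif computer_choice == 0 and player_choice == 2:
    defeat += 1
    defeat_all += 1
# ... same pattern for the remaining win/lose/draw branches ...

with open("rezultatai46.pkl", "wb") as file_pickle:
    pickle.dump(victory_all, file_pickle)   # no "victory_all + victory" here
    pickle.dump(defeat_all, file_pickle)
    pickle.dump(draw_all, file_pickle)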
Q: do fit function of QSVC require float values as parameters? Following is my code. The error seems to be in qsvc.fit() line but I can't understand why.one of the error line says "TypeError: Invalid parameter values, expected Sequence[Sequence[float]]." I'm pretty much sure I have passed arrays as parameters in fit function but do they need to be float type because labels are generally strings. sorry this is my first time trying this so these may seem naive. import pandas as pd import matplotlib.pyplot as plt import numpy as np from sklearn.model_selection import train_test_split from qiskit import Aer from qiskit.circuit.library import ZFeatureMap from qiskit_machine_learning.kernels import FidelityQuantumKernel from qiskit.algorithms.state_fidelities import ComputeUncompute from qiskit.primitives import Sampler from qiskit.utils import QuantumInstance from qiskit_machine_learning.algorithms import PegasosQSVC data=pd.read_csv('train.csv') X = data.loc[1:1000,["marital","balance","loan"]].values Y = data.iloc[:1000,-1].values x_train, x_test, y_train, y_test = train_test_split(X, Y) data_feature_map = ZFeatureMap(feature_dimension=3, reps=1 ) sampler = Sampler() fidelity = ComputeUncompute(sampler=sampler) data_kernel = FidelityQuantumKernel(fidelity=fidelity, feature_map=data_feature_map) pegasos_qsvc = PegasosQSVC(quantum_kernel=data_kernel, C=1000, num_steps=100) pegasos_qsvc.fit(x_train, y_train) qsvc_score = pegasos_qsvc.score(x_test, y_test) print(f"QSVC classification test score: {qsvc_score}") A: You can use values 0,1 and 2 to represent "marital", "balance" and "loan". sklearn has a LabelEncoder to help such a conversion.
Does the fit function of QSVC require float values as parameters?
Following is my code. The error seems to be in qsvc.fit() line but I can't understand why.one of the error line says "TypeError: Invalid parameter values, expected Sequence[Sequence[float]]." I'm pretty much sure I have passed arrays as parameters in fit function but do they need to be float type because labels are generally strings. sorry this is my first time trying this so these may seem naive. import pandas as pd import matplotlib.pyplot as plt import numpy as np from sklearn.model_selection import train_test_split from qiskit import Aer from qiskit.circuit.library import ZFeatureMap from qiskit_machine_learning.kernels import FidelityQuantumKernel from qiskit.algorithms.state_fidelities import ComputeUncompute from qiskit.primitives import Sampler from qiskit.utils import QuantumInstance from qiskit_machine_learning.algorithms import PegasosQSVC data=pd.read_csv('train.csv') X = data.loc[1:1000,["marital","balance","loan"]].values Y = data.iloc[:1000,-1].values x_train, x_test, y_train, y_test = train_test_split(X, Y) data_feature_map = ZFeatureMap(feature_dimension=3, reps=1 ) sampler = Sampler() fidelity = ComputeUncompute(sampler=sampler) data_kernel = FidelityQuantumKernel(fidelity=fidelity, feature_map=data_feature_map) pegasos_qsvc = PegasosQSVC(quantum_kernel=data_kernel, C=1000, num_steps=100) pegasos_qsvc.fit(x_train, y_train) qsvc_score = pegasos_qsvc.score(x_test, y_test) print(f"QSVC classification test score: {qsvc_score}")
[ "You can use values 0,1 and 2 to represent \"marital\", \"balance\" and \"loan\". sklearn has a LabelEncoder to help such a conversion.\n" ]
[ 0 ]
[]
[]
[ "machine_learning", "python", "qiskit", "quantum_computing" ]
stackoverflow_0074522968_machine_learning_python_qiskit_quantum_computing.txt
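A brief sketch of the encoding step the answer above suggests, using sklearn's LabelEncoder; which of the three columns in train.csv are actually strings is an assumption here.

from sklearn.preprocessing import LabelEncoder
import pandas as pd

data = pd.read_csv('train.csv')

X = data.loc[1:1000, ["marital", "balance", "loan"]].copy()
for col in ["marital", "loan"]:                  # assumed categorical/string columns
    X[col] = LabelEncoder().fit_transform(X[col])
X = X.astype('float64').values                   # Sequence[Sequence[float]], as the error asks for

Y = LabelEncoder().fit_transform(data.iloc[:1000, -1])   # string labels -> integers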
Q: I need to break a for loop in python with specific condition but i am not sure what condition I should use here is my dummy data df parent children a b a c a d b e b f c g c h c i d j d k e l e m f n f o f p import pandas as pd df=pd.read_csv("myfile.csv") dfnew=pd.DataFrame(columns=["parent","children"]) x=input("enter the name of root parent : ") generation=int(input("how many generations you want in the network : ")) mylist=[x] for i in mylist: dfntemp=df[df["parent"]==i] dfnew=pd.concat([dfnew,dfntemp]) mylist2=list(dfntemp["children"]) for j in mylist2: mylist.append(j) #I need a condition to break the loop after specific number of generations here is the new df which will be used to make graph, dfnew I have tried the code mentioned above but my code is fetching all the generations. I want to break the loop after specific number of generations EDIT 2 I have already used this code to make the graph import networkx as nx from pyvis.network import Network G = nx.from_pandas_edgelist(dfnew,'parent','children') net=Network(height='400px',width='50%',bgcolor='#222222',font_color='white',directed = 'True') net.from_nx(G) net.save_graph('network.html') pyvis graph The problem is the original data i am working on for my research has around 2 billion rows, so I'm fetching the children from MySQl table using mysql connector so I can not use the original df in python to make generate the graph. A: So you want to find the successors of a given node, with a depth limit? You should use networkx directly and dfs_successors: import pandas as pd import networkx as nx G = nx.from_pandas_edgelist(df, source='parent', target='children', create_using=nx.DiGraph) root = 'b' generation = 2 nodes = {v for k, l in nx.dfs_successors(G, source=root, depth_limit=generation).items() for v in l } | {root} # adding the root to the set print(nodes) Variant with a breadth first search (bfs_successors): nodes = {v for _, l in nx.bfs_successors(G, source=root, depth_limit=generation) for v in l} | {root} print(nodes) Output: {'b', 'e', 'f', 'l', 'm', 'n', 'o', 'p'} Output for root = 'b' , generation = 1: {'b', 'e', 'f'} Your graph: A: Easiest way will be adding enumerate for index, j in enumerate(mylist2): mylist.append(j) if index >= generation: break
I need to break a for loop in Python with a specific condition but I am not sure what condition I should use
here is my dummy data df parent children a b a c a d b e b f c g c h c i d j d k e l e m f n f o f p import pandas as pd df=pd.read_csv("myfile.csv") dfnew=pd.DataFrame(columns=["parent","children"]) x=input("enter the name of root parent : ") generation=int(input("how many generations you want in the network : ")) mylist=[x] for i in mylist: dfntemp=df[df["parent"]==i] dfnew=pd.concat([dfnew,dfntemp]) mylist2=list(dfntemp["children"]) for j in mylist2: mylist.append(j) #I need a condition to break the loop after specific number of generations here is the new df which will be used to make graph, dfnew I have tried the code mentioned above but my code is fetching all the generations. I want to break the loop after specific number of generations EDIT 2 I have already used this code to make the graph import networkx as nx from pyvis.network import Network G = nx.from_pandas_edgelist(dfnew,'parent','children') net=Network(height='400px',width='50%',bgcolor='#222222',font_color='white',directed = 'True') net.from_nx(G) net.save_graph('network.html') pyvis graph The problem is the original data i am working on for my research has around 2 billion rows, so I'm fetching the children from MySQl table using mysql connector so I can not use the original df in python to make generate the graph.
[ "So you want to find the successors of a given node, with a depth limit?\nYou should use networkx directly and dfs_successors:\nimport pandas as pd\nimport networkx as nx\n\nG = nx.from_pandas_edgelist(df, source='parent', target='children',\n create_using=nx.DiGraph)\n\nroot = 'b'\ngeneration = 2\n\nnodes = {v for k, l in \n nx.dfs_successors(G, source=root,\n depth_limit=generation).items()\n for v in l\n } | {root} # adding the root to the set\n\nprint(nodes)\n\nVariant with a breadth first search (bfs_successors):\nnodes = {v for _, l in \n nx.bfs_successors(G, source=root,\n depth_limit=generation)\n for v in l} | {root}\nprint(nodes)\n\nOutput: {'b', 'e', 'f', 'l', 'm', 'n', 'o', 'p'}\nOutput for root = 'b' , generation = 1: {'b', 'e', 'f'}\nYour graph:\n\n", "Easiest way will be adding enumerate\nfor index, j in enumerate(mylist2):\n mylist.append(j)\n if index >= generation:\n break\n\n" ]
[ 0, 0 ]
[]
[]
[ "break", "family_tree", "for_loop", "python" ]
stackoverflow_0074533195_break_family_tree_for_loop_python.txt
Q: Can't see my widget in frame in tkinter. I want to see button in bottom of the frame in on ceneter I don't see my buttons. I want to get this: enter image description here I want to see button in bottom of the frame in on ceneter. This is my code: from tkinter import ttk from tkinter import * import tkinter as tk root = tk.Tk() root.geometry('900x650+0+0') root.title("SV Configuration") root.eval('tk::PlaceWindow . center') frame_btn = Frame(root) frame_btn.grid(row=4, column=1, sticky="nswe") frame_center = Frame(frame_btn) frame_center.place(relx=0.5, rely=0.5, anchor=CENTER) ok_btn = Button(frame_center, text="Ok") cancel_btn = Button(frame_center, text="Cancel") cancel_btn.pack() ok_btn.pack() root.mainloop() A: First, the size of frame_btn will be 1x1 because its only child frame_center is put inside the frame using .place() which does not adjust its size automatically. You need to use root.rowconfigure(4, weight=1) and root.columnconfigure(1, weight=1) (as frame_center is put at row 4 and column 1) to let frame_center to fill the available space of frame_btn. Second, the two buttons are put inside frame_center using .pack() so they will be by default packing vertically, not what you want them to be horizontally. So specify side=LEFT on both .pack() to pack them horizontally. Also it is better to specify the width and height options of the two buttons so that they have same size. Third, the frame_center is put at the center of frame_btn because .place(relx=0.5, rely=0.5, anchor=CENTER). To put it at the bottom, use .place(relx=0.5, rely=0.9, anchor=S) instead. Finally, you have imported tkinter twice: from tkinter import * import tkinter as tk wildcard import is not recommended, so remove from tkinter import * and add prefix tk. to widget classes, like tk.Button. Below is the modified code: from tkinter import ttk import tkinter as tk root = tk.Tk() root.geometry('900x650+0+0') root.title("SV Configuration") #root.eval('tk::PlaceWindow . center') root.rowconfigure(4, weight=1) root.columnconfigure(1, weight=1) frame_btn = tk.Frame(root) frame_btn.grid(row=4, column=1, sticky="nswe") frame_center = tk.Frame(frame_btn) frame_center.place(relx=0.5, rely=0.9, anchor=tk.S) ok_btn = tk.Button(frame_center, text="Ok", width=20, height=3) cancel_btn = tk.Button(frame_center, text="Cancel", width=20, height=3) cancel_btn.pack(side=tk.LEFT, padx=10) ok_btn.pack(side=tk.LEFT, padx=10) root.mainloop() And the result:
Can't see my widget in frame in tkinter. I want to see buttons at the bottom of the frame, centered
I don't see my buttons. I want to get this: [screenshot of the desired layout] I want to see the buttons at the bottom of the frame, centered. This is my code: from tkinter import ttk from tkinter import * import tkinter as tk root = tk.Tk() root.geometry('900x650+0+0') root.title("SV Configuration") root.eval('tk::PlaceWindow . center') frame_btn = Frame(root) frame_btn.grid(row=4, column=1, sticky="nswe") frame_center = Frame(frame_btn) frame_center.place(relx=0.5, rely=0.5, anchor=CENTER) ok_btn = Button(frame_center, text="Ok") cancel_btn = Button(frame_center, text="Cancel") cancel_btn.pack() ok_btn.pack() root.mainloop()
[ "First, the size of frame_btn will be 1x1 because its only child frame_center is put inside the frame using .place() which does not adjust its size automatically.\nYou need to use root.rowconfigure(4, weight=1) and root.columnconfigure(1, weight=1) (as frame_center is put at row 4 and column 1) to let frame_center to fill the available space of frame_btn.\nSecond, the two buttons are put inside frame_center using .pack() so they will be by default packing vertically, not what you want them to be horizontally. So specify side=LEFT on both .pack() to pack them horizontally. Also it is better to specify the width and height options of the two buttons so that they have same size.\nThird, the frame_center is put at the center of frame_btn because .place(relx=0.5, rely=0.5, anchor=CENTER). To put it at the bottom, use .place(relx=0.5, rely=0.9, anchor=S) instead.\nFinally, you have imported tkinter twice:\nfrom tkinter import *\nimport tkinter as tk\n\nwildcard import is not recommended, so remove from tkinter import * and add prefix tk. to widget classes, like tk.Button.\nBelow is the modified code:\nfrom tkinter import ttk\nimport tkinter as tk\n\nroot = tk.Tk()\nroot.geometry('900x650+0+0')\nroot.title(\"SV Configuration\")\n#root.eval('tk::PlaceWindow . center')\n\nroot.rowconfigure(4, weight=1)\nroot.columnconfigure(1, weight=1)\n\nframe_btn = tk.Frame(root)\nframe_btn.grid(row=4, column=1, sticky=\"nswe\")\n\nframe_center = tk.Frame(frame_btn)\nframe_center.place(relx=0.5, rely=0.9, anchor=tk.S)\n\nok_btn = tk.Button(frame_center, text=\"Ok\", width=20, height=3)\ncancel_btn = tk.Button(frame_center, text=\"Cancel\", width=20, height=3)\n\ncancel_btn.pack(side=tk.LEFT, padx=10)\nok_btn.pack(side=tk.LEFT, padx=10)\n\nroot.mainloop()\n\nAnd the result:\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x", "tkinter" ]
stackoverflow_0074531069_python_python_3.x_tkinter.txt
Q: Python libgpiod vs gpiod packages in Linux? I wrote a little test program in Python to manipulate GPIO pins on an an Intel Up Xtreme i11. First running under NixOS, I brought in the package as "libgpiod" and things are working. (MacOS package managers also know "libgpiod".) Then I tried to port this to an Ubuntu world on the same hardware. But apt and apt-get know nothing of libgpiod, they only know gpiod. pip3, too. So I installed gpiod, but the discrepancies mount up… gpiod has a member "chip" rather than "Chip" chip.get_line gets Error 22 for any small integer I can find. What I lack is documentation. Is there something, somewhere, that clearly explains the distinction between these two packages that appear to be similar but are not? And what is actually the correct way of using the Ubuntu gpiod package in Python? BTW I am running as root in both cases. Here's the code (gpiod version): import gpiod, time # pins POWER = 9 chip=gpiod.chip('gpiochip0') power=chip.get_line(POWER) power.request(consumer="motor_movement", type=gpiod.LINE_REQ_DIR_OUT) def run(): delay = 1.0 try: #power.set_value(0) while True: power.set_value(1) time.sleep(delay) power.set_value(0) time.sleep(delay) finally: cleanup() def cleanup(): power.release() if __name__ == "__main__": run() A: What you refer to as "libgpiod" library are system packages based on this C library. From its documentation: libgpiod ======== libgpiod - C library and tools for interacting with the linux GPIO character device (gpiod stands for GPIO device) Since linux 4.8 the GPIO sysfs interface is deprecated. User space should use the character device instead. This library encapsulates the ioctl calls and data structures behind a straightforward API. The library also provides python3 bindings, which have probably been using. On Ubuntu you would install everything you need issuing: apt install python3-libgpiod. Your code using this library should have looked like this: import gpiod, time # pins POWER = 9 chip = gpiod.Chip('0') power = chip.get_line(POWER) power.request(consumer="motor_movement", type=gpiod.LINE_REQ_DIR_OUT) def run(): delay = 1.0 try: #power.set_value(0) while True: power.set_value(1) time.sleep(delay) power.set_value(0) time.sleep(delay) finally: cleanup() def cleanup(): power.release() if __name__ == "__main__": run() For further usage examples see the examples section on the repo. The python package gpiod available through pip from pypi.org is, "a pure Python library and has no dependencies on other packages". So no relation to the C library mentioned above. See also this question for differences or advantages using one or the other. There is a basic example provided as documentation. To make your code working using python3-gpiod (the library installed through pip), you should modify as follows: import gpiod, time # pins POWER = 9 chip=gpiod.chip('gpiochip0') power=chip.get_line(POWER) power_config = gpiod.line_request() power_config.consumer = "motor_movement" power_config.request_type = gpiod.line_request.DIRECTION_OUTPUT power.request(power_config) def run(): delay = 1.0 try: #power.set_value(0) while True: power.set_value(1) time.sleep(delay) power.set_value(0) time.sleep(delay) finally: cleanup() def cleanup(): power.release() if __name__ == "__main__": run() Alternatively try to use help(gpiod.line.get_line) or similar to troubleshoot your code.
Python libgpiod vs gpiod packages in Linux?
I wrote a little test program in Python to manipulate GPIO pins on an an Intel Up Xtreme i11. First running under NixOS, I brought in the package as "libgpiod" and things are working. (MacOS package managers also know "libgpiod".) Then I tried to port this to an Ubuntu world on the same hardware. But apt and apt-get know nothing of libgpiod, they only know gpiod. pip3, too. So I installed gpiod, but the discrepancies mount up… gpiod has a member "chip" rather than "Chip" chip.get_line gets Error 22 for any small integer I can find. What I lack is documentation. Is there something, somewhere, that clearly explains the distinction between these two packages that appear to be similar but are not? And what is actually the correct way of using the Ubuntu gpiod package in Python? BTW I am running as root in both cases. Here's the code (gpiod version): import gpiod, time # pins POWER = 9 chip=gpiod.chip('gpiochip0') power=chip.get_line(POWER) power.request(consumer="motor_movement", type=gpiod.LINE_REQ_DIR_OUT) def run(): delay = 1.0 try: #power.set_value(0) while True: power.set_value(1) time.sleep(delay) power.set_value(0) time.sleep(delay) finally: cleanup() def cleanup(): power.release() if __name__ == "__main__": run()
[ "What you refer to as \"libgpiod\" library are system packages based on this C library.\nFrom its documentation:\nlibgpiod\n========\n\n libgpiod - C library and tools for interacting with the linux GPIO\n character device (gpiod stands for GPIO device)\n\nSince linux 4.8 the GPIO sysfs interface is deprecated. User space should use\nthe character device instead. This library encapsulates the ioctl calls and\ndata structures behind a straightforward API.\n\nThe library also provides python3 bindings, which have probably been using.\nOn Ubuntu you would install everything you need issuing:\napt install python3-libgpiod.\nYour code using this library should have looked like this:\nimport gpiod, time\n\n# pins\nPOWER = 9\n\nchip = gpiod.Chip('0')\npower = chip.get_line(POWER)\npower.request(consumer=\"motor_movement\", type=gpiod.LINE_REQ_DIR_OUT)\n\ndef run():\n delay = 1.0\n try:\n #power.set_value(0)\n while True:\n power.set_value(1)\n time.sleep(delay)\n power.set_value(0)\n time.sleep(delay)\n finally:\n cleanup()\n\ndef cleanup():\n power.release()\n\nif __name__ == \"__main__\":\n run()\n\nFor further usage examples see the examples section on the repo.\nThe python package gpiod available through pip from pypi.org is, \"a pure Python library and has no dependencies on other packages\". So no relation to the C library mentioned above.\nSee also this question for differences or advantages using one or the other.\nThere is a basic example provided as documentation.\nTo make your code working using python3-gpiod (the library installed through pip), you should modify as follows:\nimport gpiod, time\n\n# pins\nPOWER = 9\n\nchip=gpiod.chip('gpiochip0')\npower=chip.get_line(POWER)\n\npower_config = gpiod.line_request()\npower_config.consumer = \"motor_movement\"\npower_config.request_type = gpiod.line_request.DIRECTION_OUTPUT\n\npower.request(power_config)\n\ndef run():\n delay = 1.0\n try:\n #power.set_value(0)\n while True:\n power.set_value(1)\n time.sleep(delay)\n power.set_value(0)\n time.sleep(delay)\n finally:\n cleanup()\n\ndef cleanup():\n power.release()\n\nif __name__ == \"__main__\":\n run()\n\nAlternatively try to use help(gpiod.line.get_line) or similar to troubleshoot your code.\n" ]
[ 0 ]
[]
[]
[ "gpio", "libgpiod", "python" ]
stackoverflow_0074352978_gpio_libgpiod_python.txt
Q: Can anyone help me with the problem. I am trying to read my csv file in jupyter notebook pwd ls import pandas as pd DF = pd.read_csv('~/downloads/world_mortality.csv') FileNotFoundError Traceback (most recent call last) Input In [10], in <cell line: 1>() ----> 1 DF = pd.read_csv('~/downloads/world_mortality.csv') File ~\anaconda3\lib\site-packages\pandas\util\_decorators.py:311, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs) 305 if len(args) > num_allow_args: 306 warnings.warn( 307 msg.format(arguments=arguments), 308 FutureWarning, 309 stacklevel=stacklevel, 310 ) --> 311 return func(*args, **kwargs) File ~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py:678, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, error_bad_lines, warn_bad_lines, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options) 663 kwds_defaults = _refine_defaults_read( 664 dialect, 665 delimiter, (...) 674 defaults={"delimiter": ","}, 675 ) 676 kwds.update(kwds_defaults) --> 678 return _read(filepath_or_buffer, kwds) File ~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py:575, in _read(filepath_or_buffer, kwds) 572 _validate_names(kwds.get("names", None)) 574 # Create the parser. --> 575 parser = TextFileReader(filepath_or_buffer, **kwds) 577 if chunksize or iterator: 578 return parser File ~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py:932, in TextFileReader.__init__(self, f, engine, **kwds) 929 self.options["has_index_names"] = kwds["has_index_names"] 931 self.handles: IOHandles | None = None --> 932 self._engine = self._make_engine(f, self.engine) File ~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py:1216, in TextFileReader._make_engine(self, f, engine) 1212 mode = "rb" 1213 # error: No overload variant of "get_handle" matches argument types 1214 # "Union[str, PathLike[str], ReadCsvBuffer[bytes], ReadCsvBuffer[str]]" 1215 # , "str", "bool", "Any", "Any", "Any", "Any", "Any" -> 1216 self.handles = get_handle( # type: ignore[call-overload] 1217 f, 1218 mode, 1219 encoding=self.options.get("encoding", None), 1220 compression=self.options.get("compression", None), 1221 memory_map=self.options.get("memory_map", False), 1222 is_text=is_text, 1223 errors=self.options.get("encoding_errors", "strict"), 1224 storage_options=self.options.get("storage_options", None), 1225 ) 1226 assert self.handles is not None 1227 f = self.handles.handle File ~\anaconda3\lib\site-packages\pandas\io\common.py:786, in get_handle(path_or_buf, mode, encoding, compression, memory_map, is_text, errors, storage_options) 781 elif isinstance(handle, str): 782 # Check whether the filename is to be opened in binary mode. 783 # Binary mode does not support 'encoding' and 'newline'. 
784 if ioargs.encoding and "b" not in ioargs.mode: 785 # Encoding --> 786 handle = open( 787 handle, 788 ioargs.mode, 789 encoding=ioargs.encoding, 790 errors=errors, 791 newline="", 792 ) 793 else: 794 # Binary mode 795 handle = open(handle, ioargs.mode) FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\user/downloads/world_mortality.csv' The final result says that it can't find the csv file in my downloads folder. Can anyone help me find the problem? I've been trying for so long, but nothing helps at all. A: Did you check that the file exists in Python? You can check that with os.path, using the os.path.exists(path) function.
Can anyone help me with this problem? I am trying to read my csv file in a Jupyter notebook
pwd ls import pandas as pd DF = pd.read_csv('~/downloads/world_mortality.csv') FileNotFoundError Traceback (most recent call last) Input In [10], in <cell line: 1>() ----> 1 DF = pd.read_csv('~/downloads/world_mortality.csv') File ~\anaconda3\lib\site-packages\pandas\util\_decorators.py:311, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs) 305 if len(args) > num_allow_args: 306 warnings.warn( 307 msg.format(arguments=arguments), 308 FutureWarning, 309 stacklevel=stacklevel, 310 ) --> 311 return func(*args, **kwargs) File ~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py:678, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, error_bad_lines, warn_bad_lines, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options) 663 kwds_defaults = _refine_defaults_read( 664 dialect, 665 delimiter, (...) 674 defaults={"delimiter": ","}, 675 ) 676 kwds.update(kwds_defaults) --> 678 return _read(filepath_or_buffer, kwds) File ~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py:575, in _read(filepath_or_buffer, kwds) 572 _validate_names(kwds.get("names", None)) 574 # Create the parser. --> 575 parser = TextFileReader(filepath_or_buffer, **kwds) 577 if chunksize or iterator: 578 return parser File ~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py:932, in TextFileReader.__init__(self, f, engine, **kwds) 929 self.options["has_index_names"] = kwds["has_index_names"] 931 self.handles: IOHandles | None = None --> 932 self._engine = self._make_engine(f, self.engine) File ~\anaconda3\lib\site-packages\pandas\io\parsers\readers.py:1216, in TextFileReader._make_engine(self, f, engine) 1212 mode = "rb" 1213 # error: No overload variant of "get_handle" matches argument types 1214 # "Union[str, PathLike[str], ReadCsvBuffer[bytes], ReadCsvBuffer[str]]" 1215 # , "str", "bool", "Any", "Any", "Any", "Any", "Any" -> 1216 self.handles = get_handle( # type: ignore[call-overload] 1217 f, 1218 mode, 1219 encoding=self.options.get("encoding", None), 1220 compression=self.options.get("compression", None), 1221 memory_map=self.options.get("memory_map", False), 1222 is_text=is_text, 1223 errors=self.options.get("encoding_errors", "strict"), 1224 storage_options=self.options.get("storage_options", None), 1225 ) 1226 assert self.handles is not None 1227 f = self.handles.handle File ~\anaconda3\lib\site-packages\pandas\io\common.py:786, in get_handle(path_or_buf, mode, encoding, compression, memory_map, is_text, errors, storage_options) 781 elif isinstance(handle, str): 782 # Check whether the filename is to be opened in binary mode. 783 # Binary mode does not support 'encoding' and 'newline'. 
784 if ioargs.encoding and "b" not in ioargs.mode: 785 # Encoding --> 786 handle = open( 787 handle, 788 ioargs.mode, 789 encoding=ioargs.encoding, 790 errors=errors, 791 newline="", 792 ) 793 else: 794 # Binary mode 795 handle = open(handle, ioargs.mode) FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\user/downloads/world_mortality.csv' The final result says that it can't find the csv file in my downloads folder. Can anyone help me find the problem? I've been trying for so long, but nothing helps at all.
[ "Did you check that the file exists in python? You can check that with os.path using the exist function os.path.exists(path).\n" ]
[ 0 ]
[]
[]
[ "excel", "jupyter_notebook", "python", "python_3.x" ]
stackoverflow_0074533354_excel_jupyter_notebook_python_python_3.x.txt
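A small sketch of the check the answer above suggests, plus a directory listing to spot the real file name; the exact folder layout on the machine is an assumption.

import os

path = os.path.expanduser('~/downloads/world_mortality.csv')
print(os.path.exists(path))                      # False means the path is wrong

downloads = os.path.expanduser('~/Downloads')    # Windows folder is usually "Downloads"
if os.path.isdir(downloads):
    print(os.listdir(downloads))                 # see what the file is really called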
Q: Django and channels, expose model data via websockets after save I am new to websocket and channel with django. In my django project i would to expose saved data after a post_save event occur in a specific model via websocket. I have django 3.2 and i install: channels==3.0.4 channels-redis==3.3.1 then in my settings.py i add channels to my app list and set: CHANNEL_LAYERS = { 'default': { 'BACKEND': 'asgi_redis.RedisChannelLayer', 'CONFIG': { 'hosts': ["redis://:myredishost:6379/0"], }, 'ROUTING': 'backend.routing.channel_routing', } } chenge my application from WSGI to ASGI: ASGI_APPLICATION = "backend.asgi.application" then i try to create the routing.py file like this: from channels.routing import route from alarms.consumers import ws_connect, ws_disconnect channel_routing = [ route('websocket.connect', ws_connect), route('websocket.disconnect', ws_disconnect), ] and connect and disconnect methods (insert every connected user ubto User group for now): from channels import Group def ws_connect(message): Group('users').add(message.reply_channel) def ws_disconnect(message): Group('users').discard(message.reply_channel) now i have an Results_Alarms model: # Model of Alarms data class Results_Alarm(models.Model): id = models.AutoField(primary_key=True) var_id = models.ForeignKey('modbus.ModbusVariable', null=True, on_delete=models.SET_NULL) calc_id = models.ForeignKey(CalcGroup, null=True, on_delete=models.SET_NULL) templ_id = models.ForeignKey(AlarmsTemplate, null=True, on_delete=models.SET_NULL) a_trigger = models.CharField(max_length=200, verbose_name="Alarm trigger") dtopen = models.DateTimeField(verbose_name="Date of error") e_status = models.ForeignKey(AlarmStatus, null=True, on_delete=models.SET_NULL) dtclose = models.DateTimeField(verbose_name="Date of resolution", null=True, blank=True) u_involved = models.ForeignKey('accounts.CustomUser', related_name='oumanager', on_delete=models.CASCADE, null=True) ... I also create the signals.py file for manage the post_save event: @receiver(post_save, sender=Results_Alarm) def ws_alarms_data(sender, instance, created, **kwargs): #?? What here for send via websocket? My problem now is: How can i trigger post_save event and send via websocket the last saved data (maybe in json format)? i have to create a router? but how? Sorry but i am searching online without find an answer i can understand. So many thanks in advance A: I don't know if you found the answer to your question. If I understood correctly, you want to send a message to the channel from the signals. If this is the case you can use the get_channel_layer function in the doc from channels.layers import get_channel_layer channel_layer = get_channel_layer() await channel_layer.group_send( "mychannel", {"type": "chat.message", "text": "42"}, )
Django and channels, expose model data via websockets after save
I am new to websocket and channel with django. In my django project i would to expose saved data after a post_save event occur in a specific model via websocket. I have django 3.2 and i install: channels==3.0.4 channels-redis==3.3.1 then in my settings.py i add channels to my app list and set: CHANNEL_LAYERS = { 'default': { 'BACKEND': 'asgi_redis.RedisChannelLayer', 'CONFIG': { 'hosts': ["redis://:myredishost:6379/0"], }, 'ROUTING': 'backend.routing.channel_routing', } } chenge my application from WSGI to ASGI: ASGI_APPLICATION = "backend.asgi.application" then i try to create the routing.py file like this: from channels.routing import route from alarms.consumers import ws_connect, ws_disconnect channel_routing = [ route('websocket.connect', ws_connect), route('websocket.disconnect', ws_disconnect), ] and connect and disconnect methods (insert every connected user ubto User group for now): from channels import Group def ws_connect(message): Group('users').add(message.reply_channel) def ws_disconnect(message): Group('users').discard(message.reply_channel) now i have an Results_Alarms model: # Model of Alarms data class Results_Alarm(models.Model): id = models.AutoField(primary_key=True) var_id = models.ForeignKey('modbus.ModbusVariable', null=True, on_delete=models.SET_NULL) calc_id = models.ForeignKey(CalcGroup, null=True, on_delete=models.SET_NULL) templ_id = models.ForeignKey(AlarmsTemplate, null=True, on_delete=models.SET_NULL) a_trigger = models.CharField(max_length=200, verbose_name="Alarm trigger") dtopen = models.DateTimeField(verbose_name="Date of error") e_status = models.ForeignKey(AlarmStatus, null=True, on_delete=models.SET_NULL) dtclose = models.DateTimeField(verbose_name="Date of resolution", null=True, blank=True) u_involved = models.ForeignKey('accounts.CustomUser', related_name='oumanager', on_delete=models.CASCADE, null=True) ... I also create the signals.py file for manage the post_save event: @receiver(post_save, sender=Results_Alarm) def ws_alarms_data(sender, instance, created, **kwargs): #?? What here for send via websocket? My problem now is: How can i trigger post_save event and send via websocket the last saved data (maybe in json format)? i have to create a router? but how? Sorry but i am searching online without find an answer i can understand. So many thanks in advance
[ "I don't know if you found the answer to your question. If I understood correctly, you want to send a message to the channel from the signals.\nIf this is the case you can use the get_channel_layer function in the doc\nfrom channels.layers import get_channel_layer\nchannel_layer = get_channel_layer()\n\nawait channel_layer.group_send(\n \"mychannel\",\n {\"type\": \"chat.message\", \"text\": \"42\"},\n)\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_channels", "django_models", "python", "websocket" ]
stackoverflow_0070983044_django_django_channels_django_models_python_websocket.txt
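Since a post_save receiver runs as ordinary synchronous code, the group_send call shown in the answer above would normally be wrapped with async_to_sync inside signals.py. A rough sketch; the group name "users" and the "alarm.message" type (for which the consumer would need a matching handler) are assumptions, not part of the answer.

import json
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=Results_Alarm)
def ws_alarms_data(sender, instance, created, **kwargs):
    channel_layer = get_channel_layer()
    async_to_sync(channel_layer.group_send)(
        "users",
        {"type": "alarm.message",
         "text": json.dumps({"id": instance.id, "trigger": instance.a_trigger})},
    )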
Q: How to Run an ML model with Django on Live server I have a Django project that uses a public ML model("deepset/roberta-base-squad2") to make some predictions. The server receives a request with parameters which trigger a queued function. This function is what makes the predictions. But this works only on my local. Once I push my project to a live server, the model no starts to run but never completes. I have tried to set up the project using different guides, to avoid my project downloading the ML model every time a request is made, but it doesn't solve it. I don't know what else to do, please. If there's any extra information needed, I can provide. Here is my setup as it is now: views.py class BotView(GenericAPIView): serializer_class = BotSerializer def post(self, request, *args, **kwargs): try: serializer = self.serializer_class(data=request.data) serializer.is_valid(raise_exception=True) serializer.save() print(serializer.data) return Response(data=serializer.data, status=status.HTTP_200_OK) except Exception as e: print(str(e)) return Response(data=str(e), status=status.HTTP_400_BAD_REQUEST) serializers.py from .tasks import upload_to_ai class BotSerializer(serializers.Serializer): questions = serializers.ListField(required=True, write_only=True) user_info = serializers.CharField(required=True, write_only=True) merchant = serializers.CharField(required=True, write_only=True) user_id = serializers.IntegerField(required=True, write_only=True) def create(self, validated_data): # call ai and run async upload_to_ai.delay(validated_data['questions'], validated_data['user_info'], validated_data['merchant'], validated_data['user_id']) return "successful" tasks.py from bot.apps import BotConfig from model.QA_Model import predict @shared_task() def upload_to_ai(questions:list, user_info:str, merchant:str, user_id:int): model_predictions = predict(questions, BotConfig.MODEL, user_info) print(model_predictions) return apps.py class BotConfig(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = 'bot' reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2", top_k=3, use_gpu=False) #model pipeline MODEL = Pipeline() MODEL.add_node(component= reader, name="Reader", inputs=["Query"]) QA_models.py from haystack import Document import pandas as pd def predict(query:list, model, context): ''' This function predicts the answer to question passed as query Arguments: query: This is/are the question you intend to ask model: This is the model for the prediction context: This is the data from which the model will find it's answers ''' result = model.run_batch(queries=query, documents=[Document(content=context)]) response = convert_to_dict(result['answers'], query) return response Every time I send a request, the ML model begins to run as shown in the image but it never goes past 0%. A: I have solved this. So all the while, I had been running the ML model in a background process using Celery but it worked when I ran it on the main thread. I don't know yet why it wouldn't run in the background process though.
How to Run an ML model with Django on Live server
I have a Django project that uses a public ML model("deepset/roberta-base-squad2") to make some predictions. The server receives a request with parameters which trigger a queued function. This function is what makes the predictions. But this works only on my local. Once I push my project to a live server, the model no starts to run but never completes. I have tried to set up the project using different guides, to avoid my project downloading the ML model every time a request is made, but it doesn't solve it. I don't know what else to do, please. If there's any extra information needed, I can provide. Here is my setup as it is now: views.py class BotView(GenericAPIView): serializer_class = BotSerializer def post(self, request, *args, **kwargs): try: serializer = self.serializer_class(data=request.data) serializer.is_valid(raise_exception=True) serializer.save() print(serializer.data) return Response(data=serializer.data, status=status.HTTP_200_OK) except Exception as e: print(str(e)) return Response(data=str(e), status=status.HTTP_400_BAD_REQUEST) serializers.py from .tasks import upload_to_ai class BotSerializer(serializers.Serializer): questions = serializers.ListField(required=True, write_only=True) user_info = serializers.CharField(required=True, write_only=True) merchant = serializers.CharField(required=True, write_only=True) user_id = serializers.IntegerField(required=True, write_only=True) def create(self, validated_data): # call ai and run async upload_to_ai.delay(validated_data['questions'], validated_data['user_info'], validated_data['merchant'], validated_data['user_id']) return "successful" tasks.py from bot.apps import BotConfig from model.QA_Model import predict @shared_task() def upload_to_ai(questions:list, user_info:str, merchant:str, user_id:int): model_predictions = predict(questions, BotConfig.MODEL, user_info) print(model_predictions) return apps.py class BotConfig(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = 'bot' reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2", top_k=3, use_gpu=False) #model pipeline MODEL = Pipeline() MODEL.add_node(component= reader, name="Reader", inputs=["Query"]) QA_models.py from haystack import Document import pandas as pd def predict(query:list, model, context): ''' This function predicts the answer to question passed as query Arguments: query: This is/are the question you intend to ask model: This is the model for the prediction context: This is the data from which the model will find it's answers ''' result = model.run_batch(queries=query, documents=[Document(content=context)]) response = convert_to_dict(result['answers'], query) return response Every time I send a request, the ML model begins to run as shown in the image but it never goes past 0%.
[ "I have solved this. So all the while, I had been running the ML model in a background process using Celery but it worked when I ran it on the main thread. I don't know yet why it wouldn't run in the background process though.\n" ]
[ 0 ]
[]
[]
[ "django", "machine_learning", "python" ]
stackoverflow_0074512457_django_machine_learning_python.txt
Q: How to get timezone rules version used by datetime? In John Skeet's blog post about handling timezone information when storing future datetimes, he suggests storing the version of timezone rules in the database along with the local time and timezone id. His example: ID: 1 Name: KindConf LocalStart: 2022-07-10T09:00:00 Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands TimeZoneId: Europe/Amsterdam UtcStart: 2022-07-10T07:00:00Z TimeZoneRules: 2019a In python, how do you get the version of timezone rules used by datetime? For example: date.astimezone(zoneinfo.ZoneInfo('US/Pacific')) A: The method of getting timezone rules depends on the library used to create the tzinfo instance. If using pytz, the timezone database used is the Olson timezone database. import pytz print(pytz.OLSEN_VERSION) # e.g. 2021a When using zoneinfo, the system's timezone data is used by default. From the docs: By default, zoneinfo uses the system’s time zone data if available; if no system time zone data is available, the library will fall back to using the first-party tzdata package available on PyPI. There are two options: Find a platform specific method of getting the version number Setup zoneinfo with the tzdata library, which makes the timezone version number readily available. For option 2: To setup zoneinfo to ignore the system data and use the tzdata package instead, set the environment variable PYTHONTZPATH="" The IANA version number is exported by tzdata: import tzdata print(tzdata.IANA_VERSION). # e.g. 2021e A: This is a POSIX-only solution, works with tzdata v.2018a+: import os from pathlib import Path import re import zoneinfo def get_system_tzdata_version(): """ Get the used tzdata version NOTE: Only supports tzdata versions 2018a and later, as the version info was added to tzdata.zi in tzdata version 2018a. Returns ------- version: str | None The version of the system tzdata. If version could not be read from tzdata.zi, return None. """ # This is a file that contains a copy of all data # in the tzdata database. It has been part of the # tzdata since release 2017c. tzdata_zi_fname = "tzdata.zi" if os.name == "nt": # Windows raise NotImplementedError("Currently, only POSIX is supported") # Assuming we are in posix system for p in zoneinfo.TZPATH: p = Path(p) tzdata_zi = p / tzdata_zi_fname if p.exists() and tzdata_zi.exists(): break with open(tzdata_zi) as f: contents = f.read() match = re.match(r"^\s*#\s*version\s+(?P<version>.*)\n", contents) if not match: # Could not find version string from the tzdata.zi file return None return match.group("version") Explanation zoneinfo by default uses the system timezone information, which on POSIX systems is coming from the tzdata (system) package. The locations where zoneinfo reads for tzdata data are listed in zoneinfo.TZPATH The version info of the tzdata database has been written to a file called tzdata.zi since tzdata version 2018a. [tzdata NEWS]
How to get timezone rules version used by datetime?
In John Skeet's blog post about handling timezone information when storing future datetimes, he suggests storing the version of timezone rules in the database along with the local time and timezone id. His example: ID: 1 Name: KindConf LocalStart: 2022-07-10T09:00:00 Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands TimeZoneId: Europe/Amsterdam UtcStart: 2022-07-10T07:00:00Z TimeZoneRules: 2019a In python, how do you get the version of timezone rules used by datetime? For example: date.astimezone(zoneinfo.ZoneInfo('US/Pacific'))
[ "The method of getting timezone rules depends on the library used to create the tzinfo instance.\nIf using pytz, the timezone database used is the Olson timezone database.\nimport pytz\nprint(pytz.OLSEN_VERSION) # e.g. 2021a\n\nWhen using zoneinfo, the system's timezone data is used by default.\nFrom the docs: By default, zoneinfo uses the system’s time zone data if available; if no system time zone data is available, the library will fall back to using the first-party tzdata package available on PyPI.\nThere are two options:\n\nFind a platform specific method of getting the version number\nSetup zoneinfo with the tzdata library, which makes the timezone version number readily available.\n\nFor option 2:\nTo setup zoneinfo to ignore the system data and use the tzdata package instead, set the environment variable PYTHONTZPATH=\"\"\nThe IANA version number is exported by tzdata:\nimport tzdata\nprint(tzdata.IANA_VERSION). # e.g. 2021e\n\n", "This is a POSIX-only solution, works with tzdata v.2018a+:\nimport os\nfrom pathlib import Path \nimport re\nimport zoneinfo \n\n\ndef get_system_tzdata_version():\n \"\"\"\n Get the used tzdata version\n\n NOTE: Only supports tzdata versions 2018a and\n later, as the version info was added to tzdata.zi\n in tzdata version 2018a.\n\n Returns\n -------\n version: str | None\n The version of the system tzdata.\n If version could not be read from tzdata.zi,\n return None.\n \"\"\"\n\n # This is a file that contains a copy of all data\n # in the tzdata database. It has been part of the\n # tzdata since release 2017c.\n tzdata_zi_fname = \"tzdata.zi\"\n\n if os.name == \"nt\": # Windows\n raise NotImplementedError(\"Currently, only POSIX is supported\")\n\n # Assuming we are in posix system\n for p in zoneinfo.TZPATH:\n p = Path(p)\n tzdata_zi = p / tzdata_zi_fname\n if p.exists() and tzdata_zi.exists():\n break\n\n with open(tzdata_zi) as f:\n contents = f.read()\n\n match = re.match(r\"^\\s*#\\s*version\\s+(?P<version>.*)\\n\", contents)\n\n if not match:\n # Could not find version string from the tzdata.zi file\n return None\n\n return match.group(\"version\")\n\nExplanation\n\nzoneinfo by default uses the system timezone information, which on POSIX systems is coming from the tzdata (system) package.\nThe locations where zoneinfo reads for tzdata data are listed in zoneinfo.TZPATH\nThe version info of the tzdata database has been written to a file called tzdata.zi since tzdata version 2018a. [tzdata NEWS]\n\n" ]
[ 2, 0 ]
[]
[]
[ "datetime", "python", "python_3.x", "timezone", "zoneinfo" ]
stackoverflow_0070807339_datetime_python_python_3.x_timezone_zoneinfo.txt
Q: Is there abstract syntax tree (AST) in python extension module (files with suffix .so)? I can check AST in python file: python3 -m ast some_file.py But, when I compile it with nuitka: nuitka3 --module some_file.py I get some_file.so extension module and when I run python3 -m ast some_file.so I get error. So, question my is: is there abstract syntax tree (AST) in python extension module? A: A .so is almost certainly a Linux or MacOSX Shared Object (as the tag indicates). It almost certainly does not contain Python byte code, the usual content is raw binary instructions in the format that your CPU understands. Viewing the symbols in a .so file
Is there abstract syntax tree (AST) in python extension module (files with suffix .so)?
I can check AST in python file: python3 -m ast some_file.py But, when I compile it with nuitka: nuitka3 --module some_file.py I get some_file.so extension module and when I run python3 -m ast some_file.so I get error. So, question my is: is there abstract syntax tree (AST) in python extension module?
[ "A .so is almost certainly a Linux or MacOSX Shared Object (as the tag indicates). It almost certainly does not contain Python byte code, the usual content is raw binary instructions in the format that your CPU understands.\nViewing the symbols in a .so file\n" ]
[ 1 ]
[]
[]
[ ".so", "abstract_syntax_tree", "nuitka", "python", "python_3.x" ]
stackoverflow_0074533424_.so_abstract_syntax_tree_nuitka_python_python_3.x.txt
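A short sketch expanding on the answer (not from the thread): the AST can only be built from Python source, so for a compiled extension module the most you can do is inspect its exported symbols (for example nm -D some_file.so on Linux usually shows the PyInit_ entry point).

import ast
import importlib.util

spec = importlib.util.find_spec("some_file")  # hypothetical module name from the question
if spec is not None and spec.origin and spec.origin.endswith(".py"):
    with open(spec.origin) as f:
        tree = ast.parse(f.read())  # the AST is built from source text
    print(ast.dump(tree))
else:
    # e.g. origin ends with .so: there is no Python source, so no AST can be produced
    print("compiled extension module, no AST available:", spec.origin if spec else None)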
Q: Django REST Framework - How to get current user in serializer I have TransactionSerializer: class TransactionSerializer(serializers.ModelSerializer): user = UserHider(read_only=True) category_choices = tuple(UserCategories.objects.filter(user=**???**).values_list('category_name', flat=True)) category = serializers.ChoiceField(choices=category_choices) def create(self, validated_data): user = self.context['request'].user payment_amount = self.validated_data['payment_amount'] category = self.validated_data['category'] organization = self.validated_data['organization'] description = self.validated_data['description'] return Transaction.objects.create(user=user, payment_amount=payment_amount, category=category, organization=organization, description=description) class Meta: model = Transaction fields = ('user', 'payment_amount', 'date', 'time', 'category', 'organization', 'description') This totally does the job, however I need that instead of "???" the current user's ID, but I don't quite understand what basic ModelSerializer method I can use so as not to damage anything, but at the same time get the current user as a variable in order to substitute it later in the filtering place (in this case, categories are filtered if I put some specific user ID which is already registered, then on the DRF form, when creating an object, I get a drop-down list with categories specific only to my user)? I have already tried to do this through the get_user() method, and also tried to create a variable inherited from another serializer, which defines just the user ID, but I received various kinds of errors.
Django REST Framework - How to get current user in serializer
I have TransactionSerializer: class TransactionSerializer(serializers.ModelSerializer): user = UserHider(read_only=True) category_choices = tuple(UserCategories.objects.filter(user=**???**).values_list('category_name', flat=True)) category = serializers.ChoiceField(choices=category_choices) def create(self, validated_data): user = self.context['request'].user payment_amount = self.validated_data['payment_amount'] category = self.validated_data['category'] organization = self.validated_data['organization'] description = self.validated_data['description'] return Transaction.objects.create(user=user, payment_amount=payment_amount, category=category, organization=organization, description=description) class Meta: model = Transaction fields = ('user', 'payment_amount', 'date', 'time', 'category', 'organization', 'description') This totally does the job, however I need that instead of "???" the current user's ID, but I don't quite understand what basic ModelSerializer method I can use so as not to damage anything, but at the same time get the current user as a variable in order to substitute it later in the filtering place (in this case, categories are filtered if I put some specific user ID which is already registered, then on the DRF form, when creating an object, I get a drop-down list with categories specific only to my user)? I have already tried to do this through the get_user() method, and also tried to create a variable inherited from another serializer, which defines just the user ID, but I received various kinds of errors.
[]
[]
[ "UserCategories.objects.filter(user=user.id)\n\nI guess this is what you want?? your current user id\n" ]
[ -1 ]
[ "authentication", "django", "django_rest_framework", "python" ]
stackoverflow_0074532716_authentication_django_django_rest_framework_python.txt
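The question above was left without an accepted answer; as an illustrative sketch only (a common DRF pattern, not confirmed by the thread), the choices can be built per request in __init__ from self.context['request'].user, since class-level attributes are evaluated once at import time:

class TransactionSerializer(serializers.ModelSerializer):
    user = UserHider(read_only=True)
    category = serializers.ChoiceField(choices=())  # filled per instance below

    class Meta:
        model = Transaction
        fields = ('user', 'payment_amount', 'date', 'time',
                  'category', 'organization', 'description')

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        request = self.context.get('request')
        if request is not None and request.user.is_authenticated:
            # limit the drop-down to the current user's own categories
            self.fields['category'].choices = tuple(
                UserCategories.objects.filter(user=request.user)
                .values_list('category_name', flat=True)
            )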
Q: How to perform sorting using pyreadstat library I am using pyreadstat library to read sas dataset files(*.sas7bdat, *.xpt). import pyreadstat as pd import pandas as pda import sys import json FILE_LOC = sys.argv[1] PAGE_SIZE = 100 PAGE_NO = int(sys.argv[2])-1 START_FROM_ROW = (PAGE_NO * PAGE_SIZE) pda.set_option('display.max_columns',None) pda.set_option('display.width',None) pda.set_option('display.max_rows',None) df = pd.read_sas7bdat(FILE_LOC, row_offset=START_FROM_ROW, row_limit=PAGE_SIZE,output_format='dict') finalList = [] for key in df[0]: l = list(map(lambda x: str(x) if str(x)=="nan" else x, df[0][key].tolist())) nparray = {key:l} finalList.append(nparray) return json.dumps(finalList) How to perform sorting using pyreadstat library? A: Unfortunately Pyreadstat cannot return sorted data. You need to read the sas7bdat file data into memory and then you can sort it. In order to sort, take into consideration that Pyreadstat returns a tuple of a pandas dataframe and a metadata object. Once you have the dataframe you can sort it by one or multiple columns using the sort_values method . Therefore it is better to get a dataframe rather than a dictionary in this case. df, meta = pd.read_sas7bdat(FILE_LOC, row_offset=START_FROM_ROW, row_limit=PAGE_SIZE) #sort df_sorted = df.sort_values(["columnA", "columnB"]) #replace nans df = df.fillna("nan") # you can directly convert to json #check the options in the documentation, it may give you something different as you want. out = df.to_json() # otherwise transform to dict and build your json as before out = df.to_dict(orient='list')
How to perform sorting using pyreadstat library
I am using pyreadstat library to read sas dataset files(*.sas7bdat, *.xpt). import pyreadstat as pd import pandas as pda import sys import json FILE_LOC = sys.argv[1] PAGE_SIZE = 100 PAGE_NO = int(sys.argv[2])-1 START_FROM_ROW = (PAGE_NO * PAGE_SIZE) pda.set_option('display.max_columns',None) pda.set_option('display.width',None) pda.set_option('display.max_rows',None) df = pd.read_sas7bdat(FILE_LOC, row_offset=START_FROM_ROW, row_limit=PAGE_SIZE,output_format='dict') finalList = [] for key in df[0]: l = list(map(lambda x: str(x) if str(x)=="nan" else x, df[0][key].tolist())) nparray = {key:l} finalList.append(nparray) return json.dumps(finalList) How to perform sorting using pyreadstat library?
[ "Unfortunately Pyreadstat cannot return sorted data. You need to read the sas7bdat file data into memory and then you can sort it.\nIn order to sort, take into consideration that Pyreadstat returns a tuple of a pandas dataframe and a metadata object. Once you have the dataframe you can sort it by one or multiple columns using the sort_values method . Therefore it is better to get a dataframe rather than a dictionary in this case.\ndf, meta = pd.read_sas7bdat(FILE_LOC, row_offset=START_FROM_ROW, row_limit=PAGE_SIZE)\n#sort\ndf_sorted = df.sort_values([\"columnA\", \"columnB\"])\n#replace nans\ndf = df.fillna(\"nan\")\n# you can directly convert to json\n#check the options in the documentation, it may give you something different as you want.\nout = df.to_json()\n# otherwise transform to dict and build your json as before\nout = df.to_dict(orient='list')\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074514147_dataframe_pandas_python.txt
Q: Remove [255,255,255] entries from list of image RGB values I reshaped an image (included below) as a list of pixels, and now I want to remove the black ones (with value [255,255,255]). What is an efficient way to do it? I tried using IM[IM != [255,255,255]] and I got a list of values, instead of a list of value triplets. Here is the code I'm using: import cv2 import numpy as np IM = cv2.imread('Test_image.png') image = cv2.cvtColor(IM, cv2.COLOR_BGR2RGB) # reshape the image to be a list of pixels image_vec = np.array(image.reshape((image.shape[0] * image.shape[1], 3))) image_clean = image_vec[image_vec != [255,255,255]] print(image_clean) A: The issue is that numpy automatically does array-boradcasting, so using IM != [255,255,255] will compare each element to [255,255,255] and return a boolean array with the same shape as the one with the image data. Using this as a mask will return the values as 1D array. An easy way to fix this is to use np.all: image_vec[~ np.all(image_vec == 255, axis=-1)]
Remove [255,255,255] entries from list of image RGB values
I reshaped an image (included below) as a list of pixels, and now I want to remove the black ones (with value [255,255,255]). What is an efficient way to do it? I tried using IM[IM != [255,255,255]] and I got a list of values, instead of a list of value triplets. Here is the code I'm using: import cv2 import numpy as np IM = cv2.imread('Test_image.png') image = cv2.cvtColor(IM, cv2.COLOR_BGR2RGB) # reshape the image to be a list of pixels image_vec = np.array(image.reshape((image.shape[0] * image.shape[1], 3))) image_clean = image_vec[image_vec != [255,255,255]] print(image_clean)
[ "The issue is that numpy automatically does array-boradcasting, so using IM != [255,255,255] will compare each element to [255,255,255] and return a boolean array with the same shape as the one with the image data. Using this as a mask will return the values as 1D array.\nAn easy way to fix this is to use np.all:\nimage_vec[~ np.all(image_vec == 255, axis=-1)]\n\n" ]
[ 2 ]
[]
[]
[ "image", "list", "python" ]
stackoverflow_0074533331_image_list_python.txt
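A tiny self-contained illustration of the broadcasting point in the answer, using made-up pixel values:

import numpy as np

image_vec = np.array([[255, 255, 255], [10, 20, 30], [255, 255, 255]])
print((image_vec != [255, 255, 255]).shape)   # (3, 3): an element-wise mask, not one flag per pixel
mask = ~np.all(image_vec == 255, axis=-1)     # (3,): one boolean per pixel
print(image_vec[mask])                        # [[10 20 30]]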
Q: Cannot compute simple gradient of lambda function in JAX I'm trying to compute the gradient of a lambda function that involves other gradients of functions, but the computation is hanging and I do not understand why. In particular, the code below successfully computes f_next, but not its derivative (penultimate and last line). Any help would be appreciated import jax import jax.numpy as jnp # Model parameters γ = 1.5 k = 0.1 μY = 0.03 σ = 0.03 λ = 0.1 ωb = μY/λ # PDE params. σω = σ dt =0.01 IC = lambda ω: jnp.exp(-(1-γ)*ω) f = [IC] f_x= jax.grad(f[0]) #first derivative f_xx= jax.grad(jax.grad(f[0]))#second derivative f_old = f[0] f_next = lambda ω: f_old(ω) + 100*dt * ( (0.5*σω**2)*f_xx(ω) - λ*(ω-ωb)*f_x(ω) - k*f_old(ω) + jnp.exp(-(1-γ)*ω)) print(f_next(0.)) f.append(f_next) f_x= jax.grad(f[1]) #first derivative print(f_x(0.)) A: It is because you're trying to define f_x using f_x in penultimate line so you are trying to compute gradient indefinitely. If you change it by: new_f_x = jax.grad(f[1]) it will work. By the way, even if in your case the model parameters are constants, your functions have side effects (impure) and should not be grad them at this form. Instead you should add the parameters in your functions like that: # Model parameters params = {'γ': 1.5, 'k': 0.1, 'μY': 0.03, 'σ': 0.03, 'λ': 0.1, 'ωb': 0.03 / 0.1} IC = lambda ω, params: jnp.exp(-(1-params['γ']) * ω) def f_next(ω, params): γ = params['γ'] k = params['k'] σ = params['σ'] λ = params['λ'] ωb = params['ωb'] # PDE params. σω = σ dt = 0.01 f_x = jax.grad(IC) f_xx = jax.grad(jax.grad(IC)) return f_old(ω, params) + 100*dt * ( (0.5 * σω**2) * f_xx(ω, params) - λ * (ω-ωb) * f_x(ω, params) - k * f_old(ω, params) + jnp.exp(-(1-γ) * ω) ) f = [IC] f_old = f[0] print(f_next(0., params)) f.append(f_next) new_f_x = jax.grad(f[1]) print(new_f_x(0., params)) Now you can compute the corrects gradients with other parameters with the same functions. You can even change the parameters inside f_next if needed. Note that using a dictionary of parameters as function input is very classic in Jax.
Cannot compute simple gradient of lambda function in JAX
I'm trying to compute the gradient of a lambda function that involves other gradients of functions, but the computation is hanging and I do not understand why. In particular, the code below successfully computes f_next, but not its derivative (penultimate and last line). Any help would be appreciated import jax import jax.numpy as jnp # Model parameters γ = 1.5 k = 0.1 μY = 0.03 σ = 0.03 λ = 0.1 ωb = μY/λ # PDE params. σω = σ dt =0.01 IC = lambda ω: jnp.exp(-(1-γ)*ω) f = [IC] f_x= jax.grad(f[0]) #first derivative f_xx= jax.grad(jax.grad(f[0]))#second derivative f_old = f[0] f_next = lambda ω: f_old(ω) + 100*dt * ( (0.5*σω**2)*f_xx(ω) - λ*(ω-ωb)*f_x(ω) - k*f_old(ω) + jnp.exp(-(1-γ)*ω)) print(f_next(0.)) f.append(f_next) f_x= jax.grad(f[1]) #first derivative print(f_x(0.))
[ "It is because you're trying to define f_x using f_x in penultimate line so you are trying to compute gradient indefinitely. If you change it by:\nnew_f_x = jax.grad(f[1])\n\nit will work.\nBy the way, even if in your case the model parameters are constants, your functions have side effects (impure) and should not be grad them at this form. Instead you should add the parameters in your functions like that:\n# Model parameters\nparams = {'γ': 1.5,\n 'k': 0.1,\n 'μY': 0.03,\n 'σ': 0.03,\n 'λ': 0.1,\n 'ωb': 0.03 / 0.1}\n\nIC = lambda ω, params: jnp.exp(-(1-params['γ']) * ω)\n\n\ndef f_next(ω, params):\n γ = params['γ']\n k = params['k']\n σ = params['σ']\n λ = params['λ']\n ωb = params['ωb']\n\n # PDE params.\n σω = σ\n dt = 0.01\n\n f_x = jax.grad(IC)\n f_xx = jax.grad(jax.grad(IC))\n return f_old(ω, params) + 100*dt * (\n (0.5 * σω**2) * f_xx(ω, params) - λ * (ω-ωb) * f_x(ω, params)\n - k * f_old(ω, params) + jnp.exp(-(1-γ) * ω)\n )\n\nf = [IC]\nf_old = f[0]\n\nprint(f_next(0., params))\nf.append(f_next)\n\nnew_f_x = jax.grad(f[1])\nprint(new_f_x(0., params))\n\nNow you can compute the corrects gradients with other parameters with the same functions. You can even change the parameters inside f_next if needed. Note that using a dictionary of parameters as function input is very classic in Jax.\n" ]
[ 0 ]
[]
[]
[ "autograd", "jax", "python" ]
stackoverflow_0074532784_autograd_jax_python.txt
Q: Add Categorical Column with Specific Count I'm trying to create a new categorical column of countries with specific percentage values. Take the following dataset, for instance: df = sns.load_dataset("titanic") I'm trying the following script to get the new column: country = ['UK', 'Ireland', 'France'] df["country"] = np.random.choice(country, len(df)) df["country"].value_counts(normalize=True) UK 0.344557 Ireland 0.328844 France 0.326599 However, I'm getting all the countries with equal count. I want specific count for each country: Desired Output df["country"].value_counts(normalize=True) UK 0.91 Ireland 0.06 France 0.03 What would be the ideal way of getting the desired output? Any suggestions would be appreciated. Thanks! A: Do you want to change the probabilities of numpy.random.choice? df["country"] = np.random.choice(country, len(df), p=[0.91, 0.06, 0.03]) df["country"].value_counts(normalize=True) Output: UK 0.902357 Ireland 0.058361 France 0.039282 Name: country, dtype: float64 If you want a exact number of values (within the limit of the precision): p = [0.91, 0.06, 0.03] r = (np.array(p)*len(df)).round().astype(int) # the sum MUST be equal to len(df) # or # r = [811, 53, 27] a = np.repeat(country, r) np.random.shuffle(a) df['country'] = a df["country"].value_counts(normalize=True) Output: UK 0.910213 Ireland 0.059484 France 0.030303 Name: country, dtype: float64
Add Categorical Column with Specific Count
I'm trying to create a new categorical column of countries with specific percentage values. Take the following dataset, for instance: df = sns.load_dataset("titanic") I'm trying the following script to get the new column: country = ['UK', 'Ireland', 'France'] df["country"] = np.random.choice(country, len(df)) df["country"].value_counts(normalize=True) UK 0.344557 Ireland 0.328844 France 0.326599 However, I'm getting all the countries with equal count. I want specific count for each country: Desired Output df["country"].value_counts(normalize=True) UK 0.91 Ireland 0.06 France 0.03 What would be the ideal way of getting the desired output? Any suggestions would be appreciated. Thanks!
[ "Do you want to change the probabilities of numpy.random.choice?\ndf[\"country\"] = np.random.choice(country, len(df), p=[0.91, 0.06, 0.03])\ndf[\"country\"].value_counts(normalize=True)\n\nOutput:\nUK 0.902357\nIreland 0.058361\nFrance 0.039282\nName: country, dtype: float64\n\nIf you want a exact number of values (within the limit of the precision):\np = [0.91, 0.06, 0.03]\nr = (np.array(p)*len(df)).round().astype(int) # the sum MUST be equal to len(df)\n# or\n# r = [811, 53, 27]\n\na = np.repeat(country, r)\nnp.random.shuffle(a)\n\ndf['country'] = a\n\ndf[\"country\"].value_counts(normalize=True)\n\nOutput:\nUK 0.910213\nIreland 0.059484\nFrance 0.030303\nName: country, dtype: float64\n\n" ]
[ 3 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074533638_dataframe_pandas_python.txt
Q: Attempt to request Access token for Zoom OAuth API results in invalid redirect url I am using the tutorial in the following link to create an Access Token automatically for Oauth Zoom API: OAuth with Zoom The issue lies in the first step where I am required to provide a redirect link. Everytime I try to make a post request to their API, I get an error "Invalid redirect url (4,700)". This token which I desire will be needed to use the rest of the functions in the API. The token can be generated manually, but I desire it to be automated for my process. I am using python requests for this. Here is the code: import requests import json headers = { 'content-type': "application/json" } url = "https://zoom.us/oauth/authorize?response_type=code&client_id=ulDD9pB4RG28mFrX0jnIQ&redirect_uri=https://zoom.us" res = requests.post(url,headers = headers) print(res.text) I have looked this issue up everywhere, but I haven't been able to find an answer to my issue. Any help for this would be greatly appreciated. Thank you. A: If you check the documentation for Oauth 2.o Authorization you will find that the Redirect Uri is defined as folows So the redirect URI is the endpoint on your system which is designed to handle the oauth response, it must have also been added to the Oauth app settings when you set up your project on zoom. You have added https://zoom.us unless your a developer at zoom i dont think that you can have a app located at https://zoom.us designed to handle the oauth response. I would expect a redirect uri to look something like https://www.yourdomain.com/zoom/oauthcallback.py If you even check the tutorial you are following you will notice they use https://yourapp.com https://zoom.us/oauth/authorize?response_type=code&client_id=7lstjK9NTyett_oeXtFiEQ&redirect_uri=https://yourapp.com A: I would double check that the redirect_uri in your /oauth request URL matches exactly that which is referenced in your oauth configuration/whitelist settings for Zoom. Even the smallest difference like http vs https or including wwww., etc. can throw an error. You might find some of the tips in this OAuth Troubleshooting Guide helpful for other common OAuth errors to check, too! The first item in that guide covers some common invalid redirect errors.
Attempt to request Access token for Zoom OAuth API results in invalid redirect url
I am using the tutorial in the following link to create an Access Token automatically for Oauth Zoom API: OAuth with Zoom The issue lies in the first step where I am required to provide a redirect link. Everytime I try to make a post request to their API, I get an error "Invalid redirect url (4,700)". This token which I desire will be needed to use the rest of the functions in the API. The token can be generated manually, but I desire it to be automated for my process. I am using python requests for this. Here is the code: import requests import json headers = { 'content-type': "application/json" } url = "https://zoom.us/oauth/authorize?response_type=code&client_id=ulDD9pB4RG28mFrX0jnIQ&redirect_uri=https://zoom.us" res = requests.post(url,headers = headers) print(res.text) I have looked this issue up everywhere, but I haven't been able to find an answer to my issue. Any help for this would be greatly appreciated. Thank you.
[ "If you check the documentation for Oauth 2.o Authorization you will find that the Redirect Uri is defined as folows\n\nSo the redirect URI is the endpoint on your system which is designed to handle the oauth response, it must have also been added to the Oauth app settings when you set up your project on zoom.\nYou have added https://zoom.us unless your a developer at zoom i dont think that you can have a app located at https://zoom.us designed to handle the oauth response.\nI would expect a redirect uri to look something like https://www.yourdomain.com/zoom/oauthcallback.py\nIf you even check the tutorial you are following you will notice they use https://yourapp.com\nhttps://zoom.us/oauth/authorize?response_type=code&client_id=7lstjK9NTyett_oeXtFiEQ&redirect_uri=https://yourapp.com\n\n", "I would double check that the redirect_uri in your /oauth request URL matches exactly that which is referenced in your oauth configuration/whitelist settings for Zoom.\nEven the smallest difference like http vs https or including wwww., etc. can throw an error. You might find some of the tips in this OAuth Troubleshooting Guide helpful for other common OAuth errors to check, too!\nThe first item in that guide covers some common invalid redirect errors.\n" ]
[ 1, 0 ]
[]
[]
[ "oauth", "python", "python_requests", "zoom_sdk" ]
stackoverflow_0064853114_oauth_python_python_requests_zoom_sdk.txt
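For completeness, a hedged sketch of the step after the browser redirect (not from the thread; it follows the standard OAuth2 authorization-code exchange that the Zoom tutorial describes, and the client id, secret and redirect URI are placeholders):

import requests

CLIENT_ID = "your_client_id"            # placeholder
CLIENT_SECRET = "your_client_secret"    # placeholder
REDIRECT_URI = "https://www.yourdomain.com/zoom/oauthcallback"  # must match the whitelisted URI exactly

def exchange_code_for_token(auth_code):
    # /oauth/authorize is opened in the user's browser; the code it sends back to the
    # redirect URI is then exchanged for an access token at /oauth/token
    resp = requests.post(
        "https://zoom.us/oauth/token",
        params={
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": REDIRECT_URI,
        },
        auth=(CLIENT_ID, CLIENT_SECRET),  # HTTP Basic auth with the client credentials
    )
    resp.raise_for_status()
    return resp.json()  # access_token, refresh_token, expires_in, ...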
Q: Could not build wheels for pyarrow This issue occurred when I installed streamlit. I had also tried to install "pyarrow" separately, but the same error occurred. Both Windows and Python are 64-bit. Can anyone please help me with this issue? Thank you in advance. Also tried to install pyproject.toml. A: pyarrow wheels are not available for Python3.11 on PyPi yet. There is a minor pyarrow release 10.0.1 being voted at the moment that should be released soon. See this thread for the release approval: https://lists.apache.org/thread/rlkrj9lnfmwgn7kq8hvmzf06l5z6w30k And this thread for asking for the 10.0.1 release to add pyarrow wheels for Python 3.11: https://lists.apache.org/thread/xrlztoz8no289rt6kr6qz52b8yjr3mob Once the release is approved and published the pyarrow team will publish the new wheels to PyPi.
Could not build wheels for pyarrow
This issue occurred when I installed streamlit. I had also tried to install "pyarrow" separately, but the same error occurred. Both Windows and Python are 64-bit. Can anyone please help me with this issue? Thank you in advance. Also tried to install pyproject.toml.
[ "pyarrow wheels are not available for Python3.11 on PyPi yet. There is a minor pyarrow release 10.0.1 being voted at the moment that should be released soon. See this thread for the release approval:\nhttps://lists.apache.org/thread/rlkrj9lnfmwgn7kq8hvmzf06l5z6w30k\nAnd this thread for asking for the 10.0.1 release to add pyarrow wheels for Python 3.11:\nhttps://lists.apache.org/thread/xrlztoz8no289rt6kr6qz52b8yjr3mob\nOnce the release is approved and published the pyarrow team will publish the new wheels to PyPi.\n" ]
[ 1 ]
[]
[]
[ "pyarrow", "python", "streamlit" ]
stackoverflow_0074532185_pyarrow_python_streamlit.txt
Q: Twitter API error, Invalid or expired token I joined the Twitter API developer portal to collect tweet data using Twitter API, got an approval email (academic research level) from Twitter, and got issued both access tokens and keys. When I tried to collect tweet data with Python, I kept getting errors. unauthorized: 401 unauthorized 89 - Invalid or expired token. Here is my code: import tweepy import csv import ssl import datetime import pandas as pd # Oauth keys consumer_key = ' ' consumer_secret = ' ' access_token = ' ' access_token_secret = ' ' # Authentication with Twitter auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth) print(api) status = api.user_timeline(screen_name = 'CarlForAlabama', count=1)[0] status.text error: Unauthorized: 401 Unauthorized 89 - Invalid or expired token. def get_tweet(api, user): status = api.user_timeline(screen_name = 'CarlForAlabama', count=20)[0] return status.text print(status.text) NameError: name 'status' is not defined I'd appreciate it if you could tell me how to solve it. Thanks A: A couple of quick things to check for an invalid access_token type error in OAuth: Check that the access_token was in fact successfully passed in the Authorization header of your request, and it's not null or undefined Rule out that an invalid access_token was used (typo, invalid string, etc.) Ensure that the access_token was not expired — expiring tokens are usually only valid for 1 hour. You might also find this guide on OAuth Troubleshooting helpful for some additional tips!
Twitter API error, Invalid or expired token
I joined the Twitter API developer portal to collect tweet data using Twitter API, got an approval email (academic research level) from Twitter, and got issued both access tokens and keys. When I tried to collect tweet data with Python, I kept getting errors. unauthorized: 401 unauthorized 89 - Invalid or expired token. Here is my code: import tweepy import csv import ssl import datetime import pandas as pd # Oauth keys consumer_key = ' ' consumer_secret = ' ' access_token = ' ' access_token_secret = ' ' # Authentication with Twitter auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth) print(api) status = api.user_timeline(screen_name = 'CarlForAlabama', count=1)[0] status.text error: Unauthorized: 401 Unauthorized 89 - Invalid or expired token. def get_tweet(api, user): status = api.user_timeline(screen_name = 'CarlForAlabama', count=20)[0] return status.text print(status.text) NameError: name 'status' is not defined I'd appreciate it if you could tell me how to solve it. Thanks
[ "A couple of quick things to check for an invalid access_token type error in OAuth:\n\nCheck that the access_token was in fact successfully passed in the Authorization header of your request, and it's not null or undefined\nRule out that an invalid access_token was used (typo, invalid string, etc.)\nEnsure that the access_token was not expired — expiring tokens are usually only valid for 1 hour.\n\nYou might also find this guide on OAuth Troubleshooting helpful for some additional tips!\n" ]
[ 0 ]
[]
[]
[ "python", "twitter", "twitterapi_python" ]
stackoverflow_0073863559_python_twitter_twitterapi_python.txt
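A small sketch of the first check in the answer (my own, assuming tweepy 4.x): verify_credentials() is a cheap way to confirm whether the four keys and tokens are accepted at all before calling user_timeline.

import tweepy

consumer_key = "..."            # placeholders: the four values from the developer portal
consumer_secret = "..."
access_token = "..."
access_token_secret = "..."

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

try:
    me = api.verify_credentials()
    print("credentials accepted, authenticated as", me.screen_name)
except tweepy.errors.Unauthorized as e:
    print("keys or tokens rejected, regenerate them in the developer portal:", e)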
Q: Haystack's ElasticsearchDocumentStore() cannot connect running ElasticSearch container I am using ElasticSearch version 8.5.1 and the latest python library of ElasticSearch concurrent with version 8.5.1. Also, my Python version is 3.10.4. I was trying to follow this tutorial but clearly some of the software have changed a few things over the past year. I am having trouble with Haystack's ElasticsearchDocumentStore. After following the ElasticSearch instructions here for deploying an instance of a single node in a container using a docker image, I was able to run the following 2 code blocks successfully: import requests from datetime import datetime from elasticsearch import Elasticsearch from elasticsearch import RequestsHttpConnection client = Elasticsearch( [{ 'host': '127.0.0.1', 'port': 9200,'scheme': 'https'}], ca_certs="../http_ca.crt", http_auth=('username', 'password')) resp = client.info() resp # this executed correctly and this just for good measure: r = requests.get('https://localhost:9200/_cluster/health', verify="../http_ca.crt", headers={"Authorization": 'Basic ' + TOKEN}) r.json() # this executed correctly Then I tried from haystack.document_stores.elasticsearch import ElasticsearchDocumentStore doc_store = ElasticsearchDocumentStore( host="localhost", port=9200, scheme="https", username = "username", password = "password", index = "doc1", ) and no matter what I try above, I get this error: Output exceeds the size limit. Open the full output data in a text editor WARNING:elasticsearch:GET https://localhost:9200/ [status:N/A request:0.029s] Traceback (most recent call last): File "c:\Users\k.mufti\Desktop\QA_system.venv\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "c:\Users\k.mufti\Desktop\QA_system.venv\lib\site-packages\urllib3\connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "c:\Users\k.mufti\Desktop\QA_system.venv\lib\site-packages\urllib3\connectionpool.py", line 1042, in validate_conn conn.connect() File "c:\Users\k.mufti\Desktop\QA_system.venv\lib\site-packages\urllib3\connection.py", line 414, in connect self.sock = ssl_wrap_socket( File "c:\Users\k.mufti\Desktop\QA_system.venv\lib\site-packages\urllib3\util\ssl.py", line 449, in ssl_wrap_socket ssl_sock = ssl_wrap_socket_impl( File "c:\Users\k.mufti\Desktop\QA_system.venv\lib\site-packages\urllib3\util\ssl.py", line 493, in _ssl_wrap_socket_impl return ssl_context.wrap_socket(sock, server_hostname=server_hostname) File "C:\Python310\lib\ssl.py", line 512, in wrap_socket return self.sslsocket_class._create( File "C:\Python310\lib\ssl.py", line 1070, in _create self.do_handshake() File "C:\Python310\lib\ssl.py", line 1341, in do_handshake self._sslobj.do_handshake() ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997) During handling of the above exception, another exception occurred: Traceback (most recent call last): ... self.do_handshake() File "C:\Python310\lib\ssl.py", line 1341, in do_handshake self._sslobj.do_handshake() urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997) Output exceeds the size limit. 
Open the full output data in a text editor ConnectionError Traceback (most recent call last) File c:\Users\k.mufti\Desktop\QA_system.venv\lib\site-packages\haystack\document_stores\elasticsearch.py:272, in ElasticsearchDocumentStore._init_elastic_client(cls, host, port, username, password, api_key_id, api_key, aws4auth, scheme, ca_certs, verify_certs, timeout, use_system_proxy) 271 if not status: --> 272 raise ConnectionError( 273 f"Initial connection to Elasticsearch failed. Make sure you run an Elasticsearch instance " 274 f"at {hosts} and that it has finished the initial ramp up (can take > 30s)." 275 ) 276 except Exception: ConnectionError: Initial connection to Elasticsearch failed. Make sure you run an Elasticsearch instance at [{'host': 'localhost', 'port': 9200}] and that it has finished the initial ramp up (can take > 30s). During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) Cell In [97], line 1 ----> 1 doc_store = ElasticsearchDocumentStore( 2 host="localhost", 3 port=9200, 4 scheme="https", 5 username = "username", 6 password = "password", 7 index = "aurelius", 8 9 ) ... 278 f"Initial connection to Elasticsearch failed. Make sure you run an Elasticsearch instance at {hosts} and that it has finished the initial ramp up (can take > 30s)." 279 ) 280 return client ConnectionError: Initial connection to Elasticsearch failed. Make sure you run an Elasticsearch instance at [{'host': 'localhost', 'port': 9200}] and that it has finished the initial ramp up (can take > 30s). Any ideas or solutions? I have tried with and without the parameters that the function can take, and nothing works. A: It seems that I simply forgot to add in the parameter ca_certs="../http_ca.crt" after copying the security certificate from the container onto the local machine.
Haystack's ElasticsearchDocumentStore() cannot connect running ElasticSearch container
I am using ElasticSearch version 8.5.1 and the latest python library of ElasticSearch concurrent with version 8.5.1. Also, my Python version is 3.10.4. I was trying to follow this tutorial but clearly some of the software have changed a few things over the past year. I am having trouble with Haystack's ElasticsearchDocumentStore. After following the ElasticSearch instructions here for deploying an instance of a single node in a container using a docker image, I was able to run the following 2 code blocks successfully: import requests from datetime import datetime from elasticsearch import Elasticsearch from elasticsearch import RequestsHttpConnection client = Elasticsearch( [{ 'host': '127.0.0.1', 'port': 9200,'scheme': 'https'}], ca_certs="../http_ca.crt", http_auth=('username', 'password')) resp = client.info() resp # this executed correctly and this just for good measure: r = requests.get('https://localhost:9200/_cluster/health', verify="../http_ca.crt", headers={"Authorization": 'Basic ' + TOKEN}) r.json() # this executed correctly Then I tried from haystack.document_stores.elasticsearch import ElasticsearchDocumentStore doc_store = ElasticsearchDocumentStore( host="localhost", port=9200, scheme="https", username = "username", password = "password", index = "doc1", ) and no matter what I try above, I get this error: Output exceeds the size limit. Open the full output data in a text editor WARNING:elasticsearch:GET https://localhost:9200/ [status:N/A request:0.029s] Traceback (most recent call last): File "c:\Users\k.mufti\Desktop\QA_system.venv\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "c:\Users\k.mufti\Desktop\QA_system.venv\lib\site-packages\urllib3\connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "c:\Users\k.mufti\Desktop\QA_system.venv\lib\site-packages\urllib3\connectionpool.py", line 1042, in validate_conn conn.connect() File "c:\Users\k.mufti\Desktop\QA_system.venv\lib\site-packages\urllib3\connection.py", line 414, in connect self.sock = ssl_wrap_socket( File "c:\Users\k.mufti\Desktop\QA_system.venv\lib\site-packages\urllib3\util\ssl.py", line 449, in ssl_wrap_socket ssl_sock = ssl_wrap_socket_impl( File "c:\Users\k.mufti\Desktop\QA_system.venv\lib\site-packages\urllib3\util\ssl.py", line 493, in _ssl_wrap_socket_impl return ssl_context.wrap_socket(sock, server_hostname=server_hostname) File "C:\Python310\lib\ssl.py", line 512, in wrap_socket return self.sslsocket_class._create( File "C:\Python310\lib\ssl.py", line 1070, in _create self.do_handshake() File "C:\Python310\lib\ssl.py", line 1341, in do_handshake self._sslobj.do_handshake() ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997) During handling of the above exception, another exception occurred: Traceback (most recent call last): ... self.do_handshake() File "C:\Python310\lib\ssl.py", line 1341, in do_handshake self._sslobj.do_handshake() urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997) Output exceeds the size limit. 
Open the full output data in a text editor ConnectionError Traceback (most recent call last) File c:\Users\k.mufti\Desktop\QA_system.venv\lib\site-packages\haystack\document_stores\elasticsearch.py:272, in ElasticsearchDocumentStore._init_elastic_client(cls, host, port, username, password, api_key_id, api_key, aws4auth, scheme, ca_certs, verify_certs, timeout, use_system_proxy) 271 if not status: --> 272 raise ConnectionError( 273 f"Initial connection to Elasticsearch failed. Make sure you run an Elasticsearch instance " 274 f"at {hosts} and that it has finished the initial ramp up (can take > 30s)." 275 ) 276 except Exception: ConnectionError: Initial connection to Elasticsearch failed. Make sure you run an Elasticsearch instance at [{'host': 'localhost', 'port': 9200}] and that it has finished the initial ramp up (can take > 30s). During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) Cell In [97], line 1 ----> 1 doc_store = ElasticsearchDocumentStore( 2 host="localhost", 3 port=9200, 4 scheme="https", 5 username = "username", 6 password = "password", 7 index = "aurelius", 8 9 ) ... 278 f"Initial connection to Elasticsearch failed. Make sure you run an Elasticsearch instance at {hosts} and that it has finished the initial ramp up (can take > 30s)." 279 ) 280 return client ConnectionError: Initial connection to Elasticsearch failed. Make sure you run an Elasticsearch instance at [{'host': 'localhost', 'port': 9200}] and that it has finished the initial ramp up (can take > 30s). Any ideas or solutions? I have tried with and without the parameters that the function can take, and nothing works.
[ "It seems that I simply forgot to add in the parameter ca_certs=\"../http_ca.crt\" after copying the security certificate from the container onto the local machine.\n" ]
[ 0 ]
[]
[]
[ "docker", "elasticsearch", "haystack", "python", "ssl" ]
stackoverflow_0074533736_docker_elasticsearch_haystack_python_ssl.txt
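Spelled out, the fix from the self-answer looks like this (the ca_certs and verify_certs parameter names are visible in the _init_elastic_client signature shown in the traceback):

from haystack.document_stores.elasticsearch import ElasticsearchDocumentStore

doc_store = ElasticsearchDocumentStore(
    host="localhost",
    port=9200,
    scheme="https",
    username="username",
    password="password",
    index="doc1",
    ca_certs="../http_ca.crt",   # CA certificate copied out of the Elasticsearch container
    verify_certs=True,
)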
Q: KERAS stuck randomly while adding first layer inside docker container I have created a classification model using Python 3.9.5, Keras 2.4.3 and tensorflow-cpu 2.5.0. The model works fine in on my Windows 10 development environment but it stops executing further script and gets stuck when I deploy it in a Docker container. The step where it gets stuck and becomes unresponsive is when I add the first layer. Nothing gets printed in logs either. This behavior is random (has happened 4th and 20th time and any number of times in between) while training. For reproducible results I train the model in a separate process due to the randomness produced by 3rd party libraries used in my FastAPI application. Also, I do not see anything out of the ordinary when I run docker ps. Source code / logs Model Structure try: log.info("Initializing Sequential Model") model = Sequential() log.info("Initializing GlorotNormal") initializer = initializers.GlorotNormal() log.info("Adding LSTM as input layer ") model.add(LSTM(100, input_shape=( train_x.shape[1:]), return_sequences=False)) log.info("Adding hidden dense layer") model.add(Dense(64, activation='selu', name="layer2", kernel_initializer=initializer)) log.info("Adding Dropout") model.add(Dropout(rate=0.5)) log.info("Adding Output layer") model.add(Dense(len(intent_tags), activation='softmax', name="layer3")) log.info("Generating model Summary") model.summary() log.info("Compiling model") model.compile(loss='categorical_crossentropy', optimizer= tf.keras.optimizers.Adamax(learning_rate=0.005), metrics=['accuracy']) log.info("Model Compiled succesfully") Model fit: model: Sequential = create_training_model(train_x, train_y, intent_tags) log.info("Model Created") add_into_queue: LambdaCallback = LambdaCallback(on_epoch_end=lambda epoch,_: queue.put({"type": "progress", "sub_type": "training_progress", "progress": f'EPOCHS: {epoch+1}/{configuration_epochs}'})) es: EarlyStopping = EarlyStopping(monitor='loss', mode='min',verbose=1, patience=30, min_delta=1) log.info("fitting Training") history: object = model.fit(train_x, train_y, epochs=200, batch_size=5, verbose=1, validation_data=(test_x, test_y), callbacks=[es, add_into_queue]) if es.stopped_epoch: training_completed_message: str = f"Training completed {es.stopped_epoch}/{configuration_epochs} Epoch, Early Stopping applied" log.info(training_completed_message) progress_data: dict = {"type": "progress", "sub_type":"training_completed" , "progress": str(training_completed_message)} queue.put(progress_data) else: progress_data: dict = {type": "progress", "sub_type": "training_completed","progress": str(configuration_epochs)} queue.put(progress_data) Fastapi websocket code snippet for training model: try: configuration["TRAINING_COUNT"] +=1 log.info(f"Training Count: {configuration['TRAINING_COUNT']}") log.info("Starts training on seprate procces") multi_process = Process(target=chatbot_training, args=(qestions_answers, training_type, client_id, saved_file_path, queue), name=f"training_process_{client_id}") multi_process.start() log.info("Initializing thread to send training progress") data_progress_thread = threading.Thread(target = send_data_progress_call, args=[websocket, queue] , name="data_progress_thread") data_progress_thread.daemon = True data_progress_thread.start() Dockerfile FROM python:3.9.5-slim-buster COPY ./ /app WORKDIR /app RUN pip install -r requirements.txt && \ python -m nltk.downloader punkt && \ python -m nltk.downloader wordnet && \ python -m nltk.downloader averaged_perceptron_tagger && 
\ python -m pip cache purge ENV PYTHONHASHSEED=100 CMD ["python", "./starfighter/app.py"] Docker Container logs: error logs Result of docker stats container_name: Container stats Results of docker top container_name: top command results Logs on development environment Logs on development environment Steps to reproduce: train model 20-40 times to reproduce the error, for saving time use small dataset Environment information Server OS = Centos 7 docker base image = python:3.9.5-slim-buster Python Version = 3.9.5 tensorflow-cpu==2.5.0 keras==2.4.3 nltk==3.5 pyspellchecker==0.6.2 pandas==1.2.4 fastapi==0.65.1 aiofiles==0.7.0 openpyxl==3.0.7 websockets==9.0.2 numpy==1.19.5 strictyaml uvicorn==0.13.4 PyYAML==5.4.1 A: As you are using uvicorn, the uvicorn workers get resources from docker and everytime the model layers are created it gets stored in the memory, and the issue of stucking up is due to the lack of workers resources given to it by the container. So, you can try another wsgi server or try profiling the workers how much resources it is using up. I had a similar issue while running inside docker and gunicorn where the models were preloaded so removing the --preload argument did the job.
KERAS stuck randomly while adding first layer inside docker container
I have created a classification model using Python 3.9.5, Keras 2.4.3 and tensorflow-cpu 2.5.0. The model works fine in on my Windows 10 development environment but it stops executing further script and gets stuck when I deploy it in a Docker container. The step where it gets stuck and becomes unresponsive is when I add the first layer. Nothing gets printed in logs either. This behavior is random (has happened 4th and 20th time and any number of times in between) while training. For reproducible results I train the model in a separate process due to the randomness produced by 3rd party libraries used in my FastAPI application. Also, I do not see anything out of the ordinary when I run docker ps. Source code / logs Model Structure try: log.info("Initializing Sequential Model") model = Sequential() log.info("Initializing GlorotNormal") initializer = initializers.GlorotNormal() log.info("Adding LSTM as input layer ") model.add(LSTM(100, input_shape=( train_x.shape[1:]), return_sequences=False)) log.info("Adding hidden dense layer") model.add(Dense(64, activation='selu', name="layer2", kernel_initializer=initializer)) log.info("Adding Dropout") model.add(Dropout(rate=0.5)) log.info("Adding Output layer") model.add(Dense(len(intent_tags), activation='softmax', name="layer3")) log.info("Generating model Summary") model.summary() log.info("Compiling model") model.compile(loss='categorical_crossentropy', optimizer= tf.keras.optimizers.Adamax(learning_rate=0.005), metrics=['accuracy']) log.info("Model Compiled succesfully") Model fit: model: Sequential = create_training_model(train_x, train_y, intent_tags) log.info("Model Created") add_into_queue: LambdaCallback = LambdaCallback(on_epoch_end=lambda epoch,_: queue.put({"type": "progress", "sub_type": "training_progress", "progress": f'EPOCHS: {epoch+1}/{configuration_epochs}'})) es: EarlyStopping = EarlyStopping(monitor='loss', mode='min',verbose=1, patience=30, min_delta=1) log.info("fitting Training") history: object = model.fit(train_x, train_y, epochs=200, batch_size=5, verbose=1, validation_data=(test_x, test_y), callbacks=[es, add_into_queue]) if es.stopped_epoch: training_completed_message: str = f"Training completed {es.stopped_epoch}/{configuration_epochs} Epoch, Early Stopping applied" log.info(training_completed_message) progress_data: dict = {"type": "progress", "sub_type":"training_completed" , "progress": str(training_completed_message)} queue.put(progress_data) else: progress_data: dict = {type": "progress", "sub_type": "training_completed","progress": str(configuration_epochs)} queue.put(progress_data) Fastapi websocket code snippet for training model: try: configuration["TRAINING_COUNT"] +=1 log.info(f"Training Count: {configuration['TRAINING_COUNT']}") log.info("Starts training on seprate procces") multi_process = Process(target=chatbot_training, args=(qestions_answers, training_type, client_id, saved_file_path, queue), name=f"training_process_{client_id}") multi_process.start() log.info("Initializing thread to send training progress") data_progress_thread = threading.Thread(target = send_data_progress_call, args=[websocket, queue] , name="data_progress_thread") data_progress_thread.daemon = True data_progress_thread.start() Dockerfile FROM python:3.9.5-slim-buster COPY ./ /app WORKDIR /app RUN pip install -r requirements.txt && \ python -m nltk.downloader punkt && \ python -m nltk.downloader wordnet && \ python -m nltk.downloader averaged_perceptron_tagger && \ python -m pip cache purge ENV PYTHONHASHSEED=100 CMD ["python", 
"./starfighter/app.py"] Docker Container logs: error logs Result of docker stats container_name: Container stats Results of docker top container_name: top command results Logs on development environment Logs on development environment Steps to reproduce: train model 20-40 times to reproduce the error, for saving time use small dataset Environment information Server OS = Centos 7 docker base image = python:3.9.5-slim-buster Python Version = 3.9.5 tensorflow-cpu==2.5.0 keras==2.4.3 nltk==3.5 pyspellchecker==0.6.2 pandas==1.2.4 fastapi==0.65.1 aiofiles==0.7.0 openpyxl==3.0.7 websockets==9.0.2 numpy==1.19.5 strictyaml uvicorn==0.13.4 PyYAML==5.4.1
[ "As you are using uvicorn, the uvicorn workers get resources from docker and everytime the model layers are created it gets stored in the memory, and the issue of stucking up is due to the lack of workers resources given to it by the container.\nSo, you can try another wsgi server or try profiling the workers how much resources it is using up.\nI had a similar issue while running inside docker and gunicorn where the models were preloaded so removing the --preload argument did the job.\n" ]
[ 1 ]
[]
[]
[ "docker", "fastapi", "keras", "python", "tensorflow" ]
stackoverflow_0068121104_docker_fastapi_keras_python_tensorflow.txt
Q: Python script for postgres table partitioning I want to write python script to partition postgres table based on months for the given year, if that month already exists in database pass else create partition for that month. Kindly suggest pyspark , using for loop to iterate over A: I once did something like this. It may need adaptation for the specific situation. It also needs to be executed on the database. """ Generate SQL for adding partitions """ SCHEMA_NAME = 'something' TABLE_NAME = 'other' YEAR_START = 2023 YEAR_END = 2024 for y in range(YEAR_START, YEAR_END + 1): for m in range(1, 13): if m == 12: print(f'create table if not exists {SCHEMA_NAME}.{TABLE_NAME}_{y:04d}_{m:02d} partition of ' + f'{SCHEMA_NAME}.{TABLE_NAME} for values from ' + f'(\'{y:04d}-{m:02d}-01 00:00:00\') to ' + f'(\'{y+1:04d}-01-01 00:00:00\');') else: print(f'create table if not exists {SCHEMA_NAME}.{TABLE_NAME}_{y:04d}_{m:02d} partition of ' + f'{SCHEMA_NAME}.{TABLE_NAME} for values from ' + f'(\'{y:04d}-{m:02d}-01 00:00:00\') to ' + f'(\'{y:04d}-{m+1:02d}-01 00:00:00\');')
Python script for postgres table partitioning
I want to write python script to partition postgres table based on months for the given year, if that month already exists in database pass else create partition for that month. Kindly suggest pyspark , using for loop to iterate over
[ "I once did something like this. It may need adaptation for the specific situation. It also needs to be executed on the database.\n\"\"\"\nGenerate SQL for adding partitions \n\"\"\"\nSCHEMA_NAME = 'something'\nTABLE_NAME = 'other'\nYEAR_START = 2023\nYEAR_END = 2024\n\nfor y in range(YEAR_START, YEAR_END + 1):\n for m in range(1, 13):\n if m == 12:\n print(f'create table if not exists {SCHEMA_NAME}.{TABLE_NAME}_{y:04d}_{m:02d} partition of ' +\n f'{SCHEMA_NAME}.{TABLE_NAME} for values from ' +\n f'(\\'{y:04d}-{m:02d}-01 00:00:00\\') to ' + f'(\\'{y+1:04d}-01-01 00:00:00\\');')\n else:\n print(f'create table if not exists {SCHEMA_NAME}.{TABLE_NAME}_{y:04d}_{m:02d} partition of ' +\n f'{SCHEMA_NAME}.{TABLE_NAME} for values from ' +\n f'(\\'{y:04d}-{m:02d}-01 00:00:00\\') to ' + f'(\\'{y:04d}-{m+1:02d}-01 00:00:00\\');')\n\n\n" ]
[ 0 ]
[]
[]
[ "postgresql", "pyspark", "python" ]
stackoverflow_0074518819_postgresql_pyspark_python.txt
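The answer notes the generated statements still have to be executed on the database; a hedged sketch of doing that from Python with psycopg2 (connection details are placeholders, and the statements are collected in a list instead of printed):

import psycopg2

ddl_statements = []  # fill with the strings built by the loop in the answer (append instead of print)

# the connection context manager commits the transaction on successful exit
with psycopg2.connect(host="localhost", dbname="mydb", user="me", password="secret") as conn:
    with conn.cursor() as cur:
        for stmt in ddl_statements:
            cur.execute(stmt)  # "create table if not exists ..." keeps re-runs safe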
Q: Is there a way to filter widgets in a ScrollArea with a QLineEdit based on specific attributes? I'm doing an app in PyQt through Qt Designer and I've populated a (container widget inside a) Scroll Area with a list of cards (custom widgets that contains informations). I've put outside of the scroll area a QLineEdit and I want to use this QLineEdit to filter the cards based on specific attributes of each card (name, id, username). Is there any way to do this? I know the question is a bit poorly written, but I'm a bit lost on how should I approach this problem. I tried to search for "searchbar" and "filter", but nothing looks like what I need. Here's a sample of my current code (without any attempt of implementing the search function): class MainWindow(QMainWindow): def __init__(self): QMainWindow.__init__(self) self.ui = Ui_MainWindow() self.ui.setupUi(self) # users_df: dataframe with users data users_df, groups_df = extract_load_data() # scrollArea scroll_area = self.ui.scrollArea_cards_members # we create a widget container content_widget = QWidget() # we set this container as the widget of the scroll area scroll_area.setWidget(content_widget) scroll_area.setWidgetResizable(True) # set layout self.scroll_layout = QVBoxLayout(content_widget) # iterate in the dataframe for idx in users_df.index: member_series = users_df.iloc[idx] self.member_card = MemberCard(member_series) self.scroll_layout.addWidget(self.member_card) # make the scroll area justified top self.scroll_layout.addStretch() self.show() And the MemberCard looks like: class MemberCard(QWidget): ''' Member card widget class ''' def __init__(self, member_series, parent=None): ''' Parameters ---------- card_container: member_series: pd.Series series formed by the integer location of the ```users_df``` (member_series = users_df.iloc[x]) ''' super(MemberCard, self).__init__(parent) self.dict = dict(member_series) # Ui_MemberCard: class created by QtDesigner self.card = Ui_MemberCard() self.card.setupUi(self) self.fill_card_info() def fill_card_info(self,): ''' Method that fills the informations of the Member Card ''' self.card.name_label.setText(self.dict['name']) self.card.username.setText(self.dict['username']) self.card.id.setText(str(self.dict['id'])) self.card.joined_in.setText(self.dict['created_at']) A: What @jfaccioni commented in the original post was really a clear, easy, and effective solution, so I will make this question as answered posting it here. To connect these fields you need to create one method to update the scrollArea and one function to verify the matches. 
For me it was something like this: class MainWindow(QMainWindow): def __init__(self): QMainWindow.__init__(self) self.ui = Ui_MainWindow() self.ui.setupUi(self) # users_df: dataframe with users data users_df, groups_df = extract_load_data() # scrollArea scroll_area = self.ui.scrollArea_cards_members # we create a widget container content_widget = QWidget() # we set this container as the widget of the scroll area scroll_area.setWidget(content_widget) scroll_area.setWidgetResizable(True) # set layout self.scroll_layout = QVBoxLayout(content_widget) # iterate in the dataframe for idx in users_df.index: member_series = users_df.iloc[idx] self.member_card = MemberCard(member_series) self.scroll_layout.addWidget(self.member_card) # make the scroll area justified top self.scroll_layout.addStretch() self.show() def update_members(self, string): ''' Method that hides/shows member cards based on a search string Parameters ---------- string: str string input on the searchbar ''' for member_card in self.member_cards: visible = filter_members(string, member_card) member_card.setVisible(visible) And the auxiliary filter_members function: def filter_members(string, member_card): ''' Filter function that can filter a member_card based on a string The member_card will be filtered by name, username, or id Parameters ---------- string: str string input in the QLineEdit searchbar member_card: :obj: MemberCard MemberCard object to be tested against the string Returns ------- bool ''' member_id = str(member_card.dict['id']) member_name = member_card.dict['name'].lower() member_username = member_card.dict['username'].lower() string = string.lower() return ((string in member_id) or (string in member_name) or (string in member_username))
Is there a way to filter widgets in a ScrollArea with a QLineEdit based on specific attributes?
I'm doing an app in PyQt through Qt Designer and I've populated a (container widget inside a) Scroll Area with a list of cards (custom widgets that contains informations). I've put outside of the scroll area a QLineEdit and I want to use this QLineEdit to filter the cards based on specific attributes of each card (name, id, username). Is there any way to do this? I know the question is a bit poorly written, but I'm a bit lost on how should I approach this problem. I tried to search for "searchbar" and "filter", but nothing looks like what I need. Here's a sample of my current code (without any attempt of implementing the search function): class MainWindow(QMainWindow): def __init__(self): QMainWindow.__init__(self) self.ui = Ui_MainWindow() self.ui.setupUi(self) # users_df: dataframe with users data users_df, groups_df = extract_load_data() # scrollArea scroll_area = self.ui.scrollArea_cards_members # we create a widget container content_widget = QWidget() # we set this container as the widget of the scroll area scroll_area.setWidget(content_widget) scroll_area.setWidgetResizable(True) # set layout self.scroll_layout = QVBoxLayout(content_widget) # iterate in the dataframe for idx in users_df.index: member_series = users_df.iloc[idx] self.member_card = MemberCard(member_series) self.scroll_layout.addWidget(self.member_card) # make the scroll area justified top self.scroll_layout.addStretch() self.show() And the MemberCard looks like: class MemberCard(QWidget): ''' Member card widget class ''' def __init__(self, member_series, parent=None): ''' Parameters ---------- card_container: member_series: pd.Series series formed by the integer location of the ```users_df``` (member_series = users_df.iloc[x]) ''' super(MemberCard, self).__init__(parent) self.dict = dict(member_series) # Ui_MemberCard: class created by QtDesigner self.card = Ui_MemberCard() self.card.setupUi(self) self.fill_card_info() def fill_card_info(self,): ''' Method that fills the informations of the Member Card ''' self.card.name_label.setText(self.dict['name']) self.card.username.setText(self.dict['username']) self.card.id.setText(str(self.dict['id'])) self.card.joined_in.setText(self.dict['created_at'])
[ "What @jfaccioni commented in the original post was really a clear, easy, and effective solution, so I will make this question as answered posting it here. To connect these fields you need to create one method to update the scrollArea and one function to verify the matches. For me it was something like this:\nclass MainWindow(QMainWindow):\n def __init__(self):\n QMainWindow.__init__(self)\n self.ui = Ui_MainWindow()\n self.ui.setupUi(self) \n # users_df: dataframe with users data\n users_df, groups_df = extract_load_data() \n # scrollArea\n scroll_area = self.ui.scrollArea_cards_members\n # we create a widget container\n content_widget = QWidget()\n # we set this container as the widget of the scroll area\n scroll_area.setWidget(content_widget)\n scroll_area.setWidgetResizable(True)\n # set layout\n self.scroll_layout = QVBoxLayout(content_widget)\n # iterate in the dataframe\n for idx in users_df.index:\n member_series = users_df.iloc[idx]\n self.member_card = MemberCard(member_series)\n self.scroll_layout.addWidget(self.member_card)\n # make the scroll area justified top\n self.scroll_layout.addStretch()\n\n \n self.show()\n\n def update_members(self, string):\n '''\n Method that hides/shows member cards based on a search string\n \n Parameters\n ----------\n string: str\n string input on the searchbar\n '''\n for member_card in self.member_cards:\n visible = filter_members(string, member_card)\n member_card.setVisible(visible)\n\nAnd the auxiliary filter_members function:\ndef filter_members(string, member_card):\n '''\n Filter function that can filter a member_card based on a string\n The member_card will be filtered by name, username, or id\n \n Parameters\n ----------\n string: str\n string input in the QLineEdit searchbar\n member_card: :obj: MemberCard\n MemberCard object to be tested against the string\n \n Returns\n -------\n bool\n '''\n member_id = str(member_card.dict['id'])\n member_name = member_card.dict['name'].lower()\n member_username = member_card.dict['username'].lower()\n string = string.lower()\n return ((string in member_id) or (string in member_name) or (string in member_username))\n\n" ]
[ 0 ]
[]
[]
[ "filter", "pyqt", "python", "qt" ]
stackoverflow_0073990099_filter_pyqt_python_qt.txt
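A minimal, self-contained sketch of the filtering pattern described in the record above, using PyQt5. The posted answer defines update_members and filter_members but never shows the QLineEdit signal connection or the list that collects the cards, so both are added here; the Demo class, the plain QLabel "cards" and the sample names are simplified stand-ins for the Ui_MemberCard widgets, not part of the original code.

    import sys
    from PyQt5.QtWidgets import (QApplication, QWidget, QVBoxLayout, QLineEdit,
                                 QScrollArea, QLabel)

    class Demo(QWidget):
        def __init__(self, names):
            super().__init__()
            layout = QVBoxLayout(self)

            # search bar outside the scroll area
            self.search = QLineEdit()
            self.search.setPlaceholderText("Filter cards...")
            layout.addWidget(self.search)

            # container widget inside the scroll area, one "card" per name
            container = QWidget()
            cards_layout = QVBoxLayout(container)
            self.cards = [QLabel(name) for name in names]
            for card in self.cards:
                cards_layout.addWidget(card)
            cards_layout.addStretch()

            scroll = QScrollArea()
            scroll.setWidgetResizable(True)
            scroll.setWidget(container)
            layout.addWidget(scroll)

            # the step the posted answer leaves implicit: wire the signal to the filter
            self.search.textChanged.connect(self.update_cards)

        def update_cards(self, text):
            text = text.lower()
            for card in self.cards:
                card.setVisible(text in card.text().lower())

    if __name__ == "__main__":
        app = QApplication(sys.argv)
        w = Demo(["Alice", "Bob", "Carol"])
        w.show()
        sys.exit(app.exec_())

Hiding cards with setVisible keeps the layout order intact, which is usually preferable to removing and re-adding widgets on every keystroke.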
Q: Resample with specific condition in pandas I have a dataframe df that looks like the following: Start date Final date Value ID Serial 2022-09-01 01:09:07.093 2022-09-01 05:43:55.092999999 10.92 200 120 2022-09-01 01:14:07.093 2022-09-01 05:43:55.092999999 10.92 200 120 2022-09-01 01:19:07.093 2022-09-01 05:43:55.092999999 10.92 200 120 2022-09-01 01:13:07.093 2022-09-01 03:41:55.092999999 11.85 201 122 ... 2022-09-02 01:19:07.093 2022-09-03 07:43:55.092999999 7.35 300 124 2022-09-02 01:24:07.093 2022-09-03 07:43:55.092999999 7.35 300 124 ... For each match of "ID" and "Serial", the data is registered every five minutes from "Start date" until "End date". I want to resample this dataframe on a 15 minutes basis and take the sum of "Value". My basic approach was: df = df.resample('15min', on='Start date')['Value'].sum() However, this counts each match of "ID" and "Serial" more than once per time interval. What I want is to resample the dataframe but considering just once each match of "ID" and "Serial" per 15 minutes gap. For the given example, the output should look like the following (since the "ID" and "Serial" are repeated, the resample function should consider it just once per time gap): Date Value 2022-09-01 01:00:00 22.70 2022-09-01 01:15:00 10.92 ... 2022-09-02 01:15:00 7.35 ... Instead, what I get at the moment is: Date Value 2022-09-01 01:00:00 33.69 2022-09-01 01:15:00 10.92 ... 2022-09-02 01:15:00 14.7 ... Note: For each time gap I have lots of different "ID" and "Serial" combinations. A: You can use a groupby.apply : out = (df.groupby(pd.Grouper(freq='15T', key='Start date')) .apply(lambda x: x.drop_duplicates(subset=['ID', 'Serial'])['Value'].sum()) ) Output: Start date 2022-09-01 01:00:00 22.77 2022-09-01 01:15:00 10.92 2022-09-01 01:30:00 0.00 2022-09-01 01:45:00 0.00 2022-09-01 02:00:00 0.00 ... 2022-09-02 00:15:00 0.00 2022-09-02 00:30:00 0.00 2022-09-02 00:45:00 0.00 2022-09-02 01:00:00 0.00 2022-09-02 01:15:00 7.35 Freq: 15T, Length: 98, dtype: float64
Resample with specific condition in pandas
I have a dataframe df that looks like the following: Start date Final date Value ID Serial 2022-09-01 01:09:07.093 2022-09-01 05:43:55.092999999 10.92 200 120 2022-09-01 01:14:07.093 2022-09-01 05:43:55.092999999 10.92 200 120 2022-09-01 01:19:07.093 2022-09-01 05:43:55.092999999 10.92 200 120 2022-09-01 01:13:07.093 2022-09-01 03:41:55.092999999 11.85 201 122 ... 2022-09-02 01:19:07.093 2022-09-03 07:43:55.092999999 7.35 300 124 2022-09-02 01:24:07.093 2022-09-03 07:43:55.092999999 7.35 300 124 ... For each match of "ID" and "Serial", the data is registered every five minutes from "Start date" until "End date". I want to resample this dataframe on a 15 minutes basis and take the sum of "Value". My basic approach was: df = df.resample('15min', on='Start date')['Value'].sum() However, this counts each match of "ID" and "Serial" more than once per time interval. What I want is to resample the dataframe but considering just once each match of "ID" and "Serial" per 15 minutes gap. For the given example, the output should look like the following (since the "ID" and "Serial" are repeated, the resample function should consider it just once per time gap): Date Value 2022-09-01 01:00:00 22.70 2022-09-01 01:15:00 10.92 ... 2022-09-02 01:15:00 7.35 ... Instead, what I get at the moment is: Date Value 2022-09-01 01:00:00 33.69 2022-09-01 01:15:00 10.92 ... 2022-09-02 01:15:00 14.7 ... Note: For each time gap I have lots of different "ID" and "Serial" combinations.
[ "You can use a groupby.apply :\nout = (df.groupby(pd.Grouper(freq='15T', key='Start date'))\n .apply(lambda x: x.drop_duplicates(subset=['ID', 'Serial'])['Value'].sum())\n )\n\nOutput:\nStart date\n2022-09-01 01:00:00 22.77\n2022-09-01 01:15:00 10.92\n2022-09-01 01:30:00 0.00\n2022-09-01 01:45:00 0.00\n2022-09-01 02:00:00 0.00\n ... \n2022-09-02 00:15:00 0.00\n2022-09-02 00:30:00 0.00\n2022-09-02 00:45:00 0.00\n2022-09-02 01:00:00 0.00\n2022-09-02 01:15:00 7.35\nFreq: 15T, Length: 98, dtype: float64\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074533833_dataframe_pandas_python.txt
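A small runnable sketch of the accepted approach above: sum "Value" per 15-minute bin while counting each (ID, Serial) pair only once per bin. The four-row frame is made-up data shaped like the question's sample, and only the columns the groupby needs are included.

    import pandas as pd

    df = pd.DataFrame({
        "Start date": pd.to_datetime([
            "2022-09-01 01:09:07", "2022-09-01 01:13:07",
            "2022-09-01 01:14:07", "2022-09-01 01:19:07",
        ]),
        "Value": [10.92, 11.85, 10.92, 10.92],
        "ID": [200, 201, 200, 200],
        "Serial": [120, 122, 120, 120],
    })

    out = (df.groupby(pd.Grouper(freq="15T", key="Start date"))
             .apply(lambda g: g.drop_duplicates(subset=["ID", "Serial"])["Value"].sum()))
    print(out)
    # 2022-09-01 01:00:00    22.77   <- 10.92 (ID 200) + 11.85 (ID 201), duplicate dropped
    # 2022-09-01 01:15:00    10.92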
Q: PySpark - converting sas macro with scan function to pyspark I am a beginner in pyspark and python, and trying to convert one of my SAS macro to pyspark, but unable to find useful resources which are equivalent to SCAN function in SAS and also having difficulties when executing while loop in EMR studio pyspark cluster. I am trying to convert the following SAS macro to pyspark, thank you all. -- start macro -- %let a=1; %do %while (%scan(&varlist., &a.) ne ); %let d = %scan(&varlist., &a.); %put &d. ; -- end macro -- ## &varlist variable contains the values similar to the following list [Decimal('124.00000'), Decimal('416.000000'), Decimal('205.00000'), Decimal('332.000000')] A: This would be the equivalent Python code: for d in [124.0, 416.0, 205.0, 332.0]: print(d)
PySpark - converting SAS macro with SCAN function to PySpark
I am a beginner in pyspark and python, and trying to convert one of my SAS macro to pyspark, but unable to find useful resources which are equivalent to SCAN function in SAS and also having difficulties when executing while loop in EMR studio pyspark cluster. I am trying to convert the following SAS macro to pyspark, thank you all. -- start macro -- %let a=1; %do %while (%scan(&varlist., &a.) ne ); %let d = %scan(&varlist., &a.); %put &d. ; -- end macro -- ## &varlist variable contains the values similar to the following list [Decimal('124.00000'), Decimal('416.000000'), Decimal('205.00000'), Decimal('332.000000')]
[ "This would be the equivalent Python code:\nfor d in [124.0, 416.0, 205.0, 332.0]:\n print(d)\n\n" ]
[ 0 ]
[]
[]
[ "database", "pandas", "pyspark", "python", "sas" ]
stackoverflow_0074532824_database_pandas_pyspark_python_sas.txt
Q: How can I use Python to read and capture images from a GIGE camera? I have been working on a codebar recognition project for weeks,. I was asked to use GIGE cameras to recognize the code bars from a PCB and I choosed to use python for the job. So far, I've finished the recognition of codebars from a picture with Opencv. The problem is how to connect to a GIGE camera and grab a photo with My program. Unfortunately, I found Opencv doesn't support GIGE camera so I had to choose Halcon instead. However, even though I can use HDevelop to connect and capture the image, I find no solution to link it to my Python program as Halcon program can only be exported as C# or C++ btw, I tried to use pythonnet and ironPython, but I don't how could I use them to execute a C# script(.cs file) A: I was struggling a lot with this, but I found this method by accident. I have an IDS industrial vision camera (IDS GV-5860-CP) which has a supported Python library. The IDS Peak IPL SDK has an extension to convert the image to a NumPy 3D array. My code makes connection with the camera and accesses the datastream of the Camera. This datastream fills the buffer with data that is converted to an image. This conversion needs to be known RGB formats. That data is written in an RGB format that is shaped in arrays. Those arrays can be turn in to a NumPy 3D array. This array is accessible for OpenCV and can be showed as an image. Most of the Gige Vision camera's work with buffers. Be cautious because buffers can cause delay. If the acquired buffer is converted to an image (NOT WRITTEN, WRITING AN IMAGE TAKES A LOT OF PROCESING POWER), the converted image only needs to be changed in a NumPy 3D array to acquire your image that can be shown in the OpenCV window. This is my code with the IDS industrial Camera, hopefully it can help by your own project. My code: import numpy as np import cv2 import sys from ids_peak import ids_peak as peak from ids_peak_ipl import ids_peak_ipl as ipl from ids_peak import ids_peak_ipl_extension m_device = None m_dataStream = None m_node_map_remote_device = None out = None def open_camera(): print("connection- camera") global m_device, m_node_map_remote_device try: # Create instance of the device manager device_manager = peak.DeviceManager.Instance() # Update the device manager device_manager.Update() # Return if no device was found if device_manager.Devices().empty(): return False # open the first openable device in the device manager's device list device_count = device_manager.Devices().size() for i in range(device_count): if device_manager.Devices()[i].IsOpenable(): m_device = device_manager.Devices()[i].OpenDevice(peak.DeviceAccessType_Control) # Get NodeMap of the RemoteDevice for all accesses to the GenICam NodeMap tree m_node_map_remote_device = m_device.RemoteDevice().NodeMaps()[0] min_frame_rate = 0 max_frame_rate = 50 inc_frame_rate = 0 # Get frame rate range. All values in fps. min_frame_rate = m_node_map_remote_device.FindNode("AcquisitionFrameRate").Minimum() max_frame_rate = m_node_map_remote_device.FindNode("AcquisitionFrameRate").Maximum() if m_node_map_remote_device.FindNode("AcquisitionFrameRate").HasConstantIncrement(): inc_frame_rate = m_node_map_remote_device.FindNode("AcquisitionFrameRate").Increment() else: # If there is no increment, it might be useful to choose a suitable increment for a GUI control element (e.g. 
a slider) inc_frame_rate = 0.1 # Get the current frame rate frame_rate = m_node_map_remote_device.FindNode("AcquisitionFrameRate").Value() # Set frame rate to maximum m_node_map_remote_device.FindNode("AcquisitionFrameRate").SetValue(max_frame_rate) return True except Exception as e: # ... str_error = str(e) print("Error by connection camera") return False def prepare_acquisition(): print("opening stream") global m_dataStream try: data_streams = m_device.DataStreams() if data_streams.empty(): print("no stream possible") # no data streams available return False m_dataStream = m_device.DataStreams()[0].OpenDataStream() print("open stream") return True except Exception as e: # ... str_error = str(e) print("Error by prep acquisition") return False def set_roi(x, y, width, height): print("setting ROI") try: # Get the minimum ROI and set it. After that there are no size restrictions anymore x_min = m_node_map_remote_device.FindNode("OffsetX").Minimum() y_min = m_node_map_remote_device.FindNode("OffsetY").Minimum() w_min = m_node_map_remote_device.FindNode("Width").Minimum() h_min = m_node_map_remote_device.FindNode("Height").Minimum() m_node_map_remote_device.FindNode("OffsetX").SetValue(x_min) m_node_map_remote_device.FindNode("OffsetY").SetValue(y_min) m_node_map_remote_device.FindNode("Width").SetValue(w_min) m_node_map_remote_device.FindNode("Height").SetValue(h_min) # Get the maximum ROI values x_max = m_node_map_remote_device.FindNode("OffsetX").Maximum() y_max = m_node_map_remote_device.FindNode("OffsetY").Maximum() w_max = m_node_map_remote_device.FindNode("Width").Maximum() h_max = m_node_map_remote_device.FindNode("Height").Maximum() if (x < x_min) or (y < y_min) or (x > x_max) or (y > y_max): print("Error x and y values") return False elif (width < w_min) or (height < h_min) or ((x + width) > w_max) or ((y + height) > h_max): print("Error width and height") return False else: # Now, set final AOI m_node_map_remote_device.FindNode("OffsetX").SetValue(x) m_node_map_remote_device.FindNode("OffsetY").SetValue(y) m_node_map_remote_device.FindNode("Width").SetValue(width) m_node_map_remote_device.FindNode("Height").SetValue(height) return True except Exception as e: # ... str_error = str(e) print("Error by setting ROI") print(str_error) return False def alloc_and_announce_buffers(): print("allocating buffers") try: if m_dataStream: # Flush queue and prepare all buffers for revoking m_dataStream.Flush(peak.DataStreamFlushMode_DiscardAll) # Clear all old buffers for buffer in m_dataStream.AnnouncedBuffers(): m_dataStream.RevokeBuffer(buffer) payload_size = m_node_map_remote_device.FindNode("PayloadSize").Value() # Get number of minimum required buffers num_buffers_min_required = m_dataStream.NumBuffersAnnouncedMinRequired() # Alloc buffers for count in range(num_buffers_min_required): buffer = m_dataStream.AllocAndAnnounceBuffer(payload_size) m_dataStream.QueueBuffer(buffer) return True except Exception as e: # ... str_error = str(e) print("Error by allocating buffers") print(str_error) return False def start_acquisition(): print("Start acquisition") try: m_dataStream.StartAcquisition(peak.AcquisitionStartMode_Default, peak.DataStream.INFINITE_NUMBER) m_node_map_remote_device.FindNode("TLParamsLocked").SetValue(1) m_node_map_remote_device.FindNode("AcquisitionStart").Execute() return True except Exception as e: # ... 
str_error = str(e) print(str_error) return False def saving_acquisition(): fourcc = cv2.VideoWriter_fourcc('W','M','V','2') out = cv2.VideoWriter( "video", fourcc, 50, (1936, 1096)) while True: try: # Get buffer from device's DataStream. Wait 5000 ms. The buffer is automatically locked until it is queued again. buffer = m_dataStream.WaitForFinishedBuffer(5000) image = ids_peak_ipl_extension.BufferToImage(buffer) # Create IDS peak IPL image for debayering and convert it to RGBa8 format image_processed = image.ConvertTo(ipl.PixelFormatName_BGR8) # Queue buffer again m_dataStream.QueueBuffer(buffer) image_python = image_processed.get_numpy_3D() frame = image_python out.write(frame) cv2.imshow('videoview',frame) key = cv2.waitKey(1) if key == ord('q'): break except Exception as e: # ... str_error = str(e) print("Error by saving acquisition") print(str_error) return False def main(): # initialize library peak.Library.Initialize() if not open_camera(): # error sys.exit(-1) if not prepare_acquisition(): # error sys.exit(-2) if not alloc_and_announce_buffers(): # error sys.exit(-3) if not start_acquisition(): # error sys.exit(-4) if not saving_acquisition(): out.release() cv2.destroyAllWindows() print("oke") # error peak.Library.Close() print('executed') sys.exit(0) if __name__ == '__main__': main()
How can I use Python to read and capture images from a GIGE camera?
I have been working on a barcode recognition project for weeks. I was asked to use GIGE cameras to recognize the barcodes on a PCB, and I chose to use Python for the job. So far, I've finished the recognition of barcodes from a picture with OpenCV. The problem is how to connect to a GIGE camera and grab a photo with my program. Unfortunately, I found OpenCV doesn't support GIGE cameras, so I had to choose Halcon instead. However, even though I can use HDevelop to connect and capture the image, I find no solution to link it to my Python program, as Halcon programs can only be exported as C# or C++. By the way, I tried to use pythonnet and IronPython, but I don't know how I could use them to execute a C# script (.cs file).
[ "I was struggling a lot with this, but I found this method by accident. I have an IDS industrial vision camera (IDS GV-5860-CP) which has a supported Python library. The IDS Peak IPL SDK has an extension to convert the image to a NumPy 3D array.\nMy code makes connection with the camera and accesses the datastream of the Camera. This datastream fills the buffer with data that is converted to an image. This conversion needs to be known RGB formats. That data is written in an RGB format that is shaped in arrays. Those arrays can be turn in to a NumPy 3D array. This array is accessible for OpenCV and can be showed as an image.\nMost of the Gige Vision camera's work with buffers. Be cautious because buffers can cause delay. If the acquired buffer is converted to an image (NOT WRITTEN, WRITING AN IMAGE TAKES A LOT OF PROCESING POWER), the converted image only needs to be changed in a NumPy 3D array to acquire your image that can be shown in the OpenCV window.\nThis is my code with the IDS industrial Camera, hopefully it can help by your own project.\nMy code:\nimport numpy as np \nimport cv2\nimport sys\n\nfrom ids_peak import ids_peak as peak\nfrom ids_peak_ipl import ids_peak_ipl as ipl\nfrom ids_peak import ids_peak_ipl_extension\n\n\n\n\n\n\nm_device = None\nm_dataStream = None\nm_node_map_remote_device = None\nout = None\n\n\ndef open_camera():\n print(\"connection- camera\")\n global m_device, m_node_map_remote_device\n try:\n # Create instance of the device manager\n device_manager = peak.DeviceManager.Instance()\n \n # Update the device manager\n device_manager.Update()\n \n # Return if no device was found\n if device_manager.Devices().empty():\n return False\n \n # open the first openable device in the device manager's device list\n device_count = device_manager.Devices().size()\n for i in range(device_count):\n if device_manager.Devices()[i].IsOpenable():\n m_device = device_manager.Devices()[i].OpenDevice(peak.DeviceAccessType_Control)\n \n # Get NodeMap of the RemoteDevice for all accesses to the GenICam NodeMap tree\n m_node_map_remote_device = m_device.RemoteDevice().NodeMaps()[0]\n min_frame_rate = 0\n max_frame_rate = 50\n inc_frame_rate = 0\n\n \n # Get frame rate range. All values in fps.\n min_frame_rate = m_node_map_remote_device.FindNode(\"AcquisitionFrameRate\").Minimum()\n max_frame_rate = m_node_map_remote_device.FindNode(\"AcquisitionFrameRate\").Maximum()\n \n if m_node_map_remote_device.FindNode(\"AcquisitionFrameRate\").HasConstantIncrement():\n inc_frame_rate = m_node_map_remote_device.FindNode(\"AcquisitionFrameRate\").Increment()\n else:\n # If there is no increment, it might be useful to choose a suitable increment for a GUI control element (e.g. 
a slider)\n inc_frame_rate = 0.1\n \n # Get the current frame rate\n frame_rate = m_node_map_remote_device.FindNode(\"AcquisitionFrameRate\").Value()\n \n # Set frame rate to maximum\n m_node_map_remote_device.FindNode(\"AcquisitionFrameRate\").SetValue(max_frame_rate)\n return True\n except Exception as e:\n # ...\n str_error = str(e)\n print(\"Error by connection camera\")\n return False\n \n \ndef prepare_acquisition():\n print(\"opening stream\")\n global m_dataStream\n\n try:\n data_streams = m_device.DataStreams()\n if data_streams.empty():\n print(\"no stream possible\")\n # no data streams available\n return False\n \n m_dataStream = m_device.DataStreams()[0].OpenDataStream()\n print(\"open stream\")\n \n return True\n except Exception as e:\n # ...\n str_error = str(e)\n print(\"Error by prep acquisition\")\n return False\n \n \ndef set_roi(x, y, width, height):\n print(\"setting ROI\")\n try:\n # Get the minimum ROI and set it. After that there are no size restrictions anymore\n x_min = m_node_map_remote_device.FindNode(\"OffsetX\").Minimum()\n y_min = m_node_map_remote_device.FindNode(\"OffsetY\").Minimum()\n w_min = m_node_map_remote_device.FindNode(\"Width\").Minimum()\n h_min = m_node_map_remote_device.FindNode(\"Height\").Minimum()\n \n m_node_map_remote_device.FindNode(\"OffsetX\").SetValue(x_min)\n m_node_map_remote_device.FindNode(\"OffsetY\").SetValue(y_min)\n m_node_map_remote_device.FindNode(\"Width\").SetValue(w_min)\n m_node_map_remote_device.FindNode(\"Height\").SetValue(h_min)\n \n # Get the maximum ROI values\n x_max = m_node_map_remote_device.FindNode(\"OffsetX\").Maximum()\n y_max = m_node_map_remote_device.FindNode(\"OffsetY\").Maximum()\n w_max = m_node_map_remote_device.FindNode(\"Width\").Maximum()\n h_max = m_node_map_remote_device.FindNode(\"Height\").Maximum()\n \n if (x < x_min) or (y < y_min) or (x > x_max) or (y > y_max):\n print(\"Error x and y values\")\n return False\n elif (width < w_min) or (height < h_min) or ((x + width) > w_max) or ((y + height) > h_max):\n print(\"Error width and height\")\n return False\n else:\n # Now, set final AOI\n m_node_map_remote_device.FindNode(\"OffsetX\").SetValue(x)\n m_node_map_remote_device.FindNode(\"OffsetY\").SetValue(y)\n m_node_map_remote_device.FindNode(\"Width\").SetValue(width)\n m_node_map_remote_device.FindNode(\"Height\").SetValue(height)\n \n return True\n except Exception as e:\n # ...\n str_error = str(e)\n print(\"Error by setting ROI\")\n print(str_error)\n return False\n \n \ndef alloc_and_announce_buffers():\n print(\"allocating buffers\")\n try:\n if m_dataStream:\n # Flush queue and prepare all buffers for revoking\n m_dataStream.Flush(peak.DataStreamFlushMode_DiscardAll)\n \n # Clear all old buffers\n for buffer in m_dataStream.AnnouncedBuffers():\n m_dataStream.RevokeBuffer(buffer)\n \n payload_size = m_node_map_remote_device.FindNode(\"PayloadSize\").Value()\n \n # Get number of minimum required buffers\n num_buffers_min_required = m_dataStream.NumBuffersAnnouncedMinRequired()\n \n # Alloc buffers\n for count in range(num_buffers_min_required):\n buffer = m_dataStream.AllocAndAnnounceBuffer(payload_size)\n m_dataStream.QueueBuffer(buffer)\n \n return True\n except Exception as e:\n # ...\n str_error = str(e)\n print(\"Error by allocating buffers\")\n print(str_error)\n return False\n \n \ndef start_acquisition():\n print(\"Start acquisition\")\n\n try:\n m_dataStream.StartAcquisition(peak.AcquisitionStartMode_Default, peak.DataStream.INFINITE_NUMBER)\n 
m_node_map_remote_device.FindNode(\"TLParamsLocked\").SetValue(1)\n m_node_map_remote_device.FindNode(\"AcquisitionStart\").Execute()\n \n return True\n except Exception as e:\n # ...\n str_error = str(e)\n print(str_error)\n return False\n\ndef saving_acquisition(): \n fourcc = cv2.VideoWriter_fourcc('W','M','V','2')\n out = cv2.VideoWriter( \"video\", fourcc, 50, (1936, 1096))\n while True:\n try:\n \n # Get buffer from device's DataStream. Wait 5000 ms. The buffer is automatically locked until it is queued again.\n buffer = m_dataStream.WaitForFinishedBuffer(5000)\n\n image = ids_peak_ipl_extension.BufferToImage(buffer)\n \n # Create IDS peak IPL image for debayering and convert it to RGBa8 format\n \n image_processed = image.ConvertTo(ipl.PixelFormatName_BGR8)\n # Queue buffer again\n m_dataStream.QueueBuffer(buffer)\n \n image_python = image_processed.get_numpy_3D()\n\n frame = image_python\n \n out.write(frame)\n cv2.imshow('videoview',frame)\n \n key = cv2.waitKey(1)\n if key == ord('q'):\n break\n\n \n except Exception as e:\n # ...\n str_error = str(e)\n print(\"Error by saving acquisition\")\n print(str_error)\n return False\n\n \ndef main():\n \n # initialize library\n peak.Library.Initialize()\n \n if not open_camera():\n # error\n sys.exit(-1)\n \n if not prepare_acquisition():\n # error\n sys.exit(-2)\n \n if not alloc_and_announce_buffers():\n # error\n sys.exit(-3)\n \n if not start_acquisition():\n # error\n sys.exit(-4)\n\n if not saving_acquisition():\n out.release()\n cv2.destroyAllWindows()\n print(\"oke\")\n # error\n \n peak.Library.Close()\n print('executed')\n sys.exit(0)\n \nif __name__ == '__main__':\n main()\n\n" ]
[ 0 ]
[]
[]
[ "gige_sdk", "halcon", "opencv", "python" ]
stackoverflow_0056441004_gige_sdk_halcon_opencv_python.txt
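The answer above is tied to the IDS peak SDK, but the step it hinges on is generic: once any GigE SDK hands back raw frame bytes plus the frame geometry, NumPy and OpenCV can take over. The sketch below fabricates the buffer and hard-codes the 1936x1096 BGR8 geometry from the answer purely for illustration; a real SDK call would supply both.

    import numpy as np
    import cv2

    height, width, channels = 1096, 1936, 3         # BGR8 frame, sizes from the answer
    raw = bytes(height * width * channels)          # stand-in for the SDK's buffer contents

    # reinterpret the flat byte buffer as an (H, W, 3) image without copying
    frame = np.frombuffer(raw, dtype=np.uint8).reshape(height, width, channels)

    cv2.imshow("frame", frame)
    cv2.waitKey(0)
    cv2.destroyAllWindows()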
Q: How do I select rows from a DataFrame based on column values with given conditions? How to apply rules in python, if i want A, B = 1,2 and C,D = 3,4 and E,F = 5,6 each and drop the remaining Type Set 1 A 1 2 B 2 3 B 3 4 C 4 5 D 5 6 A 2 7 F 3 8 F 2 9 E 1 10 D 5 11 E 5 12 C 6 i tried using drop but its lengthy A: What about using multiple masks: m1 = df['Type'].isin(['A', 'B']) m2 = df['Type'].isin(['C', 'D']) m3 = df['Set'].isin([1, 2]) m4 = df['Set'].isin([3, 4]) out = df.loc[(m1&m3)|(m2&m4)] Or: m1 = df['Type'].isin(['A', 'B']) m2 = df['Type'].isin(['C', 'D']) m3 = df.loc[m1, 'Set'].isin([1, 2]).reindex(df.index, fill_value=False) m4 = df.loc[m2, 'Set'].isin([3, 4]) out = df.loc[m3 | m4] Output: Type Set 1 A 1 2 B 2 4 C 4 6 A 2 7 C 3 9 B 1
How do I select rows from a DataFrame based on column values with given conditions?
How to apply rules in Python, if I want A, B = 1,2 and C,D = 3,4 and E,F = 5,6 each and drop the remaining Type Set 1 A 1 2 B 2 3 B 3 4 C 4 5 D 5 6 A 2 7 F 3 8 F 2 9 E 1 10 D 5 11 E 5 12 C 6 I tried using drop but it's lengthy
[ "What about using multiple masks:\nm1 = df['Type'].isin(['A', 'B'])\nm2 = df['Type'].isin(['C', 'D'])\n\nm3 = df['Set'].isin([1, 2])\nm4 = df['Set'].isin([3, 4])\n\nout = df.loc[(m1&m3)|(m2&m4)]\n\nOr:\nm1 = df['Type'].isin(['A', 'B'])\nm2 = df['Type'].isin(['C', 'D'])\n\nm3 = df.loc[m1, 'Set'].isin([1, 2]).reindex(df.index, fill_value=False)\nm4 = df.loc[m2, 'Set'].isin([3, 4])\n\nout = df.loc[m3 | m4]\n\nOutput:\n Type Set\n1 A 1\n2 B 2\n4 C 4\n6 A 2\n7 C 3\n9 B 1\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074533908_pandas_python.txt
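A self-contained sketch that extends the masking idea from the answer above to all three pairs asked about in the question (the posted masks only cover the A/B and C/D groups, and the printed rows 7 and 9 do not appear in the sample data). The frame below reproduces the question's sample.

    import pandas as pd

    df = pd.DataFrame({
        "Type": ["A", "B", "B", "C", "D", "A", "F", "F", "E", "D", "E", "C"],
        "Set":  [ 1,   2,   3,   4,   5,   2,   3,   2,   1,   5,   5,   6 ],
    })

    keep = (
        (df["Type"].isin(["A", "B"]) & df["Set"].isin([1, 2])) |
        (df["Type"].isin(["C", "D"]) & df["Set"].isin([3, 4])) |
        (df["Type"].isin(["E", "F"]) & df["Set"].isin([5, 6]))
    )
    print(df[keep])   # rows A1, B2, C4, A2 and E5 survive; everything else is dropped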
Q: How to merge common strings with different values between parenthesis in Python I am processing some strings within lists that look like these: ['COLOR INCLUDES (40)', 'LONG_DESCRIPTION CONTAINS ("BLACK")', 'COLOR INCLUDES (38)'] ['COLOR INCLUDES (30,31,32,33,56,74,84,85,93,99,184,800,823,830,833,838,839)', 'COLOR INCLUDES (30,31,32,33,56,74,84,85,93,99,184,409,800,823,830,833,838,839)', 'COLOR INCLUDES (800)'] Thing is, I want to merge similar strings with their values into one, for each list. Expecting something like this: ['COLOR INCLUDES (40,38)', 'LONG_DESCRIPTION CONTAINS ("BLACK")'] ['COLOR INCLUDES (30,31,32,33,56,74,84,85,93,99,184,409,800,823,830,833,838,839)'] And some strings may have values without (): ['FAMILY EQUALS 1145'] What could be the more pythonic and fastest (lazy :P) way of doing this? I have tried using regex to match strings until a "(" appears, but some strings don't have values between (), and can't find a fitting solution. I have also tried STree function from suffix_trees lib, which finds the LCS (Longest Common Subsequence) from a list of strings, but then ran out of ideas about handling the values and the closing parenthesis: from suffix_trees import STree st = STree.STree(['COLOR INCLUDES(30,31,32,33,56,74,84,85,93,99,184,800,823,830,833,838,839)', 'COLOR INCLUDES(30,31,32,33,56,74,84,85,93,99,184,409,800,823,830,833,838,839)', 'COLOR INCLUDES (800)']) st.lcs() out: 'COLOR INCLUDES (' EDIT: SOLVED As @stef in the answer said, I broke the problem in smaller pieces and I solved it with his help. Let me paste here the Class Rule_process and the result: class Rule_process: def __init__(self): self.rules = '(COLOR INCLUDES (40)) OR (LONG_DESCRIPTION CONTAINS ("BLACK")):1|||COLOR INCLUDES (30,31,32,33,56,74,84,85,93,99,184,800,823,830,833,838,839):0|||COLOR INCLUDES (30,31,32,33,56,74,84,85,93,99,184,409,800,823,830,833,838,839):0|||COLOR INCLUDES (40):1|||COLOR INCLUDES (800):0' self.rules_dict = { 0:None, 1:None, 2:None, 4:None, } def append_rules(self): rules = self.rules.split("|||") values_0 = [] values_1 = [] values_2 = [] values_4 = [] for rule in range(len(rules)): if rules[rule][-1]=='0': rules[rule] = rules[rule][:-2] # self.rules_dict[0].append(rules[rule]) values_0.append(rules[rule]) elif rules[rule][-1]=='1': rules[rule] = rules[rule][:-2] # self.rules_dict[1].append(rules[rule]) values_1.append(rules[rule]) elif rules[rule][-1]=='2': rules[rule] = rules[rule][:-2] # self.rules_dict[2].append(rules[rule]) values_2.append(rules[rule]) elif rules[rule][-1]=='4': rules[rule] = rules[rule][:-2] # self.rules_dict[4].append(rules[rule]) values_4.append(rules[rule]) if values_0!=[]: self.rules_dict[0] = values_0 if values_1!=[]: self.rules_dict[1] = values_1 if values_2!=[]: self.rules_dict[2] = values_2 if values_4!=[]: self.rules_dict[4] = values_4 regex = r'^\(' # for rules in self.rules_dict.values(): for key in self.rules_dict.keys(): if self.rules_dict[key] is not None: for rule in range(len(self.rules_dict[key])): new_rule = self.rules_dict[key][rule].split(' OR ') if len(new_rule)>1: joined_rule = [] for r in new_rule: r = r.replace("))",")") r = re.sub(regex, "", r) joined_rule.append(r) self.rules_dict[key].remove(self.rules_dict[key][rule]) self.rules_dict[key].extend(joined_rule) self.rules_dict[key] = list(set(self.rules_dict[key])) else: new_rule = [r.replace("))",")") for r in new_rule] new_rule = [re.sub(regex, "", r) for r in new_rule] new_rule = ", ".join(new_rule) self.rules_dict[key][rule] = new_rule self.rules_dict[key] = 
list(set(self.rules_dict[key])) return self.rules_dict def split_rule(self): # COLOR INCLUDES (30,31,32,33) -> name = 'COLOR INCLUDES', values = [30,31,32,33] # LONG_DESCRIPTION CONTAINS ("BLACK") -> name = LONG_DESCRIPTION, values ='"BLACK"' new_dict = { 0:None, 1:None, 2:None, 4:None, } for key in self.rules_dict.keys(): pql_dict = {} if self.rules_dict[key] is not None: for rule in range(len(self.rules_dict[key])): #self.rules_dict[key][rule] -> COLOR INCLUDES (30,31,32,33,56,74,84,85,93,99,184,800,823,830,833,838,839) rule = self.rules_dict[key][rule] name = rule.rsplit(maxsplit=1)[0] #------------------------------->COLOR INCLUDES values_as_str = rule.rsplit(maxsplit=1)[1].replace("(","") values_as_str = values_as_str.replace(")","") #-------------------------------> 30,31,32,33,56,74,84,85,93,99,184,800,823,830,833,838,839 try: values = list(map(int, values_as_str.split(","))) # [30,31,32,33,56,74,84,85,93,99,184,800,823,830,833,838,839] except: values = values_as_str # '"BLACK"' if name in pql_dict.keys(): pql_dict[name] = pql_dict[name] + (values) pql_dict[name] = list(set(pql_dict[name])) else: pql_dict.setdefault(name, values) # pql_dict = {'COLOR INCLUDES': [32, 33, 800, 99, 833, 838, 839, 74, 84, 85, 30, 823, 184, 409, 56, 93, 830, 31]} for name in pql_dict.keys(): values = pql_dict[name] joined_rule = name + " " + str(values) if new_dict[key] is not None: new_dict[key] = new_dict[key] + [joined_rule] else: new_dict[key] = [joined_rule] self.rules_dict = new_dict And the result: process = Rule_process() process.append_rules() process.split_rule() process.rules_dict OUT: {0: ['COLOR INCLUDES [32, 33, 800, 99, 833, 838, 839, 74, 84, 85, 30, 823, 184, 409, 56, 93, 830, 31]'], 1: ['COLOR INCLUDES [40]', 'LONG_DESCRIPTION CONTAINS "BLACK"'], 2: None, 4: None} A: Split this task into smaller, simpler tasks. First task: Write a function that takes a string and returns a pair (name, list_of_values) where name is the first part of the string and list_of_values is a python list of integers. Hint: You can use '(' in s to test whether string s contains an opening parenthesis; you can use s.split() to split on whitespace or s.rsplit(maxsplit=1) to only split on the last whitespace; s.split('(') to split on opening parenthesis; and s.split(',') to split on comma. Second task: Write a function that takes a list of pairs (name, list_of_values) and merges the lists when the names are equal. Hint: This is extremely easy in python using a dict with name as key and list_of_values as value. You can use if name in d: ... else: to test whether a name is already in the dict or not; or you can use d.get(name, []) or d.setdefault(name, []) to automatically add a name: [] entry in the dict when name is not already in the dict. Third task: Write a function to convert back, from the pairs (name, list_of_values) to the strings "name (value1, value2, ...)". This task is easier than the first task, so I suggest doing it first. Hint: ' '.join(...) and ','.join(...) can both be useful.
How to merge common strings with different values between parentheses in Python
I am processing some strings within lists that look like these: ['COLOR INCLUDES (40)', 'LONG_DESCRIPTION CONTAINS ("BLACK")', 'COLOR INCLUDES (38)'] ['COLOR INCLUDES (30,31,32,33,56,74,84,85,93,99,184,800,823,830,833,838,839)', 'COLOR INCLUDES (30,31,32,33,56,74,84,85,93,99,184,409,800,823,830,833,838,839)', 'COLOR INCLUDES (800)'] Thing is, I want to merge similar strings with their values into one, for each list. Expecting something like this: ['COLOR INCLUDES (40,38)', 'LONG_DESCRIPTION CONTAINS ("BLACK")'] ['COLOR INCLUDES (30,31,32,33,56,74,84,85,93,99,184,409,800,823,830,833,838,839)'] And some strings may have values without (): ['FAMILY EQUALS 1145'] What could be the more pythonic and fastest (lazy :P) way of doing this? I have tried using regex to match strings until a "(" appears, but some strings don't have values between (), and can't find a fitting solution. I have also tried STree function from suffix_trees lib, which finds the LCS (Longest Common Subsequence) from a list of strings, but then ran out of ideas about handling the values and the closing parenthesis: from suffix_trees import STree st = STree.STree(['COLOR INCLUDES(30,31,32,33,56,74,84,85,93,99,184,800,823,830,833,838,839)', 'COLOR INCLUDES(30,31,32,33,56,74,84,85,93,99,184,409,800,823,830,833,838,839)', 'COLOR INCLUDES (800)']) st.lcs() out: 'COLOR INCLUDES (' EDIT: SOLVED As @stef in the answer said, I broke the problem in smaller pieces and I solved it with his help. Let me paste here the Class Rule_process and the result: class Rule_process: def __init__(self): self.rules = '(COLOR INCLUDES (40)) OR (LONG_DESCRIPTION CONTAINS ("BLACK")):1|||COLOR INCLUDES (30,31,32,33,56,74,84,85,93,99,184,800,823,830,833,838,839):0|||COLOR INCLUDES (30,31,32,33,56,74,84,85,93,99,184,409,800,823,830,833,838,839):0|||COLOR INCLUDES (40):1|||COLOR INCLUDES (800):0' self.rules_dict = { 0:None, 1:None, 2:None, 4:None, } def append_rules(self): rules = self.rules.split("|||") values_0 = [] values_1 = [] values_2 = [] values_4 = [] for rule in range(len(rules)): if rules[rule][-1]=='0': rules[rule] = rules[rule][:-2] # self.rules_dict[0].append(rules[rule]) values_0.append(rules[rule]) elif rules[rule][-1]=='1': rules[rule] = rules[rule][:-2] # self.rules_dict[1].append(rules[rule]) values_1.append(rules[rule]) elif rules[rule][-1]=='2': rules[rule] = rules[rule][:-2] # self.rules_dict[2].append(rules[rule]) values_2.append(rules[rule]) elif rules[rule][-1]=='4': rules[rule] = rules[rule][:-2] # self.rules_dict[4].append(rules[rule]) values_4.append(rules[rule]) if values_0!=[]: self.rules_dict[0] = values_0 if values_1!=[]: self.rules_dict[1] = values_1 if values_2!=[]: self.rules_dict[2] = values_2 if values_4!=[]: self.rules_dict[4] = values_4 regex = r'^\(' # for rules in self.rules_dict.values(): for key in self.rules_dict.keys(): if self.rules_dict[key] is not None: for rule in range(len(self.rules_dict[key])): new_rule = self.rules_dict[key][rule].split(' OR ') if len(new_rule)>1: joined_rule = [] for r in new_rule: r = r.replace("))",")") r = re.sub(regex, "", r) joined_rule.append(r) self.rules_dict[key].remove(self.rules_dict[key][rule]) self.rules_dict[key].extend(joined_rule) self.rules_dict[key] = list(set(self.rules_dict[key])) else: new_rule = [r.replace("))",")") for r in new_rule] new_rule = [re.sub(regex, "", r) for r in new_rule] new_rule = ", ".join(new_rule) self.rules_dict[key][rule] = new_rule self.rules_dict[key] = list(set(self.rules_dict[key])) return self.rules_dict def split_rule(self): # COLOR INCLUDES 
(30,31,32,33) -> name = 'COLOR INCLUDES', values = [30,31,32,33] # LONG_DESCRIPTION CONTAINS ("BLACK") -> name = LONG_DESCRIPTION, values ='"BLACK"' new_dict = { 0:None, 1:None, 2:None, 4:None, } for key in self.rules_dict.keys(): pql_dict = {} if self.rules_dict[key] is not None: for rule in range(len(self.rules_dict[key])): #self.rules_dict[key][rule] -> COLOR INCLUDES (30,31,32,33,56,74,84,85,93,99,184,800,823,830,833,838,839) rule = self.rules_dict[key][rule] name = rule.rsplit(maxsplit=1)[0] #------------------------------->COLOR INCLUDES values_as_str = rule.rsplit(maxsplit=1)[1].replace("(","") values_as_str = values_as_str.replace(")","") #-------------------------------> 30,31,32,33,56,74,84,85,93,99,184,800,823,830,833,838,839 try: values = list(map(int, values_as_str.split(","))) # [30,31,32,33,56,74,84,85,93,99,184,800,823,830,833,838,839] except: values = values_as_str # '"BLACK"' if name in pql_dict.keys(): pql_dict[name] = pql_dict[name] + (values) pql_dict[name] = list(set(pql_dict[name])) else: pql_dict.setdefault(name, values) # pql_dict = {'COLOR INCLUDES': [32, 33, 800, 99, 833, 838, 839, 74, 84, 85, 30, 823, 184, 409, 56, 93, 830, 31]} for name in pql_dict.keys(): values = pql_dict[name] joined_rule = name + " " + str(values) if new_dict[key] is not None: new_dict[key] = new_dict[key] + [joined_rule] else: new_dict[key] = [joined_rule] self.rules_dict = new_dict And the result: process = Rule_process() process.append_rules() process.split_rule() process.rules_dict OUT: {0: ['COLOR INCLUDES [32, 33, 800, 99, 833, 838, 839, 74, 84, 85, 30, 823, 184, 409, 56, 93, 830, 31]'], 1: ['COLOR INCLUDES [40]', 'LONG_DESCRIPTION CONTAINS "BLACK"'], 2: None, 4: None}
[ "Split this task into smaller, simpler tasks.\nFirst task:\nWrite a function that takes a string and returns a pair (name, list_of_values) where name is the first part of the string and list_of_values is a python list of integers.\nHint: You can use '(' in s to test whether string s contains an opening parenthesis; you can use s.split() to split on whitespace or s.rsplit(maxsplit=1) to only split on the last whitespace; s.split('(') to split on opening parenthesis; and s.split(',') to split on comma.\nSecond task:\nWrite a function that takes a list of pairs (name, list_of_values) and merges the lists when the names are equal.\nHint: This is extremely easy in python using a dict with name as key and list_of_values as value. You can use if name in d: ... else: to test whether a name is already in the dict or not; or you can use d.get(name, []) or d.setdefault(name, []) to automatically add a name: [] entry in the dict when name is not already in the dict.\nThird task:\nWrite a function to convert back, from the pairs (name, list_of_values) to the strings \"name (value1, value2, ...)\". This task is easier than the first task, so I suggest doing it first.\nHint: ' '.join(...) and ','.join(...) can both be useful.\n" ]
[ 1 ]
[]
[]
[ "lcs", "nlp", "python", "substring" ]
stackoverflow_0074533266_lcs_nlp_python_substring.txt
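The accepted answer above describes three sub-tasks (parse, merge, format back) but leaves the code to the reader. Below is one possible sketch of those steps; the function names and the decision to keep values as strings are arbitrary choices, and rules without parentheses such as 'FAMILY EQUALS 1145' are treated as a single value.

    def parse(rule):
        # "COLOR INCLUDES (40,38)" -> ("COLOR INCLUDES", ["40", "38"])
        if "(" in rule:
            name, values = rule.split("(", 1)
            return name.strip(), [v.strip() for v in values.rstrip(")").split(",")]
        # "FAMILY EQUALS 1145" -> ("FAMILY EQUALS", ["1145"])
        name, value = rule.rsplit(maxsplit=1)
        return name, [value]

    def merge(rules):
        merged = {}
        for rule in rules:
            name, values = parse(rule)
            bucket = merged.setdefault(name, [])
            bucket.extend(v for v in values if v not in bucket)
        return merged

    def fmt(merged):
        return [f"{name} ({','.join(values)})" for name, values in merged.items()]

    rules = ['COLOR INCLUDES (40)',
             'LONG_DESCRIPTION CONTAINS ("BLACK")',
             'COLOR INCLUDES (38)']
    print(fmt(merge(rules)))
    # ['COLOR INCLUDES (40,38)', 'LONG_DESCRIPTION CONTAINS ("BLACK")']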
Q: How to go from a contour to an image mask in with Matplotlib If I plot a 2D array and contour it, I can get the access to the segmentation map, via cs = plt.contour(...); cs.allsegs but it's parameterized as a line. I'd like a segmap boolean mask of what's interior to the line, so I can, say, quickly sum everything within that contour. Many thanks! A: I dont think there is a really easy way, mainly because you want to mix raster and vector data. Matplotlib paths fortunately have a way to check if a point is within the path, doing this for all pixels will make a mask, but i think this method can get very slow for large datasets. import matplotlib.patches as patches from matplotlib.nxutils import points_inside_poly import matplotlib.pyplot as plt import numpy as np # generate some data X, Y = np.meshgrid(np.arange(-3.0, 3.0, 0.025), np.arange(-3.0, 3.0, 0.025)) Z1 = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0) Z2 = mlab.bivariate_normal(X, Y, 1.5, 0.5, 1, 1) # difference of Gaussians Z = 10.0 * (Z2 - Z1) fig, axs = plt.subplots(1,2, figsize=(12,6), subplot_kw={'xticks': [], 'yticks': [], 'frameon': False}) # create a normal contour plot axs[0].set_title('Standard contour plot') im = axs[0].imshow(Z, cmap=plt.cm.Greys_r) cs = axs[0].contour(Z, np.arange(-3, 4, .5), linewidths=2, colors='red', linestyles='solid') # get the path from 1 of the contour lines verts = cs.collections[7].get_paths()[0] # highlight the selected contour with yellow axs[0].add_patch(patches.PathPatch(verts, facecolor='none', ec='yellow', lw=2, zorder=50)) # make a mask from it with the dimensions of Z mask = verts.contains_points(list(np.ndindex(Z.shape))) mask = mask.reshape(Z.shape).T axs[1].set_title('Mask of everything within one contour line') axs[1].imshow(mask, cmap=plt.cm.Greys_r, interpolation='none') # get the sum of everything within the contour # the mask is inverted because everything within the contour should not be masked print np.ma.MaskedArray(Z, mask=~mask).sum() Note that contour lines which 'leave' the plot at different edges by default wont make a path which follows these edges. These lines would need some additional processing. A: Another way, perhaps more intuitive, is the binary_fill_holes function from scipy.ndimage. import numpy as np import scipy image = np.zeros((512, 512)) image[contour1[:, 0], contour1[:, 1]] = 1 masked_image = scipy.ndimage.morphology.binary_fill_holes(image) ``` A: Here is how to create a filled polygon from contours and create a binary mask using OpenCV import cv2 import numpy as np import matplotlib.pyplot as plt mask = np.zeros((10,10,3), dtype=np.uint8) # polygon's coordinates coords = np.array([[3,3],[3,6],[6,6],[6,3]]) cv2.drawContours(mask, [coords], contourIdx=-1, color=(1,1,1), thickness=-1) bin_mask = mask[:,:,0].astype(np.float32) plt.imshow(bin_mask, cmap='gray') contourIdx=-1 - draw all contours color=(1,1,1) - a number from 0 to 255 for each channel; since we generate a binary mask it is set to 1 thickness=-1 - fills the polygon
How to go from a contour to an image mask in with Matplotlib
If I plot a 2D array and contour it, I can get the access to the segmentation map, via cs = plt.contour(...); cs.allsegs but it's parameterized as a line. I'd like a segmap boolean mask of what's interior to the line, so I can, say, quickly sum everything within that contour. Many thanks!
[ "I dont think there is a really easy way, mainly because you want to mix raster and vector data. Matplotlib paths fortunately have a way to check if a point is within the path, doing this for all pixels will make a mask, but i think this method can get very slow for large datasets.\nimport matplotlib.patches as patches\nfrom matplotlib.nxutils import points_inside_poly\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# generate some data\nX, Y = np.meshgrid(np.arange(-3.0, 3.0, 0.025), np.arange(-3.0, 3.0, 0.025))\nZ1 = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)\nZ2 = mlab.bivariate_normal(X, Y, 1.5, 0.5, 1, 1)\n# difference of Gaussians\nZ = 10.0 * (Z2 - Z1)\n\nfig, axs = plt.subplots(1,2, figsize=(12,6), subplot_kw={'xticks': [], 'yticks': [], 'frameon': False})\n\n# create a normal contour plot\naxs[0].set_title('Standard contour plot')\nim = axs[0].imshow(Z, cmap=plt.cm.Greys_r)\ncs = axs[0].contour(Z, np.arange(-3, 4, .5), linewidths=2, colors='red', linestyles='solid')\n\n# get the path from 1 of the contour lines\nverts = cs.collections[7].get_paths()[0]\n\n# highlight the selected contour with yellow\naxs[0].add_patch(patches.PathPatch(verts, facecolor='none', ec='yellow', lw=2, zorder=50))\n\n# make a mask from it with the dimensions of Z\nmask = verts.contains_points(list(np.ndindex(Z.shape)))\nmask = mask.reshape(Z.shape).T\n\naxs[1].set_title('Mask of everything within one contour line')\naxs[1].imshow(mask, cmap=plt.cm.Greys_r, interpolation='none')\n\n# get the sum of everything within the contour\n# the mask is inverted because everything within the contour should not be masked\nprint np.ma.MaskedArray(Z, mask=~mask).sum()\n\nNote that contour lines which 'leave' the plot at different edges by default wont make a path which follows these edges. These lines would need some additional processing.\n\n", "Another way, perhaps more intuitive, is the binary_fill_holes function from scipy.ndimage.\nimport numpy as np\nimport scipy\n\n\nimage = np.zeros((512, 512))\nimage[contour1[:, 0], contour1[:, 1]] = 1\nmasked_image = scipy.ndimage.morphology.binary_fill_holes(image)\n```\n\n", "Here is how to create a filled polygon from contours and create a binary mask using OpenCV\nimport cv2\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nmask = np.zeros((10,10,3), dtype=np.uint8)\n# polygon's coordinates\ncoords = np.array([[3,3],[3,6],[6,6],[6,3]])\n\ncv2.drawContours(mask, [coords], contourIdx=-1, color=(1,1,1), thickness=-1)\nbin_mask = mask[:,:,0].astype(np.float32)\nplt.imshow(bin_mask, cmap='gray')\n\n\ncontourIdx=-1 - draw all contours\n\ncolor=(1,1,1) - a number from 0 to 255 for each channel; since we\ngenerate a binary mask it is set to 1\n\nthickness=-1 - fills the polygon\n\n\n\n" ]
[ 7, 5, 0 ]
[]
[]
[ "contour", "mask", "matplotlib", "plot", "python" ]
stackoverflow_0016975458_contour_mask_matplotlib_plot_python.txt
Q: CNN model did not learn anything from the training data. Where are the mistakes I made? The shape of the train/test data is (samples, 256, 256, 1). The training dataset has around 1400 samples, the validation dataset has 150 samples, and the test dataset has 250 samples. Then I build a CNN model for a six-object classification task. However, no matter how hard I tuning the parameters and add/remove layers(conv&dense), I get a chance level of accuracy all the time (around 16.5%). Thus, I would like to know whether I made some deadly mistakes while building the model. Or there is something wrong with the data itself, not the CNN model. Code: def build_cnn_model(input_shape, activation='relu'): model = Sequential() # 3 Convolution layer with Max polling model.add(Conv2D(64, (5, 5), activation=activation, padding = 'same', input_shape=input_shape)) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(128, (5, 5), activation=activation, padding = 'same')) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(256, (5, 5), activation=activation, padding = 'same')) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) # 3 Full connected layer model.add(Dense(1024, activation = activation)) model.add(Dropout(0.5)) model.add(Dense(512, activation = activation)) model.add(Dropout(0.5)) model.add(Dense(6, activation = 'softmax')) # 6 classes # summarize the model print(model.summary()) return model def compile_and_fit_model(model, X_train, y_train, X_vali, y_vali, batch_size, n_epochs, LR=0.01): # compile the model model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=LR), loss='sparse_categorical_crossentropy', metrics=['sparse_categorical_accuracy']) # fit the model history = model.fit(x=X_train, y=y_train, batch_size=batch_size, epochs=n_epochs, verbose=1, validation_data=(X_vali, y_vali)) return model, history I transformed the MEG data my professor recorded into Magnitude Scalogram using CWT. pywt.cwt(data, scales, wavelet) was used. And if I plot the coefficients I got from cwt, I will have a graph like this (I emerged 62 channels into one graph). enter image description here I used the coefficients as train/test data for the CNN model. However, I tuned the parameters and tried to add/remove layers for the CNN model, and the classification accuracy was unchanged. Thus, I want to know where I made mistakes. Did I make mistakes with building the CNN model, or did I make mistakes with CWT (the way I handled data)? Please give me some advices, thank you. A: How is the accuracy of the training data? If you have a small dataset and the model does not overfit after training for a while, then something is wrong with the model. You can also test with existing datasets, which the model should be able to handle (like Fashion MNIST). Testing if you handled the data correctly is harder. Did you write unit tests for the different steps in the preprocessing pipeline?
CNN model did not learn anything from the training data. Where are the mistakes I made?
The shape of the train/test data is (samples, 256, 256, 1). The training dataset has around 1400 samples, the validation dataset has 150 samples, and the test dataset has 250 samples. Then I build a CNN model for a six-object classification task. However, no matter how hard I tuning the parameters and add/remove layers(conv&dense), I get a chance level of accuracy all the time (around 16.5%). Thus, I would like to know whether I made some deadly mistakes while building the model. Or there is something wrong with the data itself, not the CNN model. Code: def build_cnn_model(input_shape, activation='relu'): model = Sequential() # 3 Convolution layer with Max polling model.add(Conv2D(64, (5, 5), activation=activation, padding = 'same', input_shape=input_shape)) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(128, (5, 5), activation=activation, padding = 'same')) model.add(MaxPooling2D((2, 2))) model.add(Conv2D(256, (5, 5), activation=activation, padding = 'same')) model.add(MaxPooling2D((2, 2))) model.add(Flatten()) # 3 Full connected layer model.add(Dense(1024, activation = activation)) model.add(Dropout(0.5)) model.add(Dense(512, activation = activation)) model.add(Dropout(0.5)) model.add(Dense(6, activation = 'softmax')) # 6 classes # summarize the model print(model.summary()) return model def compile_and_fit_model(model, X_train, y_train, X_vali, y_vali, batch_size, n_epochs, LR=0.01): # compile the model model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=LR), loss='sparse_categorical_crossentropy', metrics=['sparse_categorical_accuracy']) # fit the model history = model.fit(x=X_train, y=y_train, batch_size=batch_size, epochs=n_epochs, verbose=1, validation_data=(X_vali, y_vali)) return model, history I transformed the MEG data my professor recorded into Magnitude Scalogram using CWT. pywt.cwt(data, scales, wavelet) was used. And if I plot the coefficients I got from cwt, I will have a graph like this (I emerged 62 channels into one graph). enter image description here I used the coefficients as train/test data for the CNN model. However, I tuned the parameters and tried to add/remove layers for the CNN model, and the classification accuracy was unchanged. Thus, I want to know where I made mistakes. Did I make mistakes with building the CNN model, or did I make mistakes with CWT (the way I handled data)? Please give me some advices, thank you.
[ "How is the accuracy of the training data? If you have a small dataset and the model does not overfit after training for a while, then something is wrong with the model. You can also test with existing datasets, which the model should be able to handle (like Fashion MNIST).\nTesting if you handled the data correctly is harder. Did you write unit tests for the different steps in the preprocessing pipeline?\n" ]
[ 0 ]
[]
[]
[ "conv_neural_network", "python", "tensorflow", "time_series", "wavelet_transform" ]
stackoverflow_0074516257_conv_neural_network_python_tensorflow_time_series_wavelet_transform.txt
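A compact version of the sanity check the answer above recommends: a healthy model and pipeline should be able to overfit a tiny slice of the training data. The snippet reuses build_cnn_model, X_train and y_train from the question and assumes TensorFlow is available; the 1e-4 learning rate is an assumption as well (the question's 0.01 is unusually high for Adam and can by itself keep a network from learning).

    import tensorflow as tf

    tiny_X, tiny_y = X_train[:32], y_train[:32]          # a handful of samples

    model = build_cnn_model(input_shape=(256, 256, 1))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["sparse_categorical_accuracy"])

    history = model.fit(tiny_X, tiny_y, batch_size=8, epochs=100, verbose=0)
    print("final training accuracy:",
          history.history["sparse_categorical_accuracy"][-1])
    # If this does not approach 1.0, the problem is in the model, labels or
    # preprocessing rather than in the amount of data.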
Q: How do I add the format for underlining text in xlsxwriter? Just want to know how to create a variable for underlining text in the package xlsxwriter. For example, this is the one I created for making it bold: bold_format = workbook.add_format({'bold': True}) Sorry if the answer is blatantly obvious, I tried looking it up to no avail. A: You can specify several different cell formats in the dictionary: cell_format = workbook.add_format({'bold': True, 'font_color': 'red', 'num_format': '$#,##0.00',}) worksheet.write('A1', 'Cell A1', cell_format) # Later... cell_format.set_font_color('green') worksheet.write('B1', 'Cell B1', cell_format)
How do I add the format for underlining text in xlsxwriter?
Just want to know how to create a variable for underlining text in the package xlsxwriter. For example, this is the one I created for making it bold: bold_format = workbook.add_format({'bold': True}) Sorry if the answer is blatantly obvious, I tried looking it up to no avail.
[ "You can specify several different cell formats in the dictionary:\ncell_format = workbook.add_format({'bold': True, 'font_color': 'red', 'num_format': '$#,##0.00',})\nworksheet.write('A1', 'Cell A1', cell_format)\n\n# Later...\ncell_format.set_font_color('green')\nworksheet.write('B1', 'Cell B1', cell_format)\n\n" ]
[ 1 ]
[]
[]
[ "python", "xlsxwriter" ]
stackoverflow_0074533834_python_xlsxwriter.txt
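The answer above demonstrates the format dictionary but never names the underline property the title asks about. A minimal sketch of that property is shown below; the file and cell names are arbitrary.

    import xlsxwriter

    workbook = xlsxwriter.Workbook("demo.xlsx")
    worksheet = workbook.add_worksheet()

    underline_format = workbook.add_format({'underline': 1})   # 1 = single, 2 = double
    worksheet.write('A1', 'Underlined text', underline_format)

    workbook.close()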
Q: Error: RuntimeError: file : Object's name 'scrollList' is not unique New to python and coding in general so I'm having trouble understanding how to do stuff. This is for a class so I can't do mel or pyMel. I'm trying to write a code that can save faces and store them in a UI however it gives me an error that "# Error: RuntimeError: file line 28: Object's name 'scrollList' is not unique." Can't seem to figure out how to solve this unique name issue. here is my code if it helps import maya.cmds as cmds def button_closure(fn, *args, **kwargs): ''' Function to wrap buttons ''' def wrapped(_): fn(*args,**kwargs) return wrapped def refresh_lst(*args): ''' function to refresh list ''' selection = cmds.ls(sl = True) # Removes the current selection from the textScrollList cmds.textScrollList("scrollList", e = True, removeAll = True) # Adds in the newest selection to be reflected in the textScrollList cmds.textScrollList("scrollList", query=True,append = selection, e = True) def store_selected(scrollList): ''' Function to select faces and store in a textScrollList ''' face_list=[] selection = cmds.ls(sl=True) #takes selection face_list = cmds.filterExpand(selection,sm=34) #makes sure to only select just the poly face cmds.textScrollList("scrollList",enable=True, append=face_list[0]) #adds the selection to the list print(face_list) def select_stored(scrollList): txt =cmds.textScrollList("scrollList",query=True, si=True) #to store it cmds.select(txt, r=True) cmds.filterExpand(selection,sm=34) print(txt, 'is selected') def parent_window(): window = cmds.window( title="UV ToolBar", iconName='UVTools', widthHeight=(316, 500)) # scroll bar cmds.scrollLayout( childResizable = True, borderVisible= True, verticalScrollBarAlwaysVisible=True) master_layout= main = cmds.columnLayout(adjustableColumn = True, rowSpacing = 5) # Text Scroll List selection_textscrollList = cmds.textScrollList("scrollList", ams = True) cmds.button(label='Store', command= button_closure(store_selected, selection_textscrollList)) cmds.setParent(master_layout) cmds.button(label='Select Stored', command= button_closure(select_stored, selection_textscrollList)) cmds.button(label='Refresh List', command= button_closure(refresh_lst, selection_textscrollList)) # Set its parent to the Maya window (denoted by '..') cmds.setParent( '..' ) # Separator for parent window cmds.separator(height=5, style='out', backgroundColor= [0.2, 0.2, 0.3]) # Show the window that we created (window) cmds.showWindow( window ) return selection_textscrollList parent_window() Tried multiple ways like defining a function or using different *args A: First make sure there is no window with the same name by using deleteUI if it exists. Next, it is not a good practice to rely on names for maya ui elements because maya renames the elements if it thinks this is necessary. So something like: selection_textscrollList = cmds.textScrollList("scrollList", ams = True) may return an element called window1|scrollLayout13|columnLayout56|scrollList or it returns an element called window1|scrollLayout13|columnLayout57|scrollList1 so the best way is to always use the returned names to access UI elements. To make this access easier I recommend the usage of a class. This way you can store all needed UI names and callbacks internally in the class and access easily them if needed without the need of global variables or strange callback constructs.
Error: RuntimeError: file : Object's name 'scrollList' is not unique
New to python and coding in general so I'm having trouble understanding how to do stuff. This is for a class so I can't do mel or pyMel. I'm trying to write a code that can save faces and store them in a UI however it gives me an error that "# Error: RuntimeError: file line 28: Object's name 'scrollList' is not unique." Can't seem to figure out how to solve this unique name issue. here is my code if it helps import maya.cmds as cmds def button_closure(fn, *args, **kwargs): ''' Function to wrap buttons ''' def wrapped(_): fn(*args,**kwargs) return wrapped def refresh_lst(*args): ''' function to refresh list ''' selection = cmds.ls(sl = True) # Removes the current selection from the textScrollList cmds.textScrollList("scrollList", e = True, removeAll = True) # Adds in the newest selection to be reflected in the textScrollList cmds.textScrollList("scrollList", query=True,append = selection, e = True) def store_selected(scrollList): ''' Function to select faces and store in a textScrollList ''' face_list=[] selection = cmds.ls(sl=True) #takes selection face_list = cmds.filterExpand(selection,sm=34) #makes sure to only select just the poly face cmds.textScrollList("scrollList",enable=True, append=face_list[0]) #adds the selection to the list print(face_list) def select_stored(scrollList): txt =cmds.textScrollList("scrollList",query=True, si=True) #to store it cmds.select(txt, r=True) cmds.filterExpand(selection,sm=34) print(txt, 'is selected') def parent_window(): window = cmds.window( title="UV ToolBar", iconName='UVTools', widthHeight=(316, 500)) # scroll bar cmds.scrollLayout( childResizable = True, borderVisible= True, verticalScrollBarAlwaysVisible=True) master_layout= main = cmds.columnLayout(adjustableColumn = True, rowSpacing = 5) # Text Scroll List selection_textscrollList = cmds.textScrollList("scrollList", ams = True) cmds.button(label='Store', command= button_closure(store_selected, selection_textscrollList)) cmds.setParent(master_layout) cmds.button(label='Select Stored', command= button_closure(select_stored, selection_textscrollList)) cmds.button(label='Refresh List', command= button_closure(refresh_lst, selection_textscrollList)) # Set its parent to the Maya window (denoted by '..') cmds.setParent( '..' ) # Separator for parent window cmds.separator(height=5, style='out', backgroundColor= [0.2, 0.2, 0.3]) # Show the window that we created (window) cmds.showWindow( window ) return selection_textscrollList parent_window() Tried multiple ways like defining a function or using different *args
[ "First make sure there is no window with the same name by using deleteUI if it exists. Next, it is not a good practice to rely on names for maya ui elements because maya renames the elements if it thinks this is necessary. So something like:\nselection_textscrollList = cmds.textScrollList(\"scrollList\", ams = True)\n\nmay return an element called window1|scrollLayout13|columnLayout56|scrollList or it returns an element called window1|scrollLayout13|columnLayout57|scrollList1 so the best way is to always use the returned names to access UI elements. To make this access easier I recommend the usage of a class. This way you can store all needed UI names and callbacks internally in the class and access easily them if needed without the need of global variables or strange callback constructs.\n" ]
[ 1 ]
[]
[]
[ "maya", "python", "python_3.x", "user_interface" ]
stackoverflow_0074532479_maya_python_python_3.x_user_interface.txt
Q: Adding a plot to a matplotlib table I have the following table: fig,ax = plt.subplots(1,1,figsize=(16,16)) ax.axis('off') nrows= 6 ncols=3 table = ax.table(cellText=[['']*ncols]*nrows,loc='top', rowLoc='center',colLoc='center') for j,text in zip(range(3),['Group','Chart','Comments']): table[(0,j)].get_text().set_text(text) for i,text in zip(range(1,nrows),list('ABCDE')): table[(i,0)].get_text().set_text(text) for i in range(nrows): for j in range(ncols): table[(i,j)].set_height(0.2) table[(i,j)]._loc = 'center' table[(i,j)].set_fontsize(16) I am trying to add a line chart to the middle column (Chart) for this example, the line can be just a diagonal plt.plot([0,1],[0,1]) any ideas? A: I agree with Stef's comment, this would probably be easier using GridSpec and subplots. But for the sake of documenting this workaround: You could use an inset axis inside the existing Table's axis. You would just have to find the xy location of the cell in which you want to plot them and their width/height. You could add this to your code: plt.draw() for r in range(1, nrows): x0, y0, w, h = (*table[r, 1].get_xy(), table[r, 1].get_width()*.8, table[r, 1].get_height()*.8) axpl = ax.inset_axes(bounds=(x0+0.2*w, y0+0.2*h, w, h)) axpl.plot([0, 1], [0, 1]) I decreased the inset axis size to fit in the cell. Not that I used plt.draw() before the call so that the table[i,j].xy get populated correctly.
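For comparison, the GridSpec/subplots route mentioned at the start of the answer could look roughly like this — a minimal sketch with placeholder labels, not a drop-in replacement for the asker's styled table:

import matplotlib.pyplot as plt

nrows, ncols = 6, 3
fig, axs = plt.subplots(nrows, ncols, figsize=(16, 16))

# Header row drawn as text in the first row of axes.
for j, header in enumerate(['Group', 'Chart', 'Comments']):
    axs[0, j].text(0.5, 0.5, header, ha='center', va='center', fontsize=16)

# One row per group: label, a small line chart, and a comment cell.
for i, label in enumerate('ABCDE', start=1):
    axs[i, 0].text(0.5, 0.5, label, ha='center', va='center', fontsize=16)
    axs[i, 1].plot([0, 1], [0, 1])
    axs[i, 2].text(0.05, 0.5, 'comment...', va='center')

# Hide ticks so the grid reads like a table.
for ax in axs.flat:
    ax.set_xticks([])
    ax.set_yticks([])

plt.show()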
Adding a plot to a matplotlib table
I have the following table: fig,ax = plt.subplots(1,1,figsize=(16,16)) ax.axis('off') nrows= 6 ncols=3 table = ax.table(cellText=[['']*ncols]*nrows,loc='top', rowLoc='center',colLoc='center') for j,text in zip(range(3),['Group','Chart','Comments']): table[(0,j)].get_text().set_text(text) for i,text in zip(range(1,nrows),list('ABCDE')): table[(i,0)].get_text().set_text(text) for i in range(nrows): for j in range(ncols): table[(i,j)].set_height(0.2) table[(i,j)]._loc = 'center' table[(i,j)].set_fontsize(16) I am trying to add a line chart to the middle column (Chart) for this example, the line can be just a diagonal plt.plot([0,1],[0,1]) any ideas?
[ "I agree with Stef's comment, this would probably be easier using GridSpec and subplots. But for the sake of documenting this workaround:\nYou could use an inset axis inside the existing Table's axis. You would just have to find the xy location of the cell in which you want to plot them and their width/height.\nYou could add this to your code:\nplt.draw()\nfor r in range(1, nrows):\n x0, y0, w, h = (*table[r, 1].get_xy(), table[r, 1].get_width()*.8, table[r, 1].get_height()*.8)\n axpl = ax.inset_axes(bounds=(x0+0.2*w, y0+0.2*h, w, h))\n axpl.plot([0, 1], [0, 1])\n\nI decreased the inset axis size to fit in the cell.\nNot that I used plt.draw() before the call so that the table[i,j].xy get populated correctly.\n\n" ]
[ 2 ]
[]
[]
[ "matplotlib", "pandas", "plot", "python", "visualization" ]
stackoverflow_0074529651_matplotlib_pandas_plot_python_visualization.txt
Q: Paramiko's SSHClient with SFTP How I can make SFTP transport through SSHClient on the remote server? I have a local host and two remote hosts. Remote hosts are backup server and web server. I need to find on backup server necessary backup file and put it on web server over SFTP. How can I make Paramiko's SFTP transport work with Paramiko's SSHClient? A: paramiko.SFTPClient Sample Usage: import paramiko paramiko.util.log_to_file("paramiko.log") # Open a transport host,port = "example.com",22 transport = paramiko.Transport((host,port)) # Auth username,password = "bar","foo" transport.connect(None,username,password) # Go! sftp = paramiko.SFTPClient.from_transport(transport) # Download filepath = "/etc/passwd" localpath = "/home/remotepasswd" sftp.get(filepath,localpath) # Upload filepath = "/home/foo.jpg" localpath = "/home/pony.jpg" sftp.put(localpath,filepath) # Close if sftp: sftp.close() if transport: transport.close() A: The accepted answer "works". But with its use of the low-level Transport class, it bypasses a host key verification, what is a security flaw, as it makes the code susceptible to Man-in-the-middle attacks. Better is to use the right Paramiko SSH API, the SSHClient, which does verify the host key: import paramiko paramiko.util.log_to_file("paramiko.log") ssh = paramiko.SSHClient() ssh.connect(host, username='user', password='password') # or # key = paramiko.RSAKey.from_private_key_file('id_rsa') # ssh.connect(host, username='user', pkey=key) sftp = ssh.open_sftp() sftp.get(remotepath, localpath) # or sftp.put(localpath, remotepath) For details about verifying the host key, see: Paramiko "Unknown Server" A: If you have a SSHClient, you can also use open_sftp(): import paramiko # lets say you have SSH client... client = paramiko.SSHClient() sftp = client.open_sftp() # then you can use upload & download as shown above ... A: In addition to the first answer which is great but depends on username/password, the following shows how to use an ssh key: from paramiko import Transport, SFTPClient, RSAKey key = RSAKey(filename='path_to_my_rsakey') con = Transport('remote_host_name_or_ip', 22) con.connect(None,username='my_username', pkey=key) sftp = SFTPClient.from_transport(con) sftp.listdir(path='.') A: For those anyone need to integrate with an ssh/sftp server that requires a private key and want to perform host key verification for the known host by using a specific public key, here is a snippet code with paramiko: import paramiko sftp_hostname = "target.hostname.com" sftp_username = "tartgetHostUsername" sftp_private_key = "/path/to/private_key_file.pvt" sftp_private_key_password = "private_key_file_passphrase_if_it_encrypted" sftp_public_key = "/path/to/public_certified_file.pub" sftp_port = 22 remote_path = "." target_local_path = "/path/to/target/folder" ssh = paramiko.SSHClient() # Load target host public cert for host key verification ssh.load_host_keys(sftp_public_key) # Load encrypted private key and ssh connect key = paramiko.RSAKey.from_private_key_file(sftp_private_key, sftp_private_key_password) ssh.connect(host=sftp_hostname, port=sftp_port, username=sftp_username, pkey=key) # Get the sftp connection sftp_connection = ssh.open_sftp() directory_list = sftp_connection.listdir(remote_path) # ... 
if sftp_connection: sftp_connection.close() if ssh: ssh.close() Notice that only certificates in the classic OpenSSH format are supported; otherwise they need to be converted with the following commands (also for the latest OpenSSH formats): $chmod 400 /path/to/private_key_file.pvt $ssh-keygen -p -f /path/to/private_key_file.pvt -m pem -P <currentPassphrase> -N <newPassphrase> In order to avoid a man-in-the-middle attack, it is important not to use paramiko.AutoAddPolicy() and to load the public host key programmatically as above, or load it from ~/.ssh/known_hosts The file must be in the format "<host_name> ssh-rsa AAAAB3NzaC1yc2EAAAA..." In case you don't have the public key and you trust the target host (beware of MITM), you can download it using the $ssh-keyscan target.hostname.com command. The above code is the only way I found to avoid the following error during connection: paramiko.ssh_exception.SSHException: Server 'x.y.z' not found in known_hosts This error was also raised with the following way to load the public certificates: key = paramiko.RSAKey(data=decodebytes(sftp_public_key)) ssh_client.get_host_keys().add(sftp_hostname, 'ssh-rsa', key) The following code was also not able to load the certificate for me (I also tried encoding the certificate in base64): ssh_client=paramiko.SSHClient() rsa_key = paramiko.RSAKey.from_private_key_file(sftp_private_key, sftp_private_key_password) rsa_key.load_certificate(sftp_public_key) It always ends with: File "/usr/local/lib/python3.9/site-packages/paramiko/pkey.py", line 720, in from_string key_blob = decodebytes(b(fields[1])) File "/usr/lib64/python3.9/base64.py", line 538, in decodebytes return binascii.a2b_base64(s) binascii.Error: Incorrect padding The code above worked for the SFTP integration with GoAnywhere. I hope this is helpful; I've not found any working example and spent many hours in searches and tests. Implementations using the pysftp wrapper should be considered discontinued as of 2016.
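For the asker's actual scenario (copying a backup file from one remote host to another over SFTP), a hedged sketch that combines the SSHClient approach with two sessions could look like this. The hostnames, paths, username and key file are placeholders, and both hosts are assumed to already have entries in ~/.ssh/known_hosts:

import os
import tempfile
import paramiko

def copy_backup(backup_host, web_host, remote_src, remote_dst, username, key_path):
    key = paramiko.RSAKey.from_private_key_file(key_path)

    def connect(host):
        client = paramiko.SSHClient()
        client.load_system_host_keys()   # verify hosts against ~/.ssh/known_hosts
        client.connect(host, username=username, pkey=key)
        return client

    backup_ssh, web_ssh = connect(backup_host), connect(web_host)
    try:
        # Stage the file locally, then push it to the web server.
        tmp = tempfile.NamedTemporaryFile(delete=False)
        tmp.close()
        backup_ssh.open_sftp().get(remote_src, tmp.name)   # backup server -> local
        web_ssh.open_sftp().put(tmp.name, remote_dst)      # local -> web server
        os.remove(tmp.name)
    finally:
        backup_ssh.close()
        web_ssh.close()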
Paramiko's SSHClient with SFTP
How I can make SFTP transport through SSHClient on the remote server? I have a local host and two remote hosts. Remote hosts are backup server and web server. I need to find on backup server necessary backup file and put it on web server over SFTP. How can I make Paramiko's SFTP transport work with Paramiko's SSHClient?
[ "paramiko.SFTPClient\nSample Usage:\nimport paramiko\nparamiko.util.log_to_file(\"paramiko.log\")\n\n# Open a transport\nhost,port = \"example.com\",22\ntransport = paramiko.Transport((host,port))\n\n# Auth \nusername,password = \"bar\",\"foo\"\ntransport.connect(None,username,password)\n\n# Go! \nsftp = paramiko.SFTPClient.from_transport(transport)\n\n# Download\nfilepath = \"/etc/passwd\"\nlocalpath = \"/home/remotepasswd\"\nsftp.get(filepath,localpath)\n\n# Upload\nfilepath = \"/home/foo.jpg\"\nlocalpath = \"/home/pony.jpg\"\nsftp.put(localpath,filepath)\n\n# Close\nif sftp: sftp.close()\nif transport: transport.close()\n\n", "The accepted answer \"works\". But with its use of the low-level Transport class, it bypasses a host key verification, what is a security flaw, as it makes the code susceptible to Man-in-the-middle attacks.\nBetter is to use the right Paramiko SSH API, the SSHClient, which does verify the host key:\nimport paramiko\nparamiko.util.log_to_file(\"paramiko.log\")\n\nssh = paramiko.SSHClient()\nssh.connect(host, username='user', password='password')\n# or \n# key = paramiko.RSAKey.from_private_key_file('id_rsa')\n# ssh.connect(host, username='user', pkey=key)\n\nsftp = ssh.open_sftp()\n\nsftp.get(remotepath, localpath)\n# or\nsftp.put(localpath, remotepath)\n\n\nFor details about verifying the host key, see:\nParamiko \"Unknown Server\"\n", "If you have a SSHClient, you can also use open_sftp():\nimport paramiko\n\n\n# lets say you have SSH client...\nclient = paramiko.SSHClient()\n\nsftp = client.open_sftp()\n\n# then you can use upload & download as shown above\n...\n\n", "In addition to the first answer which is great but depends on username/password, the following shows how to use an ssh key:\nfrom paramiko import Transport, SFTPClient, RSAKey\nkey = RSAKey(filename='path_to_my_rsakey')\ncon = Transport('remote_host_name_or_ip', 22)\ncon.connect(None,username='my_username', pkey=key)\nsftp = SFTPClient.from_transport(con)\nsftp.listdir(path='.')\n\n", "For those anyone need to integrate with an ssh/sftp server that requires a private key and want to perform host key verification for the known host by using a specific public key, here is a snippet code with paramiko:\nimport paramiko\n\nsftp_hostname = \"target.hostname.com\"\nsftp_username = \"tartgetHostUsername\"\nsftp_private_key = \"/path/to/private_key_file.pvt\"\nsftp_private_key_password = \"private_key_file_passphrase_if_it_encrypted\"\nsftp_public_key = \"/path/to/public_certified_file.pub\"\nsftp_port = 22\nremote_path = \".\"\ntarget_local_path = \"/path/to/target/folder\"\n\nssh = paramiko.SSHClient()\n \n# Load target host public cert for host key verification\nssh.load_host_keys(sftp_public_key)\n\n# Load encrypted private key and ssh connect\nkey = paramiko.RSAKey.from_private_key_file(sftp_private_key, sftp_private_key_password)\nssh.connect(host=sftp_hostname, port=sftp_port, username=sftp_username, pkey=key)\n\n# Get the sftp connection\nsftp_connection = ssh.open_sftp()\n\ndirectory_list = sftp_connection.listdir(remote_path)\n\n# ...\n\nif sftp_connection: sftp_connection.close()\nif ssh: ssh.close()\n\nNotice that only certificates in classic Openssh format are supported, otherwise needs to be converted with the following commands (also for the latest Openssh formats):\n$chmod 400 /path/to/private_key_file.pvt\n$ssh-keygen -p -f /path/to/private_key_file.pvt -m pem -P <currentPassphrase> -N <newPassphrase>\n\nIn order to avoid man in the middle attack, it is important to do not use 
paramiko.AutoAddPolicy() and load the public host key programmatically as above or load it from ~/.ssh/known_hosts\nThe file must be in the format \"<host_name> ssh-rsa AAAAB3NzaC1yc2EAAAA...\"\nIn case you don't have the public key and you trust the target host (take care to mitm), you can download it using $ssh-keyscan target.hostname.com command.\nThe above code is the only way I found to avoid the following error during connection:\nparamiko.ssh_exception.SSHException: Server 'x.y.z' not found in known_hosts\n\nThis error was prompted also with the following way to load the public certificates:\nkey = paramiko.RSAKey(data=decodebytes(sftp_public_key))\nssh_client.get_host_keys().add(sftp_hostname, 'ssh-rsa', key)\n\nAlso the following code was not able for me to load the certificate (tried also by encoding the certificate in base64):\nssh_client=paramiko.SSHClient()\n\nrsa_key = paramiko.RSAKey.from_private_key_file(sftp_private_key, sftp_private_key_password)\nrsa_key.load_certificate(sftp_public_key)\n\nIt always ends with:\n File \"/usr/local/lib/python3.9/site-packages/paramiko/pkey.py\", line 720, in from_string\n key_blob = decodebytes(b(fields[1]))\n File \"/usr/lib64/python3.9/base64.py\", line 538, in decodebytes\n return binascii.a2b_base64(s)\nbinascii.Error: Incorrect padding\n\nThe above code above worked for the SFTP integration with GoAnywhere.\nI hope this is helpful, I've not found any working example and spent many hours in searches and tests.\nThe implementations using pysftp wrapper it is now to be considered as discontinued from 2016.\n" ]
[ 207, 29, 8, 4, 1 ]
[]
[]
[ "paramiko", "python", "sftp", "ssh" ]
stackoverflow_0003635131_paramiko_python_sftp_ssh.txt
Q: Kivy + pyzbar does not decode QR properly on Android I am working on a Kivy App that takes an image through: texture = self.camera.texture size = texture.size pixels = texture.pixels The information above is used for the following function: import numpy from PIL import Image from pyzbar.pyzbar import decode def convert_qr(size, pixels): pil_image = Image.frombytes(mode='RGBA', size=size,data=pixels) #This returns an array of length 480 numpypicture = numpy.array(pil_image) # PC returns a list of 1 # Android returns an empty list barcodes = decode(numpypicture) #barcode_info = barcodes[0].data.decode('utf-8') return str(len(barcodes)) Problem I know the problem comes from this line: barcodes = decode(numpypicture) but i don't know how to fix it. When I use the computer camera and run it the function returns '1' for str(len(barcodes)). When I use the android camera, the function returns '0'. This means, the barcodes = decode(numpypicture) does not decode the 'numpypicture' properly. I know for a fact that 'numpypicture' variables works because both PC and Android camera return 480 when i return len(numpypicture). It is only after the barcodes = decode(numpypicture) line that the result between PC and Android Camera is different. (They are scanning the same QR Image) Any idea how i might debug this? A: It might be a dependency issue. Make sure that you add libzbar to your requirements field in buildozer.spec file. pyzbar depends on this to work. Here is some more info about this from a repo I found zbarcamera A: Somehow the picture is mirrored in Android so flipping it with e.g. opencv if the platform is Android solves this problem: if platform is 'android': numpypicture = cv2.flip(numpypicture, 0) barcodes = decode(numpypicture)
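Putting the two answers together, a possible version of the asker's convert_qr that also handles the vertical mirroring (using numpy instead of OpenCV to avoid the extra dependency) could look like this; the flip is an assumption based on the second answer, and size/pixels are expected to come from the camera texture as in the question:

import numpy
from PIL import Image
from pyzbar.pyzbar import decode

def convert_qr(size, pixels):
    pil_image = Image.frombytes(mode='RGBA', size=size, data=pixels)
    frame = numpy.array(pil_image.convert('L'))   # grayscale tends to help pyzbar
    frame = numpy.flipud(frame)                   # texture rows may arrive bottom-up on Android
    barcodes = decode(frame)
    return barcodes[0].data.decode('utf-8') if barcodes else None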
Kivy + pyzbar does not decode QR properly on Android
I am working on a Kivy App that takes an image through: texture = self.camera.texture size = texture.size pixels = texture.pixels The information above is used for the following function: import numpy from PIL import Image from pyzbar.pyzbar import decode def convert_qr(size, pixels): pil_image = Image.frombytes(mode='RGBA', size=size,data=pixels) #This returns an array of length 480 numpypicture = numpy.array(pil_image) # PC returns a list of 1 # Android returns an empty list barcodes = decode(numpypicture) #barcode_info = barcodes[0].data.decode('utf-8') return str(len(barcodes)) Problem I know the problem comes from this line: barcodes = decode(numpypicture) but i don't know how to fix it. When I use the computer camera and run it the function returns '1' for str(len(barcodes)). When I use the android camera, the function returns '0'. This means, the barcodes = decode(numpypicture) does not decode the 'numpypicture' properly. I know for a fact that 'numpypicture' variables works because both PC and Android camera return 480 when i return len(numpypicture). It is only after the barcodes = decode(numpypicture) line that the result between PC and Android Camera is different. (They are scanning the same QR Image) Any idea how i might debug this?
[ "It might be a dependency issue. Make sure that you add libzbar to your requirements field in buildozer.spec file. pyzbar depends on this to work. Here is some more info about this from a repo I found zbarcamera\n", "Somehow the picture is mirrored in Android so flipping it with e.g. opencv if the platform is Android solves this problem:\nif platform is 'android':\n numpypicture = cv2.flip(numpypicture, 0)\nbarcodes = decode(numpypicture)\n\n" ]
[ 0, 0 ]
[]
[]
[ "android", "kivy", "python" ]
stackoverflow_0069457638_android_kivy_python.txt
Q: List View is not working but get_context_data() works I have a ListView but when I call it only the get_context_data method works (the news and category model, not the product) when I try to display the information of the models in the templates. view: class HomeView(ListView): model = Product context_object_name='products' template_name = 'main/home.html' paginate_by = 25 def get_context_data(self, **kwargs): categories = Category.objects.all() news = News.objects.all() context = { 'categories' : categories, 'news' : news, } context = super().get_context_data(**kwargs) return context There is also this piece of code: context = super().get_context_data(**kwargs) If it's written before: categories = Category.objects.all() The Product model is show but not the others. base.html <body> ... {% include "base/categories.html" %} {% block content %}{% endblock %} </body> home.html {% extends 'main/base.html' %} {% block content %} <div> ... <div> {% for product in products %} {% if product.featured == True %} <div> <div> <a href="">{{ product.author }}</a> <small>{{ product.date_posted|date:"F d, Y" }}</small> </div> <p>Some text..</p> </div> {% endif %} {% endfor %} </div> </div> {% endblock content %} categories.html <div> ... <div> {% for category in categories %} <p>{{ category.name }}</p> {% endfor %} </div> <div> {% for new in news %} <p>{{ new.title }}</p> {% endfor %} </div> </div> A: You can also try this: class HomeView(ListView): model = Product context_object_name='products' template_name = 'main/home.html' paginate_by = 25 def get_context_data(self, **kwargs): categories = Category.objects.all() news = News.objects.all() context = super().get_context_data(**kwargs) context["categories"]=categories context["news"]=news return context A: The problem is that you override context, but you need to update it. Try this: def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) categories = Category.objects.all() news = News.objects.all() context.update({ 'categories' : categories, 'news' : news, }) return context
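A combined sketch of the two answers, which also moves the featured-product filter out of the template and into the view via get_queryset; this assumes Product really has a boolean featured field, as the template suggests:

from django.views.generic import ListView
from .models import Product, Category, News   # assumed app-local models

class HomeView(ListView):
    model = Product
    context_object_name = 'products'
    template_name = 'main/home.html'
    paginate_by = 25

    def get_queryset(self):
        # Only featured products, so the template no longer needs
        # the {% if product.featured %} check.
        return Product.objects.filter(featured=True)

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context['categories'] = Category.objects.all()
        context['news'] = News.objects.all()
        return context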
List View is not working but get_context_data() works
I have a ListView but when I call it only the get_context_data method works (the news and category model, not the product) when I try to display the information of the models in the templates. view: class HomeView(ListView): model = Product context_object_name='products' template_name = 'main/home.html' paginate_by = 25 def get_context_data(self, **kwargs): categories = Category.objects.all() news = News.objects.all() context = { 'categories' : categories, 'news' : news, } context = super().get_context_data(**kwargs) return context There is also this piece of code: context = super().get_context_data(**kwargs) If it's written before: categories = Category.objects.all() The Product model is show but not the others. base.html <body> ... {% include "base/categories.html" %} {% block content %}{% endblock %} </body> home.html {% extends 'main/base.html' %} {% block content %} <div> ... <div> {% for product in products %} {% if product.featured == True %} <div> <div> <a href="">{{ product.author }}</a> <small>{{ product.date_posted|date:"F d, Y" }}</small> </div> <p>Some text..</p> </div> {% endif %} {% endfor %} </div> </div> {% endblock content %} categories.html <div> ... <div> {% for category in categories %} <p>{{ category.name }}</p> {% endfor %} </div> <div> {% for new in news %} <p>{{ new.title }}</p> {% endfor %} </div> </div>
[ "You can also try this:\nclass HomeView(ListView):\n model = Product\n context_object_name='products'\n template_name = 'main/home.html'\n paginate_by = 25\n\n def get_context_data(self, **kwargs):\n categories = Category.objects.all()\n news = News.objects.all()\n context = super().get_context_data(**kwargs)\n context[\"categories\"]=categories\n context[\"news\"]=news\n return context\n\n", "The problem is that you override context, but you need to update it. Try this:\ndef get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n categories = Category.objects.all()\n news = News.objects.all()\n context.update({\n 'categories' : categories,\n 'news' : news,\n })\n \n return context\n\n" ]
[ 2, 1 ]
[]
[]
[ "django", "django_templates", "django_views", "python" ]
stackoverflow_0074533558_django_django_templates_django_views_python.txt
Q: Pandas function only works on individual columns but not entire dataframe I have a dataframe like the following (example data given): df = pd.DataFrame({'smiles': ['CCCCC', 'CCCC1', 'CCCN1'], 'ID' : ['A-111', 'A112', 'A-113'], 'Parameter_1':[30.0, 31.4, 15.9], 'Parameter_2':[NaN, '0.644', '4.38E-02'], 'Date': [dt.date(2021, 1, 1), dt.date(2021, 1, 2), dt.date(2021, 1, 3)]}) I have the following function: def num_parse(element): try: float(element) return float(element) except ValueError: return(element) except TypeError: return(element) When I apply my function to individual columns it works fine - converting any string that can be floated into a float and leaving all other strings as is and also leaving the datetime column as is. df['Parameter_1'] = df['Parameter_1'].apply(num_parse) When I apply this to my entire dataframe I keep getting the following error: df = df.apply(num_parse) TypeError: cannot convert the series to <class 'float'> I am unsure why, please help. A: Use applymap() df.applymap(num_parse) You could also: df.apply(num_parse, axis=1)
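A self-contained sketch of the applymap fix on the asker's sample frame, with a slightly condensed num_parse; note that on pandas 2.1+ applymap is deprecated in favour of DataFrame.map, which works the same way here:

import datetime as dt
import numpy as np
import pandas as pd

df = pd.DataFrame({'smiles': ['CCCCC', 'CCCC1', 'CCCN1'],
                   'ID': ['A-111', 'A112', 'A-113'],
                   'Parameter_1': [30.0, 31.4, 15.9],
                   'Parameter_2': [np.nan, '0.644', '4.38E-02'],
                   'Date': [dt.date(2021, 1, 1), dt.date(2021, 1, 2), dt.date(2021, 1, 3)]})

def num_parse(element):
    try:
        return float(element)
    except (ValueError, TypeError):
        return element

df = df.applymap(num_parse)   # element-wise over the whole frame
# On pandas >= 2.1 you would use: df = df.map(num_parse)
print(df.dtypes)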
Pandas function only works on individual columns but not entire dataframe
I have a dataframe like the following (example data given): df = pd.DataFrame({'smiles': ['CCCCC', 'CCCC1', 'CCCN1'], 'ID' : ['A-111', 'A112', 'A-113'], 'Parameter_1':[30.0, 31.4, 15.9], 'Parameter_2':[NaN, '0.644', '4.38E-02'], 'Date': [dt.date(2021, 1, 1), dt.date(2021, 1, 2), dt.date(2021, 1, 3)]}) I have the following function: def num_parse(element): try: float(element) return float(element) except ValueError: return(element) except TypeError: return(element) When I apply my function to individual columns it works fine - converting any string that can be floated into a float and leaving all other strings as is and also leaving the datetime column as is. df['Parameter_1'] = df['Parameter_1'].apply(num_parse) When I apply this to my entire dataframe I keep getting the following error: df = df.apply(num_parse) TypeError: cannot convert the series to <class 'float'> I am unsure why, please help.
[ "Use applymap()\ndf.applymap(num_parse)\n\nYou could also:\ndf.apply(num_parse, axis=1)\n\n" ]
[ 2 ]
[]
[]
[ "function", "pandas", "python" ]
stackoverflow_0074534214_function_pandas_python.txt
Q: How to open a json.gz file and return to dictionary in Python I have downloaded a compressed json file and want to open it as a dictionary. I used json.load but the data type still gives me a string. I want to extract a keyword list from the json file. Is there a way I can do it even though my data is a string? Here is my code: import gzip import json with gzip.open("19.04_association_data.json.gz", "r") as f: data = f.read() with open('association.json', 'w') as json_file: json.dump(data.decode('utf-8'), json_file) with open("association.json", "r") as read_it: association_data = json.load(read_it) print(type(association_data)) #The actual output is 'str' but I expect it is 'dic' A: In the first with block you already got the uncompressed string, no need to open it a second time. import gzip import json with gzip.open("19.04_association_data.json.gz", "r") as f: data = f.read() j = json.loads (data.decode('utf-8')) print (type(j)) A: Open the file using the gzip package from the standard library (docs), then read it directly into json.loads(): import gzip import json with gzip.open("19.04_association_data.json.gz", "rb") as f: data = json.loads(f.read(), encoding="utf-8") A: To read from a json.gz, you can use the following snippet: import json import gzip with gzip.open("file_path_to_read", "rt") as f: expected_dict = json.load(f) The result is of type dict. In case if you want to write to a json.gz, you can use the following snippet: import json import gzip with gzip.open("file_path_to_write", "wt") as f: json.dump(expected_dict, f)
How to open a json.gz file and return to dictionary in Python
I have downloaded a compressed json file and want to open it as a dictionary. I used json.load but the data type still gives me a string. I want to extract a keyword list from the json file. Is there a way I can do it even though my data is a string? Here is my code: import gzip import json with gzip.open("19.04_association_data.json.gz", "r") as f: data = f.read() with open('association.json', 'w') as json_file: json.dump(data.decode('utf-8'), json_file) with open("association.json", "r") as read_it: association_data = json.load(read_it) print(type(association_data)) #The actual output is 'str' but I expect it is 'dic'
[ "In the first with block you already got the uncompressed string, no need to open it a second time.\nimport gzip\nimport json\n\nwith gzip.open(\"19.04_association_data.json.gz\", \"r\") as f:\n data = f.read()\n j = json.loads (data.decode('utf-8'))\n print (type(j))\n\n\n", "Open the file using the gzip package from the standard library (docs), then read it directly into json.loads():\nimport gzip\nimport json \n\nwith gzip.open(\"19.04_association_data.json.gz\", \"rb\") as f:\n data = json.loads(f.read(), encoding=\"utf-8\")\n\n", "To read from a json.gz, you can use the following snippet:\nimport json\nimport gzip\n\nwith gzip.open(\"file_path_to_read\", \"rt\") as f:\n expected_dict = json.load(f)\n\nThe result is of type dict.\nIn case if you want to write to a json.gz, you can use the following snippet:\nimport json\nimport gzip\n\nwith gzip.open(\"file_path_to_write\", \"wt\") as f:\n json.dump(expected_dict, f)\n\n" ]
[ 4, 2, 0 ]
[]
[]
[ "json", "python", "python_3.x" ]
stackoverflow_0056677516_json_python_python_3.x.txt