Q: Replace all newline characters using python

I am trying to read a pdf using python and the content has many newline (crlf) characters. I tried removing them using the code below:

    from tika import parser

    filename = 'myfile.pdf'
    raw = parser.from_file(filename)
    content = raw['content']
    content = content.replace("\r\n", "")
    print(content)

But the output remains unchanged. I tried using double backslashes also, which didn't fix the issue. Can someone please advise?

A: content = content.replace("\\r\\n", "")

You need to double escape them.

A: I don't have access to your pdf file, so I processed one on my system. I also don't know if you need to remove all new lines or just double new lines. The code below removes double new lines, which makes the output more readable. Please let me know if this works for your current needs.

    from tika import parser

    filename = 'myfile.pdf'

    # Parse the PDF
    parsedPDF = parser.from_file(filename)

    # Extract the text content from the parsed PDF
    pdf = parsedPDF["content"]

    # Convert double newlines into single newlines
    pdf = pdf.replace('\n\n', '\n')

    #####################################
    # Do something with the PDF
    #####################################
    print(pdf)

A: If you are having issues with different forms of line break, try the str.splitlines() function and then re-join the result using the string you're after. Like this:

    content = "".join(l for l in content.splitlines() if l)

Then, you just have to change the value within the quotes to what you need to join on. This will allow you to detect all of the line boundaries that str.splitlines() recognizes. Be aware though that str.splitlines() returns a list, not an iterator. So, for large strings, this will blow out your memory usage. In those cases, you are better off using the file stream or io.StringIO and reading line by line.

A: print(open('myfile.txt').read().replace('\n', ''))

A: When you write something like t.replace("\r\n", ""), Python will look for a carriage return followed by a newline. Python will not replace carriage returns by themselves or newline characters by themselves.

Consider the following:

    t = "abc abracadabra abc"
    t.replace("abc", "x")

Will t.replace("abc", "x") replace every occurrence of the letter a with the letter x? No.
Will t.replace("abc", "x") replace every occurrence of the letter b with the letter x? No.
Will t.replace("abc", "x") replace every occurrence of the letter c with the letter x? No.
What will t.replace("abc", "x") do? It will replace the entire string "abc" with the letter "x".

Consider the following:

    test_input = "\r\nAPPLE\rORANGE\nKIWI\n\rPOMEGRANATE\r\nCHERRY\r\nSTRAWBERRY"

    t = test_input
    for _ in range(0, 3):
        t = t.replace("\r\n", "")
        print(repr(t))

    result2 = "".join(test_input.split("\r\n"))
    print(repr(result2))

The output sent to the console is as follows:

    'APPLE\rORANGE\nKIWI\n\rPOMEGRANATECHERRYSTRAWBERRY'
    'APPLE\rORANGE\nKIWI\n\rPOMEGRANATECHERRYSTRAWBERRY'
    'APPLE\rORANGE\nKIWI\n\rPOMEGRANATECHERRYSTRAWBERRY'
    'APPLE\rORANGE\nKIWI\n\rPOMEGRANATECHERRYSTRAWBERRY'

Note that:

- str.replace() replaces every occurrence of the target string, not just the left-most occurrence.
- str.replace() replaces the target string as a whole, not every character of the target string.

If you want to delete all newlines and carriage returns, something like the following will get the job done:

    in_string = "\r\n-APPLE-\r-ORANGE-\n-KIWI-\n\r-POMEGRANATE-\r\n-CHERRY-\r\n-STRAWBERRY-"

    out_string = "".join(filter(lambda ch: ch not in "\n\r", in_string))

    print(repr(out_string))
    # prints -APPLE--ORANGE--KIWI--POMEGRANATE--CHERRY--STRAWBERRY-

A: You can also just use:

    text = '''
    As she said these words her foot slipped, and in another moment, splash! she
    was up to her chin in salt water. Her first idea was that she had somehow
    fallen into the sea, “and in that case I can go back by railway,”
    she said to herself.”'''

    text = ' '.join(text.splitlines())

    print(text)
    # As she said these words her foot slipped, and in another moment, splash! she was up to her chin in salt water. Her first idea was that she had somehow fallen into the sea, “and in that case I can go back by railway,” she said to herself.”
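Pulling the answers above together: a small, self-contained sketch (the input string here is made up, no PDF needed) showing the difference between removing only the exact "\r\n" sequence and removing every line-break character:

```python
import re

sample = "line1\r\nline2\rline3\nline4"

# Removing only the exact "\r\n" sequence leaves lone \r and \n behind
only_crlf = sample.replace("\r\n", "")
print(repr(only_crlf))  # 'line1line2\rline3\nline4'

# A regex character class removes every carriage return and newline
no_breaks = re.sub(r"[\r\n]+", "", sample)
print(repr(no_breaks))  # 'line1line2line3line4'
```

If the original question's output really is unchanged even with the correct "\r\n" pattern, the content likely contains lone "\n" characters, which the character-class approach also handles.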
Q: How to use a LDR to control a fan, an LED ring and a timer

I have a LDR, a 5v fan and a ws2812 LED ring. When the LDR sees light I want the fan to turn off and the LED to turn on. When the LDR doesn't see light I want the LED to turn off and the fan to turn on for 5 mins, and then if the LDR doesn't see light for a day I want the fan to turn on for 5 mins again. For testing purposes I don't want to use 5 mins and a day; I would rather use seconds to do the testing to make sure it works and then change it to the other times. It has been a few years since I have done anything with a Pi and this is the first time doing anything on a Pi Pico with MicroPython.

    from machine import Pin, ADC
    import utime
    import machine
    from ws2812b import ws2812b

    num_leds = 8
    pixels = ws2812b(num_leds, 0, 0, delay=0)
    relay = Pin(15, Pin.OUT)
    ldr = ADC(Pin(28))

    pixels.fill(0, 0, 0)
    pixels.show()
    relay.value(0)

    def setOff():
        for i in range(num_leds):
            pixels.set_pixel(i, 0, 0, 0)
        pixels.show()

    def setWhite(brightness):
        for i in range(num_leds):
            pixels.set_pixel(i, 255, 255, 255)
        pixels.show()

    while True:
        reading = ldr.read_u16()
        print("Value: ", reading)
        utime.sleep(0.2)
        if reading < 10000:
            relay.value(0)
            setWhite(0)
        elif reading > 50000:
            relay.value(1)
            setOff()

I have tried schedule and interrupt but I don't think that is the right way to do things.

A: After you turn your LEDs on, create a timer that will shut them off after the configured amount of time.

    from machine import Timer

    TIMEOUT = 60  # expressed in seconds

    auto_off = Timer(0)
    # MicroPython passes the timer instance to the callback; setOff takes no
    # arguments, so wrap it:
    auto_off.init(period=TIMEOUT*1000, mode=Timer.ONE_SHOT, callback=lambda t: setOff())
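The machine.Timer API above is MicroPython-specific, so it can't be tried on a desktop. The same one-shot idea can be sketched in standard Python with threading.Timer; the variable names and the 0.1-second timeout here are made up for illustration:

```python
import threading

led_on = True  # stand-in for the LED ring state

def set_off():
    global led_on
    led_on = False  # stand-in for turning the LED ring off

# One-shot timer: fires exactly once after the timeout, like Timer.ONE_SHOT
auto_off = threading.Timer(0.1, set_off)
auto_off.start()
auto_off.join()  # block until the callback has run (demonstration only)

print(led_on)  # False
```

On the Pico you would let the timer fire in the background instead of joining, and restart (or cancel) it whenever the LDR reading changes state.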
Q: constant pandas warning in pycharm console "FutureWarning: iteritems is deprecated"

Why does this code give me a warning message?

    import pandas as pd
    import numpy as np

    test = pd.DataFrame({'a': np.array([0, 1, 2]), 'b': np.array([3, 4, 5])})

It seems anything I do in pandas throws these long warning messages; I'm not sure if this is a problem with PyCharm or if I'm doing something wrong in pandas.

A: You can ignore the warning; it is a known bug and is being fixed.
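While waiting for the fix, the warning can be silenced with the standard library warnings module. A minimal sketch (using a hand-raised FutureWarning so it doesn't depend on a particular pandas build):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    # Ignore FutureWarning within this block; at module scope you would use
    # warnings.filterwarnings("ignore", category=FutureWarning) instead.
    warnings.simplefilter("ignore", FutureWarning)
    warnings.warn("iteritems is deprecated", FutureWarning)

n_seen = len(caught)
print(n_seen)  # 0 -- the warning was filtered out
```

Filtering by category keeps other, potentially useful warnings visible.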
Q: extracting x and y data from a "messy" txt file

I assume the question might be quite basic, but I had no idea how I should search for this specific issue: I have a .txt file where, over several lines, several x-y data points are present per line. x and y values that belong together are separated by a comma, while the different pairs are separated by spaces. Here is an example:

    2,20 12,40 13,100 14,300 15,440 16,10
    24,50 25,350 26,2322 27,3323 28,9999 29,2152 30,2622 31,50

I simply want to use python to store all x and y values in individual arrays. There must be an easy solution but I just can't get my head around how I should read them out. Thanks a lot for any help in advance.

I tried to read each line by itself and then each line value by value, but that is not working.

A:

    fileInp = "2,20 12,40 13,100 14,300 15,440 16,10 24,50 25,350 26,2322 27,3323 28,9999 29,2152 30,2622 31,50"

    x = list()
    y = list()

    for data in fileInp.split():
        x_y_data = data.split(",")
        x.append(x_y_data[0])
        y.append(x_y_data[1])

    print(x)
    print(y)
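The answer above keeps the values as strings. If numeric lists are wanted, a variant (same idea, shortened made-up input) can convert and unzip in one pass:

```python
line = "2,20 12,40 13,100"

# Split into "x,y" chunks, convert each half to int, then unzip into two lists
pairs = [tuple(map(int, chunk.split(","))) for chunk in line.split()]
x, y = map(list, zip(*pairs))

print(x)  # [2, 12, 13]
print(y)  # [20, 40, 100]
```

For a real file, the same loop would run once per line (for line in open('data.txt'): ...), extending x and y each time.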
Q: nextcord Command isn't working when I use Global Variables in f-strings

    @client.event
    async def on_message(message):
        if message.author == client.user:
            return

        if message.author.id == (isohel):  # the userID for the user isohel is saved in the variable "isohel"
            if f"@{bot}" in message.content:  # the bot's ID is saved in the variable bot
                await message.reply("you pinged me")

^ That is what I want my code to look like, so I can use the variable names instead of the user IDs; it's easier to read in case of errors. However, this code does work:

    @client.event
    async def on_message(message):
        if message.author == client.user:
            return

        if message.author.id == 476433240891195403:  # the userID of the user "isohel"
            if "@1042552458175598642" in message.content:  # the bot's userID
                await message.reply("you pinged me")

I just want my code to be easy to read in case of errors in the future.

A: When I tried the code that you gave me it worked, so I think there may be a problem elsewhere. This was the code that I used. Try changing the variables to match yours, then test this code. I'm not sure if you are using nextcord or discord.py, but this is how it would work in discord.py:

    import random
    import discord
    from discord.ext import commands


    MY_USER_ID = 
    MY_BOT_ID = 
    YOUR_PREFIX = ""
    YOUR_TOKEN = ""
    intents = discord.Intents.all()
    client = discord.Client(command_prefix=YOUR_PREFIX, intents=intents)

    @client.event
    async def on_message(message):
        if message.author == client.user:
            return

        if message.author.id == MY_USER_ID:  # the userID of the user "isohel"
            if f"@{MY_BOT_ID}" in message.content:  # the bot's userID
                await message.reply("you pinged me")

    client.run(YOUR_TOKEN)

It could be to do with your variables' names, because bot can be used instead of client, so there might have been some confusion.
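One thing worth checking in isolation is the f-string itself: interpolating an int ID produces exactly the same string as the hard-coded literal, so the variables are not the problem as long as they hold the right values. A quick sanity check (IDs copied from the question):

```python
BOT_ID = 1042552458175598642  # example ID from the question

literal = "@1042552458175598642"
built = f"@{BOT_ID}"

print(built == literal)  # True
```

If a check like this passes but the bot still doesn't respond, the mismatch is in the variable's value (or where it is defined), not in the f-string mechanism.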
Q: Failed building wheel for PyAudio (M1 chip)

When I try to install PyAudio on my Mac (M1) with the command:

    pip install PyAudio

I get the following error:

    Collecting PyAudio
      Using cached PyAudio-0.2.12.tar.gz (42 kB)
      Installing build dependencies ... done
      Getting requirements to build wheel ... done
      Preparing metadata (pyproject.toml) ... done
    Building wheels for collected packages: PyAudio
      Building wheel for PyAudio (pyproject.toml) ... error
      error: subprocess-exited-with-error

      × Building wheel for PyAudio (pyproject.toml) did not run successfully.
      │ exit code: 1
      ╰─> [16 lines of output]
          running bdist_wheel
          running build
          running build_py
          creating build
          creating build/lib.macosx-10.9-universal2-cpython-39
          copying src/pyaudio.py -> build/lib.macosx-10.9-universal2-cpython-39
          running build_ext
          building '_portaudio' extension
          creating build/temp.macosx-10.9-universal2-cpython-39
          creating build/temp.macosx-10.9-universal2-cpython-39/src
          clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/Headers -arch arm64 -arch x86_64 -Werror=implicit-function-declaration -DMACOSX=1 -I/usr/local/include -I/usr/include -I/Users/dusankovacevic/Desktop/doctrina/venv/include -I/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/Headers -c src/_portaudiomodule.c -o build/temp.macosx-10.9-universal2-cpython-39/src/_portaudiomodule.o
          src/_portaudiomodule.c:30:10: fatal error: 'Python.h' file not found
          #include "Python.h"
                   ^~~~~~~~~~
          1 error generated.
          error: command '/usr/bin/clang' failed with exit code 1
          [end of output]

      note: This error originates from a subprocess, and is likely not a problem with pip.
    ERROR: Failed building wheel for PyAudio
    Failed to build PyAudio
    ERROR: Could not build wheels for PyAudio, which is required to install pyproject.toml-based projects

I already have portaudio installed with brew, and I am working inside a venv where I have `pyenv install 3.9-dev` installed. Any help is appreciated!

A: Try:

    sudo apt update
    sudo apt install portaudio19-dev
    pip install pyaudio

(Note: apt is a Linux package manager; the error above is about the Python.h development header not being found at build time.)
Q: Class (static) variables and methods

How do I create class (i.e. static) variables or methods in Python?

A: Variables declared inside the class definition, but not inside a method, are class or static variables:

    >>> class MyClass:
    ...     i = 3
    ...
    >>> MyClass.i
    3

As @millerdev points out, this creates a class-level i variable, but this is distinct from any instance-level i variable, so you could have:

    >>> m = MyClass()
    >>> m.i = 4
    >>> MyClass.i, m.i
    (3, 4)

This is different from C++ and Java, but not so different from C#, where a static member can't be accessed using a reference to an instance.

See what the Python tutorial has to say on the subject of classes and class objects.

@Steve Johnson has already answered regarding static methods, also documented under "Built-in Functions" in the Python Library Reference:

    class C:
        @staticmethod
        def f(arg1, arg2, ...):
            ...

@beidy recommends classmethods over staticmethod, as the method then receives the class type as the first argument.

A: @Blair Conrad said static variables declared inside the class definition, but not inside a method, are class or "static" variables:

    >>> class Test(object):
    ...     i = 3
    ...
    >>> Test.i
    3

There are a few gotchas here. Carrying on from the example above:

    >>> t = Test()
    >>> t.i       # "static" variable accessed via instance
    3
    >>> t.i = 5   # but if we assign to the instance ...
    >>> Test.i    # we have not changed the "static" variable
    3
    >>> t.i       # we have overwritten Test.i on t by creating a new attribute t.i
    5
    >>> Test.i = 6    # to change the "static" variable we do it by assigning to the class
    >>> t.i
    5
    >>> Test.i
    6
    >>> u = Test()
    >>> u.i
    6    # changes to t do not affect new instances of Test

    # Namespaces are one honking great idea -- let's do more of those!
    >>> Test.__dict__
    {'i': 6, ...}
    >>> t.__dict__
    {'i': 5}
    >>> u.__dict__
    {}

Notice how the instance variable t.i got out of sync with the "static" class variable when the attribute i was set directly on t.
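The REPL session above can be condensed into a plain script whose assertions all pass:

```python
class Test(object):
    i = 3  # class ("static") variable

t = Test()
assert t.i == 3        # lookup falls through to the class attribute
t.i = 5                # creates a NEW instance attribute that shadows Test.i
assert Test.i == 3     # the class attribute is untouched
assert t.i == 5        # the instance now sees its own copy

Test.i = 6             # assigning on the class changes the shared value
assert t.i == 5        # ...but t still sees its own instance attribute
assert Test().i == 6   # new instances see the updated class attribute
assert 'i' not in Test().__dict__  # for them, i lives on the class, not the instance
```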
This is because i was re-bound within the t namespace, which is distinct from the Test namespace. If you want to change the value of a "static" variable, you must change it within the scope (or object) where it was originally defined. I put "static" in quotes because Python does not really have static variables in the sense that C++ and Java do.

Although it doesn't say anything specific about static variables or methods, the Python tutorial has some relevant information on classes and class objects.

@Steve Johnson also answered regarding static methods, also documented under "Built-in Functions" in the Python Library Reference:

    class Test(object):
        @staticmethod
        def f(arg1, arg2, ...):
            ...

@beid also mentioned classmethod, which is similar to staticmethod. A classmethod's first argument is the class object. Example:

    class Test(object):
        i = 3  # class (or static) variable

        @classmethod
        def g(cls, arg):
            # here we can use 'cls' instead of the class name (Test)
            if arg > cls.i:
                cls.i = arg  # would be the same as Test.i = arg

A: Static and Class Methods

As the other answers have noted, static and class methods are easily accomplished using the built-in decorators:

    class Test(object):

        # regular instance method:
        def my_method(self):
            pass

        # class method:
        @classmethod
        def my_class_method(cls):
            pass

        # static method:
        @staticmethod
        def my_static_method():
            pass

As usual, the first argument to my_method() is bound to the class instance object. In contrast, the first argument to my_class_method() is bound to the class object itself (e.g., in this case, Test). For my_static_method(), none of the arguments are bound, and having arguments at all is optional.

"Static Variables"

However, implementing "static variables" (well, mutable static variables, anyway, if that's not a contradiction in terms...) is not as straightforward. As millerdev pointed out in his answer, the problem is that Python's class attributes are not truly "static variables".
Consider:

    class Test(object):
        i = 3  # This is a class attribute

    x = Test()
    x.i = 12   # Attempt to change the value of the class attribute using x instance
    assert x.i == Test.i  # ERROR
    assert Test.i == 3    # Test.i was not affected
    assert x.i == 12      # x.i is a different object than Test.i

This is because the line x.i = 12 has added a new instance attribute i to x instead of changing the value of the Test class i attribute.

Partial expected static variable behavior, i.e., syncing of the attribute between multiple instances (but not with the class itself; see "gotcha" below), can be achieved by turning the class attribute into a property:

    class Test(object):

        _i = 3

        @property
        def i(self):
            return type(self)._i

        @i.setter
        def i(self, val):
            type(self)._i = val

    ## ALTERNATIVE IMPLEMENTATION - FUNCTIONALLY EQUIVALENT TO ABOVE ##
    ## (except with separate methods for getting and setting i)      ##

    class Test(object):

        _i = 3

        def get_i(self):
            return type(self)._i

        def set_i(self, val):
            type(self)._i = val

        i = property(get_i, set_i)

Now you can do:

    x1 = Test()
    x2 = Test()

    x1.i = 50
    assert x2.i == x1.i  # no error
    assert x2.i == 50    # the property is synced

The static variable will now remain in sync between all class instances.

(NOTE: That is, unless a class instance decides to define its own version of _i! But if someone decides to do THAT, they deserve what they get, don't they???)

Note that technically speaking, i is still not a 'static variable' at all; it is a property, which is a special type of descriptor. However, the property behavior is now equivalent to a (mutable) static variable synced across all class instances.
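A related gotcha worth noting alongside the property approach: a mutable class attribute (e.g. a list) is shared across instances even without any property machinery, because mutating it does not rebind the name on the instance. The class name here is made up for illustration:

```python
class Registry(object):  # illustrative name, not from the answers above
    items = []  # ONE list object, shared by every instance

a = Registry()
b = Registry()
a.items.append("x")   # mutates the shared list -- no rebinding, no shadowing

print(b.items)        # ['x'] -- visible through every instance
assert a.items is b.items is Registry.items
```

This is the flip side of the x.i = 12 example: assignment rebinds and shadows, while in-place mutation goes straight through to the shared class attribute.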
Immutable "Static Variables"

For immutable static variable behavior, simply omit the property setter:

    class Test(object):

        _i = 3

        @property
        def i(self):
            return type(self)._i

    ## ALTERNATIVE IMPLEMENTATION - FUNCTIONALLY EQUIVALENT TO ABOVE ##
    ## (except with separate methods for getting i)                  ##

    class Test(object):

        _i = 3

        def get_i(self):
            return type(self)._i

        i = property(get_i)

Now attempting to set the instance i attribute will return an AttributeError:

    x = Test()
    assert x.i == 3  # success
    x.i = 12         # ERROR

One Gotcha to be Aware of

Note that the above methods only work with instances of your class - they will not work when using the class itself. So for example:

    x = Test()
    assert x.i == Test.i  # ERROR

    # x.i and Test.i are two different objects:
    type(Test.i)  # class 'property'
    type(x.i)     # class 'int'

The line assert Test.i == x.i produces an error, because the i attribute of Test and x are two different objects.

Many people will find this surprising. However, it should not be. If we go back and inspect our Test class definition (the second version), we take note of this line:

    i = property(get_i)

Clearly, the member i of Test must be a property object, which is the type of object returned from the property function.

If you find the above confusing, you are most likely still thinking about it from the perspective of other languages (e.g. Java or C++). You should go study the property object, the order in which Python attributes are returned, the descriptor protocol, and the method resolution order (MRO).

I present a solution to the above 'gotcha' below; however I would suggest - strenuously - that you do not try to do something like the following until - at minimum - you thoroughly understand why assert Test.i == x.i causes an error.

REAL, ACTUAL Static Variables - Test.i == x.i

I present the (Python 3) solution below for informational purposes only. I am not endorsing it as a "good solution".
I have my doubts as to whether emulating the static variable behavior of other languages in Python is ever actually necessary. However, regardless as to whether it is actually useful, the below should help further understanding of how Python works.

UPDATE: this attempt is really pretty awful; if you insist on doing something like this (hint: please don't; Python is a very elegant language and shoe-horning it into behaving like another language is just not necessary), use the code in Ethan Furman's answer instead.

Emulating static variable behavior of other languages using a metaclass

A metaclass is the class of a class. The default metaclass for all classes in Python (i.e., the "new style" classes post Python 2.3, I believe) is type. For example:

    type(int)   # class 'type'
    type(str)   # class 'type'

    class Test(): pass
    type(Test)  # class 'type'

However, you can define your own metaclass like this:

    class MyMeta(type): pass

And apply it to your own class like this (Python 3 only):

    class MyClass(metaclass = MyMeta):
        pass

    type(MyClass)  # class MyMeta

Below is a metaclass I have created which attempts to emulate "static variable" behavior of other languages. It basically works by replacing the default getter, setter, and deleter with versions which check to see if the attribute being requested is a "static variable". A catalog of the "static variables" is stored in the StaticVarsMeta.statics attribute. All attribute requests are initially attempted to be resolved using a substitute resolution order. I have dubbed this the "static resolution order", or "SRO". This is done by looking for the requested attribute in the set of "static variables" for a given class (or its parent classes). If the attribute does not appear in the "SRO", the class will fall back on the default attribute get/set/delete behavior (i.e., "MRO").

    from functools import wraps

    class StaticVarsMeta(type):
        '''A metaclass for creating classes that emulate the "static variable"
        behavior of other languages. I do not advise actually using this for
        anything!!!

        Behavior is intended to be similar to classes that use __slots__.
        However, "normal" attributes and __statics__ can coexist (unlike with
        __slots__).

        Example usage:

            class MyBaseClass(metaclass = StaticVarsMeta):
                __statics__ = {'a','b','c'}
                i = 0  # regular attribute
                a = 1  # static var defined (optional)

            class MyParentClass(MyBaseClass):
                __statics__ = {'d','e','f'}
                j = 2              # regular attribute
                d, e, f = 3, 4, 5  # Static vars
                a, b, c = 6, 7, 8  # Static vars (inherited from MyBaseClass, defined/re-defined here)

            class MyChildClass(MyParentClass):
                __statics__ = {'a','b','c'}
                j = 2                 # regular attribute (redefines j from MyParentClass)
                d, e, f = 9, 10, 11   # Static vars (inherited from MyParentClass, redefined here)
                a, b, c = 12, 13, 14  # Static vars (overriding previous definition in MyParentClass here)'''
        statics = {}

        def __new__(mcls, name, bases, namespace):
            # Get the class object
            cls = super().__new__(mcls, name, bases, namespace)
            # Establish the "statics resolution order"
            cls.__sro__ = tuple(c for c in cls.__mro__ if isinstance(c, mcls))
            # Replace class getter, setter, and deleter for instance attributes
            cls.__getattribute__ = StaticVarsMeta.__inst_getattribute__(cls, cls.__getattribute__)
            cls.__setattr__ = StaticVarsMeta.__inst_setattr__(cls, cls.__setattr__)
            cls.__delattr__ = StaticVarsMeta.__inst_delattr__(cls, cls.__delattr__)
            # Store the list of static variables for the class object
            # This list is permanent and cannot be changed, similar to __slots__
            try:
                mcls.statics[cls] = getattr(cls, '__statics__')
            except AttributeError:
                mcls.statics[cls] = namespace['__statics__'] = set()  # No static vars provided
            # Check and make sure the statics var names are strings
            if any(not isinstance(static, str) for static in mcls.statics[cls]):
                typ = dict(zip((not isinstance(static, str) for static in mcls.statics[cls]),
                               map(type, mcls.statics[cls])))[True].__name__
                raise TypeError('__statics__ items must be strings, not {0}'.format(typ))
            #
Move any previously existing, not overridden statics to the static var parent class(es) if len(cls.__sro__) > 1: for attr,value in namespace.items(): if attr not in StaticVarsMeta.statics[cls] and attr != ['__statics__']: for c in cls.__sro__[1:]: if attr in StaticVarsMeta.statics[c]: setattr(c,attr,value) delattr(cls,attr) return cls def __inst_getattribute__(self, orig_getattribute): '''Replaces the class __getattribute__''' @wraps(orig_getattribute) def wrapper(self, attr): if StaticVarsMeta.is_static(type(self),attr): return StaticVarsMeta.__getstatic__(type(self),attr) else: return orig_getattribute(self, attr) return wrapper def __inst_setattr__(self, orig_setattribute): '''Replaces the class __setattr__''' @wraps(orig_setattribute) def wrapper(self, attr, value): if StaticVarsMeta.is_static(type(self),attr): StaticVarsMeta.__setstatic__(type(self),attr, value) else: orig_setattribute(self, attr, value) return wrapper def __inst_delattr__(self, orig_delattribute): '''Replaces the class __delattr__''' @wraps(orig_delattribute) def wrapper(self, attr): if StaticVarsMeta.is_static(type(self),attr): StaticVarsMeta.__delstatic__(type(self),attr) else: orig_delattribute(self, attr) return wrapper def __getstatic__(cls,attr): '''Static variable getter''' for c in cls.__sro__: if attr in StaticVarsMeta.statics[c]: try: return getattr(c,attr) except AttributeError: pass raise AttributeError(cls.__name__ + " object has no attribute '{0}'".format(attr)) def __setstatic__(cls,attr,value): '''Static variable setter''' for c in cls.__sro__: if attr in StaticVarsMeta.statics[c]: setattr(c,attr,value) break def __delstatic__(cls,attr): '''Static variable deleter''' for c in cls.__sro__: if attr in StaticVarsMeta.statics[c]: try: delattr(c,attr) break except AttributeError: pass raise AttributeError(cls.__name__ + " object has no attribute '{0}'".format(attr)) def __delattr__(cls,attr): '''Prevent __sro__ attribute from deletion''' if attr == '__sro__': raise 
AttributeError('readonly attribute')
        super().__delattr__(attr)

    def is_static(cls, attr):
        '''Returns True if an attribute is a static variable of any class in the __sro__'''
        return any(attr in StaticVarsMeta.statics[c] for c in cls.__sro__)

A: You can also add class variables to classes on the fly

>>> class X:
...     pass
...
>>> X.bar = 0
>>> x = X()
>>> x.bar
0
>>> x.foo
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
AttributeError: X instance has no attribute 'foo'
>>> X.foo = 1
>>> x.foo
1

And class instances can change class variables:

class X:
    l = []
    def __init__(self):
        self.l.append(1)

print X().l
print X().l

>python test.py
[1]
[1, 1]

A: Personally I would use a classmethod whenever I needed a static method, mainly because I get the class as an argument:

class myObj(object):
    def myMethod(cls):
        ...
    myMethod = classmethod(myMethod)

or use a decorator:

class myObj(object):
    @classmethod
    def myMethod(cls):
        ...

As for static properties, it's time you looked up some Python definitions. A variable can always change. There are two kinds of objects, mutable and immutable, and there are class attributes and instance attributes, but nothing really like static attributes in the sense of Java and C++. Why use a static method in the Pythonic sense if it has no relation whatsoever to the class? If I were you, I'd either use a classmethod or define the method independent of the class.
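To make the contrast in the answer above concrete, here is a minimal runnable sketch (the Counter class and its method names are invented for illustration) showing a class variable updated through a classmethod alongside a staticmethod:

```python
class Counter:
    count = 0  # class variable, shared by every instance

    @classmethod
    def increment(cls):
        # cls is the class object itself, so subclasses keep their own tally
        cls.count += 1

    @staticmethod
    def describe():
        # no implicit first argument; just a plain function in the class namespace
        return "counts things"

Counter.increment()
Counter.increment()
print(Counter.count)       # 2
print(Counter.describe())  # counts things
```

Because `increment` receives the class rather than an instance, calling it through an instance (`Counter().increment()`) updates the same shared `count`.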
A: One special thing to note about static properties and instance properties, shown in the example below:

class my_cls:
    my_prop = 0

# static property
print my_cls.my_prop  #--> 0

# assign value to static property
my_cls.my_prop = 1
print my_cls.my_prop  #--> 1

# access static property through an instance
my_inst = my_cls()
print my_inst.my_prop  #--> 1

# instance property is different from static property
# after being assigned a value
my_inst.my_prop = 2
print my_cls.my_prop   #--> 1
print my_inst.my_prop  #--> 2

This means that before a value is assigned to the instance property, accessing the property through the instance yields the static value. Each property declared in a Python class always has a static slot in memory.

A: The closest thing Python offers to a static method that still knows its class is the classmethod. Take a look at the following code:

class MyClass:
    def myInstanceMethod(self):
        print 'output from an instance method'

    @classmethod
    def myStaticMethod(cls):
        print 'output from a static method'

>>> MyClass.myInstanceMethod()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unbound method myInstanceMethod() must be called [...]

>>> MyClass.myStaticMethod()
output from a static method

Notice that when we call the method myInstanceMethod, we get an error. This is because it requires that the method be called on an instance of this class. The method myStaticMethod is set as a classmethod using the decorator @classmethod.

Just for kicks and giggles, we could call myInstanceMethod on the class by passing in an instance of the class, like so:

>>> MyClass.myInstanceMethod(MyClass())
output from an instance method

A: It is possible to have static class variables, but probably not worth the effort.
Here's a proof-of-concept written in Python 3 -- if any of the exact details are wrong the code can be tweaked to match just about whatever you mean by a static variable: class Static: def __init__(self, value, doc=None): self.deleted = False self.value = value self.__doc__ = doc def __get__(self, inst, cls=None): if self.deleted: raise AttributeError('Attribute not set') return self.value def __set__(self, inst, value): self.deleted = False self.value = value def __delete__(self, inst): self.deleted = True class StaticType(type): def __delattr__(cls, name): obj = cls.__dict__.get(name) if isinstance(obj, Static): obj.__delete__(name) else: super(StaticType, cls).__delattr__(name) def __getattribute__(cls, *args): obj = super(StaticType, cls).__getattribute__(*args) if isinstance(obj, Static): obj = obj.__get__(cls, cls.__class__) return obj def __setattr__(cls, name, val): # check if object already exists obj = cls.__dict__.get(name) if isinstance(obj, Static): obj.__set__(name, val) else: super(StaticType, cls).__setattr__(name, val) and in use: class MyStatic(metaclass=StaticType): """ Testing static vars """ a = Static(9) b = Static(12) c = 3 class YourStatic(MyStatic): d = Static('woo hoo') e = Static('doo wop') and some tests: ms1 = MyStatic() ms2 = MyStatic() ms3 = MyStatic() assert ms1.a == ms2.a == ms3.a == MyStatic.a assert ms1.b == ms2.b == ms3.b == MyStatic.b assert ms1.c == ms2.c == ms3.c == MyStatic.c ms1.a = 77 assert ms1.a == ms2.a == ms3.a == MyStatic.a ms2.b = 99 assert ms1.b == ms2.b == ms3.b == MyStatic.b MyStatic.a = 101 assert ms1.a == ms2.a == ms3.a == MyStatic.a MyStatic.b = 139 assert ms1.b == ms2.b == ms3.b == MyStatic.b del MyStatic.b for inst in (ms1, ms2, ms3): try: getattr(inst, 'b') except AttributeError: pass else: print('AttributeError not raised on %r' % attr) ms1.c = 13 ms2.c = 17 ms3.c = 19 assert ms1.c == 13 assert ms2.c == 17 assert ms3.c == 19 MyStatic.c = 43 assert ms1.c == 13 assert ms2.c == 17 assert ms3.c == 19 ys1 = 
YourStatic() ys2 = YourStatic() ys3 = YourStatic() MyStatic.b = 'burgler' assert ys1.a == ys2.a == ys3.a == YourStatic.a == MyStatic.a assert ys1.b == ys2.b == ys3.b == YourStatic.b == MyStatic.b assert ys1.d == ys2.d == ys3.d == YourStatic.d assert ys1.e == ys2.e == ys3.e == YourStatic.e ys1.a = 'blah' assert ys1.a == ys2.a == ys3.a == YourStatic.a == MyStatic.a ys2.b = 'kelp' assert ys1.b == ys2.b == ys3.b == YourStatic.b == MyStatic.b ys1.d = 'fee' assert ys1.d == ys2.d == ys3.d == YourStatic.d ys2.e = 'fie' assert ys1.e == ys2.e == ys3.e == YourStatic.e MyStatic.a = 'aargh' assert ys1.a == ys2.a == ys3.a == YourStatic.a == MyStatic.a A: When define some member variable outside any member method, the variable can be either static or non-static depending on how the variable is expressed. CLASSNAME.var is static variable INSTANCENAME.var is not static variable. self.var inside class is not static variable. var inside the class member function is not defined. For example: #!/usr/bin/python class A: var=1 def printvar(self): print "self.var is %d" % self.var print "A.var is %d" % A.var a = A() a.var = 2 a.printvar() A.var = 3 a.printvar() The results are self.var is 2 A.var is 1 self.var is 2 A.var is 3 A: @dataclass definitions provide class-level names that are used to define the instance variables and the initialization method, __init__(). If you want class-level variable in @dataclass you should use typing.ClassVar type hint. The ClassVar type's parameters define the class-level variable's type. from typing import ClassVar from dataclasses import dataclass @dataclass class Test: i: ClassVar[int] = 10 x: int y: int def __repr__(self): return f"Test({self.x=}, {self.y=}, {Test.i=})" Usage examples: > test1 = Test(5, 6) > test2 = Test(10, 11) > test1 Test(self.x=5, self.y=6, Test.i=10) > test2 Test(self.x=10, self.y=11, Test.i=10) A: You could also enforce a class to be static using metaclass. 
import abc

class StaticClassError(Exception):
    pass

class StaticClass:
    __metaclass__ = abc.ABCMeta

    def __new__(cls, *args, **kw):
        raise StaticClassError("%s is a static class and cannot be instantiated." % cls)

class MyClass(StaticClass):
    a = 1
    b = 3

    @staticmethod
    def add(x, y):
        return x + y

Then whenever you try to instantiate MyClass by accident, you'll get a StaticClassError.

A: One very interesting point about Python's attribute lookup is that it can be used to create "virtual variables":

class A(object):
    label = "Amazing"

    def __init__(self, d):
        self.data = d

    def say(self):
        print("%s %s!" % (self.label, self.data))

class B(A):
    label = "Bold"  # overrides A.label

A(5).say()  # Amazing 5!
B(3).say()  # Bold 3!

Normally there aren't any assignments to these after they are created. Note that the lookup uses self because, although label is static in the sense of not being associated with a particular instance, the value still depends on the (class of the) instance.

A: With object datatypes it is possible. But with primitive types like bool, int, float or str the behaviour is different from other OOP languages, because the static attribute does not exist in the inherited class. If the attribute does not exist in the inherited class, Python starts looking for it in the parent class; if found there, its value is returned. When you decide to change the value in the inherited class, the static attribute is created at runtime, and from then on reads of the inherited static attribute return that value, because it is already defined. Objects (lists, dicts) work as references, so it is safe to use them as static attributes and inherit them: the object's address does not change when you change its attribute values.
Example with integer data type: class A: static = 1 class B(A): pass print(f"int {A.static}") # get 1 correctly print(f"int {B.static}") # get 1 correctly A.static = 5 print(f"int {A.static}") # get 5 correctly print(f"int {B.static}") # get 5 correctly B.static = 6 print(f"int {A.static}") # expected 6, but get 5 incorrectly print(f"int {B.static}") # get 6 correctly A.static = 7 print(f"int {A.static}") # get 7 correctly print(f"int {B.static}") # get unchanged 6 Solution based on refdatatypes library: from refdatatypes.refint import RefInt class AAA: static = RefInt(1) class BBB(AAA): pass print(f"refint {AAA.static.value}") # get 1 correctly print(f"refint {BBB.static.value}") # get 1 correctly AAA.static.value = 5 print(f"refint {AAA.static.value}") # get 5 correctly print(f"refint {BBB.static.value}") # get 5 correctly BBB.static.value = 6 print(f"refint {AAA.static.value}") # get 6 correctly print(f"refint {BBB.static.value}") # get 6 correctly AAA.static.value = 7 print(f"refint {AAA.static.value}") # get 7 correctly print(f"refint {BBB.static.value}") # get 7 correctly A: Yes, definitely possible to write static variables and methods in python. Static Variables : Variable declared at class level are called static variable which can be accessed directly using class name. >>> class A: ...my_var = "shagun" >>> print(A.my_var) shagun Instance variables: Variables that are related and accessed by instance of a class are instance variables. >>> a = A() >>> a.my_var = "pruthi" >>> print(A.my_var,a.my_var) shagun pruthi Static Methods: Similar to variables, static methods can be accessed directly using class Name. No need to create an instance. But keep in mind, a static method cannot call a non-static method in python. >>> class A: ... @staticmethod ... def my_static_method(): ... print("Yippey!!") ... >>> A.my_static_method() Yippey!! A: In regards to this answer, for a constant static variable, you can use a descriptor. 
Here's an example: class ConstantAttribute(object): '''You can initialize my value but not change it.''' def __init__(self, value): self.value = value def __get__(self, obj, type=None): return self.value def __set__(self, obj, val): pass class Demo(object): x = ConstantAttribute(10) class SubDemo(Demo): x = 10 demo = Demo() subdemo = SubDemo() # should not change demo.x = 100 # should change subdemo.x = 100 print "small demo", demo.x print "small subdemo", subdemo.x print "big demo", Demo.x print "big subdemo", SubDemo.x resulting in ... small demo 10 small subdemo 100 big demo 10 big subdemo 10 You can always raise an exception if quietly ignoring setting value (pass above) is not your thing. If you're looking for a C++, Java style static class variable: class StaticAttribute(object): def __init__(self, value): self.value = value def __get__(self, obj, type=None): return self.value def __set__(self, obj, val): self.value = val Have a look at this answer and the official docs HOWTO for more information about descriptors. A: Absolutely Yes, Python by itself don't have any static data member explicitly, but We can have by doing so class A: counter =0 def callme (self): A.counter +=1 def getcount (self): return self.counter >>> x=A() >>> y=A() >>> print(x.getcount()) >>> print(y.getcount()) >>> x.callme() >>> print(x.getcount()) >>> print(y.getcount()) output 0 0 1 1 explanation here object (x) alone increment the counter variable from 0 to 1 by not object y. But result it as "static counter" A: The best way I found is to use another class. You can create an object and then use it on other objects. 
class staticFlag: def __init__(self): self.__success = False def isSuccess(self): return self.__success def succeed(self): self.__success = True class tryIt: def __init__(self, staticFlag): self.isSuccess = staticFlag.isSuccess self.succeed = staticFlag.succeed tryArr = [] flag = staticFlag() for i in range(10): tryArr.append(tryIt(flag)) if i == 5: tryArr[i].succeed() print tryArr[i].isSuccess() With the example above, I made a class named staticFlag. This class should present the static var __success (Private Static Var). tryIt class represented the regular class we need to use. Now I made an object for one flag (staticFlag). This flag will be sent as reference to all the regular objects. All these objects are being added to the list tryArr. This Script Results: False False False False False True True True True True A: Summarizing others' answers and adding, there are many ways to declare Static Methods or Variables in python. 1. Using staticmethod() as a decorator: One can simply put a decorator above a method(function) declared to make it a static method. For eg. class Calculator: @staticmethod def multiply(n1, n2, *args): Res = 1 for num in args: Res *= num return n1 * n2 * Res print(Calculator.multiply(1, 2, 3, 4)) # 24 2. Using staticmethod() as a parameter function: This method can receive an argument which is of function type, and it returns a static version of the function passed. For eg. class Calculator: def add(n1, n2, *args): return n1 + n2 + sum(args) Calculator.add = staticmethod(Calculator.add) print(Calculator.add(1, 2, 3, 4)) # 10 3. Using classmethod() as a decorator: @classmethod has similar effect on a function as @staticmethod has, but this time, an additional argument is needed to be accepted in the function (similar to self parameter for instance variables). For eg. 
class Calculator: num = 0 def __init__(self, digits) -> None: Calculator.num = int(''.join(digits)) @classmethod def get_digits(cls, num): digits = list(str(num)) calc = cls(digits) return calc.num print(Calculator.get_digits(314159)) # 314159 4. Using classmethod() as a parameter function: @classmethod can also be used as a parameter function, in case one doesn't want to modify class definition. For eg. class Calculator: def divide(cls, n1, n2, *args): Res = 1 for num in args: Res *= num return n1 / n2 / Res Calculator.divide = classmethod(Calculator.divide) print(Calculator.divide(15, 3, 5)) # 1.0 5. Direct declaration A method/variable declared outside all other methods, but inside a class is automatically static. class Calculator: def subtract(n1, n2, *args): return n1 - n2 - sum(args) print(Calculator.subtract(10, 2, 3, 4)) # 1 The whole program class Calculator: num = 0 def __init__(self, digits) -> None: Calculator.num = int(''.join(digits)) @staticmethod def multiply(n1, n2, *args): Res = 1 for num in args: Res *= num return n1 * n2 * Res def add(n1, n2, *args): return n1 + n2 + sum(args) @classmethod def get_digits(cls, num): digits = list(str(num)) calc = cls(digits) return calc.num def divide(cls, n1, n2, *args): Res = 1 for num in args: Res *= num return n1 / n2 / Res def subtract(n1, n2, *args): return n1 - n2 - sum(args) Calculator.add = staticmethod(Calculator.add) Calculator.divide = classmethod(Calculator.divide) print(Calculator.multiply(1, 2, 3, 4)) # 24 print(Calculator.add(1, 2, 3, 4)) # 10 print(Calculator.get_digits(314159)) # 314159 print(Calculator.divide(15, 3, 5)) # 1.0 print(Calculator.subtract(10, 2, 3, 4)) # 1 Refer to Python Documentation for mastering OOP in python. A: To avoid any potential confusion, I would like to contrast static variables and immutable objects. Some primitive object types like integers, floats, strings, and touples are immutable in Python. 
This means that the object referred to by a given name cannot change if it is of one of the aforementioned object types. The name can be reassigned to a different object, but the object itself may not be changed. Making a variable static takes this a step further by disallowing the variable name to point to any object but that to which it currently points. (Note: this is a general software concept and not specific to Python; please see others' posts for information about implementing statics in Python.)

A: Static Variables in Class factory python3.6

For anyone using a class factory with python3.6 and up, use the nonlocal keyword to add the variable to the scope / context of the class being created, like so:

>>> def SomeFactory(some_var=None):
...     class SomeClass(object):
...         nonlocal some_var
...         def print():
...             print(some_var)
...     return SomeClass
...
>>> SomeFactory(some_var="hello world").print()
hello world

A: So this is probably a hack, but I've been using eval(str) to obtain a static object (kind of a contradiction) in Python 3.

There is a Records.py file that has nothing but class objects defined with static methods and constructors that save some arguments. Then from another .py file I import Records, but I need to dynamically select each object and then instantiate it on demand according to the type of data being read in. So where object_name = 'RecordOne' (the class name), I call cur_type = eval(object_name), and then to instantiate it I do cur_inst = cur_type(args). However, before instantiating you can call static methods from cur_type.getName(), for example, rather like an abstract base class implementation, or whatever the goal is. In the backend, though, it's probably instantiated in Python and is not truly static, because eval returns an object, which must have been instantiated, and that merely gives static-like behavior.
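A safer alternative to the eval() lookup described above is a plain registry dictionary mapping class names to class objects; class-level attributes remain accessible before instantiation. (RecordOne/RecordTwo and the kind attribute are made-up names for illustration.)

```python
class RecordOne:
    kind = "one"  # class-level ("static") attribute, readable before instantiation

class RecordTwo:
    kind = "two"

# A registry mapping names to classes avoids eval() entirely
REGISTRY = {cls.__name__: cls for cls in (RecordOne, RecordTwo)}

cur_type = REGISTRY["RecordOne"]  # look the class up by its string name
print(cur_type.kind)              # access the class attribute without an instance
cur_inst = cur_type()             # instantiate on demand
print(cur_inst.kind)
```

Unlike eval(), the registry only ever resolves to classes you explicitly listed, so arbitrary input strings cannot execute code.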
A: If you are attempting to share a static variable, for example to increment it across other instances, something like this script works fine:

# -*- coding: utf-8 -*-
class Worker:
    id = 1

    def __init__(self):
        self.name = ''
        self.document = ''
        self.id = Worker.id
        Worker.id += 1

    def __str__(self):
        return u"{}.- {} {}".format(self.id, self.name, self.document).encode('utf8')

class Workers:
    def __init__(self):
        self.list = []

    def add(self, name, doc):
        worker = Worker()
        worker.name = name
        worker.document = doc
        self.list.append(worker)

if __name__ == "__main__":
    workers = Workers()
    for item in (('Fiona', '0009898'), ('Maria', '66328191'),
                 ("Sandra", '2342184'), ('Elvira', '425872')):
        workers.add(item[0], item[1])
    for worker in workers.list:
        print(worker)
    print("next id: %i" % Worker.id)

A: You can use a list or a dictionary to get "static behavior" between instances.

class Fud:
    class_vars = {'origin_open': False}

    def __init__(self, origin=True):
        self.origin = origin
        self.opened = True
        if origin:
            self.class_vars['origin_open'] = True

    def make_another_fud(self):
        ''' Generating another Fud() from the origin instance '''
        return Fud(False)

    def close(self):
        self.opened = False
        if self.origin:
            self.class_vars['origin_open'] = False

fud1 = Fud()
fud2 = fud1.make_another_fud()

print (f"is this the original fud: {fud2.origin}")
print (f"is the original fud open: {fud2.class_vars['origin_open']}")
# is this the original fud: False
# is the original fud open: True

fud1.close()
print (f"is the original fud open: {fud2.class_vars['origin_open']}")
# is the original fud open: False

A: Put it this way: a static variable comes into existence when the class itself is defined, and it should not be prefixed with the self keyword.

class Student:
    i = 10        # correct static declaration
    # self.i = 10 would be incorrect: self only exists inside methods,
    # and assigning through self creates an instance attribute

A: Class variables are not like @staticmethod; they are attributes of the class itself and are shared by all the instances.
Now you can access it like instance = MyClass() print(instance.i) or print(MyClass.i) you have to assign the value to these variables I was trying class MyClass: i: str and assigning the value in one method call, in that case it will not work and will throw an error i is not attribute of MyClass A: Class variable and allow for subclassing Assuming you are not looking for a truly static variable but rather something pythonic that will do the same sort of job for consenting adults, then use a class variable. This will provide you with a variable which all instances can access (and update) Beware: Many of the other answers which use a class variable will break subclassing. You should avoid referencing the class directly by name. from contextlib import contextmanager class Sheldon(object): foo = 73 def __init__(self, n): self.n = n def times(self): cls = self.__class__ return cls.foo * self.n #self.foo * self.n would give the same result here but is less readable # it will also create a local variable which will make it easier to break your code def updatefoo(self): cls = self.__class__ cls.foo *= self.n #self.foo *= self.n will not work here # assignment will try to create a instance variable foo @classmethod @contextmanager def reset_after_test(cls): originalfoo = cls.foo yield cls.foo = originalfoo #if you don't do this then running a full test suite will fail #updates to foo in one test will be kept for later tests will give you the same functionality as using Sheldon.foo to address the variable and will pass tests like these: def test_times(): with Sheldon.reset_after_test(): s = Sheldon(2) assert s.times() == 146 def test_update(): with Sheldon.reset_after_test(): s = Sheldon(2) s.updatefoo() assert Sheldon.foo == 146 def test_two_instances(): with Sheldon.reset_after_test(): s = Sheldon(2) s3 = Sheldon(3) assert s.times() == 146 assert s3.times() == 219 s3.updatefoo() assert s.times() == 438 It will also allow someone else to simply: class Douglas(Sheldon): foo 
= 42 which will also work: def test_subclassing(): with Sheldon.reset_after_test(), Douglas.reset_after_test(): s = Sheldon(2) d = Douglas(2) assert d.times() == 84 assert s.times() == 146 d.updatefoo() assert d.times() == 168 #Douglas.Foo was updated assert s.times() == 146 #Seldon.Foo is still 73 def test_subclassing_reset(): with Sheldon.reset_after_test(), Douglas.reset_after_test(): s = Sheldon(2) d = Douglas(2) assert d.times() == 84 #Douglas.foo was reset after the last test assert s.times() == 146 #and so was Sheldon.foo For great advice on things to watch out for when creating classes check out Raymond Hettinger's video https://www.youtube.com/watch?v=HTLu2DFOdTg A: You can create the class variable x, the instance variable name, the instance method test1(self), the class method test2(cls) and the static method test3() as shown below: class Person: x = "Hello" # Class variable def __init__(self, name): self.name = name # Instance variable def test1(self): # Instance method print("Test1") @classmethod def test2(cls): # Class method print("Test2") @staticmethod def test3(): # Static method print("Test3") I explain about class variable in my answer and class method and static method in my answer and instance method in my answer.
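The Person summary above can be fleshed out into a runnable sketch; the greet, from_upper and is_valid method bodies are invented here to show each kind of member doing something observable:

```python
class Person:
    x = "Hello"  # class (static) variable, shared by all instances

    def __init__(self, name):
        self.name = name  # instance variable

    def greet(self):  # instance method: receives the instance as self
        return "{}, {}".format(Person.x, self.name)

    @classmethod
    def from_upper(cls, name):  # class method: receives the class as cls
        return cls(name.upper())

    @staticmethod
    def is_valid(name):  # static method: no implicit first argument
        return bool(name)

p = Person.from_upper("alice")
print(p.greet())            # Hello, ALICE
print(Person.is_valid(""))  # False
```

Note that the classmethod serves as an alternate constructor (it builds an instance via cls), while the staticmethod is just a namespaced helper that touches neither the class nor an instance.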
Class (static) variables and methods
How do I create class (i.e. static) variables or methods in Python?
[ "Variables declared inside the class definition, but not inside a method are class or static variables:\n>>> class MyClass:\n... i = 3\n...\n>>> MyClass.i\n3 \n\nAs @millerdev points out, this creates a class-level i variable, but this is distinct from any instance-level i variable, so you could have\n>>> m = MyClass()\n>>> m.i = 4\n>>> MyClass.i, m.i\n>>> (3, 4)\n\nThis is different from C++ and Java, but not so different from C#, where a static member can't be accessed using a reference to an instance.\nSee what the Python tutorial has to say on the subject of classes and class objects.\n@Steve Johnson has already answered regarding static methods, also documented under \"Built-in Functions\" in the Python Library Reference.\nclass C:\n @staticmethod\n def f(arg1, arg2, ...): ...\n\n@beidy recommends classmethods over staticmethod, as the method then receives the class type as the first argument.\n", "@Blair Conrad said static variables declared inside the class definition, but not inside a method are class or \"static\" variables:\n>>> class Test(object):\n... i = 3\n...\n>>> Test.i\n3\n\nThere are a few gotcha's here. Carrying on from the example above:\n>>> t = Test()\n>>> t.i # \"static\" variable accessed via instance\n3\n>>> t.i = 5 # but if we assign to the instance ...\n>>> Test.i # we have not changed the \"static\" variable\n3\n>>> t.i # we have overwritten Test.i on t by creating a new attribute t.i\n5\n>>> Test.i = 6 # to change the \"static\" variable we do it by assigning to the class\n>>> t.i\n5\n>>> Test.i\n6\n>>> u = Test()\n>>> u.i\n6 # changes to t do not affect new instances of Test\n\n# Namespaces are one honking great idea -- let's do more of those!\n>>> Test.__dict__\n{'i': 6, ...}\n>>> t.__dict__\n{'i': 5}\n>>> u.__dict__\n{}\n\nNotice how the instance variable t.i got out of sync with the \"static\" class variable when the attribute i was set directly on t. 
This is because i was re-bound within the t namespace, which is distinct from the Test namespace. If you want to change the value of a \"static\" variable, you must change it within the scope (or object) where it was originally defined. I put \"static\" in quotes because Python does not really have static variables in the sense that C++ and Java do.\nAlthough it doesn't say anything specific about static variables or methods, the Python tutorial has some relevant information on classes and class objects. \n@Steve Johnson also answered regarding static methods, also documented under \"Built-in Functions\" in the Python Library Reference.\nclass Test(object):\n @staticmethod\n def f(arg1, arg2, ...):\n ...\n\n@beid also mentioned classmethod, which is similar to staticmethod. A classmethod's first argument is the class object. Example:\nclass Test(object):\n i = 3 # class (or static) variable\n @classmethod\n def g(cls, arg):\n # here we can use 'cls' instead of the class name (Test)\n if arg > cls.i:\n cls.i = arg # would be the same as Test.i = arg1\n\n\n", "Static and Class Methods\nAs the other answers have noted, static and class methods are easily accomplished using the built-in decorators:\nclass Test(object):\n\n # regular instance method:\n def my_method(self):\n pass\n\n # class method:\n @classmethod\n def my_class_method(cls):\n pass\n\n # static method:\n @staticmethod\n def my_static_method():\n pass\n\nAs usual, the first argument to my_method() is bound to the class instance object. In contrast, the first argument to my_class_method() is bound to the class object itself (e.g., in this case, Test). For my_static_method(), none of the arguments are bound, and having arguments at all is optional.\n\"Static Variables\"\nHowever, implementing \"static variables\" (well, mutable static variables, anyway, if that's not a contradiction in terms...) is not as straight forward. 
As millerdev pointed out in his answer, the problem is that Python's class attributes are not truly \"static variables\". Consider:\nclass Test(object):\n i = 3 # This is a class attribute\n\nx = Test()\nx.i = 12 # Attempt to change the value of the class attribute using x instance\nassert x.i == Test.i # ERROR\nassert Test.i == 3 # Test.i was not affected\nassert x.i == 12 # x.i is a different object than Test.i\n\nThis is because the line x.i = 12 has added a new instance attribute i to x instead of changing the value of the Test class i attribute.\nPartial expected static variable behavior, i.e., syncing of the attribute between multiple instances (but not with the class itself; see \"gotcha\" below), can be achieved by turning the class attribute into a property:\nclass Test(object):\n\n _i = 3\n\n @property\n def i(self):\n return type(self)._i\n\n @i.setter\n def i(self,val):\n type(self)._i = val\n\n## ALTERNATIVE IMPLEMENTATION - FUNCTIONALLY EQUIVALENT TO ABOVE ##\n## (except with separate methods for getting and setting i) ##\n\nclass Test(object):\n\n _i = 3\n\n def get_i(self):\n return type(self)._i\n\n def set_i(self,val):\n type(self)._i = val\n\n i = property(get_i, set_i)\n\nNow you can do:\nx1 = Test()\nx2 = Test()\nx1.i = 50\nassert x2.i == x1.i # no error\nassert x2.i == 50 # the property is synced\n\nThe static variable will now remain in sync between all class instances.\n(NOTE: That is, unless a class instance decides to define its own version of _i! But if someone decides to do THAT, they deserve what they get, don't they???)\nNote that technically speaking, i is still not a 'static variable' at all; it is a property, which is a special type of descriptor. 
However, the property behavior is now equivalent to a (mutable) static variable synced across all class instances.

Immutable "Static Variables"

For immutable static variable behavior, simply omit the property setter:

class Test(object):

    _i = 3

    @property
    def i(self):
        return type(self)._i

## ALTERNATIVE IMPLEMENTATION - FUNCTIONALLY EQUIVALENT TO ABOVE ##
## (except with a separate method for getting i)                 ##

class Test(object):

    _i = 3

    def get_i(self):
        return type(self)._i

    i = property(get_i)

Now attempting to set the instance i attribute will raise an AttributeError:

x = Test()
assert x.i == 3  # success
x.i = 12         # ERROR

One Gotcha to be Aware of

Note that the above methods only work with instances of your class - they will not work when using the class itself. So for example:

x = Test()
assert x.i == Test.i  # ERROR

# x.i and Test.i are two different objects:
type(Test.i)  # class 'property'
type(x.i)     # class 'int'

The line assert Test.i == x.i produces an error, because the i attribute of Test and x are two different objects.

Many people will find this surprising. However, it should not be. If we go back and inspect our Test class definition (the second version), we take note of this line:

    i = property(get_i)

Clearly, the member i of Test must be a property object, which is the type of object returned from the property function.

If you find the above confusing, you are most likely still thinking about it from the perspective of other languages (e.g. Java or C++).
You should go study the property object, the order in which Python attributes are resolved, the descriptor protocol, and the method resolution order (MRO).

I present a solution to the above 'gotcha' below; however I would suggest - strenuously - that you do not try to do something like the following until - at minimum - you thoroughly understand why assert Test.i == x.i causes an error.

REAL, ACTUAL Static Variables - Test.i == x.i

I present the (Python 3) solution below for informational purposes only. I am not endorsing it as a "good solution". I have my doubts as to whether emulating the static variable behavior of other languages in Python is ever actually necessary. However, regardless of whether it is actually useful, the below should help further understanding of how Python works.

UPDATE: this attempt is really pretty awful; if you insist on doing something like this (hint: please don't; Python is a very elegant language and shoe-horning it into behaving like another language is just not necessary), use the code in Ethan Furman's answer instead.

Emulating static variable behavior of other languages using a metaclass

A metaclass is the class of a class. The default metaclass for all classes in Python (i.e., the "new style" classes post Python 2.3, I believe) is type. For example:

type(int)   # class 'type'
type(str)   # class 'type'
class Test(): pass
type(Test)  # class 'type'

However, you can define your own metaclass like this:

class MyMeta(type): pass

And apply it to your own class like this (Python 3 only):

class MyClass(metaclass=MyMeta):
    pass

type(MyClass)  # class MyMeta

Below is a metaclass I have created which attempts to emulate "static variable" behavior of other languages.
It basically works by replacing the default getter, setter, and deleter with versions which check to see if the attribute being requested is a \"static variable\".\nA catalog of the \"static variables\" is stored in the StaticVarMeta.statics attribute. All attribute requests are initially attempted to be resolved using a substitute resolution order. I have dubbed this the \"static resolution order\", or \"SRO\". This is done by looking for the requested attribute in the set of \"static variables\" for a given class (or its parent classes). If the attribute does not appear in the \"SRO\", the class will fall back on the default attribute get/set/delete behavior (i.e., \"MRO\").\nfrom functools import wraps\n\nclass StaticVarsMeta(type):\n '''A metaclass for creating classes that emulate the \"static variable\" behavior\n of other languages. I do not advise actually using this for anything!!!\n \n Behavior is intended to be similar to classes that use __slots__. However, \"normal\"\n attributes and __statics___ can coexist (unlike with __slots__). 
\n \n Example usage: \n \n class MyBaseClass(metaclass = StaticVarsMeta):\n __statics__ = {'a','b','c'}\n i = 0 # regular attribute\n a = 1 # static var defined (optional)\n \n class MyParentClass(MyBaseClass):\n __statics__ = {'d','e','f'}\n j = 2 # regular attribute\n d, e, f = 3, 4, 5 # Static vars\n a, b, c = 6, 7, 8 # Static vars (inherited from MyBaseClass, defined/re-defined here)\n \n class MyChildClass(MyParentClass):\n __statics__ = {'a','b','c'}\n j = 2 # regular attribute (redefines j from MyParentClass)\n d, e, f = 9, 10, 11 # Static vars (inherited from MyParentClass, redefined here)\n a, b, c = 12, 13, 14 # Static vars (overriding previous definition in MyParentClass here)'''\n statics = {}\n def __new__(mcls, name, bases, namespace):\n # Get the class object\n cls = super().__new__(mcls, name, bases, namespace)\n # Establish the \"statics resolution order\"\n cls.__sro__ = tuple(c for c in cls.__mro__ if isinstance(c,mcls))\n \n # Replace class getter, setter, and deleter for instance attributes\n cls.__getattribute__ = StaticVarsMeta.__inst_getattribute__(cls, cls.__getattribute__)\n cls.__setattr__ = StaticVarsMeta.__inst_setattr__(cls, cls.__setattr__)\n cls.__delattr__ = StaticVarsMeta.__inst_delattr__(cls, cls.__delattr__)\n # Store the list of static variables for the class object\n # This list is permanent and cannot be changed, similar to __slots__\n try:\n mcls.statics[cls] = getattr(cls,'__statics__')\n except AttributeError:\n mcls.statics[cls] = namespace['__statics__'] = set() # No static vars provided\n # Check and make sure the statics var names are strings\n if any(not isinstance(static,str) for static in mcls.statics[cls]):\n typ = dict(zip((not isinstance(static,str) for static in mcls.statics[cls]), map(type,mcls.statics[cls])))[True].__name__\n raise TypeError('__statics__ items must be strings, not {0}'.format(typ))\n # Move any previously existing, not overridden statics to the static var parent class(es)\n if len(cls.__sro__) 
> 1:\n for attr,value in namespace.items():\n if attr not in StaticVarsMeta.statics[cls] and attr != ['__statics__']:\n for c in cls.__sro__[1:]:\n if attr in StaticVarsMeta.statics[c]:\n setattr(c,attr,value)\n delattr(cls,attr)\n return cls\n def __inst_getattribute__(self, orig_getattribute):\n '''Replaces the class __getattribute__'''\n @wraps(orig_getattribute)\n def wrapper(self, attr):\n if StaticVarsMeta.is_static(type(self),attr):\n return StaticVarsMeta.__getstatic__(type(self),attr)\n else:\n return orig_getattribute(self, attr)\n return wrapper\n def __inst_setattr__(self, orig_setattribute):\n '''Replaces the class __setattr__'''\n @wraps(orig_setattribute)\n def wrapper(self, attr, value):\n if StaticVarsMeta.is_static(type(self),attr):\n StaticVarsMeta.__setstatic__(type(self),attr, value)\n else:\n orig_setattribute(self, attr, value)\n return wrapper\n def __inst_delattr__(self, orig_delattribute):\n '''Replaces the class __delattr__'''\n @wraps(orig_delattribute)\n def wrapper(self, attr):\n if StaticVarsMeta.is_static(type(self),attr):\n StaticVarsMeta.__delstatic__(type(self),attr)\n else:\n orig_delattribute(self, attr)\n return wrapper\n def __getstatic__(cls,attr):\n '''Static variable getter'''\n for c in cls.__sro__:\n if attr in StaticVarsMeta.statics[c]:\n try:\n return getattr(c,attr)\n except AttributeError:\n pass\n raise AttributeError(cls.__name__ + \" object has no attribute '{0}'\".format(attr))\n def __setstatic__(cls,attr,value):\n '''Static variable setter'''\n for c in cls.__sro__:\n if attr in StaticVarsMeta.statics[c]:\n setattr(c,attr,value)\n break\n def __delstatic__(cls,attr):\n '''Static variable deleter'''\n for c in cls.__sro__:\n if attr in StaticVarsMeta.statics[c]:\n try:\n delattr(c,attr)\n break\n except AttributeError:\n pass\n raise AttributeError(cls.__name__ + \" object has no attribute '{0}'\".format(attr))\n def __delattr__(cls,attr):\n '''Prevent __sro__ attribute from deletion'''\n if attr == '__sro__':\n 
raise AttributeError('readonly attribute')\n super().__delattr__(attr)\n def is_static(cls,attr):\n '''Returns True if an attribute is a static variable of any class in the __sro__'''\n if any(attr in StaticVarsMeta.statics[c] for c in cls.__sro__):\n return True\n return False\n\n", "You can also add class variables to classes on the fly\n>>> class X:\n... pass\n... \n>>> X.bar = 0\n>>> x = X()\n>>> x.bar\n0\n>>> x.foo\nTraceback (most recent call last):\n File \"<interactive input>\", line 1, in <module>\nAttributeError: X instance has no attribute 'foo'\n>>> X.foo = 1\n>>> x.foo\n1\n\nAnd class instances can change class variables\nclass X:\n l = []\n def __init__(self):\n self.l.append(1)\n\nprint X().l\nprint X().l\n\n>python test.py\n[1]\n[1, 1]\n\n", "Personally I would use a classmethod whenever I needed a static method. Mainly because I get the class as an argument.\nclass myObj(object):\n def myMethod(cls)\n ...\n myMethod = classmethod(myMethod) \n\nor use a decorator\nclass myObj(object):\n @classmethod\n def myMethod(cls)\n\nFor static properties.. Its time you look up some python definition.. variable can always change. There are two types of them mutable and immutable.. Also, there are class attributes and instance attributes.. Nothing really like static attributes in the sense of java & c++\nWhy use static method in pythonic sense, if it has no relation whatever to the class! 
If I were you, I'd either use classmethod or define the method independent from the class.\n", "One special thing to note about static properties & instance properties, shown in the example below:\nclass my_cls:\n my_prop = 0\n\n#static property\nprint my_cls.my_prop #--> 0\n\n#assign value to static property\nmy_cls.my_prop = 1 \nprint my_cls.my_prop #--> 1\n\n#access static property thru' instance\nmy_inst = my_cls()\nprint my_inst.my_prop #--> 1\n\n#instance property is different from static property \n#after being assigned a value\nmy_inst.my_prop = 2\nprint my_cls.my_prop #--> 1\nprint my_inst.my_prop #--> 2\n\nThis means before assigning the value to instance property, if we try to access the property thru' instance, the static value is used. Each property declared in python class always has a static slot in memory.\n", "Static methods in python are called classmethods. Take a look at the following code\nclass MyClass:\n\n def myInstanceMethod(self):\n print 'output from an instance method'\n\n @classmethod\n def myStaticMethod(cls):\n print 'output from a static method'\n\n>>> MyClass.myInstanceMethod()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: unbound method myInstanceMethod() must be called [...]\n\n>>> MyClass.myStaticMethod()\noutput from a static method\n\nNotice that when we call the method myInstanceMethod, we get an error. This is because it requires that method be called on an instance of this class. 
The method myStaticMethod is set as a classmethod using the decorator @classmethod.\nJust for kicks and giggles, we could call myInstanceMethod on the class by passing in an instance of the class, like so:\n>>> MyClass.myInstanceMethod(MyClass())\noutput from an instance method\n\n", "It is possible to have static class variables, but probably not worth the effort.\nHere's a proof-of-concept written in Python 3 -- if any of the exact details are wrong the code can be tweaked to match just about whatever you mean by a static variable:\n\nclass Static:\n def __init__(self, value, doc=None):\n self.deleted = False\n self.value = value\n self.__doc__ = doc\n def __get__(self, inst, cls=None):\n if self.deleted:\n raise AttributeError('Attribute not set')\n return self.value\n def __set__(self, inst, value):\n self.deleted = False\n self.value = value\n def __delete__(self, inst):\n self.deleted = True\n\nclass StaticType(type):\n def __delattr__(cls, name):\n obj = cls.__dict__.get(name)\n if isinstance(obj, Static):\n obj.__delete__(name)\n else:\n super(StaticType, cls).__delattr__(name)\n def __getattribute__(cls, *args):\n obj = super(StaticType, cls).__getattribute__(*args)\n if isinstance(obj, Static):\n obj = obj.__get__(cls, cls.__class__)\n return obj\n def __setattr__(cls, name, val):\n # check if object already exists\n obj = cls.__dict__.get(name)\n if isinstance(obj, Static):\n obj.__set__(name, val)\n else:\n super(StaticType, cls).__setattr__(name, val)\n\nand in use:\nclass MyStatic(metaclass=StaticType):\n \"\"\"\n Testing static vars\n \"\"\"\n a = Static(9)\n b = Static(12)\n c = 3\n\nclass YourStatic(MyStatic):\n d = Static('woo hoo')\n e = Static('doo wop')\n\nand some tests:\nms1 = MyStatic()\nms2 = MyStatic()\nms3 = MyStatic()\nassert ms1.a == ms2.a == ms3.a == MyStatic.a\nassert ms1.b == ms2.b == ms3.b == MyStatic.b\nassert ms1.c == ms2.c == ms3.c == MyStatic.c\nms1.a = 77\nassert ms1.a == ms2.a == ms3.a == MyStatic.a\nms2.b = 99\nassert ms1.b 
== ms2.b == ms3.b == MyStatic.b\nMyStatic.a = 101\nassert ms1.a == ms2.a == ms3.a == MyStatic.a\nMyStatic.b = 139\nassert ms1.b == ms2.b == ms3.b == MyStatic.b\ndel MyStatic.b\nfor inst in (ms1, ms2, ms3):\n try:\n getattr(inst, 'b')\n except AttributeError:\n pass\n else:\n print('AttributeError not raised on %r' % attr)\nms1.c = 13\nms2.c = 17\nms3.c = 19\nassert ms1.c == 13\nassert ms2.c == 17\nassert ms3.c == 19\nMyStatic.c = 43\nassert ms1.c == 13\nassert ms2.c == 17\nassert ms3.c == 19\n\nys1 = YourStatic()\nys2 = YourStatic()\nys3 = YourStatic()\nMyStatic.b = 'burgler'\nassert ys1.a == ys2.a == ys3.a == YourStatic.a == MyStatic.a\nassert ys1.b == ys2.b == ys3.b == YourStatic.b == MyStatic.b\nassert ys1.d == ys2.d == ys3.d == YourStatic.d\nassert ys1.e == ys2.e == ys3.e == YourStatic.e\nys1.a = 'blah'\nassert ys1.a == ys2.a == ys3.a == YourStatic.a == MyStatic.a\nys2.b = 'kelp'\nassert ys1.b == ys2.b == ys3.b == YourStatic.b == MyStatic.b\nys1.d = 'fee'\nassert ys1.d == ys2.d == ys3.d == YourStatic.d\nys2.e = 'fie'\nassert ys1.e == ys2.e == ys3.e == YourStatic.e\nMyStatic.a = 'aargh'\nassert ys1.a == ys2.a == ys3.a == YourStatic.a == MyStatic.a\n\n", "When define some member variable outside any member method, the variable can be either static or non-static depending on how the variable is expressed. \n\nCLASSNAME.var is static variable\nINSTANCENAME.var is not static variable. \nself.var inside class is not static variable. \nvar inside the class member function is not defined.\n\nFor example:\n#!/usr/bin/python\n\nclass A:\n var=1\n\n def printvar(self):\n print \"self.var is %d\" % self.var\n print \"A.var is %d\" % A.var\n\n\n a = A()\n a.var = 2\n a.printvar()\n\n A.var = 3\n a.printvar()\n\nThe results are\nself.var is 2\nA.var is 1\nself.var is 2\nA.var is 3\n\n", "@dataclass definitions provide class-level names that are used to define the instance variables and the initialization method, __init__(). 
If you want a class-level variable in a @dataclass, you should use the typing.ClassVar type hint. The ClassVar type's parameter defines the class-level variable's type.

from typing import ClassVar
from dataclasses import dataclass

@dataclass
class Test:
    i: ClassVar[int] = 10
    x: int
    y: int

    def __repr__(self):
        return f"Test({self.x=}, {self.y=}, {Test.i=})"

Usage examples:

> test1 = Test(5, 6)
> test2 = Test(10, 11)

> test1
Test(self.x=5, self.y=6, Test.i=10)
> test2
Test(self.x=10, self.y=11, Test.i=10)

A: You could also enforce a class to be static using a metaclass.

import abc

class StaticClassError(Exception):
    pass


class StaticClass:
    __metaclass__ = abc.ABCMeta

    def __new__(cls, *args, **kw):
        raise StaticClassError("%s is a static class and cannot be instantiated."
                               % cls)

class MyClass(StaticClass):
    a = 1
    b = 3

    @staticmethod
    def add(x, y):
        return x + y

Then whenever you accidentally try to instantiate MyClass you'll get a StaticClassError.

A: One very interesting point about Python's attribute lookup is that it can be used to create "virtual variables":

class A(object):

    label = "Amazing"

    def __init__(self, d):
        self.data = d

    def say(self):
        print("%s %s!" % (self.label, self.data))

class B(A):
    label = "Bold"  # overrides A.label

A(5).say()  # Amazing 5!
B(3).say()  # Bold 3!

Normally there aren't any assignments to these after they are created. Note that the lookup uses self because, although label is static in the sense of not being associated with a particular instance, the value still depends on the (class of the) instance.

A: With object datatypes it is possible. But with primitive types like bool, int, float or str the behaviour is different from other OOP languages, because in an inherited class the static attribute does not exist. If an attribute does not exist in the inherited class, Python starts to look for it in the parent class. If found in the parent class, its value will be returned.
When you decide to change value in inherited class, static attribute will be created in runtime. In next time of reading inherited static attribute its value will be returned, bacause it is already defined. Objects (lists, dicts) works as a references so it is safe to use them as static attributes and inherit them. Object address is not changed when you change its attribute values.\nExample with integer data type:\nclass A:\n static = 1\n\n\nclass B(A):\n pass\n\n\nprint(f\"int {A.static}\") # get 1 correctly\nprint(f\"int {B.static}\") # get 1 correctly\n\nA.static = 5\nprint(f\"int {A.static}\") # get 5 correctly\nprint(f\"int {B.static}\") # get 5 correctly\n\nB.static = 6\nprint(f\"int {A.static}\") # expected 6, but get 5 incorrectly\nprint(f\"int {B.static}\") # get 6 correctly\n\nA.static = 7\nprint(f\"int {A.static}\") # get 7 correctly\nprint(f\"int {B.static}\") # get unchanged 6\n\nSolution based on refdatatypes library:\nfrom refdatatypes.refint import RefInt\n\n\nclass AAA:\n static = RefInt(1)\n\n\nclass BBB(AAA):\n pass\n\n\nprint(f\"refint {AAA.static.value}\") # get 1 correctly\nprint(f\"refint {BBB.static.value}\") # get 1 correctly\n\nAAA.static.value = 5\nprint(f\"refint {AAA.static.value}\") # get 5 correctly\nprint(f\"refint {BBB.static.value}\") # get 5 correctly\n\nBBB.static.value = 6\nprint(f\"refint {AAA.static.value}\") # get 6 correctly\nprint(f\"refint {BBB.static.value}\") # get 6 correctly\n\nAAA.static.value = 7\nprint(f\"refint {AAA.static.value}\") # get 7 correctly\nprint(f\"refint {BBB.static.value}\") # get 7 correctly\n\n", "Yes, definitely possible to write static variables and methods in python.\nStatic Variables :\nVariable declared at class level are called static variable which can be accessed directly using class name.\n >>> class A:\n ...my_var = \"shagun\"\n\n >>> print(A.my_var)\n shagun\n\nInstance variables: Variables that are related and accessed by instance of a class are instance variables.\n >>> a = A()\n >>> 
a.my_var = \"pruthi\"\n >>> print(A.my_var,a.my_var)\n shagun pruthi\n\nStatic Methods: Similar to variables, static methods can be accessed directly using class Name. No need to create an instance. \nBut keep in mind, a static method cannot call a non-static method in python.\n >>> class A:\n ... @staticmethod\n ... def my_static_method():\n ... print(\"Yippey!!\")\n ... \n >>> A.my_static_method()\n Yippey!!\n\n", "In regards to this answer, for a constant static variable, you can use a descriptor. Here's an example:\nclass ConstantAttribute(object):\n '''You can initialize my value but not change it.'''\n def __init__(self, value):\n self.value = value\n\n def __get__(self, obj, type=None):\n return self.value\n\n def __set__(self, obj, val):\n pass\n\n\nclass Demo(object):\n x = ConstantAttribute(10)\n\n\nclass SubDemo(Demo):\n x = 10\n\n\ndemo = Demo()\nsubdemo = SubDemo()\n# should not change\ndemo.x = 100\n# should change\nsubdemo.x = 100\nprint \"small demo\", demo.x\nprint \"small subdemo\", subdemo.x\nprint \"big demo\", Demo.x\nprint \"big subdemo\", SubDemo.x\n\nresulting in ...\nsmall demo 10\nsmall subdemo 100\nbig demo 10\nbig subdemo 10\n\nYou can always raise an exception if quietly ignoring setting value (pass above) is not your thing. If you're looking for a C++, Java style static class variable:\nclass StaticAttribute(object):\n def __init__(self, value):\n self.value = value\n\n def __get__(self, obj, type=None):\n return self.value\n\n def __set__(self, obj, val):\n self.value = val\n\nHave a look at this answer and the official docs HOWTO for more information about descriptors. 
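As a quick check of the silent-ignore behavior the descriptor answer above describes, here is a self-contained restatement of its ConstantAttribute idea in Python 3 syntax (the demo names `Demo` and `d` are illustrative, not from the original answer):

```python
class ConstantAttribute(object):
    """You can initialize my value but not change it."""
    def __init__(self, value):
        self.value = value

    def __get__(self, obj, objtype=None):
        return self.value

    def __set__(self, obj, val):
        pass  # silently ignore attempts to rebind the attribute


class Demo(object):
    x = ConstantAttribute(10)


d = Demo()
d.x = 100          # intercepted by __set__ and silently dropped
assert d.x == 10   # the constant value is unchanged
```

Because a data descriptor (one defining both `__get__` and `__set__`) takes precedence over the instance `__dict__`, the assignment never creates a shadowing instance attribute.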
\n", "Absolutely Yes,\n Python by itself don't have any static data member explicitly, but We can have by doing so \nclass A:\n counter =0\n def callme (self):\n A.counter +=1\n def getcount (self):\n return self.counter \n>>> x=A()\n>>> y=A()\n>>> print(x.getcount())\n>>> print(y.getcount())\n>>> x.callme() \n>>> print(x.getcount())\n>>> print(y.getcount())\n\noutput\n0\n0\n1\n1\n\nexplanation\nhere object (x) alone increment the counter variable\nfrom 0 to 1 by not object y. But result it as \"static counter\"\n\n", "The best way I found is to use another class. You can create an object and then use it on other objects.\nclass staticFlag:\n def __init__(self):\n self.__success = False\n def isSuccess(self):\n return self.__success\n def succeed(self):\n self.__success = True\n\nclass tryIt:\n def __init__(self, staticFlag):\n self.isSuccess = staticFlag.isSuccess\n self.succeed = staticFlag.succeed\n\ntryArr = []\nflag = staticFlag()\nfor i in range(10):\n tryArr.append(tryIt(flag))\n if i == 5:\n tryArr[i].succeed()\n print tryArr[i].isSuccess()\n\nWith the example above, I made a class named staticFlag.\nThis class should present the static var __success (Private Static Var).\ntryIt class represented the regular class we need to use.\nNow I made an object for one flag (staticFlag). This flag will be sent as reference to all the regular objects.\nAll these objects are being added to the list tryArr.\n\nThis Script Results:\nFalse\nFalse\nFalse\nFalse\nFalse\nTrue\nTrue\nTrue\nTrue\nTrue\n\n", "Summarizing others' answers and adding, there are many ways to declare Static Methods or Variables in python.\n1. Using staticmethod() as a decorator:\nOne can simply put a decorator above a method(function) declared to make it a static method. For eg.\nclass Calculator:\n @staticmethod\n def multiply(n1, n2, *args):\n Res = 1\n for num in args: Res *= num\n return n1 * n2 * Res\n\nprint(Calculator.multiply(1, 2, 3, 4)) # 24\n\n2. 
Using staticmethod() as a parameter function:\nThis method can receive an argument which is of function type, and it returns a static version of the function passed. For eg.\nclass Calculator:\n def add(n1, n2, *args):\n return n1 + n2 + sum(args)\n\nCalculator.add = staticmethod(Calculator.add)\nprint(Calculator.add(1, 2, 3, 4)) # 10\n\n3. Using classmethod() as a decorator:\n@classmethod has similar effect on a function as @staticmethod has, but\nthis time, an additional argument is needed to be accepted in the function (similar to self parameter for instance variables). For eg.\nclass Calculator:\n num = 0\n def __init__(self, digits) -> None:\n Calculator.num = int(''.join(digits))\n\n @classmethod\n def get_digits(cls, num):\n digits = list(str(num))\n calc = cls(digits)\n return calc.num\n\nprint(Calculator.get_digits(314159)) # 314159\n\n4. Using classmethod() as a parameter function:\n@classmethod can also be used as a parameter function, in case one doesn't want to modify class definition. For eg.\nclass Calculator:\n def divide(cls, n1, n2, *args):\n Res = 1\n for num in args: Res *= num\n return n1 / n2 / Res\n\nCalculator.divide = classmethod(Calculator.divide)\n\nprint(Calculator.divide(15, 3, 5)) # 1.0\n\n5. 
Direct declaration\nA method/variable declared outside all other methods, but inside a class is automatically static.\nclass Calculator: \n def subtract(n1, n2, *args):\n return n1 - n2 - sum(args)\n\nprint(Calculator.subtract(10, 2, 3, 4)) # 1\n\nThe whole program\nclass Calculator:\n num = 0\n def __init__(self, digits) -> None:\n Calculator.num = int(''.join(digits))\n \n \n @staticmethod\n def multiply(n1, n2, *args):\n Res = 1\n for num in args: Res *= num\n return n1 * n2 * Res\n\n\n def add(n1, n2, *args):\n return n1 + n2 + sum(args)\n \n\n @classmethod\n def get_digits(cls, num):\n digits = list(str(num))\n calc = cls(digits)\n return calc.num\n\n\n def divide(cls, n1, n2, *args):\n Res = 1\n for num in args: Res *= num\n return n1 / n2 / Res\n\n\n def subtract(n1, n2, *args):\n return n1 - n2 - sum(args)\n \n\n\n\nCalculator.add = staticmethod(Calculator.add)\nCalculator.divide = classmethod(Calculator.divide)\n\nprint(Calculator.multiply(1, 2, 3, 4)) # 24\nprint(Calculator.add(1, 2, 3, 4)) # 10\nprint(Calculator.get_digits(314159)) # 314159\nprint(Calculator.divide(15, 3, 5)) # 1.0\nprint(Calculator.subtract(10, 2, 3, 4)) # 1\n\nRefer to Python Documentation for mastering OOP in python.\n", "To avoid any potential confusion, I would like to contrast static variables and immutable objects.\nSome primitive object types like integers, floats, strings, and touples are immutable in Python. This means that the object that is referred to by a given name cannot change if it is of one of the aforementioned object types. The name can be reassigned to a different object, but the object itself may not be changed.\nMaking a variable static takes this a step further by disallowing the variable name to point to any object but that to which it currently points. 
(Note: this is a general software concept and not specific to Python; please see others' posts for information about implementing statics in Python).\n", "Static Variables in Class factory python3.6\nFor anyone using a class factory with python3.6 and up use the nonlocal keyword to add it to the scope / context of the class being created like so:\n>>> def SomeFactory(some_var=None):\n... class SomeClass(object):\n... nonlocal some_var\n... def print():\n... print(some_var)\n... return SomeClass\n... \n>>> SomeFactory(some_var=\"hello world\").print()\nhello world\n\n", "So this is probably a hack, but I've been using eval(str) to obtain an static object, kind of a contradiction, in python 3.\nThere is an Records.py file that has nothing but class objects defined with static methods and constructors that save some arguments. Then from another .py file I import Records but i need to dynamically select each object and then instantiate it on demand according to the type of data being read in.\nSo where object_name = 'RecordOne' or the class name, I call cur_type = eval(object_name) and then to instantiate it you do cur_inst = cur_type(args)\nHowever before you instantiate you can call static methods from cur_type.getName() for example, kind of like abstract base class implementation or whatever the goal is. 
However in the backend, it's probably instantiated in python and is not truly static, because eval is returning an object....which must have been instantiated....that gives static like behavior.\n", "If you are attempting to share a static variable for, by example, increasing it across other instances, something like this script works fine:\n# -*- coding: utf-8 -*-\nclass Worker:\n id = 1\n\n def __init__(self):\n self.name = ''\n self.document = ''\n self.id = Worker.id\n Worker.id += 1\n\n def __str__(self):\n return u\"{}.- {} {}\".format(self.id, self.name, self.document).encode('utf8')\n\n\nclass Workers:\n def __init__(self):\n self.list = []\n\n def add(self, name, doc):\n worker = Worker()\n worker.name = name\n worker.document = doc\n self.list.append(worker)\n\n\nif __name__ == \"__main__\":\n workers = Workers()\n for item in (('Fiona', '0009898'), ('Maria', '66328191'), (\"Sandra\", '2342184'), ('Elvira', '425872')):\n workers.add(item[0], item[1])\n for worker in workers.list:\n print(worker)\n print(\"next id: %i\" % Worker.id)\n\n", "You can use a list or a dictionary to get \"static behavior\" between instances.\nclass Fud:\n\n class_vars = {'origin_open':False}\n\n def __init__(self, origin = True):\n self.origin = origin\n self.opened = True\n if origin:\n self.class_vars['origin_open'] = True\n\n\n def make_another_fud(self):\n ''' Generating another Fud() from the origin instance '''\n\n return Fud(False)\n\n\n def close(self):\n self.opened = False\n if self.origin:\n self.class_vars['origin_open'] = False\n\n\nfud1 = Fud()\nfud2 = fud1.make_another_fud()\n\nprint (f\"is this the original fud: {fud2.origin}\")\nprint (f\"is the original fud open: {fud2.class_vars['origin_open']}\")\n# is this the original fud: False\n# is the original fud open: True\n\nfud1.close()\n\nprint (f\"is the original fud open: {fud2.class_vars['origin_open']}\")\n# is the original fud open: False\n\n", "Put it this way the static variable is created when a 
user-defined a class come into existence and the define a static variable it should follow the keyword self,\nclass Student:\n\n the correct way of static declaration\n i = 10\n\n incorrect\n self.i = 10\n\n", "Not like the @staticmethod but class variables are static method of class and are shared with all the instances.\nNow you can access it like\ninstance = MyClass()\nprint(instance.i)\n\nor\nprint(MyClass.i)\n\nyou have to assign the value to these variables\nI was trying\nclass MyClass:\n i: str\n\nand assigning the value in one method call, in that case it will not work and will throw an error\ni is not attribute of MyClass\n\n", "Class variable and allow for subclassing\nAssuming you are not looking for a truly static variable but rather something pythonic that will do the same sort of job for consenting adults, then use a class variable.\nThis will provide you with a variable which all instances can access (and update)\nBeware: Many of the other answers which use a class variable will break subclassing. 
You should avoid referencing the class directly by name.\nfrom contextlib import contextmanager\n\nclass Sheldon(object):\n foo = 73\n\n def __init__(self, n):\n self.n = n\n\n def times(self):\n cls = self.__class__\n return cls.foo * self.n\n #self.foo * self.n would give the same result here but is less readable\n # it will also create a local variable which will make it easier to break your code\n \n def updatefoo(self):\n cls = self.__class__\n cls.foo *= self.n\n #self.foo *= self.n will not work here\n # assignment will try to create a instance variable foo\n\n @classmethod\n @contextmanager\n def reset_after_test(cls):\n originalfoo = cls.foo\n yield\n cls.foo = originalfoo\n #if you don't do this then running a full test suite will fail\n #updates to foo in one test will be kept for later tests\n\nwill give you the same functionality as using Sheldon.foo to address the variable and will pass tests like these:\ndef test_times():\n with Sheldon.reset_after_test():\n s = Sheldon(2)\n assert s.times() == 146\n\ndef test_update():\n with Sheldon.reset_after_test():\n s = Sheldon(2)\n s.updatefoo()\n assert Sheldon.foo == 146\n\ndef test_two_instances():\n with Sheldon.reset_after_test():\n s = Sheldon(2)\n s3 = Sheldon(3)\n assert s.times() == 146\n assert s3.times() == 219\n s3.updatefoo()\n assert s.times() == 438\n\nIt will also allow someone else to simply:\nclass Douglas(Sheldon):\n foo = 42\n\nwhich will also work:\ndef test_subclassing():\n with Sheldon.reset_after_test(), Douglas.reset_after_test():\n s = Sheldon(2)\n d = Douglas(2)\n assert d.times() == 84\n assert s.times() == 146\n d.updatefoo()\n assert d.times() == 168 #Douglas.Foo was updated\n assert s.times() == 146 #Seldon.Foo is still 73\n\ndef test_subclassing_reset():\n with Sheldon.reset_after_test(), Douglas.reset_after_test():\n s = Sheldon(2)\n d = Douglas(2)\n assert d.times() == 84 #Douglas.foo was reset after the last test\n assert s.times() == 146 #and so was Sheldon.foo\n\nFor great 
advice on things to watch out for when creating classes check out Raymond Hettinger's video https://www.youtube.com/watch?v=HTLu2DFOdTg\n", "You can create the class variable x, the instance variable name, the instance method test1(self), the class method test2(cls) and the static method test3() as shown below:\nclass Person:\n x = \"Hello\" # Class variable\n\n def __init__(self, name):\n self.name = name # Instance variable\n \n def test1(self): # Instance method\n print(\"Test1\")\n\n @classmethod\n def test2(cls): # Class method\n print(\"Test2\")\n \n @staticmethod\n def test3(): # Static method\n print(\"Test3\")\n\nI explain about class variable in my answer and class method and static method in my answer and instance method in my answer.\n" ]
[ 2264, 745, 266, 45, 28, 24, 23, 17, 16, 14, 12, 11, 11, 10, 9, 9, 6, 6, 5, 4, 3, 3, 2, 1, 0, 0, 0 ]
[]
[]
[ "class", "class_variables", "python", "static" ]
stackoverflow_0000068645_class_class_variables_python_static.txt
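The class-attribute answers above all turn on Python's attribute lookup order: reads through `self` fall through to the class, but assignment through `self` creates a shadowing instance attribute. A minimal sketch of that behaviour (the `Counter` name is illustrative, not from the thread):

```python
class Counter:
    total = 0  # class attribute, shared by every instance

    def bump(self):
        # Reading self.total would fall through to Counter.total,
        # but writing self.total would create an instance attribute
        # that shadows it -- so mutate through the class explicitly.
        type(self).total += 1


a, b = Counter(), Counter()
a.bump()
b.bump()
print(Counter.total)   # 2 -- both bumps hit the shared class attribute

b.total = 100          # instance attribute now shadows the class one
print(a.total, Counter.total)  # 2 2 -- the class attribute is untouched
```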
Q: PYSimpleGui event in combination with keyboard.is_pressed('key') working every time, except first time after program start PYSimpleGui event in combination with keyboard.is_pressed('key') working every time, except first time after program start. So the first click never works, but every other click works. Hardcoding my button event is not an option, I have to check for several keys, the program is pretty complex already. File explorer. Try it yourself....can't make it work the first time, without an artificial mouse-click at program start. import PySimpleGUI as sg import keyboard from time import sleep layout = [ [ sg.Button('', enable_events=True, key=(1, 1)) ] ] window = sg.Window('Image Browser', layout, return_keyboard_events=False, finalize=True, auto_size_buttons=False, use_default_focus=True, resizable=True, size=(100, 100)) while True: event, values = window.read(timeout=0) if event == "Exit" or event == sg.WINDOW_CLOSED: break if type(event) is tuple and keyboard.is_pressed('shift'): print("++++++++++++++++++++++++++++++++") print("++++++++++++++++++++++++++++++++") sleep(0.01) Hello there. I have tried different window arguments,..like focus, return_keyboard_events etc. Also tried different timeouts, and putting the is_pressed down a line,... like if type(event) is tuple: if keyboard.is_pressed('shift'): print("++++++++++++++++++++++++++++++++") print("++++++++++++++++++++++++++++++++") Also which button does not matter. What helps is sending an artificial mouse-click to that button, when the program starts. Seems a bit hacky..lol. It would not make a good impression if I hijack the mouse at program start, and put it back where it was...lol. User like: "What just happened??"...lol I am sure I miss something here. A: Ok I found the solution!! I just have to put ANY arbitrary keyboard read in, after the program starts. 
It can be keyboard.get_hotkey_name() or keyboard.is_pressed('shift') or keyboard.is_pressed('a') Full solution: import PySimpleGUI as sg import keyboard from time import sleep keyboard.get_hotkey_name() # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! layout = [ [ sg.Button('', enable_events=True, key=(1, 1)) ] ] window = sg.Window('Image Browser', layout, return_keyboard_events=False, finalize=True, auto_size_buttons=False, use_default_focus=True, resizable=True, size=(100, 100)) while True: event, values = window.read(timeout=0) if event == "Exit" or event == sg.WINDOW_CLOSED: break if type(event) is tuple and keyboard.is_pressed('shift'): print("++++++++++++++++++++++++++++++++") print("++++++++++++++++++++++++++++++++") sleep(0.01)
PySimpleGUI event in combination with keyboard.is_pressed('key') working every time, except first time after program start
PYSimpleGui event in combination with keyboard.is_pressed('key') working every time, except first time after program start. So the first click never works, but every other click works. Hardcoding my button event is not an option, I have to check for several keys, the program is pretty complex already. File explorer. Try it yourself....can't make it work the first time, without an artificial mouse-click at program start. import PySimpleGUI as sg import keyboard from time import sleep layout = [ [ sg.Button('', enable_events=True, key=(1, 1)) ] ] window = sg.Window('Image Browser', layout, return_keyboard_events=False, finalize=True, auto_size_buttons=False, use_default_focus=True, resizable=True, size=(100, 100)) while True: event, values = window.read(timeout=0) if event == "Exit" or event == sg.WINDOW_CLOSED: break if type(event) is tuple and keyboard.is_pressed('shift'): print("++++++++++++++++++++++++++++++++") print("++++++++++++++++++++++++++++++++") sleep(0.01) Hello there. I have tried different window arguments,..like focus, return_keyboard_events etc. Also tried different timeouts, and putting the is_pressed down a line,... like if type(event) is tuple: if keyboard.is_pressed('shift'): print("++++++++++++++++++++++++++++++++") print("++++++++++++++++++++++++++++++++") Also which button does not matter. What helps is sending an artificial mouse-click to that button, when the program starts. Seems a bit hacky..lol. It would not make a good impression if I hijack the mouse at program start, and put it back where it was...lol. User like: "What just happened??"...lol I am sure I miss something here.
[ "Ok I found the solution!!\nI just have to put ANY arbitrary keyboard read in, after the program starts.\nIt can be keyboard.get_hotkey_name() or keyboard.is_pressed('shift') or keyboard.is_pressed('a')\nFull solution:\nimport PySimpleGUI as sg\nimport keyboard\nfrom time import sleep\n\nkeyboard.get_hotkey_name() # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n\nlayout = [\n [\n sg.Button('', enable_events=True, key=(1, 1))\n ]\n ]\n\nwindow = sg.Window('Image Browser', layout,\n return_keyboard_events=False, finalize=True, auto_size_buttons=False, use_default_focus=True,\n resizable=True, size=(100, 100))\n\nwhile True:\n\n event, values = window.read(timeout=0)\n\n if event == \"Exit\" or event == sg.WINDOW_CLOSED:\n break\n\n if type(event) is tuple and keyboard.is_pressed('shift'):\n print(\"++++++++++++++++++++++++++++++++\")\n print(\"++++++++++++++++++++++++++++++++\")\n\n sleep(0.01)\n\n" ]
[ 0 ]
[]
[]
[ "keyboard_events", "pysimplegui", "python" ]
stackoverflow_0074485353_keyboard_events_pysimplegui_python.txt
Q: Getting the image of an element in a website with Selenium In this website there is a board at the center. If you click on it with the right mouse button, it is possible to copy/save its image. In this sense, I want to code with Selenium a way to get this image. However, the board's element is only given by <canvas width="640" height="640" class="board-canvas"></canvas> Which doesn't have any reference to the image itself, such as an URL of it, for example. So how would it be possible to get the board's image knowing its element? A: Since it is a canvas, we can use the command HTMLCanvasElement.toDataURL(). Returns the image as a base64-encoded string. You then simply decode it and write it to a file. This is a complete example of reproducible code: from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By import base64 # It is not mandatory to specify all this, but it is strongly recommended for any web scraping software opts = Options() # make web scraping 'invisible' opts.add_argument("--headless") opts.add_argument('--no-sandbox') user_agent = "user-agent=[Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.63 Safari/537.36]" opts.add_argument(user_agent) # ChromeDriverManager ensures no webdriver recovery issues under any supported operating system # If you don't like this way, use the classic webdriver with the system path browser = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=opts) browser.get('https://www.robotreboot.com/challenge') canvas = browser.find_element(By.CSS_SELECTOR, "canvas") # get the canvas as a PNG base64 string canvas_base64 = browser.execute_script("return arguments[0].toDataURL('image/png').substring(21);", canvas) # decode canvas_png = base64.b64decode(canvas_base64) # save to a 
file with open(r"canvas.png", 'wb') as f: f.write(canvas_png)
Getting the image of an element in a website with Selenium
On this website there is a board at the center. If you click on it with the right mouse button, it is possible to copy/save its image. To that end, I want to write Selenium code that gets this image. However, the board's element is only given by <canvas width="640" height="640" class="board-canvas"></canvas> which doesn't have any reference to the image itself, such as a URL, for example. So how would it be possible to get the board's image knowing its element?
[ "Since it is a canvas, we can use the command HTMLCanvasElement.toDataURL().\nReturns the image as a base64-encoded string. You then simply decode it and write it to a file.\nThis is a complete example of reproducible code:\nfrom selenium import webdriver\nfrom webdriver_manager.chrome import ChromeDriverManager\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.common.by import By\nimport base64\n\n# It is not mandatory to specify all this, but it is strongly recommended for any web scraping software\nopts = Options()\n\n# make web scraping 'invisible'\nopts.add_argument(\"--headless\")\nopts.add_argument('--no-sandbox')\n\nuser_agent = \"user-agent=[Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.63 Safari/537.36]\"\nopts.add_argument(user_agent)\n\n# ChromeDriverManager ensures no webdriver recovery issues under any supported operating system\n# If you don't like this way, use the classic webdriver with the system path\nbrowser = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=opts)\n\nbrowser.get('https://www.robotreboot.com/challenge')\n\ncanvas = browser.find_element(By.CSS_SELECTOR, \"canvas\")\n\n# get the canvas as a PNG base64 string\ncanvas_base64 = browser.execute_script(\"return arguments[0].toDataURL('image/png').substring(21);\", canvas)\n\n# decode\ncanvas_png = base64.b64decode(canvas_base64)\n\n# save to a file\nwith open(r\"canvas.png\", 'wb') as f:\n f.write(canvas_png)\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x", "selenium" ]
stackoverflow_0074501154_python_python_3.x_selenium.txt
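The `substring(21)` in the answer above hard-codes the length of the `data:image/png;base64,` prefix and in fact leaves a stray comma that `base64.b64decode` only tolerates because non-alphabet characters are silently discarded by default. Splitting on the first comma is more robust; a sketch of just the decoding step, independent of Selenium (the fake data URL is made up for illustration):

```python
import base64


def data_url_to_bytes(data_url: str) -> bytes:
    """Decode a 'data:<mime>;base64,<payload>' URL to raw bytes."""
    header, _, payload = data_url.partition(",")
    if not (header.startswith("data:") and header.endswith(";base64")):
        raise ValueError("not a base64 data URL")
    return base64.b64decode(payload)


# In the Selenium version this string would come from
# execute_script("return arguments[0].toDataURL('image/png');", canvas).
fake = "data:image/png;base64," + base64.b64encode(b"\x89PNG fake").decode()
print(data_url_to_bytes(fake))  # b'\x89PNG fake'
```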
Q: Fill column based on conditional max value in Pandas I have a dataframe that looks like this (link to csv): id, time, value, approved 0, 0:00, 10, false 1, 0:01, 20, false 1, 0:02, 50, false 1, 0:03, 20, true 1, 0:04, 40, true 1, 0:05, 40, true 1, 0:06, 20, false 2, 0:07, 35, false 2, 0:08, 35, false 2, 0:09, 50, true 2, 0:10, 50, true and I want to compute a column that should be true for the first max approved value per ID. So it should be like this: id, time, value, approved, is_max 0, 0:00, 10, false, false 1, 0:01, 20, false, false 1, 0:02, 50, false, false 1, 0:03, 20, true, false 1, 0:04, 40, true, true 1, 0:05, 40, true, false 1, 0:06, 20, false, false 2, 0:07, 35, false, false 2, 0:08, 35, false, false 2, 0:09, 50, true, true 2, 0:10, 50, true, false I can achieve something close to this with df['is_max'] = df['value'] == df.groupby(['id', df['approved']])['value'].transform('max').where(df['approved']) but this will set to true both lines with a max value per ID (0:04 and 0:05 for ID 1 | 0:09 and 0:10 for ID 2). I just want the first row with the max value to be set to true. A: Here is an approach using pandas.DataFrame.mask based on your solution : approved_1st_max = df.mask(~df["approved"]).groupby("id")["value"].transform('idxmax') df["is_max"]= df.reset_index()["index"].eq(approved_1st_max) # Output : print(df) id time value approved is_max 0 0 0:00 10 False False 1 1 0:01 20 False False 2 1 0:02 50 False False 3 1 0:03 20 True False 4 1 0:04 40 True True 5 1 0:05 40 True False 6 1 0:06 20 False False 7 2 0:07 35 False False 8 2 0:08 35 False False 9 2 0:09 50 True True 10 2 0:10 50 True False
Fill column based on conditional max value in Pandas
I have a dataframe that looks like this (link to csv): id, time, value, approved 0, 0:00, 10, false 1, 0:01, 20, false 1, 0:02, 50, false 1, 0:03, 20, true 1, 0:04, 40, true 1, 0:05, 40, true 1, 0:06, 20, false 2, 0:07, 35, false 2, 0:08, 35, false 2, 0:09, 50, true 2, 0:10, 50, true and I want to compute a column that should be true for the first max approved value per ID. So it should be like this: id, time, value, approved, is_max 0, 0:00, 10, false, false 1, 0:01, 20, false, false 1, 0:02, 50, false, false 1, 0:03, 20, true, false 1, 0:04, 40, true, true 1, 0:05, 40, true, false 1, 0:06, 20, false, false 2, 0:07, 35, false, false 2, 0:08, 35, false, false 2, 0:09, 50, true, true 2, 0:10, 50, true, false I can achieve something close to this with df['is_max'] = df['value'] == df.groupby(['id', df['approved']])['value'].transform('max').where(df['approved']) but this will set to true both lines with a max value per ID (0:04 and 0:05 for ID 1 | 0:09 and 0:10 for ID 2). I just want the first row with the max value to be set to true.
[ "Here is an approach using pandas.DataFrame.mask based on your solution :\napproved_1st_max = df.mask(~df[\"approved\"]).groupby(\"id\")[\"value\"].transform('idxmax')\n\ndf[\"is_max\"]= df.reset_index()[\"index\"].eq(approved_1st_max)\n\n# Output :\nprint(df)\n\n id time value approved is_max\n0 0 0:00 10 False False\n1 1 0:01 20 False False\n2 1 0:02 50 False False\n3 1 0:03 20 True False\n4 1 0:04 40 True True\n5 1 0:05 40 True False\n6 1 0:06 20 False False\n7 2 0:07 35 False False\n8 2 0:08 35 False False\n9 2 0:09 50 True True\n10 2 0:10 50 True False\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074501097_pandas_python.txt
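The answer above goes through `mask` and `transform('idxmax')`; since `Series.idxmax` already returns the index label of the *first* maximum within each group, the same "first max per id" flag can also be set directly. A sketch on a small made-up frame (not the CSV from the question):

```python
import pandas as pd

df = pd.DataFrame({
    "id":       [1, 1, 1, 2, 2],
    "value":    [20, 40, 40, 50, 50],
    "approved": [False, True, True, True, True],
})

# idxmax returns the label of the FIRST maximal row per group, which is
# exactly the tie-breaking the question asks for.
first_max = df[df["approved"]].groupby("id")["value"].idxmax()

df["is_max"] = False
df.loc[first_max, "is_max"] = True
print(df["is_max"].tolist())  # [False, True, False, True, False]
```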
Q: asyncio: Why does cancelling a task lead to cancellation of other tasks added into the event loop? I use a coroutine to add another coroutine to the event loop multiple times but partway through I cancel the first coroutine. I thought this would mean that any coroutines already added to the event loop would complete successfully and no more would be added, however I find that coroutines that have already been added to the event loop also seem to be cancelled. I'm running this script in Spyder so I don't need to call run_until_complete, etc. because the event loop is already running in the background on my environment. I'm sure I'm missing something and the code is behaving exactly as it should - but I can't figure out why. I would also like to know how I might allow cancellation of runTimes but still let slowPrinter complete. Thank you! Code below import asyncio loop = asyncio.get_event_loop() async def runTimes(async_func, times): for i in range(0, times): task = loop.create_task(async_func()) await task async def slowPrinter(): await asyncio.sleep(2) print("slowPrinter done") async def doStuff(): for i in range(0, 10): await(asyncio.sleep(1)) print("doStuff done") async def doLater(delay_ms, method, *args, **kwargs): try: print("doLater " + str(delay_ms) + " " + str(method.__name__)) except AttributeError: print("doLater " + str(delay_ms)) await asyncio.sleep(delay_ms/1000) method(*args, **kwargs) print("doLater complete") task = loop.create_task(runTimes(slowPrinter, 3)) loop.create_task(doLater(3000, task.cancel)) loop.create_task(doStuff()) Output doLater 3000 cancel slowPrinter done doLater complete doStuff done Expected Output doLater 3000 cancel slowPrinter done doLater complete **slowPrinter done** doStuff done Edit: Part of the reason I have built the code without using things like run_later is because I need to port the code to micropython later so I am sticking to functions I can use on micropython. 
Edit2: Interestingly, task cancellation seems to propagate to tasks created from within the coroutine as well! async def runTimes(async_func, times): for i in range(0, times): task = loop.create_task(async_func()) try: await task except asyncio.CancelledError: print("cancelled as well") Output doLater 3000 cancel slowPrinter done doLater complete cancelled as well slowPrinter done doStuff done A: That's because your task is waiting on another task: async def runTimes(async_func, times): for i in range(0, times): task = loop.create_task(async_func()) await task ## HERE! As per asyncio's documentation: To cancel a running Task use the cancel() method. Calling it will cause the Task to throw a CancelledError exception into the wrapped coroutine. If a coroutine is awaiting on a Future object during cancellation, the Future object will be cancelled. You may have to look for a way to prevent the task from being cancelled while waiting. A: With the second edit where I used this code it became clear the await expression was where the cancellation error was being thrown. async def runTimes(async_func, times): for i in range(0, times): task = loop.create_task(async_func()) try: await task except asyncio.CancelledError: print("cancelled as well") Changing to awaiting on a timer allowed the main task to complete and the cancellation was thrown on the await asyncio.sleep() method instead, which works for my use case. 
import asyncio loop = asyncio.get_event_loop() async def runTimes(async_func, times): global current_task for i in range(0, times): current_task = loop.create_task(async_func()) while not current_task.done(): await asyncio.sleep(0.05) async def slowPrinter(): await asyncio.sleep(2) print("slowPrinter done") async def doStuff(): for i in range(0, 10): await(asyncio.sleep(1)) print("doStuff done") async def doLater(delay_ms, method, *args, **kwargs): try: print("doLater " + str(delay_ms) + " " + str(method.__name__)) except AttributeError: print("doLater " + str(delay_ms)) await asyncio.sleep(delay_ms/1000) method(*args, **kwargs) print("doLater complete") task = loop.create_task(runTimes(slowPrinter, 3)) loop.create_task(doLater(3000, task.cancel)) loop.create_task(doStuff())
asyncio: Why does cancelling a task lead to cancellation of other tasks added into the event loop?
I use a coroutine to add another coroutine to the event loop multiple times but partway through I cancel the first coroutine. I thought this would mean that any coroutines already added to the event loop would complete successfully and no more would be added, however I find that coroutines that have already been added to the event loop also seem to be cancelled. I'm running this script in Spyder so I don't need to call run_until_complete, etc. because the event loop is already running in the background on my environment. I'm sure I'm missing something and the code is behaving exactly as it should - but I can't figure out why. I would also like to know how I might allow cancellation of runTimes but still let slowPrinter complete. Thank you! Code below import asyncio loop = asyncio.get_event_loop() async def runTimes(async_func, times): for i in range(0, times): task = loop.create_task(async_func()) await task async def slowPrinter(): await asyncio.sleep(2) print("slowPrinter done") async def doStuff(): for i in range(0, 10): await(asyncio.sleep(1)) print("doStuff done") async def doLater(delay_ms, method, *args, **kwargs): try: print("doLater " + str(delay_ms) + " " + str(method.__name__)) except AttributeError: print("doLater " + str(delay_ms)) await asyncio.sleep(delay_ms/1000) method(*args, **kwargs) print("doLater complete") task = loop.create_task(runTimes(slowPrinter, 3)) loop.create_task(doLater(3000, task.cancel)) loop.create_task(doStuff()) Output doLater 3000 cancel slowPrinter done doLater complete doStuff done Expected Output doLater 3000 cancel slowPrinter done doLater complete **slowPrinter done** doStuff done Edit: Part of the reason I have built the code without using things like run_later is because I need to port the code to micropython later so I am sticking to functions I can use on micropython. Edit2: Interestingly, task cancellation seems to propagate to tasks created from within the coroutine as well! 
async def runTimes(async_func, times): for i in range(0, times): task = loop.create_task(async_func()) try: await task except asyncio.CancelledError: print("cancelled as well") Output doLater 3000 cancel slowPrinter done doLater complete cancelled as well slowPrinter done doStuff done
[ "That's because tour task is waiting on another task:\nasync def runTimes(async_func, times):\n for i in range(0, times):\n task = loop.create_task(async_func())\n await task ## HERE!\n\nAs per asyncio's documentation:\n\nTo cancel a running Task use the cancel() method. Calling it will\ncause the Task to throw a CancelledError exception into the wrapped\ncoroutine. If a coroutine is awaiting on a Future object during\ncancellation, the Future object will be cancelled.\n\nYou may have to look for a way to prevent the task from being cancelled while waiting.\n", "With the second edit where I used this code it became clear the await expression was where the cancellation error was being thrown.\nasync def runTimes(async_func, times):\n for i in range(0, times):\n task = loop.create_task(async_func())\n try:\n await task\n except asyncio.CancelledError:\n print(\"cancelled as well\")\n\nChanging to awaiting on a timer allowed the main task to complete and the cancellation was thrown on the await asyncio.sleep() method instead which works for my use case.\nimport asyncio\n\nloop = asyncio.get_event_loop()\n\n\nasync def runTimes(async_func, times):\n global current_task\n for i in range(0, times):\n current_task = loop.create_task(async_func())\n while not current_task.done():\n await asyncio.sleep(0.05)\n \nasync def slowPrinter():\n await asyncio.sleep(2)\n print(\"slowPrinter done\")\n \n\nasync def doStuff():\n for i in range(0, 10):\n await(asyncio.sleep(1))\n print(\"doStuff done\")\n \nasync def doLater(delay_ms, method, *args, **kwargs):\n try:\n print(\"doLater \" + str(delay_ms) + \" \" + str(method.__name__))\n except AttributeError:\n print(\"doLater \" + str(delay_ms))\n await asyncio.sleep(delay_ms/1000)\n method(*args, **kwargs)\n print(\"doLater complete\")\n \n \ntask = loop.create_task(runTimes(slowPrinter, 3))\nloop.create_task(doLater(3000, task.cancel))\nloop.create_task(doStuff())\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_asyncio" ]
stackoverflow_0074501298_python_python_asyncio.txt
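The standard-library way to get exactly what the question asks for (cancel the outer loop but let an already-started inner coroutine finish) is `asyncio.shield`. A self-contained sketch, not taken from the thread — and note the question mentions a later MicroPython port, where `uasyncio` may not provide `shield`:

```python
import asyncio

finished = []


async def slow_printer():
    await asyncio.sleep(0.2)
    finished.append("slowPrinter done")
    print("slowPrinter done")


async def run_times(times):
    for _ in range(times):
        inner = asyncio.ensure_future(slow_printer())
        try:
            # Cancelling run_times raises CancelledError at this await,
            # but shield() keeps the inner task itself running.
            await asyncio.shield(inner)
        except asyncio.CancelledError:
            await inner          # let the in-flight call finish cleanly
            raise


async def main():
    task = asyncio.ensure_future(run_times(3))
    await asyncio.sleep(0.1)     # cancel mid-way through the first call
    task.cancel()
    await asyncio.sleep(0.3)
    return task


task = asyncio.run(main())
print(finished)  # ['slowPrinter done'] -- it completed despite the cancel
```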
Q: No module named 'bcolors' although 'Requirement already satisfied' I am trying to use bcolors in my python code in Spyder/Anaconda but it keeps telling me ModuleNotFoundError: No module named 'bcolors'. So I installed it with pip install bcolors, which gave me Requirement already satisfied: bcolors in e:\anaconda3\lib\site-packages (1.0.4), but it still doesn't work. What am I doing wrong? A: You had that error because you are in a different interpreter trying to import the module. You should append the path of the module to your working directory. import sys sys.path.append("\anaconda3\lib\site-packages") import bcolors A: Have you tried installing the pandas' library using pip install pandas A: I was having trouble with the latest distribution as well. Installing 1.0.2 worked for me.
No module named 'bcolors' although 'Requirement already satisfied'
I am trying to use bcolors in my python code in Spyder/Anaconda but it keeps telling me ModuleNotFoundError: No module named 'bcolors'. So I installed it with pip install bcolors, which gave me Requirement already satisfied: bcolors in e:\anaconda3\lib\site-packages (1.0.4), but it still doesn't work. What am I doing wrong?
[ "You had that error because you are in different interpreter trying to import the module. You should append the path of the module to your working directory.\nimport sys\n\nsys.path.append(\"\\anaconda3\\lib\\site-packages\")\n\nimport bcolors\n\n", "Have you tried installing the pandas' library using pip install pandas\n", "I was having trouble with the latest distribution as well. Installing 1.0.2 worked for me.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "anaconda", "python" ]
stackoverflow_0073531265_anaconda_python.txt
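"Requirement already satisfied" plus ModuleNotFoundError almost always means two interpreters are in play: the one the shell's `pip` installs into versus the one Spyder runs. Rather than appending a hard-coded site-packages path as the first answer does, a safer diagnostic is to drive pip through the running interpreter itself; a sketch (the printed path depends on your machine, so no expected output is shown):

```python
import subprocess
import sys

# The interpreter this code is actually running under (e.g. Spyder's):
print(sys.executable)

# Invoking pip *through that same interpreter* guarantees the install
# target and the import target are the same environment:
subprocess.run([sys.executable, "-m", "pip", "show", "bcolors"])
```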
Q: trouble understanding list.begin() | list.end() | list::iterator i void Graph::max_path(){ for(int i=0; i <N; i++){ cost[i]=0; cam_max[i]=999; } // Iterate over all vertices adjacent to the vertex int max = 0; list<int>::iterator i; for (int a = 0; a < N ; a++){ int v = ordely[a]; for (i = adj[v].begin(); i != adj[v].end(); ++i){ int viz = *i; if (cost[viz]<cost[v]+1){ cost[viz] = cost[v]+1; if(cost[viz]>max) max = cost[viz]; } } } cout << "\nCusto maximo " << max; } I need to convert this C++ program to a python program... However, I'm struggling to understand what this adj[v].begin() inside the for loop means. Can anyone explain it to me, please? A: begin and end are iterators (specifically, pointers), which are used to iterate over a container. You could imagine begin as 0 and end as the size of an array. So it is like for (i = 0; i < size; ++i). However, the thing about pointers is that they're addresses, so in C++, i < end (where i started as begin) is more like 0xF550 < 0xF556 (example) which has the same effect of iterating 6 times assuming i increases each iteration. In fact, that's actually how for-each loops work behind the scenes in many languages. In python, just use a normal for-loop. I don't know much about python or your Graph class but I guess this could get you started: def max_path(self) : for i in range(N) : self.cost[i] = 0 self.cam_max[i] = 999 max = 0 for a in range(N) : v = self.ordely[a] for i in self.adj[v] : viz = i if self.cost[viz] < self.cost[v] + 1 : self.cost[viz] = self.cost[v] + 1 if self.cost[viz] > max : max = self.cost[viz] print("\nCusto maximo ", max) Notice how iterators weren't needed in the python version cause you used a normal for-loop. By the way, in C++, you could use for/for-each too, the code you posted is unnecessarily complicated and unoptimized. For example, the first 2 loops in your code could be merged into 1 loop cause they both had the exact same range thus I optimized them into 1.
trouble understanding list.begin() | list.end() | list::iterator i
void Graph::max_path(){ for(int i=0; i <N; i++){ cost[i]=0; cam_max[i]=999; } // Iterate over all vertices adjacent to the vertex int max = 0; list<int>::iterator i; for (int a = 0; a < N ; a++){ int v = ordely[a]; for (i = adj[v].begin(); i != adj[v].end(); ++i){ int viz = *i; if (cost[viz]<cost[v]+1){ cost[viz] = cost[v]+1; if(cost[viz]>max) max = cost[viz]; } } } cout << "\nCusto maximo " << max; } I need to convert this C++ program to a python program... However, I'm struggling to understand what this adj[v].begin() inside the for loop means. Can anyone explain it to me, please?
[ "begin and end are iterators (specfically, pointers), which are used to iterate over a container.\nYou could imagine begin as 0 and end as the size of an array. So it is like for (i = 0; i < size; ++i).\nHowever, the thing about pointers is that they're addresses, so in C++, i < end (where i started as begin) is more like 0xF550 < 0xF556 (example) which has the same effect of iterating 6 times assuming i increases each iteration.\nIn fact, that's actually how for-each loops work behind the scenes in many languages.\nIn python, just use a normal for-loop.\nI don't know much about python or your Graph class but I guess this could get you started:\ndef max_path(self) :\nfor i in range(N) :\n self.cost[i] = 0\n self.cam_max[i] = 999\n max = 0\n for a in range(N) :\n v = self.ordely[a]\n for i in self.adj[v] :\n viz = i\n if self.cost[viz] < self.cost[v] + 1 :\n self.cost[viz] = self.cost[v] + 1\n if self.cost[viz] > max :\n max = self.cost[viz]\n print(\"\\nCusto maximo \", max)\n\nNotice how iterators weren't needed in the python version cause you used a normal for-loop.\nBy the way, in C++, you could use for/for-each too, the code you posted is unnecessarily complicated and unoptimized. For example, the first 2 loops in your code could be merged into 1 loop cause they both had the exact same range thus I optimized them into 1.\n" ]
[ 0 ]
[]
[]
[ "c++", "code_conversion", "python" ]
stackoverflow_0074501189_c++_code_conversion_python.txt
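The Python translation quoted in the answer above has lost its indentation as posted (the loop body is not nested under `def max_path`). A corrected, self-contained sketch of the same longest-path logic, with plain arguments standing in for the unspecified `Graph` attributes:

```python
def max_path(adj, ordely, n):
    """Longest path (counted in edges) over vertices visited in `ordely` order.

    adj[v] is the list of vertices adjacent to v; `ordely` plays the role
    of the C++ `ordely` array (assumed to be a valid processing order,
    e.g. topological).
    """
    cost = [0] * n
    best = 0
    for v in ordely:
        # The C++ `for (i = adj[v].begin(); i != adj[v].end(); ++i)`
        # is plain container iteration -- in Python: `for viz in adj[v]`.
        for viz in adj[v]:
            if cost[viz] < cost[v] + 1:
                cost[viz] = cost[v] + 1
                best = max(best, cost[viz])
    return best


# Chain 0 -> 1 -> 2: the longest path has 2 edges.
print(max_path({0: [1], 1: [2], 2: []}, [0, 1, 2], 3))  # 2
```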
Q: Error "Microsoft Visual C++ 14.0 is required (Unable to find vcvarsall.bat)" I've installed Python 3.5 and while running pip install mysql-python it gives me the following error error: Microsoft Visual C++ 14.0 is required (Unable to find vcvarsall.bat) I have added the following lines to my Path C:\Program Files\Python 3.5\Scripts\; C:\Program Files\Python 3.5\; C:\Windows\System32; C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC; C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC I have a 64-bit Windows 7 setup on my PC. What could be the solution for mitigating this error and installing the modules correctly via pip. A: Your path only lists Visual Studio 11 and 12, it wants 14, which is Visual Studio 2015. If you install that, and remember to tick the box for Languages → C++ then it should work. On my Python 3.5 install, the error message was a little more useful, and included the URL to get it from: error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools New working link. As suggested by Fire, you may also need to upgrade setuptools package for the error to disappear: pip install --upgrade setuptools A: Binary install it the simple way! Use the binary-only option for pip. For example, for mysqlclient: pip install --only-binary :all: mysqlclient Many packages don't create a build for every single release which forces your pip to build from source. If you're happy to use the latest pre-compiled binary version, use --only-binary :all: to allow pip to use an older binary version. A: To solve any of the following errors: Failed building wheel for misaka Failed to build misaka Microsoft Visual C++ 14.0 is required Unable to find vcvarsall.bat The solution is: Go to Build Tools for Visual Studio 2017 Select free download under Visual Studio Community 2017. This will download the installer. Run the installer. Select what you need under workload tab: a. 
Under Windows, there are three choices. Only check Desktop development with C++. b. Under Web & Cloud, there are seven choices. Only check Python development (I believe this is optional, but I have done it). In cmd, type pip3 install misaka. Note if you already installed Visual Studio then when you run the installer, you can modify yours (click modify button under Visual Studio Community 2017) and do steps 3 and 4. Final note: If you don't want to install all modules, having the three below (or a newer version of the VC++ 2017) would be sufficient. (You can also install the Visual Studio Build Tools with only these options, so you don’t need to install Visual Studio Community Edition itself) => This minimal install is already a 4.5 GB, so saving off anything is helpful A: As the other responses point out, one solution is to install Visual Studio 2015. However, it takes a few GBs of disk space. One way around is to install precompiled binaries. The webpage Unofficial Windows Binaries for Python Extension Packages (mirror) contains precompiled binaries for many Python packages. After downloading the package of interest to you, you can install it using pip install, e.g. pip install mysqlclient‑1.3.10‑cp35‑cp35m‑win_amd64.whl. A: I had the exact issue while trying to install the Scrapy web scraping Python framework on my Windows 10 machine. I figured out the solution this way: Download the latest (the last one) wheel file from this link: wheel file for twisted package I'd recommend saving that wheel file in the directory where you've installed Python, i.e., somewhere on the local disk C: Then visit the folder where the wheel file exists and run pip install <*wheel file's name*> Finally, run the command pip install Scrapy again and you're good to use Scrapy or any other tool which required you to download a massive Windows C++ Package/SDK. 
Disclaimer: This solution worked for me while trying to install Scrapy, but I can't guarantee the same happening while installing other software, packages, etc. A: After reading a lot of answers on Stack Overflow and none of them working, I finally managed to solve it following the steps in this question. I will leave the steps here in case the page disappears: Please try to install Build Tools for Visual Studio 2017, select the workload “Visual C++ build tools” and check the options "C++/CLI support" and "VC++ 2015.3 v14.00 (v140) toolset for desktop" as below. A: I had this exact issue while trying to install mayavi. I also had the common error: Microsoft Visual C++ 14.0 is required when pip installing a library. After looking across many web pages and the solutions to this question, with none of them working, I figured out these steps (most taken from previous solutions) allowed this to work. Go to Build Tools for Visual Studio 2017 and install Build Tools for Visual Studio 2017. Which is under All downloads (scroll down) → Tools for Visual Studio 2017 If you have already installed this, skip to 2. Select the C++ components you require (I didn't know which I required, so I installed many of them). If you have already installed Build Tools for Visual Studio 2017 then open the application Visual Studio Installer then go to Visual Studio Build Tools 2017 → Modify → Individual Components and selected the required components. From other answers, important components appear to be: C++/CLI support, VC++ 2017 version <...> latest, Visual C++ 2017 Redistributable Update, Visual C++ tools for CMake, Windows 10 SDK <...> for Desktop C++, Visual C++ Build Tools core features, Visual Studio C++ core features. Install/Modify these components for Visual Studio Build Tools 2017. This is the important step. Open the application Visual Studio Installer then go to Visual Studio Build Tools → Launch. 
Which will open a CMD window at the correct location for Microsoft Visual Studio\YYYY\BuildTools. Now enter python -m pip install --upgrade setuptools within this CMD window. Finally, in this same CMD window, pip install your Python library: pip install -U <library>. A: Use this link to download and install Visual C++ 2015 Build Tools. It will automatically download visualcppbuildtools_full.exe and install Visual C++ 14.0 without actually installing Visual Studio. After the installation completes, retry pip install and you won't get the error again. I have tested it on the following platforms and versions: Python 3.6 on Windows 7 64-bit Python 3.8 on Windows 10 64-bit A: Use this and save time pip install pipwin pipwin install yourLibrary pipwin is like pip, but it installs precompiled Windows binaries provided by Christoph Gohlke. Saves you a lot of time googling and downloading. And in this case pipwin will solve the problem Error: Microsoft Visual C++ 14.0 is required (Unable to find vcvarsall.bat) Read more about pipwin and here they mention Microsoft Visual C++ A: I had the same problem when installing the spaCy module. And I checked the control panel, and I had several Microsoft Visual C++ redistributables installed already. I selected "Microsoft Visual Studio Community 2015" which was already installed on my PC → "Modify" → check "Common Tools for Visual C++ 2015". Then it will take some time and download more than 1 GB to install it. This fixed my issue. Now I have spaCy installed. A: I had this same problem. A solution for updating setuptools pip install -U setuptools or pip install setuptools --upgrade A: Make sure that you've installed these required packages. It worked perfectly in my case as I installed the checked packages: A: To expand on the answers by ocean800, davidsheldon and user3661384: You should now no longer use Visual Studio Tools 2015 since a newer version is available. 
As indicated by the Python documentation, you should be using Visual Studio Tools 2017 instead. Visual C++ Build Tools 2015 was upgraded by Microsoft to Build Tools for Visual Studio 2017. Download it from here. You will also require setuptools. If you don't have setuptools, run: pip install setuptools Or if you already have it, be sure to upgrade it. pip install setuptools --upgrade From the Python documentation link above you will see that the setuptools version must be at least 34.4.0 for Visual Studio Tools to work. A: Use the link to Visual C++ 2015 Build Tools. That will install Visual C++ 14.0 without installing Visual Studio. A: I had the same issue. Downloading the Build Tools for Visual Studio 2017 worked for me. A: I had exactly the same issue and solved it by installing mysql-connector-python with: pip install mysql-connector-python I am on Python 3.7 and Windows 10, and installing Microsoft Build Tools for Visual Studio 2017 (as described here) did not solve my problem that was identical to yours. A: Just go to https://www.lfd.uci.edu/~gohlke/pythonlibs/ and find your suitable package (whl file). Download it. Go to the download folder in cmd, or type 'cmd' in the address bar of the folder. Run the command: pip install mysqlclient-1.4.6-cp38-cp38-win32.whl (Type the file name correctly. I have given an example only). Your problem will be solved without installing the ~6 GB C++ build tools. A: To add on top of Sushant Chaudhary's answer: In my case, I got another error regarding lxml as below: copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win-amd64-3.7\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 running build_ext building 'lxml.etree' extension error: Microsoft Visual C++ 14.0 is required.
Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools I had to install lxml‑4.2.3‑cp37‑cp37m‑win_amd64.whl the same way as in the answer of Sushant Chaudhary to successfully complete the installation of Scrapy. Download lxml‑4.2.3‑cp37‑cp37m‑win_amd64.whl from Lxml, put it in the folder where Python is installed, and install it using pip install <file-name> Now you can run pip install scrapy. A: I just had the same issue while using the latest Python 3.6, with Windows 10 Home Edition and a 64-bit operating system. Steps to solve this issue: Uninstall any versions of Visual Studio you have had, through Control Panel. Install Visual Studio 2015 and choose the default option, which will install Visual C++ 14.0 on its own. You can use PyCharm for installing Scrapy: Menu Project → Project Interpreter → + (install Scrapy). Check Scrapy in the REPL and PyCharm by import. You should not see any errors. A: I had a similar situation installing pymssql. pip was trying to build the package, because there were no official wheels for Python 3.6 and Windows. I solved it by downloading an unofficial wheel from Unofficial Windows Binaries for Python Extension Packages. Specifically for your case: MySQL-python A: None of the solutions here and elsewhere worked for me. It turns out an incompatible 32-bit version of mysqlclient was being installed on my 64-bit Windows 10 OS because I was using a 32-bit version of Python. I uninstalled my Python 3.7 32-bit, reinstalled Python 3.7 64-bit, and everything is working fine now. A: I had the same exact issue on my Windows 10 machine, Python version 3.8. In my case, I needed to install mysqlclient, which is where the error Microsoft Visual C++ 14.0 is required occurred. Because installing Visual Studio and its packages can be a tedious process, here's what I did: step 1 - Go to the unofficial Python binaries site in any browser and open the website. step 2 - press Ctrl+F and type whatever you want.
In my case it was mysqlclient. step 3 - Go into it and choose according to your Python version and Windows system. In my case it was mysqlclient‑1.4.6‑cp38‑cp38‑win32.whl; download it. step 4 - open a command prompt and go to the path where you downloaded your file. In my case it was C:\Users\user\Downloads step 5 - type pip install .\mysqlclient‑1.4.6‑cp38‑cp38‑win32.whl and press Enter. Thus it was installed successfully, after which I went to my project terminal and re-entered the required command. This solved my problem. Note that, while working on the project in PyCharm, I also tried installing mysql-client from the project interpreter. But mysql-client and mysqlclient are different things; it did not work, and I have no idea why. A: I was facing the same problem. The following worked for me: Download the unofficial binaries file from Christoph Gohlke's installers site as per the Python version installed on your system. Navigate to the folder where you have downloaded the file and run pip install filename For me, python_ldap‑3.0.0‑cp35‑cp35m‑win_amd64.whl worked, as my machine is 64-bit and my Python version is 3.5. This successfully installed python-ldap on my Windows machine. You can try the same for mysql-python. A: Look if the package has an official fork that includes the necessary binary wheels. I needed the package python-Levenshtein, had this error, and found the package python-Levenshtein-wheels instead. A: This works for me: pip install --only-binary :all: mysqlclient A: If Visual Studio is NOT your thing, and instead you are using VS Code, then this link will guide you through the installer to get C++ running on your Windows. You only need to complete the Pre-Requisites part. https://code.visualstudio.com/docs/cpp/config-msvc/#_prerequisites This is similar to other answers, but this link will probably age better than some of the responses here. PS: don't forget to run pip install --upgrade setuptools A: I tried ALL of the above and none worked.
Just before signing up for the booby hatch, I found another reason for the error: using the wrong shell on Windows. conda init cmd.exe did the trick for me. Hope it may save someone else, too. A: I had the same problem. I needed a 64-bit version of Python so I installed 3.5.0 (the most recent as of writing this). After switching to 3.4.3 all of my module installations worked. Python Releases for Windows A: Oops! Looks like they don't have Windows wheels on PyPI. In the meantime, installing from source probably works, or try downloading MSVC++ 14 as suggested in the error message and by others on this page. Christoph's site also has unofficial Windows binaries for Python extension packages (.whl files). Follow the steps mentioned in the following links to install binaries: Directly in base Python In virtual environments and PyCharm Also check: filename.whl is not supported wheel on this platform A: For Python 3.7.4, the following set of commands worked: Before those commands, you need to confirm that Desktop development with C++ and Python are installed in Visual Studio. cd "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build" vcvarsall.bat x86_amd64 cd \ set CL=-FI"%VCINSTALLDIR%\tools\msvc\14.16.27023\include\stdint.h" pip install pycrypto A: I had the same issue while installing mysqlclient for the Django project. In my case, it was a system architecture mismatch causing the issue. I have the Windows 7 64-bit version on my system, but I had installed the Python 3.7.2 32-bit version by mistake. So, I re-installed the Python interpreter (64-bit) and ran the command pip install mysqlclient I hope this works with other Python packages as well. A: TLDR: run vcvars64.bat. After endlessly searching through similar questions with none of the solutions working - adding endless folders to my path and removing them, uninstalling and reinstalling Visual Studio Community and Build Tools.
And, step by step attempting to debug, I finally found a solution that worked for me. (Background notes, if anyone is in a similar situation.) I recently reset my main computer, and after reinstalling the newest version of Python (Python 3.9), libraries I used to install with no trouble (main example: pip install opencv-python) gave cl is not a full path and was not found in the PATH. After adding cl to the path from C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\Hostx64\x64 and several different Windows kits, one at a time, I got the following. The C compiler "C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.27.29110/bin/Hostx64/x64/cl.exe" is not able to compile a simple test program. with various link errors or " Run Build Command(s):jom /nologo cmTC_7c75e\fast && The system cannot find the file specified". Upgrading setuptools and wheel from both a regular command line and an admin one did nothing, as did trying to manually download a wheel or trying to install with --only-binary :all:. Finally, the end result that worked for me was running the correct vcvars.bat for my Python installation, namely running "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat" once (not vcvarsall or vcvars32, because my installed Python was 64-bit), and then running the regular command pip install opencv-python worked. A: If you have already installed Visual Studio Build Tools (as in other comments) and upgraded setuptools but it still doesn't work: Make sure to run pip under the x86 or x64 Native Tools Command Prompt. It can be found under the VS folder in the Windows Start menu. The default command-line prompt may NOT provide pip the path to the VS build tool, as was the case for me. A: Follow the official installation guide for Windows C++ compilers: https://wiki.python.org/moin/WindowsCompilers to upgrade setuptools and install the specific Microsoft Visual C++ compiler.
It already contains some of the points referred to in other answers. A: First you'll need to download the Visual Studio Build Tools from https://visualstudio.microsoft.com/downloads#other Rename the file vs_buildtools.exe (not required, but otherwise you'll have to modify the script below) start-process -wait -filepath vs_buildtools.exe -ArgumentList '--quiet --wait --norestart --nocache --installPath C:\BuildTools ` --add Microsoft.VisualStudio.ComponentGroup.VC.Tools.142.x86.x64 ` --add Microsoft.VisualStudio.Component.Windows10SDK.19041 ` --add Microsoft.VisualStudio.Component.Windows10SDK ` --add Microsoft.VisualStudio.Component.VC.CoreIde ` --add Microsoft.VisualStudio.Component.VC.CMake.Project ` --add Microsoft.VisualStudio.Component.VC.14.29.16.11.CLI.Support ` --add Microsoft.VisualStudio.ComponentGroup.UWP.VC.v142' I created a separate question and answer here for Windows Docker users: Microsoft Visual C++ 14.0 is Required, installing pip package on Windows Docker A: I already had a version of VC++ that was v14+, but was encountering the issues due to Anaconda. Ultimately, the following worked for me instead of using pip, pipwin, or a wheel file. conda install <package_name_here>
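Several of the answers above come down to matching a prebuilt wheel to the interpreter: the cp38 / win32 / win_amd64 parts of the wheel filenames, and the 32-bit-Python-on-64-bit-Windows mismatches. A minimal standard-library sketch (variable names are just for illustration) that prints what your interpreter actually is, so you can pick the right .whl before resorting to the compiler:

```python
import struct
import sys

# Pointer size in bytes: 4 -> 32-bit interpreter, 8 -> 64-bit.
bits = struct.calcsize("P") * 8

# CPython tag used in wheel filenames, e.g. "cp38" for Python 3.8.
tag = f"cp{sys.version_info.major}{sys.version_info.minor}"

# A 64-bit interpreter needs a win_amd64 wheel; a 32-bit one needs win32,
# even on 64-bit Windows.
platform = "win_amd64" if bits == 64 else "win32"

print(f"{bits}-bit Python, look for a {tag} / {platform} wheel")
```

For example, a filename like mysqlclient‑1.4.6‑cp38‑cp38‑win32.whl only installs cleanly on a 32-bit Python 3.8, which is exactly the mismatch a few answers ran into.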
Error "Microsoft Visual C++ 14.0 is required (Unable to find vcvarsall.bat)"
I've installed Python 3.5 and while running pip install mysql-python it gives me the following error error: Microsoft Visual C++ 14.0 is required (Unable to find vcvarsall.bat) I have added the following lines to my Path C:\Program Files\Python 3.5\Scripts\; C:\Program Files\Python 3.5\; C:\Windows\System32; C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC; C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC I have a 64-bit Windows 7 setup on my PC. What could be the solution for mitigating this error and installing the modules correctly via pip.
[ "Your path only lists Visual Studio 11 and 12, it wants 14, which is Visual Studio 2015. If you install that, and remember to tick the box for Languages → C++ then it should work.\nOn my Python 3.5 install, the error message was a little more useful, and included the URL to get it from:\n\nerror: Microsoft Visual C++ 14.0 is required. Get it with \"Microsoft Visual C++ Build Tools\": http://landinghub.visualstudio.com/visual-cpp-build-tools\n\nNew working link.\nAs suggested by Fire, you may also need to upgrade setuptools package for the error to disappear:\npip install --upgrade setuptools\n\n", "Binary install it the simple way!\nUse the binary-only option for pip. For example, for mysqlclient:\npip install --only-binary :all: mysqlclient\n\nMany packages don't create a build for every single release which forces your pip to build from source. If you're happy to use the latest pre-compiled binary version, use --only-binary :all: to allow pip to use an older binary version.\n", "To solve any of the following errors:\n\nFailed building wheel for misaka\nFailed to build misaka\nMicrosoft Visual C++ 14.0 is required\nUnable to find vcvarsall.bat\n\nThe solution is:\n\nGo to Build Tools for Visual Studio 2017\n\nSelect free download under Visual Studio Community 2017. This will download the installer. Run the installer.\n\nSelect what you need under workload tab:\na. Under Windows, there are three choices. Only check Desktop development with C++.\nb. Under Web & Cloud, there are seven choices. Only check Python development (I believe this is optional, but I have done it).\n\nIn cmd, type pip3 install misaka.\n\nNote if you already installed Visual Studio then when you run the installer, you can modify yours (click modify button under Visual Studio Community 2017) and do steps 3 and 4.\n\nFinal note: If you don't want to install all modules, having the three below (or a newer version of the VC++ 2017) would be sufficient. 
(You can also install the Visual Studio Build Tools with only these options, so you don’t need to install Visual Studio Community Edition itself) => This minimal install is already a 4.5 GB, so saving off anything is helpful\n\n\n\n", "As the other responses point out, one solution is to install Visual Studio 2015. However, it takes a few GBs of disk space.\nOne way around is to install precompiled binaries. The webpage Unofficial Windows Binaries for Python Extension Packages (mirror) contains precompiled binaries for many Python packages. After downloading the package of interest to you, you can install it using pip install, e.g. pip install mysqlclient‑1.3.10‑cp35‑cp35m‑win_amd64.whl.\n", "I had the exact issue while trying to install the Scrapy web scraping Python framework on my Windows 10 machine. I figured out the solution this way:\n\nDownload the latest (the last one) wheel file from this link: wheel file for twisted package\n\n\nI'd recommend saving that wheel file in the directory where you've installed Python, i.e., somewhere on the local disk C:\n\nThen visit the folder where the wheel file exists and run pip install <*wheel file's name*>\n\nFinally, run the command pip install Scrapy again and you're good to use Scrapy or any other tool which required you to download a massive Windows C++ Package/SDK.\n\n\nDisclaimer: This solution worked for me while trying to install Scrapy, but I can't guarantee the same happening while installing other software, packages, etc.\n", "After reading a lot of answers on Stack Overflow and none of them working, I finally managed to solve it following the steps in this question. 
I will leave the steps here in case the page disappears:\n\nPlease try to install Build Tools for Visual Studio 2017, select the workload “Visual C++ build tools” and check the options \"C++/CLI support\" and \"VC++ 2015.3 v14.00 (v140) toolset for desktop\" as below.\n\n\n", "I had this exact issue while trying to install mayavi.\nI also had the common error: Microsoft Visual C++ 14.0 is required when pip installing a library.\n\nAfter looking across many web pages and the solutions to this question, with none of them working, I figured out these steps (most taken from previous solutions) allowed this to work.\n\nGo to Build Tools for Visual Studio 2017 and install Build Tools for Visual Studio 2017. Which is under All downloads (scroll down) → Tools for Visual Studio 2017\n\nIf you have already installed this, skip to 2.\n\n\n\n\nSelect the C++ components you require (I didn't know which I required, so I installed many of them).\n\nIf you have already installed Build Tools for Visual Studio 2017 then open the application Visual Studio Installer then go to Visual Studio Build Tools 2017 → Modify → Individual Components and selected the required components.\nFrom other answers, important components appear to be: C++/CLI support, VC++ 2017 version <...> latest, Visual C++ 2017 Redistributable Update, Visual C++ tools for CMake, Windows 10 SDK <...> for Desktop C++, Visual C++ Build Tools core features, Visual Studio C++ core features.\n\n\n\nInstall/Modify these components for Visual Studio Build Tools 2017.\n\nThis is the important step. Open the application Visual Studio Installer then go to Visual Studio Build Tools → Launch. 
Which will open a CMD window at the correct location for Microsoft Visual Studio\\YYYY\\BuildTools.\n\n\n\n\nNow enter python -m pip install --upgrade setuptools within this CMD window.\n\n\nFinally, in this same CMD window, pip install your Python library: pip install -U <library>.\n\n\n\n", "Use this link to download and install Visual C++ 2015 Build Tools. It will automatically download visualcppbuildtools_full.exe and install Visual C++ 14.0 without actually installing Visual Studio.\nAfter the installation completes, retry pip install and you won't get the error again.\nI have tested it on the following platforms and versions:\nPython 3.6 on Windows 7 64-bit\nPython 3.8 on Windows 10 64-bit\n\n", "Use this and save time\npip install pipwin \npipwin install yourLibrary\n\n\npipwin is like pip, but it installs precompiled Windows binaries provided by Christoph Gohlke. Saves you a lot of time googling and downloading.\n\nAnd in this case pipwin will solve the problem\nError: Microsoft Visual C++ 14.0 is required (Unable to find vcvarsall.bat)\n\nRead more about pipwin and here they mention Microsoft Visual C++\n", "I had the same problem when installing the spaCy module. And I checked the control panel, and I had several Microsoft Visual C++ redistributables installed already.\nI selected \"Microsoft Visual Studio Community 2015\" which was already installed on my PC → \"Modify\" → check \"Common Tools for Visual C++ 2015\". Then it will take some time and download more than 1 GB to install it.\nThis fixed my issue. Now I have spaCy installed.\n", "I had this same problem. A solution for updating setuptools\npip install -U setuptools\n\nor\npip install setuptools --upgrade\n\n", "Make sure that you've installed these required packages. 
It worked perfectly in my case as I installed the checked packages:\n\n", "To expand on the answers by ocean800, davidsheldon and user3661384:\nYou should now no longer use Visual Studio Tools 2015 since a newer version is available. As indicated by the Python documentation, you should be using Visual Studio Tools 2017 instead.\n\nVisual C++ Build Tools 2015 was upgraded by Microsoft to Build Tools for Visual Studio 2017.\n\nDownload it from here.\nYou will also require setuptools. If you don't have setup tools, run:\npip install setuptools\n\nOr if you already have it, be sure to upgrade it.\npip install setuptools --upgrade\n\nFor the Python documentation link above you will see that setuptools version must be at least 34.4.0 for Visual Studio Tools to work.\n", "Use the link to Visual C++ 2015 Build Tools. That will install Visual C++ 14.0 without installing Visual Studio.\n", "I had the same issue. Downloading the Build Tools for Visual Studio 2017 worked for me.\n", "I had exactly the same issue and solved it by installing mysql-connector-python with:\npip install mysql-connector-python\n\nI am on Python 3.7 and Windows 10 and installing Microsoft Build Tools for Visual Studio 2017 (as described here) did not solve my problem that was identical to yours.\n", "Just go to https://www.lfd.uci.edu/~gohlke/pythonlibs/ find your suitable package (whl file). Download it. Go to the download folder in cmd or typing 'cmd' on the address bar of the folder. Run the command : \npip install mysqlclient-1.4.6-cp38-cp38-win32.whl\n\n(Type the file name correctly. I have given an example only). 
Your problem will be solved without installing build toll cpp of 6GB size.\n", "To add on top of Sushant Chaudhary's answer:\nIn my case, I got another error regarding lxml as below:\ncopying src\\lxml\\isoschematron\\resources\\xsl\\iso-schematron-xslt1\\readme.txt -> build\\lib.win-amd64-3.7\\lxml\\isoschematron\\resources\\xsl\\iso-schematron-xslt1\nrunning build_ext\nbuilding 'lxml.etree' extension\nerror: Microsoft Visual C++ 14.0 is required. Get it with \"Microsoft Visual C++ Build Tools\": http://landinghub.visualstudio.com/visual-cpp-build-tools\n\nI had to install lxml‑4.2.3‑cp37‑cp37m‑win_amd64.whl the same way as in the answer of Sushant Chaudhary to successfully complete installation of Scrapy.\n\nDownload lxml‑4.2.3‑cp37‑cp37m‑win_amd64.whl from Lxml\nput it in folder where Python is installed\ninstall it using pip install <file-name>\n\nNow you can run pip install scrapy.\n", "I just had the same issue while using the latest Python 3.6. With Windows OS 10 Home Edition and a 64-bit operating system.\nSteps to solve this issue:\n\nUninstall any versions of Visual Studio you have had, through Control Panel\nInstall Visual Studio 2015 and chose the default option that will install\nVisual C++ 14.0 on its own\nYou can use PyCharm for installing Scrapy: Menu Project → Project Interpreter → + (install Scrapy)\nCheck Scrapy in the REPL and PyCharm by import. You should not see any errors.\n\n", "I had a similar situation installing pymssql.\npip was trying to build the package, because there were no official wheels for Python 3.6 and Windows.\nI solved it by downloading an unofficial wheel from Unofficial Windows Binaries for Python Extension Packages.\nSpecifically for your case: MySQL-python\n", "None of the solutions here and elsewhere worked for me. 
It turns out an incompatible 32-bit version of mysqlclient is being installed on my 64-bit Windows 10 OS because I'm using a 32-bit version of Python.\nI had to uninstall my current Python 3.7 32 bit, and reinstalled Python 3.7 64 bit and everything is working fine now.\n", "I had the same exact issue on my windows 10 python version 3.8.\nIn my case, I needed to install mysqlclient were the error occurred Microsoft Visual C++ 14.0 is required. Because installing visual studio and it's packages could be a tedious process, Here's what I did:\nstep 1 - Go to unofficial python binaries from any browser and open its website. \nstep 2 - press ctrl+F and type whatever you want. In my case it was mysqlclient.\nstep 3 - Go into it and choose according to your python version and windows system. In my case it was mysqlclient‑1.4.6‑cp38‑cp38‑win32.whl and download it.\n \nstep 4 - open command prompt and specify the path where you downloaded your file. In my case it was C:\\Users\\user\\Downloads\nstep 5 - type pip install .\\mysqlclient‑1.4.6‑cp38‑cp38‑win32.whl and press enter.\nThus it was installed successfully, after which I went my project terminal re-entered the required command. This solved my problem\nNote that, while working on the project in pycharm, I also tried installing mysql-client from the project interpreter. But mysql-client and mysqlclient are different things. I have no idea why and it did not work.\n", "I was facing the same problem. The following worked for me:\nDownload the unofficial binaries file from Christoph Gohlke installers site as per the Python version installed on your system.\nNavigate to the folder where you have installed the file and run\npip install filename\n\nFor me python_ldap‑3.0.0‑cp35‑cp35m‑win_amd64.whl worked as my machine is 64 bit and Python version is 3.5.\nThis successfully installed python-ldap on my Windows machine. 
You can try the same for mysql-python.\n", "Look if the package has an official fork that include the necessary binary wheels.\nI needed the package python-Levenshtein, had this error, and found the package python-Levenshtein-wheels instead.\n", "This works for me:\npip install --only-binary :all: mysqlclient\n", "If Visual Studio is NOT your thing, and instead you are using VS Code, then this link will guide you thru the installer to get C++ running on your Windows.\nYou only needs to complete the Pre-Requisites part.\nhttps://code.visualstudio.com/docs/cpp/config-msvc/#_prerequisites\nThis is similar with other answers, but this link will probably age better than some of the responses here.\nPS: don't forget to run pip install --upgrade setuptools\n", "I tried ALL of the above and none worked. Just before before signing up for the booby hatch, I found another reason for the error : using the wrong shell on Windows.\nconda init cmd.exe\ndid the trick for me. Hope it may save someone else, too.\n", "I had the same problem. I needed a 64-bit version of Python so I installed 3.5.0 (the most recent as of writing this). After switching to 3.4.3 all of my module installations worked.\nPython Releases for Windows\n", "Oops! 
Looks like they don't have Windows wheels on PyPI.\nIn the meantime, installing from source probably works or try downloading MSVC++ 14 as suggested in the error message and by others on this page.\nChristoph's site also has unofficial Windows binaries for Python extension packages (.whl files).\nFollow the steps mentioned in the following links to install binaries:\n\nDirectly in base Python\nIn virtual environments and PyCharm\n\nAlso check:\nfilename.whl is not supported wheel on this platform\n", "For Python 3.7.4, the following set of commands worked:\nBefore those commands, you need to confirm that Desktop with C++ and Python is installed in Visual Studio.\ncd \"C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Auxiliary\\Build\"\nvcvarsall.bat x86_amd64\ncd \\\nset CL=-FI\"%VCINSTALLDIR%\\tools\\msvc\\14.16.27023\\include\\stdint.h\"\n \npip install pycrypto\n\n", "I had the same issue while installing mysqlclient for the Django project.\nIn my case, it's the system architecture mismatch causing the issue. I have Windows 7 64bit version on my system. But, I had installed Python 3.7.2 32 bit version by mistake. \nSo, I re-installed Python interpreter (64bit) and ran the command\npip install mysqlclient\n\nI hope this would work with other Python packages as well.\n", "TLDR run vcvars64.bat\nAfter endlessly searching through similar questions with none of the solutions working.\n-Adding endless folders to my path and removing them. 
uninstalling and reinstalling visual studio commmunity and build tools.\nand step by step attempting to debug I finally found a solution that worked for me.\n(background notes if anyone is in a similar situation)\nI recently reset my main computer and after reinstalling the newest version of python (Python3.9) libraries I used to install with no troubles (main example pip install opencv-python) gave\ncl\n is not a full path and was not found in the PATH.\n\nafter adding cl to the path from\nC:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.27.29110\\bin\\Hostx64\\x64\nand several different windows kits one at a time getting the following.\nThe C compiler\n\n\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.27.29110/bin/Hostx64/x64/cl.exe\"\n\nis not able to compile a simple test program.\n\nwith various link errors or \" Run Build Command(s):jom /nologo cmTC_7c75e\\fast && The system cannot find the file specified\"\nupgrading setuptools and wheel from both a regular command line and an admin one did nothing as well as trying to manually download a wheel or trying to install with --only-binary :all:\nFinally the end result that worked for me was running the correct vcvars.bat for my python installation namely running\n\"C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Auxiliary\\Build\\vcvars64.bat\" once (not vcvarsall or vcvars32) (because my python installed was 64 bit) and then running the regular command pip install opencv-python worked.\n", "If you have already installed Visual Studio Build Tools (as in other comments), and upgraded setuptools but it still doesn't work:\nMake sure to run pip under x86 or x64 Native Tools Command Prompt.\nIt can be found under VS folder in Windows start menu. 
The default command line prompt may NOT provide Pip the path to the VS build tool, as is in my case.\n", "Following the official installation guide for Windows C++ compilers:\nhttps://wiki.python.org/moin/WindowsCompilers\nto upgrade setuptools and install specific Microsoft Visual C++ compiler.\nIt has already contains some points refered in other answer.\n", "\nFirst you'll need to download the visual studio build tools from https://visualstudio.microsoft.com/downloads#other\nRename the file vs_buildtools.exe (not required but you'll have to modify the script below)\n\nstart-process -wait -filepath vs_buildtools.exe -ArgumentList '--quiet --wait --norestart --nocache --installPath C:\\BuildTools `\n --add Microsoft.VisualStudio.ComponentGroup.VC.Tools.142.x86.x64 `\n --add Microsoft.VisualStudio.Component.Windows10SDK.19041 `\n --add Microsoft.VisualStudio.Component.Windows10SDK `\n --add Microsoft.VisualStudio.Component.VC.CoreIde `\n --add Microsoft.VisualStudio.Component.VC.CMake.Project `\n --add Microsoft.VisualStudio.Component.VC.14.29.16.11.CLI.Support `\n --add Microsoft.VisualStudio.ComponentGroup.UWP.VC.v142'\n\nI created a seperate question and answer here for windows docker users Microsoft Visual C++ 14.0 is Required, installing pip package on Windows Docker\n", "I already had a version of VC++ that was v14+, but was encountering the issues due to Anaconda. Ultimately, the following worked for me instead of using pip, pipwin, or a wheel file.\nconda install <package_name_here>\n\n" ]
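One recurring cause in the answers above is a 32-bit/64-bit mismatch between the installed Python and the wheels or build tools. A minimal stdlib-only sketch (my own addition, not from any answer) to check which build you are actually running:

```python
import struct
import platform

# Pointer size in bits: 64 for a 64-bit interpreter, 32 for a 32-bit one.
bits = struct.calcsize("P") * 8
print(f"Python {platform.python_version()} is a {bits}-bit build")

# Wheels and compiled extensions must match this architecture; a 32-bit
# Python on 64-bit Windows cannot use 64-bit wheels, and vice versa.
```

If this prints 32 on a 64-bit OS, reinstalling the 64-bit interpreter (as one answer did) is often the simplest fix.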
[ 185, 156, 114, 81, 61, 28, 20, 17, 14, 13, 12, 12, 11, 9, 6, 6, 5, 4, 3, 3, 3, 3, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "python", "python_3.x", "visual_c++" ]
stackoverflow_0029846087_python_python_3.x_visual_c++.txt
Q: ModuleNotFoundError: No module named 'pyperclip' Similar issues like this have been posted on StackOverflow but I did not find adequate answers to resolve this issue. I'm running Python 3.6.3 on a Windows 7 machine. From IDLE I type the following import statement and get the subsequent error: >>> import pyperclip Traceback (most recent call last): File "<pyshell#5>", line 1, in <module> import pyperclip ModuleNotFoundError: No module named 'pyperclip' I tried hitting Win-R (to pop up the RUN window) and typed the following: pip3 install pyperclip pip install pyperclip But it gives me an error saying "Could not fetch the URL: https://pypi.python.org/simple/pyperclip/ Could not find a version that satisfies the requirement pyperclip... No Matching distribution found" If I visit the URL mentioned (https://pypi.python.org/simple/pyperclip/) I see a bunch of pyperclip ZIP files in all different versions. But if I select a version I'm not sure where to place them/extract them or if extracting them is even the right thing to do. Any advice? A: There is a problem with the current version of pyperclip. I checked the git repo and opened a pull request for the issue. It currently doesn't support Python 3.7. A: You must navigate to your default install location for 3.6. For IDLE 32 bits it's: C:\Users\<username>\AppData\Local\Programs\Python\Python36-32\Scripts\ for IDLE 64 bits use: C:\Users\<username>\AppData\Local\Programs\Python\Python36\Scripts\ If you navigate to that directory in the command prompt (use the cd command), then the command pip install pyperclip should work. A: Open cmd and type pip install pyperclip with no double quotes. Press Enter to run the installation. If it installs successfully, you can import it in IDLE. A: I had this problem too! 
and i solved running: python -m pip install pyperclip note: replace for python that is being used, for example, if you are trying running in python 3.10: python3.10 -m pip install pyperclip this problem may is because you may more than one python environment which one of then have no pyperclip installed A: For manual Installation, I went to the link you provided in your question. Since all of them are pyperclip 1.5, I wouldn't mind downloading any one of them. However, the person who designed the site must want you to click on the topmost link, instead of scrolling all the way down. So I downloaded the 1.5.11 version, which was on the top. And on downloading (I am using my phone) I quickly extracted the zip and saw a nice setup.py file there. Btw, I wouldn't worry where to unzip or run the setup.py file. Go ahead and run the setup.py and let it take its time. Once done, still not working in IDLE? I suggest use CMD(windows) or Terminal(Linux) and check if it is working there, and then come back to me. Edit: On having problems installing the setup.py, open CMD as administrator by right clicking and selecting it from the start menu. Navigate to the directory where setup.py is located and run it inside the CMD, by typing 'setup.py' and press Enter. A: Thanks, it worked for me too. See below details as written above: Thanks Sayan! It works now. Just to clarify and detail what I did (if it helps others): downloaded the zip file and extracted the folder contents to a location on C drive Click the "Start" button and type "cmd." Right-click "Cmd," select "Run as Administrator" 3) changed directory to where the setup.py file is located in the folder I just extracted typed "setup.py install" Now I'm able to import the module into my code A: Go to windows and search for your IDE (e.g. 
VsCode) Then Go to the folder of your IDE and go to the properties of file that end .exe go to properties -> compatibility -> [✔️] Run this program as an administrator (apply and then Ok) A: Manual installation of pyperclip is possible. Just download the version you want from https://pypi.org/simple/pyperclip/, extract it and run setup.py install (or python setup.py install) in the extracted folder.
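Several answers above boil down to the same trap: pip installed the package into a different interpreter than the one IDLE runs. A small stdlib-only sketch (my own addition) that checks importability and prints the pip command bound to the current interpreter, which is what `python -m pip install` guarantees:

```python
import importlib.util
import sys

def ensure_hint(module_name):
    """Return True if module_name is importable from this interpreter;
    otherwise print the pip command tied to *this* interpreter, which
    avoids installing into a different Python by accident."""
    if importlib.util.find_spec(module_name) is not None:
        return True
    print(f"{sys.executable} -m pip install {module_name}")
    return False

print(ensure_hint("json"))       # stdlib, always importable
print(ensure_hint("pyperclip"))  # may print an install hint on a fresh setup
```

Running this inside IDLE tells you immediately whether the module is visible to the exact Python that IDLE uses.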
ModuleNotFoundError: No module named 'pyperclip'
Similar issues like this have been posted on StackOverflow but I did not find adequate answers to resolve this issue. I'm running Python 3.6.3 on a Windows 7 machine. From IDLE I type the following import statement and get the subsequent error: >>> import pyperclip Traceback (most recent call last): File "<pyshell#5>", line 1, in <module> import pyperclip ModuleNotFoundError: No module named 'pyperclip' I tried hitting Win-R (to pop up the RUN window) and typed the following: pip3 install pyperclip pip install pyperclip But it gives me an error saying "Could not fetch the URL: https://pypi.python.org/simple/pyperclip/ Could not find a version that satisfies the requirement pyperclip... No Matching distribution found" If I visit the URL mentioned (https://pypi.python.org/simple/pyperclip/) I see a bunch of pyperclip ZIP files in all different versions. But if I select a version I'm not sure where to place them/extract them or if extracting them is even the right thing to do. Any advice?
[ "There is a problem with the current version of the pyperclip I checked the git repo and opened a pull request for the issue. It currently doesn't support use for python3.7\n", "You must navigate to your default install location for 3.6. For IDLE 32 bits it's:\nC:\\Users\\<username>\\AppData\\Local\\Programs\\Python\\Python36-32\\Scripts\\\n\nfor IDLE 64 bits use:\nC:\\Users\\<username>\\AppData\\Local\\Programs\\Python\\Python36\\Scripts\\\n\nIf you navigate to that directory in the command prompt (use cd command), then the command \npip install pyperclip\n\nshould work.\n", "Open cmd and type pip install pyperclip with no double quote.\nEnter to run for installation.\nIf you install successfully, you can import in IDLE.\n", "i had this problem too! and i solved running: python -m pip install pyperclip\n\nnote: replace for python that is being used, for example,\nif you are trying running in python 3.10: python3.10 -m pip install pyperclip\n\nthis problem may is because you may more than one python environment which one of then have no pyperclip installed\n", "For manual Installation,\nI went to the link you provided in your question.\nSince all of them are pyperclip 1.5, I wouldn't mind downloading any one of them.\nHowever, the person who designed the site must want you to click on the topmost link, instead of scrolling all the way down.\nSo I downloaded the 1.5.11 version, which was on the top.\nAnd on downloading (I am using my phone) I quickly extracted the zip and saw a nice setup.py file there.\nBtw, I wouldn't worry where to unzip or run the setup.py file.\n\nGo ahead and run the setup.py and let it take its time.\nOnce done, still not working in IDLE? 
I suggest use CMD(windows) or Terminal(Linux) and check if it is working there, and then come back to me.\nEdit: On having problems installing the setup.py, open CMD as administrator by right clicking and selecting it from the start menu.\nNavigate to the directory where setup.py is located and run it inside the CMD, by typing 'setup.py' and press Enter.\n", "Thanks, it worked for me too.\nSee below details as written above:\nThanks Sayan! It works now. Just to clarify and detail what I did (if it helps others): \n\ndownloaded the zip file and extracted the folder contents to a location on C drive \nClick the \"Start\" button and type \"cmd.\" Right-click \"Cmd,\" select \"Run as Administrator\" 3) \nchanged directory to where the setup.py file is located in the\nfolder I just extracted\ntyped \"setup.py install\"\nNow I'm able to import the module into my code\n\n", "Go to windows and search for your IDE (e.g. VsCode)\n\nThen Go to the folder of your IDE and go to the properties of file that end .exe go to properties -> compatibility -> [✔️] Run this program as an administrator (apply and then Ok)\n\n\n", "Manual installation of pyperclip is possible. Just download the version you want from https://pypi.org/simple/pyperclip/, extract it and run setup.py install (or python setup.py install) in the extracted folder.\n" ]
[ 4, 3, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "pyperclip", "python" ]
stackoverflow_0047684616_pyperclip_python.txt
Q: Create DataFrame column with pairwise Last In First Out method as condition I have a DataFrame df1 with following columns: Date, Direction, Input, Output, and Amount. df1 Date Direction Input Output Amount 0 2022-01-02 In 18.5 0.0 1.0 1 2022-01-03 In 18.0 0.0 2.0 2 2022-01-04 Out 0.0 18.5 2.0 3 2022-01-05 In 16.0 0.0 1.0 4 2022-01-06 In 14.0 0.0 0.5 5 2022-01-07 Out 0.0 15.0 0.5 6 2022-01-08 Out 0.0 16.5 1.0 7 2022-01-09 Out 0.0 19.0 1.0 8 2022-01-10 In 13.0 0.0 0.9 9 2022-01-11 Out 0.0 15.0 0.9 10 2022-01-12 In 14.0 0.0 1.3 11 2022-01-13 In 12.0 0.0 1.4 I try to create an additional column; Difference that calculates the Last In First Out difference between the Input and Output. If there is an output (df1['Direction'] == 'Out') on a specific date, I try to look back and calculate the difference to the last input, which is not already used for another output. In addition, I try to control that the amount of input and output matches. The decided output df2 would look like this: Date Direction Input Output Amount Difference 0 2022-01-02 In 18.5 0.0 1.0 0.0 1 2022-01-03 In 18.0 0.0 2.0 0.0 2 2022-01-04 Out 0.0 18.5 2.0 0.5 <-- 18.5-18 3 2022-01-05 In 16.0 0.0 1.0 0.0 4 2022-01-06 In 14.0 0.0 0.5 0.0 5 2022-01-07 Out 0.0 15.0 0.5 1.0 <-- 15-14 6 2022-01-08 Out 0.0 16.5 1.0 0.5 <-- 16.5-16 (2022-01-05) 7 2022-01-09 Out 0.0 19.0 1.0 0.5 <-- 19-18.5 (2022-01-02) 8 2022-01-10 In 13.0 0.0 0.9 0.0 9 2022-01-11 Out 0.0 15.0 0.9 2.0 <-- 15-13 10 2022-01-12 In 14.0 0.0 1.3 0.0 11 2022-01-13 In 12.0 0.0 1.4 0.0 I was trying it with np.where() as condition and then substracting the shifted Input, but I don't know how to shift further to the previous Input if there are several Outputs after each other. 
df1['Difference'] = np.where((df1['Direction'] == 'Out'), df1['Output']-df1['Input'].shift(1),0) For reproducibility: import pandas as pd import numpy as np df1 = pd.DataFrame({ 'Date':['2022-01-02', '2022-01-03', '2022-01-04', '2022-01-05', '2022-01-06', '2022-01-07', '2022-01-08', '2022-01-09', '2022-01-10', '2022-01-11', '2022-01-12', '2022-01-13'], 'Direction':['In', 'In', 'Out', 'In', 'In', 'Out', 'Out', 'Out', 'In', 'Out', 'In', 'In'], 'Input':[18.5, 18, 0, 16, 14, 0, 0, 0, 13, 0, 14, 12], 'Output':[0, 0, 18.5, 0, 0, 15, 16.5, 19, 0, 15, 0, 0], 'Amount':[1, 2, 2, 1, 0.5, 0.5, 1, 1, 0.9, 0.9, 1.3, 1.4]}) Many thanks! A: The proper description for the problem should be Last In First Out: the last unused In row is matched to each Out row. You can solve this using a stack-based approach with deque: from collections import deque inputs = deque() difference = [] for row in df1[["Direction", "Input", "Output"]].itertuples(): if row.Direction == "In": inputs.append(row.Input) difference.append(0) else: # Calculate the Difference based on the last In value and remove it difference.append(row.Output - inputs.pop()) df2 = df1.assign(Difference=difference)
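The stack idea in the deque answer can be verified without pandas at all. This standalone sketch (my own rewrite, using the question's data and producing the requested Difference column) matches the expected values in the question:

```python
direction = ['In', 'In', 'Out', 'In', 'In', 'Out', 'Out', 'Out', 'In', 'Out', 'In', 'In']
inputs_col = [18.5, 18, 0, 16, 14, 0, 0, 0, 13, 0, 14, 12]
outputs_col = [0, 0, 18.5, 0, 0, 15, 16.5, 19, 0, 15, 0, 0]

stack = []          # unmatched In values, last in first out
difference = []
for d, i, o in zip(direction, inputs_col, outputs_col):
    if d == 'In':
        stack.append(i)        # remember this In until an Out consumes it
        difference.append(0.0)
    else:
        difference.append(o - stack.pop())  # match against the most recent unused In

print(difference)
# [0.0, 0.0, 0.5, 0.0, 0.0, 1.0, 0.5, 0.5, 0.0, 2.0, 0.0, 0.0]
```

The Out rows reproduce exactly the hand-computed pairs from the question: 18.5-18, 15-14, 16.5-16, 19-18.5, and 15-13.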
Create DataFrame column with pairwise Last In First Out method as condition
I have a DataFrame df1 with following columns: Date, Direction, Input, Output, and Amount. df1 Date Direction Input Output Amount 0 2022-01-02 In 18.5 0.0 1.0 1 2022-01-03 In 18.0 0.0 2.0 2 2022-01-04 Out 0.0 18.5 2.0 3 2022-01-05 In 16.0 0.0 1.0 4 2022-01-06 In 14.0 0.0 0.5 5 2022-01-07 Out 0.0 15.0 0.5 6 2022-01-08 Out 0.0 16.5 1.0 7 2022-01-09 Out 0.0 19.0 1.0 8 2022-01-10 In 13.0 0.0 0.9 9 2022-01-11 Out 0.0 15.0 0.9 10 2022-01-12 In 14.0 0.0 1.3 11 2022-01-13 In 12.0 0.0 1.4 I try to create an additional column; Difference that calculates the Last In First Out difference between the Input and Output. If there is an output (df1['Direction'] == 'Out') on a specific date, I try to look back and calculate the difference to the last input, which is not already used for another output. In addition, I try to control that the amount of input and output matches. The decided output df2 would look like this: Date Direction Input Output Amount Difference 0 2022-01-02 In 18.5 0.0 1.0 0.0 1 2022-01-03 In 18.0 0.0 2.0 0.0 2 2022-01-04 Out 0.0 18.5 2.0 0.5 <-- 18.5-18 3 2022-01-05 In 16.0 0.0 1.0 0.0 4 2022-01-06 In 14.0 0.0 0.5 0.0 5 2022-01-07 Out 0.0 15.0 0.5 1.0 <-- 15-14 6 2022-01-08 Out 0.0 16.5 1.0 0.5 <-- 16.5-16 (2022-01-05) 7 2022-01-09 Out 0.0 19.0 1.0 0.5 <-- 19-18.5 (2022-01-02) 8 2022-01-10 In 13.0 0.0 0.9 0.0 9 2022-01-11 Out 0.0 15.0 0.9 2.0 <-- 15-13 10 2022-01-12 In 14.0 0.0 1.3 0.0 11 2022-01-13 In 12.0 0.0 1.4 0.0 I was trying it with np.where() as condition and then substracting the shifted Input, but I don't know how to shift further to the previous Input if there are several Outputs after each other. 
df1['Difference'] = np.where((df1['Direction'] == 'Out'), df1['Output']-df1['Input'].shift(1),0) For reproducibility: import pandas as pd import numpy as np df1 = pd.DataFrame({ 'Date':['2022-01-02', '2022-01-03', '2022-01-04', '2022-01-05', '2022-01-06', '2022-01-07', '2022-01-08', '2022-01-09', '2022-01-10', '2022-01-11', '2022-01-12', '2022-01-13'], 'Direction':['In', 'In', 'Out', 'In', 'In', 'Out', 'Out', 'Out', 'In', 'Out', 'In', 'In'], 'Input':[18.5, 18, 0, 16, 14, 0, 0, 0, 13, 0, 14, 12], 'Output':[0, 0, 18.5, 0, 0, 15, 16.5, 19, 0, 15, 0, 0], 'Amount':[1, 2, 2, 1, 0.5, 0.5, 1, 1, 0.9, 0.9, 1.3, 1.4]}) Many thanks!
[ "The proper description for the problem should be Last In First Out: the last unused In row is matched to each Out row.\nYou can solve this using a stack-based approach with deque:\nfrom collections import deque\n\ninputs = deque()\ndifference = []\n\nfor row in df1[[\"Direction\", \"Input\", \"Output\"]].itertuples():\n if row.Direction == \"In\":\n inputs.append(row.Input)\n difference.append(0)\n else:\n # Calculate the Difference based on the last In value and remove it\n difference.append(row.Output - inputs.pop())\n\ndf2 = df1.assign(Difference=difference)\n" ]
[ 2 ]
[]
[]
[ "dataframe", "numpy", "pandas", "python" ]
stackoverflow_0074501233_dataframe_numpy_pandas_python.txt
Q: Initialise a class with objects of another class say I have a class which describes a ball and its properties: class Ball: def __init__(self, m=0.0,x=0.0, y=0.0): self.m = m self.x = x self.y = y self.r = np.array([x,y]) def pos(self): print('Current position is:', self.r) def move(self, x_move, y_move): x_moved = self.x+ x_move y_moved = self.y+ y_move r_moved = ([x_moved, y_moved]) self.r = r_moved How do i create another class which would initialise with objects from class Ball? And use methods from class Ball too? I'm trying to create something like: a = Ball(2,2,2) class Simulation: def __init___('''object of Ball e.g. a''', r): def next_move(self): position_after_next_move = a.pos + '''method move from class Ball''' I hope what I'm trying to say makes some sense. A: You are very close. You just need to pass the ball into the __init__() method and store it on the instance: class Simulation: def __init___(self, ball, r): self.ball = ball ... def next_move(self): position_after_next_move = self.ball.pos() a = Ball(2,2,2) s = Simulation(a, 42) s.next_move()
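The composition pattern in the answer can be shown end to end. A minimal, numpy-free variant (my own sketch; the original Ball stored the position in a numpy array, here a plain list stands in) where Simulation holds a Ball and delegates to its methods:

```python
class Ball:
    def __init__(self, m=0.0, x=0.0, y=0.0):
        self.m = m
        self.r = [x, y]

    def move(self, dx, dy):
        self.r = [self.r[0] + dx, self.r[1] + dy]

class Simulation:
    def __init__(self, ball, r):
        self.ball = ball   # composition: keep the Ball instance on self
        self.r = r

    def next_move(self, dx, dy):
        self.ball.move(dx, dy)   # delegate to the Ball's own method
        return self.ball.r

a = Ball(2, 2, 2)
s = Simulation(a, 42)
print(s.next_move(1, -1))  # [3, 1]
```

Note that `s.ball` and `a` are the same object, so moves made through the Simulation are visible on the original Ball as well.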
Initialise a class with objects of another class
say I have a class which describes a ball and its properties: class Ball: def __init__(self, m=0.0,x=0.0, y=0.0): self.m = m self.x = x self.y = y self.r = np.array([x,y]) def pos(self): print('Current position is:', self.r) def move(self, x_move, y_move): x_moved = self.x+ x_move y_moved = self.y+ y_move r_moved = ([x_moved, y_moved]) self.r = r_moved How do i create another class which would initialise with objects from class Ball? And use methods from class Ball too? I'm trying to create something like: a = Ball(2,2,2) class Simulation: def __init___('''object of Ball e.g. a''', r): def next_move(self): position_after_next_move = a.pos + '''method move from class Ball''' I hope what I'm trying to say makes some sense.
[ "You are very close. You just need to pass the ball into the __init__() method and store it on the instance:\nclass Simulation:\n def __init___(self, ball, r):\n self.ball = ball\n ...\n\n def next_move(self):\n position_after_next_move = self.ball.pos()\n\na = Ball(2,2,2)\ns = Simulation(a, 42)\ns.next_move()\n\n" ]
[ 2 ]
[]
[]
[ "class", "python" ]
stackoverflow_0074501406_class_python.txt
Q: Project created but its fields are empty when sent from React to Django API I've been working on this React + Django APP. And I have been making a simple CRUD functionality into this app. everything goes fine but when i came to create project and send it to the django database, it gets created but when i look at it at projects/list it only shows the delete button and and image field which is not important, i only want the title and body fields to be shows This is views.py class CreateProjectView(viewsets.ModelViewSet): serializer_class = ProjectSerializer def post(self, request): project = ProjectSerializer(data=request.data) if project.is_valid(raise_exception=True): project.save() return Response(project.data) urls.py create_project = CreateProjectView.as_view({"get": "post"}) urlpatterns = [ path("project/create", create_project, name="create-project"), ] Now React CreateProject.js import React, { useState } from 'react' const CreateProject = () => { let [project, setProject] = useState([]) let [title, setProjectTitle] = useState("") let [body, setProjectBody] = useState("") const handleChangeTitle = (value) => { setProjectTitle(project => ({ ...title, 'title': value})) console.log("Title:", value) } const handleChangeBody = (value) => { setProjectBody(project => ({ ...body, 'body': value})) console.log("Body: ", value) } let createProject = async () => { fetch('http://localhost:8000/api/project/create', { method: "POST", headers: { 'Content-Type': "application/json" }, // title: JSON.stringify(project_title), // title: project_title, // body: project_body, // image: "hello", // title: title, // body: body project: { "title": title, "body": body } // project: project.title }, ) // let project = {project_title, project_body} } let handleSubmit = () => { setProject(project) createProject() } return ( <div> <h3>title</h3> <input type="text" name="title" onChange={e => {handleChangeTitle(e.target.value)}} defaultValue={project?.title} /> <br /> <h3>body</h3> 
<input type="text" name="body" onChange={e => {handleChangeBody(e.target.value)}} defaultValue={project?.body} /> <br/> <br/> <br/> <button onClick={createProject}>submit</button> </div> ) } export default CreateProject ProjectViewSet in view.py class ProjectView(viewsets.ModelViewSet): queryset = Project.objects.all() serializer_class = ProjectSerializer I was expecting it to show the title and body fields and they content that were created in the create project page A: I tried a lot of solutions but nothing worked. It turns out the problem in in the CreateProject.js components where the createProject() is. so this is how i fixed it: At first I was just sending the data fields like this: title: project.title, body: project.body, but I should have been: body: JSON.stringify({ title: project.title, body: project.body, }) I tried removing the JSON.stringify and leaving it just a {} like this body: { title: project.title, } but It has to be this way. Anyways this is the correct way to send POST requests from React to Django uing React Functions let updateProject = async () => { fetch(`http://localhost:8000/api/project/update/${projectId}`, { method: "PUT", headers: { 'Content-Type': 'application/json', }, body: JSON.stringify({ title: project.title, body: project.body }) }) }
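The fix hinges on the request body being a flat JSON string whose top-level keys match the serializer's fields. A small standalone check (plain `json`, no Django or React required; the field names are taken from the question) makes the difference between the flat and nested payloads visible:

```python
import json

# The shape the ProjectSerializer above expects: top-level title and body.
payload = json.dumps({"title": "My project", "body": "Hello"})

# The nested variant from the question's fetch() call wraps the fields
# under a "project" key, so the serializer finds no title or body at all.
nested = json.dumps({"project": {"title": "My project", "body": "Hello"}})

print(sorted(json.loads(payload)))        # ['body', 'title']
print("title" in json.loads(nested))      # False
```

This mirrors why `body: JSON.stringify({title: ..., body: ...})` works while sending the `project: {...}` object leaves the created record with empty fields.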
Project created but its fields are empty when sent from React to Django API
I've been working on this React + Django APP. And I have been making a simple CRUD functionality into this app. everything goes fine but when i came to create project and send it to the django database, it gets created but when i look at it at projects/list it only shows the delete button and and image field which is not important, i only want the title and body fields to be shows This is views.py class CreateProjectView(viewsets.ModelViewSet): serializer_class = ProjectSerializer def post(self, request): project = ProjectSerializer(data=request.data) if project.is_valid(raise_exception=True): project.save() return Response(project.data) urls.py create_project = CreateProjectView.as_view({"get": "post"}) urlpatterns = [ path("project/create", create_project, name="create-project"), ] Now React CreateProject.js import React, { useState } from 'react' const CreateProject = () => { let [project, setProject] = useState([]) let [title, setProjectTitle] = useState("") let [body, setProjectBody] = useState("") const handleChangeTitle = (value) => { setProjectTitle(project => ({ ...title, 'title': value})) console.log("Title:", value) } const handleChangeBody = (value) => { setProjectBody(project => ({ ...body, 'body': value})) console.log("Body: ", value) } let createProject = async () => { fetch('http://localhost:8000/api/project/create', { method: "POST", headers: { 'Content-Type': "application/json" }, // title: JSON.stringify(project_title), // title: project_title, // body: project_body, // image: "hello", // title: title, // body: body project: { "title": title, "body": body } // project: project.title }, ) // let project = {project_title, project_body} } let handleSubmit = () => { setProject(project) createProject() } return ( <div> <h3>title</h3> <input type="text" name="title" onChange={e => {handleChangeTitle(e.target.value)}} defaultValue={project?.title} /> <br /> <h3>body</h3> <input type="text" name="body" onChange={e => {handleChangeBody(e.target.value)}} 
defaultValue={project?.body} /> <br/> <br/> <br/> <button onClick={createProject}>submit</button> </div> ) } export default CreateProject ProjectViewSet in view.py class ProjectView(viewsets.ModelViewSet): queryset = Project.objects.all() serializer_class = ProjectSerializer I was expecting it to show the title and body fields and they content that were created in the create project page
[ "I tried a lot of solutions but nothing worked. It turns out the problem in in the CreateProject.js components where the createProject() is.\nso this is how i fixed it:\nAt first I was just sending the data fields like this:\ntitle: project.title,\nbody: project.body,\n\nbut I should have been:\nbody: JSON.stringify({\n title: project.title,\n body: project.body,\n})\n\nI tried removing the JSON.stringify and leaving it just a {} like this\nbody: {\n title: project.title,\n}\n\nbut It has to be this way.\nAnyways this is the correct way to send POST requests from React to Django uing React Functions\nlet updateProject = async () => {\n fetch(`http://localhost:8000/api/project/update/${projectId}`, {\n method: \"PUT\",\n headers: {\n 'Content-Type': 'application/json',\n },\n body: JSON.stringify({\n title: project.title,\n body: project.body\n })\n })\n }\n\n" ]
[ 0 ]
[]
[]
[ "django", "javascript", "python", "reactjs", "web_deployment" ]
stackoverflow_0074362233_django_javascript_python_reactjs_web_deployment.txt
Q: cv2.imshow() crashes on Mac When I am running this piece of code on ipython (MacOS /python 2.7.13) cv2.startWindowThread() cv2.imshow('img', img) cv2.waitKey() cv2.destroyAllWindows() the kernel crashes. When the image appears, the only button that I can press is minimise (the one in the middle and when I press any key then the spinning wheel shows up and the only thing I can do is force quit. P.S. I have downloaded the latest python version through home-brew. A: Do you just want to look at the image? I'm not sure what you want to do with startWindowThread, but if you want to install opencv the easiest way, open the image, and view it try this: install conda (A better package manager for opencv than homebrew) then create a cv environment: conda create -n cv activate it and install opencv from menpo's channel source activate cv conda install -c menpo opencv then in python (hit q to exit): import cv2 cv2.namedWindow('imageWindow') img = cv2.imread('path/to/your/image.png') cv2.imshow('imageWindow',img) wait = True while wait: wait = cv2.waitKey()=='q113' # hit q to exit A: I have reproduced the jupyter kernel crash problem. The following is the test environment setup. - macOS 10.12.16 - python 2.7.11 - opencv 4.0.0 - ipython 5.8.0 - jupyter notebook server 5.7.4 With the change on cv2.waitKey() to waiting for a Q press, the problem goes away. Here is the code: import cv2 img = cv2.imread('sample.jpg') cv2.startWindowThread() cv2.imshow('img', img) # wait forever, if Q is pressed then close cv image window if cv2.waitKey(0) & 0xFF == ord('q'): cv2.destroyAllWindows() Hope this help. A: Had the same issue and no solutions I've found worked for me. I was able to resolve it only by copying this function from google colab tools. It doesn't show the image in a new window, but inline in the Jupyter notebook. import cv2 from IPython import display from PIL import Image def cv2_imshow(a): """A replacement for cv2.imshow() for use in Jupyter notebooks. Args: a : np.ndarray. 
shape (N, M) or (N, M, 1) is an NxM grayscale image. shape (N, M, 3) is an NxM BGR color image. shape (N, M, 4) is an NxM BGRA color image. """ a = a.clip(0, 255).astype('uint8') # cv2 stores colors as BGR; convert to RGB if a.ndim == 3: if a.shape[2] == 4: a = cv2.cvtColor(a, cv2.COLOR_BGRA2RGBA) else: a = cv2.cvtColor(a, cv2.COLOR_BGR2RGB) display.display(Image.fromarray(a)) A: I have the latest version of python (2.7.15) on Mac OS X 10.14.3. Why can't we just save the contents to a file and run it using the command python filename.py. It's still the same nonetheless and it works!! The sample code I tested is: import cv2 img = cv2.imread('sample.jpg') cv2.startWindowThread() cv2.imshow('img', img) cv2.waitKey() cv2.destroyAllWindows() Hope it helps! A: This is working for me. The window is shut down, the program can continue. The fly in the ointment is that the window icon is still in the dock. while cv2.waitKey(100) != 27:# loop if not get ESC cv2.imshow("Window_Name", Your_Image) cv2.destroyAllWindows() cv2.waitKey(1)
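Several answers above rely on `cv2.waitKey(0) & 0xFF == ord('q')`. The reason is that `waitKey` returns an int whose low byte carries the key code, while some platforms set higher bits as well. A cv2-free sketch of the masking (the sample value 1048689 is an assumption, the kind of raw return seen on some builds, not something this document reports):

```python
def is_key(raw, char):
    """Compare a raw waitKey()-style return value against a character,
    keeping only the low byte, as in `waitKey(0) & 0xFF == ord('q')`."""
    return (raw & 0xFF) == ord(char)

print(ord('q'))              # 113
print(is_key(113, 'q'))      # True - plain ASCII return value
print(is_key(1048689, 'q'))  # True - high bits set, low byte is still 'q'
```

This also shows why comparing the raw return directly against a character or string (as one answer does with `'q113'`) never matches.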
cv2.imshow() crashes on Mac
When I am running this piece of code on ipython (MacOS /python 2.7.13) cv2.startWindowThread() cv2.imshow('img', img) cv2.waitKey() cv2.destroyAllWindows() the kernel crashes. When the image appears, the only button that I can press is minimise (the one in the middle), and when I press any key the spinning wheel shows up and the only thing I can do is force quit. P.S. I have downloaded the latest python version through home-brew.
[ "Do you just want to look at the image? I'm not sure what you want to do with startWindowThread, but if you want to install opencv the easiest way, open the image, and view it try this:\ninstall conda (A better package manager for opencv than homebrew)\nthen create a cv environment:\nconda create -n cv\n\nactivate it and install opencv from menpo's channel\nsource activate cv\nconda install -c menpo opencv\n\nthen in python (hit q to exit):\nimport cv2\ncv2.namedWindow('imageWindow')\nimg = cv2.imread('path/to/your/image.png')\ncv2.imshow('imageWindow',img)\nwait = True\nwhile wait:\n wait = cv2.waitKey()=='q113' # hit q to exit\n\n", "I have reproduced the jupyter kernel crash problem. The following is the test environment setup.\n - macOS 10.12.16\n - python 2.7.11\n - opencv 4.0.0\n - ipython 5.8.0\n - jupyter notebook server 5.7.4\n\nWith the change on cv2.waitKey() to waiting for a Q press, the problem goes away.\nHere is the code:\nimport cv2\n\nimg = cv2.imread('sample.jpg')\ncv2.startWindowThread()\ncv2.imshow('img', img)\n\n# wait forever, if Q is pressed then close cv image window\nif cv2.waitKey(0) & 0xFF == ord('q'):\n cv2.destroyAllWindows()\n\nHope this help.\n", "Had the same issue and no solutions I've found worked for me. I was able to resolve it only by copying this function from google colab tools. It doesn't show the image in a new window, but inline in the Jupyter notebook.\nimport cv2\nfrom IPython import display\nfrom PIL import Image\n\ndef cv2_imshow(a):\n \"\"\"A replacement for cv2.imshow() for use in Jupyter notebooks.\n Args:\n a : np.ndarray. shape (N, M) or (N, M, 1) is an NxM grayscale image. shape\n (N, M, 3) is an NxM BGR color image. 
shape (N, M, 4) is an NxM BGRA color\n image.\n \"\"\"\n a = a.clip(0, 255).astype('uint8')\n # cv2 stores colors as BGR; convert to RGB\n if a.ndim == 3:\n if a.shape[2] == 4:\n a = cv2.cvtColor(a, cv2.COLOR_BGRA2RGBA)\n else:\n a = cv2.cvtColor(a, cv2.COLOR_BGR2RGB)\n display.display(Image.fromarray(a))\n\n", "I have the latest version of python (2.7.15) on Mac OS X 10.14.3.\nWhy can't we just save the contents to a file and run it using the command python filename.py. It's still the same nonetheless and it works!!\nThe sample code I tested is:\nimport cv2\n\nimg = cv2.imread('sample.jpg')\n\ncv2.startWindowThread()\ncv2.imshow('img', img)\ncv2.waitKey()\ncv2.destroyAllWindows()\n\n\nHope it helps!\n", "This is working for me. The window is shut down, the program can continue. The fly in the ointment is that the window icon is still in the dock.\nwhile cv2.waitKey(100) != 27:# loop if not get ESC\n cv2.imshow(\"Window_Name\", Your_Image)\n\ncv2.destroyAllWindows()\ncv2.waitKey(1)\n\n" ]
[ 6, 4, 3, 2, 0 ]
[]
[]
[ "ipython", "macos", "opencv", "python" ]
stackoverflow_0046348972_ipython_macos_opencv_python.txt
Q: Reindex and Interpolate data Suppose I have the following data frame. index = [0.018519, 0.037037, 0.055556, 0.074074, 0.092593, 0.111111, 0.12963, 0.148148, 0.166667, 0.185185, 0.203704, 0.222222, 0.240741, 0.259259, 0.277778, 0.296296, 0.314815, 0.333333, 0.351852, 0.37037, 0.388889, 0.407407, 0.425926, 0.444444, 0.462963, 0.481481, 0.5, 0.518519, 0.537037, 0.555556, 0.574074, 0.592593, 0.611111, 0.62963, 0.648148, 0.666667, 0.685185, 0.703704, 0.722222, 0.740741, 0.759259, 0.777778, 0.796296, 0.814815, 0.833333, 0.851852, 0.87037, 0.888889, 0.907407, 0.925926, 0.944444, 0.962963, 0.981481, 1] y = [1.5, 2, 6, 23.5, 112, 158.5, 226, 332, 354.5, 376.5, 420.5, 479.5, 513, 513.5, 515.5, 516, 519.5, 523, 525.5, 527.5, 531, 536, 541, 542, 542, 545.5, 547, 553, 553.5, 555, 555.5, 555.5, 555.5, 556, 556.5, 557, 561, 564.5, 571, 586, 589.5, 589.5, 590, 590.5, 591.5, 592, 592.5, 592.5, 594, 595.5, 604.5, 606, 608, 608.5] df = pd.DataFrame(y, index=index).astype(float) I want to reindex and interpolate the y values based on a new index I tried the following: new_index= pd.Index([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]) df= df.reindex(new_index).interpolate(method='values') print (df) It successfully reindex the data frame but still give NaN values. print (df) 0.1 NaN 0.2 NaN 0.3 NaN 0.4 NaN 0.5 547.0 0.6 559.3 0.7 571.6 0.8 583.9 0.9 596.2 1.0 608.5 Note that all interpolation methods did not work, does anyone know how to get interpolated y values for the new index? 
Thanks A: Here is one way to do it: # Add new values df = pd.concat( [df, pd.DataFrame(data=[pd.NA for _ in range(len(new_index))], index=new_index)] ) # Remove duplicated indices, sort, interpolate and get rid of values not in new_index df = ( df.loc[~df.index.duplicated(keep="first"), :] .sort_index() .interpolate(method="values") .reindex(new_index) ) Then: print(df) # Output 0.1 130.599498 0.2 411.699525 0.3 516.700038 0.4 534.000054 0.5 547.000000 0.6 555.500000 0.7 563.799962 0.8 590.100005 0.9 593.400016 1.0 608.500000
Reindex and Interpolate data
Suppose I have the following data frame. index = [0.018519, 0.037037, 0.055556, 0.074074, 0.092593, 0.111111, 0.12963, 0.148148, 0.166667, 0.185185, 0.203704, 0.222222, 0.240741, 0.259259, 0.277778, 0.296296, 0.314815, 0.333333, 0.351852, 0.37037, 0.388889, 0.407407, 0.425926, 0.444444, 0.462963, 0.481481, 0.5, 0.518519, 0.537037, 0.555556, 0.574074, 0.592593, 0.611111, 0.62963, 0.648148, 0.666667, 0.685185, 0.703704, 0.722222, 0.740741, 0.759259, 0.777778, 0.796296, 0.814815, 0.833333, 0.851852, 0.87037, 0.888889, 0.907407, 0.925926, 0.944444, 0.962963, 0.981481, 1] y = [1.5, 2, 6, 23.5, 112, 158.5, 226, 332, 354.5, 376.5, 420.5, 479.5, 513, 513.5, 515.5, 516, 519.5, 523, 525.5, 527.5, 531, 536, 541, 542, 542, 545.5, 547, 553, 553.5, 555, 555.5, 555.5, 555.5, 556, 556.5, 557, 561, 564.5, 571, 586, 589.5, 589.5, 590, 590.5, 591.5, 592, 592.5, 592.5, 594, 595.5, 604.5, 606, 608, 608.5] df = pd.DataFrame(y, index=index).astype(float) I want to reindex and interpolate the y values based on a new index I tried the following: new_index= pd.Index([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]) df= df.reindex(new_index).interpolate(method='values') print (df) It successfully reindex the data frame but still give NaN values. print (df) 0.1 NaN 0.2 NaN 0.3 NaN 0.4 NaN 0.5 547.0 0.6 559.3 0.7 571.6 0.8 583.9 0.9 596.2 1.0 608.5 Note that all interpolation methods did not work, does anyone know how to get interpolated y values for the new index? Thanks
[ "Here is one way to do it:\n# Add new values\ndf = pd.concat(\n [df, pd.DataFrame(data=[pd.NA for _ in range(len(new_index))], index=new_index)]\n)\n\n# Remove duplicated indices, sort, interpolate and get rid of values not in new_index\ndf = (\n df.loc[~df.index.duplicated(keep=\"first\"), :]\n .sort_index()\n .interpolate(method=\"values\")\n .reindex(new_index)\n)\n\nThen:\nprint(df)\n# Output\n0.1 130.599498\n0.2 411.699525\n0.3 516.700038\n0.4 534.000054\n0.5 547.000000\n0.6 555.500000\n0.7 563.799962\n0.8 590.100005\n0.9 593.400016\n1.0 608.500000\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074485697_pandas_python.txt
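The reindex-then-interpolate pitfall in the record above comes up often: `reindex` drops the original index points before `interpolate` can use them, so any new label before the first surviving old label stays NaN. A minimal sketch of the accepted approach (put old and new indices together, interpolate, then keep only the new labels) — the data here is a small made-up series, not the question's:

```python
import pandas as pd

index = [0.25, 0.5, 0.75, 1.0]
y = [10.0, 20.0, 30.0, 40.0]
df = pd.DataFrame(y, index=index)

new_index = pd.Index([0.3, 0.6, 0.9])

# Wrong order: df.reindex(new_index).interpolate(...) first throws away the
# known points, so labels before the first surviving one stay NaN.
# Right order: interpolate over the union of old and new labels, then
# keep only the rows that were asked for.
df = (
    df.reindex(df.index.union(new_index))
      .interpolate(method="values")   # linear in the index values
      .reindex(new_index)
)
print(df)
```

`df.index.union(new_index)` also sidesteps the duplicate-index handling in the answer when the old and new indices share no labels; with overlapping labels, the answer's `~df.index.duplicated(...)` step is still needed.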
Q: How to display data from a dictionary within a list in a readable format? The data needs to be stored in this format data = {'admin': [{'title': 'Register Users with taskManager.py', 'description': 'Use taskManager.py to add the usernames and passwords for all team members that will be using this program.', 'due date': '10 Oct 2019', 'date assigned': '20 Oct 2019', 'status': 'No'}, {'title': 'Assign initial tasks', 'description': 'Use taskManager.py to assign each team member with appropriate tasks', 'due date': '10 Oct 2019', 'date assigned': '25 Oct 2019', 'status': 'No'}], 'new user': [{'title': 'Take out trash', 'description': 'Take the trash can down the street', 'due date': '10 oct 2022', 'date assigned': '20 Oct 2022', 'status': 'No'}]} I need to display this data like this: user: admin title: Register Users with taskManager.py description: Use taskManager.py to add the usernames and passwords for all team members that will be using this program date assigned: 10 Oct 2019 due date: 20 Oct 2022 status: No title: Assign initial tasks description: Use taskManager.py to assign each team member with appropriate tasks date assigned: 10 Oct 2019 due date: 25 Oct 2019 status: No user: new user title: Take out trash description: Take the trash can down the street date assigned: 10 Oct 2022 due date: 20 Oct 202 status: No How do I do this?
How to display data from a dictionary within a list in a readable format?
The data needs to be stored in this format data = {'admin': [{'title': 'Register Users with taskManager.py', 'description': 'Use taskManager.py to add the usernames and passwords for all team members that will be using this program.', 'due date': '10 Oct 2019', 'date assigned': '20 Oct 2019', 'status': 'No'}, {'title': 'Assign initial tasks', 'description': 'Use taskManager.py to assign each team member with appropriate tasks', 'due date': '10 Oct 2019', 'date assigned': '25 Oct 2019', 'status': 'No'}], 'new user': [{'title': 'Take out trash', 'description': 'Take the trash can down the street', 'due date': '10 oct 2022', 'date assigned': '20 Oct 2022', 'status': 'No'}]} I need to display this data like this: user: admin title: Register Users with taskManager.py description: Use taskManager.py to add the usernames and passwords for all team members that will be using this program date assigned: 10 Oct 2019 due date: 20 Oct 2022 status: No title: Assign initial tasks description: Use taskManager.py to assign each team member with appropriate tasks date assigned: 10 Oct 2019 due date: 25 Oct 2019 status: No user: new user title: Take out trash description: Take the trash can down the street date assigned: 10 Oct 2022 due date: 20 Oct 202 status: No How do I do this?
[]
[]
[ "Try this:\ndict = #your dict here\nfor user in dict.values():\n print(f\"user: {user}\")\n for k, v in dict[user]: # selects sub dicts\n print (f\"{k}: {v})\n\n", "You basically have a densely nested structure so if this is the final structure of your data then the easiest way is unravel it in a hard coded way using dict.items() like so:\ndata = {'admin': [{'title': 'Register Users with taskManager.py', 'description': 'Use taskManager.py to add the usernames and passwords for all team members that will be using this program.', 'due date': '10 Oct 2019', 'date assigned': '20 Oct 2019', 'status': 'No'}, {'title': 'Assign initial tasks', 'description': 'Use taskManager.py to assign each team member with appropriate tasks', 'due date': '10 Oct 2019', 'date assigned': '25 Oct 2019', 'status': 'No'}], 'new user': [{'title': 'Take out trash', 'description': 'Take the trash can down the street', 'due date': '10 oct 2022', 'date assigned': '20 Oct 2022', 'status': 'No'}]}\n\n\nfor user, tasks in data.items():\n print(\"user:\", user)\n for task in tasks:\n print()\n for field, value in task.items():\n print(f\"{field}: {value}\")\n print()\n\nThis produces the desired output:\nuser: admin\n\ntitle: Register Users with taskManager.py\ndescription: Use taskManager.py to add the usernames and passwords for all team members that will be using this program.\ndue date: 10 Oct 2019\ndate assigned: 20 Oct 2019\nstatus: No\n\n\ntitle: Assign initial tasks\ndescription: Use taskManager.py to assign each team member with appropriate tasks\ndue date: 10 Oct 2019\ndate assigned: 25 Oct 2019\nstatus: No\n\nuser: new user\n\ntitle: Take out trash\ndescription: Take the trash can down the street\ndue date: 10 oct 2022\ndate assigned: 20 Oct 2022\nstatus: No\n\n", "Loop over the keys to print the data. You may want to add extra formatting for readability afterward.\nfor i in data.keys():\n print(f\"user: {i}\")\n for j in data[i]:\n for k in j.keys():\n print(f\"{k}: {j[k]}\")\n\n" ]
[ -1, -1, -1 ]
[ "dictionary", "list", "python" ]
stackoverflow_0074501637_dictionary_list_python.txt
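The nested loop shown in the answers above can be wrapped into a small helper; this sketch builds the report as a single string, which makes it easy to print, write to a file, or test (the sample data is a shortened version of the question's):

```python
data = {
    "admin": [
        {"title": "Assign initial tasks", "status": "No"},
    ],
    "new user": [
        {"title": "Take out trash", "status": "No"},
    ],
}

def format_tasks(data):
    """Render {user: [task_dict, ...]} as readable 'key: value' lines."""
    lines = []
    for user, tasks in data.items():
        lines.append(f"user: {user}")
        for task in tasks:
            lines.append("")                          # blank line before each task
            lines.extend(f"{k}: {v}" for k, v in task.items())
        lines.append("")                              # blank line between users
    return "\n".join(lines)

print(format_tasks(data))
```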
Q: PM2.js to Run Gunicorn/Flask App inside Virtualenv/Anaconda env I have been running gunicorn to serve a Python Flask app using the commands conda activate fooenv gunicorn --workers=4 -b 0.0.0.0:5000 --worker-class=meinheld.gmeinheld.MeinheldWorker api.app:app How can we use pm2 instead to run gunicorn/flask app inside the fooenv environment? A: supposed you can run gunicorn in your venv via e.g.: gunicorn wsgi:app -b localhost:5010 then you simply use command (in the venv): pm2 --name=myapp start "gunicorn wsgi:app -b localhost:5010" (took me way too long to figure this out btw) A: I would create a pm2.json file in the same directory as your app, then start it with pm2 start pm2.json. pm2 ls will now show your app running. { "apps": [ { "name": "my-app", "script": "gunicorn --workers=4 -b 0.0.0.0:5000 --worker-class=meinheld.gmeinheld.MeinheldWorker api.app:app", "watch": false, "max_memory_restart": "256M", "output": "/var/www/html/logs/my-app-out.log", "error": "/var/www/html/logs/my-app-error.log", "kill_timeout": 5000, "restartDelay": 5000 } ] } A: If you are using virtualenv and getting the following error on pm2 ls: /usr/bin/bash: line 1: gunicorn: command not found Then, you could try initiating the virtualenv as well in the start command: pm2 --name=ai start "cd ~/my-python-project-dir && source venv/bin/activate && gunicorn wsgi:app -b localhost:5010"
PM2.js to Run Gunicorn/Flask App inside Virtualenv/Anaconda env
I have been running gunicorn to serve a Python Flask app using the commands conda activate fooenv gunicorn --workers=4 -b 0.0.0.0:5000 --worker-class=meinheld.gmeinheld.MeinheldWorker api.app:app How can we use pm2 instead to run gunicorn/flask app inside the fooenv environment?
[ "supposed you can run gunicorn in your venv via e.g.:\ngunicorn wsgi:app -b localhost:5010\n\nthen you simply use command (in the venv):\npm2 --name=myapp start \"gunicorn wsgi:app -b localhost:5010\"\n\n(took me way too long to figure this out btw)\n", "I would create a pm2.json file in the same directory as your app, then start it with pm2 start pm2.json. pm2 ls will now show your app running.\n{\n \"apps\": [\n {\n \"name\": \"my-app\",\n \"script\": \"gunicorn --workers=4 -b 0.0.0.0:5000 --worker-class=meinheld.gmeinheld.MeinheldWorker api.app:app\",\n \"watch\": false,\n \"max_memory_restart\": \"256M\",\n \"output\": \"/var/www/html/logs/my-app-out.log\",\n \"error\": \"/var/www/html/logs/my-app-error.log\",\n \"kill_timeout\": 5000,\n \"restartDelay\": 5000\n }\n ]\n }\n\n", "If you are using virtualenv and getting the following error on pm2 ls:\n/usr/bin/bash: line 1: gunicorn: command not found\n\nThen, you could try initiating the virtualenv as well in the start command:\n pm2 --name=ai start \"cd ~/my-python-project-dir && source venv/bin/activate && gunicorn wsgi:app -b localhost:5010\"\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "conda", "flask", "gunicorn", "pm2", "python" ]
stackoverflow_0066272697_conda_flask_gunicorn_pm2_python.txt
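If you go the `pm2.json` route from the second answer above, the file is plain JSON. A sketch that builds the config from Python and round-trips it through `json` to catch syntax slips before `pm2 start pm2.json` ever sees the file — the app name and gunicorn command are placeholders taken from the question:

```python
import json

# Hypothetical settings -- adjust the name, command, and any log paths.
config = {
    "apps": [
        {
            "name": "my-flask-app",
            "script": ("gunicorn --workers=4 -b 0.0.0.0:5000 "
                       "--worker-class=meinheld.gmeinheld.MeinheldWorker "
                       "api.app:app"),
            "watch": False,
            "max_memory_restart": "256M",
        }
    ]
}

# json.dumps turns Python's False into JSON's lowercase false, which is a
# common hand-editing mistake in pm2.json files.
text = json.dumps(config, indent=2)
loaded = json.loads(text)
print(text)
```

Write `text` out as `pm2.json` next to your app; when running inside a conda or virtualenv environment, activate the environment first (or embed the activation in the start command, as the other answers show) so `gunicorn` resolves to the environment's binary.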
Q: Networkx KeyError: 'source' with from_pandas_edgelist for undirected edgelist I have an edgelist in a pandas dataframe that looks like this: topic neighbor 0 K Kl 1 K Pr 2 Kl TS 3 Pr Kl 4 Pr Pr When I turn this into a Graph (using networkx as nx) with G = nx.from_pandas_edgelist(df) it gives me KeyError: 'source'. It works when I specify a source and target G = nx.from_pandas_edgelist(df, "topic", "neighbor") but this is an undirected Graph so I do not really want a source and target. Is this the way it has to be done? Will specifying a source and target have implications for later calculations of degree_centrality? A: Try this: import pandas as pd import networkx as nx df = pd.read_clipboard() print(df) Output: topic neighbor 0 K Kl 1 K Pr 2 Kl TS 3 Pr Kl 4 Pr Pr Use source and target parameters: G = nx.from_pandas_edgelist(df, source='topic', target='neighbor') nx.draw_networkx(G) Output: A: Yes, creating an undirected network from a dataframe requires specifying source and target. It's not necessary, but to be sure that the graph is undirected, one can specify create_using kwarg: from networkx import Graph, from_pandas_edgelist df = ... # note that Graph is the default setting, so specifying # create_using=Graph is optional G = from_pandas_edgelist(df, "topic", "neighbor", create_using=Graph) print(G.is_directed()) # False
Networkx KeyError: 'source' with from_pandas_edgelist for undirected edgelist
I have an edgelist in a pandas dataframe that looks like this: topic neighbor 0 K Kl 1 K Pr 2 Kl TS 3 Pr Kl 4 Pr Pr When I turn this into a Graph (using networkx as nx) with G = nx.from_pandas_edgelist(df) it gives me KeyError: 'source'. It works when I specify a source and target G = nx.from_pandas_edgelist(df, "topic", "neighbor") but this is an undirected Graph so I do not really want a source and target. Is this the way it has to be done? Will specifying a source and target have implications for later calculations of degree_centrality?
[ "Try this:\nimport pandas as pd\nimport networkx as nx\n\ndf = pd.read_clipboard()\nprint(df)\n\nOutput:\n topic neighbor\n0 K Kl\n1 K Pr\n2 Kl TS\n3 Pr Kl\n4 Pr Pr\n\nUse source and target parameters:\nG = nx.from_pandas_edgelist(df, source='topic', target='neighbor')\nnx.draw_networkx(G)\n\nOutput:\n\n", "Yes, creating an undirected network from a dataframe requires specifying source and target.\nIt's not necessary, but to be sure that the graph is undirected, one can specify create_using kwarg:\nfrom networkx import Graph, from_pandas_edgelist\n\ndf = ...\n\n# note that Graph is the default setting, so specifying\n# create_using=Graph is optional\nG = from_pandas_edgelist(df, \"topic\", \"neighbor\", create_using=Graph)\n\n\nprint(G.is_directed())\n# False\n\n" ]
[ 2, 1 ]
[]
[]
[ "networkx", "pandas", "python" ]
stackoverflow_0074501737_networkx_pandas_python.txt
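As both answers above note, `from_pandas_edgelist` has no way to guess which columns are the endpoints, so `source=`/`target=` must name them — and naming them does not make the graph directed. A sketch with the question's edge list, including the `create_using` check from the second answer:

```python
import pandas as pd
import networkx as nx

df = pd.DataFrame({
    "topic":    ["K", "K", "Kl", "Pr", "Pr"],
    "neighbor": ["Kl", "Pr", "TS", "Kl", "Pr"],
})

# source/target only name the columns; the default create_using=nx.Graph
# still yields an undirected graph (Pr-Pr becomes a self-loop).
G = nx.from_pandas_edgelist(df, source="topic", target="neighbor",
                            create_using=nx.Graph)

print(G.is_directed())
centrality = nx.degree_centrality(G)   # unaffected by the column naming
print(centrality)
```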
Q: How to fetch two table's information from a same webpage? I have to go to here Here I have to choose applicant name = “ASIAN PAINTS” (as an example) By this code, [Google Colab] !pip install selenium !apt-get update !apt install chromium-chromedriver import re import csv import json from time import sleep from typing import Generator, List, Tuple from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver import DesiredCapabilities from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.chrome.webdriver import WebDriver from selenium.webdriver.support import expected_conditions as EC options = webdriver.ChromeOptions() options.add_argument('--headless') options.add_argument('--no-sandbox') options.add_argument('--disable-dev-shm-usage') capabilities = DesiredCapabilities.CHROME capabilities["goog:loggingPrefs"] = {"performance": "ALL"} driver = webdriver.Chrome('chromedriver', chrome_options=options, desired_capabilities=capabilities) import csv import json from time import sleep, time from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver import DesiredCapabilities from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.chrome.webdriver import WebDriver from selenium.webdriver.support import expected_conditions as EC from selenium.common.exceptions import TimeoutException def save_to_csv(data: list) -> None: with open(file='ipindiaservices.csv', mode='a', encoding="utf-8") as f: writer = csv.writer(f, lineterminator='\n') writer.writerow([*data]) def start_from_page(page_number: int, driver: WebDriver) -> None: driver.execute_script( f""" document.querySelector('button.next').value = {page_number}; document.querySelector('button.next').click(); """ ) def titles_validation(driver: WebDriver) -> None: """replace empty title name with '_'""" driver.execute_script( """ let titles = 
document.querySelectorAll('input+.tab-pane tr:not(:first-child)>td:last-child') Array.from(titles).forEach((e) => { if (!e.textContent.trim()) { e.textContent = '_'; } }); """ ) def get_network_data(log: dict, driver: WebDriver) -> dict: log = json.loads(log["message"])["message"] if all([ "Network.responseReceived" in log["method"], "params" in log.keys(), 'CaptchaAudio' in str(log["params"].values()) ]): return driver.execute_cdp_cmd('Network.getResponseBody', {'requestId': log["params"]["requestId"]}) def get_captcha_text(driver: WebDriver, timeout: float) -> str: """Return captcha text Arguments: - driver: WebDriver - timeout: pause before receiving data from the web driver log """ driver.execute_script( """ // document.querySelector('img[title="Captcha"]').click() document.querySelector('img[title="Captcha Audio"]').click() """ ) sleep(timeout) logs = driver.get_log('performance') responses = [get_network_data(log, driver) for log in logs if get_network_data(log, driver)] if responses: return json.loads(responses[0]['body'])['CaptchaImageText'] else: get_captcha_text(driver, timeout) def submit_captcha(captcha_text: str, btn_name: str) -> None: """Submit captcha Arguments: - btn_name: captcha send button name["submit" or "search"] """ if btn_name == 'search': captcha_locator = (By.CSS_SELECTOR, 'input[name="submit"]') elif btn_name == 'submit': captcha_locator = (By.ID, 'btnSubmit') wait.until(EC.visibility_of_element_located((By.ID, 'CaptchaText'))).send_keys(captcha_text) wait.until(EC.visibility_of_element_located(captcha_locator)).click() ''' options = webdriver.ChromeOptions() options.add_argument('--headless') options.add_experimental_option("excludeSwitches", ["enable-automation", "enable-logging"]) capabilities = DesiredCapabilities.CHROME capabilities["goog:loggingPrefs"] = {"performance": "ALL"} # service = Service(executable_path="path/to/your/chromedriver.exe") # driver = webdriver.Chrome(service=service, options=options, 
desired_capabilities=capabilities) driver = webdriver.Chrome('chromedriver', chrome_options=options, desired_capabilities=capabilities) ''' wait = WebDriverWait(driver, 15) table_values_locator = (By.CSS_SELECTOR, 'input+.tab-pane tr:not(:first-child)>td:last-child') applicant_name_locator = (By.ID, 'TextField6') page_number_locator = (By.CSS_SELECTOR, 'span.Selected') app_satus_locator = (By.CSS_SELECTOR, 'button.btn') next_btn_locator = (By.CSS_SELECTOR, 'button.next') driver.get('https://ipindiaservices.gov.in/PublicSearch/') # sometimes an alert with an error message("") may appear, so a small pause is used sleep(1) wait.until(EC.visibility_of_element_located(applicant_name_locator)).send_keys('ltd') # on the start page and the page with the table, the names of the buttons are different captcha_text = get_captcha_text(driver, 1) submit_captcha(captcha_text, "search") # the page where the search starts start_from_page(1, driver) while True: start = time() # get current page number current_page = wait.until(EC.visibility_of_element_located(page_number_locator)).text print(f"Current page: {current_page}") # get all application status WebElements app_status_elements = wait.until(EC.visibility_of_all_elements_located(app_satus_locator)) for element in range(len(app_status_elements)): print(f"App number: {element}") # update application status WebElements app_status_elements = wait.until(EC.visibility_of_all_elements_located(app_satus_locator)) # click on application status wait.until(EC.visibility_of(app_status_elements[element])).click() # wait 2 seconds for the captcha to change sleep(2) # get text and submit captcha captcha_text = get_captcha_text(driver, 1) submit_captcha(captcha_text, "submit") try: # get all table data values(without titles) WebElements table_data_values = wait.until(EC.visibility_of_all_elements_located(table_values_locator)) # if there are empty rows in the table replace them with "_" titles_validation(driver) # save data to csv 
save_to_csv([val.text.replace('\n', ' ') for val in table_data_values]) except TimeoutException: print("Application Number does not exist") finally: driver.back() # print the current page number to the console print(f"Time per page: {round(time()-start, 3)}") # if the current page is equal to the specified one, then stop the search and close the driver if current_page == '1': break # click next page wait.until(EC.visibility_of_element_located(next_btn_locator)).click() driver.quit() import pandas as pd data = pd.read_csv('/content/ipindiaservices.csv') df = data.set_axis(['APPLICATION NUMBER', 'APPLICATION TYPE', 'DATE OF FILING', 'APPLICANT NAME', 'TITLE OF INVENTION','FIELD OF INVENTION','E-MAIL (As Per Record)','ADDITIONAL-EMAIL (As Per Record)','E-MAIL (UPDATED Online)','PCT INTERNATIONAL APPLICATION NUMBER','PCT INTERNATIONAL FILING DATE','PRIORITY DATE','REQUEST FOR EXAMINATION DATE','PUBLICATION DATE (U/S 11A)'], axis=1, inplace=False) df.head(2) from google.colab import drive drive.mount('drive') df.to_csv('data.csv') df.to_csv('/drive/My Drive/folder_name/name_csv_file.csv') I am successfully able to extract this information I also need to extract this table's information(yellow marked). Can it be possible? I want to append this status into my previous csv . Can it be done modifying the existing code. 
TIA def save_to_csv(data: list) -> None: with open(file='ipindiaservices.csv', mode='a', encoding="utf-8") as f: writer = csv.writer(f, lineterminator='\n') writer.writerow([*data]) def start_from_page(page_number: int, driver: WebDriver) -> None: driver.execute_script( f""" document.querySelector('button.next').value = {page_number}; document.querySelector('button.next').click(); """ ) def titles_validation(driver: WebDriver) -> None: """replace empty title name with '_'""" driver.execute_script( """ let titles = document.querySelectorAll('input+.tab-pane tr:not(:first-child)>td:last-child') Array.from(titles).forEach((e) => { if (!e.textContent.trim()) { e.textContent = '_'; } }); """ ) def get_network_data(log: dict, driver: WebDriver) -> dict: log = json.loads(log["message"])["message"] if all([ "Network.responseReceived" in log["method"], "params" in log.keys(), 'CaptchaAudio' in str(log["params"].values()) ]): return driver.execute_cdp_cmd('Network.getResponseBody', {'requestId': log["params"]["requestId"]}) def get_captcha_text(driver: WebDriver, timeout: float) -> str: """Return captcha text Arguments: - driver: WebDriver - timeout: pause before receiving data from the web driver log """ driver.execute_script( """ // document.querySelector('img[title="Captcha"]').click() document.querySelector('img[title="Captcha Audio"]').click() """ ) sleep(timeout) logs = driver.get_log('performance') responses = [get_network_data(log, driver) for log in logs if get_network_data(log, driver)] if responses: return json.loads(responses[0]['body'])['CaptchaImageText'] else: get_captcha_text(driver, timeout) def submit_captcha(captcha_text: str, btn_name: str) -> None: """Submit captcha Arguments: - btn_name: captcha send button name["submit" or "search"] """ if btn_name == 'search': captcha_locator = (By.CSS_SELECTOR, 'input[name="submit"]') elif btn_name == 'submit': captcha_locator = (By.ID, 'btnSubmit') wait.until(EC.visibility_of_element_located((By.ID, 
'CaptchaText'))).send_keys(captcha_text) wait.until(EC.visibility_of_element_located(captcha_locator)).click() ''' options = webdriver.ChromeOptions() options.add_argument('--headless') options.add_experimental_option("excludeSwitches", ["enable-automation", "enable-logging"]) capabilities = DesiredCapabilities.CHROME capabilities["goog:loggingPrefs"] = {"performance": "ALL"} # service = Service(executable_path="path/to/your/chromedriver.exe") # driver = webdriver.Chrome(service=service, options=options, desired_capabilities=capabilities) driver = webdriver.Chrome('chromedriver', chrome_options=options, desired_capabilities=capabilities) ''' wait = WebDriverWait(driver, 15) table_values_locator = (By.CSS_SELECTOR, 'input+.tab-pane tr:not(:first-child)>td:last-child') # read 2nd Table table2_values_locator = (By.CSS_SELECTOR, 'table tr:nth-of-type(2)') applicant_name_locator = (By.ID, 'TextField6') page_number_locator = (By.CSS_SELECTOR, 'span.Selected') app_satus_locator = (By.CSS_SELECTOR, 'button.btn') next_btn_locator = (By.CSS_SELECTOR, 'button.next') driver.get('https://ipindiaservices.gov.in/PublicSearch/') # sometimes an alert with an error message("") may appear, so a small pause is used sleep(1) wait.until(EC.visibility_of_element_located(applicant_name_locator)).send_keys('ASIAN PAINTS') # give input (according to your company name) # on the start page and the page with the table, the names of the buttons are different captcha_text = get_captcha_text(driver, 1) submit_captcha(captcha_text, "search") # the page where the search starts start_from_page(1, driver) while True: start = time() # get current page number current_page = wait.until(EC.visibility_of_element_located(page_number_locator)).text print(f"Current page: {current_page}") # get all application status WebElements app_status_elements = wait.until(EC.visibility_of_all_elements_located(app_satus_locator)) for element in range(len(app_status_elements)): print(f"App number: {element}") # update 
application status WebElements app_status_elements = wait.until(EC.visibility_of_all_elements_located(app_satus_locator)) # click on application status wait.until(EC.visibility_of(app_status_elements[element])).click() # wait 2 seconds for the captcha to change sleep(2) # get text and submit captcha captcha_text = get_captcha_text(driver, 1) submit_captcha(captcha_text, "submit") try: # get all table data values(without titles) WebElements table_data_values = wait.until(EC.visibility_of_all_elements_located(table_values_locator)) # if there are empty rows in the table replace them with "_" titles_validation(driver) # get all 2nd-table data WebElements table_data_values2 = wait.until(EC.visibility_of_all_elements_located(table2_values_locator)) # save data to csv save_to_csv([val.text.replace('\n', ' ') for val in table_data_values]) save_to_csv([val.text.replace('\n', ' ') for val in table_data_values2]) except TimeoutException: print("Application Number does not exist") finally: driver.back() # print the current page number to the console print(f"Time per page: {round(time()-start, 3)}") # if the current page is equal to the specified one, then stop the search and close the driver if current_page == '1': break # click next page wait.until(EC.visibility_of_element_located(next_btn_locator)).click() driver.quit() I have edited the code as per the suggestion, but getting this error A: First-Step - GET the 2nd table CSS-Selector (after Code-Line 121): ... table_values_locator = (By.CSS_SELECTOR, 'input+.tab-pane tr:not(:first-child)>td:last-child') # read 2nd Table table2_values_locator = (By.CSS_SELECTOR, 'table tr:nth-of-type(2)') .... Second-Step - add the data form 2nd table css-selector to CSV (after Code-Line 163): ... 
try: # get all 1st-table data values(without titles) WebElements table_data_values = wait.until(EC.visibility_of_all_elements_located(table_values_locator)) # if there are empty rows in the table replace them with "_" titles_validation(driver) # get all 2nd-table data WebElements table_data_values2 = wait.until(EC.visibility_of_all_elements_located(table2_values_locator)) # save data to csv save_to_csv([val.text.replace('\n', ' ') for val in table_data_values]) save_to_csv([val.text.replace('\n', ' ') for val in table_data_values2]) ...
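A note on the `save_to_csv` helper used above: calling it once per table, as the answer suggests, appends each table as its own CSV row, so the details row and the status row for one application land on separate lines. A stdlib sketch of that behaviour, rewritten to take an in-memory buffer instead of a file for illustration (the cell values are made up):

```python
import csv
import io

def save_to_csv(buffer, data):
    """Append one list of cell values as a single CSV row."""
    writer = csv.writer(buffer, lineterminator="\n")
    writer.writerow(data)

buffer = io.StringIO()

# First call: the application-details table, flattened to one row.
save_to_csv(buffer, ["201917044931", "ORDINARY APPLICATION", "05/11/2019"])
# Second call: the status table, appended as its own row.
save_to_csv(buffer, ["Application Referred U/S 12"])

rows = buffer.getvalue().splitlines()
print(rows)
```

If the status should instead extend the same row (one record per application in the final CSV), collect both tables' values into one list and make a single `save_to_csv` call before moving on to the next application.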
How to fetch two table's information from a same webpage?
I have to go to here Here I have to choose applicant name = “ASIAN PAINTS” (as an example) By this code, [Google Colab] !pip install selenium !apt-get update !apt install chromium-chromedriver import re import csv import json from time import sleep from typing import Generator, List, Tuple from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver import DesiredCapabilities from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.chrome.webdriver import WebDriver from selenium.webdriver.support import expected_conditions as EC options = webdriver.ChromeOptions() options.add_argument('--headless') options.add_argument('--no-sandbox') options.add_argument('--disable-dev-shm-usage') capabilities = DesiredCapabilities.CHROME capabilities["goog:loggingPrefs"] = {"performance": "ALL"} driver = webdriver.Chrome('chromedriver', chrome_options=options, desired_capabilities=capabilities) import csv import json from time import sleep, time from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver import DesiredCapabilities from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.chrome.webdriver import WebDriver from selenium.webdriver.support import expected_conditions as EC from selenium.common.exceptions import TimeoutException def save_to_csv(data: list) -> None: with open(file='ipindiaservices.csv', mode='a', encoding="utf-8") as f: writer = csv.writer(f, lineterminator='\n') writer.writerow([*data]) def start_from_page(page_number: int, driver: WebDriver) -> None: driver.execute_script( f""" document.querySelector('button.next').value = {page_number}; document.querySelector('button.next').click(); """ ) def titles_validation(driver: WebDriver) -> None: """replace empty title name with '_'""" driver.execute_script( """ let titles = document.querySelectorAll('input+.tab-pane 
tr:not(:first-child)>td:last-child') Array.from(titles).forEach((e) => { if (!e.textContent.trim()) { e.textContent = '_'; } }); """ ) def get_network_data(log: dict, driver: WebDriver) -> dict: log = json.loads(log["message"])["message"] if all([ "Network.responseReceived" in log["method"], "params" in log.keys(), 'CaptchaAudio' in str(log["params"].values()) ]): return driver.execute_cdp_cmd('Network.getResponseBody', {'requestId': log["params"]["requestId"]}) def get_captcha_text(driver: WebDriver, timeout: float) -> str: """Return captcha text Arguments: - driver: WebDriver - timeout: pause before receiving data from the web driver log """ driver.execute_script( """ // document.querySelector('img[title="Captcha"]').click() document.querySelector('img[title="Captcha Audio"]').click() """ ) sleep(timeout) logs = driver.get_log('performance') responses = [get_network_data(log, driver) for log in logs if get_network_data(log, driver)] if responses: return json.loads(responses[0]['body'])['CaptchaImageText'] else: get_captcha_text(driver, timeout) def submit_captcha(captcha_text: str, btn_name: str) -> None: """Submit captcha Arguments: - btn_name: captcha send button name["submit" or "search"] """ if btn_name == 'search': captcha_locator = (By.CSS_SELECTOR, 'input[name="submit"]') elif btn_name == 'submit': captcha_locator = (By.ID, 'btnSubmit') wait.until(EC.visibility_of_element_located((By.ID, 'CaptchaText'))).send_keys(captcha_text) wait.until(EC.visibility_of_element_located(captcha_locator)).click() ''' options = webdriver.ChromeOptions() options.add_argument('--headless') options.add_experimental_option("excludeSwitches", ["enable-automation", "enable-logging"]) capabilities = DesiredCapabilities.CHROME capabilities["goog:loggingPrefs"] = {"performance": "ALL"} # service = Service(executable_path="path/to/your/chromedriver.exe") # driver = webdriver.Chrome(service=service, options=options, desired_capabilities=capabilities) driver = 
webdriver.Chrome('chromedriver', chrome_options=options, desired_capabilities=capabilities) ''' wait = WebDriverWait(driver, 15) table_values_locator = (By.CSS_SELECTOR, 'input+.tab-pane tr:not(:first-child)>td:last-child') applicant_name_locator = (By.ID, 'TextField6') page_number_locator = (By.CSS_SELECTOR, 'span.Selected') app_satus_locator = (By.CSS_SELECTOR, 'button.btn') next_btn_locator = (By.CSS_SELECTOR, 'button.next') driver.get('https://ipindiaservices.gov.in/PublicSearch/') # sometimes an alert with an error message("") may appear, so a small pause is used sleep(1) wait.until(EC.visibility_of_element_located(applicant_name_locator)).send_keys('ltd') # on the start page and the page with the table, the names of the buttons are different captcha_text = get_captcha_text(driver, 1) submit_captcha(captcha_text, "search") # the page where the search starts start_from_page(1, driver) while True: start = time() # get current page number current_page = wait.until(EC.visibility_of_element_located(page_number_locator)).text print(f"Current page: {current_page}") # get all application status WebElements app_status_elements = wait.until(EC.visibility_of_all_elements_located(app_satus_locator)) for element in range(len(app_status_elements)): print(f"App number: {element}") # update application status WebElements app_status_elements = wait.until(EC.visibility_of_all_elements_located(app_satus_locator)) # click on application status wait.until(EC.visibility_of(app_status_elements[element])).click() # wait 2 seconds for the captcha to change sleep(2) # get text and submit captcha captcha_text = get_captcha_text(driver, 1) submit_captcha(captcha_text, "submit") try: # get all table data values(without titles) WebElements table_data_values = wait.until(EC.visibility_of_all_elements_located(table_values_locator)) # if there are empty rows in the table replace them with "_" titles_validation(driver) # save data to csv save_to_csv([val.text.replace('\n', ' ') for val in 
table_data_values]) except TimeoutException: print("Application Number does not exist") finally: driver.back() # print the current page number to the console print(f"Time per page: {round(time()-start, 3)}") # if the current page is equal to the specified one, then stop the search and close the driver if current_page == '1': break # click next page wait.until(EC.visibility_of_element_located(next_btn_locator)).click() driver.quit() import pandas as pd data = pd.read_csv('/content/ipindiaservices.csv') df = data.set_axis(['APPLICATION NUMBER', 'APPLICATION TYPE', 'DATE OF FILING', 'APPLICANT NAME', 'TITLE OF INVENTION','FIELD OF INVENTION','E-MAIL (As Per Record)','ADDITIONAL-EMAIL (As Per Record)','E-MAIL (UPDATED Online)','PCT INTERNATIONAL APPLICATION NUMBER','PCT INTERNATIONAL FILING DATE','PRIORITY DATE','REQUEST FOR EXAMINATION DATE','PUBLICATION DATE (U/S 11A)'], axis=1, inplace=False) df.head(2) from google.colab import drive drive.mount('drive') df.to_csv('data.csv') df.to_csv('/drive/My Drive/folder_name/name_csv_file.csv') I am successfully able to extract this information I also need to extract this table's information(yellow marked). Can it be possible? I want to append this status into my previous csv . Can it be done modifying the existing code. 
TIA def save_to_csv(data: list) -> None: with open(file='ipindiaservices.csv', mode='a', encoding="utf-8") as f: writer = csv.writer(f, lineterminator='\n') writer.writerow([*data]) def start_from_page(page_number: int, driver: WebDriver) -> None: driver.execute_script( f""" document.querySelector('button.next').value = {page_number}; document.querySelector('button.next').click(); """ ) def titles_validation(driver: WebDriver) -> None: """replace empty title name with '_'""" driver.execute_script( """ let titles = document.querySelectorAll('input+.tab-pane tr:not(:first-child)>td:last-child') Array.from(titles).forEach((e) => { if (!e.textContent.trim()) { e.textContent = '_'; } }); """ ) def get_network_data(log: dict, driver: WebDriver) -> dict: log = json.loads(log["message"])["message"] if all([ "Network.responseReceived" in log["method"], "params" in log.keys(), 'CaptchaAudio' in str(log["params"].values()) ]): return driver.execute_cdp_cmd('Network.getResponseBody', {'requestId': log["params"]["requestId"]}) def get_captcha_text(driver: WebDriver, timeout: float) -> str: """Return captcha text Arguments: - driver: WebDriver - timeout: pause before receiving data from the web driver log """ driver.execute_script( """ // document.querySelector('img[title="Captcha"]').click() document.querySelector('img[title="Captcha Audio"]').click() """ ) sleep(timeout) logs = driver.get_log('performance') responses = [get_network_data(log, driver) for log in logs if get_network_data(log, driver)] if responses: return json.loads(responses[0]['body'])['CaptchaImageText'] else: get_captcha_text(driver, timeout) def submit_captcha(captcha_text: str, btn_name: str) -> None: """Submit captcha Arguments: - btn_name: captcha send button name["submit" or "search"] """ if btn_name == 'search': captcha_locator = (By.CSS_SELECTOR, 'input[name="submit"]') elif btn_name == 'submit': captcha_locator = (By.ID, 'btnSubmit') wait.until(EC.visibility_of_element_located((By.ID, 
'CaptchaText'))).send_keys(captcha_text) wait.until(EC.visibility_of_element_located(captcha_locator)).click() ''' options = webdriver.ChromeOptions() options.add_argument('--headless') options.add_experimental_option("excludeSwitches", ["enable-automation", "enable-logging"]) capabilities = DesiredCapabilities.CHROME capabilities["goog:loggingPrefs"] = {"performance": "ALL"} # service = Service(executable_path="path/to/your/chromedriver.exe") # driver = webdriver.Chrome(service=service, options=options, desired_capabilities=capabilities) driver = webdriver.Chrome('chromedriver', chrome_options=options, desired_capabilities=capabilities) ''' wait = WebDriverWait(driver, 15) table_values_locator = (By.CSS_SELECTOR, 'input+.tab-pane tr:not(:first-child)>td:last-child') # read 2nd Table table2_values_locator = (By.CSS_SELECTOR, 'table tr:nth-of-type(2)') applicant_name_locator = (By.ID, 'TextField6') page_number_locator = (By.CSS_SELECTOR, 'span.Selected') app_satus_locator = (By.CSS_SELECTOR, 'button.btn') next_btn_locator = (By.CSS_SELECTOR, 'button.next') driver.get('https://ipindiaservices.gov.in/PublicSearch/') # sometimes an alert with an error message("") may appear, so a small pause is used sleep(1) wait.until(EC.visibility_of_element_located(applicant_name_locator)).send_keys('ASIAN PAINTS') # give input (according to your company name) # on the start page and the page with the table, the names of the buttons are different captcha_text = get_captcha_text(driver, 1) submit_captcha(captcha_text, "search") # the page where the search starts start_from_page(1, driver) while True: start = time() # get current page number current_page = wait.until(EC.visibility_of_element_located(page_number_locator)).text print(f"Current page: {current_page}") # get all application status WebElements app_status_elements = wait.until(EC.visibility_of_all_elements_located(app_satus_locator)) for element in range(len(app_status_elements)): print(f"App number: {element}") # update 
application status WebElements app_status_elements = wait.until(EC.visibility_of_all_elements_located(app_satus_locator)) # click on application status wait.until(EC.visibility_of(app_status_elements[element])).click() # wait 2 seconds for the captcha to change sleep(2) # get text and submit captcha captcha_text = get_captcha_text(driver, 1) submit_captcha(captcha_text, "submit") try: # get all table data values(without titles) WebElements table_data_values = wait.until(EC.visibility_of_all_elements_located(table_values_locator)) # if there are empty rows in the table replace them with "_" titles_validation(driver) # get all 2nd-table data WebElements table_data_values2 = wait.until(EC.visibility_of_all_elements_located(table2_values_locator)) # save data to csv save_to_csv([val.text.replace('\n', ' ') for val in table_data_values]) save_to_csv([val.text.replace('\n', ' ') for val in table_data_values2]) except TimeoutException: print("Application Number does not exist") finally: driver.back() # print the current page number to the console print(f"Time per page: {round(time()-start, 3)}") # if the current page is equal to the specified one, then stop the search and close the driver if current_page == '1': break # click next page wait.until(EC.visibility_of_element_located(next_btn_locator)).click() driver.quit() I have edited the code as per the suggestion, but getting this error
[ "First-Step - GET the 2nd table CSS-Selector (after Code-Line 121):\n...\ntable_values_locator = (By.CSS_SELECTOR, 'input+.tab-pane tr:not(:first-child)>td:last-child')\n# read 2nd Table\ntable2_values_locator = (By.CSS_SELECTOR, 'table tr:nth-of-type(2)')\n....\n\nSecond-Step - add the data form 2nd table css-selector to CSV (after Code-Line 163):\n...\ntry:\n # get all 1st-table data values(without titles) WebElements\n table_data_values = wait.until(EC.visibility_of_all_elements_located(table_values_locator))\n # if there are empty rows in the table replace them with \"_\"\n titles_validation(driver)\n # get all 2nd-table data WebElements\n table_data_values2 = wait.until(EC.visibility_of_all_elements_located(table2_values_locator))\n # save data to csv\n save_to_csv([val.text.replace('\\n', ' ') for val in table_data_values])\n save_to_csv([val.text.replace('\\n', ' ') for val in table_data_values2])\n...\n\n" ]
[ 0 ]
[]
[]
[ "python", "selenium_chromedriver", "web_scraping" ]
stackoverflow_0074423654_python_selenium_chromedriver_web_scraping.txt
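As a footnote to the thread above: calling save_to_csv twice (once per table) puts each application's data on two CSV rows. A minimal sketch of merging both value lists into a single row before writing — the sample cell values and the helper name are made up for illustration, not taken from the thread:

```python
import csv
import io

def save_merged_row(main_values, status_values, fh):
    # Combine both tables' cell texts into one row; flatten any embedded
    # newlines so each application stays on a single CSV line.
    row = [v.replace("\n", " ") for v in main_values + status_values]
    csv.writer(fh, lineterminator="\n").writerow(row)
    return row

# Hypothetical cell texts standing in for the scraped WebElement .text values
main_values = ["202241012345", "ORDINARY APPLICATION", "04/03/2022"]
status_values = ["Application Published"]

buf = io.StringIO()
row = save_merged_row(main_values, status_values, buf)
print(buf.getvalue())
```

In the real script, the two list comprehensions over table_data_values and table_data_values2 would feed main_values and status_values, so the status lands in extra columns of the same record instead of a separate row.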
Q: how to modify certain phrases in a string in python

WHAT I AM TRYING TO DO: I am trying to add certain acronyms to any random string that is entered by the user, for example:

input (by the user): 'by the way i called them and they were not having any of it laugh out loud!'
output (by the program): 'btw i called them and they were not having any of it lol!'

WHAT I TRIED TO DO: I tried a for loop with the split() function (for e in x.split(',')), where x is the string entered by the user. However, the output was identical to the input unless the whole string was the phrase that becomes the acronym, for example:

input: 'by the way'
output: 'btw'

and it does not work with...

input: 'by the way i called them and they were not having any of it laugh out loud!'

because the output will be the same:

output: 'by the way i called them and they were not having any of it laugh out loud!'

A: If you have a dictionary of abbreviations then you can just loop through every abbreviation and replace all instances in the string:

abbr = {
    "by the way": "btw",
    "laugh out loud": "lol"
}
string = "by the way I called them and they were not having any of it laugh out loud!"
for full, short in abbr.items():
    string = string.replace(full, short)
print(string)
how to modify certain phrases in a string in python
WHAT I AM TRYING TO DO: I am trying to add certain acronyms to any random string that is entered by the user, for example:

input (by the user): 'by the way i called them and they were not having any of it laugh out loud!'
output (by the program): 'btw i called them and they were not having any of it lol!'

WHAT I TRIED TO DO: I tried a for loop with the split() function (for e in x.split(',')), where x is the string entered by the user. However, the output was identical to the input unless the whole string was the phrase that becomes the acronym, for example:

input: 'by the way'
output: 'btw'

and it does not work with...

input: 'by the way i called them and they were not having any of it laugh out loud!'

because the output will be the same:

output: 'by the way i called them and they were not having any of it laugh out loud!'
[ "If you have a list of abbreviations then you can just loop through every abbreviation and replace all instances in a string:\nabbr = {\n \"by the way\": \"btw\",\n \"laugh out loud\": \"lol\"\n}\nstring = \"by the way I called them and they were not having any of it laugh out loud!\"\nfor full, short in abbr.items():\n string = string.replace(full, short)\nprint(string)\n\n" ]
[ 1 ]
[]
[]
[ "list", "python", "string" ]
stackoverflow_0074501778_list_python_string.txt
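A hedged follow-up to the dict-replace answer above: plain str.replace also rewrites matches that happen to sit inside longer words. A sketch using re with word boundaries avoids that; the phrases and the helper name here are illustrative only:

```python
import re

abbr = {
    "by the way": "btw",
    "laugh out loud": "lol",
}

def abbreviate(text, abbr):
    # Build one alternation pattern; longer phrases first so they win ties.
    phrases = sorted(abbr, key=len, reverse=True)
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, phrases)) + r")\b")
    # Replace each matched phrase with its abbreviation in a single pass.
    return pattern.sub(lambda m: abbr[m.group(1)], text)

print(abbreviate("by the way they were not having any of it laugh out loud!", abbr))
# → btw they were not having any of it lol!
```

The \b anchors mean "by the wayside" is left alone, while the single compiled pattern handles every phrase in one pass instead of one .replace() call per entry.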
Q: Django - issue with Not Null constraint

Hi in my program I keep receiving the above exception and am unsure why. The issue happens when my requestLessons_view method tries to save the form.

Views.py

def requestLessons_view(request):
    if request.method == 'POST':
        form = RequestLessonsForm(request.POST)
        if form.is_valid() & request.user.is_authenticated:
            user = request.user
            form.save(user)
            return redirect('login')
    else:
        form = RequestLessonsForm()
    return render(request, 'RequestLessonsPage.html', {'form': form})

forms.py

class RequestLessonsForm(forms.ModelForm):
    class Meta:
        model = Request
        fields = ['availability', 'num_of_lessons', 'interval_between_lessons', 'duration_of_lesson', 'further_information']
        widgets = {'further_information': forms.Textarea()}

    def save(self, user):
        super().save(commit=False)
        request = Request.objects.create(
            student=user,
            availability=self.cleaned_data.get('availability'),
            num_of_lessons=self.cleaned_data.get('num_of_lessons'),
            interval_between_lessons=self.cleaned_data.get('interval_between_lessons'),
            duration_of_lesson=self.cleaned_data.get('duration_of_lesson'),
            further_information=self.cleaned_data.get('further_information'),
        )
        return request

The error I receive is:

IntegrityError at /request_lessons/
NOT NULL constraint failed: lessons_request.student_id

A: Your .save() method is defined on the Meta class, not the form, hence the error. I would advise to let the model form handle the logic: a ModelForm can be used both to create and update the items, so by doing the save logic yourself, you basically make the form less effective. You can rewrite this to:

class RequestLessonsForm(forms.ModelForm):
    class Meta:
        model = Request
        fields = [
            'availability',
            'num_of_lessons',
            'interval_between_lessons',
            'duration_of_lesson',
            'further_information',
        ]
        widgets = {'further_information': forms.Textarea}

    def save(self, user, *args, **kwargs):
        self.instance.student = user
        return super().save(*args, **kwargs)

Note: It is normally better to make use of the settings.AUTH_USER_MODEL [Django-doc] to refer to the user model, than to use the User model [Django-doc] directly. For more information you can see the referencing the User model section of the documentation.

Note: You can limit views to a view to authenticated users with the @login_required decorator [Django-doc].
Django - issue with Not Null constraint
Hi in my program I keep receiving the above exception and am unsure why. The issue happens when my requestLessons_view method tries to save the form.

Views.py

def requestLessons_view(request):
    if request.method == 'POST':
        form = RequestLessonsForm(request.POST)
        if form.is_valid() & request.user.is_authenticated:
            user = request.user
            form.save(user)
            return redirect('login')
    else:
        form = RequestLessonsForm()
    return render(request, 'RequestLessonsPage.html', {'form': form})

forms.py

class RequestLessonsForm(forms.ModelForm):
    class Meta:
        model = Request
        fields = ['availability', 'num_of_lessons', 'interval_between_lessons', 'duration_of_lesson', 'further_information']
        widgets = {'further_information': forms.Textarea()}

    def save(self, user):
        super().save(commit=False)
        request = Request.objects.create(
            student=user,
            availability=self.cleaned_data.get('availability'),
            num_of_lessons=self.cleaned_data.get('num_of_lessons'),
            interval_between_lessons=self.cleaned_data.get('interval_between_lessons'),
            duration_of_lesson=self.cleaned_data.get('duration_of_lesson'),
            further_information=self.cleaned_data.get('further_information'),
        )
        return request

The error I receive is:

IntegrityError at /request_lessons/
NOT NULL constraint failed: lessons_request.student_id
[ "Your .save() method is defined on the Meta class, not the form, hence the error. I would advise to let the model form handle the logic: a ModelForm can be used both to create and update the items, so by doing the save logic yourself, you basically make the form less effective. You can rewrite this to:\nclass RequestLessonsForm(forms.ModelForm):\n class Meta:\n model = Request\n fields = [\n 'availability',\n 'num_of_lessons',\n 'interval_between_lessons',\n 'duration_of_lesson',\n 'further_information',\n ]\n widgets = {'further_information': forms.Textarea}\n\n def save(self, user, *args, **kwargs):\n self.instance.student = user\n return super().save(*args, **kwargs)\n\n\nNote: It is normally better to make use of the settings.AUTH_USER_MODEL [Django-doc] to refer to the user model, than to use the User model [Django-doc] directly. For more information you can see the referencing the User model section of the documentation.\n\n\n\nNote: You can limit views to a view to authenticated users with the\n@login_required decorator [Django-doc].\n\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074501789_django_python.txt
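To see why the accepted fix works without spinning up Django: the form's save must populate the missing column on its instance before the base save runs, because the saved object is the form's own instance. The toy classes below are plain-Python stand-ins (not real Django) that mimic that ordering:

```python
class Request:
    def __init__(self):
        self.student = None   # stands in for the NOT NULL column
        self.saved = False

class BaseForm:
    # Stand-in for ModelForm: saving an instance with an empty required
    # field fails, just like the database-level NOT NULL constraint.
    def __init__(self):
        self.instance = Request()

    def save(self):
        if self.instance.student is None:
            raise ValueError("NOT NULL constraint failed: student")
        self.instance.saved = True
        return self.instance

class RequestLessonsForm(BaseForm):
    def save(self, user):
        self.instance.student = user  # populate before the base save
        return super().save()

req = RequestLessonsForm().save(user="alice")
print(req.student, req.saved)  # → alice True
```

The original question's version instead called super().save(commit=False) and then built a second object with Request.objects.create(...); the accepted answer's pattern keeps a single instance and fills it in before the INSERT.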
Q: Fbprophet installation error - failed building wheel for fbprophet I am trying to install fbprophet for Python using Pip install, but failing. I have already installed Pystan. Can I import it using Anaconda Navigator? Can someone please help. Failed building wheel for fbprophet Running setup.py clean for fbprophet Failed to build fbprophet Installing collected packages: fbprophet Running setup.py install for fbprophet ... error Complete output from command C:\ProgramData\Anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\SJ-Admin\\AppData\\Local\\Temp\\pip-build-bsm4sxla\\fbprophet\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\SJ-Admin\AppData\Local\Temp\pip-kvck8fw1-record\install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build\lib creating build\lib\fbprophet creating build\lib\fbprophet\stan_models Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\SJ-Admin\AppData\Local\Temp\pip-build-bsm4sxla\fbprophet\setup.py", line 126, in <module> """ File "C:\ProgramData\Anaconda3\lib\site-packages\setuptools\__init__.py", line 129, in setup return distutils.core.setup(**attrs) File "C:\ProgramData\Anaconda3\lib\distutils\core.py", line 148, in setup dist.run_commands() File "C:\ProgramData\Anaconda3\lib\distutils\dist.py", line 955, in run_commands self.run_command(cmd) File "C:\ProgramData\Anaconda3\lib\distutils\dist.py", line 974, in run_command cmd_obj.run() File "C:\ProgramData\Anaconda3\lib\site-packages\setuptools\command\install.py", line 61, in run return orig.install.run(self) File "C:\ProgramData\Anaconda3\lib\distutils\command\install.py", line 545, in run self.run_command('build') File "C:\ProgramData\Anaconda3\lib\distutils\cmd.py", line 313, in run_command 
self.distribution.run_command(command) File "C:\ProgramData\Anaconda3\lib\distutils\dist.py", line 974, in run_command cmd_obj.run() File "C:\ProgramData\Anaconda3\lib\distutils\command\build.py", line 135, in run self.run_command(cmd_name) File "C:\ProgramData\Anaconda3\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\ProgramData\Anaconda3\lib\distutils\dist.py", line 974, in run_command cmd_obj.run() File "C:\Users\SJ-Admin\AppData\Local\Temp\pip-build-bsm4sxla\fbprophet\setup.py", line 46, in run build_stan_models(target_dir) File "C:\Users\SJ-Admin\AppData\Local\Temp\pip-build-bsm4sxla\fbprophet\setup.py", line 28, in build_stan_models from pystan import StanModel ImportError: cannot import name 'StanModel' A: Fundamental step: Switch to your environment in your Anaconda prompt: conda activate name-of-your-python-enviornment Then the following steps shall work: On Prompt install Ephem: conda install -c anaconda ephem Install Pystan: conda install -c conda-forge pystan Finally install Fbprophet: conda install -c conda-forge fbprophet If exists error from holidays package pip install holidays==0.9.12 Reference: https://github.com/facebook/prophet/issues/892 Reference for Holiday package error: https://github.com/facebook/prophet/issues/1300 A: Use offline package installer: this works with Python 3.8. and Python 3.9.x pip install localpip localpip install fbprophet A: I could install fbprophet using conda install -c conda-forge fbprophet. This was failing too due to permission issue My folder had 'read-only' permissions. I modified it to read-write. Then reran the command and was able to install fbprophet A: So after I did conda install -c conda-forge fbprophet I got at the end: EnvironmentNotWritableError: The current user does not have write permissions to the target environment. 
environment location: C:\ProgramData\Anaconda3 ProgramData is system folder, so changed r-w permissions(took few minutes), and I also did this for C:\Program Files\Python37 path with Lib folder. A: With the following environment OSX: Big Sur 11.6 python: python:3.7-slim $ pip install pystan==2.19 $ pip install fbprophet A: For this stack: CentOS: 7 Python: 3.8 GCC: 4.8.5 PyStan: 2.19.1.1 FbProphet: 0.7.1 You need these packages: centos-release-scl devtoolset-8 Enable SCL devtoolset-8 source /opt/rh/devtoolset-8/enable rh-python38-python rh-python38-python-devel pip install pystan==2.19.1.1 Docker image with HTTPD MOD_WSGI and FBPROPHET... FROM centos:7 EXPOSE 80 # Install Apache RUN yum -y update RUN yum -y install centos-release-scl RUN yum -y install httpd httpd-tools rh-python38-python-mod_wsgi.x86_64 devtoolset-8-gcc devtoolset-8-gcc-c++ rh-python38-python rh-python38-python-devel # Copy the wsgi module to Apache HTTP Server modules folder RUN cp /opt/rh/httpd24/root/usr/lib64/httpd/modules/mod_rh-python38-wsgi.so /lib64/httpd/modules/mod_wsgi.so ENV PATH="/opt/rh/rh-python38/root/usr/bin:/opt/rh/rh-python38/root/usr/local/bin:${PATH}" WORKDIR / COPY ROOT . WORKDIR /opt/rh/rh-python38/root RUN ./usr/bin/python3 /etc/get-pip.py RUN chmod +x /usr/local/bin/install-fb.sh && /usr/local/bin/install-fb.sh RUN pip install -r /etc/requirements.txt # Start Apache CMD ["/usr/sbin/httpd","-D","FOREGROUND"] The script install-fb.sh contains this code: $ cat ROOT/usr/local/bin/install-fb.sh #!/bin/bash source /opt/rh/devtoolset-8/enable pip install pystan==2.19.1.1 fbprophet==0.7.1 The reason to put it in its own script is the SCL enable line, to avoid the gcc not found error. 
I hope this helps, getting all these software packages running together is not a piece of cake :) A: Docker Image: python 3.8-slim This worked for me: pip install pystan==2.19.1.1 pip install fbprophet A: macOS Big Sur 11.5.2 python 3.7 This worked for me: pip install pystan==2.19.1.1 sudo pip install fbprophet==0.7.1 A: FB prophet documentation recommends using conda for windows users as the easiest way for installing prophet. In my case, the following solved the problem (win10): conda install -c conda-forge fbprophet -y A: Create a virtual environment using: conda create --name myenv Activate the environment using: conda activate myenv Install ephem using: conda install -c anaconda ephem Install pystan using conda install -c conda-forge pystan Install the fbrprophet using conda install -c conda-forge fbprophet At the end install holidays: pip install holidays==0.9.12 If it still doesn't work try this at last: conda install psycopg2
Fbprophet installation error - failed building wheel for fbprophet
I am trying to install fbprophet for Python using Pip install, but failing. I have already installed Pystan. Can I import it using Anaconda Navigator? Can someone please help. Failed building wheel for fbprophet Running setup.py clean for fbprophet Failed to build fbprophet Installing collected packages: fbprophet Running setup.py install for fbprophet ... error Complete output from command C:\ProgramData\Anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\SJ-Admin\\AppData\\Local\\Temp\\pip-build-bsm4sxla\\fbprophet\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\SJ-Admin\AppData\Local\Temp\pip-kvck8fw1-record\install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build\lib creating build\lib\fbprophet creating build\lib\fbprophet\stan_models Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\SJ-Admin\AppData\Local\Temp\pip-build-bsm4sxla\fbprophet\setup.py", line 126, in <module> """ File "C:\ProgramData\Anaconda3\lib\site-packages\setuptools\__init__.py", line 129, in setup return distutils.core.setup(**attrs) File "C:\ProgramData\Anaconda3\lib\distutils\core.py", line 148, in setup dist.run_commands() File "C:\ProgramData\Anaconda3\lib\distutils\dist.py", line 955, in run_commands self.run_command(cmd) File "C:\ProgramData\Anaconda3\lib\distutils\dist.py", line 974, in run_command cmd_obj.run() File "C:\ProgramData\Anaconda3\lib\site-packages\setuptools\command\install.py", line 61, in run return orig.install.run(self) File "C:\ProgramData\Anaconda3\lib\distutils\command\install.py", line 545, in run self.run_command('build') File "C:\ProgramData\Anaconda3\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\ProgramData\Anaconda3\lib\distutils\dist.py", line 974, 
in run_command cmd_obj.run() File "C:\ProgramData\Anaconda3\lib\distutils\command\build.py", line 135, in run self.run_command(cmd_name) File "C:\ProgramData\Anaconda3\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\ProgramData\Anaconda3\lib\distutils\dist.py", line 974, in run_command cmd_obj.run() File "C:\Users\SJ-Admin\AppData\Local\Temp\pip-build-bsm4sxla\fbprophet\setup.py", line 46, in run build_stan_models(target_dir) File "C:\Users\SJ-Admin\AppData\Local\Temp\pip-build-bsm4sxla\fbprophet\setup.py", line 28, in build_stan_models from pystan import StanModel ImportError: cannot import name 'StanModel'
[ "Fundamental step:\nSwitch to your environment in your Anaconda prompt: \nconda activate name-of-your-python-enviornment\nThen the following steps shall work:\n\nOn Prompt install Ephem:\nconda install -c anaconda ephem\n\nInstall Pystan:\nconda install -c conda-forge pystan\n\nFinally install Fbprophet:\nconda install -c conda-forge fbprophet\n\nIf exists error from holidays package\npip install holidays==0.9.12\n\n\nReference: https://github.com/facebook/prophet/issues/892\nReference for Holiday package error: https://github.com/facebook/prophet/issues/1300\n", "Use offline package installer: this works with Python 3.8. and Python 3.9.x\npip install localpip \nlocalpip install fbprophet\n\n", "I could install fbprophet using conda install -c conda-forge fbprophet.\nThis was failing too due to permission issue\nMy folder had 'read-only' permissions. I modified it to read-write. Then reran the command and was able to install fbprophet\n", "So after I did\nconda install -c conda-forge fbprophet\n\nI got at the end:\nEnvironmentNotWritableError: The current user does not have write permissions to the target environment.\n environment location: C:\\ProgramData\\Anaconda3\n\nProgramData is system folder, so changed r-w permissions(took few minutes), and I also did this for C:\\Program Files\\Python37 path with Lib folder.\n", "With the following environment\nOSX: Big Sur 11.6\npython: python:3.7-slim\n$ pip install pystan==2.19\n$ pip install fbprophet\n\n", "For this stack:\n\nCentOS: 7\nPython: 3.8\nGCC: 4.8.5\nPyStan: 2.19.1.1\nFbProphet: 0.7.1\n\nYou need these packages:\n\ncentos-release-scl devtoolset-8\n\nEnable SCL devtoolset-8\nsource /opt/rh/devtoolset-8/enable\n\n\nrh-python38-python rh-python38-python-devel\npip install pystan==2.19.1.1\n\nDocker image with HTTPD MOD_WSGI and FBPROPHET...\nFROM centos:7\n\nEXPOSE 80\n\n# Install Apache\nRUN yum -y update\nRUN yum -y install centos-release-scl\nRUN yum -y install httpd httpd-tools 
rh-python38-python-mod_wsgi.x86_64 devtoolset-8-gcc devtoolset-8-gcc-c++ rh-python38-python rh-python38-python-devel\n\n# Copy the wsgi module to Apache HTTP Server modules folder\nRUN cp /opt/rh/httpd24/root/usr/lib64/httpd/modules/mod_rh-python38-wsgi.so /lib64/httpd/modules/mod_wsgi.so\n\nENV PATH=\"/opt/rh/rh-python38/root/usr/bin:/opt/rh/rh-python38/root/usr/local/bin:${PATH}\"\n\nWORKDIR /\nCOPY ROOT .\n\nWORKDIR /opt/rh/rh-python38/root\nRUN ./usr/bin/python3 /etc/get-pip.py\n\nRUN chmod +x /usr/local/bin/install-fb.sh && /usr/local/bin/install-fb.sh\nRUN pip install -r /etc/requirements.txt\n\n# Start Apache\nCMD [\"/usr/sbin/httpd\",\"-D\",\"FOREGROUND\"]\n\nThe script install-fb.sh contains this code:\n$ cat ROOT/usr/local/bin/install-fb.sh\n#!/bin/bash\nsource /opt/rh/devtoolset-8/enable\npip install pystan==2.19.1.1 fbprophet==0.7.1\n\nThe reason to put it in its own script is the SCL enable line, to avoid the gcc not found error.\nI hope this helps, getting all these software packages running together is not a piece of cake :)\n", "Docker Image: python 3.8-slim\nThis worked for me:\npip install pystan==2.19.1.1\npip install fbprophet\n\n", "macOS Big Sur 11.5.2\npython 3.7\nThis worked for me:\npip install pystan==2.19.1.1\nsudo pip install fbprophet==0.7.1\n\n", "FB prophet documentation recommends using conda for windows users as the easiest way for installing prophet.\nIn my case, the following solved the problem (win10):\nconda install -c conda-forge fbprophet -y\n\n", "\nCreate a virtual environment using:\n\nconda create --name myenv\n\nActivate the environment using:\n\nconda activate myenv\n\nInstall ephem using:\n\nconda install -c anaconda ephem\n\nInstall pystan using\n\nconda install -c conda-forge pystan\n\nInstall the fbrprophet using\n\nconda install -c conda-forge fbprophet\n\nAt the end install holidays:\n\npip install holidays==0.9.12\n\nIf it still doesn't work try this at last:\n\nconda install psycopg2\n" ]
[ 20, 5, 3, 2, 1, 1, 0, 0, 0, 0 ]
[ "After a lot of research, I found the solution for installing fbprophet on windows 10.\nStep 1: Check the kernel in jupyter.\nLocate the folder \\jupyter\\kernels\\python3 and check the python exe location used by the kernel.\nMine was pointing to - Programs\\Python\\Python37\\python.exe\nopen CMD prompt and go to above dir.\nI am skipping pystan installation as I already installed pystan using pip command.\nStep 2 : Download the file \"Twisted-20.3.0-cp37-cp37m-win_amd64.whl\" from https://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted\npython -m pip install /Twisted-20.3.0-cp37-cp37m-win_amd64.whl\nStep 3 : pip install fbprophet\nInstalling collected packages: fbprophet\nSuccessfully installed fbprophet-0.6\nStep 4 : python\nimport fbprophet\nfbprophet.version\n'0.6'\n", "This worked for me:\npip install prophet\npip install fbprophet\n\n", "fbprophet has been renamed by prophet hence 1st install\npip install pystan==2.19.1.1\nthen do\npython -m pip install prophet\nthis will work!\nthen to import do\nfrom prophet import Prophet\n" ]
[ -1, -1, -1 ]
[ "anaconda", "facebook_prophet", "python", "python_3.x" ]
stackoverflow_0049889404_anaconda_facebook_prophet_python_python_3.x.txt
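One extra idea, not taken from the answers above: since fbprophet's setup.py fails late with "ImportError: cannot import name 'StanModel'", a quick pre-flight check that the build dependencies are importable gives a clearer signal before running pip or conda. The dependency names in the default tuple are illustrative:

```python
from importlib.util import find_spec

def build_deps_missing(names=("pystan", "Cython", "numpy")):
    """Return the subset of build dependencies that are not importable."""
    # find_spec returns None for a missing top-level module, without importing it.
    return [n for n in names if find_spec(n) is None]

missing = build_deps_missing()
if missing:
    print("install these before fbprophet:", ", ".join(missing))
else:
    print("build dependencies look importable")
```

If pystan shows up as missing (or importing it raises, as in the question's traceback), fixing that first avoids the opaque "Failed building wheel for fbprophet" error.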
Q: File Names Chain in python I CANNOT USE ANY IMPORTED LIBRARY. I have this task where I have some directories containing some files; every file contains, besides some words, the name of the next file to be opened, in its first line. Once every word of every files contained in a directory is opened, they have to be treated in a way that should return a single string; such string contains in its first position, the most frequent first letter of every word seen before, in its second position the most frequent second letter, and so on. I have managed to do this with a directory containing 3 files, but it's not using any type of chain-like mechanism, rather a passing of local variables. Some of my college colleagues suggested I had to use slicing of lists, but I can't figure out how. I CANNOT USE ANY IMPORTED LIBRARY. This is what I got: ''' The objective of the homework assignment is to design and implement a function that reads some strings contained in a series of files and generates a new string from all the strings read. The strings to be read are contained in several files, linked together to form a closed chain. The first string in each file is the name of another file that belongs to the chain: starting from any file and following the chain, you always return to the starting file. Example: the first line of file "A.txt" is "B.txt," the first line of file "B.txt" is "C.txt," and the first line of "C.txt" is "A.txt," forming the chain "A.txt"-"B.txt"-"C.txt". In addition to the string with the name of the next file, each file also contains other strings separated by spaces, tabs, or carriage return characters. The function must read all the strings in the files in the chain and construct the string obtained by concatenating the characters with the highest frequency in each position. That is, in the string to be constructed, at position p, there will be the character with the highest frequency at position p of each string read from the files. 
In the case where there are multiple characters with the same frequency, consider the alphabetical order. The generated string has a length equal to the maximum length of the strings read from the files. Therefore, you must write a function that takes as input a string "filename" representing the name of a file and returns a string. The function must construct the string according to the directions outlined above and return the constructed string.
Example: if the contents of the three files A.txt, B.txt, and C.txt in the directory test01 are as follows

test01/A.txt    test01/B.txt    test01/C.txt
-------------------------------------------------------------------------------
test01/B.txt    test01/C.txt    test01/A.txt
house           home            kite
garden          park            hello
kitchen         affair          portrait
balloon         angel           surfing

the function most_frequent_chars ("test01/A.txt") will return "hareennt".
'''

def file_names_list(filename):
    intermezzo = []
    lista_file = []
    a_file = open(filename)
    lines = a_file.readlines()
    for line in lines:
        intermezzo.extend(line.split())
        del intermezzo[1:]
        lista_file.append(intermezzo[0])
        intermezzo.pop(0)
    return lista_file

def words_list(filename):
    lista_file = []
    a_file = open(filename)
    lines = a_file.readlines()[1:]
    for line in lines:
        lista_file.extend(line.split())
    return lista_file

def stuff_list(filename):
    file_list = file_names_list(filename)
    the_rest = words_list(filename)
    second_file_name = file_names_list(file_list[0])
    the_lists = words_list(file_list[0]) and words_list(second_file_name[0])
    the_rest += the_lists[0:]
    return the_rest

def most_frequent_chars(filename):
    huge_words_list = stuff_list(filename)
    maxOccurs = ""
    list_of_chars = []
    for i in range(len(max(huge_words_list, key=len))):
        for item in huge_words_list:
            try:
                list_of_chars.append(item[i])
            except IndexError:
                pass
        maxOccurs += max(sorted(set(list_of_chars)), key = list_of_chars.count)
        list_of_chars.clear()
    return maxOccurs

print(most_frequent_chars("test01/A.txt"))

A: This assignment is relatively easy, if
the code has a good structure. Here is a full implementation: def read_file(fname): with open(fname, 'r') as f: return list(filter(None, [y.rstrip(' \n').lstrip(' ') for x in f for y in x.split()])) def read_chain(fname): seen = set() new = fname result = [] while not new in seen: A = read_file(new) seen.add(new) new, words = A[0], A[1:] result.extend(words) return result def most_frequent_chars (fname): all_words = read_chain(fname) result = [] for i in range(max(map(len,all_words))): chars = [word[i] for word in all_words if i<len(word)] result.append(max(sorted(set(chars)), key = chars.count)) return ''.join(result) print(most_frequent_chars("test01/A.txt")) # output: "hareennt" In the code above, we define 3 functions: read_file: simple function to read the contents of a file and return a list of strings. The command x.split() takes care of any spaces or tabs used to separate words. The final command list(filter(None, arr)) makes sure that empty strings are erased from the solution. read_chain: Simple routine to iterate through the chain of files, and return all the words contained in them. most_frequent_chars: Easy routine, where the most frequent characters are counted carefully. PS. This line of code you had is very interesting: maxOccurs += max(sorted(set(list_of_chars)), key = list_of_chars.count) I edited my code to include it. 
Space complexity optimization The space complexity of the previous code can be improved by orders of magnitude, if the files are scanned without storing all the words: def scan_file(fname, database): with open(fname, 'r') as f: next_file = None for x in f: for y in x.split(): if next_file is None: next_file = y else: for i,c in enumerate(y): while len(database) <= i: database.append({}) if c in database[i]: database[i][c] += 1 else: database[i][c] = 1 return next_file def most_frequent_chars (fname): database = [] seen = set() new = fname while not new in seen: seen.add(new) new = scan_file(new, database) return ''.join(max(sorted(d.keys()),key=d.get) for d in database) print(most_frequent_chars("test01/A.txt")) # output: "hareennt" Now we scan the files tracking the frequency of the characters in database, without storing intermediate arrays. A: Ok, here's my solution: def parsi_file(filename): visited_files = set() words_list = [] # Getting words from all files while filename not in visited_files: visited_files.add(filename) with open(filename) as f: filename = f.readline().strip() words_list += [line.strip() for line in f.readlines()] # Creating dictionaries of letters:count for each index letters_dicts = [] for word in words_list: for i in range(len(word)): if i > len(letters_dicts)-1: letters_dicts.append({}) letter = word[i] if letters_dicts[i].get(letter): letters_dicts[i][letter] += 1 else: letters_dicts[i][letter] = 1 # Sorting dicts and getting the "best" letter code = "" for dic in letters_dicts: sorted_letters = sorted(dic, key = lambda letter: (-dic[letter],letter)) code += sorted_letters[0] return code We first get the words_list from all files. Then, for each index, we create a dictionary of the letters in all words at that index, with their count. Now we sort the dictionary keys by descending count (-count) then by alphabetical order. 
Finally we get the first letter (thus the one with the max count) and add it to the "code" word for this test battery. Edit: in terms of efficiency, parsing through all words for each index will get worse as the number of words grows, so it would be better to tweak the code to simultaneously create the dictionaries for each index and parse through the list of words only once. Done.
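Both answers above reduce to the same core step: for each character position, collect the characters that exist at that position across all words, then pick the most frequent one, breaking ties alphabetically. A minimal, self-contained sketch of just that step (file reading omitted; the word list is the example data from the problem statement):

```python
def most_frequent_chars_by_position(words):
    # For each position up to the length of the longest word, collect the
    # characters present at that position, then pick the most frequent one.
    # Sorting the candidate set first makes max() break ties alphabetically,
    # because max() returns the first of several equally-counted candidates.
    result = []
    for i in range(max(map(len, words))):
        chars = [w[i] for w in words if i < len(w)]
        result.append(max(sorted(set(chars)), key=chars.count))
    return "".join(result)

words = ["house", "home", "kite", "garden", "park", "hello",
         "kitchen", "affair", "portrait", "balloon", "angel", "surfing"]
print(most_frequent_chars_by_position(words))  # hareennt
```

This matches the expected output "hareennt" for the test01 example; for instance, position 1 is a tie between 'a' and 'o' (three occurrences each), which the alphabetical tie-break resolves to 'a'.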
File Names Chain in python
I CANNOT USE ANY IMPORTED LIBRARY. I have this task where I have some directories containing some files; every file contains, besides some words, the name of the next file to be opened, in its first line. Once every word of every files contained in a directory is opened, they have to be treated in a way that should return a single string; such string contains in its first position, the most frequent first letter of every word seen before, in its second position the most frequent second letter, and so on. I have managed to do this with a directory containing 3 files, but it's not using any type of chain-like mechanism, rather a passing of local variables. Some of my college colleagues suggested I had to use slicing of lists, but I can't figure out how. I CANNOT USE ANY IMPORTED LIBRARY. This is what I got: ''' The objective of the homework assignment is to design and implement a function that reads some strings contained in a series of files and generates a new string from all the strings read. The strings to be read are contained in several files, linked together to form a closed chain. The first string in each file is the name of another file that belongs to the chain: starting from any file and following the chain, you always return to the starting file. Example: the first line of file "A.txt" is "B.txt," the first line of file "B.txt" is "C.txt," and the first line of "C.txt" is "A.txt," forming the chain "A.txt"-"B.txt"-"C.txt". In addition to the string with the name of the next file, each file also contains other strings separated by spaces, tabs, or carriage return characters. The function must read all the strings in the files in the chain and construct the string obtained by concatenating the characters with the highest frequency in each position. That is, in the string to be constructed, at position p, there will be the character with the highest frequency at position p of each string read from the files. 
In the case where there are multiple characters with the same frequency, consider the alphabetical order. The generated string has a length equal to the maximum length of the strings read from the files. Therefore, you must write a function that takes as input a string "filename" representing the name of a file and returns a string. The function must construct the string according to the directions outlined above and return the constructed string. Example: if the contents of the three files A.txt, B.txt, and C.txt in the directory test01 are as follows test01/A.txt test01/B.txt test01/C.txt ------------------------------------------------------------------------------- test01/B.txt test01/C.txt test01/A.txt house home kite garden park hello kitchen affair portrait balloon angel surfing the function most_frequent_chars ("test01/A.txt") will return "hareennt". ''' def file_names_list(filename): intermezzo = [] lista_file = [] a_file = open(filename) lines = a_file.readlines() for line in lines: intermezzo.extend(line.split()) del intermezzo[1:] lista_file.append(intermezzo[0]) intermezzo.pop(0) return lista_file def words_list(filename): lista_file = [] a_file = open(filename) lines = a_file.readlines()[1:] for line in lines: lista_file.extend(line.split()) return lista_file def stuff_list(filename): file_list = file_names_list(filename) the_rest = words_list(filename) second_file_name = file_names_list(file_list[0]) the_lists = words_list(file_list[0]) and words_list(second_file_name[0]) the_rest += the_lists[0:] return the_rest def most_frequent_chars(filename): huge_words_list = stuff_list(filename) maxOccurs = "" list_of_chars = [] for i in range(len(max(huge_words_list, key=len))): for item in huge_words_list: try: list_of_chars.append(item[i]) except IndexError: pass maxOccurs += max(sorted(set(list_of_chars)), key = list_of_chars.count) list_of_chars.clear() return maxOccurs print(most_frequent_chars("test01/A.txt"))
[ "This assignment is relatively easy, if the code has a good structure. Here is a full implementation:\ndef read_file(fname):\n with open(fname, 'r') as f:\n return list(filter(None, [y.rstrip(' \\n').lstrip(' ') for x in f for y in x.split()]))\n\ndef read_chain(fname):\n seen = set()\n new = fname\n result = []\n while not new in seen:\n A = read_file(new)\n seen.add(new)\n new, words = A[0], A[1:]\n result.extend(words)\n return result\n\ndef most_frequent_chars (fname):\n all_words = read_chain(fname)\n result = []\n for i in range(max(map(len,all_words))):\n chars = [word[i] for word in all_words if i<len(word)]\n result.append(max(sorted(set(chars)), key = chars.count))\n return ''.join(result)\n\nprint(most_frequent_chars(\"test01/A.txt\"))\n# output: \"hareennt\"\n\nIn the code above, we define 3 functions:\n\nread_file: simple function to read the contents of a file and return a list of strings. The command x.split() takes care of any spaces or tabs used to separate words. The final command list(filter(None, arr)) makes sure that empty strings are erased from the solution.\n\nread_chain: Simple routine to iterate through the chain of files, and return all the words contained in them.\n\nmost_frequent_chars: Easy routine, where the most frequent characters are counted carefully.\n\n\n\nPS. 
This line of code you had is very interesting:\nmaxOccurs += max(sorted(set(list_of_chars)), key = list_of_chars.count)\nI edited my code to include it.\n\nSpace complexity optimization\nThe space complexity of the previous code can be improved by orders of magnitude, if the files are scanned without storing all the words:\ndef scan_file(fname, database):\n with open(fname, 'r') as f:\n next_file = None\n for x in f:\n for y in x.split():\n if next_file is None:\n next_file = y\n else:\n for i,c in enumerate(y):\n while len(database) <= i:\n database.append({})\n if c in database[i]:\n database[i][c] += 1\n else:\n database[i][c] = 1\n return next_file\n\ndef most_frequent_chars (fname):\n database = []\n seen = set()\n new = fname\n while not new in seen:\n seen.add(new)\n new = scan_file(new, database)\n return ''.join(max(sorted(d.keys()),key=d.get) for d in database)\nprint(most_frequent_chars(\"test01/A.txt\"))\n# output: \"hareennt\"\n\nNow we scan the files tracking the frequency of the characters in database, without storing intermediate arrays.\n", "Ok, here's my solution:\ndef parsi_file(filename):\n \n visited_files = set()\n words_list = []\n \n # Getting words from all files\n while filename not in visited_files:\n visited_files.add(filename)\n with open(filename) as f:\n filename = f.readline().strip()\n words_list += [line.strip() for line in f.readlines()] \n \n # Creating dictionaries of letters:count for each index\n letters_dicts = []\n for word in words_list:\n for i in range(len(word)): \n if i > len(letters_dicts)-1:\n letters_dicts.append({})\n letter = word[i]\n if letters_dicts[i].get(letter):\n letters_dicts[i][letter] += 1\n else:\n letters_dicts[i][letter] = 1\n \n # Sorting dicts and getting the \"best\" letter\n code = \"\"\n for dic in letters_dicts:\n sorted_letters = sorted(dic, key = lambda letter: (-dic[letter],letter))\n code += sorted_letters[0]\n \n return code\n\n\nWe first get the words_list from all files.\nThen, for each 
index, we create a dictionary of the letters in all words at that index, with their count.\nNow we sort the dictionary keys by descending count (-count) then by alphabetical order.\nFinally we get the first letter (thus the one with the max count) and add it to the \"code\" word for this test battery.\n\nEdit: in terms of efficiency, parsing through all words for each index will get worse as the number of words grows, so it would be better to tweak the code to simultaneously create the dictionaries for each index and parse through the list of words only once. Done.\n" ]
[ 2, 1 ]
[]
[]
[ "file", "list", "python", "slice", "string" ]
stackoverflow_0074500987_file_list_python_slice_string.txt
Q: How do I further melt horizontal values into vertical values? I have a dataframe which has horizontal identifiers (yes and no) and values, and I want to melt it into vertical values into each yes. Here is a snippet of my dataframe: option Region Store Name option1 option2 option3 option4 profit 0 Region 1 Store 1 Y Y N N 48.1575 1 Region 1 Store 2 N Y N Y 74.7667 2 Region 1 Store 3 N Y N Y 102.35 3 Region 2 Store 4 N Y N Y 114.59 4 Region 2 Store 5 N Y N Y 99.705 5 Region 2 Store 6 N Y N Y 105.07 The answer is need to get is: option Region Store Name options profit 0 Region 1 Store 1 option1 48.1575 1 Region 1 Store 1 option2 48.1575 2 Region 1 Store 2 option2 74.7667 3 Region 1 Store 2 option4 74.7667 Essentially, I need to unstack the customer options tables, assign the same profit to everything with a yes, and drop everything with a no. So far, the function I used is: e1 = pd.melt(sales_dist_e, id_vars=['Area', 'Store Name'], var_name='option').set_index(['Area', 'Store Name', 'optionx']).squeeze().unstack().reset_index() which was mostly derived from this previous related question, but I can't seem to make it work with my current example. A: IIUC, does this work? df.melt(['option', 'Region', 'Store Name', 'profit'], var_name='options')\ .query("value == 'Y'")\ .drop('value', axis=1)\ .sort_values('profit') Output: option Region Store Name profit options 0 0 Region1 Store 1 48.1575 option1 6 0 Region1 Store 1 48.1575 option2 7 1 Region1 Store 2 74.7667 option2 19 1 Region1 Store 2 74.7667 option4 10 4 Region2 Store 5 99.7050 option2 22 4 Region2 Store 5 99.7050 option4 8 2 Region1 Store 3 102.3500 option2 20 2 Region1 Store 3 102.3500 option4 11 5 Region2 Store 6 105.0700 option2 23 5 Region2 Store 6 105.0700 option4 9 3 Region2 Store 4 114.5900 option2 21 3 Region2 Store 4 114.5900 option4
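The melt/query/drop approach from the answer can be checked on a small two-row version of the frame (the data below is abbreviated from the question's sample; `sort_values` and `reset_index` are added here only to make the result order deterministic):

```python
import pandas as pd

df = pd.DataFrame({
    "Region": ["Region 1", "Region 1"],
    "Store Name": ["Store 1", "Store 2"],
    "option1": ["Y", "N"],
    "option2": ["Y", "Y"],
    "option4": ["N", "Y"],
    "profit": [48.1575, 74.7667],
})

# Melt the option columns into rows, keep only the "Y" rows,
# and drop the now-redundant Y/N indicator column.
long_df = (
    df.melt(id_vars=["Region", "Store Name", "profit"], var_name="options")
      .query("value == 'Y'")
      .drop(columns="value")
      .sort_values(["Store Name", "options"])
      .reset_index(drop=True)
)
print(long_df)
```

This yields exactly the four rows from the desired output: Store 1 with option1 and option2, and Store 2 with option2 and option4, each carrying the store's profit.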
How do I further melt horizontal values into vertical values?
I have a dataframe which has horizontal identifiers (yes and no) and values, and I want to melt it into vertical values into each yes. Here is a snippet of my dataframe: option Region Store Name option1 option2 option3 option4 profit 0 Region 1 Store 1 Y Y N N 48.1575 1 Region 1 Store 2 N Y N Y 74.7667 2 Region 1 Store 3 N Y N Y 102.35 3 Region 2 Store 4 N Y N Y 114.59 4 Region 2 Store 5 N Y N Y 99.705 5 Region 2 Store 6 N Y N Y 105.07 The output I need to get is: option Region Store Name options profit 0 Region 1 Store 1 option1 48.1575 1 Region 1 Store 1 option2 48.1575 2 Region 1 Store 2 option2 74.7667 3 Region 1 Store 2 option4 74.7667 Essentially, I need to unstack the customer options tables, assign the same profit to everything with a yes, and drop everything with a no. So far, the function I used is: e1 = pd.melt(sales_dist_e, id_vars=['Area', 'Store Name'], var_name='option').set_index(['Area', 'Store Name', 'optionx']).squeeze().unstack().reset_index() which was mostly derived from this previous related question, but I can't seem to make it work with my current example.
[ "IIUC, does this work?\ndf.melt(['option', 'Region', 'Store Name', 'profit'], var_name='options')\\\n .query(\"value == 'Y'\")\\\n .drop('value', axis=1)\\\n .sort_values('profit')\n\nOutput:\n option Region Store Name profit options\n0 0 Region1 Store 1 48.1575 option1\n6 0 Region1 Store 1 48.1575 option2\n7 1 Region1 Store 2 74.7667 option2\n19 1 Region1 Store 2 74.7667 option4\n10 4 Region2 Store 5 99.7050 option2\n22 4 Region2 Store 5 99.7050 option4\n8 2 Region1 Store 3 102.3500 option2\n20 2 Region1 Store 3 102.3500 option4\n11 5 Region2 Store 6 105.0700 option2\n23 5 Region2 Store 6 105.0700 option4\n9 3 Region2 Store 4 114.5900 option2\n21 3 Region2 Store 4 114.5900 option4\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "pandas_melt", "python" ]
stackoverflow_0074501347_pandas_pandas_melt_python.txt
Q: How do I set row height in a table? I'm using the Python library borb to create a PDF document. I want to set the row height in a table. If I use TableCell(paragraph, preferred_width=Decimal(150), preferred_height=Decimal(200)) in a FlexibleColumnWidthTable, the width value is used, but the height is ignored. Is there another way to set the height of table rows? A: Disclaimer: I am the author of the library you are using. I would recommend you open a bug ticket on the GitHub repository. TableCell is meant to take the preferences into account. It may decide not to do so when layout becomes impossible. As an interim measure, you can wrap your LayoutElement objects in a custom object (created by you) that always returns the desired dimensions from its get_content_box method.
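The author's wrapper idea can be illustrated without borb itself. Everything below is a library-free sketch of the pattern: the dummy element, the `(x, y, w, h)` tuple shape, and the `get_content_box` signature are placeholders, not borb's actual API, so check the borb source for the real types before adapting this.

```python
class FixedSizeWrapper:
    """Delegate everything to a wrapped layout element, but report a fixed box.

    Sketch only: the real element type and the arguments/return type of
    get_content_box come from the library being wrapped.
    """

    def __init__(self, element, width, height):
        self._element = element
        self._width = width
        self._height = height

    def __getattr__(self, name):
        # Fall through to the wrapped element for every other attribute.
        return getattr(self._element, name)

    def get_content_box(self, *args, **kwargs):
        # Keep the position the element computed, but force the size.
        x, y, _, _ = self._element.get_content_box(*args, **kwargs)
        return (x, y, self._width, self._height)


class DummyParagraph:
    """Stand-in for a layout element such as a Paragraph."""

    def get_content_box(self):
        return (0, 0, 120, 35)  # natural size the layout engine computed

    def text(self):
        return "hello"


cell = FixedSizeWrapper(DummyParagraph(), width=150, height=200)
print(cell.get_content_box())  # (0, 0, 150, 200)
print(cell.text())             # delegated to the wrapped element: hello
```

Because `__getattr__` only fires for attributes the wrapper does not define, every method except `get_content_box` still goes to the wrapped element unchanged.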
How do I set row height in a table?
I'm using the Python library borb to create a PDF document. I want to set the row height in a table. If I use TableCell(paragraph, preferred_width=Decimal(150), preferred_height=Decimal(200)) in a FlexibleColumnWidthTable, the width value is used, but the height is ignored. Is there another way to set the height of table rows?
[ "Disclaimer: I am the author of the library you are using.\nI would recommend you open a bug ticket on the GitHub repository. TableCell is meant to take the preferences into account.\nIt may decide not to do so when layout becomes impossible.\nAs an interim measure, you can wrap your LayoutElement objects in a custom object (created by you) that always returns the desired dimensions from its get_content_box method.\n" ]
[ 0 ]
[]
[]
[ "borb", "pdf", "python" ]
stackoverflow_0074421535_borb_pdf_python.txt
Q: How to instantiate nested class How can I instantiate a variable of type UseInternalClass? MyInstance = ParentClass.UseInternalClass(something=ParentClass.InternalClass({1:2})) If I try the former code, I get an error: NameError: name 'ParentClass' is not defined When I want to instantiate an type of a nested class class ParentClass(object): class InternalClass(object): def __init__(self, parameter = {}): pass pass class UseInternalClass(object): _MyVar def __init__(self, something = ParentClass.InternalClass()): #meant to make something type = InternalClass _MyVar = something pass (All the code is on the same file) A: You cannot use "ParentClass" inside the definition of the parent class since the interpreter have not yet define the class object named ParentClass. Also, InternalClass will not be define until the class ParentClass is completly define. Note: I'm note sure what you are trying to do, but if you explain your end goal, we might be able to suggest you something else to realise that. 
A: You can do something like this: class Child: def __init__(self, y): self.y = y class Parent: def __init__(self, x): self.x = x y = 2 * x self.child = Child(y) As an example, you create an instance of the Parent class then access its Child as follows: par = Parent(4) par.child.y # returns a value of 8 A: I am not sure if i got it right but i if you're trying to do something like that class Parent: class Child: def __init__(self,passed_data): self.data = passed_data class AnotherChild: def __init__(self,child=Parent.Child("no data passed")) self.child_obj = self.Child(data_to_pass) you can create AnotherChild object as follows another_child = Parent.AnotherChild() # here it will use the default value of "no data passed" or you might do it as follows child = Parent.Child("data") # create child object another_child = Parent.AnotherChild(child) # pass it to new child or pass it directly through your init another_child = Parent.AnotherChild(Parent.Child("data")) i guess this should work correctly if you are instantiating in the same file for example parent.py , it worked for me like that i am not sure if that what you want but i hope it helps A: Look for this ona easy example class Car: @classmethod def create_wheel(cls): return cls.Wheel() class Wheel: pass o = Car.create_wheel() print(o) # <main.Car.Wheel object at 0x7fde1d8daeb0> A: You can instantiate the outer and inner classes with __init__() as shown below: class OuterClass: def __init__(self, arg): # Here self.variable = "Outer " + arg self.inner = OuterClass.InnerClass(arg) def outer_method(self, arg): print("Outer " + arg) class InnerClass: def __init__(self, arg): # Here self.variable = "Inner " + arg def inner_method(self, arg): print("Inner " + arg) obj = OuterClass("variable") # Here print(obj.variable) print(obj.inner.variable) obj.outer_method("method") obj.inner.inner_method("method") Output: Outer variable Inner variable Outer method Inner method
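The timing problem in the question (the name ParentClass does not exist yet while its own class body is executing) can also be worked around by deferring the default to call time, because __init__ only runs after the class is fully defined. A sketch close to the question's structure (attribute names simplified, and the mutable {} default replaced with the usual None idiom):

```python
class ParentClass:
    class InternalClass:
        def __init__(self, parameter=None):
            # Avoid a shared mutable {} default; build a fresh dict per instance.
            self.parameter = {} if parameter is None else parameter

    class UseInternalClass:
        def __init__(self, something=None):
            # ParentClass is fully defined by the time this method runs,
            # so the default InternalClass() can be created lazily here.
            if something is None:
                something = ParentClass.InternalClass()
            self._my_var = something


my_instance = ParentClass.UseInternalClass(
    something=ParentClass.InternalClass({1: 2})
)
print(my_instance._my_var.parameter)                      # {1: 2}
print(ParentClass.UseInternalClass()._my_var.parameter)   # {}
```

The key point is that `ParentClass.InternalClass()` appears only inside a method body, never in the class body or in a default-argument expression, both of which are evaluated before ParentClass exists.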
How to instantiate nested class
How can I instantiate a variable of type UseInternalClass? MyInstance = ParentClass.UseInternalClass(something=ParentClass.InternalClass({1:2})) If I try the former code, I get an error: NameError: name 'ParentClass' is not defined When I want to instantiate an type of a nested class class ParentClass(object): class InternalClass(object): def __init__(self, parameter = {}): pass pass class UseInternalClass(object): _MyVar def __init__(self, something = ParentClass.InternalClass()): #meant to make something type = InternalClass _MyVar = something pass (All the code is on the same file)
[ "You cannot use \"ParentClass\" inside the definition of the parent class since the interpreter have not yet define the class object named ParentClass. Also, InternalClass will not be define until the class ParentClass is completly define.\nNote: I'm note sure what you are trying to do, but if you explain your end goal, we might be able to suggest you something else to realise that.\n", "You can do something like this:\nclass Child:\n\n def __init__(self, y):\n self.y = y\n\n\nclass Parent:\n\n def __init__(self, x):\n self.x = x\n y = 2 * x\n self.child = Child(y)\n\nAs an example, you create an instance of the Parent class then access its Child as follows:\npar = Parent(4)\n\npar.child.y # returns a value of 8\n\n", "I am not sure if i got it right but i if you're trying to do something like that \nclass Parent:\n class Child:\n def __init__(self,passed_data):\n self.data = passed_data\n class AnotherChild:\n def __init__(self,child=Parent.Child(\"no data passed\"))\n self.child_obj = self.Child(data_to_pass)\n\nyou can create AnotherChild object as follows\nanother_child = Parent.AnotherChild() \n# here it will use the default value of \"no data passed\"\n\nor you might do it as follows\nchild = Parent.Child(\"data\") # create child object\nanother_child = Parent.AnotherChild(child) # pass it to new child\n\nor pass it directly through your init \nanother_child = Parent.AnotherChild(Parent.Child(\"data\"))\n\ni guess this should work correctly if you are instantiating in the same file for example parent.py , it worked for me like that\ni am not sure if that what you want but i hope it helps \n", "Look for this ona easy example\nclass Car:\n@classmethod\ndef create_wheel(cls):\n return cls.Wheel()\n\nclass Wheel:\n pass\n\no = Car.create_wheel()\nprint(o) # <main.Car.Wheel object at 0x7fde1d8daeb0>\n", "You can instantiate the outer and inner classes with __init__() as shown below:\nclass OuterClass:\n def __init__(self, arg): # Here\n self.variable = \"Outer 
\" + arg\n self.inner = OuterClass.InnerClass(arg)\n \n def outer_method(self, arg):\n print(\"Outer \" + arg)\n \n class InnerClass:\n def __init__(self, arg): # Here\n self.variable = \"Inner \" + arg\n \n def inner_method(self, arg):\n print(\"Inner \" + arg)\n\nobj = OuterClass(\"variable\") # Here\nprint(obj.variable)\nprint(obj.inner.variable)\nobj.outer_method(\"method\")\nobj.inner.inner_method(\"method\")\n\nOutput:\nOuter variable\nInner variable\nOuter method\nInner method\n\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "inner_classes", "python" ]
stackoverflow_0049867582_inner_classes_python.txt
Q: Getting the same response different URL I'm getting the same response from these 2 URLs: First URL Second URL This is the code I'm using: import requests url = "https://www.amazon.it/blackfriday" querystring = {"ref_":"nav_cs_gb_td_bf_dt_cr","deals-widget":"{\"version\":1,\"viewIndex\":60,\"presetId\":\"deals-collection-all-deals\",\"sorting\":\"BY_SCORE\"}"} payload = "" headers = {"cookie": "session-id=260-4643637-2647537; session-id-time=2082787201l; i18n-prefs=EUR; ubid-acbit=258-7747562-7485655; session-token=%22aZB70z2dnXHbhJ9e02ESp7q6xO23IGnDFT2iBCiPXZFoBTTEguAJ%2FBSnV7ud6bjAca64nh3bMF1bwDykOBf9BV%2BVjbx4tUQCyBkrg8tyR8PLZ8cjzpCz%2FzQSAmjiL6mSBcspkF8xuV0bxqLeRX7JQCMrHVBFf%2BsUhxV%2FMBLCH8UPk2o5aNL7OyAFCODBdRqm72RK5DAoKeMUymlVEOtqzvZSJbP%2Fut0gobiXJblRM2c%3D%22"} response = requests.request("GET", url, data=payload, headers=headers, params=querystring) I would like to get the same response that i get on the browser How can i do it? Why does this happen? A: You have to trick the server into thinking you are a browser. You can accomplish this by setting the user agent header. headers = {'user-agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',"cookie": "session-id=260-4643637-2647537; session-id-time=2082787201l; i18n-prefs=EUR; ubid-acbit=258-7747562-7485655; session-token=%22aZB70z2dnXHbhJ9e02ESp7q6xO23IGnDFT2iBCiPXZFoBTTEguAJ%2FBSnV7ud6bjAca64nh3bMF1bwDykOBf9BV%2BVjbx4tUQCyBkrg8tyR8PLZ8cjzpCz%2FzQSAmjiL6mSBcspkF8xuV0bxqLeRX7JQCMrHVBFf%2BsUhxV%2FMBLCH8UPk2o5aNL7OyAFCODBdRqm72RK5DAoKeMUymlVEOtqzvZSJbP%2Fut0gobiXJblRM2c%3D%22"}
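The answer's fix is to send a browser-like User-Agent header. The mechanics can be shown offline with the standard library: the request object below is only built and inspected, never sent, and the UA string is just one example browser value, not something the target site specifically requires.

```python
from urllib.request import Request

browser_ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) "
              "Chrome/94.0.4606.61 Safari/537.36")

req = Request("https://www.amazon.it/blackfriday",
              headers={"User-Agent": browser_ua})

# urllib normalizes header names with str.capitalize(), so the stored
# key is "User-agent" rather than "User-Agent".
print(req.get_header("User-agent"))
```

With `requests`, the same idea is simply `headers={'user-agent': browser_ua, ...}` merged into the headers dict already used in the question.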
Getting the same response different URL
I'm getting the same response from these 2 URLs: First URL Second URL This is the code I'm using: import requests url = "https://www.amazon.it/blackfriday" querystring = {"ref_":"nav_cs_gb_td_bf_dt_cr","deals-widget":"{\"version\":1,\"viewIndex\":60,\"presetId\":\"deals-collection-all-deals\",\"sorting\":\"BY_SCORE\"}"} payload = "" headers = {"cookie": "session-id=260-4643637-2647537; session-id-time=2082787201l; i18n-prefs=EUR; ubid-acbit=258-7747562-7485655; session-token=%22aZB70z2dnXHbhJ9e02ESp7q6xO23IGnDFT2iBCiPXZFoBTTEguAJ%2FBSnV7ud6bjAca64nh3bMF1bwDykOBf9BV%2BVjbx4tUQCyBkrg8tyR8PLZ8cjzpCz%2FzQSAmjiL6mSBcspkF8xuV0bxqLeRX7JQCMrHVBFf%2BsUhxV%2FMBLCH8UPk2o5aNL7OyAFCODBdRqm72RK5DAoKeMUymlVEOtqzvZSJbP%2Fut0gobiXJblRM2c%3D%22"} response = requests.request("GET", url, data=payload, headers=headers, params=querystring) I would like to get the same response that i get on the browser How can i do it? Why does this happen?
[ "You have to trick the server into thinking you are a browser. You can accomplish this by setting the user agent header.\nheaders = {'user-agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',\"cookie\": \"session-id=260-4643637-2647537; session-id-time=2082787201l; i18n-prefs=EUR; ubid-acbit=258-7747562-7485655; session-token=%22aZB70z2dnXHbhJ9e02ESp7q6xO23IGnDFT2iBCiPXZFoBTTEguAJ%2FBSnV7ud6bjAca64nh3bMF1bwDykOBf9BV%2BVjbx4tUQCyBkrg8tyR8PLZ8cjzpCz%2FzQSAmjiL6mSBcspkF8xuV0bxqLeRX7JQCMrHVBFf%2BsUhxV%2FMBLCH8UPk2o5aNL7OyAFCODBdRqm72RK5DAoKeMUymlVEOtqzvZSJbP%2Fut0gobiXJblRM2c%3D%22\"}\n\n" ]
[ 0 ]
[]
[]
[ "ajax", "python", "request", "url", "web_scraping" ]
stackoverflow_0074501741_ajax_python_request_url_web_scraping.txt
Q: List Indexing using range() and len() not working I am trying to parse through a data structure and I have used a for loop initializing a variable i and using the range() function. I originally set my range to be the size of the records: 25,173 but then I kept receiving an --------------------------------------------------------------------------- IndexError Traceback (most recent call last) Cell In [63], line 41 37 input_list.append(HOME_AVG_PTS) 39 return input_list ---> 41 Data["AVG_PTS_HOME"]= extract_wl(AVG_PTS_HOME) Cell In [63], line 28, in extract_wl(input_list) 26 if i==25172: 27 HOME_AVG_PTS[0]= HOME_AVG_PTS[0]/count ---> 28 elif int(Data(["TEAM_ID_AWAY"][i])) == const_home_team_id: 29 if type(PTS_AWAY[i]) is int: 30 count+= 1 IndexError: list index out of range So I tried changing my for loop to be in the range of the function with the issue i.e. for i in range(len(Data["TEAM_ID_AWAY"])): But I keep receiving the same error still The Data variable holds the contents of a csv file which I have used the panda module to read and put into Data. You can assume all the column headers I have used are valid and furthermore that they all have range 25173. 
(Image omitted: it showed the range and values of Data["TEAM_ID_HOME"].) AVG_PTS_HOME = [] def extract_wl(input_list): for j in range(25173): const_season_id = Data["SEASON_ID"][j] #print(const_season_id) const_game_id = int(Data["GAME_ID"][j]) #print(const_game_id) const_home_team_id = Data["TEAM_ID_HOME"][j] #print(const_home_team_id) #if j==10: #break print("Iteration #", j) print(len(Data["TEAM_ID_AWAY"])) count = 0 HOME_AVG_PTS=[0.0] for i in range(len(Data["TEAM_ID_AWAY"])): if (int(Data["GAME_ID"][i]) < const_game_id and int(Data["SEASON_ID"][i]) == const_season_id): if int(Data["TEAM_ID_HOME"][i]) == const_home_team_id: if type(PTS_HOME[i]) is int: count+= 1 HOME_AVG_PTS[0]+= PTS_HOME[i] if i==25172: HOME_AVG_PTS[0]= HOME_AVG_PTS[0]/count elif int(Data(["TEAM_ID_AWAY"][i])) == const_home_team_id: if type(PTS_AWAY[i]) is int: count+= 1 HOME_AVG_PTS[0]+= PTS_AWAY[i] if i==25172: HOME_AVG_PTS[0]= float(HOME_AVG_PTS[0]/count) print(HOME_AVG_PTS) input_list.append(HOME_AVG_PTS) return input_list Data["AVG_PTS_HOME"]= extract_wl(AVG_PTS_HOME) Can anyone point out why I am having this error or help me resolve it? In the meantime I think I am going to just create a separate function which takes a list of all the AWAY_IDs and then parse through that instead.
For example, this: for j in range(25173): const_season_id = Data["SEASON_ID"][j] const_game_id = int(Data["GAME_ID"][j]) const_home_team_id = Data["TEAM_ID_HOME"][j] # do stuff with const_season_id, const_game_id, const_home_team_id can be written as: for (const_season_id, game_id, const_home_team_id) in zip( Data["SEASON_ID"], Data["GAME_ID"], Data["TEAM_ID_HOME"] ): const_game_id = int(game_id) # do stuff with const_season_id, const_game_id, const_home_team_id The zip function will never give you an IndexError because it will automatically stop iterating when it reaches the end of any of the input lists. (If you want to stop iterating when you reach the end of the longest list instead, there's zip_longest.)
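The zip-based rewrite from the second answer can be verified on toy data (the IDs below are made up; only the iteration pattern matters). Note how zip silently stops at the end of the shortest input, which is exactly why it cannot raise an IndexError:

```python
season_ids = ["22015", "22015", "22016"]
game_ids = ["101", "102", "201"]
home_team_ids = [14, 15]  # deliberately shorter than the other two lists

rows = []
for season_id, game_id, home_team_id in zip(season_ids, game_ids, home_team_ids):
    rows.append((season_id, int(game_id), home_team_id))

# zip stopped after two iterations (the shortest list), so no index
# ever ran past the end of any list.
print(rows)  # [('22015', 101, 14), ('22015', 102, 15)]
```

If you instead want iteration to continue to the end of the longest list, `itertools.zip_longest` fills the missing positions with a chosen default value.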
List Indexing using range() and len() not working
I am trying to parse through a data structure and I have used a for loop initializing a variable i and using the range() function. I originally set my range to be the size of the records: 25,173 but then I kept receiving an --------------------------------------------------------------------------- IndexError Traceback (most recent call last) Cell In [63], line 41 37 input_list.append(HOME_AVG_PTS) 39 return input_list ---> 41 Data["AVG_PTS_HOME"]= extract_wl(AVG_PTS_HOME) Cell In [63], line 28, in extract_wl(input_list) 26 if i==25172: 27 HOME_AVG_PTS[0]= HOME_AVG_PTS[0]/count ---> 28 elif int(Data(["TEAM_ID_AWAY"][i])) == const_home_team_id: 29 if type(PTS_AWAY[i]) is int: 30 count+= 1 IndexError: list index out of range So I tried changing my for loop to be in the range of the function with the issue i.e. for i in range(len(Data["TEAM_ID_AWAY"])): But I keep receiving the same error still The Data variable holds the contents of a csv file which I have used the panda module to read and put into Data. You can assume all the column headers I have used are valid and furthermore that they all have range 25173. 
(Image omitted: it showed the range and values of Data["TEAM_ID_HOME"].) AVG_PTS_HOME = [] def extract_wl(input_list): for j in range(25173): const_season_id = Data["SEASON_ID"][j] #print(const_season_id) const_game_id = int(Data["GAME_ID"][j]) #print(const_game_id) const_home_team_id = Data["TEAM_ID_HOME"][j] #print(const_home_team_id) #if j==10: #break print("Iteration #", j) print(len(Data["TEAM_ID_AWAY"])) count = 0 HOME_AVG_PTS=[0.0] for i in range(len(Data["TEAM_ID_AWAY"])): if (int(Data["GAME_ID"][i]) < const_game_id and int(Data["SEASON_ID"][i]) == const_season_id): if int(Data["TEAM_ID_HOME"][i]) == const_home_team_id: if type(PTS_HOME[i]) is int: count+= 1 HOME_AVG_PTS[0]+= PTS_HOME[i] if i==25172: HOME_AVG_PTS[0]= HOME_AVG_PTS[0]/count elif int(Data(["TEAM_ID_AWAY"][i])) == const_home_team_id: if type(PTS_AWAY[i]) is int: count+= 1 HOME_AVG_PTS[0]+= PTS_AWAY[i] if i==25172: HOME_AVG_PTS[0]= float(HOME_AVG_PTS[0]/count) print(HOME_AVG_PTS) input_list.append(HOME_AVG_PTS) return input_list Data["AVG_PTS_HOME"]= extract_wl(AVG_PTS_HOME) Can anyone point out why I am having this error or help me resolve it? In the meantime I think I am going to just create a separate function which takes a list of all the AWAY_IDs and then parse through that instead.
[ "elif int(Data([\"TEAM_ID_AWAY\"][i])) == const_home_team_id\n\nYour parentheses are in the wrong place.\nYou have parentheses around ([\"TEAM_ID_AWAY\"][i]), therefore it is trying to take the i'th index of the single-element list [\"TEAM_ID_AWAY\"].\nYou want Data[\"TEAM_ID_AWAY\"][i], not Data([\"TEAM_ID_AWAY\"][i]).\n", "Hardcoding the length of the list is likely to lead to bugs down the line even if you're able to get it all working correctly with a particular data set. Iterating over the lists directly is much less error-prone and generally produces code that's easier to modify. If you want to iterate over multiple lists in parallel, use the zip function. For example, this:\n for j in range(25173):\n const_season_id = Data[\"SEASON_ID\"][j]\n const_game_id = int(Data[\"GAME_ID\"][j])\n const_home_team_id = Data[\"TEAM_ID_HOME\"][j]\n # do stuff with const_season_id, const_game_id, const_home_team_id\n\ncan be written as:\n for (const_season_id, game_id, const_home_team_id) in zip(\n Data[\"SEASON_ID\"], Data[\"GAME_ID\"], Data[\"TEAM_ID_HOME\"]\n ):\n const_game_id = int(game_id)\n # do stuff with const_season_id, const_game_id, const_home_team_id\n\nThe zip function will never give you an IndexError because it will automatically stop iterating when it reaches the end of any of the input lists. (If you want to stop iterating when you reach the end of the longest list instead, there's zip_longest.)\n" ]
[ 0, 0 ]
[]
[]
[ "indexing", "list", "python" ]
stackoverflow_0074501871_indexing_list_python.txt
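A runnable sketch of the parentheses bug the accepted answer points out. A plain dict stands in for the question's pandas DataFrame, and the values are made up; the point is only that `(["TEAM_ID_AWAY"][i])` indexes a one-element list, not the data.

```python
# Minimal sketch of the bug: a dict stands in for the DataFrame,
# and the values here are invented for illustration.
Data = {"TEAM_ID_AWAY": [101, 102, 103]}

def buggy(i):
    # (["TEAM_ID_AWAY"][i]) indexes the one-element list
    # ["TEAM_ID_AWAY"], so any i >= 1 raises IndexError before
    # Data is ever consulted.
    return ["TEAM_ID_AWAY"][i]

def fixed(i):
    # Index the dict (DataFrame column) first, then the row.
    return Data["TEAM_ID_AWAY"][i]

try:
    buggy(2)
except IndexError as e:
    print("buggy:", e)   # list index out of range

print("fixed:", fixed(2))
```

With a real DataFrame the fixed form is `Data["TEAM_ID_AWAY"][i]`, exactly as the answer says.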
Q: Change values of a certain range of columns based on another range of columns of the same data frame I have this df x y1 y2 y3 y4 d1 d2 d3 d4 0 -17.7 7 NaN NaN NaN 5 NaN 4 NaN 1 -15.0 NaN NaN NaN 3 4 NaN NaN 8 2 -12.5 NaN NaN 2 NaN NaN NaN 1 9 I want only 1 value per row between d1 to d4, based on what value is between y1 to y4. Example: In the 1st row, value is on y1. So the value that stays is d1. The output would be: x y1 y2 y3 y4 d1 d2 d3 d4 0 -17.7 7 NaN NaN NaN 5 NaN NaN NaN 1 -15.0 NaN NaN NaN 3 NaN NaN NaN 8 2 -12.5 NaN NaN 2 NaN NaN NaN 1 NaN A: You can use where with a boolean matrix: df[['d1', 'd2', 'd3', 'd4']] = df.filter(like='d').where(df.filter(like='y').notna().to_numpy()) Output: x y1 y2 y3 y4 d1 d2 d3 d4 0 -17.7 7.0 NaN NaN NaN 5.0 NaN NaN NaN 1 -15.0 NaN NaN NaN 3.0 NaN NaN NaN 8.0 2 -12.5 NaN NaN 2.0 NaN NaN NaN 1.0 NaN
Change values of a certain range of columns based on another range of columns of the same data frame
I have this df x y1 y2 y3 y4 d1 d2 d3 d4 0 -17.7 7 NaN NaN NaN 5 NaN 4 NaN 1 -15.0 NaN NaN NaN 3 4 NaN NaN 8 2 -12.5 NaN NaN 2 NaN NaN NaN 1 9 I want only 1 value per row between d1 to d4, based on what value is between y1 to y4. Example: In the 1st row, value is on y1. So the value that stays is d1. The output would be: x y1 y2 y3 y4 d1 d2 d3 d4 0 -17.7 7 NaN NaN NaN 5 NaN NaN NaN 1 -15.0 NaN NaN NaN 3 NaN NaN NaN 8 2 -12.5 NaN NaN 2 NaN NaN NaN 1 NaN
[ "You can use where with a boolean matrix:\ndf[['d1', 'd2', 'd3', 'd4']] = df.filter(like='d').where(df.filter(like='y').notna().to_numpy())\n\nOutput:\n x y1 y2 y3 y4 d1 d2 d3 d4\n0 -17.7 7.0 NaN NaN NaN 5.0 NaN NaN NaN\n1 -15.0 NaN NaN NaN 3.0 NaN NaN NaN 8.0\n2 -12.5 NaN NaN 2.0 NaN NaN NaN 1.0 NaN\n\n" ]
[ 3 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074501938_dataframe_pandas_python.txt
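The pandas one-liner in the answer can be hard to picture, so here is the same keep-where-y-is-present logic sketched with plain lists, with None standing in for NaN. The rows mirror the question's three example rows; this is an illustration of the masking, not the pandas call itself.

```python
# Plain-Python sketch of
# df.filter(like='d').where(df.filter(like='y').notna()):
# keep each d-value only where the matching y-value is present.
y_rows = [[7, None, None, None],
          [None, None, None, 3],
          [None, None, 2, None]]
d_rows = [[5, None, 4, None],
          [4, None, None, 8],
          [None, None, 1, 9]]

masked = [[d if y is not None else None for y, d in zip(y_row, d_row)]
          for y_row, d_row in zip(y_rows, d_rows)]

print(masked)
```

The result matches the question's desired d1..d4 columns row for row.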
Q: How to make the x-axis of a histogram (df.hist) finer (more values within a given space) I have the following code. I am looping through variables (dataframe columns) and create histograms. I have attached below an example of a graph for the column newerdf['distance']. I would like to increase the number of values on the x-axis, so that the x-axis values on the graph below say 0,1,2,3,4,5,6,7,8,9,10 rather than 0,5,10. I would be so grateful for a helping hand! listedvariables = ['distance','duration','sleepiness_bed','sleepiness_waking','normal_time_of_wakeup','number_of_times_wakeup_during_night','time_spent_awake_during_night_mins','time_of_going_to_sleep','time_to_fall_asleep_mins','sleep_onset_time','sleep_period_length_mins','total_sleep_duration_mins','time_in_bed_mins','sleep_efficiency','sleep_bout_length_mins','mid_point_of_sleep','sleepiness_resolution_index'] for i in range(0,len(listedvariables)): fig = newerdf[[listedvariables[i]]].hist(figsize=(30,20)) [x.title.set_size(40) for x in fig.ravel()] [x.tick_params(axis='x',labelsize=40) for x in fig.ravel()] [x.tick_params(axis='y',labelsize=40) for x in fig.ravel()] plt.tight_layout() A: With the following toy dataframe and plot in a Jupyter notebook: import pandas as pd from matplotlib import pyplot as plt df = pd.DataFrame( { "A": [ 1.5660150383101321, 0.3145564820111119, 0.36639603868848436, 1.0212995716690398, 0.3956186117590027, 1.5621280556024015, 1.3832769133918796, 0.5007889864878086, 0.4756689950693606, 0.9305468188471707, ] } ) plt.hist(df["A"]) output To add more ticks and labels: plt.xticks( ticks=[ 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, ], ) plt.hist(df["A"]) output
How to make the x-axis of a histogram (df.hist) finer (more values within a given space)
I have the following code. I am looping through variables (dataframe columns) and create histograms. I have attached below an example of a graph for the column newerdf['distance']. I would like to increase the number of values on the x-axis, so that the x-axis values on the graph below say 0,1,2,3,4,5,6,7,8,9,10 rather than 0,5,10. I would be so grateful for a helping hand! listedvariables = ['distance','duration','sleepiness_bed','sleepiness_waking','normal_time_of_wakeup','number_of_times_wakeup_during_night','time_spent_awake_during_night_mins','time_of_going_to_sleep','time_to_fall_asleep_mins','sleep_onset_time','sleep_period_length_mins','total_sleep_duration_mins','time_in_bed_mins','sleep_efficiency','sleep_bout_length_mins','mid_point_of_sleep','sleepiness_resolution_index'] for i in range(0,len(listedvariables)): fig = newerdf[[listedvariables[i]]].hist(figsize=(30,20)) [x.title.set_size(40) for x in fig.ravel()] [x.tick_params(axis='x',labelsize=40) for x in fig.ravel()] [x.tick_params(axis='y',labelsize=40) for x in fig.ravel()] plt.tight_layout()
[ "With the following toy dataframe and plot in a Jupyter notebook:\nimport pandas as pd\nfrom matplotlib import pyplot as plt\n\ndf = pd.DataFrame(\n {\n \"A\": [\n 1.5660150383101321,\n 0.3145564820111119,\n 0.36639603868848436,\n 1.0212995716690398,\n 0.3956186117590027,\n 1.5621280556024015,\n 1.3832769133918796,\n 0.5007889864878086,\n 0.4756689950693606,\n 0.9305468188471707,\n ]\n }\n)\n\nplt.hist(df[\"A\"])\n\noutput\n\nTo add more ticks and labels:\nplt.xticks(\n ticks=[\n 0.0,\n 0.1,\n 0.2,\n 0.3,\n 0.4,\n 0.5,\n 0.6,\n 0.7,\n 0.8,\n 0.9,\n 1.0,\n 1.1,\n 1.2,\n 1.3,\n 1.4,\n 1.5,\n 1.6,\n ],\n)\nplt.hist(df[\"A\"])\n\noutput\n\n" ]
[ 1 ]
[]
[]
[ "jupyter_notebook", "matplotlib", "pandas", "python" ]
stackoverflow_0074483004_jupyter_notebook_matplotlib_pandas_python.txt
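Rather than typing the tick list out by hand as in the answer, it can be generated. Below is a stdlib sketch (with NumPy, `np.arange(0, 1.7, 0.1)` would do much the same); the `round()` call avoids float drift such as 0.30000000000000004 showing up as a tick label.

```python
# Generate evenly spaced tick positions without typing them out.
def tick_values(start, stop, step):
    n = int(round((stop - start) / step)) + 1
    # round(..., 10) cleans up float drift like 0.30000000000000004
    return [round(start + k * step, 10) for k in range(n)]

ticks = tick_values(0.0, 1.6, 0.1)
print(ticks)
```

The resulting list would then be passed to Matplotlib as `plt.xticks(ticks=ticks)`, or per-axes as `x.set_xticks(ticks)` inside the question's loop.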
Q: Python 3.9.12: f-string error - SyntaxError: invalid syntax I am using Spyder with Python 3.9.12 Here is the code I have inside Spyder: user_input = (input('Please enter a number between 1 and 12:>>' )) while (not user_input.isdigit()) or (int(user_input) < 1 or int(user_input) > 12): print('Must be an integer between 1 and 12') user_input = input('Please make a selection:>> ') user_input = int(user_input) print('============================') print() print(f"This is the "{user_input}" times table") print() for i in range(1,13): print(f""{i}" x "{user_input}" = "{i=user_input}"") Error output from Spyder: runfile('/Users/user/spyder-files/For-Loops.py', wdir='/Users/user/spyder-files') File "<unknown>", line 49 print(f""This is the "{user_input}" times table"") ^ SyntaxError: invalid syntax I tried using single quotes but get the same error message: user_input = (input('Please enter a number between 1 and 12:>>' )) while (not user_input.isdigit()) or (int(user_input) < 1 or int(user_input) > 12): print('Must be an integer between 1 and 12') user_input = input('Please make a selection:>> ') user_input = int(user_input) print('============================') print() print(f'This is the '{user_input}' times table') print() for i in range(1,13): print(f''{i}' x '{user_input}' = '{i=user_input}'') Same error: runfile('/Users/user/spyder-files/For-Loops.py', wdir='/Users/user/spyder-files') File "<unknown>", line 49 print(f'This is the '{user_input}' times table') ^ SyntaxError: invalid syntax I appreciate any suggestions. Thanks. A: You used double quotes in f""{i}" x "{user_input}" = "{i=user_input}"". Now the string starts at the first double quote and ends at the second. The following text now leads to a SyntaxError. You could use triple quotes to define the string. The fourth is now part of the strings content. f""""{i}" x "{user_input}" = "{i*user_input}"""" Or use different quotes f'"{i}" x "{user_input}" = "{i=user_input}"'
Python 3.9.12: f-string error - SyntaxError: invalid syntax
I am using Spyder with Python 3.9.12 Here is the code I have inside Spyder: user_input = (input('Please enter a number between 1 and 12:>>' )) while (not user_input.isdigit()) or (int(user_input) < 1 or int(user_input) > 12): print('Must be an integer between 1 and 12') user_input = input('Please make a selection:>> ') user_input = int(user_input) print('============================') print() print(f"This is the "{user_input}" times table") print() for i in range(1,13): print(f""{i}" x "{user_input}" = "{i=user_input}"") Error output from Spyder: runfile('/Users/user/spyder-files/For-Loops.py', wdir='/Users/user/spyder-files') File "<unknown>", line 49 print(f""This is the "{user_input}" times table"") ^ SyntaxError: invalid syntax I tried using single quotes but get the same error message: user_input = (input('Please enter a number between 1 and 12:>>' )) while (not user_input.isdigit()) or (int(user_input) < 1 or int(user_input) > 12): print('Must be an integer between 1 and 12') user_input = input('Please make a selection:>> ') user_input = int(user_input) print('============================') print() print(f'This is the '{user_input}' times table') print() for i in range(1,13): print(f''{i}' x '{user_input}' = '{i=user_input}'') Same error: runfile('/Users/user/spyder-files/For-Loops.py', wdir='/Users/user/spyder-files') File "<unknown>", line 49 print(f'This is the '{user_input}' times table') ^ SyntaxError: invalid syntax I appreciate any suggestions. Thanks.
[ "You used double quotes in f\"\"{i}\" x \"{user_input}\" = \"{i=user_input}\"\". Now the string starts at the first double quote and ends at the second. The following text now leads to a SyntaxError.\nYou could use triple quotes to define the string. The fourth is now part of the strings content.\nf\"\"\"\"{i}\" x \"{user_input}\" = \"{i*user_input}\"\"\"\"\n\nOr use different quotes\nf'\"{i}\" x \"{user_input}\" = \"{i=user_input}\"'\n\n" ]
[ 0 ]
[]
[]
[ "anaconda", "f_string", "python", "python_3.x", "spyder" ]
stackoverflow_0074502004_anaconda_f_string_python_python_3.x_spyder.txt
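A runnable sketch of the corrected quoting. Note that the question's `{i=user_input}` is not valid f-string syntax either; multiplication (`i * user_input`) is presumably what was intended, so that is what the sketch uses.

```python
# Working version of the times-table line: no quote characters are
# needed inside the replacement fields at all.
user_input = 7
lines = [f'{i} x {user_input} = {i * user_input}' for i in range(1, 13)]
print(lines[0])    # 1 x 7 = 7
print(lines[11])   # 12 x 7 = 84

# Reusing the same quote character inside the f-string is what broke
# the original code: the literal ends at the second double quote.
try:
    compile('f"This is the "{user_input}" times table"', '<demo>', 'eval')
except SyntaxError as e:
    print('SyntaxError:', e.msg)
```

If literal quotes around the numbers are wanted in the output, switch the outer quotes (`f'"{i}" x ...'`) as the answer shows.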
Q: How can I download attachments from emails sent as attachments with Python? I received an email with multiple emails attached. Each email has .xls file that I want to download. How can I do this in Python? (I use the Outlook app) enter image description here I tried to move these emails to my inbox and run the code I already use: path = 'C:/Users/moliveira/Desktop/projeto_email' os.chdir(path) output_dir = Path.cwd() output_dir.mkdir(parents=True, exist_ok=True) outlook = win32com.client.Dispatch("Outlook.Application") mapi=outlook.GetNamespace("MAPI") inbox = mapi.GetDefaultFolder(6) messages = inbox.Items received_dt = datetime.now() - BDay(600) date_aux = received_dt date = received_dt.strftime('%d/%m/%Y') Subject = 'OPÇÕES RV - '+date received_dt = received_dt.strftime('%m/%d/%Y') messages = messages.Restrict("[ReceivedTime] >= '" + received_dt + " 13:00 PM" + "'") messages = messages.Restrict("[ReceivedTime] <= '" + received_dt + " 23:59 PM" + "'") messages = messages.Restrict("[Subject] = "+Subject) try: for message in list(messages): try: s = message.sender for attachment in message.Attachments: attachment.SaveASFile(os.path.join(output_dir, attachment.FileName)) print(f"attachment {attachment.FileName} from {s} saved") except Exception as e: print("error when saving the attachment:" + str(e)) except Exception as e: print("error when processing emails messages:" + str(e)) date = date_aux.strftime('%d_%m_%Y') list(messages) But the return of list(messages) is empty, meaning that it's not locating the email. I think it's because I have to "click to view more on Microsoft Exchange". Just after this I can see these emails in the app. enter image description here A: You can save the attached item to the disk and then execute it programmatically to be opened in Outlook (it is a singleton which means only one instance of Outlook can be run at the same time). 
Also, if the attached mail item is saved to disk, you may use the NameSpace.OpenSharedItem method, which opens a shared item from a specified path or URL. This method is used to open iCalendar appointment (.ics) files, vCard (.vcf) files, and Outlook message (.msg) files. So, you will get an instance of the MailItem class which represents the attached Outlook item. To distinguish Outlook items from regular files attached to the message, use the Attachment.Type property, which equals the olEmbeddeditem value when the attachment is an Outlook message format file (.msg) and is a copy of the original message. If you want to open them without saving them to disk, see How to Open Outlook file attachment directly not saving it? (with C# VSTO). In short, you can try to read the attached files from the cache folder maintained by Outlook. See Finding Outlook temporary folder for email attachments for more information. Also, you can use a low-level API (Extended MAPI) where you can access the PR_ATTACH_DATA_BIN property; read more about the algorithm in the Opening an attachment article.
How can I download attachments from emails sent as attachments with Python?
I received an email with multiple emails attached. Each email has .xls file that I want to download. How can I do this in Python? (I use the Outlook app) enter image description here I tried to move these emails to my inbox and run the code I already use: path = 'C:/Users/moliveira/Desktop/projeto_email' os.chdir(path) output_dir = Path.cwd() output_dir.mkdir(parents=True, exist_ok=True) outlook = win32com.client.Dispatch("Outlook.Application") mapi=outlook.GetNamespace("MAPI") inbox = mapi.GetDefaultFolder(6) messages = inbox.Items received_dt = datetime.now() - BDay(600) date_aux = received_dt date = received_dt.strftime('%d/%m/%Y') Subject = 'OPÇÕES RV - '+date received_dt = received_dt.strftime('%m/%d/%Y') messages = messages.Restrict("[ReceivedTime] >= '" + received_dt + " 13:00 PM" + "'") messages = messages.Restrict("[ReceivedTime] <= '" + received_dt + " 23:59 PM" + "'") messages = messages.Restrict("[Subject] = "+Subject) try: for message in list(messages): try: s = message.sender for attachment in message.Attachments: attachment.SaveASFile(os.path.join(output_dir, attachment.FileName)) print(f"attachment {attachment.FileName} from {s} saved") except Exception as e: print("error when saving the attachment:" + str(e)) except Exception as e: print("error when processing emails messages:" + str(e)) date = date_aux.strftime('%d_%m_%Y') list(messages) But the return of list(messages) is empty, meaning that it's not locating the email. I think it's because I have to "click to view more on Microsoft Exchange". Just after this I can see these emails in the app. enter image description here
[ "You can save the attached item to the disk and then execute it programmatically to be opened in Outlook (it is a singleton which means only one instance of Outlook can be run at the same time).\nAlso if the attached mail item is saved to the disk you may use the NameSpace.OpenSharedItem method which opens a shared item from a specified path or URL. This method is used to open iCalendar appointment (.ics) files, vCard (.vcf) files, and Outlook message (.msg) files. So, you will get an instance of the MailItem class which represents the attached Outlook item.\nTO distinguish Outlook items and regular files attached to the message use the Attachment.Type property which equals to the olEmbeddeditem value when the attachment is an Outlook message format file (.msg) and is a copy of the original message.\nIf you wants to open them without saving on the disk, see How to Open Outlook file attachment directly not saving it? ( with C# VSTO). In short, you can try to read the attached files from the cache folder maintained by Outlook. See Finding Outlook temporary folder for email attachments for more information. Also, you can use a low-level API (Extended MAPI) where you can access the PR_ATTACH_DATA_BIN property, read more about the algorithm in the Opening an attachment article.\n" ]
[ 0 ]
[]
[]
[ "email_attachments", "office_automation", "outlook", "python", "win32com" ]
stackoverflow_0074494148_email_attachments_office_automation_outlook_python_win32com.txt
Q: How to find and replace words in a python file? There is a template file: ZOYX:_sName_:IUA:S:BCSU,_sNumb_:AFAST; ZOYP:IUA:_sName_:"_ip1_",,49155:"_ip2_",30,,,49155; ZDWP:_sName_:BCSU,_sNumb_:0,3:_sName_; ZOYS:IUA:_sName_:ACT; ZERC:BTS=58,TRX=_tNumb_::FREQ=567,TSC=0,:DNAME=_sName_:CH0=TCHD,CH1=TCHD,CH2=TCHD,CH3=TCHD,CH4=TCHD,CH5=TCHD,CH6=TCHD,CH7=TCHD:; ZERM:BTS=58,TRX=_tNumb_:LEV=-91; ZERM:BTS=58,TRX=_tNumb_:PREF=N; ZERS:BTS=58,TRX=_tNumb_:U;` In it, you need to replace tNumb, sName, sNumb, _ ip1_, ip2, with the values that the user enters. That's how I did it: ` repeat="y" while repeat == "y": keys=['_ip1_', '_ip2_', '_sName_', '_sNumb_', '_tNumb_'] print(keys) #print(keys[2]) print("+++++++++++++++++++++++++++++1") values=[] #ip1, ip2, sName, sNumb, tNumb = input("Enter the IP address1: "), input("Enter the IP address2: "), input("Enter the station name: "), input("Enter the station number: "), input("Enter the transmitter number: ") ip1, ip2, sName, sNumb, tNumb = 1111, 2222, 3333, 4444, 5555 values.append(ip1) values.append(ip2) values.append(sName) values.append(sNumb) values.append(tNumb) print(values) #print(values[2]) print("+++++++++++++++++++++++++++++2") dictionary={} for i in range(len(keys)): dictionary[keys[i]] = values[i] search_text = dictionary[keys[i]] replace_text = keys[i] print(search_text) print(replace_text) print("+++++++++++++++++++++++++++++3") with open(r'template.txt', 'r') as oFile: rFile = oFile.read() #print(rFile) with open(r'output.txt', 'a') as wFile: wFile.write('\n') wFile.write('\n') wFile.write('\n') wFile.write(rFile) repeat = input("Do you want to continue? (y/n): ") if repeat == "n": break while (repeat!="y" and repeat!="n"): repeat = input("Please enter the correct answer (y/n): ") ` I have only a repeat of the text displayed in the output file. how do I find and change to the right words? I have only a repeat of the text displayed in the output file. how do I find and change to the right words? 
I expected to get this in the output file: ZOYX:33333:IUA:S:BCSU,55555:AFAST; ZOYP:IUA:33333:"1111",,49155:"2222",30,,,49155; ZDWP:33333:BCSU,55555:0,3:33333; ZOYS:IUA:33333:ACT; ZERC:BTS=58,TRX=3::FREQ=567,TSC=0,:DNAME=33333:CH0=TCHD,CH1=TCHD,CH2=TCHD,CH3=TCHD,CH4=TCHD,CH5=TCHD,CH6=TCHD,CH7=TCHD:; ZERM:BTS=58,TRX=4444:LEV=-91; ZERM:BTS=58,TRX=4444:PREF=N; ZERS:BTS=58,TRX=4444:U; A: In your Python code you have only read the text from template.txt and appended it to output.txt. Add the code below to replace each key in the keys list with the user's input: for key, value in dictionary.items(): rFile = rFile.replace(key, str(value)) Also, in the line keys=['_ip1_', '_ip2_', '_sName_', '_sNumb_', '_tNumb_'], '_' is added before and after each key. Please make sure that the keys in this list exactly match the text in the template.txt file that you want to replace.
How to find and replace words in a python file?
There is a template file: ZOYX:_sName_:IUA:S:BCSU,_sNumb_:AFAST; ZOYP:IUA:_sName_:"_ip1_",,49155:"_ip2_",30,,,49155; ZDWP:_sName_:BCSU,_sNumb_:0,3:_sName_; ZOYS:IUA:_sName_:ACT; ZERC:BTS=58,TRX=_tNumb_::FREQ=567,TSC=0,:DNAME=_sName_:CH0=TCHD,CH1=TCHD,CH2=TCHD,CH3=TCHD,CH4=TCHD,CH5=TCHD,CH6=TCHD,CH7=TCHD:; ZERM:BTS=58,TRX=_tNumb_:LEV=-91; ZERM:BTS=58,TRX=_tNumb_:PREF=N; ZERS:BTS=58,TRX=_tNumb_:U;` In it, you need to replace tNumb, sName, sNumb, _ ip1_, ip2, with the values that the user enters. That's how I did it: ` repeat="y" while repeat == "y": keys=['_ip1_', '_ip2_', '_sName_', '_sNumb_', '_tNumb_'] print(keys) #print(keys[2]) print("+++++++++++++++++++++++++++++1") values=[] #ip1, ip2, sName, sNumb, tNumb = input("Enter the IP address1: "), input("Enter the IP address2: "), input("Enter the station name: "), input("Enter the station number: "), input("Enter the transmitter number: ") ip1, ip2, sName, sNumb, tNumb = 1111, 2222, 3333, 4444, 5555 values.append(ip1) values.append(ip2) values.append(sName) values.append(sNumb) values.append(tNumb) print(values) #print(values[2]) print("+++++++++++++++++++++++++++++2") dictionary={} for i in range(len(keys)): dictionary[keys[i]] = values[i] search_text = dictionary[keys[i]] replace_text = keys[i] print(search_text) print(replace_text) print("+++++++++++++++++++++++++++++3") with open(r'template.txt', 'r') as oFile: rFile = oFile.read() #print(rFile) with open(r'output.txt', 'a') as wFile: wFile.write('\n') wFile.write('\n') wFile.write('\n') wFile.write(rFile) repeat = input("Do you want to continue? (y/n): ") if repeat == "n": break while (repeat!="y" and repeat!="n"): repeat = input("Please enter the correct answer (y/n): ") ` I have only a repeat of the text displayed in the output file. how do I find and change to the right words? I have only a repeat of the text displayed in the output file. how do I find and change to the right words? 
I expected to get this in the output file: ZOYX:33333:IUA:S:BCSU,55555:AFAST; ZOYP:IUA:33333:"1111",,49155:"2222",30,,,49155; ZDWP:33333:BCSU,55555:0,3:33333; ZOYS:IUA:33333:ACT; ZERC:BTS=58,TRX=3::FREQ=567,TSC=0,:DNAME=33333:CH0=TCHD,CH1=TCHD,CH2=TCHD,CH3=TCHD,CH4=TCHD,CH5=TCHD,CH6=TCHD,CH7=TCHD:; ZERM:BTS=58,TRX=4444:LEV=-91; ZERM:BTS=58,TRX=4444:PREF=N; ZERS:BTS=58,TRX=4444:U;
[ "In your Python code you have just read text from template.txt file and append it to output.txt file.\nJust add below code replace key in keys list with user inputs.\nfor key, value in dictionary.items():\n rFile = rFile.replace(key, str(value))\n \n\nAlso, in line keys=['_ip1_', '_ip2_', '_sName_', '_sNumb_', '_tNumb_'] '_' is added before and after each key. Please make sure that the keys in this list exactly match the text in the Template.txt file that you want to replace.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074500925_python.txt
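A minimal, runnable sketch of the substitution loop the answer describes, run on a two-line excerpt of the question's template (the placeholder values are a subset of the question's numbers):

```python
# Two lines from the question's template, with the placeholders intact.
template = ('ZOYP:IUA:_sName_:"_ip1_",,49155:"_ip2_",30,,,49155;\n'
            'ZERM:BTS=58,TRX=_tNumb_:LEV=-91;')

# Placeholder -> value mapping; in the question these come from input().
values = {'_ip1_': 1111, '_ip2_': 2222, '_sName_': 3333, '_tNumb_': 4444}

filled = template
for key, value in values.items():
    filled = filled.replace(key, str(value))

print(filled)
```

The filled text is what would then be appended to output.txt instead of the raw template.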
Q: Get an error when creating voice channel not sure what to do I'm creating a command that will create voice channels, It takes a few arguments from the user and makes a voice channel with it. Here is the code - ##TEST CREATE VC @bot.command(name="createvoice") async def createvoice(ctx, name = "Voice Channel", user_limit = 5,): guild = ctx.message.author.guild await guild.create_voice_channel(name, int(user_limit)) It works normal with 1 argument, but the issue occurs when I add more arguments such as user_limit or any other. So i type .createvoice testname 5 and I get the error - nextcord.ext.commands.errors.CommandInvokeError: Command raised an exception: TypeError: Guild.create_voice_channel() takes 2 positional arguments but 3 were given Anyone knows how to fix it and how to make it create the channel in a specific category? A: You have to provide a name. When I provided a name it worked for me. To do this change await guild.create_voice_channel(name, user_limit=5) to await guild.create_voice_channel(name="VOICE_CHANNEL_NAME", user_limit=5) and replace VOICE_CHANNEL_NAME to your wanted voice channel name. If you want to have the user choose the name simply replace name="VOICE_CHANNEL_NAME" with name=name so you would have await guild.create_voice_channel(name=name, user_limit=5) For a code that is completely customisable for the user use this (where the user can pick category, name and user limit for a voice channel): @bot.command(name="createvoice") async def createvoice(ctx, category_name: discord.CategoryChannel, name = "Voice Channel", user_limit=5): guild = ctx.message.author.guild await guild.create_voice_channel(category=category_name, name=name, user_limit=user_limit) Now the user can send something like this and this will happen. I hope this helps :)
Get an error when creating voice channel not sure what to do
I'm creating a command that will create voice channels. It takes a few arguments from the user and makes a voice channel with them. Here is the code - ##TEST CREATE VC @bot.command(name="createvoice") async def createvoice(ctx, name = "Voice Channel", user_limit = 5,): guild = ctx.message.author.guild await guild.create_voice_channel(name, int(user_limit)) It works normally with 1 argument, but the issue occurs when I add more arguments such as user_limit or any other. So I type .createvoice testname 5 and I get the error - nextcord.ext.commands.errors.CommandInvokeError: Command raised an exception: TypeError: Guild.create_voice_channel() takes 2 positional arguments but 3 were given Does anyone know how to fix it and how to make it create the channel in a specific category?
[ "You have to provide a name. When I provided a name it worked for me. To do this change\nawait guild.create_voice_channel(name, user_limit=5)\n\nto\nawait guild.create_voice_channel(name=\"VOICE_CHANNEL_NAME\", user_limit=5)\n\nand replace VOICE_CHANNEL_NAME to your wanted voice channel name.\nIf you want to have the user choose the name simply replace name=\"VOICE_CHANNEL_NAME\" with name=name so you would have\nawait guild.create_voice_channel(name=name, user_limit=5)\n\nFor a code that is completely customisable for the user use this (where the user can pick category, name and user limit for a voice channel):\n@bot.command(name=\"createvoice\")\nasync def createvoice(ctx, category_name: discord.CategoryChannel, name = \"Voice Channel\", user_limit=5):\n guild = ctx.message.author.guild\n await guild.create_voice_channel(category=category_name, name=name, user_limit=user_limit)\n\nNow the user can send something like this and this will happen.\nI hope this helps :)\n" ]
[ 0 ]
[]
[]
[ "bots", "discord.py", "nextcord", "python" ]
stackoverflow_0074501957_bots_discord.py_nextcord_python.txt
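The underlying cause of the TypeError is worth spelling out: in nextcord/discord.py, Guild.create_voice_channel takes the channel settings as keyword-only arguments (roughly `create_voice_channel(name, *, ...)`), so passing `user_limit` positionally overflows the positional slots. A plain-Python sketch of the same calling shape, with a stand-in function:

```python
# Stand-in with the same parameter shape as the library call:
# everything after '*' can only be passed by keyword.
def create_voice_channel(name, *, user_limit=None, category=None):
    return (name, user_limit, category)

try:
    create_voice_channel("General", 5)   # positional -> TypeError
except TypeError as e:
    print("TypeError:", e)

print(create_voice_channel("General", user_limit=5))  # keyword -> fine
```

This is why the answer's fix, `user_limit=user_limit`, works.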
Q: How do I measure elapsed time in Python? I want to measure the time it took to execute a function. I couldn't get timeit to work: import timeit start = timeit.timeit() print("hello") end = timeit.timeit() print(end - start) A: Use time.time() to measure the elapsed wall-clock time between two points: import time start = time.time() print("hello") end = time.time() print(end - start) This gives the execution time in seconds. Another option since Python 3.3 might be to use perf_counter or process_time, depending on your requirements. Before 3.3 it was recommended to use time.clock (thanks Amber). However, it is currently deprecated: On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of “processor time”, depends on that of the C function of the same name. On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond. Deprecated since version 3.3: The behaviour of this function depends on the platform: use perf_counter() or process_time() instead, depending on your requirements, to have a well defined behaviour. A: Use timeit.default_timer instead of timeit.timeit. The former provides the best clock available on your platform and version of Python automatically: from timeit import default_timer as timer start = timer() # ... end = timer() print(end - start) # Time in seconds, e.g. 5.38091952400282 timeit.default_timer is assigned to time.time() or time.clock() depending on OS. On Python 3.3+ default_timer is time.perf_counter() on all platforms. See Python - time.clock() vs. time.time() - accuracy? 
See also: Optimizing code How to optimize for speed A: Python 3 only: Since time.clock() is deprecated as of Python 3.3, you will want to use time.perf_counter() for system-wide timing, or time.process_time() for process-wide timing, just the way you used to use time.clock(): import time t = time.process_time() #do some stuff elapsed_time = time.process_time() - t The new function process_time will not include time elapsed during sleep. A: Measuring time in seconds: from timeit import default_timer as timer from datetime import timedelta start = timer() # .... # (your code runs here) # ... end = timer() print(timedelta(seconds=end-start)) Output: 0:00:01.946339 A: Given a function you'd like to time, test.py: def foo(): # print "hello" return "hello" the easiest way to use timeit is to call it from the command line: % python -mtimeit -s'import test' 'test.foo()' 1000000 loops, best of 3: 0.254 usec per loop Do not try to use time.time or time.clock (naively) to compare the speed of functions. They can give misleading results. PS. Do not put print statements in a function you wish to time; otherwise the time measured will depend on the speed of the terminal. A: It's fun to do this with a context-manager that automatically remembers the start time upon entry to a with block, then freezes the end time on block exit. With a little trickery, you can even get a running elapsed-time tally inside the block from the same context-manager function. The core library doesn't have this (but probably ought to). 
Once in place, you can do things like: with elapsed_timer() as elapsed: # some lengthy code print( "midpoint at %.2f seconds" % elapsed() ) # time so far # other lengthy code print( "all done at %.2f seconds" % elapsed() ) Here's contextmanager code sufficient to do the trick: from contextlib import contextmanager from timeit import default_timer @contextmanager def elapsed_timer(): start = default_timer() elapser = lambda: default_timer() - start yield lambda: elapser() end = default_timer() elapser = lambda: end-start And some runnable demo code: import time with elapsed_timer() as elapsed: time.sleep(1) print(elapsed()) time.sleep(2) print(elapsed()) time.sleep(3) Note that by design of this function, the return value of elapsed() is frozen on block exit, and further calls return the same duration (of about 6 seconds in this toy example). A: I prefer this. timeit doc is far too confusing. from datetime import datetime start_time = datetime.now() # INSERT YOUR CODE time_elapsed = datetime.now() - start_time print('Time elapsed (hh:mm:ss.ms) {}'.format(time_elapsed)) Note, that there isn't any formatting going on here, I just wrote hh:mm:ss into the printout so one can interpret time_elapsed A: Here's another way to do this: >> from pytictoc import TicToc >> t = TicToc() # create TicToc instance >> t.tic() # Start timer >> # do something >> t.toc() # Print elapsed time Elapsed time is 2.612231 seconds. Comparing with traditional way: >> from time import time >> t1 = time() >> # do something >> t2 = time() >> elapsed = t2 - t1 >> print('Elapsed time is %f seconds.' % elapsed) Elapsed time is 2.612231 seconds. Installation: pip install pytictoc Refer to the PyPi page for more details. A: The easiest way to calculate the duration of an operation: import time start_time = time.monotonic() <operations, programs> print('seconds: ', time.monotonic() - start_time) Official docs here. 
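It is also worth spelling out why the question's original snippet misbehaves: with no arguments, `timeit.timeit()` times the statement `'pass'` one million times, so subtracting two such calls is meaningless. A sketch of using it as intended (the statements being timed here are arbitrary examples):

```python
import timeit

# Time a statement given as a string; number controls repetitions.
per_run = timeit.timeit('sorted(range(100))', number=1000) / 1000
print(f'{per_run:.2e} s per run')

# A callable works too, and avoids quoting/setup strings.
def work():
    return sum(i * i for i in range(100))

total = timeit.timeit(work, number=1000)
print(f'{total:.4f} s for 1000 runs')
```

Note that timeit disables garbage collection for the duration of the run, which is usually what you want for micro-benchmarks.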
A: Here are my findings after going through many good answers here as well as a few other articles.

First, if you are debating between timeit and time.time, timeit has two advantages:

timeit selects the best timer available on your OS and Python version.
timeit disables garbage collection; however, this is something you may or may not want.

Now the problem is that timeit is not that simple to use, because it needs setup and things get ugly when you have a bunch of imports. Ideally, you just want a decorator or a with block to measure time. Unfortunately, there is nothing built-in available for this, so you have two options:

Option 1: Use the timebudget library

The timebudget is a versatile and very simple library that you can use in just one line of code after pip install.

@timebudget  # Record how long this function takes
def my_method():
    # my code

Option 2: Use my small module

I created the little timing utility module below, called timing.py. Just drop this file in your project and start using it. The only external dependency is runstats, which is again small.

Now you can time any function just by putting a decorator in front of it:

import timing

@timing.MeasureTime
def MyBigFunc():
    # do something time consuming
    for i in range(10000):
        print(i)

timing.print_all_timings()

If you want to time a portion of code, then just put it inside a with block:

import timing

# somewhere in my code

with timing.MeasureBlockTime("MyBlock"):
    # do something time consuming
    for i in range(10000):
        print(i)

# rest of my code

timing.print_all_timings()

Advantages: There are several half-baked versions floating around, so I want to point out a few highlights:

Uses the timer from timeit instead of time.time, for reasons described earlier.
You can disable GC during timing if you want.
The decorator accepts functions with named or unnamed params.
Ability to disable printing in block timing (use with timing.MeasureBlockTime() as t and then t.elapsed).
Ability to keep gc enabled for block timing.
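For reference, here is a minimal, dependency-free sketch of the two usage styles that answer describes: a decorator plus a with block with optional GC disabling. The names measure_time and measure_block are made up for illustration; this is not the timing.py or timebudget API:

```python
import gc
from contextlib import contextmanager
from functools import wraps
from timeit import default_timer

def measure_time(func):
    """Decorator: print how long each call to func takes."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = default_timer()
        try:
            return func(*args, **kwargs)
        finally:
            print(f"{func.__name__}: {default_timer() - start:.6f}s")
    return wrapper

@contextmanager
def measure_block(label, disable_gc=False):
    """Context manager: time a block, optionally with GC disabled."""
    gc_was_enabled = gc.isenabled()
    if disable_gc:
        gc.disable()
    start = default_timer()
    try:
        yield
    finally:
        elapsed = default_timer() - start
        if disable_gc and gc_was_enabled:
            gc.enable()  # restore the collector's previous state
        print(f"{label}: {elapsed:.6f}s")

@measure_time
def work():
    return sum(i * i for i in range(1000))

result = work()

with measure_block("MyBlock", disable_gc=True):
    sum(range(1000))
```

It uses timeit's default_timer for the reasons given above, and restores the garbage collector only if it was enabled before the block.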
A: Using time.time to measure execution gives you the overall execution time of your commands, including running time spent by other processes on your computer. It is the time the user notices, but it is not good if you want to compare different code snippets / algorithms / functions / ...

More information on timeit:

Using the timeit Module
timeit – Time the execution of small bits of Python code

If you want a deeper insight into profiling:

http://wiki.python.org/moin/PythonSpeed/PerformanceTips#Profiling_Code
How can you profile a python script?

Update: I used http://pythonhosted.org/line_profiler/ a lot during the last year and find it very helpful, and I recommend using it instead of Python's profile module.

A: The python cProfile and pstats modules offer great support for measuring time elapsed in certain functions without having to add any code around the existing functions.

For example, if you have a python script timeFunctions.py:

import time

def hello():
    print("Hello :)")
    time.sleep(0.1)

def thankyou():
    print("Thank you!")
    time.sleep(0.05)

for idx in range(10):
    hello()

for idx in range(100):
    thankyou()

To run the profiler and generate stats for the file, you can just run:

python -m cProfile -o timeStats.profile timeFunctions.py

What this is doing is using the cProfile module to profile all functions in timeFunctions.py and collecting the stats in the timeStats.profile file. Note that we did not have to add any code to the existing module (timeFunctions.py), and this can be done with any module.

Once you have the stats file, you can run the pstats module as follows:

python -m pstats timeStats.profile

This runs the interactive statistics browser, which gives you a lot of nice functionality. For your particular use case, you can just check the stats for your function. In our example, checking stats for both functions shows us the following:

Welcome to the profile statistics browser.
timeStats.profile% stats hello
<timestamp>    timeStats.profile

         224 function calls in 6.014 seconds

   Random listing order was used
   List reduced from 6 to 1 due to restriction <'hello'>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       10    0.000    0.000    1.001    0.100 timeFunctions.py:3(hello)

timeStats.profile% stats thankyou
<timestamp>    timeStats.profile

         224 function calls in 6.014 seconds

   Random listing order was used
   List reduced from 6 to 1 due to restriction <'thankyou'>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      100    0.002    0.000    5.012    0.050 timeFunctions.py:7(thankyou)

The dummy example does not do much, but it gives you an idea of what can be done. The best part about this approach is that I don't have to edit any of my existing code to get these numbers, which obviously helps with profiling.

A: Here is a tiny timer class that returns an "hh:mm:ss" string:

import time

class Timer:
    def __init__(self):
        self.start = time.time()

    def restart(self):
        self.start = time.time()

    def get_time_hhmmss(self):
        end = time.time()
        m, s = divmod(end - self.start, 60)
        h, m = divmod(m, 60)
        time_str = "%02d:%02d:%02d" % (h, m, s)
        return time_str

Usage:

# Start timer
my_timer = Timer()

# ... do something

# Get time string:
time_hhmmss = my_timer.get_time_hhmmss()
print("Time elapsed: %s" % time_hhmmss)

# ... use the timer again
my_timer.restart()

# ... do something

# Get time:
time_hhmmss = my_timer.get_time_hhmmss()

# ... etc

A: Here's another context manager for timing code.

Usage:

from benchmark import benchmark

with benchmark("Test 1+1"):
    1+1
=>
Test 1+1 : 1.41e-06 seconds

or, if you need the time value:

with benchmark("Test 1+1") as b:
    1+1
print(b.time)
=>
Test 1+1 : 7.05e-07 seconds
7.05233786763e-07

benchmark.py:

from timeit import default_timer as timer

class benchmark(object):

    def __init__(self, msg, fmt="%0.3g"):
        self.msg = msg
        self.fmt = fmt

    def __enter__(self):
        self.start = timer()
        return self

    def __exit__(self, *args):
        t = timer() - self.start
        print(("%s : " + self.fmt + " seconds") % (self.msg, t))
        self.time = t

Adapted from http://dabeaz.blogspot.fr/2010/02/context-manager-for-timing-benchmarks.html

A: Use the profile module. It gives a very detailed profile.

import profile
profile.run('main()')

It outputs something like:

          5 function calls in 0.047 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.000    0.000 :0(exec)
        1    0.047    0.047    0.047    0.047 :0(setprofile)
        1    0.000    0.000    0.000    0.000 <string>:1(<module>)
        0    0.000             0.000          profile:0(profiler)
        1    0.000    0.000    0.047    0.047 profile:0(main())
        1    0.000    0.000    0.000    0.000 two_sum.py:2(twoSum)

I've found it very informative.

A: (With IPython only) you can use %timeit to measure average processing time:

def foo():
    print("hello")

and then:

%timeit foo()

the result is something like:

10000 loops, best of 3: 27 µs per loop

A: I like it simple (python 3):

from timeit import timeit

timeit(lambda: print("hello"))

Output is microseconds for a single execution:

2.430883963010274

Explanation: timeit executes the anonymous function 1 million times by default, and the result is given in seconds. Therefore the result for 1 single execution is the same amount, but in microseconds, on average.

For slow operations, add a lower number of iterations or you could be waiting forever:

import time

timeit(lambda: time.sleep(1.5), number=1)

Output is always in seconds for the total number of iterations:

1.5015795179999714

A: On python3:

from time import sleep, perf_counter as pc

t0 = pc()
sleep(1)
print(pc() - t0)

Elegant and short.

A: One more way to use timeit:

from timeit import timeit

def func():
    return 1 + 1

time = timeit(func, number=1)
print(time)

A: To get insight on every function call recursively, do:

%load_ext snakeviz
%%snakeviz

It just takes those 2 lines of code in a Jupyter notebook, and it generates a nice interactive diagram. For example:

Here is the code. Again, the 2 lines starting with % are the only extra lines of code needed to use snakeviz:

# !pip install snakeviz
%load_ext snakeviz
import glob
import hashlib

%%snakeviz

files = glob.glob('*.txt')
def print_files_hashed(files):
    for file in files:
        with open(file) as f:
            print(hashlib.md5(f.read().encode('utf-8')).hexdigest())
print_files_hashed(files)

It also seems possible to run snakeviz outside notebooks. More info on the snakeviz website.

A: How to measure the time between two operations, and compare the time of two operations:

import time

b = (123*321)*123
t1 = time.time()
c = ((9999^123)*321)^123
t2 = time.time()
print(t2 - t1)

7.987022399902344e-05

A: If you want to be able to time functions conveniently, you can use a simple decorator:

import time

def timing_decorator(func):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        original_return_val = func(*args, **kwargs)
        end = time.perf_counter()
        print("time elapsed in ", func.__name__, ": ", end - start, sep='')
        return original_return_val
    return wrapper

You can use it on a function that you want to time like this:

@timing_decorator
def function_to_time():
    time.sleep(1)

function_to_time()

Any time you call function_to_time, it will print how long it took and the name of the function being timed.
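If you also want the measured durations programmatically rather than just printed, a small variation on the decorator above can record them on the wrapper. The timings attribute here is a made-up convention for illustration, not a standard one:

```python
import time
from functools import wraps

def timed(func):
    """Like the decorator above, but also records each call's duration."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        wrapper.timings.append(time.perf_counter() - start)
        return result
    wrapper.timings = []  # one entry per call, in seconds
    return wrapper

@timed
def busy():
    time.sleep(0.05)

for _ in range(3):
    busy()

print(f"mean of {len(busy.timings)} calls: "
      f"{sum(busy.timings) / len(busy.timings):.4f}s")
```

This lets you compute a mean, minimum, or percentile over many calls instead of eyeballing printed lines.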
A: Here's a pretty well documented and fully type hinted decorator I use as a general utility:

from functools import wraps
from time import perf_counter
from typing import Any, Callable, Optional, TypeVar, cast

F = TypeVar("F", bound=Callable[..., Any])

def timer(prefix: Optional[str] = None, precision: int = 6) -> Callable[[F], F]:
    """Use as a decorator to time the execution of any function.

    Args:
        prefix: String to print before the time taken.
            Default is the name of the function.
        precision: How many decimals to include in the seconds value.

    Examples:
        >>> @timer()
        ... def foo(x):
        ...     return x
        >>> foo(123)
        foo: 0.000...s
        123

        >>> @timer("Time taken: ", 2)
        ... def foo(x):
        ...     return x
        >>> foo(123)
        Time taken: 0.00s
        123
    """
    def decorator(func: F) -> F:
        @wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            nonlocal prefix
            prefix = prefix if prefix is not None else f"{func.__name__}: "
            start = perf_counter()
            result = func(*args, **kwargs)
            end = perf_counter()
            print(f"{prefix}{end - start:.{precision}f}s")
            return result
        return cast(F, wrapper)
    return decorator

Example usage:

from timer import timer

@timer(precision=9)
def takes_long(x: int) -> bool:
    return x in (i for i in range(x + 1))

result = takes_long(10**8)
print(result)

Output:

takes_long: 4.942629056s
True

The doctests can be checked with:

$ python3 -m doctest --verbose -o=ELLIPSIS timer.py

And the type hints with:

$ mypy timer.py

A: Kind of a super late response, but maybe it serves a purpose for someone. This is a way to do it which I think is super clean.

import time

def timed(fun, *args):
    s = time.time()
    r = fun(*args)
    print('{} execution took {} seconds.'.format(fun.__name__, time.time() - s))
    return r

timed(print, "Hello")

Keep in mind that "print" is a function in Python 3 and not Python 2.7. However, it works with any other function. Cheers!

A: You can use timeit. Here is an example of how to test naive_func, which takes a parameter, using the Python REPL:

>>> import timeit
>>> def naive_func(x):
...     a = 0
...     for i in range(x):
...         a += i
...     return a
>>> def wrapper(func, *args, **kwargs):
...     def wrapper():
...         return func(*args, **kwargs)
...     return wrapper
>>> wrapped = wrapper(naive_func, 1_000)
>>> timeit.timeit(wrapped, number=1_000_000)
0.4458435332577161

You don't need the wrapper function if the function doesn't have any parameters.

A: A print_elapsed_time function is below:

import time

def print_elapsed_time(prefix=''):
    e_time = time.time()
    if not hasattr(print_elapsed_time, 's_time'):
        print_elapsed_time.s_time = e_time
    else:
        print(f'{prefix} elapsed time: {e_time - print_elapsed_time.s_time:.2f} sec')
        print_elapsed_time.s_time = e_time

Use it in this way:

print_elapsed_time()
.... heavy jobs ...
print_elapsed_time('after heavy jobs')
.... tons of jobs ...
print_elapsed_time('after tons of jobs')

The result is:

after heavy jobs elapsed time: 0.39 sec
after tons of jobs elapsed time: 0.60 sec

The pro and con of this function is that you don't need to pass the start time.

A: We can also convert the time into human-readable time.

import time, datetime

start = time.clock()

def num_multi1(max):
    result = 0
    for num in range(0, 1000):
        if (num % 3 == 0 or num % 5 == 0):
            result += num
    print("Sum is %d " % result)

num_multi1(1000)

end = time.clock()
value = end - start
timestamp = datetime.datetime.fromtimestamp(value)
print(timestamp.strftime('%Y-%m-%d %H:%M:%S'))

A: Although it's not strictly asked in the question, it is quite often the case that you want a simple, uniform way to incrementally measure the elapsed time between several lines of code.

If you are using Python 3.8 or above, you can make use of assignment expressions (a.k.a. the walrus operator) to achieve this in a fairly elegant way:

import time

start, times = time.perf_counter(), {}

print("hello")
times["print"] = -start + (start := time.perf_counter())

time.sleep(1.42)
times["sleep"] = -start + (start := time.perf_counter())

a = [n**2 for n in range(10000)]
times["pow"] = -start + (start := time.perf_counter())

print(times)

=> {'print': 2.193450927734375e-05, 'sleep': 1.4210970401763916, 'pow': 0.005671024322509766}

A: I made a library for this. If you want to measure a function, you can just do it like this:

from pythonbenchmark import compare, measure
import time

a, b, c, d, e = 10, 10, 10, 10, 10
something = [a, b, c, d, e]

@measure
def myFunction(something):
    time.sleep(0.4)

@measure
def myOptimizedFunction(something):
    time.sleep(0.2)

myFunction(something)
myOptimizedFunction(something)

https://github.com/Karlheinzniebuhr/pythonbenchmark

A: This unique class-based approach offers a printable string representation, customizable rounding, and convenient access to the elapsed time as a string or a float. It was developed with Python 3.7.

import datetime
import timeit

class Timer:
    """Measure time used."""
    # Ref: https://stackoverflow.com/a/57931660/

    def __init__(self, round_ndigits: int = 0):
        self._round_ndigits = round_ndigits
        self._start_time = timeit.default_timer()

    def __call__(self) -> float:
        return timeit.default_timer() - self._start_time

    def __str__(self) -> str:
        return str(datetime.timedelta(seconds=round(self(), self._round_ndigits)))

Usage:

# Setup timer
>>> timer = Timer()

# Access as a string
>>> print(f'Time elapsed is {timer}.')
Time elapsed is 0:00:03.
>>> print(f'Time elapsed is {timer}.')
Time elapsed is 0:00:04.

# Access as a float
>>> timer()
6.841332235
>>> timer()
7.970274425

A: As a lambda, obtain the elapsed time and time stamps:

import datetime

t_set = lambda: datetime.datetime.now().astimezone().replace(microsecond=0)
t_diff = lambda t: str(t_set() - t)
t_stamp = lambda t=None: str(t) if t else str(t_set())

In practice:

>>> t_set()
datetime.datetime(2021, 3, 21, 1, 25, 17, tzinfo=datetime.timezone(datetime.timedelta(days=-1, seconds=61200), 'PDT'))
>>> t = t_set()
>>> t_diff(t)
'0:00:14'
>>> t_diff(t)
'0:00:23'
>>> t_stamp()
'2021-03-21 01:25:57-07:00'
>>> t_stamp(t)
'2021-03-21 01:25:22-07:00'

A:

import time

def getElapsedTime(startTime, units):
    elapsedInSeconds = time.time() - startTime
    if units == 'sec':
        return elapsedInSeconds
    if units == 'min':
        return elapsedInSeconds / 60
    if units == 'hour':
        return elapsedInSeconds / (60 * 60)

A: Measure the execution time of small code snippets. Unit of time: measured in seconds as a float.

import timeit

t = timeit.Timer('li = list(map(lambda x: x * 2, [1, 2, 3, 4, 5]))')
t.timeit()
t.repeat()
> [1.2934070999999676, 1.3335035000000062, 1.422568500000125]

The repeat() method is a convenience to call timeit() multiple times and return a list of results: repeat(repeat=3). With this list we can take a mean of all the times.

By default, timeit() temporarily turns off garbage collection during the timing. timeit.Timer() solves this problem.

Pros: timeit.Timer() makes independent timings more comparable. The gc may be an important component of the performance of the function being measured. If so, gc (the garbage collector) can be re-enabled as the first statement in the setup string. For example:

timeit.Timer('li = list(map(lambda x: x * 2, [1, 2, 3, 4, 5]))', setup='gc.enable()')

Source: Python Docs!
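Building on the repeat() note above: the timeit documentation suggests that the minimum of the repeats is usually a more useful summary than the mean, since the higher values are typically caused by interference from other processes rather than by your code:

```python
import timeit

t = timeit.Timer('li = list(map(lambda x: x * 2, [1, 2, 3, 4, 5]))')

results = t.repeat(repeat=5, number=100_000)  # 5 independent runs

best = min(results)        # least-noise estimate of the run time
per_loop = best / 100_000  # seconds per single execution
print(f"best of 5: {per_loop * 1e6:.3f} usec per loop")
```

This mirrors what the command-line form (python -m timeit ...) reports as "best of N".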
A: Based on the contextmanager solution given by https://stackoverflow.com/a/30024601/5095636, hereunder is the lambda-free version, as flake8 warns on the usage of lambda per E731:

from contextlib import contextmanager
from timeit import default_timer

@contextmanager
def elapsed_timer():
    start_time = default_timer()

    class _Timer():
        start = start_time
        end = default_timer()
        duration = end - start

    yield _Timer

    end_time = default_timer()
    _Timer.end = end_time
    _Timer.duration = end_time - start_time

Test:

from time import sleep

with elapsed_timer() as t:
    print("start:", t.start)
    sleep(1)

print("end:", t.end)
t.start
t.end
t.duration

A: For Python 3:

If you use the time module, you can get the current timestamp, execute your code, and then get the timestamp again. The time taken is the second timestamp minus the first:

import time

first_stamp = int(round(time.time() * 1000))

# YOUR CODE GOES HERE
time.sleep(5)

second_stamp = int(round(time.time() * 1000))

# Calculate the time taken in milliseconds
time_taken = second_stamp - first_stamp

# To get time in seconds:
time_taken_seconds = round(time_taken / 1000)

print(f'{time_taken_seconds} seconds or {time_taken} milliseconds')

A: The timeit module is good for timing a small piece of Python code. It can be used in at least three forms:

1- As a command-line module:

python2 -m timeit 'for i in xrange(10): oct(i)'

2- For short code, pass it as arguments:

import timeit
timeit.Timer('for i in xrange(10): oct(i)').timeit()

3- For longer code, as:

import timeit

code_to_test = """
a = range(100000)
b = []
for i in a:
    b.append(i*2)
"""

elapsed_time = timeit.timeit(code_to_test, number=100) / 100
print(elapsed_time)

A: Time can also be measured by the %timeit magic function as follows:

%timeit -t -n 1 print("hello")

-n 1 means running the function only 1 time.
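One more convenience worth knowing for the timeit forms above: since Python 3.5 the timeit functions accept a globals argument, so you can time a function already defined in your code without writing an import-style setup string (an explicit namespace dict is passed here; globals=globals() is the more common form):

```python
import timeit

def build_list():
    return [i * 2 for i in range(100_000)]

# Pass the function's namespace explicitly instead of a setup string.
seconds = timeit.timeit('build_list()',
                        globals={'build_list': build_list},
                        number=10)
print(f"{seconds / 10:.6f} s per call")
```

This avoids the "things get ugly when you have a bunch of imports" problem mentioned earlier, since the statement is evaluated in the namespace you supply.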
A: You can use Benchmark Timer (disclaimer: I'm the author):

Use the BenchmarkTimer class to measure the time it takes to execute some piece of code. This gives more flexibility than the built-in timeit function, and runs in the same scope as the rest of your code.

Installation:

pip install git+https://github.com/michaelitvin/benchmark-timer.git@main#egg=benchmark-timer

Usage

Single iteration example:

from benchmark_timer import BenchmarkTimer
import time

with BenchmarkTimer(name="MySimpleCode") as tm, tm.single_iteration():
    time.sleep(.3)

Output:

Benchmarking MySimpleCode...
MySimpleCode benchmark: n_iters=1 avg=0.300881s std=0.000000s range=[0.300881s~0.300881s]

Multiple iterations example:

from benchmark_timer import BenchmarkTimer
import time

with BenchmarkTimer(name="MyTimedCode", print_iters=True) as tm:
    for timing_iteration in tm.iterations(n=5, warmup=2):
        with timing_iteration:
            time.sleep(.1)

print("\n===================\n")
print("List of timings: ", list(tm.timings.values()))

Output:

Benchmarking MyTimedCode...
[MyTimedCode] iter=0 took 0.099755s (warmup)
[MyTimedCode] iter=1 took 0.100476s (warmup)
[MyTimedCode] iter=2 took 0.100189s
[MyTimedCode] iter=3 took 0.099900s
[MyTimedCode] iter=4 took 0.100888s
MyTimedCode benchmark: n_iters=3 avg=0.100326s std=0.000414s range=[0.099900s~0.100888s]

===================

List of timings:  [0.10018850000000001, 0.09990049999999995, 0.10088760000000008]

A: I'm pretty late to the party, but this approach was not covered before. When we want to benchmark some piece of code manually, we may first want to find out which of a class's methods eats the execution time, and this is sometimes not obvious. I have built the following metaclass to solve exactly this problem:

from __future__ import annotations

from functools import wraps
from time import time
from typing import Any, Callable, TypeVar, cast

F = TypeVar('F', bound=Callable[..., Any])

def timed_method(func: F, prefix: str | None = None) -> F:
    prefix = (prefix + ' ') if prefix else ''

    @wraps(func)
    def inner(*args, **kwargs):  # type: ignore
        start = time()
        try:
            ret = func(*args, **kwargs)
        except BaseException:
            print(f'[ERROR] {prefix}{func.__qualname__}: {time() - start}')
            raise
        print(f'{prefix}{func.__qualname__}: {time() - start}')
        return ret

    return cast(F, inner)

class TimedClass(type):
    def __new__(
        cls: type[TimedClass],
        name: str,
        bases: tuple[type[type], ...],
        attrs: dict[str, Any],
        **kwargs: Any,
    ) -> TimedClass:
        for name, attr in attrs.items():
            if isinstance(attr, (classmethod, staticmethod)):
                attrs[name] = type(attr)(timed_method(attr.__func__))
            elif isinstance(attr, property):
                attrs[name] = property(
                    timed_method(attr.fget, 'get') if attr.fget is not None else None,
                    timed_method(attr.fset, 'set') if attr.fset is not None else None,
                    timed_method(attr.fdel, 'del') if attr.fdel is not None else None,
                )
            elif callable(attr):
                attrs[name] = timed_method(attr)
        return super().__new__(cls, name, bases, attrs)

It allows usage like the following:

class MyClass(metaclass=TimedClass):
    def foo(self):
        print('foo')

    @classmethod
    def bar(cls):
        print('bar')

    @staticmethod
    def baz():
        print('baz')

    @property
    def prop(self):
        print('prop')

    @prop.setter
    def prop(self, v):
        print('fset')

    @prop.deleter
    def prop(self):
        print('fdel')

c = MyClass()
c.foo()
c.bar()
c.baz()
c.prop
c.prop = 2
del c.prop
MyClass.bar()
MyClass.baz()

It prints:

foo
MyClass.foo: 1.621246337890625e-05
bar
MyClass.bar: 4.5299530029296875e-06
baz
MyClass.baz: 4.291534423828125e-06
prop
get MyClass.prop: 3.814697265625e-06
fset
set MyClass.prop: 3.5762786865234375e-06
fdel
del MyClass.prop: 3.5762786865234375e-06
bar
MyClass.bar: 3.814697265625e-06
baz
MyClass.baz: 4.0531158447265625e-06

It can be combined with other answers to replace time.time with something more precise.

A: Here is an answer using:

a concise context manager to time code snippets
time.perf_counter() to compute the time delta. It should be preferred, as it is not adjustable (neither a sysadmin nor a daemon can change its value), contrary to time.time() (see the doc)
python 3.10+ (because of the typing, but it could easily be adapted to previous versions)

import time
from contextlib import contextmanager
from typing import Iterator

@contextmanager
def time_it() -> Iterator[None]:
    tic: float = time.perf_counter()
    try:
        yield
    finally:
        toc: float = time.perf_counter()
        print(f"Computation time = {1000*(toc - tic):.3f}ms")

An example of how to use it:

# Example: vector dot product computation
with time_it():
    A = B = range(1000000)
    dot = sum(a*b for a, b in zip(A, B))
# Computation time = 95.353ms

Appendix:

import time

# to check adjustability
assert time.get_clock_info('time').adjustable
assert time.get_clock_info('perf_counter').adjustable is False
How do I measure elapsed time in Python?
I want to measure the time it took to execute a function. I couldn't get timeit to work:

import timeit
start = timeit.timeit()
print("hello")
end = timeit.timeit()
print(end - start)
A: Use time.time() to measure the elapsed wall-clock time between two points:

import time

start = time.time()
print("hello")
end = time.time()
print(end - start)

This gives the execution time in seconds.

Another option since Python 3.3 might be to use perf_counter or process_time, depending on your requirements. Before 3.3 it was recommended to use time.clock (thanks Amber). However, it is currently deprecated:

On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of "processor time", depends on that of the C function of the same name.

On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond.

Deprecated since version 3.3: The behaviour of this function depends on the platform: use perf_counter() or process_time() instead, depending on your requirements, to have a well defined behaviour.

A: Use timeit.default_timer instead of timeit.timeit. The former provides the best clock available on your platform and version of Python automatically:

from timeit import default_timer as timer

start = timer()
# ...
end = timer()
print(end - start)  # Time in seconds, e.g. 5.38091952400282

timeit.default_timer is assigned to time.time() or time.clock() depending on OS. On Python 3.3+, default_timer is time.perf_counter() on all platforms. See Python - time.clock() vs. time.time() - accuracy?

See also: Optimizing code, How to optimize for speed
Therefore the result for 1 single execution is the same amount but in microseconds on average.\n\nFor slow operations add a lower number of iterations or you could be waiting forever:\nimport time\n\ntimeit(lambda: time.sleep(1.5), number=1)\n\nOutput is always in seconds for the total number of iterations:\n1.5015795179999714\n\n", "on python3:\nfrom time import sleep, perf_counter as pc\nt0 = pc()\nsleep(1)\nprint(pc()-t0)\n\nelegant and short.\n", "One more way to use timeit:\nfrom timeit import timeit\n\ndef func():\n return 1 + 1\n\ntime = timeit(func, number=1)\nprint(time)\n\n", "To get insight on every function calls recursively, do:\n%load_ext snakeviz\n%%snakeviz\n\nIt just takes those 2 lines of code in a Jupyter notebook, and it generates a nice interactive diagram. For example: \n\nHere is the code. Again, the 2 lines starting with % are the only extra lines of code needed to use snakeviz: \n# !pip install snakeviz\n%load_ext snakeviz\nimport glob\nimport hashlib\n\n%%snakeviz\n\nfiles = glob.glob('*.txt')\ndef print_files_hashed(files):\n for file in files:\n with open(file) as f:\n print(hashlib.md5(f.read().encode('utf-8')).hexdigest())\nprint_files_hashed(files)\n\nIt also seems possible to run snakeviz outside notebooks. More info on the snakeviz website.\n", "How to measure the time between two operations. 
Compare the time of two operations.\nimport time\n\nb = (123*321)*123\nt1 = time.time()\n\nc = ((9999^123)*321)^123\nt2 = time.time()\n\nprint(t2-t1)\n\n7.987022399902344e-05\n", "If you want to be able to time functions conveniently, you can use a simple decorator:\nimport time\n\ndef timing_decorator(func):\n def wrapper(*args, **kwargs):\n start = time.perf_counter()\n original_return_val = func(*args, **kwargs)\n end = time.perf_counter()\n print(\"time elapsed in \", func.__name__, \": \", end - start, sep='')\n return original_return_val\n\n return wrapper\n\nYou can use it on a function that you want to time like this:\n@timing_decorator\ndef function_to_time():\n time.sleep(1)\n\nfunction_to_time()\n\nAny time you call function_to_time, it will print how long it took and the name of the function being timed.\n", "Here's a pretty well documented and fully type hinted decorator I use as a general utility:\nfrom functools import wraps\nfrom time import perf_counter\nfrom typing import Any, Callable, Optional, TypeVar, cast\n\nF = TypeVar(\"F\", bound=Callable[..., Any])\n\n\ndef timer(prefix: Optional[str] = None, precision: int = 6) -> Callable[[F], F]:\n \"\"\"Use as a decorator to time the execution of any function.\n\n Args:\n prefix: String to print before the time taken.\n Default is the name of the function.\n precision: How many decimals to include in the seconds value.\n\n Examples:\n >>> @timer()\n ... def foo(x):\n ... return x\n >>> foo(123)\n foo: 0.000...s\n 123\n >>> @timer(\"Time taken: \", 2)\n ... def foo(x):\n ... 
return x\n >>> foo(123)\n Time taken: 0.00s\n 123\n\n \"\"\"\n def decorator(func: F) -> F:\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n nonlocal prefix\n prefix = prefix if prefix is not None else f\"{func.__name__}: \"\n start = perf_counter()\n result = func(*args, **kwargs)\n end = perf_counter()\n print(f\"{prefix}{end - start:.{precision}f}s\")\n return result\n return cast(F, wrapper)\n return decorator\n\nExample usage:\nfrom timer import timer\n\n\n@timer(precision=9)\ndef takes_long(x: int) -> bool:\n return x in (i for i in range(x + 1))\n\n\nresult = takes_long(10**8)\nprint(result)\n\n\nOutput:\ntakes_long: 4.942629056s\nTrue\n\n\nThe doctests can be checked with:\n$ python3 -m doctest --verbose -o=ELLIPSIS timer.py\n\nAnd the type hints with:\n$ mypy timer.py\n\n", "Kind of a super later response, but maybe it serves a purpose for someone. This is a way to do it which I think is super clean.\nimport time\n\ndef timed(fun, *args):\n s = time.time()\n r = fun(*args)\n print('{} execution took {} seconds.'.format(fun.__name__, time.time()-s))\n return(r)\n\ntimed(print, \"Hello\")\n\nKeep in mind that \"print\" is a function in Python 3 and not Python 2.7. However, it works with any other function. Cheers!\n", "You can use timeit.\nHere is an example on how to test naive_func that takes parameter using Python REPL:\n>>> import timeit \n\n>>> def naive_func(x): \n... a = 0 \n... for i in range(a): \n... a += i \n... return a \n\n>>> def wrapper(func, *args, **kwargs): \n... def wrapper(): \n... return func(*args, **kwargs) \n... return wrapper \n\n>>> wrapped = wrapper(naive_func, 1_000) \n\n>>> timeit.timeit(wrapped, number=1_000_000) \n0.4458435332577161 \n\nYou don't need wrapper function if function doesn't have any parameters. 
\n", "print_elapsed_time function is below\ndef print_elapsed_time(prefix=''):\n e_time = time.time()\n if not hasattr(print_elapsed_time, 's_time'):\n print_elapsed_time.s_time = e_time\n else:\n print(f'{prefix} elapsed time: {e_time - print_elapsed_time.s_time:.2f} sec')\n print_elapsed_time.s_time = e_time\n\nuse it in this way\nprint_elapsed_time()\n.... heavy jobs ...\nprint_elapsed_time('after heavy jobs')\n.... tons of jobs ...\nprint_elapsed_time('after tons of jobs')\n\nresult is\nafter heavy jobs elapsed time: 0.39 sec\nafter tons of jobs elapsed time: 0.60 sec \n\nthe pros and cons of this function is that you don't need to pass start time\n", "We can also convert time into human-readable time.\nimport time, datetime\n\nstart = time.clock()\n\ndef num_multi1(max):\n result = 0\n for num in range(0, 1000):\n if (num % 3 == 0 or num % 5 == 0):\n result += num\n\n print \"Sum is %d \" % result\n\nnum_multi1(1000)\n\nend = time.clock()\nvalue = end - start\ntimestamp = datetime.datetime.fromtimestamp(value)\nprint timestamp.strftime('%Y-%m-%d %H:%M:%S')\n\n", "Although it's not strictly asked in the question, it is quite often the case that you want a simple, uniform way to incrementally measure the elapsed time between several lines of code.\nIf you are using Python 3.8 or above, you can make use of assignment expressions (a.k.a. 
the walrus operator) to achieve this in a fairly elegant way:\nimport time\n\nstart, times = time.perf_counter(), {}\n\nprint(\"hello\")\ntimes[\"print\"] = -start + (start := time.perf_counter())\n\ntime.sleep(1.42)\ntimes[\"sleep\"] = -start + (start := time.perf_counter())\n\na = [n**2 for n in range(10000)]\ntimes[\"pow\"] = -start + (start := time.perf_counter())\n\nprint(times)\n\n=>\n{'print': 2.193450927734375e-05, 'sleep': 1.4210970401763916, 'power': 0.005671024322509766}\n\n", "I made a library for this, if you want to measure a function you can just do it like this \n\nfrom pythonbenchmark import compare, measure\nimport time\n\na,b,c,d,e = 10,10,10,10,10\nsomething = [a,b,c,d,e]\n\n@measure\ndef myFunction(something):\n time.sleep(0.4)\n\n@measure\ndef myOptimizedFunction(something):\n time.sleep(0.2)\n\nmyFunction(input)\nmyOptimizedFunction(input)\n\nhttps://github.com/Karlheinzniebuhr/pythonbenchmark \n", "This unique class-based approach offers a printable string representation, customizable rounding, and convenient access to the elapsed time as a string or a float. 
It was developed with Python 3.7.\nimport datetime\nimport timeit\n\n\nclass Timer:\n \"\"\"Measure time used.\"\"\"\n # Ref: https://stackoverflow.com/a/57931660/\n\n def __init__(self, round_ndigits: int = 0):\n self._round_ndigits = round_ndigits\n self._start_time = timeit.default_timer()\n\n def __call__(self) -> float:\n return timeit.default_timer() - self._start_time\n\n def __str__(self) -> str:\n return str(datetime.timedelta(seconds=round(self(), self._round_ndigits)))\n\nUsage:\n# Setup timer\n>>> timer = Timer()\n\n# Access as a string\n>>> print(f'Time elapsed is {timer}.')\nTime elapsed is 0:00:03.\n>>> print(f'Time elapsed is {timer}.')\nTime elapsed is 0:00:04.\n\n# Access as a float\n>>> timer()\n6.841332235\n>>> timer()\n7.970274425\n\n", "As a lambda, obtain time elapsed and time stamps:\nimport datetime\nt_set = lambda: datetime.datetime.now().astimezone().replace(microsecond=0)\nt_diff = lambda t: str(t_set() - t)\nt_stamp = lambda t=None: str(t) if t else str(t_set())\n\nIn practice:\n>>> \n>>> t_set()\ndatetime.datetime(2021, 3, 21, 1, 25, 17, tzinfo=datetime.timezone(datetime.timedelta(days=-1, seconds=61200), 'PDT'))\n>>> t = t_set()\n>>> t_diff(t)\n'0:00:14'\n>>> t_diff(t)\n'0:00:23'\n>>> t_stamp()\n'2021-03-21 01:25:57-07:00'\n>>> t_stamp(t)\n'2021-03-21 01:25:22-07:00'\n>>> \n\n", "import time\n\ndef getElapsedTime(startTime, units):\n elapsedInSeconds = time.time() - startTime\n if units == 'sec':\n return elapsedInSeconds\n if units == 'min':\n return elapsedInSeconds/60\n if units == 'hour':\n return elapsedInSeconds/(60*60)\n\n", "Measure execution time of small code snippets.\n\nUnit of time: measured in seconds as a float\n\nimport timeit\nt = timeit.Timer('li = list(map(lambda x:x*2,[1,2,3,4,5]))')\nt.timeit()\nt.repeat()\n>[1.2934070999999676, 1.3335035000000062, 1.422568500000125]\n\n\nThe repeat() method is a convenience to call timeit() multiple times and return a list of results.\nrepeat(repeat=3)¶\n\nWith this list we can 
take a mean of all times.\nBy default, timeit() temporarily turns off garbage collection during the timing. time.Timer() solves this problem.\nPros:\n\ntimeit.Timer() makes independent timings more comparable. The gc may be an important component of the performance of the function being measured. If so, gc(garbage collector) can be re-enabled as the first statement in the setup string. For example:\n\ntimeit.Timer('li = list(map(lambda x:x*2,[1,2,3,4,5]))',setup='gc.enable()')\n\n\nSource Python Docs!\n", "based on the contextmanager solution given by https://stackoverflow.com/a/30024601/5095636, hereunder the lambda free version, as flake8 warns on the usage of lambda as per E731:\nfrom contextlib import contextmanager\nfrom timeit import default_timer\n\n@contextmanager\ndef elapsed_timer():\n start_time = default_timer()\n\n class _Timer():\n start = start_time\n end = default_timer()\n duration = end - start\n\n yield _Timer\n\n end_time = default_timer()\n _Timer.end = end_time\n _Timer.duration = end_time - start_time\n\ntest:\nfrom time import sleep\n\nwith elapsed_timer() as t:\n print(\"start:\", t.start)\n sleep(1)\n print(\"end:\", t.end)\n\nt.start\nt.end\nt.duration\n\n", "For Python 3\nIf you use the time module, you can get the current timestamp, and then execute your code, and get the timestamp again. Now, the time taken will be the first timestamp minus the second timestamp:\nimport time\n\nfirst_stamp = int(round(time.time() * 1000))\n\n# YOUR CODE GOES HERE\ntime.sleep(5)\n\nsecond_stamp = int(round(time.time() * 1000))\n\n# Calculate the time taken in milliseconds\ntime_taken = second_stamp - first_stamp\n\n# To get time in seconds:\ntime_taken_seconds = round(time_taken / 1000)\nprint(f'{time_taken_seconds} seconds or {time_taken} milliseconds')\n\n", "The timeit module is good for timing a small piece of Python code. 
It can be used at least in three forms: \n1- As a command-line module\npython2 -m timeit 'for i in xrange(10): oct(i)' \n\n2- For a short code, pass it as arguments.\nimport timeit\ntimeit.Timer('for i in xrange(10): oct(i)').timeit()\n\n3- For longer code as:\nimport timeit\ncode_to_test = \"\"\"\na = range(100000)\nb = []\nfor i in a:\n b.append(i*2)\n\"\"\"\nelapsed_time = timeit.timeit(code_to_test, number=100)/100\nprint(elapsed_time)\n\n", "Time can also be measured by %timeit magic function as follow:\n%timeit -t -n 1 print(\"hello\")\n\nn 1 is for running function only 1 time.\n", "You can use Benchmark Timer (disclaimer: I'm the author):\n\nBenchmark Timer\nUse the BenchmarkTimer class to measure the time it takes to execute some piece of code. \nThis gives more flexibility than the built-in timeit function, and runs in the same scope as the rest of your code.\nInstallation\npip install git+https://github.com/michaelitvin/benchmark-timer.git@main#egg=benchmark-timer\n\nUsage\nSingle iteration example\nfrom benchmark_timer import BenchmarkTimer\nimport time\n\nwith BenchmarkTimer(name=\"MySimpleCode\") as tm, tm.single_iteration():\n time.sleep(.3)\n\nOutput:\nBenchmarking MySimpleCode...\nMySimpleCode benchmark: n_iters=1 avg=0.300881s std=0.000000s range=[0.300881s~0.300881s]\n\nMultiple iterations example\nfrom benchmark_timer import BenchmarkTimer\nimport time\n\nwith BenchmarkTimer(name=\"MyTimedCode\", print_iters=True) as tm:\n for timing_iteration in tm.iterations(n=5, warmup=2):\n with timing_iteration:\n time.sleep(.1)\n\nprint(\"\\n===================\\n\")\nprint(\"List of timings: \", list(tm.timings.values()))\n\nOutput:\nBenchmarking MyTimedCode...\n[MyTimedCode] iter=0 took 0.099755s (warmup)\n[MyTimedCode] iter=1 took 0.100476s (warmup)\n[MyTimedCode] iter=2 took 0.100189s \n[MyTimedCode] iter=3 took 0.099900s \n[MyTimedCode] iter=4 took 0.100888s \nMyTimedCode benchmark: n_iters=3 avg=0.100326s std=0.000414s 
range=[0.099900s~0.100888s]\n\n===================\n\nList of timings: [0.10018850000000001, 0.09990049999999995, 0.10088760000000008]\n\n\n", "I'm pretty late to the party, but this approach was not covered before. When we want to benchmark manually some piece of code, we may want to find out first which of class methods eats the execution time, and this is sometimes not obvious. I have built the following metaclass to solve exactly this problem:\nfrom __future__ import annotations\n\nfrom functools import wraps\nfrom time import time\nfrom typing import Any, Callable, TypeVar, cast\n\nF = TypeVar('F', bound=Callable[..., Any])\n\n\ndef timed_method(func: F, prefix: str | None = None) -> F:\n prefix = (prefix + ' ') if prefix else ''\n\n @wraps(func)\n def inner(*args, **kwargs): # type: ignore\n start = time()\n try:\n ret = func(*args, **kwargs)\n except BaseException:\n print(f'[ERROR] {prefix}{func.__qualname__}: {time() - start}')\n raise\n \n print(f'{prefix}{func.__qualname__}: {time() - start}')\n return ret\n\n return cast(F, inner)\n\n\nclass TimedClass(type):\n def __new__(\n cls: type[TimedClass],\n name: str,\n bases: tuple[type[type], ...],\n attrs: dict[str, Any],\n **kwargs: Any,\n ) -> TimedClass:\n for name, attr in attrs.items():\n if isinstance(attr, (classmethod, staticmethod)):\n attrs[name] = type(attr)(timed_method(attr.__func__))\n elif isinstance(attr, property):\n attrs[name] = property(\n timed_method(attr.fget, 'get') if attr.fget is not None else None,\n timed_method(attr.fset, 'set') if attr.fset is not None else None,\n timed_method(attr.fdel, 'del') if attr.fdel is not None else None,\n )\n elif callable(attr):\n attrs[name] = timed_method(attr)\n\n return super().__new__(cls, name, bases, attrs)\n\nIt allows usage like the following:\nclass MyClass(metaclass=TimedClass):\n def foo(self): \n print('foo')\n \n @classmethod\n def bar(cls): \n print('bar')\n \n @staticmethod\n def baz(): \n print('baz')\n \n @property\n def 
prop(self): \n print('prop')\n \n @prop.setter\n def prop(self, v): \n print('fset')\n \n @prop.deleter\n def prop(self): \n print('fdel')\n\n\nc = MyClass()\n\nc.foo()\nc.bar()\nc.baz()\nc.prop\nc.prop = 2\ndel c.prop\n\nMyClass.bar()\nMyClass.baz()\n\nIt prints:\nfoo\nMyClass.foo: 1.621246337890625e-05\nbar\nMyClass.bar: 4.5299530029296875e-06\nbaz\nMyClass.baz: 4.291534423828125e-06\nprop\nget MyClass.prop: 3.814697265625e-06\nfset\nset MyClass.prop: 3.5762786865234375e-06\nfdel\ndel MyClass.prop: 3.5762786865234375e-06\nbar\nMyClass.bar: 3.814697265625e-06\nbaz\nMyClass.baz: 4.0531158447265625e-06\n\nIt can be combined with other answers to replace time.time with something more precise.\n", "Here is an answer using:\n\na concise context manager to time code snippets\ntime.perf_counter() to compute time delta. It should be prefered as it is not adjustable (neither a sysadmin nor a daemon can change its value) contrary to time.time() (see doc)\npython 3.10+ (because of typing but could be easily adapted to previous versions)\n\nimport time\nfrom contextlib import contextmanager\nfrom typing import Iterator\n\n@contextmanager\ndef time_it() -> Iterator[None]:\n tic: float = time.perf_counter()\n try:\n yield\n finally:\n toc: float = time.perf_counter()\n print(f\"Computation time = {1000*(toc - tic):.3f}ms\")\n\nAn example how to use it:\n# Example: vector dot product computation\nwith time_it():\n A = B = range(1000000)\n dot = sum(a*b for a,b in zip(A,B))\n# Computation time = 95.353ms\n\nAppendix\nimport time\n\n# to check adjustability\nassert time.get_clock_info('time').adjustable\nassert time.get_clock_info('perf_counter').adjustable is False\n\n" ]
[ 2301, 1058, 223, 209, 105, 90, 70, 65, 60, 49, 31, 21, 21, 21, 21, 19, 18, 14, 13, 13, 12, 10, 10, 9, 8, 8, 7, 7, 6, 5, 5, 3, 3, 2, 1, 0, 0, 0, 0, 0 ]
[ "In addition to %timeit in ipython you can also use %%timeit for multi-line code snippets:\nIn [1]: %%timeit\n ...: complex_func()\n ...: 2 + 2 == 5\n ...:\n ...:\n\n1 s ± 1.93 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nAlso it can be used in jupyter notebook the same way, just put magic %%timeit at the beginning of cell.\n" ]
[ -3 ]
[ "measure", "performance", "python", "timeit" ]
stackoverflow_0007370801_measure_performance_python_timeit.txt
Q: Using python library Rasterio to create a subset of a TIFF image and then display it and save it? I have two Raster images, one from band 4 with a B4 at the end and another from band 5 with B5 at the end. I want to subset the B5 raster to 800x600, then display it and save it as a GeoTiff. Then I want to compute the NDVI (I assume I'll need both the B4 and B5 to do this, but not sure). Then I want to display the NDVI subset of the B5 raster. Display it and save it as a GeoTiff. How would I create something like an 800 x 600 pixel subset of a TIFF raster image? I want to also take that TIFF and generate an NDVI image for that subset. NOTE: I am working with a Landsat image. The image has B5 at the end of the file title. What I've done so far: import rasterio from rasterio.windows import Window import matplotlib.pyplot as plt # for later use with rasterio.open('MyRasterImage.tif') as src: w = src.read(1, window=Window(0, 0, 800, 600)) I want to display this using Spyder or Jupyter notebooks. So I thought to use matplotlib and wrote the following code: # Plot plt.imshow(w) plt.show() Doing this generates an 800x600 matplotlib window, but it's all purple; I'm not sure why it's producing this. Now I want to be able to display this 800x600 image. Then after that I want to perform an NDVI on that subset 800x600 image. Then display the subset 800x600 image with NDVI showing. I know the formula is: NDVI = (NIR - red) / (NIR + red) But how do I extract out NIR and red here from this single Landsat image? My attempt at that: band1 = dataset.read(1) band2 = dataset.read(2) band3 = dataset.read(3) print(band[2]) When I run that code for the bands I get the error: rasterio indexerror: band index 2 out of range (not in (1,)) When I run this code: print(w.count) It returns '1'. So does this mean that the Landsat image only has one band? But in order to do NDVI don't I need 3 bands? I am thinking of writing some code like this to get the NDVI from that raster. 
But I am not sure how to go about extracting the bands: # We handle the connections with "with" with rasterio.open(bands[0]) as src: b3 = src.read(1) with rasterio.open(bands[1]) as src: b4 = src.read(1) # Allow division by zero numpy.seterr(divide='ignore', invalid='ignore') # Calculate NDVI ndvi = (b4.astype(float) - b3.astype(float)) / (b4 + b3) This code doesn't work because bands isn't defined as anything, so I don't know how to define bands to get the NDVI. After this I am not sure how to go about both displaying and saving the rendered image. A: You only read the first band when you used the read method. Also, things might be purple because of no data. To check: import numpy as np np.any(raster == -9999) if True then you have no data To fix: NODATA = -9999 if np.any(raster == NODATA): raster[raster == NODATA] = np.nan
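The NDVI formula quoted above can be sanity-checked without any raster files. Below is a minimal pure-Python sketch of the arithmetic; the `nir` and `red` lists are made-up illustrative pixel values, not data from the actual Landsat bands, and with rasterio you would apply the same expression to the float-cast arrays read from the B5 and B4 files:

```python
def ndvi(nir, red):
    # NDVI = (NIR - red) / (NIR + red), with a guard against division by zero
    out = []
    for n, r in zip(nir, red):
        denom = n + r
        out.append((n - r) / denom if denom != 0 else 0.0)
    return out

# Made-up reflectance values for a near-infrared and a red band
nir = [0.5, 0.4, 0.3]
red = [0.1, 0.4, 0.3]
print(ndvi(nir, red))
```

The zero-denominator guard plays the same role as the numpy.seterr call in the snippet above.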
Using python library Rasterio to create a subset of a TIFF image and then display it and save it?
I have two Raster images, one from band 4 with a B4 at the end and another from band 5 with B5 at the end. I want to subset the B5 raster to 800x600, then display it and save it as a GeoTiff. Then I want to compute the NDVI (I assume I'll need both the B4 and B5 to do this, but not sure). Then I want to display the NDVI subset of the B5 raster. Display it and save it as a GeoTiff. How would I create something like an 800 x 600 pixel subset of a TIFF raster image? I want to also take that TIFF and generate an NDVI image for that subset. NOTE: I am working with a Landsat image. The image has B5 at the end of the file title. What I've done so far: import rasterio from rasterio.windows import Window import matplotlib.pyplot as plt # for later use with rasterio.open('MyRasterImage.tif') as src: w = src.read(1, window=Window(0, 0, 800, 600)) I want to display this using Spyder or Jupyter notebooks. So I thought to use matplotlib and wrote the following code: # Plot plt.imshow(w) plt.show() Doing this generates an 800x600 matplotlib window, but it's all purple; I'm not sure why it's producing this. Now I want to be able to display this 800x600 image. Then after that I want to perform an NDVI on that subset 800x600 image. Then display the subset 800x600 image with NDVI showing. I know the formula is: NDVI = (NIR - red) / (NIR + red) But how do I extract out NIR and red here from this single Landsat image? My attempt at that: band1 = dataset.read(1) band2 = dataset.read(2) band3 = dataset.read(3) print(band[2]) When I run that code for the bands I get the error: rasterio indexerror: band index 2 out of range (not in (1,)) When I run this code: print(w.count) It returns '1'. So does this mean that the Landsat image only has one band? But in order to do NDVI don't I need 3 bands? I am thinking of writing some code like this to get the NDVI from that raster. 
But I am not sure how to go about extracting the bands: # We handle the connections with "with" with rasterio.open(bands[0]) as src: b3 = src.read(1) with rasterio.open(bands[1]) as src: b4 = src.read(1) # Allow division by zero numpy.seterr(divide='ignore', invalid='ignore') # Calculate NDVI ndvi = (b4.astype(float) - b3.astype(float)) / (b4 + b3) This code doesn't work because bands isn't defined as anything, so I don't know how to define bands to get the NDVI. After this I am not sure how to go about both displaying and saving the rendered image.
[ "You only read the first band when you used the read method. Also, things might be purple because of no data. To check:\nimport numpy as np\nnp.any(raster == -9999)\n\nif True then you have no data\nTo fix:\nNODATA = -9999\nif np.any(raster == NODATA):\n raster[raster == NODATA] = np.nan \n\n" ]
[ 0 ]
[]
[]
[ "python", "rasterio" ]
stackoverflow_0053362947_python_rasterio.txt
Q: Possible data Ingest count issue in FeatureStore I see a mistake: the count of values in the FeatureStore statistics does not match the number of ingested values; see the sample ... project_name = 'test-load' project = mlrun.get_or_create_project(project_name, context='./', user_project=True) .. fset = fstore.FeatureSet("test01", entities=['id']) # ingest 3 values fstore.ingest(fset, CSVSource("mycsv", path="a1.csv"), overwrite=False) # ingest 3 values fstore.ingest(fset, CSVSource("mycsv", path="a2.csv"), overwrite=False) and I see only 3 values in the statistics (see the screenshot). Do you see any issue? A: The key is that the statistics reflect the data from the last ingestion ONLY. This means the per-ingestion value counts are correct; you can check the total number of values via e.g. a FeatureVector, see the sample code ... features = ["test01.F_2"] vector = fstore.FeatureVector("test_vector",features=features,with_indexes=True) resp = fstore.get_offline_features(vector) # Return values based on vector definition resp.to_dataframe()
Possible data Ingest count issue in FeatureStore
I see a mistake: the count of values in the FeatureStore statistics does not match the number of ingested values; see the sample ... project_name = 'test-load' project = mlrun.get_or_create_project(project_name, context='./', user_project=True) .. fset = fstore.FeatureSet("test01", entities=['id']) # ingest 3 values fstore.ingest(fset, CSVSource("mycsv", path="a1.csv"), overwrite=False) # ingest 3 values fstore.ingest(fset, CSVSource("mycsv", path="a2.csv"), overwrite=False) and I see only 3 values in the statistics (see the screenshot). Do you see any issue?
[ "The key is that the statistics reflect the data from the last ingestion ONLY. This means the per-ingestion value counts are correct; you can check the total number of values via e.g. a FeatureVector, see the sample code\n...\nfeatures = [\"test01.F_2\"]\n\nvector = fstore.FeatureVector(\"test_vector\",features=features,with_indexes=True)\nresp = fstore.get_offline_features(vector)\n\n# Return values based on vector definition\nresp.to_dataframe()\n\n" ]
[ 4 ]
[]
[]
[ "feature_store", "mlrun", "python" ]
stackoverflow_0074502045_feature_store_mlrun_python.txt
Q: Python vs Javascript execution time I tried solving Maximum Subarray using both JavaScript (Node.js) and Python, with a brute-force algorithm. Here's my code: Using python: from datetime import datetime from random import randint arr = [randint(-1000,1000) for i in range(1000)] def bruteForce(a): l = len(a) max = 0 for i in range(l): sum = 0 for j in range(i, l): sum += a[j] if(sum > max): max = sum return max start = datetime.now() bruteForce(arr) end = datetime.now() print(format(end-start)) And Javascript: function randInt(start, end) { return Math.floor(Math.random() * (end - start + 1)) } var arr = Array(1000).fill(randInt(-1000, 1000)) function bruteForce(arr) { var max = 0 for (let i = 0; i < arr.length; i++) { var sum = 0 for (let j = i; j < arr.length; j++) { sum += arr[j] max = Math.max(max, sum) } } return max } var start = performance.now() bruteForce(arr) var end = performance.now() console.log(end - start) JavaScript got a result of about 0.187 seconds, while Python took 4.75s - about 25 times slower. Is my code not optimized, or is Python really that much slower than JavaScript? A: Yes it is. All modern JS engines are quite fast, and significantly faster than Python. But that doesn’t always matter, the context is important when deciding between languages based on performance. A: Python is not per se slower than JavaScript, it depends on the implementation. Here are the results comparing Node and PyPy, which also uses a JIT: > /pypy39/python brute.py 109.8594 ms N= 10000 result= 73682 > node brute.js 167.4442000091076 ms N= 10000 result= 67495 So we could even say "python is somewhat faster" ... 
And if we use Cython, with a few type-hints, it will be again a lot faster - actual full C speed: > cythonize -a -i brutec.pyx > python -c "import brutec" 69.28919999999998 ms N= 10000 result= 52040 To make the comparison reasonable, I fixed a few issues in your scripts: Fix: the js script filled an array with all the same values from a single random Does the same basic kind of looping in Python - instead of using the range iterator (otherwise its a little slower) Use the same time format and increase the array length to 10000 - otherwise the times are too small regarding resolution and thread switching jitter Python code: from time import perf_counter as clock from random import randint N = 10000 arr = [randint(-1000,1000) for i in range(N)] def bruteForce(a): l = len(a) max = 0 i = 0 while i < l: sum = 0 j = i while j < l: sum += a[j] if sum > max: max = sum j += 1 i += 1 return max start = clock() r = bruteForce(arr) end = clock() print((end - start) * 1000, 'ms', 'N=', N, 'result=', r) ##print(arr[:10]) JS code: var start = -1000, end = 1000, N=10000 var arr = Array.from({length: N}, () => Math.floor(Math.random() * (end - start + 1) + start)) function bruteForce(arr) { var max = 0 for (let i = 0; i < arr.length; i++) { var sum = 0 for (let j = i; j < arr.length; j++) { sum += arr[j]; max = Math.max(max, sum) //~ if (sum > max) max = sum; } } return max } var start = performance.now() r = bruteForce(arr) var end = performance.now() console.log(end - start, 'ms', 'N=', N, 'result=', r) //~ console.log(arr.slice(0, 10)) Code for Cython (or Python), enriched with a few type-hints: import cython from time import perf_counter as clock from random import randint N = 10000 arr = [randint(-1000,1000) for i in range(N)] def bruteForce(arr): l: cython.int = len(arr) assert l <= 10000 a: cython.int[10000] = arr # copies mem from Python array max: cython.int = 0 i: cython.int = 0 while i < l: sum: cython.int = 0 j: cython.int = i while j < l: sum += a[j] if sum > max: max 
= sum j += 1 i += 1 return max start = clock() r = bruteForce(arr) end = clock() print((end - start) * 1000, 'ms', 'N=', N, 'result=', r) ##print(arr[:10]) (Done on a slow notebook)
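The timing methodology in this answer can be reduced to a minimal stand-alone sketch (array size and seed here are arbitrary choices for the demo; the function body mirrors the loop version above):

```python
from time import perf_counter
from random import seed, randint

def brute_force(a):
    # O(n^2) scan over all subarrays, tracking the best sum seen so far
    best = 0
    n = len(a)
    for i in range(n):
        s = 0
        for j in range(i, n):
            s += a[j]
            if s > best:
                best = s
    return best

seed(0)                                    # fixed seed so repeated runs are comparable
arr = [randint(-1000, 1000) for _ in range(2000)]

start = perf_counter()
result = brute_force(arr)
elapsed_ms = (perf_counter() - start) * 1000
print(f"{elapsed_ms:.1f} ms, result={result}")
```

Note that, like the original, this returns 0 for an all-negative array, since `best` starts at 0.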
Python vs Javascript execution time
I tried solving Maximum Subarray using both JavaScript (Node.js) and Python, with a brute-force algorithm. Here's my code: Using python: from datetime import datetime from random import randint arr = [randint(-1000,1000) for i in range(1000)] def bruteForce(a): l = len(a) max = 0 for i in range(l): sum = 0 for j in range(i, l): sum += a[j] if(sum > max): max = sum return max start = datetime.now() bruteForce(arr) end = datetime.now() print(format(end-start)) And JavaScript: function randInt(start, end) { return Math.floor(Math.random() * (end - start + 1)) } var arr = Array(1000).fill(randInt(-1000, 1000)) function bruteForce(arr) { var max = 0 for (let i = 0; i < arr.length; i++) { var sum = 0 for (let j = i; j < arr.length; j++) { sum += arr[j] max = Math.max(max, sum) } } return max } var start = performance.now() bruteForce(arr) var end = performance.now() console.log(end - start) JavaScript got a result of about 0.187 seconds, while Python got 4.75s - about 25 times slower. Is my code not optimized, or is Python really that much slower than JavaScript?
[ "Yes it is. All modern JS engines are quite fast, and significantly faster than Python. But that doesn’t always matter, the context is important when deciding between languages based on performance.\n", "Python is not per se slower than Javascript, it depends on the implementation.\nHere the results comparing node and PyPy which also uses JIT:\n> /pypy39/python brute.py\n109.8594 ms N= 10000 result= 73682\n> node brute.js\n167.4442000091076 ms N= 10000 result= 67495\n\nSo we could even say \"python is somewhat faster\" ...\nAnd if we use Cython, with a few type-hints, it will be again a lot faster - actual full C speed:\n> cythonize -a -i brutec.pyx\n> python -c \"import brutec\"\n69.28919999999998 ms N= 10000 result= 52040\n\n\nTo make the comparison reasonable, I fixed a few issues in your scripts:\n\nFix: the js script filled an array with all the same values from a single random\nDoes the same basic kind of looping in Python - instead of using the range iterator (otherwise its a little slower)\nUse the same time format and increase the array length to 10000 - otherwise the times are too small regarding resolution and thread switching jitter\n\nPython code:\nfrom time import perf_counter as clock\nfrom random import randint\n\nN = 10000\narr = [randint(-1000,1000) for i in range(N)]\n\ndef bruteForce(a):\n l = len(a)\n max = 0\n i = 0\n while i < l:\n sum = 0\n j = i\n while j < l:\n sum += a[j]\n if sum > max:\n max = sum\n j += 1\n i += 1\n return max\n\nstart = clock()\nr = bruteForce(arr)\nend = clock()\nprint((end - start) * 1000, 'ms', 'N=', N, 'result=', r)\n##print(arr[:10])\n\nJS code:\nvar start = -1000, end = 1000, N=10000\nvar arr = Array.from({length: N}, \n () => Math.floor(Math.random() * (end - start + 1) + start))\n\nfunction bruteForce(arr) {\n var max = 0\n for (let i = 0; i < arr.length; i++) {\n var sum = 0\n for (let j = i; j < arr.length; j++) {\n sum += arr[j];\n max = Math.max(max, sum)\n //~ if (sum > max) max = sum;\n }\n }\n return 
max\n}\n\nvar start = performance.now()\nr = bruteForce(arr)\nvar end = performance.now()\nconsole.log(end - start, 'ms', 'N=', N, 'result=', r)\n//~ console.log(arr.slice(0, 10))\n\nCode for Cython (or Python), enriched with a few type-hints:\nimport cython\nfrom time import perf_counter as clock\nfrom random import randint\n\nN = 10000\narr = [randint(-1000,1000) for i in range(N)]\n\ndef bruteForce(arr):\n l: cython.int = len(arr)\n assert l <= 10000\n a: cython.int[10000] = arr # copies mem from Python array\n max: cython.int = 0\n i: cython.int = 0\n while i < l:\n sum: cython.int = 0\n j: cython.int = i\n while j < l:\n sum += a[j]\n if sum > max:\n max = sum\n j += 1\n i += 1\n return max\n\nstart = clock()\nr = bruteForce(arr)\nend = clock()\nprint((end - start) * 1000, 'ms', 'N=', N, 'result=', r)\n##print(arr[:10])\n\n\n(Done on a slow notebook)\n" ]
[ 1, 1 ]
[]
[]
[ "javascript", "python" ]
stackoverflow_0071679094_javascript_python.txt
Q: Azure function deployment of single Python script and process of installation of requirements.txt in Azure Functions I am completely new to Azure. I recently deployed my Python Script on Azure Functions (HTTP). It worked completely fine for me. The problem I faced is when my Python script needed some packages to be installed like (pandas, psycopy2). Although I put them in requirements.txt file. And after deployment requirements.txt is stored in root directory (same as of host.json) but I am getting import error. I don't really know how to install these packages in azure function. Any help would be really really appreciated. I tried deploying python script using multiple techniques but none of them worked for me, I just have one python script and I need to install requirement.txt file in azure function. Please help me with this problem. A: I've installed a package ('requests') and ran http trigger function locally by creating a new function app with python3.9. I was able to deploy it to Azure and triggered successfully without any error. Note: Make sure that while adding any package in requirements.txt file, install package in the project directory folder itself by giving: pip install --target="<local project directory path>/.python_packages/lib/site-packages" -r requirements.txt Here, I've taken requests == 2.19.1 latest version package and imported in init.py requirements.txt: azure-functions requests==2.19.1 While executing locally, I received the desired outcome:- Deployed to Azure: requirements.txt file after deploying it to Azure: Received "200 Ok" response after test/run: You can check here Refer MSDoc
Azure function deployment of single Python script and process of installation of requirements.txt in Azure Functions
I am completely new to Azure. I recently deployed my Python script on Azure Functions (HTTP). It worked completely fine for me. The problem I faced is that my Python script needed some packages to be installed, like pandas and psycopg2. Although I put them in the requirements.txt file, and after deployment requirements.txt is stored in the root directory (same as host.json), I am getting an import error. I don't really know how to install these packages in an Azure Function. Any help would be really appreciated. I tried deploying the Python script using multiple techniques, but none of them worked for me. I just have one Python script, and I need the requirements.txt file installed in the Azure Function. Please help me with this problem.
[ "I've installed a package ('requests') and ran http trigger function locally by creating a new function app with python3.9. I was able to deploy it to Azure and triggered successfully without any error.\nNote: Make sure that while adding any package in requirements.txt file, install package in the project directory folder itself by giving:\npip install --target=\"<local project directory path>/.python_packages/lib/site-packages\" -r requirements.txt\n\nHere, I've taken requests == 2.19.1 latest version package and imported in init.py \nrequirements.txt:\nazure-functions\nrequests==2.19.1\n\nWhile executing locally, I received the desired outcome:-\n\nDeployed to Azure:\n\nrequirements.txt file after deploying it to Azure:\n\nReceived \"200 Ok\" response after test/run:\n\nYou can check here Refer MSDoc\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_functions", "azure_pipelines_yaml", "python", "requirements.txt" ]
stackoverflow_0074492187_azure_azure_functions_azure_pipelines_yaml_python_requirements.txt.txt
Q: Python get JSON Response from XHR Request I've been trying for some time to build a get request using requests and other python tools, which should actually return a JSON. To get closer to the topic, I first try to reproduce the whole thing in the browser. Thereby I already come to limits. It's about this URL: https://unverpackt-verband.de/map When I look at the network analysis in Firefox, I see the desired json under Response. But the Request section is empty. Now I would appreciate help on how to find/build a suitable request to get and process this JSON in an automated way using python. EDIT what has been tried so far: requests.get("https://api.unverpacktverband.de/map").json() Outcome: "TooManyRedirects: Exceeded 30 redirects." Error A: I'm not sure if you're looking for the following? import requests import pandas as pd headers = {'accept': 'application/json, text/plain, */*', 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36' } url = 'https://api.unverpackt-verband.de/map' r = requests.get(url, headers=headers) df = pd.json_normalize(r.json()) print(df) Result in terminal: id type lat lng 0 1985 storeNoMember 47.558307 9.709220 1 1984 plannedMember 48.941530 8.405472 2 1983 storeMember 49.999355 8.711121 3 1982 mobilePlannedMember 51.838272 6.614867 4 1981 plannedMember 52.841810 7.519561 ... ... ... ... ... 631 850 storeNoMember 50.713607 7.044930 632 849 storeNoMember 51.486631 7.214458 633 847 storeMember 49.898628 10.896140 634 846 storeMember 49.840614 7.861260 635 845 storeNoMember 52.201666 8.788376 636 rows × 4 columns
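The JSON handling itself — separate from the network request — can be sketched offline with a small sample shaped like the API's payload (the two records and field names below are copied from the answer's output table; no network access is needed):

```python
import json

# Sample shaped like the API response (values taken from the answer's table)
payload = '''[
  {"id": 1985, "type": "storeNoMember", "lat": 47.558307, "lng": 9.709220},
  {"id": 1984, "type": "plannedMember", "lat": 48.941530, "lng": 8.405472}
]'''

stores = json.loads(payload)        # what r.json() would return for the real request
for s in stores:
    print(f"{s['id']}: {s['type']} at ({s['lat']}, {s['lng']})")
```

With the real endpoint, `stores = requests.get(url, headers=headers).json()` replaces the literal string and the rest is unchanged.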
Python get JSON Response from XHR Request
I've been trying for some time to build a GET request using requests and other Python tools, which should return JSON. To get closer to the topic, I first tried to reproduce the whole thing in the browser, and already hit limits there. It's about this URL: https://unverpackt-verband.de/map When I look at the network analysis in Firefox, I see the desired JSON under Response, but the Request section is empty. Now I would appreciate help on how to find/build a suitable request to get and process this JSON in an automated way using Python. EDIT - what has been tried so far: requests.get("https://api.unverpacktverband.de/map").json() Outcome: "TooManyRedirects: Exceeded 30 redirects." error
[ "I'm not sure if you're looking for the following?\nimport requests\nimport pandas as pd\n\n\nheaders = {'accept': 'application/json, text/plain, */*',\n 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36'\n}\n\nurl = 'https://api.unverpackt-verband.de/map'\n\nr = requests.get(url, headers=headers)\n\ndf = pd.json_normalize(r.json())\nprint(df)\n\nResult in terminal:\n id type lat lng\n0 1985 storeNoMember 47.558307 9.709220\n1 1984 plannedMember 48.941530 8.405472\n2 1983 storeMember 49.999355 8.711121\n3 1982 mobilePlannedMember 51.838272 6.614867\n4 1981 plannedMember 52.841810 7.519561\n... ... ... ... ...\n631 850 storeNoMember 50.713607 7.044930\n632 849 storeNoMember 51.486631 7.214458\n633 847 storeMember 49.898628 10.896140\n634 846 storeMember 49.840614 7.861260\n635 845 storeNoMember 52.201666 8.788376\n636 rows × 4 columns\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x", "python_requests", "web_scraping" ]
stackoverflow_0074501818_python_python_3.x_python_requests_web_scraping.txt
Q: How is PyTorch's Class BCEWithLogitsLoss exactly implemented? According to the PyTorch documentation, the advantage of the class BCEWithLogitsLoss() is that one can use the log-sum-exp trick for numerical stability. If we use the class BCEWithLogitsLoss() with the parameter reduction set to None, they have a formula for that: I now simplified the terms, and obtain after some lines of calculation: I was curious to see whether this is the way how the Source code does it, but I couldn't find it.. The only code they have is this: Code for BCEWithLogitsLoss A: nn.BCEWithLogitsLoss is actually just cross entropy loss that comes inside a sigmoid function. It may be used in case your model's output layer is not wrapped with sigmoid. Typically used with the raw output of a single output layer neuron. Simply put, your model's output say pred will be a raw value. In order to get probability, you will have to use torch.sigmoid(pred). (To get actual class labels, you need torch.round(torch.sigmoid(pred)).) However, you don't need to do anything like that (i.e take sigmoid) when you use nn.BCEWithLogitsLoss. Here you just have to do the following- criterion = nn.BCEWithLogitsLoss() loss = criterion(pred, target) # pred is just raw nn output Hence, coming to implementation part, criterion accepts two torch tensors - one being the raw nn outputs, the other being the true class labels, then wraps the first using sigmoid - for each element in the tensor and then calculates Cross Entropy loss (-(target*log(sigmoid(pred))) for each pair and reduces it to mean. A: All the pytorch functional code is implemented in C++. The source code for the implementation is located here. The pytorch implementation computes BCEWithLogitsLoss as where t_n is simply -relu(x). The use of t_n here is basically a clever way to avoid taking exponentials of positive values (thus avoiding overflow). 
This can be made more clear by substituting t_n into the l_n which yields the following equivalent expression A: According to C++ implementation they use this function in the end: static inline at::Tensor apply_loss_reduction(const at::Tensor& unreduced, int64_t reduction) { if (reduction == at::Reduction::Mean) { return unreduced.mean(); } else if (reduction == at::Reduction::Sum) { return unreduced.sum(); } return unreduced; } As you can see in documentation they use mean reduction by default.
How is PyTorch's Class BCEWithLogitsLoss exactly implemented?
According to the PyTorch documentation, the advantage of the class BCEWithLogitsLoss() is that one can use the log-sum-exp trick for numerical stability. If we use the class BCEWithLogitsLoss() with the parameter reduction set to None, they have a formula for that: I now simplified the terms, and obtain after some lines of calculation: I was curious to see whether this is the way how the Source code does it, but I couldn't find it.. The only code they have is this: Code for BCEWithLogitsLoss
[ "nn.BCEWithLogitsLoss is actually just cross entropy loss that comes inside a sigmoid function. It may be used in case your model's output layer is not wrapped with sigmoid. Typically used with the raw output of a single output layer neuron.\nSimply put, your model's output say pred will be a raw value. In order to get probability, you will have to use torch.sigmoid(pred). (To get actual class labels, you need torch.round(torch.sigmoid(pred)).) However, you don't need to do anything like that (i.e take sigmoid) when you use nn.BCEWithLogitsLoss. Here you just have to do the following-\ncriterion = nn.BCEWithLogitsLoss()\nloss = criterion(pred, target) # pred is just raw nn output\n\nHence, coming to implementation part, criterion accepts two torch tensors - one being the raw nn outputs, the other being the true class labels, then wraps the first using sigmoid - for each element in the tensor and then calculates Cross Entropy loss (-(target*log(sigmoid(pred))) for each pair and reduces it to mean.\n", "All the pytorch functional code is implemented in C++. The source code for the implementation is located here.\nThe pytorch implementation computes BCEWithLogitsLoss as\n\nwhere t_n is simply -relu(x). The use of t_n here is basically a clever way to avoid taking exponentials of positive values (thus avoiding overflow). This can be made more clear by substituting t_n into the l_n which yields the following equivalent expression\n\n", "According to C++ implementation they use this function in the end:\nstatic inline at::Tensor apply_loss_reduction(const at::Tensor& unreduced, int64_t reduction) {\n if (reduction == at::Reduction::Mean) {\n return unreduced.mean();\n } else if (reduction == at::Reduction::Sum) {\n return unreduced.sum();\n }\n return unreduced;\n }\n\nAs you can see in documentation they use mean reduction by default.\n" ]
[ 8, 4, 0 ]
[]
[]
[ "deep_learning", "implementation", "loss", "python", "pytorch" ]
stackoverflow_0066906884_deep_learning_implementation_loss_python_pytorch.txt
Q: How to display images in python simple gui from a api url I want to read a image from api, but I am getting a error TypeError: 'module' object is not callable. I am trying to make a random meme generator import PySimpleGUI as sg from PIL import Image import requests, json cutURL = 'https://meme-api-python.herokuapp.com/gimme' imageURL = json.loads(requests.get(cutURL).content)["url"] img = Image(requests.get(imageURL).content) img_box = sg.Image(img) window = sg.Window('', [[img_box]]) while True: event, values = window.read() if event is None: break window.close() Here is the response of the api postLink "https://redd.it/yyjl2e" subreddit "dankmemes" title "Everything's fixed" url "https://i.redd.it/put9bi0vjp0a1.jpg" I tried using python simple gui module, IS there alternative way to make a random meme generator. A: PIL.Image is a module, you can not call it by Image(...), maybe you need call it by Image.open(...). At the same, tkinter/PySimpleGUI cannot handle JPG image, so conversion to PNG image is required. from io import BytesIO import PySimpleGUI as sg from PIL import Image import requests, json def image_to_data(im): """ Image object to bytes object. : Parameters im - Image object : Return bytes object. """ with BytesIO() as output: im.save(output, format="PNG") data = output.getvalue() return data cutURL = 'https://meme-api-python.herokuapp.com/gimme' imageURL = json.loads(requests.get(cutURL).content)["url"] data = requests.get(imageURL).content stream = BytesIO(data) img = Image.open(stream) img_box = sg.Image(image_to_data(img)) window = sg.Window('', [[img_box]], finalize=True) # Check if the size of the window is greater than the screen w1, h1 = window.size w2, h2 = sg.Window.get_screen_size() if w1>w2 or h1>h2: window.move(0, 0) while True: event, values = window.read() if event is None: break window.close() A: You need to use Image.open(...) - Image is a module, not a class. You can find a tutorial in the official PIL documentation. 
You may need to put the response content in a BytesIO object before you can use Image.open on it. BytesIO is a file-like object that exists only in memory. Most functions like Image.open that expect a file-like object will also accept BytesIO and StringIO (the text equivalent) objects. Example: from io import BytesIO def get_image(url): data = BytesIO(requests.get(url).content) return Image.open(data) A: I would do it with tk its simple and fast def window(): root = tk.Tk() panel = Label(root) panel.pack() img = None def updata(): response = requests.get(https://meme-api-python.herokuapp.com/gimme) img = Image.open(BytesIO(response.content)) img = img.resize((640, 480), Image.ANTIALIAS) #custom resolution img = ImageTk.PhotoImage(img) panel.config(image=img) panel.image = img root.update_idletasks() root.after(30, updata) updata() root.mainloop()
How to display images in python simple gui from a api url
I want to read an image from an API, but I am getting an error: TypeError: 'module' object is not callable. I am trying to make a random meme generator: import PySimpleGUI as sg from PIL import Image import requests, json cutURL = 'https://meme-api-python.herokuapp.com/gimme' imageURL = json.loads(requests.get(cutURL).content)["url"] img = Image(requests.get(imageURL).content) img_box = sg.Image(img) window = sg.Window('', [[img_box]]) while True: event, values = window.read() if event is None: break window.close() Here is the response of the API postLink "https://redd.it/yyjl2e" subreddit "dankmemes" title "Everything's fixed" url "https://i.redd.it/put9bi0vjp0a1.jpg" I tried using the PySimpleGUI module. Is there an alternative way to make a random meme generator?
[ "PIL.Image is a module, you can not call it by Image(...), maybe you need call it by Image.open(...). At the same, tkinter/PySimpleGUI cannot handle JPG image, so conversion to PNG image is required.\nfrom io import BytesIO\nimport PySimpleGUI as sg\nfrom PIL import Image\nimport requests, json\n\ndef image_to_data(im):\n \"\"\"\n Image object to bytes object.\n : Parameters\n im - Image object\n : Return\n bytes object.\n \"\"\"\n with BytesIO() as output:\n im.save(output, format=\"PNG\")\n data = output.getvalue()\n return data\n\ncutURL = 'https://meme-api-python.herokuapp.com/gimme'\n\nimageURL = json.loads(requests.get(cutURL).content)[\"url\"]\ndata = requests.get(imageURL).content\nstream = BytesIO(data)\nimg = Image.open(stream)\n\nimg_box = sg.Image(image_to_data(img))\n\nwindow = sg.Window('', [[img_box]], finalize=True)\n\n# Check if the size of the window is greater than the screen\nw1, h1 = window.size\nw2, h2 = sg.Window.get_screen_size()\nif w1>w2 or h1>h2:\n window.move(0, 0)\n\nwhile True:\n event, values = window.read()\n if event is None:\n break\nwindow.close()\n\n", "You need to use Image.open(...) - Image is a module, not a class. You can find a tutorial in the official PIL documentation.\nYou may need to put the response content in a BytesIO object before you can use Image.open on it. BytesIO is a file-like object that exists only in memory. 
Most functions like Image.open that expect a file-like object will also accept BytesIO and StringIO (the text equivalent) objects.\nExample:\nfrom io import BytesIO\n\ndef get_image(url):\n data = BytesIO(requests.get(url).content)\n return Image.open(data)\n\n", "I would do it with tk its simple and fast\ndef window():\n root = tk.Tk()\n panel = Label(root)\n panel.pack()\n img = None\n\n def updata():\n\n response = requests.get(https://meme-api-python.herokuapp.com/gimme)\n img = Image.open(BytesIO(response.content))\n img = img.resize((640, 480), Image.ANTIALIAS) #custom resolution\n img = ImageTk.PhotoImage(img)\n panel.config(image=img)\n panel.image = img\n \n root.update_idletasks()\n root.after(30, updata)\n\n updata()\n root.mainloop()\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "api", "pysimplegui", "python" ]
stackoverflow_0074501874_api_pysimplegui_python.txt
Q: How can I convert this tensoflow code to pytorch? How can I convert this tensoflow code to pytorch? #tensoflow Conv2D( self.filter_1, (1, 64), activation='elu', padding="same", kernel_constraint=max_norm(2., axis=(0, 1, 2)) ) nn.Sequential( nn.Conv2D(16, (1, 64), padding="same", kernel_constraint=max_norm(2., axis=(0, 1, 2)), nn.ELU() ) A: You need two things: You need to know what the input channel size is. In your example, you've only given the number of output channels, 16. Keras calculates this on its own during runtime, but you have to specify input channels when making torch nn.Conv2d. You need to implement the max_norm constraint on the conv kernel yourself. With this in mind, let's write a simple wrapper around the nn.Conv2d, that just enforces the constraint on the weights each time forward is called: import torch from torch import nn import torch.nn.functional as F class Conv2D_Norm_Constrained(nn.Conv2d): def __init__(self, max_norm_val, norm_dim, **kwargs): super().__init__(**kwargs) self.max_norm_val = max_norm_val self.norm_dim = norm_dim def get_constrained_weights(self, epsilon=1e-8): norm = self.weight.norm(2, dim=self.norm_dim, keepdim=True) return self.weight * (torch.clamp(norm, 0, self.max_norm_val) / (norm + epsilon)) def forward(self, input): return F.conv2d(input, self.get_constrained_weights(), self.bias, self.stride, self.padding, self.dilation, self.groups) Assuming your input channels are something like 8, we can write: nn.Sequential( Conv2D_Norm_Constrained(in_channels=8, out_channels=16, kernel_size=(1, 64), padding="same", max_norm_val=2.0, norm_dim=(0, 1, 2)), nn.ELU() )
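The weight constraint at the heart of the accepted answer — rescale the kernel whenever its L2 norm exceeds `max_norm_val` — can be illustrated on a plain vector without PyTorch. This is a pure-Python sketch of the same clamp-and-rescale formula, not the tensor version above:

```python
import math

def max_norm(weights, max_val, eps=1e-8):
    # Same idea as get_constrained_weights(): w * clamp(|w|, 0, max_val) / (|w| + eps)
    norm = math.sqrt(sum(w * w for w in weights))
    clamped = min(max(norm, 0.0), max_val)
    return [w * clamped / (norm + eps) for w in weights]

print(max_norm([3.0, 4.0], max_val=2.0))  # norm 5 -> rescaled down to norm 2
print(max_norm([0.3, 0.4], max_val=2.0))  # norm 0.5 -> left (almost) unchanged
```

In the real layer this runs per output filter (hence `norm_dim=(0, 1, 2)`), but the arithmetic per filter is exactly this.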
How can I convert this tensorflow code to pytorch?
How can I convert this tensorflow code to pytorch? #tensorflow Conv2D( self.filter_1, (1, 64), activation='elu', padding="same", kernel_constraint=max_norm(2., axis=(0, 1, 2)) ) nn.Sequential( nn.Conv2D(16, (1, 64), padding="same", kernel_constraint=max_norm(2., axis=(0, 1, 2)), nn.ELU() )
[ "You need two things:\n\nYou need to know what the input channel size is. In your example, you've only given the number of output channels, 16. Keras calculates this on its own during runtime, but you have to specify input channels when making torch nn.Conv2d.\nYou need to implement the max_norm constraint on the conv kernel yourself.\n\nWith this in mind, let's write a simple wrapper around the nn.Conv2d, that just enforces the constraint on the weights each time forward is called:\nimport torch\nfrom torch import nn\nimport torch.nn.functional as F\n\nclass Conv2D_Norm_Constrained(nn.Conv2d):\n def __init__(self, max_norm_val, norm_dim, **kwargs):\n super().__init__(**kwargs)\n self.max_norm_val = max_norm_val\n self.norm_dim = norm_dim\n\n def get_constrained_weights(self, epsilon=1e-8):\n norm = self.weight.norm(2, dim=self.norm_dim, keepdim=True)\n return self.weight * (torch.clamp(norm, 0, self.max_norm_val) / (norm + epsilon))\n\n def forward(self, input):\n return F.conv2d(input, self.get_constrained_weights(), self.bias, self.stride, self.padding, self.dilation, self.groups)\n\nAssuming your input channels are something like 8, we can write:\nnn.Sequential(\n Conv2D_Norm_Constrained(in_channels=8, out_channels=16, kernel_size=(1, 64), padding=\"same\", max_norm_val=2.0, norm_dim=(0, 1, 2)),\n nn.ELU()\n)\n\n" ]
[ 2 ]
[]
[]
[ "python", "pytorch", "tensorflow" ]
stackoverflow_0074498770_python_pytorch_tensorflow.txt
Q: How to set default python version for py.exe when multiple versions are installed in windows I have both 3.10 and 3.11b3 installed on my windows 10 machine. I'd like py.exe to launch 3.10. I had read that I should create py.ini and pyw.ini in both c:\windows and C:\Users\<user>\AppData\Local\Programs\Python\Launcher\ and the files should contain: [defaults] python=3.10 Multiple Python versions installed : how to set the default version for py.exe (Python Launcher for Windows) for CMD and for "Open with" I set these up after installing 3.11b3, but py.exe launches the beta. I don't have any other py.ini files. How do I fix this so c:\windows\py.exe launches my preferred default version? Two possible solutions have other issues. I could set PY_PYTHON=3.10, but that also changes python which is a problem in a venv. I could also use py -3.10, but I don't understand why the listed solution isn't working. A: Check the encoding of your .ini files: They should be in UTF-8. (UTF-8 with BOM doesn't work. Bug reported here). Then follow the steps mentioned in this answer.
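The py.ini file described above is ordinary INI syntax, so it can be generated and verified from Python itself — a sketch using only the standard library (the path here is hypothetical; on a real machine it would be one of the Launcher locations named in the question), written as plain UTF-8 since UTF-8-with-BOM is what the answer flags as broken:

```python
import configparser
from pathlib import Path

ini = configparser.ConfigParser()
ini["defaults"] = {"python": "3.10"}

path = Path("py.ini")                        # hypothetical location for this demo
with path.open("w", encoding="utf-8") as f:  # plain UTF-8, no BOM
    ini.write(f)

# Read it back the way the launcher conceptually would
check = configparser.ConfigParser()
check.read(path, encoding="utf-8")
print(check["defaults"]["python"])
```

A quick `path.read_bytes()[:3]` check confirms the file does not start with the `EF BB BF` BOM that breaks the launcher.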
How to set default python version for py.exe when multiple versions are installed in windows
I have both 3.10 and 3.11b3 installed on my windows 10 machine. I'd like py.exe to launch 3.10. I had read that I should create py.ini and pyw.ini in both c:\windows and C:\Users\<user>\AppData\Local\Programs\Python\Launcher\ and the files should contain: [defaults] python=3.10 Multiple Python versions installed : how to set the default version for py.exe (Python Launcher for Windows) for CMD and for "Open with" I set these up after installing 3.11b3, but py.exe launches the beta. I don't have any other py.ini files. How do I fix this so c:\windows\py.exe launches my preferred default version? Two possible solutions have other issues. I could set PY_PYTHON=3.10, but that also changes python which is a problem in a venv. I could also use py -3.10, but I don't understand why the listed solution isn't working.
[ "Check the encoding of your .ini files: They should be in UTF-8.\n(UTF-8 with BOM doesn't work. Bug reported here).\nThen follow the steps mentioned in this answer.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0072550867_python.txt
Q: Python: Convert multiple categorical features to dummy variables efficiently in a loop? I have a python dataframe and want to convert categorical features to dummy variables. I'm doing a logreg. Right now I only know how to do it manually one by one like below: sex = pd.get_dummies(train['Sex'], drop_first=True) embark = pd.get_dummies(train['Embarked'], drop_first=True) identity = pd.get_dummies(train['Identity'], drop_first=True) religion = pd.get_dummies(train['Religion'], drop_first=True) In reality, I actually have to do over 10 of these. How can I get dummies / set the "sex", "embark", "identity", "religion" variables in a more efficient way. Perhaps using a loop? A: categories = ['Sex', 'Embarked', 'Identity', 'Religion', ...] sex, embark, identity, religion, ... = [pd.get_dummies(train[c], drop_first=True) for c in categories]
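What `pd.get_dummies(..., drop_first=True)` produces can be sketched without pandas, which also shows why looping over columns works: each categorical column independently becomes one 0/1 column per category, minus the first category as the baseline (pure-Python sketch with made-up data):

```python
def get_dummies(values, drop_first=True):
    # One 0/1 list per category; drop the first category as the baseline
    categories = sorted(set(values))
    if drop_first:
        categories = categories[1:]
    return {c: [1 if v == c else 0 for v in values] for c in categories}

train = {"Sex": ["male", "female", "male"], "Embarked": ["S", "C", "S"]}
dummies = {col: get_dummies(vals) for col, vals in train.items()}
print(dummies["Sex"])       # {'male': [1, 0, 1]} -- 'female' is the baseline
```

The dict comprehension over `train.items()` is the loop the question asks for; the pandas answer below does the same thing with a list comprehension.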
Python: Convert multiple categorical features to dummy variables efficiently in a loop?
I have a pandas DataFrame and want to convert categorical features to dummy variables. I'm doing a logistic regression. Right now I only know how to do it manually, one by one, like below: sex = pd.get_dummies(train['Sex'], drop_first=True) embark = pd.get_dummies(train['Embarked'], drop_first=True) identity = pd.get_dummies(train['Identity'], drop_first=True) religion = pd.get_dummies(train['Religion'], drop_first=True) In reality, I actually have to do over 10 of these. How can I get dummies / set the "sex", "embark", "identity", "religion" variables in a more efficient way? Perhaps using a loop?
[ "categories = ['Sex', 'Embarked', 'Identity', 'Religion', ...]\nsex, embark, identity, religion, ... = [pd.get_dummies(train[c], drop_first=True) for c in categories]\n\n" ]
[ 2 ]
[]
[]
[ "categorical_data", "dummy_variable", "for_loop", "python" ]
stackoverflow_0074502252_categorical_data_dummy_variable_for_loop_python.txt
Q: How do I find the intersection point of two line SEGMENTS, if one exists? I have two line segments described as below: # Segment 1 ((x1, y1), (x2, y2)) # Segment 2 ((x1, y1), (x2, y2)) I need a way to find their intersection point if one exists, using no third-party modules. I know people have asked this question before, but every single answer I've found either doesn't always work or uses a third-party module. So far, I've seen these questions and their "answers": How do I compute the intersection point of two lines? How can I check if two segments intersect? A: Here's a very simple algorithm using basic algebra. There are more efficient ways to do this, as shown in the question OP shared, but for people that don't know any linear algebra they won't be particularly helpful. Given two segments, you can use their endpoints to find the equation for each line using the point-slope formula, y - y1 = m (x - x1). Once you've found the equation for each line, set them equal to each other and algebraically solve for x. Once you find x, plug that into either equation to get y and your intersection point will be (x, y). Finally, check that this intersection point lies on both segments. If you cannot solve for x it means the lines don't intersect. One easy test would be to first check if the slopes of the two lines are the same before doing the rest of the work. If they are, the lines are parallel and will never intersect unless their intercepts are also the same, in which case the lines will intersect everywhere. Do the algebra symbolically with pen and paper first, then translate the formula for the intersection point into code. Then build out the other logic. Vertical lines require slightly different logic. If both are vertical then you'd need to check if the segments share the same x values. If they do then check if any of the y coordinates overlap. 
If only one segment is vertical, then find the equation for the non-vertical line, and evaluate it at the x coordinate of the vertical line. That'll give you the y coordinate of the potential intersection. If that y value is between the two y coordinates of the vertical line AND the x coordinate is on the non-vertical segment, then the segments intersect. There are plenty of opportunities for short-circuiting here, try to find them yourself during implementation. It may help to implement a custom class to represent your segments: class Segment: def __init__(self, *, x1: float, x2: float, y1: float, y2: float) -> None: self.x1 = x1 self.y1 = y1 self.x2 = x2 self.y2 = y2 def __contains__(self, x: float) -> bool: """ Check if `x` lies on this segment (implements the `in` operator, allowing for expressions like `x in some_segment`) """ pass def slope(self) -> float: try: return (self.y2 - self.y1) / (self.x2 - self.x1) except ZeroDivisionError: print("this segment is vertical!") return float("inf") def equation(self): """The equation for the line formed by this segment""" pass def intersects(self, other) -> bool: """Check if two Segment objects intersect each other""" pass
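The algebra above can also be collapsed into the standard parametric form, which handles vertical segments without any special cases — a self-contained sketch (one way to organize it, not the only one):

```python
def segment_intersection(p1, p2, p3, p4):
    """Intersection point of segments p1-p2 and p3-p4, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if denom == 0:
        return None  # parallel (or collinear) -- no single crossing point
    # t and u locate the crossing along each segment; both must lie in [0, 1]
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None  # the infinite lines cross, but outside the segments

print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
print(segment_intersection((0, 0), (1, 1), (2, 2), (3, 3)))  # None (parallel)
```

Note the collinear-overlap case returns `None` here; distinguishing "no intersection" from "overlap everywhere" needs the extra bookkeeping the slope-based discussion above describes.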
How do I find the intersection point of two line SEGMENTS, if one exists?
I have two line segments described as below: # Segment 1 ((x1, y1), (x2, y2)) # Segment 2 ((x1, y1), (x2, y2)) I need a way to find their intersection point if one exists, using no third-party modules. I know people have asked this question before, but every single answer I've found either doesn't always work or uses a third-party module. So far, I've seen these questions and their "answers": How do I compute the intersection point of two lines? How can I check if two segments intersect?
[ "Here's a very simple algorithm using basic algebra. There are more efficient ways to do this, as shown in the question OP shared, but for people that don't know any linear algebra they won't be particularly helpful.\nGiven two segments, you can use their endpoints to find the equation for each line using the point-slope formula, y - y1 = m (x - x1).\nOnce you've found the equation for each line, set them equal to each other and algebraically solve for x. Once you find x, plug that into either equation to get y and your intersection point will be (x, y).\nFinally, check that this intersection point lies on both segments.\nIf you cannot solve for x it means the lines don't intersect. One easy test would be to first check if the slopes of the two lines are the same before doing the rest of the work. If they are, the lines are parallel and will never intersect unless their intercepts are also the same, in which case the lines will intersect everywhere.\nDo the algebra symbolically with pen and paper first, then translate the formula for the intersection point into code. Then build out the other logic.\nVertical lines require slightly different logic. If both are vertical then you'd need to check if the segments share the same x values. If they do then check if any of the y coordinates overlap.\nIf only one segment is vertical, then find the equation for the non-vertical line, and evaluate it at the x coordinate of the vertical line. That'll give you the y coordinate of the potential intersection. 
If that y value is between the two y coordinates of the vertical line AND the x coordinate is on the non-vertical segment, then the segments intersect.\nThere are plenty of opportunities for short-circuiting here, try to find them yourself during implementation.\nIt may help to implement a custom class to represent your segments:\nclass Segment:\n def __init__(self, *, x1: float, x2: float, y1: float, y2: float) -> None:\n self.x1 = x1\n self.y1 = y1\n self.x2 = x2\n self.y2 = y2\n\n def __contains__(self, x: float) -> bool:\n \"\"\"\n Check if `x` lies on this segment (implements the `in` operator, \n allowing for expressions like `x in some_segment`)\n \"\"\"\n pass\n\n def slope(self) -> float:\n try:\n return (self.y2 - self.y1) / (self.x2 - self.x1)\n except ZeroDivisionError:\n print(\"this segment is vertical!\")\n return float(\"inf\")\n\n def equation(self):\n \"\"\"The equation for the line formed by this segment\"\"\"\n pass\n\n def intersects(self, other) -> bool:\n \"\"\"Check if two Segment objects intersect each other\"\"\"\n pass\n\n" ]
[ 1 ]
[]
[]
[ "intersection", "line_segment", "math", "python" ]
stackoverflow_0074502061_intersection_line_segment_math_python.txt
Q: How to communicate from one python script with another via network? I have a server side (Python 3) and a client side (Python 2.7), i am trying to use the socket module. The idea is, that the server side is permanently active and the client socket connects through call of a function. Then data needs to be sent from server to client until the client disconnects (manually). The server should then go back into the listening process and wait until the next connection. I do not have any experience with sockets, i have been trying some examples that i found. In the first line, my problem is to reconnect to the same server socket. Server side: import socket HOST = "127.0.0.1" PORT = 65432 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((HOST, PORT)) s.listen() conn, addr = s.accept() print("Connected by", addr) for x in range(10): data = conn.recv(1024) if not data: break conn.sendall(data) conn.close() Client side (with Tkinter-GUI): import Tkinter as tk import socket import random import time keyState = False HOST = '127.0.0.1' PORT = 65432 def onButton(): global keyState if(not keyState ): keyState = not keyState key_button.config(relief='sunken') connectSocket() print(keyState) return if(keyState ): keyState = not keyState key_button.config(relief='raised') disconnectSocket() print(keyState ) return def connectSocket(): print("connectSocket()") global HOST, PORT s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((HOST, PORT)) for x in range(10): if(x<5): val = random.uniform(0.0, 400.0) else: val = random.uniform(-400,0) s.sendall(str(val)) data = s.recv(1024) print 'Received', repr(data) s.close() def disconnectSocket(): print("disconnectSocket()") return #Main GUI root = tk.Tk() root.title('Python Socket Test') root.configure(background='white') root.geometry("200x300") #Button root.update() softkey_button = tk.Button(root, text="Softkey", command = lambda: onButton(), relief='flat') softkey_button.place(x=75,y=200) root.mainloop() A: 
You simply need to add a while True loop on your server side, currently you are only accepting one connection, and after the connection is closed the program stops. Try this on your server file: import socket HOST = "127.0.0.1" PORT = 65432 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((HOST, PORT)) while True: s.listen() conn, addr = s.accept() print("Connected by", addr) for x in range(10): data = conn.recv(1024) if not data: break conn.sendall(data) conn.close()
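The same fix, wrapped in a function with context managers, is sketched below (this tidier variant is my own assumption, not the answer's exact code — note that listen() only needs to be called once, before the accept loop):

```python
import socket


def serve_echo(host="127.0.0.1", port=65432):
    """Accept clients forever; echo each client's data back until it disconnects."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((host, port))
        s.listen()  # one listen() call is enough; it does not need to repeat
        while True:
            conn, addr = s.accept()
            print("Connected by", addr)
            with conn:  # closes this client's connection, then loops back to accept()
                while True:
                    data = conn.recv(1024)
                    if not data:
                        break
                    conn.sendall(data)
```

The Python 2.7 client from the question can connect, send, disconnect, and a new client can then connect — the server simply returns to accept().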
How to communicate from one python script with another via network?
I have a server side (Python 3) and a client side (Python 2.7), i am trying to use the socket module. The idea is, that the server side is permanently active and the client socket connects through call of a function. Then data needs to be sent from server to client until the client disconnects (manually). The server should then go back into the listening process and wait until the next connection. I do not have any experience with sockets, i have been trying some examples that i found. In the first line, my problem is to reconnect to the same server socket. Server side: import socket HOST = "127.0.0.1" PORT = 65432 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((HOST, PORT)) s.listen() conn, addr = s.accept() print("Connected by", addr) for x in range(10): data = conn.recv(1024) if not data: break conn.sendall(data) conn.close() Client side (with Tkinter-GUI): import Tkinter as tk import socket import random import time keyState = False HOST = '127.0.0.1' PORT = 65432 def onButton(): global keyState if(not keyState ): keyState = not keyState key_button.config(relief='sunken') connectSocket() print(keyState) return if(keyState ): keyState = not keyState key_button.config(relief='raised') disconnectSocket() print(keyState ) return def connectSocket(): print("connectSocket()") global HOST, PORT s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((HOST, PORT)) for x in range(10): if(x<5): val = random.uniform(0.0, 400.0) else: val = random.uniform(-400,0) s.sendall(str(val)) data = s.recv(1024) print 'Received', repr(data) s.close() def disconnectSocket(): print("disconnectSocket()") return #Main GUI root = tk.Tk() root.title('Python Socket Test') root.configure(background='white') root.geometry("200x300") #Button root.update() softkey_button = tk.Button(root, text="Softkey", command = lambda: onButton(), relief='flat') softkey_button.place(x=75,y=200) root.mainloop()
[ "You simply need to add a while True loop on your server side, corrently you are only accepting one connection, and after the connection is closed the program stops. Try this on your server file:\nimport socket\n\nHOST = \"127.0.0.1\"\nPORT = 65432\ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\ns.bind((HOST, PORT))\n\nwhile True:\n s.listen()\n conn, addr = s.accept()\n print(\"Connected by\", addr)\n \n for x in range(10):\n data = conn.recv(1024)\n if not data:\n break\n conn.sendall(data)\n \n conn.close()\n\n" ]
[ 0 ]
[]
[]
[ "client", "networking", "python", "server", "sockets" ]
stackoverflow_0074502243_client_networking_python_server_sockets.txt
Q: Compare two columns from two different data frame with two conditions The context here is that I'm comparing the values of two columns—the key and the date. If the criterion is met, we will now create a new column with the flag = Y else "" Condition: if key are matching and date in df1 > date in df2 then "Y" else "" We will therefore iterate through all of the rows in df1 and see if the key matches in df2, at which point we will check dateF and date for that row to see if it is greater, and if it is, we will save "Y" in a new column flag. Update 1: There can be multiple rows in df1 with same key and different dates Df1: Key Date Another 123 2022-03-04 Apple 321 2022-05-01 Red 234 2022-07-08 Green Df2: Key Date 123 2022-03-01 321 2022-05-01 234 2022-07-01 Expected O/P: Explanation: as we can see first row and 3rd row key are matching and the DateF in df1 > Date in df2 so Y Key Date Another Flag 123 2022-03-04 Apple Y 321 2022-05-01 Red 234 2022-07-08 Green Y Code to create all dfs: import pandas as pd data = [[123, pd.to_datetime('2022-03-04 '),'Apple'], [321, pd.to_datetime('2022-05-01 '),'Red'], [234, pd.to_datetime('2022-07-08 '),'Green']] df1 = pd.DataFrame(data, columns=['Key', 'DateF', 'Another']) #df2 data1 = [[123, pd.to_datetime('2022-03-01 ')], [321, pd.to_datetime('2022-05-01 ')], [234, pd.to_datetime('2022-07-01 ')]] df2 = pd.DataFrame(data1, columns=['Key', 'Date']) Have tried this but i think i am going wrong. for i in df1.Key.unique(): df1.loc[(df1[i] == df2[i]) & (r['DateF'] > df2['Date]), "Flag"] = "Y" Thank You! A: You can use pandas.Series.gt to compare the two dates then pandas.DataFrame.loc with a boolean mask to create the new column and flag it at the same time. 
df1.loc[df1['Date'].gt(df2['Date']), "Flag"]= "Y" # Output : print(df1) Key Date Another Flag 0 123 2022-03-04 Apple Y 1 321 2022-05-01 Red NaN 2 234 2022-07-08 Green Y A: You can use merge if your dataframes are not the same size: final=df1.merge(df2,left_on='Key',right_on='Key',how='left') final.loc[final['DateF'] > final['Date'], "Flag"]="Y" final=final.drop(['Date'],axis=1) Key DateF Another Flag 0 123 2022-03-04 Apple Y 1 321 2022-05-01 Red 2 234 2022-07-08 Green Y A: This code is not as elegant as the answers, but it also works: ref_dates = dict(zip(df2.Key,df2.Date)) df1['Flag'] = ['Y' if date>ref_dates.get(key,'0000-00-00') else '' for key,date in zip(df1.Key,df1.DateF)] We first create a dictionary (ref_dates) with the dates in df2, and then iterate over df1 comparing them with DateF.
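Putting the merge-based answer together with the question's own setup code gives an end-to-end sketch (column names as in the question: df1 has Key/DateF/Another, df2 has Key/Date; mapping False to an empty string rather than NaN is my assumption about the desired output):

```python
import pandas as pd

df1 = pd.DataFrame({
    "Key": [123, 321, 234],
    "DateF": pd.to_datetime(["2022-03-04", "2022-05-01", "2022-07-08"]),
    "Another": ["Apple", "Red", "Green"],
})
df2 = pd.DataFrame({
    "Key": [123, 321, 234],
    "Date": pd.to_datetime(["2022-03-01", "2022-05-01", "2022-07-01"]),
})

# Left-merge on Key, so rows of df1 sharing a key each pick up the reference date.
final = df1.merge(df2, on="Key", how="left")
final["Flag"] = final["DateF"].gt(final["Date"]).map({True: "Y", False: ""})
final = final.drop(columns="Date")
print(final)
```

This also covers the "Update 1" case: with duplicate keys in df1, the left merge repeats df2's date for each matching row before the comparison.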
Compare two columns from two different data frame with two conditions
The context here is that I'm comparing the values of two columns—the key and the date. If the criterion is met, we will now create a new column with the flag = Y else "" Condition: if key are matching and date in df1 > date in df2 then "Y" else "" We will therefore iterate through all of the rows in df1 and see if the key matches in df2, at which point we will check dateF and date for that row to see if it is greater, and if it is, we will save "Y" in a new column flag. Update 1: There can be multiple rows in df1 with same key and different dates Df1: Key Date Another 123 2022-03-04 Apple 321 2022-05-01 Red 234 2022-07-08 Green Df2: Key Date 123 2022-03-01 321 2022-05-01 234 2022-07-01 Expected O/P: Explanation: as we can see first row and 3rd row key are matching and the DateF in df1 > Date in df2 so Y Key Date Another Flag 123 2022-03-04 Apple Y 321 2022-05-01 Red 234 2022-07-08 Green Y Code to create all dfs: import pandas as pd data = [[123, pd.to_datetime('2022-03-04 '),'Apple'], [321, pd.to_datetime('2022-05-01 '),'Red'], [234, pd.to_datetime('2022-07-08 '),'Green']] df1 = pd.DataFrame(data, columns=['Key', 'DateF', 'Another']) #df2 data1 = [[123, pd.to_datetime('2022-03-01 ')], [321, pd.to_datetime('2022-05-01 ')], [234, pd.to_datetime('2022-07-01 ')]] df2 = pd.DataFrame(data1, columns=['Key', 'Date']) Have tried this but i think i am going wrong. for i in df1.Key.unique(): df1.loc[(df1[i] == df2[i]) & (r['DateF'] > df2['Date]), "Flag"] = "Y" Thank You!
[ "You can use pandas.Series.gt to compare the two dates then pandas.DataFrame.loc with a boolean mask to create the new column and flag it at the same time.\ndf1.loc[df1['Date'].gt(df2['Date']), \"Flag\"]= \"Y\"\n\n# Output :\nprint(df1)\n\n Key Date Another Flag\n0 123 2022-03-04 Apple Y\n1 321 2022-05-01 Red NaN\n2 234 2022-07-08 Green Y\n\n", "You can use merge if your dataframes are not the same size:\nfinal=df1.merge(df2,left_on='Key',right_on='Key',how='left')\nfinal.loc[final['DateF'] > final['Date'], \"Flag\"]=\"Y\"\nfinal=final.drop(['Date'],axis=1)\n\n Key DateF Another Flag\n0 123 2022-03-04 Apple Y\n1 321 2022-05-01 Red \n2 234 2022-07-08 Green Y\n\n\n", "This code is not as elegant as the answers, but it also works:\nref_dates = dict(zip(df2.Key,df2.Date))\ndf1['Flag'] = ['Y' if date>ref_dates.get(key,'0000-00-00') else '' for key,date in zip(df1.Key,df1.DateF)]\n\nWe first create a dictionary (ref_dates) with the dates in df2, and then iterate over df1 comparing them with DateF.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074502143_pandas_python.txt
Q: Python float to Decimal conversion Python Decimal doesn't support being constructed from float; it expects that you have to convert float to a string first. This is very inconvenient since standard string formatters for float require that you specify number of decimal places rather than significant places. So if you have a number that could have as many as 15 decimal places you need to format as Decimal("%.15f" % my_float), which will give you garbage at the 15th decimal place if you also have any significant digits before decimal (Decimal("%.15f" % 100000.3) == Decimal('100000.300000000002910')). Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered, perhaps limiting number of significant digits that can be supported? A: Python <2.7 "%.15g" % f Or in Python 3.0: format(f, ".15g") Python 2.7+, 3.2+ Just pass the float to Decimal constructor directly, like this: from decimal import Decimal Decimal(f) A: You said in your question: Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered But every time the user enters a value, it is entered as a string, not as a float. You are converting it to a float somewhere. Convert it to a Decimal directly instead and no precision will be lost. A: I suggest this >>> a = 2.111111 >>> a 2.1111110000000002 >>> str(a) '2.111111' >>> decimal.Decimal(str(a)) Decimal('2.111111') A: you can convert and than quantize to keep 5 digits after comma via Decimal(float).quantize(Decimal("1.00000")) A: Python does support Decimal creation from a float. You just cast it as a string first. But the precision loss doesn't occur with string conversion. The float you are converting doesn't have that kind of precision in the first place. 
(Otherwise you wouldn't need Decimal) I think the confusion here is that we can create float literals in decimal format, but as soon as the interpreter consumes that literal the inner representation becomes a floating point number. A: The "official" string representation of a float is given by the repr() built-in: >>> repr(1.5) '1.5' >>> repr(12345.678901234567890123456789) '12345.678901234567' You can use repr() instead of a formatted string, the result won't contain any unnecessary garbage. A: When you say "preserving value as the user has entered", why not just store the user-entered value as a string, and pass that to the Decimal constructor? A: The main answer is slightly misleading. The g format ignores any leading zeroes after the decimal point, so format(0.012345, ".2g") returns 0.012 - three decimal places. If you need a hard limit on the number of decimal places, use the f formatter: format(0.012345, ".2f") == 0.01 A: I've come across the the same problem / question today and I'm not completely satisfied with any of the answers given so far. The core of the question seems to be: Can someone suggest a good way to convert from float to Decimal [...] perhaps limiting number of significant digits that can be supported? Short answer / solution: Yes. def ftod(val, prec = 15): return Decimal(val).quantize(Decimal(10)**-prec) Long Answer: As nosklo pointed out it is not possible to preserve the input of the user after it has been converted to float. It is possible though to round that value with a reasonable precision and convert it into Decimal. In my case I only need 2 to 4 digits after the separator, but they need to be accurate. Let's consider the classic 0.1 + 0.2 == 0.3 check. >>> 0.1 + 0.2 == 0.3 False Now let's do this with conversion to decimal (complete example): >>> from decimal import Decimal >>> def ftod(val, prec = 15): # float to Decimal ... return Decimal(val).quantize(Decimal(10)**-prec) ... 
>>> ftod(0.1) + ftod(0.2) == ftod(0.3) True The answer by Ryabchenko Alexander was really helpful for me. It only lacks a way to dynamically set the precision – a feature I want (and maybe also need). The Decimal documentation FAQ gives an example on how to construct the required argument string for quantize(): >>> Decimal(10)**-4 Decimal('0.0001') Here's how the numbers look like printed with 18 digits after the separator (coming from C programming I like the fancy python expressions): >>> for x in [0.1, 0.2, 0.3, ftod(0.1), ftod(0.2), ftod(0.3)]: ... print("{:8} {:.18f}".format(type(x).__name__+":", x)) ... float: 0.100000000000000006 float: 0.200000000000000011 float: 0.299999999999999989 Decimal: 0.100000000000000000 Decimal: 0.200000000000000000 Decimal: 0.300000000000000000 And last I want to know for which precision the comparision still works: >>> for p in [15, 16, 17]: ... print("Rounding precision: {}. Check 0.1 + 0.2 == 0.3 is {}".format(p, ... ftod(0.1, p) + ftod(0.2, p) == ftod(0.3, p))) ... Rounding precision: 15. Check 0.1 + 0.2 == 0.3 is True Rounding precision: 16. Check 0.1 + 0.2 == 0.3 is True Rounding precision: 17. Check 0.1 + 0.2 == 0.3 is False 15 seems to be a good default for maximum precision. That should work on most systems. If you need more info, try: >>> import sys >>> sys.float_info sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1) With float having 53 bits mantissa on my system, I calculated the number of decimal digits: >>> import math >>> math.log10(2**53) 15.954589770191003 Which tells me with 53 bits we get almost 16 digits. So 15 ist fine for the precision value and should always work. 16 is error-prone and 17 definitly causes trouble (as seen above). Anyway ... 
in my specific case I only need 2 to 4 digits of precision, but as a perfectionist I enjoyed investigating this :-) Any suggestions / improvements / complaints are welcome. A: The "right" way to do this was documented in 1990 by Steele and White's and Clinger's PLDI 1990 papers. You might also look at this SO discussion about Python Decimal, including my suggestion to try using something like frap to rationalize a float. A: You can use JSON to accomplish it import json from decimal import Decimal float_value = 123456.2365 decimal_value = json.loads(json.dumps(float_value), parse_float=Decimal) A: Inspired by this answer I found a workaround that allows to shorten the construction of a Decimal from a float bypassing (only apparently) the string step: import decimal class DecimalBuilder(float): def __or__(self, a): return decimal.Decimal(str(a)) >>> d = DecimalBuilder() >>> x = d|0.1 >>> y = d|0.2 >>> x + y # works as desired Decimal('0.3') >>> d|0.1 + d|0.2 # does not work as desired, needs parenthesis TypeError: unsupported operand type(s) for |: 'decimal.Decimal' and 'float' >>> (d|0.1) + (d|0.2) # works as desired Decimal('0.3') It's a workaround but it surely allows savings in code typing and it's very readable. A: The question is based on the wrong assertion that: 'Python Decimal doesn't support being constructed from float'. In python3, Decimal class can do it as: from decimal import * getcontext().prec = 128 #high precision set print(Decimal(100000.3)) A: 100000.300000000002910383045673370361328125 #SUCCESS (over 64 bit precision) That's the right value with all decimals included, and so: 'there is no garbage after 15th decimal place ...' 
You can verify online with an IEEE754 converter like: https://www.binaryconvert.com/convert_double.html A: Most accurate representation = 1.00000300000000002910383045673E5 (64 bit precision) or directly in Python 3: print(f'{100000.3:.128f}'.strip('0')) A: 100000.300000000002910383045673370361328125 Preserving value as the user has entered, it's made with string conversion as: Decimal(str(100000.3)) A: 100000.3
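The whole thread boils down to a short contrast — Decimal(f) exposes the float's exact binary value, while Decimal(str(f)) preserves the user-visible form, and quantize() caps the precision. A quick sketch combining the answers above:

```python
from decimal import Decimal

f = 100000.3

# Direct construction keeps the exact binary representation of the float.
exact = Decimal(f)
assert exact != Decimal("100000.3")
assert str(exact).startswith("100000.300000000002910")

# Round-tripping through str() preserves what the user typed.
assert Decimal(str(f)) == Decimal("100000.3")

# The classic 0.1 + 0.2 == 0.3 check passes once both sides are
# quantized to 15 decimal places, as in the ftod() helper above.
q = Decimal(10) ** -15
assert Decimal(0.1).quantize(q) + Decimal(0.2).quantize(q) == Decimal(0.3).quantize(q)
```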
Python float to Decimal conversion
Python Decimal doesn't support being constructed from float; it expects that you have to convert float to a string first. This is very inconvenient since standard string formatters for float require that you specify number of decimal places rather than significant places. So if you have a number that could have as many as 15 decimal places you need to format as Decimal("%.15f" % my_float), which will give you garbage at the 15th decimal place if you also have any significant digits before decimal (Decimal("%.15f" % 100000.3) == Decimal('100000.300000000002910')). Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered, perhaps limiting number of significant digits that can be supported?
[ "Python <2.7\n\"%.15g\" % f\n\nOr in Python 3.0:\nformat(f, \".15g\")\n\nPython 2.7+, 3.2+\nJust pass the float to Decimal constructor directly, like this:\nfrom decimal import Decimal\nDecimal(f)\n\n", "You said in your question: \n\nCan someone suggest a good way to\n convert from float to Decimal\n preserving value as the user has\n entered\n\nBut every time the user enters a value, it is entered as a string, not as a float. You are converting it to a float somewhere. Convert it to a Decimal directly instead and no precision will be lost.\n", "I suggest this\n>>> a = 2.111111\n>>> a\n2.1111110000000002\n>>> str(a)\n'2.111111'\n>>> decimal.Decimal(str(a))\nDecimal('2.111111')\n\n", "you can convert and than quantize to keep 5 digits after comma via\nDecimal(float).quantize(Decimal(\"1.00000\"))\n\n", "Python does support Decimal creation from a float. You just cast it as a string first. But the precision loss doesn't occur with string conversion. The float you are converting doesn't have that kind of precision in the first place. (Otherwise you wouldn't need Decimal)\nI think the confusion here is that we can create float literals in decimal format, but as soon as the interpreter consumes that literal the inner representation becomes a floating point number.\n", "The \"official\" string representation of a float is given by the repr() built-in:\n>>> repr(1.5)\n'1.5'\n>>> repr(12345.678901234567890123456789)\n'12345.678901234567'\n\nYou can use repr() instead of a formatted string, the result won't contain any unnecessary garbage.\n", "When you say \"preserving value as the user has entered\", why not just store the user-entered value as a string, and pass that to the Decimal constructor?\n", "The main answer is slightly misleading. The g format ignores any leading zeroes after the decimal point, so format(0.012345, \".2g\") returns 0.012 - three decimal places. 
If you need a hard limit on the number of decimal places, use the f formatter: format(0.012345, \".2f\") == 0.01\n", "I've come across the the same problem / question today and I'm not completely satisfied with any of the answers given so far. The core of the question seems to be:\n\nCan someone suggest a good way to convert from float to Decimal [...] perhaps limiting number of significant digits that can be supported?\n\nShort answer / solution: Yes.\ndef ftod(val, prec = 15):\n return Decimal(val).quantize(Decimal(10)**-prec)\n\nLong Answer:\nAs nosklo pointed out it is not possible to preserve the input of the user after it has been converted to float.\nIt is possible though to round that value with a reasonable precision and convert it into Decimal.\nIn my case I only need 2 to 4 digits after the separator, but they need to be accurate. Let's consider the classic 0.1 + 0.2 == 0.3 check.\n>>> 0.1 + 0.2 == 0.3\nFalse\n\nNow let's do this with conversion to decimal (complete example):\n>>> from decimal import Decimal\n>>> def ftod(val, prec = 15): # float to Decimal\n... return Decimal(val).quantize(Decimal(10)**-prec)\n... \n>>> ftod(0.1) + ftod(0.2) == ftod(0.3)\nTrue\n\nThe answer by Ryabchenko Alexander was really helpful for me. It only lacks a way to dynamically set the precision – a feature I want (and maybe also need). The Decimal documentation FAQ gives an example on how to construct the required argument string for quantize():\n>>> Decimal(10)**-4\nDecimal('0.0001')\n\nHere's how the numbers look like printed with 18 digits after the separator (coming from C programming I like the fancy python expressions):\n>>> for x in [0.1, 0.2, 0.3, ftod(0.1), ftod(0.2), ftod(0.3)]:\n... print(\"{:8} {:.18f}\".format(type(x).__name__+\":\", x))\n... 
\nfloat: 0.100000000000000006\nfloat: 0.200000000000000011\nfloat: 0.299999999999999989\nDecimal: 0.100000000000000000\nDecimal: 0.200000000000000000\nDecimal: 0.300000000000000000\n\nAnd last I want to know for which precision the comparision still works:\n>>> for p in [15, 16, 17]:\n... print(\"Rounding precision: {}. Check 0.1 + 0.2 == 0.3 is {}\".format(p,\n... ftod(0.1, p) + ftod(0.2, p) == ftod(0.3, p)))\n... \nRounding precision: 15. Check 0.1 + 0.2 == 0.3 is True\nRounding precision: 16. Check 0.1 + 0.2 == 0.3 is True\nRounding precision: 17. Check 0.1 + 0.2 == 0.3 is False\n\n15 seems to be a good default for maximum precision. That should work on most systems. If you need more info, try:\n>>> import sys\n>>> sys.float_info\nsys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)\n\nWith float having 53 bits mantissa on my system, I calculated the number of decimal digits:\n>>> import math\n>>> math.log10(2**53)\n15.954589770191003\n\nWhich tells me with 53 bits we get almost 16 digits. So 15 ist fine for the precision value and should always work. 16 is error-prone and 17 definitly causes trouble (as seen above).\nAnyway ... 
in my specific case I only need 2 to 4 digits of precision, but as a perfectionist I enjoyed investigating this :-)\nAny suggestions / improvements / complaints are welcome.\n", "The \"right\" way to do this was documented in 1990 by Steele and White's and\nClinger's PLDI 1990 papers.\nYou might also look at this SO discussion about Python Decimal, including my suggestion to try using something like frap to rationalize a float.\n", "You can use JSON to accomplish it\nimport json\nfrom decimal import Decimal\n\nfloat_value = 123456.2365\ndecimal_value = json.loads(json.dumps(float_value), parse_float=Decimal)\n\n", "Inspired by this answer I found a workaround that allows to shorten the construction of a Decimal from a float bypassing (only apparently) the string step:\nimport decimal\nclass DecimalBuilder(float):\n def __or__(self, a):\n return decimal.Decimal(str(a))\n\n>>> d = DecimalBuilder()\n>>> x = d|0.1\n>>> y = d|0.2\n>>> x + y # works as desired\nDecimal('0.3')\n>>> d|0.1 + d|0.2 # does not work as desired, needs parenthesis\nTypeError: unsupported operand type(s) for |: 'decimal.Decimal' and 'float'\n>>> (d|0.1) + (d|0.2) # works as desired\nDecimal('0.3')\n\nIt's a workaround but it surely allows savings in code typing and it's very readable.\n", "The question is based on the wrong assertion that:\n'Python Decimal doesn't support being constructed from float'.\nIn python3, Decimal class can do it as:\nfrom decimal import *\ngetcontext().prec = 128 #high precision set\nprint(Decimal(100000.3))\n\nA: 100000.300000000002910383045673370361328125 #SUCCESS (over 64 bit precision)\nThat's the right value with all decimals included, and so:\n'there is no garbage after 15th decimal place ...'\nYou can verify on line with a IEEE754 converter like:\nhttps://www.binaryconvert.com/convert_double.html\nA: Most accurate representation = 1.00000300000000002910383045673E5 (64 bit precision)\nor directly in python3 :\n\n\n\nprint(f'{100000.3:.128f}'.strip('0'))\n\n\n\nA: 
100000.300000000002910383045673370361328125\nPreserving value as the user has entered, it's made with string conversion as:\n\n\n\nDecimal(str(100000.3))\n\n\n\nA: 100000.3\n" ]
[ 73, 31, 31, 6, 5, 4, 2, 2, 2, 1, 0, 0, 0 ]
[]
[]
[ "decimal", "python" ]
stackoverflow_0000316238_decimal_python.txt
Q: Keyboard module multiple if import keyboard while True: if keyboard.read_key() == "up": print("up") if keyboard.read_key() == "down": print("down") if keyboard.read_key() == "enter": print("enter") Sometimes the print function only runs after the second key press. Python 3.11 I literally tried every other module and every possible if-elif-while combination. A: To make the code a bit cleaner, you can consider using a dictionary with messages: import keyboard message = {"up": "up", "down": "down", "enter": "enter"} while True: key = keyboard.read_key() if key in message: print(message[key]) while keyboard.is_pressed(key): pass If you have a lot of messages for different keys, using a dictionary could be faster as well. @Sedus Just for clarity, discovered that the order of the commands had to be changed to avoid double key presses while keyboard.is_pressed(key): pass print(message[key])
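Factoring the dispatch table out of the loop makes it testable without any hardware; the loop itself is a sketch that assumes the third-party `keyboard` package, with the release-wait placed before the print as the follow-up comment suggests:

```python
MESSAGES = {"up": "up", "down": "down", "enter": "enter"}


def dispatch(key, messages=MESSAGES):
    """Pure lookup: return the message for a key, or None if unmapped."""
    return messages.get(key)


def key_loop():
    """Hardware loop; requires `pip install keyboard` (never called here)."""
    import keyboard
    while True:
        key = keyboard.read_key()
        msg = dispatch(key)
        if msg is not None:
            while keyboard.is_pressed(key):  # wait for release first...
                pass
            print(msg)  # ...then print once, avoiding double triggers
```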
Keyboard module multiple if
import keyboard while True: if keyboard.read_key() == "up": print("up") if keyboard.read_key() == "down": print("down") if keyboard.read_key() == "enter": print("enter") Sometimes the print function only run after second key press. Python 3.11 I literally tried every other module and every possible if-elif-while combination.
[ "To make the code a bit cleaner, you can consider using a dictionary with messages:\nimport keyboard\nmessage = {\"up\": \"up\", \"down\": \"down\", \"enter\": \"enter\"}\nwhile True:\n key = keyboard.read_key()\n \n if key in message:\n print(message[key])\n while keyboard.is_pressed(key): pass\n\nIf you have a lot of messages for different keys, using a dictionary could be faster as well.\n\n@Sedus Just for clarity, discovered that the order of the commands had to be changed to avoid double key presses\n while keyboard.is_pressed(key): pass\n print(message[key])\n\n" ]
[ 0 ]
[]
[]
[ "keyboard", "python" ]
stackoverflow_0074502206_keyboard_python.txt
Q: How to remove backslash from JSON file Currently using Python to create the JSON, here is a snippet of my output: "{\"ownerName\":{\"0\":\"VANGUARD GROUP INC\",\"1\":\"BLACKROCK INC.\" ...and so on The code I've used is below: import requests import pandas as pd import json headers = { 'accept': 'application/json, text/plain, */*', 'origin': 'https://www.nasdaq.com', 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36' } pd.set_option('display.max_columns', None) pd.set_option('display.max_colwidth', None) url = 'https://api.nasdaq.com/api/company/AAPL/institutional-holdings?limit=10&offset=0&type=TOTAL&sortColumn=marketValue&sortOrder=DESC' r = requests.get(url, headers=headers) df = pd.json_normalize(r.json()['data']['holdingsTransactions']['table']['rows']) df1 = df.replace("\ "," ") df2 = df1.to_json() with open('AAPL_institutional_table_MRKTVAL.json', 'w') as f: json.dump(df2, f) I included the line df2 = df1.to_json() otherwise without it I get "JSON is not serializable". I have also attempted to include df1 = df.replace("\ "," ") as an amateur approach to replace the backslashes with nothing, but still no luck. A: You're double-encoding the JSON, so that's why you have the escaped output. 
Try: import requests import pandas as pd import json headers = { "accept": "application/json, text/plain, */*", "origin": "https://www.nasdaq.com", "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36", } pd.set_option("display.max_columns", None) pd.set_option("display.max_colwidth", None) url = "https://api.nasdaq.com/api/company/AAPL/institutional-holdings?limit=10&offset=0&type=TOTAL&sortColumn=marketValue&sortOrder=DESC" r = requests.get(url, headers=headers) df = pd.json_normalize( r.json()["data"]["holdingsTransactions"]["table"]["rows"] ) df.to_json("AAPL_institutional_table_MRKTVAL.json", indent=4) # <-- write `df` directly to file as Json Creates AAPL_institutional_table_MRKTVAL.json: { "ownerName":{ "0":"VANGUARD GROUP INC", "1":"BLACKROCK INC.", "2":"BERKSHIRE HATHAWAY INC", "3":"STATE STREET CORP", "4":"FMR LLC", "5":"GEODE CAPITAL MANAGEMENT, LLC", "6":"PRICE T ROWE ASSOCIATES INC \/MD\/", "7":"MORGAN STANLEY", "8":"NORTHERN TRUST CORP", "9":"BANK OF AMERICA CORP \/DE\/" }, "date":{ "0":"09\/30\/2022", "1":"09\/30\/2022", "2":"09\/30\/2022", "3":"09\/30\/2022", "4":"09\/30\/2022", "5":"09\/30\/2022", "6":"09\/30\/2022", "7":"09\/30\/2022", "8":"09\/30\/2022", "9":"09\/30\/2022" }, "sharesHeld":{ "0":"1,272,378,901", "1":"1,020,245,185", "2":"894,802,319", "3":"591,543,874", "4":"350,900,116", "5":"279,758,518", "6":"224,863,541", "7":"182,728,771", "8":"176,084,862", "9":"142,260,591" }, "sharesChange":{ "0":"-4,940,153", "1":"-8,443,132", "2":"0", "3":"-6,634,650", "4":"6,582,142", "5":"1,502,326", "6":"-13,047,242", "7":"278,206", "8":"-3,744,060", "9":"-6,873,324" }, "sharesChangePCT":{ "0":"-0.387%", "1":"-0.821%", "2":"0%", "3":"-1.109%", "4":"1.912%", "5":"0.54%", "6":"-5.484%", "7":"0.152%", "8":"-2.082%", "9":"-4.609%" }, "marketValue":{ "0":"$192,498,204", "1":"$154,352,894", "2":"$135,374,643", "3":"$89,494,673", "4":"$53,087,679", "5":"$42,324,666", "6":"$34,019,605", 
"7":"$27,645,036", "8":"$26,639,879", "9":"$21,522,605" }, "url":{ "0":"\/market-activity\/institutional-portfolio\/vanguard-group-inc-61322", "1":"\/market-activity\/institutional-portfolio\/blackrock-inc-711679", "2":"\/market-activity\/institutional-portfolio\/berkshire-hathaway-inc-54239", "3":"\/market-activity\/institutional-portfolio\/state-street-corp-6697", "4":"\/market-activity\/institutional-portfolio\/fmr-llc-12407", "5":"\/market-activity\/institutional-portfolio\/geode-capital-management-llc-396991", "6":"\/market-activity\/institutional-portfolio\/price-t-rowe-associates-inc-md-2145", "7":"\/market-activity\/institutional-portfolio\/morgan-stanley-5929", "8":"\/market-activity\/institutional-portfolio\/northern-trust-corp-10923", "9":"\/market-activity\/institutional-portfolio\/bank-of-america-corp-de-15519" } }
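The escaping in the question can be reproduced with the standard library alone: to_json() already returns a JSON string, so passing that string to json.dump encodes it a second time and escapes every quote. A minimal sketch of the double-encoding effect (plain json, no pandas):

```python
import json

data = {"ownerName": {"0": "VANGUARD GROUP INC"}}

once = json.dumps(data)    # a JSON document with plain quotes
twice = json.dumps(once)   # a JSON *string literal* full of \" escapes

print(once)
print(twice)

# decoding the double-encoded value once only recovers the inner JSON string
assert json.loads(twice) == once
```

Writing the data (or the DataFrame) to the file exactly once, as the answer above does, avoids the escaped output entirely.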
How to remove backslash from JSON file
Currently using python to create the JSON, here is a snippet of my output: "{\"ownerName\":{\"0\":\"VANGUARD GROUP INC\",\"1\":\"BLACKROCK INC.\" ...and so on The code I've used is below: import requests import pandas as pd import json headers = { 'accept': 'application/json, text/plain, */*', 'origin': 'https://www.nasdaq.com', 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36' } pd.set_option('display.max_columns', None) pd.set_option('display.max_colwidth', None) url = 'https://api.nasdaq.com/api/company/AAPL/institutional-holdings?limit=10&offset=0&type=TOTAL&sortColumn=marketValue&sortOrder=DESC' r = requests.get(url, headers=headers) df = pd.json_normalize(r.json()['data']['holdingsTransactions']['table']['rows']) df1 = df.replace("\ "," ") df2 = df1.to_json() with open('AAPL_institutional_table_MRKTVAL.json', 'w') as f: json.dump(df2, f) I included the line df2 = df1.to_json() because without it the JSON was "not serializable". I have also attempted to include df1 = df.replace("\ "," ") as an amateur approach to replace the backslashes with nothing, but still no luck.
[ "You're double-encoding the Json, so that's why you have the escaped output. Try:\nimport requests\nimport pandas as pd\nimport json\n\nheaders = {\n \"accept\": \"application/json, text/plain, */*\",\n \"origin\": \"https://www.nasdaq.com\",\n \"User-Agent\": \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.79 Safari/537.36\",\n}\n\npd.set_option(\"display.max_columns\", None)\npd.set_option(\"display.max_colwidth\", None)\n\nurl = \"https://api.nasdaq.com/api/company/AAPL/institutional-holdings?limit=10&offset=0&type=TOTAL&sortColumn=marketValue&sortOrder=DESC\"\nr = requests.get(url, headers=headers)\ndf = pd.json_normalize(\n r.json()[\"data\"][\"holdingsTransactions\"][\"table\"][\"rows\"]\n)\n\ndf.to_json(\"AAPL_institutional_table_MRKTVAL.json\", indent=4) # <-- write `df` directly to file as Json\n\nCreates AAPL_institutional_table_MRKTVAL.json:\n{\n \"ownerName\":{\n \"0\":\"VANGUARD GROUP INC\",\n \"1\":\"BLACKROCK INC.\",\n \"2\":\"BERKSHIRE HATHAWAY INC\",\n \"3\":\"STATE STREET CORP\",\n \"4\":\"FMR LLC\",\n \"5\":\"GEODE CAPITAL MANAGEMENT, LLC\",\n \"6\":\"PRICE T ROWE ASSOCIATES INC \\/MD\\/\",\n \"7\":\"MORGAN STANLEY\",\n \"8\":\"NORTHERN TRUST CORP\",\n \"9\":\"BANK OF AMERICA CORP \\/DE\\/\"\n },\n \"date\":{\n \"0\":\"09\\/30\\/2022\",\n \"1\":\"09\\/30\\/2022\",\n \"2\":\"09\\/30\\/2022\",\n \"3\":\"09\\/30\\/2022\",\n \"4\":\"09\\/30\\/2022\",\n \"5\":\"09\\/30\\/2022\",\n \"6\":\"09\\/30\\/2022\",\n \"7\":\"09\\/30\\/2022\",\n \"8\":\"09\\/30\\/2022\",\n \"9\":\"09\\/30\\/2022\"\n },\n \"sharesHeld\":{\n \"0\":\"1,272,378,901\",\n \"1\":\"1,020,245,185\",\n \"2\":\"894,802,319\",\n \"3\":\"591,543,874\",\n \"4\":\"350,900,116\",\n \"5\":\"279,758,518\",\n \"6\":\"224,863,541\",\n \"7\":\"182,728,771\",\n \"8\":\"176,084,862\",\n \"9\":\"142,260,591\"\n },\n \"sharesChange\":{\n \"0\":\"-4,940,153\",\n \"1\":\"-8,443,132\",\n \"2\":\"0\",\n \"3\":\"-6,634,650\",\n \"4\":\"6,582,142\",\n 
\"5\":\"1,502,326\",\n \"6\":\"-13,047,242\",\n \"7\":\"278,206\",\n \"8\":\"-3,744,060\",\n \"9\":\"-6,873,324\"\n },\n \"sharesChangePCT\":{\n \"0\":\"-0.387%\",\n \"1\":\"-0.821%\",\n \"2\":\"0%\",\n \"3\":\"-1.109%\",\n \"4\":\"1.912%\",\n \"5\":\"0.54%\",\n \"6\":\"-5.484%\",\n \"7\":\"0.152%\",\n \"8\":\"-2.082%\",\n \"9\":\"-4.609%\"\n },\n \"marketValue\":{\n \"0\":\"$192,498,204\",\n \"1\":\"$154,352,894\",\n \"2\":\"$135,374,643\",\n \"3\":\"$89,494,673\",\n \"4\":\"$53,087,679\",\n \"5\":\"$42,324,666\",\n \"6\":\"$34,019,605\",\n \"7\":\"$27,645,036\",\n \"8\":\"$26,639,879\",\n \"9\":\"$21,522,605\"\n },\n \"url\":{\n \"0\":\"\\/market-activity\\/institutional-portfolio\\/vanguard-group-inc-61322\",\n \"1\":\"\\/market-activity\\/institutional-portfolio\\/blackrock-inc-711679\",\n \"2\":\"\\/market-activity\\/institutional-portfolio\\/berkshire-hathaway-inc-54239\",\n \"3\":\"\\/market-activity\\/institutional-portfolio\\/state-street-corp-6697\",\n \"4\":\"\\/market-activity\\/institutional-portfolio\\/fmr-llc-12407\",\n \"5\":\"\\/market-activity\\/institutional-portfolio\\/geode-capital-management-llc-396991\",\n \"6\":\"\\/market-activity\\/institutional-portfolio\\/price-t-rowe-associates-inc-md-2145\",\n \"7\":\"\\/market-activity\\/institutional-portfolio\\/morgan-stanley-5929\",\n \"8\":\"\\/market-activity\\/institutional-portfolio\\/northern-trust-corp-10923\",\n \"9\":\"\\/market-activity\\/institutional-portfolio\\/bank-of-america-corp-de-15519\"\n }\n}\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "json", "pandas", "python" ]
stackoverflow_0074501229_dataframe_json_pandas_python.txt
Q: Detecting a specific color in a circular area and adding horizontal lines inside the circle I was working to reproduce an optical illusion that you find here(image) but I'm having trouble adding horizontal lines inside the circles: My attempt so far: -Detect certain colors of the circles -Detect contours, and extract circle center points, and radius -Then try to draw horizontal lines (which I failed) Here is my code: import numpy as np import cv2 img = 255*np.ones((800, 800, 3), np.uint8) height, width,_ = img.shape #filling the image with lines for i in range(0, height, 15): cv2.line(img, (0, i+3), (width, i+3), (255, 0, 0), 4) cv2.line(img, (0, i+8), (width, i+8), (0, 255, 0), 4) cv2.line(img, (0, i+13), (width, i+13), (0, 0, 255), 4) #adding 5 gray circles for i in range(0, height, int(height/5)): cv2.circle(img, (i+50, i+50), 75, (128, 128, 128), -1) #finding range of gray circles lower=np.array([127,127,127]) upper=np.array([129,129,129]) mask = cv2.inRange(img, lower, upper) #contours contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) for cnt in contours: #draw circles around the contours coordinates = cv2.minEnclosingCircle(cnt) #coordinates and radius: center = (int(coordinates[0][0]), int(coordinates[0][1])) radius = int(coordinates[1]) #I wanted to do a sanity check before the for loop (I added a line; the longest line should be 2*radius) cv2.line(img, (center[0]-radius, center[1]), (center[0]+radius, center[1]), (0, 0, 0), 4) for i in range(0, radius, int(radius/5)): cv2.line(img, (center[0]-radius+i, center[1]+i), (center[0]+radius-i, center[1]+i), (0, 0, 0), 4) cv2.line(img, (center[0]-radius+i, center[1]-i), (center[0]+radius-i, center[1]-i), (0, 0, 0), 4) cv2.imwrite('munker.png',img) And here is the result: As you can see, the values in the for loop are not proportional to the boundaries of the circle, so the lines are short (except the longest line). What am I missing here?
I tried the Hough transform but I had a similar problem. For more clarity, I wrote some code to show what I wanted: for i in range(0, 360, 15): x = int(center[0] + radius * np.cos(np.deg2rad(i))) y = int(center[1] + radius * np.sin(np.deg2rad(i))) cv2.line(img, (x,y), (x, y), (0, 255, 255), 10) I want to merge the yellow dots with horizontal lines. But my math is finished right here. Sorry, it's long, I was just trying to make things clear. Thank you for your time. A: As @fmw42 pointed out in the comment, splitting the RGB channels and applying a mask is very effective for filling the inside of the circles with horizontal lines. import numpy as np import cv2 img = 255*np.ones((800, 800, 3), np.uint8) height, width,_ = img.shape for i in range(0, height, 15): cv2.line(img, (0, i+3), (width, i+3), (255, 0, 0), 4) cv2.line(img, (0, i+8), (width, i+8), (0, 255, 0), 4) cv2.line(img, (0, i+13), (width, i+13), (0, 0, 255), 4) b, g, r = cv2.split(img) mask_b = np.zeros((height, width), np.uint8) mask_g = np.zeros((height, width), np.uint8) mask_r = np.zeros((height, width), np.uint8) for i in range(0, height, int(height/5)): cv2.circle(mask_b, (i, i), 75, 255, -1) cv2.circle(mask_g, (i, i), 75, 255, -1) cv2.circle(mask_r, (i, i), 75, 255, -1) #apply the mask to the channels b = cv2.bitwise_and(b, b, mask=mask_b) g = cv2.bitwise_and(g, g, mask=mask_g) r = cv2.bitwise_and(r, r, mask=mask_r) #merge the channels img = cv2.merge((b, g, r)) cv2.imshow('image', img) cv2.waitKey(0)
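For reference, the geometry the original question's loop was missing: a horizontal chord drawn i pixels above or below the center of a circle of radius r has half-length sqrt(r^2 - i^2), not r - i, which is why the drawn lines came out too short. A small sketch of the endpoint math (plain Python, no OpenCV; chord_endpoints is a hypothetical helper name):

```python
import math

def chord_endpoints(cx, cy, r, offset):
    """Endpoints of the horizontal chord `offset` pixels above/below the center."""
    half = math.sqrt(r * r - offset * offset)
    return (int(cx - half), cy + offset), (int(cx + half), cy + offset)

# At the center the chord is the full diameter; at offset == r it shrinks to a point.
(x1, y), (x2, _) = chord_endpoints(100, 100, 75, 0)
print(x2 - x1)  # 150
```

Feeding those endpoints to cv2.line for each offset would draw chords that exactly fill the circle, which is an alternative to the channel-masking answer above.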
Detecting a specific color in a circular area and adding horizontal lines inside the circle
I was working to reproduce an optical illusion that you find here(image) but I'm having trouble adding horizontal lines inside the circles: My attempt so far: -Detect certain colors of the circles -Detect contours, and extract circle center points, and radius -Then try to draw horizontal lines (which I failed) Here is my code: import numpy as np import cv2 img = 255*np.ones((800, 800, 3), np.uint8) height, width,_ = img.shape #filling the image with lines for i in range(0, height, 15): cv2.line(img, (0, i+3), (width, i+3), (255, 0, 0), 4) cv2.line(img, (0, i+8), (width, i+8), (0, 255, 0), 4) cv2.line(img, (0, i+13), (width, i+13), (0, 0, 255), 4) #adding 5 gray circles for i in range(0, height, int(height/5)): cv2.circle(img, (i+50, i+50), 75, (128, 128, 128), -1) #finding range of gray circles lower=np.array([127,127,127]) upper=np.array([129,129,129]) mask = cv2.inRange(img, lower, upper) #contours contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) for cnt in contours: #draw circles around the contours coordinates = cv2.minEnclosingCircle(cnt) #coordinates and radius: center = (int(coordinates[0][0]), int(coordinates[0][1])) radius = int(coordinates[1]) #I wanted to do a sanity check before the for loop (I added a line; the longest line should be 2*radius) cv2.line(img, (center[0]-radius, center[1]), (center[0]+radius, center[1]), (0, 0, 0), 4) for i in range(0, radius, int(radius/5)): cv2.line(img, (center[0]-radius+i, center[1]+i), (center[0]+radius-i, center[1]+i), (0, 0, 0), 4) cv2.line(img, (center[0]-radius+i, center[1]-i), (center[0]+radius-i, center[1]-i), (0, 0, 0), 4) cv2.imwrite('munker.png',img) And here is the result: As you can see, the values in the for loop are not proportional to the boundaries of the circle, so the lines are short (except the longest line). What am I missing here? I tried the Hough transform but I had a similar problem.
For more clarity, I wrote some code to show what I wanted: for i in range(0, 360, 15): x = int(center[0] + radius * np.cos(np.deg2rad(i))) y = int(center[1] + radius * np.sin(np.deg2rad(i))) cv2.line(img, (x,y), (x, y), (0, 255, 255), 10) I want to merge the yellow dots with horizontal lines. But my math is finished right here. Sorry, it's long, I was just trying to make things clear. Thank you for your time.
[ "As @fmw42 pointed out in the comment, splitting the RGB channels and applying a mask is very effective at being able to fill the inside of the circles with horizontal lines.\nimport numpy as np\nimport cv2\n\nimg = 255*np.ones((800, 800, 3), np.uint8)\nheight, width,_ = img.shape\nfor i in range(0, height, 15):\n cv2.line(img, (0, i+3), (width, i+3), (255, 0, 0), 4)\n cv2.line(img, (0, i+8), (width, i+8), (0, 255, 0), 4)\n cv2.line(img, (0, i+13), (width, i+13), (0, 0, 255), 4)\nb, g, r = cv2.split(img)\nmask_b = np.zeros((height, width), np.uint8)\nmask_g = np.zeros((height, width), np.uint8)\nmask_r = np.zeros((height, width), np.uint8)\nfor i in range(0, height, int(height/5)):\n cv2.circle(mask_b, (i, i), 75, 255, -1)\n cv2.circle(mask_g, (i, i), 75, 255, -1)\n cv2.circle(mask_r, (i, i), 75, 255, -1)\n\n#apply the mask to the channels\nb = cv2.bitwise_and(b, b, mask=mask_b)\ng = cv2.bitwise_and(g, g, mask=mask_g)\nr = cv2.bitwise_and(r, r, mask=mask_r)\n#merge the channels\nimg = cv2.merge((b, g, r))\n\ncv2.imshow('image', img)\ncv2.waitKey(0)\n\n" ]
[ 1 ]
[]
[]
[ "colors", "geometry", "image_processing", "opencv", "python" ]
stackoverflow_0074500038_colors_geometry_image_processing_opencv_python.txt
Q: Is there a way to convert number words to Integers? I need to convert one into 1, two into 2 and so on. Is there a way to do this with a library or a class or anything? A: The majority of this code is to set up the numwords dict, which is only done on the first call. def text2int(textnum, numwords={}): if not numwords: units = [ "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen", ] tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"] scales = ["hundred", "thousand", "million", "billion", "trillion"] numwords["and"] = (1, 0) for idx, word in enumerate(units): numwords[word] = (1, idx) for idx, word in enumerate(tens): numwords[word] = (1, idx * 10) for idx, word in enumerate(scales): numwords[word] = (10 ** (idx * 3 or 2), 0) current = result = 0 for word in textnum.split(): if word not in numwords: raise Exception("Illegal word: " + word) scale, increment = numwords[word] current = current * scale + increment if scale > 100: result += current current = 0 return result + current print text2int("seven billion one hundred million thirty one thousand three hundred thirty seven") #7100031337 A: I have just released a python module to PyPI called word2number for the exact purpose. https://github.com/akshaynagpal/w2n Install it using: pip install word2number make sure your pip is updated to the latest version. Usage: from word2number import w2n print w2n.word_to_num("two million three thousand nine hundred and eighty four") 2003984 A: I needed something a bit different since my input is from a speech-to-text conversion and the solution is not always to sum the numbers. For example, "my zipcode is one two three four five" should not convert to "my zipcode is 15". 
I took Andrew's answer and tweaked it to handle a few other cases people highlighted as errors, and also added support for examples like the zipcode one I mentioned above. Some basic test cases are shown below, but I'm sure there is still room for improvement. def is_number(x): if type(x) == str: x = x.replace(',', '') try: float(x) except: return False return True def text2int (textnum, numwords={}): units = [ 'zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten', 'eleven', 'twelve', 'thirteen', 'fourteen', 'fifteen', 'sixteen', 'seventeen', 'eighteen', 'nineteen', ] tens = ['', '', 'twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety'] scales = ['hundred', 'thousand', 'million', 'billion', 'trillion'] ordinal_words = {'first':1, 'second':2, 'third':3, 'fifth':5, 'eighth':8, 'ninth':9, 'twelfth':12} ordinal_endings = [('ieth', 'y'), ('th', '')] if not numwords: numwords['and'] = (1, 0) for idx, word in enumerate(units): numwords[word] = (1, idx) for idx, word in enumerate(tens): numwords[word] = (1, idx * 10) for idx, word in enumerate(scales): numwords[word] = (10 ** (idx * 3 or 2), 0) textnum = textnum.replace('-', ' ') current = result = 0 curstring = '' onnumber = False lastunit = False lastscale = False def is_numword(x): if is_number(x): return True if word in numwords: return True return False def from_numword(x): if is_number(x): scale = 0 increment = int(x.replace(',', '')) return scale, increment return numwords[x] for word in textnum.split(): if word in ordinal_words: scale, increment = (1, ordinal_words[word]) current = current * scale + increment if scale > 100: result += current current = 0 onnumber = True lastunit = False lastscale = False else: for ending, replacement in ordinal_endings: if word.endswith(ending): word = "%s%s" % (word[:-len(ending)], replacement) if (not is_numword(word)) or (word == 'and' and not lastscale): if onnumber: # Flush the current number we are building curstring += 
repr(result + current) + " " curstring += word + " " result = current = 0 onnumber = False lastunit = False lastscale = False else: scale, increment = from_numword(word) onnumber = True if lastunit and (word not in scales): # Assume this is part of a string of individual numbers to # be flushed, such as a zipcode "one two three four five" curstring += repr(result + current) result = current = 0 if scale > 1: current = max(1, current) current = current * scale + increment if scale > 100: result += current current = 0 lastscale = False lastunit = False if word in scales: lastscale = True elif word in units: lastunit = True if onnumber: curstring += repr(result + current) return curstring Some tests... one two three -> 123 three forty five -> 345 three and forty five -> 3 and 45 three hundred and forty five -> 345 three hundred -> 300 twenty five hundred -> 2500 three thousand and six -> 3006 three thousand six -> 3006 nineteenth -> 19 twentieth -> 20 first -> 1 my zip is one two three four five -> my zip is 12345 nineteen ninety six -> 1996 fifty-seventh -> 57 one million -> 1000000 first hundred -> 100 I will buy the first thousand -> I will buy the 1000 # probably should leave ordinal in the string thousand -> 1000 hundred and six -> 106 1 million -> 1000000 A: If anyone is interested, I hacked up a version that maintains the rest of the string (though it may have bugs, haven't tested it too much). 
def text2int (textnum, numwords={}): if not numwords: units = [ "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen", ] tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"] scales = ["hundred", "thousand", "million", "billion", "trillion"] numwords["and"] = (1, 0) for idx, word in enumerate(units): numwords[word] = (1, idx) for idx, word in enumerate(tens): numwords[word] = (1, idx * 10) for idx, word in enumerate(scales): numwords[word] = (10 ** (idx * 3 or 2), 0) ordinal_words = {'first':1, 'second':2, 'third':3, 'fifth':5, 'eighth':8, 'ninth':9, 'twelfth':12} ordinal_endings = [('ieth', 'y'), ('th', '')] textnum = textnum.replace('-', ' ') current = result = 0 curstring = "" onnumber = False for word in textnum.split(): if word in ordinal_words: scale, increment = (1, ordinal_words[word]) current = current * scale + increment if scale > 100: result += current current = 0 onnumber = True else: for ending, replacement in ordinal_endings: if word.endswith(ending): word = "%s%s" % (word[:-len(ending)], replacement) if word not in numwords: if onnumber: curstring += repr(result + current) + " " curstring += word + " " result = current = 0 onnumber = False else: scale, increment = numwords[word] current = current * scale + increment if scale > 100: result += current current = 0 onnumber = True if onnumber: curstring += repr(result + current) return curstring Example: >>> text2int("I want fifty five hot dogs for two hundred dollars.") I want 55 hot dogs for 200 dollars. There could be issues if you have, say, "$200". But, this was really rough. 
A: I needed to handle a couple extra parsing cases, such as ordinal words ("first", "second"), hyphenated words ("one-hundred"), and hyphenated ordinal words (like "fifty-seventh"), so I added a couple lines: def text2int(textnum, numwords={}): if not numwords: units = [ "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen", ] tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"] scales = ["hundred", "thousand", "million", "billion", "trillion"] numwords["and"] = (1, 0) for idx, word in enumerate(units): numwords[word] = (1, idx) for idx, word in enumerate(tens): numwords[word] = (1, idx * 10) for idx, word in enumerate(scales): numwords[word] = (10 ** (idx * 3 or 2), 0) ordinal_words = {'first':1, 'second':2, 'third':3, 'fifth':5, 'eighth':8, 'ninth':9, 'twelfth':12} ordinal_endings = [('ieth', 'y'), ('th', '')] textnum = textnum.replace('-', ' ') current = result = 0 for word in textnum.split(): if word in ordinal_words: scale, increment = (1, ordinal_words[word]) else: for ending, replacement in ordinal_endings: if word.endswith(ending): word = "%s%s" % (word[:-len(ending)], replacement) if word not in numwords: raise Exception("Illegal word: " + word) scale, increment = numwords[word] current = current * scale + increment if scale > 100: result += current current = 0 return result + current A: Here's the trivial case approach: >>> number = {'one':1, ... 'two':2, ... 'three':3,} >>> >>> number['two'] 2 Or are you looking for something that can handle "twelve thousand, one hundred seventy-two"? A: Make use of the Python package: WordToDigits pip install wordtodigits It can find numbers present in word form in a sentence and then convert them to the proper numeric format. Also takes care of the decimal part, if present. The word representation of numbers could be anywhere in the passage.
A: def parse_int(string): ONES = {'zero': 0, 'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5, 'six': 6, 'seven': 7, 'eight': 8, 'nine': 9, 'ten': 10, 'eleven': 11, 'twelve': 12, 'thirteen': 13, 'fourteen': 14, 'fifteen': 15, 'sixteen': 16, 'seventeen': 17, 'eighteen': 18, 'nineteen': 19, 'twenty': 20, 'thirty': 30, 'forty': 40, 'fifty': 50, 'sixty': 60, 'seventy': 70, 'eighty': 80, 'ninety': 90, } numbers = [] for token in string.replace('-', ' ').split(' '): if token in ONES: numbers.append(ONES[token]) elif token == 'hundred': numbers[-1] *= 100 elif token == 'thousand': numbers = [x * 1000 for x in numbers] elif token == 'million': numbers = [x * 1000000 for x in numbers] return sum(numbers) Tested with 700 random numbers in the range 1 to a million; works well. A: This could easily be hardcoded into a dictionary if there's a limited amount of numbers you'd like to parse. For slightly more complex cases, you'll probably want to generate this dictionary automatically, based on the relatively simple numbers grammar. Something along the lines of this (of course, generalized...) for i in range(10): myDict[30 + i] = "thirty-" + singleDigitsDict[i] If you need something more extensive, then it looks like you'll need natural language processing tools. This article might be a good starting point. A: I made a change so that text2int(scale) returns the correct conversion. E.g., text2int("hundred") => 100.
import re numwords = {} def text2int(textnum): if not numwords: units = [ "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"] tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"] scales = ["hundred", "thousand", "million", "billion", "trillion", 'quadrillion', 'quintillion', 'sexillion', 'septillion', 'octillion', 'nonillion', 'decillion' ] numwords["and"] = (1, 0) for idx, word in enumerate(units): numwords[word] = (1, idx) for idx, word in enumerate(tens): numwords[word] = (1, idx * 10) for idx, word in enumerate(scales): numwords[word] = (10 ** (idx * 3 or 2), 0) ordinal_words = {'first':1, 'second':2, 'third':3, 'fifth':5, 'eighth':8, 'ninth':9, 'twelfth':12} ordinal_endings = [('ieth', 'y'), ('th', '')] current = result = 0 tokens = re.split(r"[\s-]+", textnum) for word in tokens: if word in ordinal_words: scale, increment = (1, ordinal_words[word]) else: for ending, replacement in ordinal_endings: if word.endswith(ending): word = "%s%s" % (word[:-len(ending)], replacement) if word not in numwords: raise Exception("Illegal word: " + word) scale, increment = numwords[word] if scale > 1: current = max(1, current) current = current * scale + increment if scale > 100: result += current current = 0 return result + current A: There's a ruby gem by Marc Burns that does it. I recently forked it to add support for years. You can call ruby code from python. 
require 'numbers_in_words' require 'numbers_in_words/duck_punch' nums = ["fifteen sixteen", "eighty five sixteen", "nineteen ninety six", "one hundred and seventy nine", "thirteen hundred", "nine thousand two hundred and ninety seven"] nums.each {|n| p n; p n.in_numbers} results: "fifteen sixteen" 1516 "eighty five sixteen" 8516 "nineteen ninety six" 1996 "one hundred and seventy nine" 179 "thirteen hundred" 1300 "nine thousand two hundred and ninety seven" 9297 A: A quick solution is to use inflect.py to generate a dictionary for translation. inflect.py has a number_to_words() function that will turn a number (e.g. 2) into its word form (e.g. 'two'). Unfortunately, its reverse (which would allow you to avoid the translation dictionary route) isn't offered. All the same, you can use that function to build the translation dictionary: >>> import inflect >>> p = inflect.engine() >>> word_to_number_mapping = {} >>> >>> for i in range(1, 100): ... word_form = p.number_to_words(i) # 1 -> 'one' ... word_to_number_mapping[word_form] = i ... >>> print word_to_number_mapping['one'] 1 >>> print word_to_number_mapping['eleven'] 11 >>> print word_to_number_mapping['forty-three'] 43 If you're willing to commit some time, it might be possible to examine the inner workings of inflect.py's number_to_words() function and build your own code to do this dynamically (I haven't tried to do this). A: I took @recursive's logic and converted to Ruby. I've also hardcoded the lookup table so it's not as cool but might help a newbie understand what is going on.
WORDNUMS = {"zero"=> [1,0], "one"=> [1,1], "two"=> [1,2], "three"=> [1,3], "four"=> [1,4], "five"=> [1,5], "six"=> [1,6], "seven"=> [1,7], "eight"=> [1,8], "nine"=> [1,9], "ten"=> [1,10], "eleven"=> [1,11], "twelve"=> [1,12], "thirteen"=> [1,13], "fourteen"=> [1,14], "fifteen"=> [1,15], "sixteen"=> [1,16], "seventeen"=> [1,17], "eighteen"=> [1,18], "nineteen"=> [1,19], "twenty"=> [1,20], "thirty" => [1,30], "forty" => [1,40], "fifty" => [1,50], "sixty" => [1,60], "seventy" => [1,70], "eighty" => [1,80], "ninety" => [1,90], "hundred" => [100,0], "thousand" => [1000,0], "million" => [1000000, 0]} def text_2_int(string) numberWords = string.gsub('-', ' ').split(/ /) - %w{and} current = result = 0 numberWords.each do |word| scale, increment = WORDNUMS[word] current = current * scale + increment if scale > 100 result += current current = 0 end end return result + current end I was looking to handle strings like "two thousand one hundred and forty-six". A: This handles numbers in words in the Indian style, some fractions, combinations of numbers and words, and also addition.
def words_to_number(words): numbers = {"zero":0, "a":1, "half":0.5, "quarter":0.25, "one":1,"two":2, "three":3, "four":4,"five":5,"six":6,"seven":7,"eight":8, "nine":9, "ten":10,"eleven":11,"twelve":12, "thirteen":13, "fourteen":14, "fifteen":15,"sixteen":16,"seventeen":17, "eighteen":18,"nineteen":19, "twenty":20,"thirty":30, "forty":40, "fifty":50,"sixty":60,"seventy":70, "eighty":80,"ninety":90} groups = {"hundred":100, "thousand":1_000, "lac":1_00_000, "lakh":1_00_000, "million":1_000_000, "crore":10**7, "billion":10**9, "trillion":10**12} split_at = ["and", "plus"] n = 0 skip = False words_array = words.split(" ") for i, word in enumerate(words_array): if not skip: if word in groups: n*= groups[word] elif word in numbers: n += numbers[word] elif word in split_at: skip = True remaining = ' '.join(words_array[i+1:]) n+=words_to_number(remaining) else: try: n += float(word) except ValueError as e: raise ValueError(f"Invalid word {word}") from e return n TEST: print(words_to_number("a million and one")) >> 1000001 print(words_to_number("one crore and one")) >> 10000001 print(words_to_number("0.5 million one")) >> 500001.0 print(words_to_number("half million and one hundred")) >> 500100.0 print(words_to_number("quarter")) >> 0.25 print(words_to_number("one hundred plus one")) >> 101 A: I found a faster way: Da_Unità_a_Cifre = {'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5, 'six': 6, 'seven': 7, 'eight': 8, 'nine': 9, 'ten': 10, 'eleven': 11, 'twelve': 12, 'thirteen': 13, 'fourteen': 14, 'fifteen': 15, 'sixteen': 16, 'seventeen': 17, 'eighteen': 18, 'nineteen': 19} Da_Lettere_a_Decine = {"tw": 20, "th": 30, "fo": 40, "fi": 50, "si": 60, "se": 70, "ei": 80, "ni": 90, } elemento = input("insert the word: ") Val_Num = 0 try: elemento = elemento.lower().strip() Unità = elemento[elemento.find("ty")+2:] # equal to the str: five if elemento[-1] == "y": Val_Num = int(Da_Lettere_a_Decine[elemento[0] + elemento[1]]) print(Val_Num) elif elemento == "onehundred": Val_Num =
100 print(Val_Num) else: Cifre_Unità = int(Da_Unità_a_Cifre[Unità]) Cifre_Decine = int(Da_Lettere_a_Decine[elemento[0] + elemento[1]]) Val_Num = int(Cifre_Decine + Cifre_Unità) print(Val_Num) except: print("invalid input")
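The accumulation loop shared by most of the answers above can be condensed and sanity-checked in isolation. A minimal sketch reusing the units/tens/scales tables from the first answer ("and" is treated as a no-op, hyphens as spaces):

```python
def text2int(textnum):
    units = ["zero", "one", "two", "three", "four", "five", "six", "seven",
             "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
             "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
    tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
            "eighty", "ninety"]
    scales = {"hundred": 100, "thousand": 10 ** 3, "million": 10 ** 6,
              "billion": 10 ** 9, "trillion": 10 ** 12}

    numwords = {"and": (1, 0)}          # "and" neither scales nor adds
    for idx, word in enumerate(units):
        numwords[word] = (1, idx)
    for idx, word in enumerate(tens):
        numwords[word] = (1, idx * 10)
    for word, scale in scales.items():
        numwords[word] = (scale, 0)

    current = result = 0
    for word in textnum.replace("-", " ").lower().split():
        if word not in numwords:
            raise ValueError("Illegal word: " + word)
        scale, increment = numwords[word]
        current = current * scale + increment
        if scale > 100:                 # "thousand" and up closes the current group
            result += current
            current = 0
    return result + current

print(text2int("seven billion one hundred million thirty one thousand three hundred thirty seven"))  # 7100031337
```

The group-closing rule (scale > 100) is what makes "twenty-five hundred" come out as 2500 rather than 120: "hundred" only multiplies the running group, while the larger scales flush it into the result.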
Is there a way to convert number words to Integers?
I need to convert one into 1, two into 2 and so on. Is there a way to do this with a library or a class or anything?
[ "The majority of this code is to set up the numwords dict, which is only done on the first call.\ndef text2int(textnum, numwords={}):\n if not numwords:\n units = [\n \"zero\", \"one\", \"two\", \"three\", \"four\", \"five\", \"six\", \"seven\", \"eight\",\n \"nine\", \"ten\", \"eleven\", \"twelve\", \"thirteen\", \"fourteen\", \"fifteen\",\n \"sixteen\", \"seventeen\", \"eighteen\", \"nineteen\",\n ]\n\n tens = [\"\", \"\", \"twenty\", \"thirty\", \"forty\", \"fifty\", \"sixty\", \"seventy\", \"eighty\", \"ninety\"]\n\n scales = [\"hundred\", \"thousand\", \"million\", \"billion\", \"trillion\"]\n\n numwords[\"and\"] = (1, 0)\n for idx, word in enumerate(units): numwords[word] = (1, idx)\n for idx, word in enumerate(tens): numwords[word] = (1, idx * 10)\n for idx, word in enumerate(scales): numwords[word] = (10 ** (idx * 3 or 2), 0)\n\n current = result = 0\n for word in textnum.split():\n if word not in numwords:\n raise Exception(\"Illegal word: \" + word)\n\n scale, increment = numwords[word]\n current = current * scale + increment\n if scale > 100:\n result += current\n current = 0\n\n return result + current\n\nprint text2int(\"seven billion one hundred million thirty one thousand three hundred thirty seven\")\n#7100031337\n\n", "I have just released a python module to PyPI called word2number for the exact purpose. https://github.com/akshaynagpal/w2n\nInstall it using: \npip install word2number\n\nmake sure your pip is updated to the latest version.\nUsage:\nfrom word2number import w2n\n\nprint w2n.word_to_num(\"two million three thousand nine hundred and eighty four\")\n2003984\n\n", "I needed something a bit different since my input is from a speech-to-text conversion and the solution is not always to sum the numbers. For example, \"my zipcode is one two three four five\" should not convert to \"my zipcode is 15\". 
\nI took Andrew's answer and tweaked it to handle a few other cases people highlighted as errors, and also added support for examples like the zipcode one I mentioned above. Some basic test cases are shown below, but I'm sure there is still room for improvement.\ndef is_number(x):\n if type(x) == str:\n x = x.replace(',', '')\n try:\n float(x)\n except:\n return False\n return True\n\ndef text2int (textnum, numwords={}):\n units = [\n 'zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight',\n 'nine', 'ten', 'eleven', 'twelve', 'thirteen', 'fourteen', 'fifteen',\n 'sixteen', 'seventeen', 'eighteen', 'nineteen',\n ]\n tens = ['', '', 'twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety']\n scales = ['hundred', 'thousand', 'million', 'billion', 'trillion']\n ordinal_words = {'first':1, 'second':2, 'third':3, 'fifth':5, 'eighth':8, 'ninth':9, 'twelfth':12}\n ordinal_endings = [('ieth', 'y'), ('th', '')]\n\n if not numwords:\n numwords['and'] = (1, 0)\n for idx, word in enumerate(units): numwords[word] = (1, idx)\n for idx, word in enumerate(tens): numwords[word] = (1, idx * 10)\n for idx, word in enumerate(scales): numwords[word] = (10 ** (idx * 3 or 2), 0)\n\n textnum = textnum.replace('-', ' ')\n\n current = result = 0\n curstring = ''\n onnumber = False\n lastunit = False\n lastscale = False\n\n def is_numword(x):\n if is_number(x):\n return True\n if word in numwords:\n return True\n return False\n\n def from_numword(x):\n if is_number(x):\n scale = 0\n increment = int(x.replace(',', ''))\n return scale, increment\n return numwords[x]\n\n for word in textnum.split():\n if word in ordinal_words:\n scale, increment = (1, ordinal_words[word])\n current = current * scale + increment\n if scale > 100:\n result += current\n current = 0\n onnumber = True\n lastunit = False\n lastscale = False\n else:\n for ending, replacement in ordinal_endings:\n if word.endswith(ending):\n word = \"%s%s\" % (word[:-len(ending)], replacement)\n\n if 
(not is_numword(word)) or (word == 'and' and not lastscale):\n if onnumber:\n # Flush the current number we are building\n curstring += repr(result + current) + \" \"\n curstring += word + \" \"\n result = current = 0\n onnumber = False\n lastunit = False\n lastscale = False\n else:\n scale, increment = from_numword(word)\n onnumber = True\n\n if lastunit and (word not in scales): \n # Assume this is part of a string of individual numbers to \n # be flushed, such as a zipcode \"one two three four five\" \n curstring += repr(result + current) \n result = current = 0 \n\n if scale > 1: \n current = max(1, current) \n\n current = current * scale + increment \n if scale > 100: \n result += current \n current = 0 \n\n lastscale = False \n lastunit = False \n if word in scales: \n lastscale = True \n elif word in units: \n lastunit = True\n\n if onnumber:\n curstring += repr(result + current)\n\n return curstring\n\nSome tests...\none two three -> 123\nthree forty five -> 345\nthree and forty five -> 3 and 45\nthree hundred and forty five -> 345\nthree hundred -> 300\ntwenty five hundred -> 2500\nthree thousand and six -> 3006\nthree thousand six -> 3006\nnineteenth -> 19\ntwentieth -> 20\nfirst -> 1\nmy zip is one two three four five -> my zip is 12345\nnineteen ninety six -> 1996\nfifty-seventh -> 57\none million -> 1000000\nfirst hundred -> 100\nI will buy the first thousand -> I will buy the 1000 # probably should leave ordinal in the string\nthousand -> 1000\nhundred and six -> 106\n1 million -> 1000000\n\n", "If anyone is interested, I hacked up a version that maintains the rest of the string (though it may have bugs, haven't tested it too much).\ndef text2int (textnum, numwords={}):\n if not numwords:\n units = [\n \"zero\", \"one\", \"two\", \"three\", \"four\", \"five\", \"six\", \"seven\", \"eight\",\n \"nine\", \"ten\", \"eleven\", \"twelve\", \"thirteen\", \"fourteen\", \"fifteen\",\n \"sixteen\", \"seventeen\", \"eighteen\", \"nineteen\",\n ]\n\n tens = 
[\"\", \"\", \"twenty\", \"thirty\", \"forty\", \"fifty\", \"sixty\", \"seventy\", \"eighty\", \"ninety\"]\n\n scales = [\"hundred\", \"thousand\", \"million\", \"billion\", \"trillion\"]\n\n numwords[\"and\"] = (1, 0)\n for idx, word in enumerate(units): numwords[word] = (1, idx)\n for idx, word in enumerate(tens): numwords[word] = (1, idx * 10)\n for idx, word in enumerate(scales): numwords[word] = (10 ** (idx * 3 or 2), 0)\n\n ordinal_words = {'first':1, 'second':2, 'third':3, 'fifth':5, 'eighth':8, 'ninth':9, 'twelfth':12}\n ordinal_endings = [('ieth', 'y'), ('th', '')]\n\n textnum = textnum.replace('-', ' ')\n\n current = result = 0\n curstring = \"\"\n onnumber = False\n for word in textnum.split():\n if word in ordinal_words:\n scale, increment = (1, ordinal_words[word])\n current = current * scale + increment\n if scale > 100:\n result += current\n current = 0\n onnumber = True\n else:\n for ending, replacement in ordinal_endings:\n if word.endswith(ending):\n word = \"%s%s\" % (word[:-len(ending)], replacement)\n\n if word not in numwords:\n if onnumber:\n curstring += repr(result + current) + \" \"\n curstring += word + \" \"\n result = current = 0\n onnumber = False\n else:\n scale, increment = numwords[word]\n\n current = current * scale + increment\n if scale > 100:\n result += current\n current = 0\n onnumber = True\n\n if onnumber:\n curstring += repr(result + current)\n\n return curstring\n\nExample:\n >>> text2int(\"I want fifty five hot dogs for two hundred dollars.\")\n I want 55 hot dogs for 200 dollars.\n\nThere could be issues if you have, say, \"$200\". 
But, this was really rough.\n", "I needed to handle a couple extra parsing cases, such as ordinal words (\"first\", \"second\"), hyphenated words (\"one-hundred\"), and hyphenated ordinal words like (\"fifty-seventh\"), so I added a couple lines:\ndef text2int(textnum, numwords={}):\n if not numwords:\n units = [\n \"zero\", \"one\", \"two\", \"three\", \"four\", \"five\", \"six\", \"seven\", \"eight\",\n \"nine\", \"ten\", \"eleven\", \"twelve\", \"thirteen\", \"fourteen\", \"fifteen\",\n \"sixteen\", \"seventeen\", \"eighteen\", \"nineteen\",\n ]\n\n tens = [\"\", \"\", \"twenty\", \"thirty\", \"forty\", \"fifty\", \"sixty\", \"seventy\", \"eighty\", \"ninety\"]\n\n scales = [\"hundred\", \"thousand\", \"million\", \"billion\", \"trillion\"]\n\n numwords[\"and\"] = (1, 0)\n for idx, word in enumerate(units): numwords[word] = (1, idx)\n for idx, word in enumerate(tens): numwords[word] = (1, idx * 10)\n for idx, word in enumerate(scales): numwords[word] = (10 ** (idx * 3 or 2), 0)\n\n ordinal_words = {'first':1, 'second':2, 'third':3, 'fifth':5, 'eighth':8, 'ninth':9, 'twelfth':12}\n ordinal_endings = [('ieth', 'y'), ('th', '')]\n\n textnum = textnum.replace('-', ' ')\n\n current = result = 0\n for word in textnum.split():\n if word in ordinal_words:\n scale, increment = (1, ordinal_words[word])\n else:\n for ending, replacement in ordinal_endings:\n if word.endswith(ending):\n word = \"%s%s\" % (word[:-len(ending)], replacement)\n\n if word not in numwords:\n raise Exception(\"Illegal word: \" + word)\n\n scale, increment = numwords[word]\n \n current = current * scale + increment\n if scale > 100:\n result += current\n current = 0\n\n return result + current`\n\n", "Here's the trivial case approach:\n>>> number = {'one':1,\n... 'two':2,\n... 
'three':3,}\n>>> \n>>> number['two']\n2\n\nOr are you looking for something that can handle \"twelve thousand, one hundred seventy-two\"?\n", "Make use of the Python package: WordToDigits\npip install wordtodigits\n\nIt can find numbers present in word form in a sentence and then convert them to the proper numeric format. Also takes care of the decimal part, if present. The word representation of numbers could be anywhere in the passage.\n", "def parse_int(string):\n ONES = {'zero': 0,\n 'one': 1,\n 'two': 2,\n 'three': 3,\n 'four': 4,\n 'five': 5,\n 'six': 6,\n 'seven': 7,\n 'eight': 8,\n 'nine': 9,\n 'ten': 10,\n 'eleven': 11,\n 'twelve': 12,\n 'thirteen': 13,\n 'fourteen': 14,\n 'fifteen': 15,\n 'sixteen': 16,\n 'seventeen': 17,\n 'eighteen': 18,\n 'nineteen': 19,\n 'twenty': 20,\n 'thirty': 30,\n 'forty': 40,\n 'fifty': 50,\n 'sixty': 60,\n 'seventy': 70,\n 'eighty': 80,\n 'ninety': 90,\n }\n\n numbers = []\n for token in string.replace('-', ' ').split(' '):\n if token in ONES:\n numbers.append(ONES[token])\n elif token == 'hundred':\n numbers[-1] *= 100\n elif token == 'thousand':\n numbers = [x * 1000 for x in numbers]\n elif token == 'million':\n numbers = [x * 1000000 for x in numbers]\n return sum(numbers)\n\nTested with 700 random numbers in range 1 to million works well.\n", "This could be easily be hardcoded into a dictionary if there's a limited amount of numbers you'd like to parse. \nFor slightly more complex cases, you'll probably want to generate this dictionary automatically, based on the relatively simple numbers grammar. Something along the lines of this (of course, generalized...)\nfor i in range(10):\n myDict[30 + i] = \"thirty-\" + singleDigitsDict[i]\n\nIf you need something more extensive, then it looks like you'll need natural language processing tools. This article might be a good starting point.\n", "Made change so that text2int(scale) will return correct conversion. Eg, text2int(\"hundred\") => 100. 
\nimport re\n\nnumwords = {}\n\n\ndef text2int(textnum):\n\n if not numwords:\n\n units = [ \"zero\", \"one\", \"two\", \"three\", \"four\", \"five\", \"six\",\n \"seven\", \"eight\", \"nine\", \"ten\", \"eleven\", \"twelve\",\n \"thirteen\", \"fourteen\", \"fifteen\", \"sixteen\", \"seventeen\",\n \"eighteen\", \"nineteen\"]\n\n tens = [\"\", \"\", \"twenty\", \"thirty\", \"forty\", \"fifty\", \"sixty\", \n \"seventy\", \"eighty\", \"ninety\"]\n\n scales = [\"hundred\", \"thousand\", \"million\", \"billion\", \"trillion\", \n 'quadrillion', 'quintillion', 'sexillion', 'septillion', \n 'octillion', 'nonillion', 'decillion' ]\n\n numwords[\"and\"] = (1, 0)\n for idx, word in enumerate(units): numwords[word] = (1, idx)\n for idx, word in enumerate(tens): numwords[word] = (1, idx * 10)\n for idx, word in enumerate(scales): numwords[word] = (10 ** (idx * 3 or 2), 0)\n\n ordinal_words = {'first':1, 'second':2, 'third':3, 'fifth':5, \n 'eighth':8, 'ninth':9, 'twelfth':12}\n ordinal_endings = [('ieth', 'y'), ('th', '')]\n current = result = 0\n tokens = re.split(r\"[\\s-]+\", textnum)\n for word in tokens:\n if word in ordinal_words:\n scale, increment = (1, ordinal_words[word])\n else:\n for ending, replacement in ordinal_endings:\n if word.endswith(ending):\n word = \"%s%s\" % (word[:-len(ending)], replacement)\n\n if word not in numwords:\n raise Exception(\"Illegal word: \" + word)\n\n scale, increment = numwords[word]\n\n if scale > 1:\n current = max(1, current)\n\n current = current * scale + increment\n if scale > 100:\n result += current\n current = 0\n\n return result + current\n\n", "There's a ruby gem by Marc Burns that does it. I recently forked it to add support for years. 
You can call ruby code from python.\n require 'numbers_in_words'\n require 'numbers_in_words/duck_punch'\n\n nums = [\"fifteen sixteen\", \"eighty five sixteen\", \"nineteen ninety six\",\n \"one hundred and seventy nine\", \"thirteen hundred\", \"nine thousand two hundred and ninety seven\"]\n nums.each {|n| p n; p n.in_numbers}\n\nresults:\n\"fifteen sixteen\"\n1516\n\"eighty five sixteen\"\n8516\n\"nineteen ninety six\"\n1996\n\"one hundred and seventy nine\"\n179\n\"thirteen hundred\"\n1300\n\"nine thousand two hundred and ninety seven\"\n9297\n", "A quick solution is to use the inflect.py to generate a dictionary for translation. \ninflect.py has a number_to_words() function, that will turn a number (e.g. 2) to it's word form (e.g. 'two'). Unfortunately, its reverse (which would allow you to avoid the translation dictionary route) isn't offered. All the same, you can use that function to build the translation dictionary:\n>>> import inflect\n>>> p = inflect.engine()\n>>> word_to_number_mapping = {}\n>>>\n>>> for i in range(1, 100):\n... word_form = p.number_to_words(i) # 1 -> 'one'\n... word_to_number_mapping[word_form] = i\n...\n>>> print word_to_number_mapping['one']\n1\n>>> print word_to_number_mapping['eleven']\n11\n>>> print word_to_number_mapping['forty-three']\n43\n\nIf you're willing to commit some time, it might be possible to examine inflect.py's inner-workings of the number_to_words() function and build your own code to do this dynamically (I haven't tried to do this).\n", "I took @recursive's logic and converted to Ruby. 
I've also hardcoded the lookup table so its not as cool but might help a newbie understand what is going on.\nWORDNUMS = {\"zero\"=> [1,0], \"one\"=> [1,1], \"two\"=> [1,2], \"three\"=> [1,3],\n \"four\"=> [1,4], \"five\"=> [1,5], \"six\"=> [1,6], \"seven\"=> [1,7], \n \"eight\"=> [1,8], \"nine\"=> [1,9], \"ten\"=> [1,10], \n \"eleven\"=> [1,11], \"twelve\"=> [1,12], \"thirteen\"=> [1,13], \n \"fourteen\"=> [1,14], \"fifteen\"=> [1,15], \"sixteen\"=> [1,16], \n \"seventeen\"=> [1,17], \"eighteen\"=> [1,18], \"nineteen\"=> [1,19], \n \"twenty\"=> [1,20], \"thirty\" => [1,30], \"forty\" => [1,40], \n \"fifty\" => [1,50], \"sixty\" => [1,60], \"seventy\" => [1,70], \n \"eighty\" => [1,80], \"ninety\" => [1,90],\n \"hundred\" => [100,0], \"thousand\" => [1000,0], \n \"million\" => [1000000, 0]}\n\ndef text_2_int(string)\n numberWords = string.gsub('-', ' ').split(/ /) - %w{and}\n current = result = 0\n numberWords.each do |word|\n scale, increment = WORDNUMS[word]\n current = current * scale + increment\n if scale > 100\n result += current\n current = 0\n end\n end\n return result + current\nend\n\nI was looking to handle strings like two thousand one hundred and forty-six\n", "This handles number in words of Indian style, some fractions, combination of numbers and words and also addition.\ndef words_to_number(words):\n numbers = {\"zero\":0, \"a\":1, \"half\":0.5, \"quarter\":0.25, \"one\":1,\"two\":2,\n \"three\":3, \"four\":4,\"five\":5,\"six\":6,\"seven\":7,\"eight\":8,\n \"nine\":9, \"ten\":10,\"eleven\":11,\"twelve\":12, \"thirteen\":13,\n \"fourteen\":14, \"fifteen\":15,\"sixteen\":16,\"seventeen\":17,\n \"eighteen\":18,\"nineteen\":19, \"twenty\":20,\"thirty\":30, \"forty\":40,\n \"fifty\":50,\"sixty\":60,\"seventy\":70, \"eighty\":80,\"ninety\":90}\n\n groups = {\"hundred\":100, \"thousand\":1_000, \n \"lac\":1_00_000, \"lakh\":1_00_000, \n \"million\":1_000_000, \"crore\":10**7, \n \"billion\":10**9, \"trillion\":10**12}\n \n split_at = [\"and\", \"plus\"]\n 
\n n = 0\n skip = False\n words_array = words.split(\" \")\n for i, word in enumerate(words_array):\n if not skip:\n if word in groups:\n n*= groups[word]\n elif word in numbers:\n n += numbers[word]\n elif word in split_at:\n skip = True\n remaining = ' '.join(words_array[i+1:])\n n+=words_to_number(remaining)\n else:\n try:\n n += float(word)\n except ValueError as e:\n raise ValueError(f\"Invalid word {word}\") from e\n return n\n\nTEST:\nprint(words_to_number(\"a million and one\"))\n>> 1000001\n\nprint(words_to_number(\"one crore and one\"))\n>> 1000,0001\n\nprint(words_to_number(\"0.5 million one\"))\n>> 500001.0\n\nprint(words_to_number(\"half million and one hundred\"))\n>> 500100.0\n\nprint(words_to_number(\"quarter\"))\n>> 0.25\n\nprint(words_to_number(\"one hundred plus one\"))\n>> 101\n\n", "I find I faster way:\n$\nDa_Unità_a_Cifre = {'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5, 'six': 6, 'seven': 7, 'eight': 8, 'nine': 9, 'ten': 10, 'eleven': 11,\n'twelve': 12, 'thirteen': 13, 'fourteen': 14, 'fifteen': 15, 'sixteen': 16, 'seventeen': 17, 'eighteen': 18, 'nineteen': 19}\nDa_Lettere_a_Decine = {\"tw\": 20, \"th\": 30, \"fo\": 40, \"fi\": 50, \"si\": 60, \"se\": 70, \"ei\": 80, \"ni\": 90, }\nelemento = input(insert the word:)\nVal_Num = 0\ntry:\nelemento.lower()\nelemento.strip()\nUnità = elemento[elemento.find(\"ty\")+2:] # è uguale alla str: five\n\nif elemento[-1] == \"y\":\n Val_Num = int(Da_Lettere_a_Decine[elemento[0] + elemento[1]])\n print(Val_Num)\nelif elemento == \"onehundred\":\n Val_Num = 100\n print(Val_Num)\nelse:\n Cifre_Unità = int(Da_Unità_a_Cifre[Unità])\n Cifre_Decine = int(Da_Lettere_a_Decine[elemento[0] + elemento[1]])\n Val_Num = int(Cifre_Decine + Cifre_Unità)\n print(Val_Num)\nexept:\n print(\"invalid input\")\n\n" ]
[ 138, 37, 17, 16, 12, 7, 4, 4, 3, 1, 1, 0, 0, 0, 0 ]
[ "This code works for a series data:\nimport pandas as pd\nmylist = pd.Series(['one','two','three'])\nmylist1 = []\nfor x in range(len(mylist)):\n mylist1.append(w2n.word_to_num(mylist[x]))\nprint(mylist1)\n\n", "This code works only for numbers below 99. Both word to int and int to word (for rest need to implement 10-20 lines of code and simple logic. This is just simple code for beginners):\nnum = input(\"Enter the number you want to convert : \")\nmydict = {'1': 'One', '2': 'Two', '3': 'Three', '4': 'Four', '5': 'Five','6': 'Six', '7': 'Seven', '8': 'Eight', '9': 'Nine', '10': 'Ten','11': 'Eleven', '12': 'Twelve', '13': 'Thirteen', '14': 'Fourteen', '15': 'Fifteen', '16': 'Sixteen', '17': 'Seventeen', '18': 'Eighteen', '19': 'Nineteen'}\nmydict2 = ['', '', 'Twenty', 'Thirty', 'Fourty', 'fifty', 'sixty', 'Seventy', 'Eighty', 'Ninty']\n\nif num.isdigit():\n if(int(num) < 20):\n print(\" :---> \" + mydict[num])\n else:\n var1 = int(num) % 10\n var2 = int(num) / 10\n print(\" :---> \" + mydict2[int(var2)] + mydict[str(var1)])\nelse:\n num = num.lower()\n dict_w = {'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5, 'six': 6, 'seven': 7, 'eight': 8, 'nine': 9, 'ten': 10, 'eleven': 11, 'twelve': 12, 'thirteen': 13, 'fourteen': 14, 'fifteen': 15, 'sixteen': 16, 'seventeen': '17', 'eighteen': '18', 'nineteen': '19'}\n mydict2 = ['', '', 'twenty', 'thirty', 'fourty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninty']\n divide = num[num.find(\"ty\")+2:]\n if num:\n if(num in dict_w.keys()):\n print(\" :---> \" + str(dict_w[num]))\n elif divide == '' :\n for i in range(0, len(mydict2)-1):\n if mydict2[i] == num:\n print(\" :---> \" + str(i * 10))\n else :\n str3 = 0\n str1 = num[num.find(\"ty\")+2:]\n str2 = num[:-len(str1)]\n for i in range(0, len(mydict2)):\n if mydict2[i] == str2:\n str3 = i\n if str2 not in mydict2:\n print(\"----->Invalid Input<-----\") \n else:\n try:\n print(\" :---> \" + str((str3*10) + dict_w[str1]))\n except:\n print(\"----->Invalid 
Input<-----\")\n else:\n print(\"----->Please Enter Input<-----\")\n\n" ]
[ -1, -3 ]
[ "integer", "numbers", "python", "string", "text" ]
stackoverflow_0000493174_integer_numbers_python_string_text.txt
Q: How to update labels in a python frame class with (after) code I'm attempting a Python frame class with a label that updates every time period. I can't seem to get the configure call to work for me. Thanks from tkinter import * # get base widget set from tkinter.messagebox import askokcancel from datetime import datetime class SensorUpdate(Frame) : def __init__(self) : Frame.__init__(self) self.pack() Label(self, text="Sensors").pack(anchor=NW) self.timeLabel = Label(self, text=datetime.now().strftime("%H:%M:%S")).pack(anchor=NW) Button(self, text="Shut Down",command=(lambda :self.shutDown())).pack(side=RIGHT) self.after(500,self.updateImage) #updates Frame Image def shutDown(self): print("in shutDown") ans = askokcancel('Verify', "Really quit?") if ans: print("made it here") SensorUpdate.quit(self) def updateImage(self): print(" Updating Image") now = datetime.now() current_time = now.strftime("%H:%M:%S") print(current_time) #self().timeLabel.configure(text='Test') # object not callable #self.timeLabel.configure(text='Test') # object has no attribute timeLabel #timeLabel.configure(text='Test') # timeLabel not found self.after(10,self.updateImage) root = Tk() app = SensorUpdate() app.mainloop() root.destroy() A: After spending some quality time with 3000 pages of Lutz texts, I've found a solution using two classes: one for the basics of display and one for the update using the (after) command. Any help on my first method is appreciated. 
I can make this work at least even if I'm not sure why the first one doesn't from tkinter import * # get base widget set from tkinter.messagebox import askokcancel from datetime import datetime class SensorConfig: size = 200 bg, fg = 'beige', 'brown' class IndicatorDisplay(Frame): def __init__(self, parent, cfg): Frame.__init__(self, parent) self.timeLabel = Label(self) self.timeLabel.config(bd=20,relief=SUNKEN,bg=cfg.bg,fg=cfg.fg) self.timeLabel.pack(side=LEFT) Button(self, text="Shut Down",command=(lambda :self.shutDown())).pack(side=RIGHT) def shutDown(self): print("in shutDown") ans = askokcancel('Verify', "Really quit?") if ans: print("made it here") self.quit() def onUpdate(self, curTime): self.timeLabel.config(text=curTime) class Sensor(Frame): def __init__(self, config=SensorConfig, parent=None): Frame.__init__(self, parent) self.pack(expand=YES, fill = BOTH) self.cfg = config self.display = IndicatorDisplay(self, self.cfg) self.display.pack(side=TOP) self.updateLabels() def updateLabels(self): print("make time change") self.display.onUpdate(datetime.now().strftime("%H:%M:%S")) self.after(500, self.updateLabels) if __name__ == '__main__': config = SensorConfig() mySensor = Sensor(config) mySensor.mainloop()
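For what it's worth, the first version most likely failed because tkinter geometry-manager methods such as pack() return None, so self.timeLabel = Label(...).pack(anchor=NW) stores None instead of the Label (hence the "object has no attribute" and "not callable" errors in the commented-out lines). Below is a minimal stand-in sketch of that trap; FakeLabel is an illustrative class playing the role of tkinter.Label, so no display is needed to run it:

```python
class FakeLabel:
    """Stands in for tkinter.Label: pack() returns None, just like the real one."""
    def pack(self, **kwargs):
        return None  # tkinter geometry managers return None

# Chained call: what gets stored is the return value of pack(), i.e. None.
chained = FakeLabel().pack(anchor="nw")
print(chained)  # None -> chained.configure(...) would raise AttributeError

# Split into two statements: the widget reference survives.
label = FakeLabel()
label.pack(anchor="nw")
print(isinstance(label, FakeLabel))  # True -> label.configure(...) works
```

Applying the same split in the original class (self.timeLabel = Label(self, ...) on one line, then self.timeLabel.pack(anchor=NW) on the next) should let updateImage call self.timeLabel.configure(text=current_time) as intended.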
How to update labels in a python frame class with (after) code
I'm attempting a Python frame class with a label that updates every time period. I can't seem to get the configure call to work for me. Thanks from tkinter import * # get base widget set from tkinter.messagebox import askokcancel from datetime import datetime class SensorUpdate(Frame) : def __init__(self) : Frame.__init__(self) self.pack() Label(self, text="Sensors").pack(anchor=NW) self.timeLabel = Label(self, text=datetime.now().strftime("%H:%M:%S")).pack(anchor=NW) Button(self, text="Shut Down",command=(lambda :self.shutDown())).pack(side=RIGHT) self.after(500,self.updateImage) #updates Frame Image def shutDown(self): print("in shutDown") ans = askokcancel('Verify', "Really quit?") if ans: print("made it here") SensorUpdate.quit(self) def updateImage(self): print(" Updating Image") now = datetime.now() current_time = now.strftime("%H:%M:%S") print(current_time) #self().timeLabel.configure(text='Test') # object not callable #self.timeLabel.configure(text='Test') # object has no attribute timeLabel #timeLabel.configure(text='Test') # timeLabel not found self.after(10,self.updateImage) root = Tk() app = SensorUpdate() app.mainloop() root.destroy()
[ "After spending some quality time with 3000 pages of Lutz texts, I've found a solution using two classes one for the basics of display and one for the update using the (after) command. Any help on my first method is apprecieated. I can make this work at least even if I'm not sure why the first one doesn't\nfrom tkinter import * # get base widget set\nfrom tkinter.messagebox import askokcancel\nfrom datetime import datetime\n\nclass SensorConfig:\n size = 200 \n bg, fg = 'beige', 'brown' \n\nclass IndicatorDisplay(Frame):\n def __init__(self, parent, cfg):\n Frame.__init__(self, parent)\n self.timeLabel = Label(self)\n self.timeLabel.config(bd=20,relief=SUNKEN,bg=cfg.bg,fg=cfg.fg)\n self.timeLabel.pack(side=LEFT)\n Button(self, text=\"Shut Down\",command=(lambda :self.shutDown())).pack(side=RIGHT)\n \n def shutDown(self):\n print(\"in shutDown\")\n ans = askokcancel('Verify', \"Really quit?\")\n if ans: \n print(\"made it here\")\n self.quit()\n\n def onUpdate(self, curTime):\n self.timeLabel.config(text=curTime)\n\nclass Sensor(Frame):\n def __init__(self, config=SensorConfig, parent=None):\n Frame.__init__(self, parent)\n self.pack(expand=YES, fill = BOTH)\n self.cfg = config\n self.display = IndicatorDisplay(self, self.cfg) \n self.display.pack(side=TOP)\n self.updateLabels()\n\n def updateLabels(self):\n print(\"make time change\")\n self.display.onUpdate(datetime.now().strftime(\"%H:%M:%S\")) \n self.after(500, self.updateLabels) \n\nif __name__ == '__main__':\n config = SensorConfig()\n mySensor = Sensor(config)\n mySensor.mainloop()\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074490926_python_tkinter.txt
Q: I need help to automatically de-censor a text (lots of text to be processed) I have a web story that has censored words in it with asterisks. Right now I'm doing it with a simple and dumb str.replace, but as you can imagine this is a pain, and I need to search the text to find all instances of the censoring. Here are the bastard instances, which are capitalized, pluralized, and with asterisks in different places: toReplace = toReplace.replace("b*stard", "bastard") toReplace = toReplace.replace("b*stards", "bastards") toReplace = toReplace.replace("B*stard", "Bastard") toReplace = toReplace.replace("B*stards", "Bastards") toReplace = toReplace.replace("b*st*rd", "bastard") toReplace = toReplace.replace("b*st*rds", "bastards") toReplace = toReplace.replace("B*st*rd", "Bastard") toReplace = toReplace.replace("B*st*rds", "Bastards") Is there a way to compare all words containing "*" (or any other replacement character) to an already compiled dict and replace them with the uncensored version of the word? Maybe regex, but I don't think so. A: Using regex alone will likely not result in a full solution for this. You would likely have an easier time if you have a simple list of the words that you want to restore, and use Levenshtein distance to determine which one is closest to a given word that you have found a * in. One library that may help with this is fuzzywuzzy. The two approaches that I can think of quickly: Split the text so that you have 1 string per word. For each word, if '*' in word, then compare it to the list of replacements to find which is closest. Use re.sub to identify the words that contain a * character, and write a function that you would use as the repl argument to determine which replacement it is closest to and return that replacement. Additional resources: Python: find closest string (from a list) to another string Find closest string match from list How to find closest match of a string from a list of different length strings python? 
A: You can use the re module to find matches between the censored word and words in your wordlist. Replace * with . (dot has special meaning in regex, it means "match every character") and then use re.match: import re wordlist = ["bastard", "apple", "orange"] def find_matches(censored_word, wordlist): pat = re.compile(censored_word.replace("*", ".")) return [w for w in wordlist if pat.match(w)] print(find_matches("b*st*rd", wordlist)) Prints: ['bastard'] Note: If you want to match an exact word, add $ at the end of your pattern. That means appl* will not match applejuice in your dictionary, for example.
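Building on the re.sub suggestion from the first answer, here is a hedged end-to-end sketch: it finds every token that contains a * and replaces it only when it matches exactly one wordlist entry. The wordlist and token regex here are illustrative, and each * is assumed to hide exactly one character; real text may need a broader pattern:

```python
import re

wordlist = ["bastard", "bastards", "apple"]

def uncensor(text, wordlist):
    """Replace each *-censored token with the unique wordlist entry it matches."""
    def fix(match):
        word = match.group(0)
        # Treat each '*' as exactly one unknown character; '$' forces a full-word match.
        pattern = re.compile(word.lower().replace("*", ".") + "$")
        hits = [w for w in wordlist if pattern.match(w)]
        if len(hits) != 1:
            return word  # unknown or ambiguous: leave it censored
        fixed = hits[0]
        # Restore the original capitalization.
        return fixed.capitalize() if word[0].isupper() else fixed

    # A token made of word characters with at least one '*' somewhere in it.
    return re.sub(r"\w*(?:\*\w*)+", fix, text)

print(uncensor("That B*st*rd and his b*stards", wordlist))  # That Bastard and his bastards
```

One design note: doing the lookup inside a re.sub callback means the surrounding text (spacing, punctuation) is preserved for free, which the chain of str.replace calls also did, but without enumerating every censored variant by hand.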
I need help to automatically de-censor a text (lots of text to be processed)
I have a web story that has censored words in it with asterisks. Right now I'm doing it with a simple and dumb str.replace, but as you can imagine this is a pain, and I need to search the text to find all instances of the censoring. Here are the bastard instances, which are capitalized, pluralized, and with asterisks in different places: toReplace = toReplace.replace("b*stard", "bastard") toReplace = toReplace.replace("b*stards", "bastards") toReplace = toReplace.replace("B*stard", "Bastard") toReplace = toReplace.replace("B*stards", "Bastards") toReplace = toReplace.replace("b*st*rd", "bastard") toReplace = toReplace.replace("b*st*rds", "bastards") toReplace = toReplace.replace("B*st*rd", "Bastard") toReplace = toReplace.replace("B*st*rds", "Bastards") Is there a way to compare all words containing "*" (or any other replacement character) to an already compiled dict and replace them with the uncensored version of the word? Maybe regex, but I don't think so.
[ "Using regex alone will likely not result in a full solution for this. You would likely have an easier time if you have a simple list of the words that you want to restore, and use Levenshtein distance to determine which one is closest to a given word that you have found a * in.\nOne library that may help with this is fuzzywuzzy.\nThe two approaches that I can think of quickly:\n\nSplit the text so that you have 1 string per word. For each word, if '*' in word, then compare it to the list of replacements to find which is closest.\nUse re.sub to identify the words that contain a * character, and write a function that you would use as the repl argument to determine which replacement it is closest to and return that replacement.\n\nAdditional resources:\n\nPython: find closest string (from a list) to another string\nFind closest string match from list\nHow to find closest match of a string from a list of different length strings python?\n\n", "You can use re module to find matches between the censored word and words in your wordlist.\nReplace * with . (dot has special meaning in regex, it means \"match every character\") and then use re.match:\nimport re\n\nwordlist = [\"bastard\", \"apple\", \"orange\"]\n\n\ndef find_matches(censored_word, wordlist):\n pat = re.compile(censored_word.replace(\"*\", \".\"))\n return [w for w in wordlist if pat.match(w)]\n\n\nprint(find_matches(\"b*st*rd\", wordlist))\n\nPrints:\n['bastard']\n\n\nNote: If you want match exact word, add $ at the end of your pattern. That means appl* will not match applejuice in your dictionary for example.\n" ]
[ 1, 1 ]
[]
[]
[ "dictionary", "python", "replace", "string" ]
stackoverflow_0074502158_dictionary_python_replace_string.txt
Q: How to create a list with column name for each row of a df I have this df: have and I want to make a list with the column name and data for each row that looks like this: [{'userid': '1', 'account_holder': 'Vince', 'broker': '1090', 'account_id': '807521'}, {'userid': '2', 'account_holder': 'Joana', 'broker': '3055', 'account_id': '272167'}, {'userid': '3', 'account_holder': 'Dominique', 'broker': '5143', 'account_id': '37009'}, {'userid': '4', 'account_holder': 'James', 'broker': '5522', 'account_id': '905527'}] Can you help me? I'm new to Python and searched for info but couldn't find anything about it. A: We can use your expected output to create a dataframe >>> import pandas as pd >>> df = pd.DataFrame([{'userid': '1', 'account_holder': 'Vince', 'broker': '1090', 'account_id': '807521'}, {'userid': '2', 'account_holder': 'Joana', 'broker': '3055', 'account_id': '272167'}, {'userid': '3', 'account_holder': 'Dominique', 'broker': '5143', 'account_id': '37009'}, {'userid': '4', 'account_holder': 'James', 'broker': '5522', 'account_id': '905527'}]) >>> df userid account_holder broker account_id 0 1 Vince 1090 807521 1 2 Joana 3055 272167 2 3 Dominique 5143 37009 3 4 James 5522 905527 DataFrame has a to_dict method that can output in multiple ways. help(pd.DataFrame.to_dict) says orient : str {'dict', 'list', 'series', 'split', 'records', 'index'} Determines the type of the values of the dictionary. - 'dict' (default) : dict like {column -> {index -> value}} - 'list' : dict like {column -> [values]} - 'series' : dict like {column -> Series(values)} - 'split' : dict like {'index' -> [index], 'columns' -> [columns], 'data' -> [values]} - 'records' : list like [{column -> value}, ... , {column -> value}] - 'index' : dict like {index -> {column -> value}} You can call to_dict with "records" orientation to get the result you want >>> df.to_dict("records") [{'userid': '1', 'account_holder': 'Vince', 'broker': '1090', 'account_id': '807521'}, {'userid': '2', 'account_holder': 'Joana', 'broker': '3055', 'account_id': '272167'}, {'userid': '3', 'account_holder': 'Dominique', 'broker': '5143', 'account_id': '37009'}, {'userid': '4', 'account_holder': 'James', 'broker': '5522', 'account_id': '905527'}]
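For intuition, the "records" orientation is just one dict per row, pairing column names with that row's values. A plain-Python sketch of what to_dict("records") produces (the data mirrors the example above, trimmed to two rows):

```python
columns = ["userid", "account_holder", "broker", "account_id"]
rows = [
    ("1", "Vince", "1090", "807521"),
    ("2", "Joana", "3055", "272167"),
]

# One dict per row: zip pairs each column name with the row's value.
records = [dict(zip(columns, row)) for row in rows]
print(records[0])  # {'userid': '1', 'account_holder': 'Vince', 'broker': '1090', 'account_id': '807521'}
```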
How to create a list with column name for each row of a df
I have this df: have and I want to make a list with the column name and data for each row that looks like this: [{'userid': '1', 'account_holder': 'Vince', 'broker': '1090', 'account_id': '807521'}, {'userid': '2', 'account_holder': 'Joana', 'broker': '3055', 'account_id': '272167'}, {'userid': '3', 'account_holder': 'Dominique', 'broker': '5143', 'account_id': '37009'}, {'userid': '4', 'account_holder': 'James', 'broker': '5522', 'account_id': '905527'}] Can you help me? I'm new to Python and searched for info but couldn't find anything about it.
[ "We can use your expected output to create a dataframe\n>>> import pandas as pd\n>>> df = pd.DataFrame([{'userid': '1', 'account_holder': 'Vince', 'broker': '1090', 'account_id': '807521'}, {'userid': '2', 'account_holder': 'Joana', 'broker': '3055', 'account_id': '272167'}, {'userid': '3', 'account_holder': 'Dominique', 'broker': '5143', 'account_id': '37009'}, {'userid': '4', 'account_holder': 'James', 'broker': '5522', 'account_id': '905527'}])\n>>> df\n userid account_holder broker account_id\n0 1 Vince 1090 807521\n1 2 Joana 3055 272167\n2 3 Dominique 5143 37009\n3 4 James 5522 905527\n\nDataFrame has a to_dict method that can output in multiple ways. help(pd.DataFrame.to_dict) says\norient : str {'dict', 'list', 'series', 'split', 'records', 'index'}\n Determines the type of the values of the dictionary.\n\n - 'dict' (default) : dict like {column -> {index -> value}}\n - 'list' : dict like {column -> [values]}\n - 'series' : dict like {column -> Series(values)}\n - 'split' : dict like\n {'index' -> [index], 'columns' -> [columns], 'data' -> [values]}\n - 'records' : list like\n [{column -> value}, ... , {column -> value}]\n - 'index' : dict like {index -> {column -> value}}\n\nYou can call to_dict with \"records\" orientation to get the result you want\n>>> df.to_dict(\"records\")\n[{'userid': '1', 'account_holder': 'Vince', 'broker': '1090', 'account_id': '807521'}, {'userid': '2', 'account_holder': 'Joana', 'broker': '3055', 'account_id': '272167'}, {'userid': '3', 'account_holder': 'Dominique', 'broker': '5143', 'account_id': '37009'}, {'userid': '4', 'account_holder': 'James', 'broker': '5522', 'account_id': '905527'}]\n\n" ]
[ 0 ]
[]
[]
[ "for_loop", "list", "pandas", "python" ]
stackoverflow_0074501175_for_loop_list_pandas_python.txt
Q: How to use commands from a txt file and store Python outputs in a txt file Input file looks like this I am trying do to following thing, 1-) Take shell commands from a txt file 2-) Store outputs of those commands in an another txt file. But I am not sure how to use those commands and store them. import os def read_file(file_name): #file_name must be a string current_dir_path = os.getcwd() #getting current directory path reading_file_name = file_name reading_file_path = os.path.join(current_dir_path, reading_file_name) #file path to read # Open file with open(reading_file_path, "r") as f: #"r" for reading data = f.readlines() for i in range(len(data)): data[i] = data[i].replace("\n", "") return data This is my function to read given file and return commands as a list of strings. And, outputs = "?" def write_file(file_name): #file_name must be a string current_dir_path = os.getcwd() writing_file_name = file_name writing_file_path = os.path.join(current_dir_path, writing_file_name) # Open file and add with open(writing_file_path, "w") as f: f.write(outputs) There are several functions that I created. Input file contains lines such that, func1 val1 val2 val3 func3 valx valy valz func2 val ... I couldn't figured out how to use commands I stored in 'data' and put store their outcomes WITHOUT USING LIBRARIES other than python build-in libraries. ` A: You can store the output of a command using subprocess. You can try the following: from subprocess import Popen, PIPE def write_file(command): proc = Popen(command, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE) ret = proc.stdout.readlines() output = [i.decode('utf-8') for i in ret] result = output[0] with open('output.txt', 'a') as file: file.write(result) file.close() #usage example write_file('echo hi') #'hi' will be written in output.txt
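The same idea can be sketched with subprocess.run, which captures output without wiring up Popen pipes by hand; the command list below is a stand-in for the lines read from the input file:

```python
import subprocess

# Stand-in for the commands read from the txt file
commands = ["echo hi", "echo bye"]

outputs = []
for cmd in commands:
    # shell=True runs the line exactly as a terminal would
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    outputs.append(result.stdout.strip())

print(outputs)
```

Writing "\n".join(outputs) to the output file then stores every command's result, much like the answer does with Popen.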
How to use commands from a txt file and store Python outputs in a txt file
Input file looks like this. I am trying to do the following thing: 1-) Take shell commands from a txt file 2-) Store outputs of those commands in another txt file. But I am not sure how to use those commands and store them. import os def read_file(file_name): #file_name must be a string current_dir_path = os.getcwd() #getting current directory path reading_file_name = file_name reading_file_path = os.path.join(current_dir_path, reading_file_name) #file path to read # Open file with open(reading_file_path, "r") as f: #"r" for reading data = f.readlines() for i in range(len(data)): data[i] = data[i].replace("\n", "") return data This is my function to read the given file and return commands as a list of strings. And, outputs = "?" def write_file(file_name): #file_name must be a string current_dir_path = os.getcwd() writing_file_name = file_name writing_file_path = os.path.join(current_dir_path, writing_file_name) # Open file and add with open(writing_file_path, "w") as f: f.write(outputs) There are several functions that I created. The input file contains lines such that, func1 val1 val2 val3 func3 valx valy valz func2 val ... I couldn't figure out how to use the commands I stored in 'data' and store their outcomes WITHOUT USING LIBRARIES other than Python built-in libraries.
[ "You can store the output of a command using subprocess. You can try the following:\nfrom subprocess import Popen, PIPE\n\ndef write_file(command):\n proc = Popen(command, shell=True, stdin=PIPE, stdout=PIPE,\n stderr=PIPE)\n ret = proc.stdout.readlines()\n output = [i.decode('utf-8') for i in ret]\n result = output[0]\n with open('output.txt', 'a') as file:\n file.write(result)\n file.close()\n\n#usage example\nwrite_file('echo hi')\n#'hi' will be written in output.txt\n\n" ]
[ 0 ]
[]
[]
[ "argparse", "command_line_interface", "operating_system", "python", "sys" ]
stackoverflow_0074501785_argparse_command_line_interface_operating_system_python_sys.txt
Q: How to install keyboard in python virtual environment | ImportError: You must be root to use this library on linux I did the following for keyboard interaction; pip install keyboard But when I execute, I get the following error; ImportError: You must be root to use this library on linux. My OS is Linux, and I work in a python virtual environment and use Spyder. In addition to pip, I also tried conda install, but none of them helped. Some posts suggested using sudo, so I tried sudo pip install keyboard in my python environment. But no use!! I also tried making my python script executable using chmod +x, but no success. Thanks for any input. A: You're not supposed to run pip as root - you're supposed to run your program as root!
How to install keyboard in python virtual environment | ImportError: You must be root to use this library on linux
I did the following for keyboard interaction; pip install keyboard But when I execute, I get the following error; ImportError: You must be root to use this library on linux. My OS is Linux, and I work in a python virtual environment and use Spyder. In addition to pip, I also tried conda install, but none of them helped. Some posts suggested using sudo, so I tried sudo pip install keyboard in my python environment. But no use!! I also tried making my python script executable using chmod +x, but no success. Thanks for any input.
[ "You're not supposed to run pip as root - you're supposed to run your program as root!\n" ]
[ 1 ]
[ "did you tryed \"sudo su\"?\nor is the problem that youre trying to intsall it for python3?\n\"sudo pip3 install keyboard\"\n" ]
[ -1 ]
[ "keyboard", "python", "python_3.x" ]
stackoverflow_0074502167_keyboard_python_python_3.x.txt
Q: Fetch large amount of data through an API endpoint Python3.7 I have a dataframe with ~100 000 CUI ids, which I would like to use at an API endpoint to fetch some information. Below is my code: #call UMLS API to get CUI terms umls_cui = open('umls_cui_names.txt', 'w') missed_cui = open('not_found_cui.txt', 'w') def get_cui(CUI): #api key API = "aaaaaaaaaaaaaaaaaaaaaaaa" #set the url url = 'https://uts-ws.nlm.nih.gov/rest/content/current/CUI/' url_cui = url + CUI #set the header headers = {'Content-Type': 'application/json'} #set the parameters params = {'apiKey' : API} #send the request response = requests.get(url_cui, headers = headers, params = params) if response.status_code == 200: name = response.json()['result']['name'] print (CUI, name) umls_cui.write("%s\t%s\n" % (CUI, name)) else: print (response) print (response.json()) print ('CUI not found') missed_cui.write("%s\n" % (CUI)) pass for i in df_cui['CUI']: print (i) get_cui(i) umls_cui.close() missed_cui.close() After getting the data from ~12k CUI ids, I get response 502 error. Can anyone suggest a better way to fetch the complete data through the API. A: You are using the National Library of Medicine's API, and chose to ignore their ToS. Apparently they return 502 status to non-compliant clients. https://documentation.uts.nlm.nih.gov/terms-of-service.html API Terms of Service In order to avoid overloading our servers, NLM requires that users send no more than 20 requests per second per IP address. Requests that exceed this limit may not be serviced, and service will not be restored until the request rate falls beneath the limit. To limit the number of requests that you send to the APIs, NLM recommends caching results for a 24 hour period. This policy is in place to ensure that the service remains available and accessible to all users. Reduce your request rate. Use time() to measure the current rate. Use sleep() to reduce the rate. 
When retrieving ~100 K CUI id search results, use the pageNumber and pageSize parameters to page through voluminous result records, and follow their advice for detecting end-of-list.
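The time()/sleep() throttle suggested above can be sketched as a small helper; the 20 requests/second cap comes from the quoted ToS, and the class name is illustrative:

```python
import time

class RateLimiter:
    """Block just long enough to keep calls under max_per_second."""

    def __init__(self, max_per_second):
        self.min_interval = 1.0 / max_per_second
        self.last = 0.0

    def wait(self):
        elapsed = time.monotonic() - self.last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last = time.monotonic()

limiter = RateLimiter(max_per_second=20)
start = time.monotonic()
for _ in range(5):
    limiter.wait()  # call once before each requests.get(url_cui, ...)
took = time.monotonic() - start
```

Five throttled calls need at least four 50 ms intervals, so the loop above takes roughly 0.2 s; over ~100 000 ids the limiter keeps the script compliant without slowing it more than necessary.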
Fetch large amount of data through an API endpoint Python3.7
I have a dataframe with ~100 000 CUI ids, which I would like to use at an API endpoint to fetch some information. Below is my code: #call UMLS API to get CUI terms umls_cui = open('umls_cui_names.txt', 'w') missed_cui = open('not_found_cui.txt', 'w') def get_cui(CUI): #api key API = "aaaaaaaaaaaaaaaaaaaaaaaa" #set the url url = 'https://uts-ws.nlm.nih.gov/rest/content/current/CUI/' url_cui = url + CUI #set the header headers = {'Content-Type': 'application/json'} #set the parameters params = {'apiKey' : API} #send the request response = requests.get(url_cui, headers = headers, params = params) if response.status_code == 200: name = response.json()['result']['name'] print (CUI, name) umls_cui.write("%s\t%s\n" % (CUI, name)) else: print (response) print (response.json()) print ('CUI not found') missed_cui.write("%s\n" % (CUI)) pass for i in df_cui['CUI']: print (i) get_cui(i) umls_cui.close() missed_cui.close() After getting the data from ~12k CUI ids, I get response 502 error. Can anyone suggest a better way to fetch the complete data through the API.
[ "You are using the National Library of Medicine's API, and chose to ignore their ToS.\nApparently they return 502 status to non-compliant clients.\nhttps://documentation.uts.nlm.nih.gov/terms-of-service.html\n\nAPI Terms of Service\nIn order to avoid overloading our servers, NLM requires that users send no more than 20 requests per second per IP address. Requests that exceed this limit may not be serviced, and service will not be restored until the request rate falls beneath the limit. To limit the number of requests that you send to the APIs, NLM recommends caching results for a 24 hour period. This policy is in place to ensure that the service remains available and accessible to all users.\n\nReduce your request rate.\nUse time() to measure the current rate.\nUse sleep() to reduce the rate.\n\nWhen retrieving ~100 K CUI id search results,\nuse the pageNumber and pageSize parameters\nto page through voluminous result records,\nand follow their advice for detecting end-of-list.\n" ]
[ 0 ]
[]
[]
[ "api", "python" ]
stackoverflow_0074502444_api_python.txt
Q: Invalid stoi argument with torch I've been tackling python and torch specifically lately as a hobby, and, while some API works, I keep getting invalid stoi argument exception with other very basic API torch provides. Reproduced with the code below: import torch torch.cuda.is_available() torch.cuda.current_device() First call (is_available()) works as expected and returns True, but the second throws an exception: Exception has occurred: RuntimeError invalid stoi argument File "C:\DEV\pthon_test\torch_test.py", line 5, in <module> torch.cuda.current_device() Needless to say, more complicated things (for example, running stable_diffusion_webui) fail if used with GPU (said webui works with CPU), and trying to dig deeper into the code brings me to the same exception. OS is Windows 11, python Python 3.10.8, torch version checked with torch.__version__ returns 1.12.1+cu113. And, well, GPU is present List of packages installed: ❯ pip list Package Version ----------------------- --------------- absl-py 1.3.0 addict 2.4.0 antlr4-python3-runtime 4.9.3 basicsr 1.4.2 beautifulsoup4 4.11.1 cachetools 5.2.0 certifi 2022.9.24 charset-normalizer 2.1.1 clip 1.0 colorama 0.4.6 contourpy 1.0.6 cycler 0.11.0 einops 0.4.1 facexlib 0.2.5 ffmpy 0.3.0 filelock 3.8.0 filterpy 1.4.5 font-roboto 0.0.1 fonts 0.0.3 fonttools 4.38.0 ftfy 6.1.1 future 0.18.2 gdown 4.5.3 gfpgan 1.3.5 google-auth 2.14.0 google-auth-oauthlib 0.4.6 grpcio 1.50.0 idna 3.4 imageio 2.22.3 kiwisolver 1.4.4 lark 1.1.2 llvmlite 0.39.1 lmdb 1.3.0 lpips 0.1.4 Markdown 3.4.1 MarkupSafe 2.1.1 matplotlib 3.6.2 networkx 2.8.8 numba 0.56.3 numpy 1.23.3 oauthlib 3.2.2 omegaconf 2.2.3 opencv-python 4.6.0.66 orjson 3.8.1 packaging 21.3 piexif 1.1.3 Pillow 9.2.0 pip 22.2.2 protobuf 3.19.6 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycparser 2.21 pycryptodome 3.15.0 pydantic 1.10.2 pyDeprecate 0.3.2 pydub 0.25.1 pyparsing 3.0.9 pyrsistent 0.19.2 PySocks 1.7.1 python-dateutil 2.8.2 python-multipart 0.0.4 pytz 2022.6 PyWavelets 1.4.1 PyYAML 
6.0 regex 2022.10.31 requests 2.28.1 requests-oauthlib 1.3.1 resize-right 0.0.2 rfc3986 1.5.0 rsa 4.9 scikit-image 0.19.3 scipy 1.9.3 setuptools 63.2.0 six 1.16.0 sniffio 1.3.0 soupsieve 2.3.2.post1 tb-nightly 2.11.0a20221103 tensorboard-data-server 0.6.1 tensorboard-plugin-wit 1.8.1 tifffile 2022.10.10 tokenizers 0.12.1 torch 1.12.1+cu113 torchvision 0.13.1+cu113 tqdm 4.64.1 typing_extensions 4.4.0 uc-micro-py 1.0.1 urllib3 1.26.12 wcwidth 0.2.5 websockets 10.4 Werkzeug 2.2.2 wheel 0.37.1 yapf 0.32.0 zipp 3.10.0 This brings me to two questions: What's causing the issue? I do have a rough understanding of the c++ meaning of the error, but not sure why I'm getting this on python Is there anything I can do to fix the problem? Thanks A: As there is no other answers, and initial problem seem to have been solved, I thought I'd share some information. I've noticed the problem was fixed when... I updated my Nvidia driver. No idea what was the problem, and if there's anything else I unknowingly did, but updating to 526.98 did the trick for me. If there's anyone else to share more details on the original issue, or possible solutions - feel free to edit or give a better answer, I'd mark that as the best option instead. Have a nice day o/
Invalid stoi argument with torch
I've been tackling python and torch specifically lately as a hobby, and, while some API works, I keep getting invalid stoi argument exception with other very basic API torch provides. Reproduced with the code below: import torch torch.cuda.is_available() torch.cuda.current_device() First call (is_available()) works as expected and returns True, but the second throws an exception: Exception has occurred: RuntimeError invalid stoi argument File "C:\DEV\pthon_test\torch_test.py", line 5, in <module> torch.cuda.current_device() Needless to say, more complicated things (for example, running stable_diffusion_webui) fail if used with GPU (said webui works with CPU), and trying to dig deeper into the code brings me to the same exception. OS is Windows 11, python Python 3.10.8, torch version checked with torch.__version__ returns 1.12.1+cu113. And, well, GPU is present List of packages installed: ❯ pip list Package Version ----------------------- --------------- absl-py 1.3.0 addict 2.4.0 antlr4-python3-runtime 4.9.3 basicsr 1.4.2 beautifulsoup4 4.11.1 cachetools 5.2.0 certifi 2022.9.24 charset-normalizer 2.1.1 clip 1.0 colorama 0.4.6 contourpy 1.0.6 cycler 0.11.0 einops 0.4.1 facexlib 0.2.5 ffmpy 0.3.0 filelock 3.8.0 filterpy 1.4.5 font-roboto 0.0.1 fonts 0.0.3 fonttools 4.38.0 ftfy 6.1.1 future 0.18.2 gdown 4.5.3 gfpgan 1.3.5 google-auth 2.14.0 google-auth-oauthlib 0.4.6 grpcio 1.50.0 idna 3.4 imageio 2.22.3 kiwisolver 1.4.4 lark 1.1.2 llvmlite 0.39.1 lmdb 1.3.0 lpips 0.1.4 Markdown 3.4.1 MarkupSafe 2.1.1 matplotlib 3.6.2 networkx 2.8.8 numba 0.56.3 numpy 1.23.3 oauthlib 3.2.2 omegaconf 2.2.3 opencv-python 4.6.0.66 orjson 3.8.1 packaging 21.3 piexif 1.1.3 Pillow 9.2.0 pip 22.2.2 protobuf 3.19.6 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycparser 2.21 pycryptodome 3.15.0 pydantic 1.10.2 pyDeprecate 0.3.2 pydub 0.25.1 pyparsing 3.0.9 pyrsistent 0.19.2 PySocks 1.7.1 python-dateutil 2.8.2 python-multipart 0.0.4 pytz 2022.6 PyWavelets 1.4.1 PyYAML 6.0 regex 2022.10.31 requests 2.28.1 
requests-oauthlib 1.3.1 resize-right 0.0.2 rfc3986 1.5.0 rsa 4.9 scikit-image 0.19.3 scipy 1.9.3 setuptools 63.2.0 six 1.16.0 sniffio 1.3.0 soupsieve 2.3.2.post1 tb-nightly 2.11.0a20221103 tensorboard-data-server 0.6.1 tensorboard-plugin-wit 1.8.1 tifffile 2022.10.10 tokenizers 0.12.1 torch 1.12.1+cu113 torchvision 0.13.1+cu113 tqdm 4.64.1 typing_extensions 4.4.0 uc-micro-py 1.0.1 urllib3 1.26.12 wcwidth 0.2.5 websockets 10.4 Werkzeug 2.2.2 wheel 0.37.1 yapf 0.32.0 zipp 3.10.0 This brings me to two questions: What's causing the issue? I do have a rough understanding of the c++ meaning of the error, but not sure why I'm getting this on python Is there anything I can do to fix the problem? Thanks
[ "As there is no other answers, and initial problem seem to have been solved, I thought I'd share some information.\nI've noticed the problem was fixed when... I updated my Nvidia driver. No idea what was the problem, and if there's anything else I unknowingly did, but updating to 526.98 did the trick for me.\nIf there's anyone else to share more details on the original issue, or possible solutions - feel free to edit or give a better answer, I'd mark that as the best option instead.\nHave a nice day o/\n" ]
[ 0 ]
[]
[]
[ "python", "pytorch" ]
stackoverflow_0074308875_python_pytorch.txt
Q: Run python SCRIPT on multiple browsers at the same time using selenium I would like to run my script on Multiple browser using selenium. As of now I am able to perform the operation by opening one browser at a time. Eg:- Register to amazon. I want to be able to Register two users to amazon at the same time. This is the code I have as of now. import time from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.select import Select driver.get("https://www.amazon.com/ap/register?openid.pape.max_auth_age=0&openid.return_to=https%3A%2F%2Fwww.amazon.com%2F%3Fref_%3Dnav_signin&prevRID=VBHFJ50CPKFJ3PGG7RDY&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.assoc_handle=usflex&openid.mode=checkid_setup&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&prepopulatedLoginId=&failedSignInCount=0&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&pageId=usflex&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0") driver.find_element_by_xpath("""//*[@id="s2id_ID_form4a8055de_guest_register_sponsor_lookup"]/a/span[2]/b""").click() driver.find_element_by_xpath("""//*[@id="s2id_autogen1_search"]""").send_keys(v1) By using this I can run it for one user at one time. But I want to be able to register more than two users upto n users at the same time. Hence, the multiple windows questions. A: You could create multiple instances of the webdriver. You can then manipulate each individually. For example, from selenium import webdriver driver1 = webdriver.Chrome() driver2 = webdriver.Chrome() driver1.get("http://google.com") driver2.get("http://yahoo.com") A: This question is a bit old at this point, but I still found it applicable to something I was having trouble with today. In order to achieve parallel processes you need to utilize multiprocessing. 
Essentially, this allows you to create a browser instance inside each worker function, with each process running under its own interpreter (and its own GIL). You can then start each of the processes in your main code and they will all execute in parallel. If you need an explanation on how to do this, a great video can be found here
Run python SCRIPT on multiple browsers at the same time using selenium
I would like to run my script on Multiple browser using selenium. As of now I am able to perform the operation by opening one browser at a time. Eg:- Register to amazon. I want to be able to Register two users to amazon at the same time. This is the code I have as of now. import time from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.select import Select driver.get("https://www.amazon.com/ap/register?openid.pape.max_auth_age=0&openid.return_to=https%3A%2F%2Fwww.amazon.com%2F%3Fref_%3Dnav_signin&prevRID=VBHFJ50CPKFJ3PGG7RDY&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.assoc_handle=usflex&openid.mode=checkid_setup&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&prepopulatedLoginId=&failedSignInCount=0&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&pageId=usflex&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0") driver.find_element_by_xpath("""//*[@id="s2id_ID_form4a8055de_guest_register_sponsor_lookup"]/a/span[2]/b""").click() driver.find_element_by_xpath("""//*[@id="s2id_autogen1_search"]""").send_keys(v1) By using this I can run it for one user at one time. But I want to be able to register more than two users upto n users at the same time. Hence, the multiple windows questions.
[ "You could create multiple instances of the webdriver. You can then manipulate each individually. For example,\nfrom selenium import webdriver\ndriver1 = webdriver.Chrome()\ndriver2 = webdriver.Chrome()\ndriver1.get(\"http://google.com\")\ndriver2.get(\"http://yahoo.com\")\n\n", "This question is a bit old at this point, but I still found it applicable to something I was having trouble with today.\nIn order to achieve parallel processes you need to utilize multiprocessing. Essentially, this allows you to create browser instances for each function and allow each script to lock to each browser GIL separately. You can then start each of the processes in your main code and they will all execute in parallel.\nIf you need an explanation on how to do this, a great video can be found here\n" ]
[ 2, 0 ]
[]
[]
[ "python", "python_2.7", "selenium", "selenium_chromedriver" ]
stackoverflow_0043626313_python_python_2.7_selenium_selenium_chromedriver.txt
Q: Changing AWS Lambda environment variables when running test I've got a few small Python functions that post to twitter running on AWS. I'm a novice when it comes to Lambda, knowing only enough to get the functions running. The functions have environment variables set in Lambda with various bits of configuration, such as post frequency and the secret data for the twitter application. These are read into the python script directly. It's all triggered by an Event Bridge cron job that runs every hour. I'm wanting to create a test event that will allow me to invoke the function manually, but would like to be able to change the post frequency variable when run like this. Is there a simple way to change environment variables when running a test event? A: That is very much possible and there are multiple ways to do it. One is to use AWS CLI's aws lambda update-function-configuration: https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-configuration.html Alternatively, depending on programming language that you prefer, you can use AWS SDK that also has a similar method, you can find an example with JS SDK in this doc: https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/javascript_lambda_code_examples.html
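One alternative that avoids touching the function configuration at all: let the manual test event carry the override, and fall back to the environment variable for the scheduled runs. The variable and key names below are hypothetical, not taken from the question:

```python
import os

os.environ["POST_FREQUENCY"] = "1"  # simulates the variable set in the Lambda console

def lambda_handler(event, context):
    # A manual test event can include "post_frequency" to override the
    # configured value; the hourly Event Bridge event omits it and the
    # handler falls back to the environment variable.
    frequency = int(event.get("post_frequency", os.environ["POST_FREQUENCY"]))
    return {"post_frequency": frequency}

manual = lambda_handler({"post_frequency": "4"}, None)  # test event with override
scheduled = lambda_handler({}, None)                    # plain cron event
```

This keeps the scheduled behaviour untouched while letting a one-off test event tweak the frequency.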
Changing AWS Lambda environment variables when running test
I've got a few small Python functions that post to twitter running on AWS. I'm a novice when it comes to Lambda, knowing only enough to get the functions running. The functions have environment variables set in Lambda with various bits of configuration, such as post frequency and the secret data for the twitter application. These are read into the python script directly. It's all triggered by an Event Bridge cron job that runs every hour. I'm wanting to create a test event that will allow me to invoke the function manually, but would like to be able to change the post frequency variable when run like this. Is there a simple way to change environment variables when running a test event?
[ "That is very much possible and there are multiple ways to do it. One is to use AWS CLI's aws lambda update-function-configuration: https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-configuration.html\nAlternatively, depending on programming language that you prefer, you can use AWS SDK that also has a similar method, you can find an example with JS SDK in this doc: https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/javascript_lambda_code_examples.html\n" ]
[ 0 ]
[]
[]
[ "amazon_web_services", "aws_lambda", "python" ]
stackoverflow_0074502175_amazon_web_services_aws_lambda_python.txt
Q: How to convert 2D list of string into 2D list of integers in Python I have written the following code to read a csv file into a multidimensional list which is working fine. The problem arise when I created a function to calculate the total of 2D list. This is happening because the numbers are in string inside the 2D list i.e. [['0', '0', '30', '2', '21', '13', '23'], .....,['8', '25', '1', '6', '21', '23', '0']]. What would be the simplest way to convert the string elements into integers in a 2D list such as [[0, 0, 30, 2, 21, 13, 23],.....,[8, 25, 1, 6, 21, 23, 0]] My code so far rows = 52 cols = 7 def populate2D(): with open("rainfall.csv","r") as file: lineArray = file.read().splitlines() matrix = [] for line in lineArray: matrix.append(line.split(",")) return matrix def display(matrix): print(matrix) def yearly(matrix): total = 0 for row in matrix: for value in row: total += value return total matrix = populate2D() display(matrix) total = yearly(matrix) print() print("Total rainfall for the year is " + str(total)) csv file 0,0,30,2,21,13,23 29,3,29,30,7,8,25 26,5,26,13,4,13,4 22,30,13,15,15,0,2 3,12,11,10,17,0,15 8,13,11,24,30,24,27 22,18,2,29,11,13,18 15,1,29,23,18,7,0 23,27,3,7,13,14,28 6,25,24,14,20,23,5 24,29,26,22,0,9,18 22,27,22,20,24,29,21 23,13,14,4,13,1,21 25,21,21,6,28,17,19 4,6,11,10,21,1,5 11,7,22,11,10,24,15 25,11,23,3,23,8,3 22,23,0,29,15,12,5 21,11,18,22,1,4,3 11,10,3,1,30,14,22 2,16,10,2,12,9,9 2,29,17,16,13,18,7 22,15,27,19,6,26,11 21,7,18,4,14,14,2 6,30,12,4,26,22,11 21,16,14,11,28,20,3 19,10,22,18,30,9,27 8,15,17,4,11,16,6 19,17,16,6,18,18,6 2,15,3,25,27,16,11 15,5,26,24,24,30,5 15,11,16,22,14,23,28 25,6,7,20,26,18,16 5,5,21,22,24,16,5 6,27,11,8,24,1,16 28,4,1,4,3,19,24 19,3,27,14,12,24,0 6,3,26,15,15,22,26 18,5,0,14,15,7,26 10,5,12,22,8,7,11 11,1,18,29,6,9,26 3,23,2,21,29,15,25 5,7,1,6,15,18,24 28,11,0,6,28,11,26 4,28,9,24,11,13,2 6,2,14,18,20,21,1 20,29,22,21,11,14,20 28,23,14,17,25,3,18 6,27,6,20,19,5,24 25,3,27,22,7,12,21 
12,22,8,7,0,11,8 8,25,1,6,21,23,0 output $ python rainfall.py [['0', '0', '30', '2', '21', '13', '23'], ['29', '3', '29', '30', '7', '8', '25'], ['26', '5', '26', '13', '4', '13', '4'], ['22', '30', '13', '15', '15', '0', '2'], ['3', '12', '11', '10', '17', '0', '15'], ['8', '13', '11', '24', '30', '24', '27'], ['22', '18', '2', '29', '11', '13', '18'], ['15', '1', '29', '23', '18', '7', '0'], ['23', '27', '3', '7', '13', '14', '28'], ['6', '25', '24', '14', '20', '23', '5'], ['24', '29', '26', '22', '0', '9', '18'], ['22', '27', '22', '20', '24', '29', '21'], ['23', '13', '14', '4', '13', '1', '21'], ['25', '21', '21', '6', '28', '17', '19'], ['4', '6', '11', '10', '21', '1', '5'], ['11', '7', '22', '11', '10', '24', '15'], ['25', '11', '23', '3', '23', '8', '3'], ['22', '23', '0', '29', '15', '12', '5'], ['21', '11', '18', '22', '1', '4', '3'], ['11', '10', '3', '1', '30', '14', '22'], ['2', '16', '10', '2', '12', '9', '9'], ['2', '29', '17', '16', '13', '18', '7'], ['22', '15', '27', '19', '6', '26', '11'], ['21', '7', '18', '4', '14', '14', '2'], ['6', '30', '12', '4', '26', '22', '11'], ['21', '16', '14', '11', '28', '20', '3'], ['19', '10', '22', '18', '30', '9', '27'], ['8', '15', '17', '4', '11', '16', '6'], ['19', '17', '16', '6', '18', '18', '6'], ['2', '15', '3', '25', '27', '16', '11'], ['15', '5', '26', '24', '24', '30', '5'], ['15', '11', '16', '22', '14', '23', '28'], ['25', '6', '7', '20', '26', '18', '16'], ['5', '5', '21', '22', '24', '16', '5'], ['6', '27', '11', '8', '24', '1', '16'], ['28', '4', '1', '4', '3', '19', '24'], ['19', '3', '27', '14', '12', '24', '0'], ['6', '3', '26', '15', '15', '22', '26'], ['18', '5', '0', '14', '15', '7', '26'], ['10', '5', '12', '22', '8', '7', '11'], ['11', '1', '18', '29', '6', '9', '26'], ['3', '23', '2', '21', '29', '15', '25'], ['5', '7', '1', '6', '15', '18', '24'], ['28', '11', '0', '6', '28', '11', '26'], ['4', '28', '9', '24', '11', '13', '2'], ['6', '2', '14', '18', '20', '21', '1'], ['20', '29', 
'22', '21', '11', '14', '20'], ['28', '23', '14', '17', '25', '3', '18'], ['6', '27', '6', '20', '19', '5', '24'], ['25', '3', '27', '22', '7', '12', '21'], ['12', '22', '8', '7', '0', '11', '8'], ['8', '25', '1', '6', '21', '23', '0']] Traceback (most recent call last): File "C:\rainfall.py", line 33, in <module> total = yearly(matrix) File "C:\rainfall.py", line 28, in yearly total += value TypeError: unsupported operand type(s) for +=: 'int' and 'str' A: The TypeError tells that you try to add an str and not an int to an int. You can convert your str to an int by just wrapping int(<YourString>) arrount it. So in your code it would like this: total = 0 for row in matrix: for value in row: total += int(value) # this line return total Also when you read from a file the data is stored in str and not int. A: You have to convert the array of numbers from string to int using map function when reading the numbers from the CSV Replace for line in lineArray: matrix.append(line.split(",")) with for line in lineArray: matrix.append(list(map(int, line.split(","))))
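The map(int, ...) fix above (an equivalent nested comprehension works too) can be checked against a couple of lines from the file:

```python
# Two lines from rainfall.csv, as read_file() would return them
rows = ["0,0,30,2,21,13,23", "29,3,29,30,7,8,25"]

# Convert each field to int while splitting the line
matrix = [list(map(int, line.split(","))) for line in rows]
total = sum(sum(row) for row in matrix)
print(matrix[0], total)
```

With the values converted at read time, the yearly() loop's total += value no longer raises the TypeError shown in the traceback.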
How to convert 2D list of string into 2D list of integers in Python
I have written the following code to read a csv file into a multidimensional list which is working fine. The problem arise when I created a function to calculate the total of 2D list. This is happening because the numbers are in string inside the 2D list i.e. [['0', '0', '30', '2', '21', '13', '23'], .....,['8', '25', '1', '6', '21', '23', '0']]. What would be the simplest way to convert the string elements into integers in a 2D list such as [[0, 0, 30, 2, 21, 13, 23],.....,[8, 25, 1, 6, 21, 23, 0]] My code so far rows = 52 cols = 7 def populate2D(): with open("rainfall.csv","r") as file: lineArray = file.read().splitlines() matrix = [] for line in lineArray: matrix.append(line.split(",")) return matrix def display(matrix): print(matrix) def yearly(matrix): total = 0 for row in matrix: for value in row: total += value return total matrix = populate2D() display(matrix) total = yearly(matrix) print() print("Total rainfall for the year is " + str(total)) csv file 0,0,30,2,21,13,23 29,3,29,30,7,8,25 26,5,26,13,4,13,4 22,30,13,15,15,0,2 3,12,11,10,17,0,15 8,13,11,24,30,24,27 22,18,2,29,11,13,18 15,1,29,23,18,7,0 23,27,3,7,13,14,28 6,25,24,14,20,23,5 24,29,26,22,0,9,18 22,27,22,20,24,29,21 23,13,14,4,13,1,21 25,21,21,6,28,17,19 4,6,11,10,21,1,5 11,7,22,11,10,24,15 25,11,23,3,23,8,3 22,23,0,29,15,12,5 21,11,18,22,1,4,3 11,10,3,1,30,14,22 2,16,10,2,12,9,9 2,29,17,16,13,18,7 22,15,27,19,6,26,11 21,7,18,4,14,14,2 6,30,12,4,26,22,11 21,16,14,11,28,20,3 19,10,22,18,30,9,27 8,15,17,4,11,16,6 19,17,16,6,18,18,6 2,15,3,25,27,16,11 15,5,26,24,24,30,5 15,11,16,22,14,23,28 25,6,7,20,26,18,16 5,5,21,22,24,16,5 6,27,11,8,24,1,16 28,4,1,4,3,19,24 19,3,27,14,12,24,0 6,3,26,15,15,22,26 18,5,0,14,15,7,26 10,5,12,22,8,7,11 11,1,18,29,6,9,26 3,23,2,21,29,15,25 5,7,1,6,15,18,24 28,11,0,6,28,11,26 4,28,9,24,11,13,2 6,2,14,18,20,21,1 20,29,22,21,11,14,20 28,23,14,17,25,3,18 6,27,6,20,19,5,24 25,3,27,22,7,12,21 12,22,8,7,0,11,8 8,25,1,6,21,23,0 output $ python rainfall.py [['0', '0', '30', '2', 
'21', '13', '23'], ['29', '3', '29', '30', '7', '8', '25'], ['26', '5', '26', '13', '4', '13', '4'], ['22', '30', '13', '15', '15', '0', '2'], ['3', '12', '11', '10', '17', '0', '15'], ['8', '13', '11', '24', '30', '24', '27'], ['22', '18', '2', '29', '11', '13', '18'], ['15', '1', '29', '23', '18', '7', '0'], ['23', '27', '3', '7', '13', '14', '28'], ['6', '25', '24', '14', '20', '23', '5'], ['24', '29', '26', '22', '0', '9', '18'], ['22', '27', '22', '20', '24', '29', '21'], ['23', '13', '14', '4', '13', '1', '21'], ['25', '21', '21', '6', '28', '17', '19'], ['4', '6', '11', '10', '21', '1', '5'], ['11', '7', '22', '11', '10', '24', '15'], ['25', '11', '23', '3', '23', '8', '3'], ['22', '23', '0', '29', '15', '12', '5'], ['21', '11', '18', '22', '1', '4', '3'], ['11', '10', '3', '1', '30', '14', '22'], ['2', '16', '10', '2', '12', '9', '9'], ['2', '29', '17', '16', '13', '18', '7'], ['22', '15', '27', '19', '6', '26', '11'], ['21', '7', '18', '4', '14', '14', '2'], ['6', '30', '12', '4', '26', '22', '11'], ['21', '16', '14', '11', '28', '20', '3'], ['19', '10', '22', '18', '30', '9', '27'], ['8', '15', '17', '4', '11', '16', '6'], ['19', '17', '16', '6', '18', '18', '6'], ['2', '15', '3', '25', '27', '16', '11'], ['15', '5', '26', '24', '24', '30', '5'], ['15', '11', '16', '22', '14', '23', '28'], ['25', '6', '7', '20', '26', '18', '16'], ['5', '5', '21', '22', '24', '16', '5'], ['6', '27', '11', '8', '24', '1', '16'], ['28', '4', '1', '4', '3', '19', '24'], ['19', '3', '27', '14', '12', '24', '0'], ['6', '3', '26', '15', '15', '22', '26'], ['18', '5', '0', '14', '15', '7', '26'], ['10', '5', '12', '22', '8', '7', '11'], ['11', '1', '18', '29', '6', '9', '26'], ['3', '23', '2', '21', '29', '15', '25'], ['5', '7', '1', '6', '15', '18', '24'], ['28', '11', '0', '6', '28', '11', '26'], ['4', '28', '9', '24', '11', '13', '2'], ['6', '2', '14', '18', '20', '21', '1'], ['20', '29', '22', '21', '11', '14', '20'], ['28', '23', '14', '17', '25', '3', '18'], ['6', '27', 
'6', '20', '19', '5', '24'], ['25', '3', '27', '22', '7', '12', '21'], ['12', '22', '8', '7', '0', '11', '8'], ['8', '25', '1', '6', '21', '23', '0']] Traceback (most recent call last): File "C:\rainfall.py", line 33, in <module> total = yearly(matrix) File "C:\rainfall.py", line 28, in yearly total += value TypeError: unsupported operand type(s) for +=: 'int' and 'str'
[ "The TypeError tells that you try to add an str and not an int to an int. You can convert your str to an int by just wrapping int(<YourString>) arrount it.\nSo in your code it would like this:\ntotal = 0\nfor row in matrix:\n for value in row:\n total += int(value) # this line\nreturn total\n\nAlso when you read from a file the data is stored in str and not int.\n", "You have to convert the array of numbers from string to int using map function when reading the numbers from the CSV\nReplace\nfor line in lineArray:\n matrix.append(line.split(\",\"))\n\nwith\nfor line in lineArray:\n matrix.append(list(map(int, line.split(\",\"))))\n\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074502551_python.txt
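The fix the answers above describe can be sketched in isolation with a small made-up matrix (the variable names here are illustrative, not from the original script):

```python
# Convert a 2D list of numeric strings to ints, then total it.
matrix = [["0", "0", "30", "2"], ["8", "25", "1", "6"]]

# Either convert after the fact with a nested list comprehension...
int_matrix = [[int(value) for value in row] for row in matrix]

# ...or convert while splitting each CSV line, as the second answer suggests:
lines = ["0,0,30,2", "8,25,1,6"]
int_matrix_2 = [list(map(int, line.split(","))) for line in lines]

# Once the values are ints, the summation loop works as intended.
total = sum(sum(row) for row in int_matrix)
print(total)  # 72
```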
Q: How to make an executable file? Code for the background image of the GUI : bg = PhotoImage(file='images/all_button.png') lbl_bg = Label(root, image=bg) lbl_bg.place(x=0, y=0, relwidth=1, relheight=1) Error Traceback (most recent call last): File "finalbilling.py", line 14, in <module> File "tkinter\__init__.py", line 4061, in __init__ File "tkinter\__init__.py", line 4006, in __init__ _tkinter.TclError: couldn't open "images/all_button.png": no such file or directory [7912] Failed to execute script finalbilling A: You can convert the python script to a standalone executable with all its dependencies included using different python packages. One such package is pyinstaller which can be used to create executable for Windows, Mac OS X, or GNU/Linux. pyinstaller A: There are many ways in which you can convert it into an executable program. But one such easy way is to use pyinstaller. Firstly, download a module named pyinstaller by entering the below code into your console: pip install pyinstaller Then, open the folder in which you have kept your program with console. In windows, you can go to the folder and press shift key and right click to get the option to open with powershell. In Mac, you can head to the folder > Right click on the folder > Click New Terminal at folder. Then in the terminal you can either: pyinstaller (name of your file).py In this one you will get many other files associated with your executable program. Don't try to delete those files else your executable file may get affected. Instead, try the below one to get only one executable file. pyinstaller --onefile (name of file).py You can get the executable program into dist folder of the folder you opened into console. And then you can convert it to zip and send others. If you are getting any type of errors like: No such file or directory [7912] Failed to execute script finalbilling Then there might be an issue with your script or while opening or importing any file an error occurred. 
The same happened to me, and after resolving the issue in the script it worked. So fix the issue in the script first, then try the steps above. A: pip install pyinstaller pyinstaller --onefile main.py OR pyinstaller --onefile -w main.py # (will not work for console app) -w means not to open CMD
How to make an executable file?
Code for the background image of the GUI : bg = PhotoImage(file='images/all_button.png') lbl_bg = Label(root, image=bg) lbl_bg.place(x=0, y=0, relwidth=1, relheight=1) Error Traceback (most recent call last): File "finalbilling.py", line 14, in <module> File "tkinter\__init__.py", line 4061, in __init__ File "tkinter\__init__.py", line 4006, in __init__ _tkinter.TclError: couldn't open "images/all_button.png": no such file or directory [7912] Failed to execute script finalbilling
[ "You can convert the python script to a standalone executable with all its dependencies included using different python packages. One such package is pyinstaller which can be used to create executable for Windows, Mac OS X, or GNU/Linux. pyinstaller\n", "There are many ways in which you can convert it into an executable program.\nBut one such easy way is to use pyinstaller.\nFirstly, download a module named pyinstaller by entering the below code into your console:\npip install pyinstaller\n\nThen, open the folder in which you have kept your program with console.\nIn windows, you can go to the folder and press shift key and right click to get the option to open with powershell.\nIn Mac, you can head to the folder > Right click on the folder > Click New Terminal at folder.\nThen in the terminal you can either:\npyinstaller (name of your file).py\n\nIn this one you will get many other files associated with your executable program.\nDon't try to delete those files else your executable file may get affected.\nInstead, try the below one to get only one executable file.\npyinstaller --onefile (name of file).py\n\nYou can get the executable program into dist folder of the folder you opened into console.\nAnd then you can convert it to zip and send others.\nIf you are getting any type of errors like:\nNo such file or directory [7912] Failed to execute script finalbilling\nThen there might be an issue with your script or while opening or importing any file an error occurred.\nAs the same happened with me and after resolving the issue in script it worked.\nSo, try to resolve the issue in script and then try the above steps it will definitely work.\n", "pip install pyinstaller\n\npyinstaller --onefile main.py\n\nOR\npyinstaller --onefile -w main.py # (will not work for console app)\n\n-w means not to open CMD\n" ]
[ 0, 0, 0 ]
[]
[]
[ "executable", "python", "windows" ]
stackoverflow_0066632129_executable_python_windows.txt
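Beyond packaging, the traceback in the question ("couldn't open images/all_button.png") is typically a resource-path problem: a one-file PyInstaller build unpacks bundled data to a temporary directory, so relative paths that work in development break in the executable. A commonly used helper, sketched here as an assumption rather than taken from the original answers:

```python
import os
import sys

def resource_path(relative_path):
    """Resolve a data file both in development and in a PyInstaller bundle.

    PyInstaller one-file builds unpack bundled data to a temp directory
    exposed as sys._MEIPASS; outside a bundle, fall back to the current dir.
    """
    base = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base, relative_path)

# In the question's Tkinter code this would be used as:
# bg = PhotoImage(file=resource_path("images/all_button.png"))
```

The image folder would also need to be bundled, e.g. pyinstaller --onefile --add-data "images;images" finalbilling.py on Windows (use : instead of ; on macOS/Linux).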
Q: Python requires ipykernel to be installed I encounter an issue when I use the Jupyter Notebook in VS code. The screen shows "Python 3.7.8 requires ipykernel to be installed". I followed the pop-up to install ipykernel. It still does not work. The screenshot is attached. It bothers me a lot. Could anyone help me with it? Tons of thanks. A: The reason is that your current VSCode terminal is in the environment "Deeplearning_Env", so "ipykernel" is installed in the environment "Deeplearning_Env" instead of the environment "base conda" displayed in the pop-up box. Solution: Please use the shortcut key Ctrl+Shift+` to open a new VScode terminal, it will automatically enter the currently selected VSCode environment (VSCode lower left corner), and activate this conda environment: Then, click to install "ipykernel" according to the prompt in the pop-up box. Or, we could also install "ipykernel" manually: (pip install ipykernel) In addition, for the newly created Python environment (without installing "ipykernel"), before opening the Jupyter file, please refresh the VSCode terminal and enter the currently selected environment. For the conda environment, we need to activate it before using it. Check: Check the installation of "ipykernel": More reference: Environment in VSCode. A: I had the same issue and spent the whole day trying to resolve it. What worked for me was installing the Jupyter dependencies for anaconda: > conda install jupyter I installed this in my base environment. After this VSCode worked without any errors. A: Recently I ran into this problem and personally I believe that this problem specifically emerges if you are using a conda environment. Even if you upgrade the ipykernel in the right environment, the problem persists. Install the nb_conda_kernels package in the conda environment you want to use with your Jupyter notebook. 
conda install -n notebook_env nb_conda_kernels Replace notebook_env in the above command with the actual environment name you use. Check out this repository for further reference. A: Just do: pip install ipykernel --upgrade A: The pyzmq package installed in the conda (base) environment caused it. You can solve the problem by uninstalling and reinstalling the 'pyzmq' package under the conda (base) environment. pip uninstall pyzmq pip install pyzmq You can refer to here for more details. A: Maybe you can try typing this command in the terminal and see what happens. python -m ipykernel I got this error after typing that command: ImportError: cannot import name 'AsyncGenerator' The fix is from https://stackoverflow.com/a/65557088/11474510 pip install --upgrade prompt-toolkit==2.0.1 A: Change the JSON schema and point to your environment. If you encounter problems, create a new environment. See also: How to setup virtual environment for Python in VS Code? A: In my case, I had to pip install jupyter, not ipykernel as implied by the error message. A: The problem mentioned is not specific to conda-based virtual environments. My config: Python 3.7.8, VS Code: 1.63.2, OS: Windows 10 64-bit, venv for the virtual environment. When I imported a new .ipynb file in VS Code and tried to run it, it gave the error "Running cells with Python 3.7.8 (env_name: venv) requires the ipykernel package". I hit the pop-up to install and can see the following being installed in the selected virtual environment/kernel I am using with my Jupyter notebook.
xxx/xxx/../python.exe -m pip install -U ipykernel and finally, the installed packages: Installing collected packages: wcwidth, traitlets, parso, tornado, pyzmq, pygments, prompt-toolkit, pickleshare, nest-asyncio, matplotlib-inline, jupyter-core, jedi, entrypoints, decorator, backcall, jupyter-client, ipython, debugpy, argcomplete, ipykernel Successfully installed argcomplete-2.0.0 backcall-0.2.0 debugpy-1.5.1 decorator-5.1.1 entrypoints-0.3 ipykernel-6.6.1 ipython-7.31.0 jedi-0.18.1 jupyter-client-7.1.0 jupyter-core-4.9.1 matplotlib-inline-0.1.3 nest-asyncio-1.5.4 parso-0.8.3 pickleshare-0.7.5 prompt-toolkit-3.0.24 pygments-2.11.2 pyzmq-22.3.0 tornado-6.1 traitlets-5.1.1 wcwidth-0.2.5 You can start with installing ipykernel directly in the selected environment. A: I too faced the same issue, so simply I made the new environment and changed the kernel in vscode. A: try conda install -n base ipykernel --update-deps --force-reinstall A: This is how the problem is solved for me: I ran this: pip install --upgrade --force jupyter-console Then I got an error for botocore conflict (You may get an error for another package). I installed botocore: pip uninstall botocore An then rerun the above code: pip install --upgrade --force jupyter-console If you received a conflict error for other packages, continue removing them and taking the same steps until there is no error. When jupyter-console successfully installs, you won't see the Kernel error again. A: Recently, I ran into the same problem twice after updating VS Code. When I tried to run a cell in a Jupyter notebook, it said I need to install a python extension (even though I had it installed). But I just went to the python extension and switched the version. That's it, it worked for me like that.
Python requires ipykernel to be installed
I encounter an issue when I use the Jupyter Notebook in VS code. The screen shows "Python 3.7.8 requires ipykernel to be installed". I followed the pop-up to install ipykernel. It still does not work. The screenshot is attached. It bothers me a lot. Could anyone help me with it? Tons of thanks.
[ "The reason is that your current VSCode terminal is in the environment \"Deeplearning_Env\", so \"ipykernel\" is installed in the environment \"Deeplearning_Env\" instead of the environment \"base conda\" displayed in the pop-up box.\nSolution: Please use the shortcut key Ctrl+Shift+` to open a new VScode terminal, it will automatically enter the currently selected VSCode environment (VSCode lower left corner), and activate this conda environment:\n\nThen, click to install \"ipykernel\" according to the prompt in the pop-up box.\nOr, we could also install \"ipykernel\" manually: (pip install ipykernel)\nIn addition, for the newly created Python environment (without installing \"ipykernel\"), before opening the Jupyter file, please refresh the VSCode terminal and enter the currently selected environment. For the conda environment, we need to activate it before using it.\nCheck: Check the installation of \"ipykernel\":\n\nMore reference: Environment in VSCode.\n", "I had the same issue and spent the whole day trying to resolve it. What worked for me was installing the Jupyter dependencies for anaconda:\n> conda install jupyter\nI installed this in my base environment. After this VSCode worked without any errors.\n", "Recently I ran into this problem and personally I believe that this problem specifically emerges if you are using a conda environment. Even if you upgrade the ipykernel in the right environment, the problem persists. Install the nb_conda_kernels package in the conda environment you want to use with your Jupyter notebook.\nconda install -n notebook_env nb_conda_kernels\n\nReplace the notebook_env in the above command with the actual environment name you use. Check out this repository for further reference.\n", "Just Do A :\npip install ipykernel --upgrade\n", "The pyzmq package installed in the conda(base) environment caused it. 
You can solve the problem through uninstall and reinstall the 'pyzmq' package under the conda(base) environment.\npip uninstall pyzmq\npip install pyzmq\n\nYou can refer to here for more details.\n", "Maybe you can try type this cmd in the terminal. And let see what happen.\n\npython -m ipykernel\n\nI got sth error after I had typed this cmd.\n\nImportError: cannot import name 'AsyncGenerator'\n\nThe fix is from https://stackoverflow.com/a/65557088/11474510\n\npip install --upgrade prompt-toolkit==2.0.1\n\n", "Change the JSON schema and point to your environment.\nIf you encounter problems, create a new environment.\nSee also: How to setup virtual environment for Python in VS Code?\n", "In my case, I had to pip install jupyter, not ipykernel as implied by the error message.\n", "The problem mentioned is not specific to conda based virtual environments.\nMy config:\nPython 3.7.8,\nVS Code: 1.63.2,\nOS: Windows 10 64 bit,\nvenv for virtual environment\nI am using python venv for virtual environment. 
When i imported a new .ipynb file in VS Code while trying to run it, it gave the error \"Running cells with Python 3.7.8(env_name:venv) require ipykernel package\".\nI hit the pop up to install and can see the following being installed in the selected virtual environment/kernel i am using with my Jupyter notebook.\nxxx/xxx/../python.exe -m pip install -U ipykernel\nand finally, the installed packages:\nInstalling collected packages: wcwidth, traitlets, parso, tornado, pyzmq, pygments, prompt-toolkit, pickleshare, nest-asyncio, matplotlib-inline, jupyter-core, jedi, entrypoints, decorator, backcall, jupyter-client, ipython, debugpy, argcomplete, ipykernel\nSuccessfully installed argcomplete-2.0.0 backcall-0.2.0 debugpy-1.5.1 decorator-5.1.1 entrypoints-0.3 ipykernel-6.6.1 ipython-7.31.0 jedi-0.18.1 jupyter-client-7.1.0 jupyter-core-4.9.1 matplotlib-inline-0.1.3 nest-asyncio-1.5.4 parso-0.8.3 pickleshare-0.7.5 prompt-toolkit-3.0.24 pygments-2.11.2 pyzmq-22.3.0 tornado-6.1 traitlets-5.1.1 wcwidth-0.2.5\n\nYou can start with installing ipykernel directly in the selected environment.\n", "I too faced the same issue, so simply I made the new environment and changed the kernel in vscode.\n", "try conda install -n base ipykernel --update-deps --force-reinstall\n", "This is how the problem is solved for me:\nI ran this:\npip install --upgrade --force jupyter-console\n\nThen I got an error for botocore conflict (You may get an error for another package). I installed botocore:\npip uninstall botocore\n\nAn then rerun the above code:\npip install --upgrade --force jupyter-console\n\nIf you received a conflict error for other packages, continue removing them and taking the same steps until there is no error. When jupyter-console successfully installs, you won't see the Kernel error again.\n", "Recently, I ran into the same problem twice after updating VS Code. 
When I tried to run a cell in a Jupyter notebook, it said I need to install a python extension (even though I had it installed). But I just went to the python extension and switched the version. That's it, it worked for me like that.\n\n\n" ]
[ 18, 12, 11, 4, 4, 2, 1, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "jupyter", "python", "visual_studio_code" ]
stackoverflow_0064997553_jupyter_python_visual_studio_code.txt
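Most of the answers above reduce to one diagnosis: ipykernel was installed into a different environment than the interpreter VS Code selected. A quick sanity check, run in the kernel or terminal in question (a diagnostic sketch, not taken from the original answers):

```python
# Ask the active interpreter where it lives and whether ipykernel is visible.
import importlib.util
import sys

print("interpreter:", sys.executable)   # should point into the expected env
spec = importlib.util.find_spec("ipykernel")
print("ipykernel importable:", spec is not None)
```

If the printed interpreter path is not inside the environment shown in VS Code's status bar, installing ipykernel from that terminal will not fix the pop-up.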
Q: python why does function never reach return statement? I always get the output None instead of False My code: def bi_search(elements: list, x) -> bool: i = len(elements)/2-1 i = int(i) print(i) if i == 0: return False elif x == elements[i]: return True elif x < elements[i]: e = elements[0:i + 1] bi_search(e, x) elif x > elements[i]: e = elements[i+1:len(elements)] bi_search(e, x) commands: my_list = [1, 2, 5, 7, 8, 10, 20, 30, 41, 100] print(bi_search(my_list, 21)) Output: 4 1 0 None I don't get it, it even says that is i = 0 right before the statement, so why do I not get False as a result? A: You don't have return statements in the last 2 elif, you want to return the value of recursive call def bi_search(elements: list, x) -> bool: i = len(elements)/2-1 i = int(i) print(i) if i == 0: return False elif x == elements[i]: return True elif x < elements[i]: e = elements[0:i + 1] return bi_search(e, x) elif x > elements[i]: e = elements[i+1:len(elements)] return bi_search(e, x) A: It's because when you fall into 3rd or 4th case and the bi_search() recursively calls itself you forgot to take into account that the call will eventually return and the flow will continue from there further. And because you miss return for these cases, Python jumps out of the elif and reaches ends of the function. As there's no more code in the function to execute, Python exits that function and returns execution to caller, but as caller expected to get something back (the return value) from the function it called we got a problem. Python solves this problem it goes with use of None to still return something it does not have really and also to indicate that it had nothing better to give back. 
Your code should look this way: def bi_search(elements: list, x) -> bool: i = len(elements)/2-1 i = int(i) print(i) if i == 0: return False elif x == elements[i]: return True elif x < elements[i]: e = elements[0:i + 1] return bi_search(e, x) elif x > elements[i]: e = elements[i+1:len(elements)] return bi_search(e, x) and then the output is as expected: 4 1 0 False
python why does function never reach return statement?
I always get the output None instead of False My code: def bi_search(elements: list, x) -> bool: i = len(elements)/2-1 i = int(i) print(i) if i == 0: return False elif x == elements[i]: return True elif x < elements[i]: e = elements[0:i + 1] bi_search(e, x) elif x > elements[i]: e = elements[i+1:len(elements)] bi_search(e, x) commands: my_list = [1, 2, 5, 7, 8, 10, 20, 30, 41, 100] print(bi_search(my_list, 21)) Output: 4 1 0 None I don't get it, it even says that i = 0 right before the statement, so why do I not get False as a result?
[ "You don't have return statements in the last 2 elif, you want to return the value of recursive call\ndef bi_search(elements: list, x) -> bool:\n i = len(elements)/2-1\n i = int(i)\n print(i)\n if i == 0:\n return False\n elif x == elements[i]:\n return True\n elif x < elements[i]:\n e = elements[0:i + 1]\n return bi_search(e, x)\n elif x > elements[i]:\n e = elements[i+1:len(elements)]\n return bi_search(e, x)\n\n", "It's because when you fall into 3rd or 4th case and the bi_search() recursively calls itself you forgot to take into account that the call will eventually return and the flow will continue from there further. And because you miss return for these cases, Python jumps out of the elif and reaches ends of the function. As there's no more code in the function to execute, Python exits that function and returns execution to caller, but as caller expected to get something back (the return value) from the function it called we got a problem. Python solves this problem it goes with use of None to still return something it does not have really and also to indicate that it had nothing better to give back.\nYour code should look this way:\ndef bi_search(elements: list, x) -> bool:\n i = len(elements)/2-1\n i = int(i)\n print(i)\n if i == 0:\n return False\n elif x == elements[i]:\n return True\n elif x < elements[i]:\n e = elements[0:i + 1]\n return bi_search(e, x)\n elif x > elements[i]:\n e = elements[i+1:len(elements)]\n return bi_search(e, x)\n\nand then the output is as expected:\n4\n1\n0\nFalse\n\n" ]
[ 2, 2 ]
[]
[]
[ "function", "python" ]
stackoverflow_0074502599_function_python.txt
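The behaviour in this question can be reduced to a minimal sketch: a branch that calls itself recursively but discards the result falls off the end of the function, and falling off the end returns None:

```python
def find_no_return(items, x):
    if not items:
        return False
    if items[0] == x:
        return True
    find_no_return(items[1:], x)      # result is discarded ...
    # ... control falls off the end here, so the caller receives None

def find_with_return(items, x):
    if not items:
        return False
    if items[0] == x:
        return True
    return find_with_return(items[1:], x)   # result propagates up the stack

print(find_no_return([1, 2, 5], 5))    # None
print(find_with_return([1, 2, 5], 5))  # True
```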
Q: How to explode multiple columns of a dataframe in pyspark I have a dataframe which consists lists in columns similar to the following. The length of the lists in all columns is not same. Name Age Subjects Grades [Bob] [16] [Maths,Physics,Chemistry] [A,B,C] I want to explode the dataframe in such a way that i get the following output- Name Age Subjects Grades Bob 16 Maths A Bob 16 Physics B Bob 16 Chemistry C How can I achieve this? A: PySpark has added an arrays_zip function in 2.4, which eliminates the need for a Python UDF to zip the arrays. import pyspark.sql.functions as F from pyspark.sql.types import * df = sql.createDataFrame( [(['Bob'], [16], ['Maths','Physics','Chemistry'], ['A','B','C'])], ['Name','Age','Subjects', 'Grades']) df = df.withColumn("new", F.arrays_zip("Subjects", "Grades"))\ .withColumn("new", F.explode("new"))\ .select("Name", "Age", F.col("new.Subjects").alias("Subjects"), F.col("new.Grades").alias("Grades")) df.show() +-----+----+---------+------+ | Name| Age| Subjects|Grades| +-----+----+---------+------+ |[Bob]|[16]| Maths| A| |[Bob]|[16]| Physics| B| |[Bob]|[16]|Chemistry| C| +-----+----+---------+------+ A: This works, import pyspark.sql.functions as F from pyspark.sql.types import * df = sql.createDataFrame( [(['Bob'], [16], ['Maths','Physics','Chemistry'], ['A','B','C'])], ['Name','Age','Subjects', 'Grades']) df.show() +-----+----+--------------------+---------+ | Name| Age| Subjects| Grades| +-----+----+--------------------+---------+ |[Bob]|[16]|[Maths, Physics, ...|[A, B, C]| +-----+----+--------------------+---------+ Use udf with zip. Those columns needed to explode have to be merged before exploding. 
combine = F.udf(lambda x, y: list(zip(x, y)), ArrayType(StructType([StructField("subs", StringType()), StructField("grades", StringType())]))) df = df.withColumn("new", combine("Subjects", "Grades"))\ .withColumn("new", F.explode("new"))\ .select("Name", "Age", F.col("new.subs").alias("Subjects"), F.col("new.grades").alias("Grades")) df.show() +-----+----+---------+------+ | Name| Age| Subjects|Grades| +-----+----+---------+------+ |[Bob]|[16]| Maths| A| |[Bob]|[16]| Physics| B| |[Bob]|[16]|Chemistry| C| +-----+----+---------+------+ A: Arriving late to the party :-) The simplest way to go is by using inline, which doesn't have a Python API but is supported by selectExpr. df.selectExpr('Name[0] as Name','Age[0] as Age','inline(arrays_zip(Subjects,Grades))').show() +----+---+---------+------+ |Name|Age| Subjects|Grades| +----+---+---------+------+ | Bob| 16| Maths| A| | Bob| 16| Physics| B| | Bob| 16|Chemistry| C| +----+---+---------+------+ A: Have you tried this df.select(explode(split(col("Subjects"), ",")).alias("Subjects")).show() You can convert the data frame to an RDD. For an RDD you can use a flatMap function to separate the Subjects. A: Copy/paste function if you need to repeat this quickly and easily across a large number of columns in a dataset cols = ["word", "stem", "pos", "ner"] def explode_cols(data, cols): data = data.withColumn('exp_combo', f.arrays_zip(*cols)) data = data.withColumn('exp_combo', f.explode('exp_combo')) for col in cols: data = data.withColumn(col, f.col('exp_combo.' + col)) return data.drop(f.col('exp_combo')) result = explode_cols(data, cols) You're welcome :) A: When exploding multiple columns, the above solution comes in handy only when the lengths of the arrays are the same; when they are not, it is better to explode them separately and take distinct values each time.
df = sql.createDataFrame( [(['Bob'], [16], ['Maths','Physics','Chemistry'], ['A','B','C'])], ['Name','Age','Subjects', 'Grades']) df = df.withColumn('Subjects',F.explode('Subjects')).select('Name','Age','Subjects', 'Grades').distinct() df = df.withColumn('Grades',F.explode('Grades')).select('Name','Age','Subjects', 'Grades').distinct() df.show() +----+---+---------+------+ |Name|Age| Subjects|Grades| +----+---+---------+------+ | Bob| 16| Maths| A| | Bob| 16| Physics| B| | Bob| 16|Chemistry| C| +----+---+---------+------+
How to explode multiple columns of a dataframe in pyspark
I have a dataframe which contains lists in its columns, similar to the following. The length of the lists is not the same in all columns. Name Age Subjects Grades [Bob] [16] [Maths,Physics,Chemistry] [A,B,C] I want to explode the dataframe in such a way that I get the following output: Name Age Subjects Grades Bob 16 Maths A Bob 16 Physics B Bob 16 Chemistry C How can I achieve this?
[ "PySpark has added an arrays_zip function in 2.4, which eliminates the need for a Python UDF to zip the arrays.\nimport pyspark.sql.functions as F\nfrom pyspark.sql.types import *\n\ndf = sql.createDataFrame(\n [(['Bob'], [16], ['Maths','Physics','Chemistry'], ['A','B','C'])],\n ['Name','Age','Subjects', 'Grades'])\ndf = df.withColumn(\"new\", F.arrays_zip(\"Subjects\", \"Grades\"))\\\n .withColumn(\"new\", F.explode(\"new\"))\\\n .select(\"Name\", \"Age\", F.col(\"new.Subjects\").alias(\"Subjects\"), F.col(\"new.Grades\").alias(\"Grades\"))\ndf.show()\n\n+-----+----+---------+------+\n| Name| Age| Subjects|Grades|\n+-----+----+---------+------+\n|[Bob]|[16]| Maths| A|\n|[Bob]|[16]| Physics| B|\n|[Bob]|[16]|Chemistry| C|\n+-----+----+---------+------+\n\n", "This works,\nimport pyspark.sql.functions as F\nfrom pyspark.sql.types import *\n\ndf = sql.createDataFrame(\n [(['Bob'], [16], ['Maths','Physics','Chemistry'], ['A','B','C'])],\n ['Name','Age','Subjects', 'Grades'])\ndf.show()\n\n+-----+----+--------------------+---------+\n| Name| Age| Subjects| Grades|\n+-----+----+--------------------+---------+\n|[Bob]|[16]|[Maths, Physics, ...|[A, B, C]|\n+-----+----+--------------------+---------+\n\nUse udf with zip. 
Those columns needed to explode have to be merged before exploding.\ncombine = F.udf(lambda x, y: list(zip(x, y)),\n ArrayType(StructType([StructField(\"subs\", StringType()),\n StructField(\"grades\", StringType())])))\n\ndf = df.withColumn(\"new\", combine(\"Subjects\", \"Grades\"))\\\n .withColumn(\"new\", F.explode(\"new\"))\\\n .select(\"Name\", \"Age\", F.col(\"new.subs\").alias(\"Subjects\"), F.col(\"new.grades\").alias(\"Grades\"))\ndf.show()\n\n\n+-----+----+---------+------+\n| Name| Age| Subjects|Grades|\n+-----+----+---------+------+\n|[Bob]|[16]| Maths| A|\n|[Bob]|[16]| Physics| B|\n|[Bob]|[16]|Chemistry| C|\n+-----+----+---------+------+\n\n", "Arriving late to the party :-)\nThe simplest way to go is by using inline that doesn't have python API but is supported by selectExpr.\ndf.selectExpr('Name[0] as Name','Age[0] as Age','inline(arrays_zip(Subjects,Grades))').show()\n\n\n+----+---+---------+------+\n|Name|Age| Subjects|Grades|\n+----+---+---------+------+\n| Bob| 16| Maths| A|\n| Bob| 16| Physics| B|\n| Bob| 16|Chemistry| C|\n+----+---+---------+------+\n\n", "Have you tried this\ndf.select(explode(split(col(\"Subjects\"))).alias(\"Subjects\")).show()\n\nyou can convert the data frame to an RDD.\nFor an RDD you can use a flatMap function to separate the Subjects.\n", "Copy/paste function if you need to repeat this quickly and easily across a large number of columns in a dataset\ncols = [\"word\", \"stem\", \"pos\", \"ner\"]\n\ndef explode_cols(self, data, cols):\n data = data.withColumn('exp_combo', f.arrays_zip(*cols))\n data = data.withColumn('exp_combo', f.explode('exp_combo'))\n for col in cols:\n data = data.withColumn(col, f.col('exp_combo.' 
+ col))\n\n return data.drop(f.col('exp_combo'))\n\nresult = explode_cols(data, cols)\n\nYour welcome :)\n", "When Exploding multiple columns, the above solution comes in handy only when the length of array is same, but if they are not.\nIt is better to explode them separately and take distinct values each time.\ndf = sql.createDataFrame(\n [(['Bob'], [16], ['Maths','Physics','Chemistry'], ['A','B','C'])],\n ['Name','Age','Subjects', 'Grades'])\n\ndf = df.withColumn('Subjects',F.explode('Subjects')).select('Name','Age','Subjects', 'Grades').distinct()\n\ndf = df.withColumn('Grades',F.explode('Grades')).select('Name','Age','Subjects', 'Grades').distinct()\n\ndf.show()\n\n +----+---+---------+------+\n|Name|Age| Subjects|Grades|\n+----+---+---------+------+\n| Bob| 16| Maths| A|\n| Bob| 16| Physics| B|\n| Bob| 16|Chemistry| C|\n+----+---+---------+------+\n\n" ]
[ 53, 17, 5, 1, 0, 0 ]
[]
[]
[ "apache_spark", "apache_spark_sql", "dataframe", "pyspark", "python" ]
stackoverflow_0051082758_apache_spark_apache_spark_sql_dataframe_pyspark_python.txt
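Outside Spark, the arrays_zip-plus-explode combination amounts to "zip the parallel lists positionally and emit one output row per pair". A plain-Python sketch of the same transformation on the question's sample row (no Spark session required; the dict representation is an illustrative assumption):

```python
# One input row with parallel list columns, as in the question.
row = {"Name": ["Bob"], "Age": [16],
       "Subjects": ["Maths", "Physics", "Chemistry"],
       "Grades": ["A", "B", "C"]}

# arrays_zip pairs the lists positionally; explode emits one row per pair.
exploded = [
    {"Name": row["Name"][0], "Age": row["Age"][0],
     "Subjects": subject, "Grades": grade}
    for subject, grade in zip(row["Subjects"], row["Grades"])
]

for r in exploded:
    print(r)
```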
Q: Get the width of the rectangle in the plot created using matplotlib How do I get the size (width and height) of the rectangle of a plot created with matplotlib's pyplot library? Specifically I need the width of the box: Here is a part of the code: import matplotlib.pyplot as plt plt.figure() bar_plot = plt.bar(df.index, df_mean, yerr=df_std*1.96, color=colors); A: You can get the width of the box using ax.get_window_extent().transformed(ax.get_figure().dpi_scale_trans.inverted()).width*ax.get_figure().dpi
Get the width of the rectangle in the plot created using matplotlib
How do I get the size (width and height) of the rectangle of a plot created with matplotlib's pyplot library? Specifically I need the width of the box: Here is a part of the code: import matplotlib.pyplot as plt plt.figure() bar_plot = plt.bar(df.index, df_mean, yerr=df_std*1.96, color=colors);
[ "You can get the width of the box using\nax.get_window_extent().transformed(ax.get_figure().dpi_scale_trans.inverted()).width*ax.get_figure().dpi\n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0062091026_matplotlib_python.txt
Q: ERROR: Could not build wheels for phik, which is required to install pyproject.toml-based projects While running this command on command prompt: PS D:\Mitali> pip install pandas-profiling I am getting this error: ERROR: Could not build wheels for phik, which is required to install pyproject.toml-based projects The entire error looks as: Building wheels for collected packages: phik Building wheel for phik (pyproject.toml) ... error ERROR: Command errored out with exit status 1: command: 'C:\Users\HP\AppData\Local\Programs\Python\Python310\python.exe' 'C:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' build_wheel 'C:\Users\HP\AppData\Local\Temp\tmpqi_0g29r' cwd: C:\Users\HP\AppData\Local\Temp\pip-install-prmn_pyb\phik_c27377b089f2467988f10191570c8033 A: try to do this: pip install phik==0.11.1 pip install pandas-profiling
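Wheel-build failures like this usually mean pip fell back to compiling the package from source — often because no prebuilt wheel exists yet for the interpreter in use (here, likely Python 3.10 at the time). Besides pinning an older phik as the answer suggests, it is often worth refreshing the build toolchain before retrying; these commands are a generic troubleshooting step, not specific to phik:

```shell
# Old pip/setuptools/wheel frequently cause "could not build wheels" failures.
pip install --upgrade pip setuptools wheel

# Then retry the original install.
pip install pandas-profiling
```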
ERROR: Could not build wheels for phik, which is required to install pyproject.toml-based projects
While running this command on command prompt: PS D:\Mitali> pip install pandas-profiling I am getting this error: ERROR: Could not build wheels for phik, which is required to install pyproject.toml-based projects The entire error looks as: Building wheels for collected packages: phik Building wheel for phik (pyproject.toml) ... error ERROR: Command errored out with exit status 1: command: 'C:\Users\HP\AppData\Local\Programs\Python\Python310\python.exe' 'C:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' build_wheel 'C:\Users\HP\AppData\Local\Temp\tmpqi_0g29r' cwd: C:\Users\HP\AppData\Local\Temp\pip-install-prmn_pyb\phik_c27377b089f2467988f10191570c8033
[ "try to do this:\npip install phik==0.11.1 \npip install pandas-profiling\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "pandas_profiling", "python" ]
stackoverflow_0070917594_pandas_pandas_profiling_python.txt
Q: How to read a webpage table using requests-html? I am new to python and am trying to parse a table from the given website into a PANDAS DATAFRAME. I am using modules requests-html, requests, and beautifulSoup. Here is the website, I would like to gather the table from: https://www.aamc.org/data-reports/workforce/interactive-data/active-physicians-largest-specialties-2019 MWE import pandas as pd from urllib.request import Request, urlopen from bs4 import BeautifulSoup url = 'https://www.aamc.org/data-reports/workforce/interactive-data/active-physicians-largest-specialties-2019' req = Request(url, headers={'User-Agent': 'Mozilla/5.0'}) page = urlopen(req).read() soup = BeautifulSoup(page, 'html.parser') # soup.find_all('table') pages = soup.find('div', {'class': 'data-table-wrapper'}) df = pd.read_html(pages) # PROBLEM: somehow this table has no data df.head() Another attempt: import requests_html sess = requests_html.HTMLSession() res = sess.get(url) page = res.html import requests_html sess = requests_html.HTMLSession() res = sess.get(url) page_html = res.html df = pd.read_html(page_html.raw_html) df # This gives dataframe, but has no Values The screenshot is given below: A: The data you see on the page is embedded inside <script> in form of JavaScript. You can use selenium or parse the data manually from the page. 
I'm using js2py module to decode the data: import re import js2py import requests import pandas as pd url = "https://www.aamc.org/data-reports/workforce/interactive-data/active-physicians-largest-specialties-2019" html_doc = requests.get(url).text data = re.search(r"(?s)\$scope\.schools = (.*?);", html_doc).group(1) data = [{k: v.strip() for k, v in d.items()} for d in js2py.eval_js(data)] columns = { "specialty": "Specialty", "one": "Total Active Physicians", "two": "Patient Care", "three": "Teaching", "four": "Research", "five": "Other", } df = pd.DataFrame(data).rename(columns=columns) print(df[list(columns.values())].to_markdown(index=False)) Prints: Specialty Total Active Physicians Patient Care Teaching Research Other All Specialties 938,980 816,922 12,475 12,632 96,951 Allergy and Immunology 4,900 4,221 54 268 357 Anatomic/Clinical Pathology 12,643 8,711 385 520 3,027 Anesthesiology 42,267 39,377 540 180 2,170 Cardiovascular Disease 22,521 20,430 299 573 1,219 Child and Adolescent Psychiatry 9,787 8,670 134 109 874 Critical Care Medicine 13,093 11,146 178 111 1,658 Dermatology 12,516 11,747 100 98 571 Emergency Medicine 45,202 41,466 469 94 3,173 Endocrinology, Diabetes, and Metabolism 7,994 6,439 155 533 867 Family Medicine/General Practice 118,198 108,984 1,614 251 7,349 Gastroenterology 15,469 14,007 186 289 987 General Surgery 25,564 21,949 259 137 3,219 Geriatric Medicine 5,974 5,029 105 106 734 Hematology and Oncology 16,274 13,506 250 871 1,647 Infectious Disease 9,687 7,448 287 701 1,251 Internal Medicine 120,171 105,736 1,409 1,447 11,579 Internal Medicine/Pediatrics 5,509 4,924 74 28 483 Interventional Cardiology 4,407 3,956 22 6 423 Neonatal-Perinatal Medicine 5,919 5,008 135 175 601 Nephrology 11,407 9,964 140 316 987 Neurological Surgery 5,748 5,246 52 32 418 Neurology 14,146 11,896 245 629 1,376 Neuroradiology 4,089 3,496 63 7 523 Obstetrics and Gynecology 42,720 39,825 499 195 2,201 Ophthalmology 19,312 17,859 147 126 1,180 Orthopedic Surgery 
19,069 18,097 120 57 795 Otolaryngology 9,777 9,140 90 23 524 Pain Medicine and Pain Management 5,871 5,459 38 9 365 Pediatric Anesthesiology (Anesthesiology) 2,571 2,127 47 4 393 Pediatric Cardiology 2,966 2,414 74 64 414 Pediatric Critical Care Medicine 2,639 2,118 78 20 423 Pediatric Hematology/Oncology 3,079 2,251 77 210 541 Pediatrics 60,618 54,764 844 663 4,347 Physical Medicine and Rehabilitation 9,767 8,920 69 38 740 Plastic Surgery 7,317 6,938 55 20 304 Preventive Medicine 6,675 4,218 146 457 1,854 Psychiatry 38,792 33,776 562 735 3,719 Pulmonary Disease 5,106 4,490 138 296 182 Radiation Oncology 5,306 4,854 56 33 363 Radiology and Diagnostic Radiology 28,025 24,748 423 153 2,701 Rheumatology 6,265 5,333 108 255 569 Sports Medicine 2,897 2,624 20 4 249 Sports Medicine (Orthopedic Surgery) 2,903 2,737 9 157 Thoracic Surgery 4,479 4,105 45 40 289 Urology 10,201 9,593 76 39 493 Vascular and Interventional Radiology 3,877 3,425 27 3 422 Vascular Surgery 3,943 3,586 48 13 296
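The core of the answer above is the regex that pulls the JavaScript assignment out of the page before evaluating it with js2py. That extraction step can be illustrated on a toy document with only the standard library — the HTML below is an invented stand-in for the real page, and `json.loads` works here only because the toy payload happens to be valid JSON (the real page's JavaScript object needs js2py):

```python
import json
import re

# Invented stand-in for the page source; the real page embeds a much
# larger JavaScript array in the same "$scope.schools = [...];" form.
html_doc = """
<script>
$scope.schools = [{"specialty": "All Specialties", "one": "938,980"}];
</script>
"""

# Same pattern as the answer: capture everything between the assignment
# and the terminating semicolon; (?s) lets . match newlines too.
match = re.search(r"(?s)\$scope\.schools = (.*?);", html_doc)
data = json.loads(match.group(1))
print(data[0]["specialty"])  # -> All Specialties
```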
How to read a webpage table using requests-html?
I am new to python and am trying to parse a table from the given website into a PANDAS DATAFRAME. I am using modules requests-html, requests, and beautifulSoup. Here is the website, I would like to gather the table from: https://www.aamc.org/data-reports/workforce/interactive-data/active-physicians-largest-specialties-2019 MWE import pandas as pd from urllib.request import Request, urlopen from bs4 import BeautifulSoup url = 'https://www.aamc.org/data-reports/workforce/interactive-data/active-physicians-largest-specialties-2019' req = Request(url, headers={'User-Agent': 'Mozilla/5.0'}) page = urlopen(req).read() soup = BeautifulSoup(page, 'html.parser') # soup.find_all('table') pages = soup.find('div', {'class': 'data-table-wrapper'}) df = pd.read_html(pages) # PROBLEM: somehow this table has no data df.head() Another attempt: import requests_html sess = requests_html.HTMLSession() res = sess.get(url) page = res.html import requests_html sess = requests_html.HTMLSession() res = sess.get(url) page_html = res.html df = pd.read_html(page_html.raw_html) df # This gives dataframe, but has no Values The screenshot is given below:
[ "The data you see on the page is embedded inside <script> in form of JavaScript. You can use selenium or parse the data manually from the page. I'm using js2py module to decode the data:\nimport re\nimport js2py\nimport requests\nimport pandas as pd\n\n\nurl = \"https://www.aamc.org/data-reports/workforce/interactive-data/active-physicians-largest-specialties-2019\"\nhtml_doc = requests.get(url).text\n\ndata = re.search(r\"(?s)\\$scope\\.schools = (.*?);\", html_doc).group(1)\ndata = [{k: v.strip() for k, v in d.items()} for d in js2py.eval_js(data)]\n\ncolumns = {\n \"specialty\": \"Specialty\",\n \"one\": \"Total Active Physicians\",\n \"two\": \"Patient Care\",\n \"three\": \"Teaching\",\n \"four\": \"Research\",\n \"five\": \"Other\",\n}\n\ndf = pd.DataFrame(data).rename(columns=columns)\nprint(df[list(columns.values())].to_markdown(index=False))\n\nPrints:\n\n\n\n\nSpecialty\nTotal Active Physicians\nPatient Care\nTeaching\nResearch\nOther\n\n\n\n\nAll Specialties\n938,980\n816,922\n12,475\n12,632\n96,951\n\n\nAllergy and Immunology\n4,900\n4,221\n54\n268\n357\n\n\nAnatomic/Clinical Pathology\n12,643\n8,711\n385\n520\n3,027\n\n\nAnesthesiology\n42,267\n39,377\n540\n180\n2,170\n\n\nCardiovascular Disease\n22,521\n20,430\n299\n573\n1,219\n\n\nChild and Adolescent Psychiatry\n9,787\n8,670\n134\n109\n874\n\n\nCritical Care Medicine\n13,093\n11,146\n178\n111\n1,658\n\n\nDermatology\n12,516\n11,747\n100\n98\n571\n\n\nEmergency Medicine\n45,202\n41,466\n469\n94\n3,173\n\n\nEndocrinology, Diabetes, and Metabolism\n7,994\n6,439\n155\n533\n867\n\n\nFamily Medicine/General Practice\n118,198\n108,984\n1,614\n251\n7,349\n\n\nGastroenterology\n15,469\n14,007\n186\n289\n987\n\n\nGeneral Surgery\n25,564\n21,949\n259\n137\n3,219\n\n\nGeriatric Medicine\n5,974\n5,029\n105\n106\n734\n\n\nHematology and Oncology\n16,274\n13,506\n250\n871\n1,647\n\n\nInfectious Disease\n9,687\n7,448\n287\n701\n1,251\n\n\nInternal Medicine\n120,171\n105,736\n1,409\n1,447\n11,579\n\n\nInternal 
Medicine/Pediatrics\n5,509\n4,924\n74\n28\n483\n\n\nInterventional Cardiology\n4,407\n3,956\n22\n6\n423\n\n\nNeonatal-Perinatal Medicine\n5,919\n5,008\n135\n175\n601\n\n\nNephrology\n11,407\n9,964\n140\n316\n987\n\n\nNeurological Surgery\n5,748\n5,246\n52\n32\n418\n\n\nNeurology\n14,146\n11,896\n245\n629\n1,376\n\n\nNeuroradiology\n4,089\n3,496\n63\n7\n523\n\n\nObstetrics and Gynecology\n42,720\n39,825\n499\n195\n2,201\n\n\nOphthalmology\n19,312\n17,859\n147\n126\n1,180\n\n\nOrthopedic Surgery\n19,069\n18,097\n120\n57\n795\n\n\nOtolaryngology\n9,777\n9,140\n90\n23\n524\n\n\nPain Medicine and Pain Management\n5,871\n5,459\n38\n9\n365\n\n\nPediatric Anesthesiology (Anesthesiology)\n2,571\n2,127\n47\n4\n393\n\n\nPediatric Cardiology\n2,966\n2,414\n74\n64\n414\n\n\nPediatric Critical Care Medicine\n2,639\n2,118\n78\n20\n423\n\n\nPediatric Hematology/Oncology\n3,079\n2,251\n77\n210\n541\n\n\nPediatrics\n60,618\n54,764\n844\n663\n4,347\n\n\nPhysical Medicine and Rehabilitation\n9,767\n8,920\n69\n38\n740\n\n\nPlastic Surgery\n7,317\n6,938\n55\n20\n304\n\n\nPreventive Medicine\n6,675\n4,218\n146\n457\n1,854\n\n\nPsychiatry\n38,792\n33,776\n562\n735\n3,719\n\n\nPulmonary Disease\n5,106\n4,490\n138\n296\n182\n\n\nRadiation Oncology\n5,306\n4,854\n56\n33\n363\n\n\nRadiology and Diagnostic Radiology\n28,025\n24,748\n423\n153\n2,701\n\n\nRheumatology\n6,265\n5,333\n108\n255\n569\n\n\nSports Medicine\n2,897\n2,624\n20\n4\n249\n\n\nSports Medicine (Orthopedic Surgery)\n2,903\n2,737\n9\n\n157\n\n\nThoracic Surgery\n4,479\n4,105\n45\n40\n289\n\n\nUrology\n10,201\n9,593\n76\n39\n493\n\n\nVascular and Interventional Radiology\n3,877\n3,425\n27\n3\n422\n\n\nVascular Surgery\n3,943\n3,586\n48\n13\n296\n\n\n\n" ]
[ 2 ]
[]
[]
[ "beautifulsoup", "pandas", "python", "python_requests_html", "request" ]
stackoverflow_0074502644_beautifulsoup_pandas_python_python_requests_html_request.txt
Q: Replace all instances of a value to another specific value I have this part of the df x y d n 0 -17.7 -0.785430 0.053884 y1 1 -15.0 -3820.085000 0.085000 y4 2 -12.5 2.138833 0.143237 y3 3 -12.4 1.721205 0.251180 y3 I want to replace all instances of y3 for "3rd" and y4 for "4th" in column n Output: x y d n 0 -17.7 -0.785430 0.053884 y1 1 -15.0 -3820.085000 0.085000 4th 2 -12.5 2.138833 0.143237 3rd 3 -12.4 1.721205 0.251180 3rd A: Simple. You can use Python str functions after .str on a column. df['n'] = df['n'].str.replace('y3', '3rd').replace('y4', '4th') OR You can select the specific column and replace like this df.loc[df['n'] == 'y3', 'n'] = '3rd' df.loc[df['n'] == 'y4', 'n'] = '4th' Choice is yours. A: You can use regex and define a dict for replace. dct_rep = {'y3':'3rd' , 'y4':'4th'} df['n'] = df['n'].str.replace(r'(y3|y4)', lambda x: dct_rep.get(x.group(), 'Not define in dct_rep'), regex=True ) print(df) Output: x y d n 0 -17.7 -0.785430 0.053884 y1 1 -15.0 -3820.085000 0.085000 4th 2 -12.5 2.138833 0.143237 3rd 3 -12.4 1.721205 0.251180 3rd
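Both approaches above work; a third option worth knowing is `Series.replace` with a dict, which matches whole cell values and needs no regex. A minimal sketch — only the `n` column from the question is reconstructed here:

```python
import pandas as pd

# Minimal reconstruction of the question's "n" column.
df = pd.DataFrame({"n": ["y1", "y4", "y3", "y3"]})

# Series.replace with a dict swaps exact whole values in one pass.
df["n"] = df["n"].replace({"y3": "3rd", "y4": "4th"})
print(df["n"].tolist())  # -> ['y1', '4th', '3rd', '3rd']
```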
Replace all instances of a value to another specific value
I have this part of the df x y d n 0 -17.7 -0.785430 0.053884 y1 1 -15.0 -3820.085000 0.085000 y4 2 -12.5 2.138833 0.143237 y3 3 -12.4 1.721205 0.251180 y3 I want to replace all instances of y3 for "3rd" and y4 for "4th" in column n Output: x y d n 0 -17.7 -0.785430 0.053884 y1 1 -15.0 -3820.085000 0.085000 4th 2 -12.5 2.138833 0.143237 3rd 3 -12.4 1.721205 0.251180 3rd
[ "Simple. You can use Python str functions after .str on a column.\ndf['n'] = df['n'].str.replace('y3', '3rd').replace('y4', '4th')\n\nOR\nYou can select the specific column and replace like this\ndf.loc[df['n'] == 'y3', 'n'] = '3rd'\ndf.loc[df['n'] == 'y4', 'n'] = '4th'\n\nChoice is yours.\n", "You can use regex and define a dict for replace.\ndct_rep = {'y3':'3rd' , 'y4':'4th'}\n\n\ndf['n'] = df['n'].str.replace(r'(y3|y4)', \n lambda x: dct_rep.get(x.group(), 'Not define in dct_rep'), \n regex=True\n )\n\nprint(df)\n\nOutput:\n x y d n\n0 -17.7 -0.785430 0.053884 y1\n1 -15.0 -3820.085000 0.085000 4th\n2 -12.5 2.138833 0.143237 3rd\n3 -12.4 1.721205 0.251180 3rd\n\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074502635_dataframe_pandas_python.txt
Q: Setting limits of the colorbar in Python I made a contour plot and a colorbar to show the range of the values I plot. The limits of the colorbar are (-0.4, 0.4) and I would like to convert them to (0, 100) with a step 20, so 0, 20, 40, 60, 80 and 100. I tried to do that with: plt.clim(0, 100) plt.colorbar(label="unit name", orientation="vertical") but instead the colorbar's limits remain the same and my plot changes color, it becomes purple. Could you please help me with this? A: If you translate your array of [-0.4, 0.4] values to the range [0, 100], you will be able to plot what you need. This is what I did in the example below : from random import randint import matplotlib.pyplot as plt import numpy as np from mpl_toolkits.axes_grid1.axes_divider import make_axes_locatable size = 50 A = np.array([randint(-4, 4)/10 for _ in range(size*size)]) A = (A*10 + 4)*12.5 # Translate values from [-0.4, 0.4] to [0, 100]. B = A.reshape((size, size)) ax = plt.subplot() im = ax.imshow(B) divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="5%", pad=0.05) im.set_clim(0, 100) plt.colorbar(im, cax=cax) plt.show() Output:
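The translation the answer applies — `(A*10 + 4)*12.5` — is just a linear rescale from one interval to another. Written generally, as pure arithmetic with no matplotlib needed:

```python
def rescale(x, old_min=-0.4, old_max=0.4, new_min=0.0, new_max=100.0):
    """Linearly map x from [old_min, old_max] to [new_min, new_max]."""
    return new_min + (x - old_min) * (new_max - new_min) / (old_max - old_min)

# The endpoints map onto the new limits, and the midpoint lands at 50.
print(rescale(-0.4), rescale(0.0), rescale(0.4))
```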
Setting limits of the colorbar in Python
I made a contour plot and a colorbar to show the range of the values I plot. The limits of the colorbar are (-0.4, 0.4) and I would like to convert them to (0, 100) with a step 20, so 0, 20, 40, 60, 80 and 100. I tried to do that with: plt.clim(0, 100) plt.colorbar(label="unit name", orientation="vertical") but instead the colorbar's limits remain the same and my plot changes color, it becomes purple. Could you please help me with this?
[ "If you translate your array of [-0.4, 0.4] values to the range [0, 100], you will be able to plot what you need.\nThis is what I did in the example below :\nfrom random import randint\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom mpl_toolkits.axes_grid1.axes_divider import make_axes_locatable\n\nsize = 50\nA = np.array([randint(-4, 4)/10 for _ in range(size*size)])\nA = (A*10 + 4)*12.5 # Translate values from [-0.4, 0.4] to [0, 100].\nB = A.reshape((size, size))\n\nax = plt.subplot()\n\nim = ax.imshow(B)\ndivider = make_axes_locatable(ax)\ncax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\nim.set_clim(0, 100)\nplt.colorbar(im, cax=cax)\nplt.show()\n\nOutput:\n\n" ]
[ 1 ]
[]
[]
[ "colorbar", "limit", "matplotlib", "python" ]
stackoverflow_0074500693_colorbar_limit_matplotlib_python.txt
Q: Formatted print to the console in python I have a method that returns a list of lists. def get_ranking_matrix(self) -> list: return self.ranking_matrix When I call print(a.get_ranking_matrix()), I get the classic output of a two-dimensional array: [[2, 1, 4, 3, 6, 5], [3, 1, 4, 6, 5, 2], [4, 1, 2, 6, 3, 5], [2, 1, 3, 4, 5, 6], [2, 1, 4, 5, 6, 3], [2, 1, 4, 6, 5, 3]] And if I call print(a.get_ranking_matrix), then <bound method Ranking.get_ranking_matrix of <__main__.Ranking object at 0x000002431BB8F880>> Can you tell me how to make a nice print, like in numpy. When you just write print(some_dataframe) and get formatted table in the console: A1 A2 A3 A4 A5 A6 A1 1 0 1 1 1 1 A2 1 1 1 1 1 1 A3 0 0 1 1 1 1 A4 0 0 0 1 1 0 A5 0 0 0 1 1 0 A6 0 0 1 1 1 1 How does this implement in practice? I want to call this method inside print(a.get_ranking_matrix) and have the following in the console: 2 1 4 3 6 5 3 1 4 6 5 2 4 1 2 6 3 5 2 1 3 4 5 6 2 1 4 5 6 3 2 1 4 6 5 3 A: You can achieve the same output format manually like so: def get_ranking_mat(the_list): my_str = '' for i in the_list: for elem in i: my_str+=str(elem)+ ' ' my_str+='\n' return my_str print(get_ranking_mat(my_list)) you need to implement it within your class. On another note, if your class __str__ method is not used for other functionality I think that it is better to use it for printing the result.
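A more idiomatic way to build the same output is with `str.join`, which also makes it easy to hook into `__str__` as the answer suggests. The class name and attribute below mirror the question, but the wiring is a sketch, not the asker's full class:

```python
class Ranking:
    def __init__(self, ranking_matrix):
        self.ranking_matrix = ranking_matrix

    def __str__(self):
        # One space-joined line per row, rows joined by newlines.
        return "\n".join(" ".join(str(v) for v in row)
                         for row in self.ranking_matrix)

r = Ranking([[2, 1, 4, 3, 6, 5], [3, 1, 4, 6, 5, 2]])
print(r)
# 2 1 4 3 6 5
# 3 1 4 6 5 2
```

With `__str__` defined, `print(r)` produces the table directly; no separate formatting helper is needed.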
Formatted print to the console in python
I have a method that returns a list of lists. def get_ranking_matrix(self) -> list: return self.ranking_matrix When I call print(a.get_ranking_matrix()), I get the classic output of a two-dimensional array: [[2, 1, 4, 3, 6, 5], [3, 1, 4, 6, 5, 2], [4, 1, 2, 6, 3, 5], [2, 1, 3, 4, 5, 6], [2, 1, 4, 5, 6, 3], [2, 1, 4, 6, 5, 3]] And if I call print(a.get_ranking_matrix), then <bound method Ranking.get_ranking_matrix of <__main__.Ranking object at 0x000002431BB8F880>> Can you tell me how to make a nice print, like in numpy. When you just write print(some_dataframe) and get formatted table in the console: A1 A2 A3 A4 A5 A6 A1 1 0 1 1 1 1 A2 1 1 1 1 1 1 A3 0 0 1 1 1 1 A4 0 0 0 1 1 0 A5 0 0 0 1 1 0 A6 0 0 1 1 1 1 How does this implement in practice? I want to call this method inside print(a.get_ranking_matrix) and have the following in the console: 2 1 4 3 6 5 3 1 4 6 5 2 4 1 2 6 3 5 2 1 3 4 5 6 2 1 4 5 6 3 2 1 4 6 5 3
[ "You can achieve the same output format manually like so:\ndef get_ranking_mat(the_list):\n my_str = ''\n for i in the_list:\n for elem in i:\n my_str+=str(elem)+ ' '\n my_str+='\\n'\n return my_str \n\nprint(get_ranking_mat(my_list))\n\nyou need to implement it within your class.\nOn another note, if your class __str__ method is not used for other functionality I think that it is better to use it for printing the result.\n" ]
[ 0 ]
[]
[]
[ "console_application", "methods", "object", "python", "python_3.x" ]
stackoverflow_0074502455_console_application_methods_object_python_python_3.x.txt
Q: I'm pulling data from amazon with python bs4 but it's not pulling data from some links image here image here I scan the links and draw the price and title of the products, but sometimes on some pages it does not attract any product, I guess it does not list the link, how do I fix it? I gave you 2 pictures, sometimes they do, sometimes they don't. What is the reason for this? ` import requests from bs4 import BeautifulSoup pricelist = [] titlelist = [] productlist = [] countpage = 1 sk = "/s?k=HyperX+Cloud+II+Gaming+Kulakl%C4%B1k&page=1" while True: url = f"https://www.amazon.com.tr{sk}" countpage+=1 headers = { "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 OPR/91.0.4516.106" } request = requests.get(url,headers=headers) soup = BeautifulSoup(request.content,"html.parser") result = soup.findAll("div", {"class":"sg-col-4-of-24 sg-col-4-of-12 s-result-item s-asin sg-col-4-of-16 sg-col s-widget-spacing-small sg-col-4-of-20"}) itemcounter = 0 for item in result: try: itemprice = item.find("span", {"class":"a-offscreen"}).text.strip() itemtitle = item.find("span", {"class":"a-size-base-plus a-color-base a-text-normal"}).text.strip() f = open("read.txt","a+",encoding="utf-8") f.write(f"{itemprice} / {itemtitle} \n") itemcounter+=1 except: pass print(itemcounter) after = soup.find("a", {"class":"s-pagination-item s-pagination-next s-pagination-button s-pagination-separator"}, href=True) try: sk = after["href"] except TypeError: break print(sk) ` Is this due to amazon? A: result = soup.findAll("div", {"class":"sg-col-4-of-24 sg-col-4-of-12 s-result-item s-asin sg-col-4-of-16 sg-col s-widget-spacing-small sg-col-4-of-20"}) Because Amazon changed the names of classes on some pages not working, It works as I have given below. 
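Matching on Amazon's full auto-generated class strings is fragile, which is exactly what the answer below ran into. With BeautifulSoup, passing a multi-word string in a `{"class": ...}` filter matches the attribute value as one exact string, whereas a CSS selector matches any element carrying a single stable class. A toy illustration — the markup here is invented, not Amazon's:

```python
from bs4 import BeautifulSoup

# Invented markup: two "result" divs whose full class strings differ,
# but which share the stable s-result-item class.
html = """
<div class="sg-col-4-of-24 s-result-item s-asin">item A</div>
<div class="s-card-container s-result-item puis">item B</div>
"""

soup = BeautifulSoup(html, "html.parser")

# A filter like {"class": "sg-col-4-of-24 s-result-item s-asin"} only
# matches that exact attribute string; the selector below matches any
# div with the class, surviving renames of the other classes.
items = soup.select("div.s-result-item")
print([div.get_text() for div in items])  # -> ['item A', 'item B']
```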
result = soup.findAll("div", {"class":"s-card-container s-overflow-hidden aok-relative puis-expand-height puis-include-content-margin puis s-latency-cf-section s-card-border"})
I'm pulling data from amazon with python bs4 but it's not pulling data from some links
image here image here I scan the links and draw the price and title of the products, but sometimes on some pages it does not attract any product, I guess it does not list the link, how do I fix it? I gave you 2 pictures, sometimes they do, sometimes they don't. What is the reason for this? ` import requests from bs4 import BeautifulSoup pricelist = [] titlelist = [] productlist = [] countpage = 1 sk = "/s?k=HyperX+Cloud+II+Gaming+Kulakl%C4%B1k&page=1" while True: url = f"https://www.amazon.com.tr{sk}" countpage+=1 headers = { "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 OPR/91.0.4516.106" } request = requests.get(url,headers=headers) soup = BeautifulSoup(request.content,"html.parser") result = soup.findAll("div", {"class":"sg-col-4-of-24 sg-col-4-of-12 s-result-item s-asin sg-col-4-of-16 sg-col s-widget-spacing-small sg-col-4-of-20"}) itemcounter = 0 for item in result: try: itemprice = item.find("span", {"class":"a-offscreen"}).text.strip() itemtitle = item.find("span", {"class":"a-size-base-plus a-color-base a-text-normal"}).text.strip() f = open("read.txt","a+",encoding="utf-8") f.write(f"{itemprice} / {itemtitle} \n") itemcounter+=1 except: pass print(itemcounter) after = soup.find("a", {"class":"s-pagination-item s-pagination-next s-pagination-button s-pagination-separator"}, href=True) try: sk = after["href"] except TypeError: break print(sk) ` Is this due to amazon?
[ "result = soup.findAll(\"div\", {\"class\":\"sg-col-4-of-24 sg-col-4-of-12 s-result-item s-asin sg-col-4-of-16 sg-col s-widget-spacing-small sg-col-4-of-20\"})\n\nBecause Amazon changed the names of classes on some pages\nnot working,\nIt works as I have given below.\nresult = soup.findAll(\"div\", {\"class\":\"s-card-container s-overflow-hidden aok-relative puis-expand-height puis-include-content-margin puis s-latency-cf-section s-card-border\"})\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "python", "request" ]
stackoverflow_0074502638_beautifulsoup_python_request.txt
Q: How to set default value for column in Google bigquery table using API from Python script The class that creates a column and describes it doesn't have "default_value_expression" in the constructor. path: google/cloud/bigquery/schema.py class SchemaField: def __init__( name: str, field_type: str, mode: Any = "NULLABLE", description: Any = None, fields: Any = (), policy_tags: Any = None) That key is displayed on the google bigquery docs. But when I try to set the value for any column using this key the error message "Unknow key(s): default_value_expression..." is displayed. I tried to set a default value from the google cloud console. A: Ensure you're using the latest version of the library. It was just released in version 3.4.0 a few days ago.
How to set default value for column in Google bigquery table using API from Python script
The class that creates a column and describes it doesn't have "default_value_expression" in the constructor. path: google/cloud/bigquery/schema.py class SchemaField: def __init__( name: str, field_type: str, mode: Any = "NULLABLE", description: Any = None, fields: Any = (), policy_tags: Any = None) That key is displayed on the google bigquery docs. But when I try to set the value for any column using this key the error message "Unknow key(s): default_value_expression..." is displayed. I tried to set a default value from the google cloud console.
[ "Ensure you're using the latest version of the library. It was just released in version 3.4.0 a few days ago.\n" ]
[ 1 ]
[]
[]
[ "google_bigquery", "python" ]
stackoverflow_0074500662_google_bigquery_python.txt
Q: Efficiently selecting rows that end with zeros in numpy I have a tensor / array of shape N x M, where M is less than 10 but N can potentially be > 2000. All entries are larger than or equal to zero. I want to filter out rows that either Do not contain any zeros End with zeros only, i.e. [1,2,0,0] would be valid but not [1,0,2,0] or [0,0,1,2]. Put differently, once a zero appears all following entries of that row must also be zero, otherwise the row should be ignored. as efficiently as possible. Consider the following example Example: [[35, 25, 17], # no zeros -> valid [12, 0, 0], # ends with zeros -> valid [36, 2, 0], # ends with zeros -> valid [8, 0, 9]] # contains zeros and does not end with zeros -> invalid should yield [True, True, True, False]. The straightforward implementation I came up with is: import numpy as np T = np.array([[35,25,17], [12,0,0], [36,2,0], [0,0,9]]) N,M = T.shape valid = [i*[True,] + (M-i)*[False,] for i in range(1, M+1)] mask = [((row > 0).tolist() in valid) for row in T] Is there a more elegant and efficient solution to this? Any help is greatly appreciated! A: Here's one way: x[np.all((x == 0) == (x.cumprod(axis=1) == 0), axis=1)] This calculates the row-wise cumulative product, matches the original array's zeros up with the cumprod array, then filters any rows where there's one or more False. Workings: In [3]: x Out[3]: array([[35, 25, 17], [12, 0, 0], [36, 2, 0], [ 8, 0, 9]]) In [4]: x == 0 Out[4]: array([[False, False, False], [False, True, True], [False, False, True], [False, True, False]]) In [5]: x.cumprod(axis=1) == 0 Out[5]: array([[False, False, False], [False, True, True], [False, False, True], [False, True, True]]) In [6]: (x == 0) == (x.cumprod(axis=1) == 0) Out[6]: array([[ True, True, True], [ True, True, True], [ True, True, True], [ True, True, False]]) # bad row! In [7]: np.all((x == 0) == (x.cumprod(axis=1) == 0), axis=1) Out[7]: array([ True, True, True, False])
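The cumprod trick in the answer can also be phrased as: a row is valid exactly when its "is positive" boolean pattern is non-increasing — some run of Trues followed by some run of Falses. A sketch of that equivalent formulation on the question's example data:

```python
import numpy as np

T = np.array([[35, 25, 17],
              [12,  0,  0],
              [36,  2,  0],
              [ 8,  0,  9]])

# (T > 0) is True for nonzero entries; a valid row looks like
# [True, ..., True, False, ..., False], i.e. it never steps back up
# from False to True, so its integer diffs are all <= 0.
b = (T > 0).astype(int)
mask = np.all(np.diff(b, axis=1) <= 0, axis=1)
print(mask)  # -> [ True  True  True False]
```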
Efficiently selecting rows that end with zeros in numpy
I have a tensor / array of shape N x M, where M is less than 10 but N can potentially be > 2000. All entries are larger than or equal to zero. I want to filter out rows that either Do not contain any zeros End with zeros only, i.e [1,2,0,0] would be valid but not [1,0,2,0] or [0,0,1,2]. Put differently once a zero appears all following entries of that row must also be zero, otherwise the row should be ignored. as efficiently as possible. Consider the following example Example: [[35, 25, 17], # no zeros -> valid [12, 0, 0], # ends with zeros -> valid [36, 2, 0], # ends with zeros -> valid [8, 0, 9]] # contains zeros and does not end with zeros -> invalid should yield [True, True, True, False]. The straightforward implementation I came up with is: import numpy as np T = np.array([[35,25,17], [12,0,0], [36,2,0], [0,0,9]]) N,M = T.shape valid = [i*[True,] + (M-i)*[False,] for i in range(1, M+1)] mask = [((row > 0).tolist() in valid) for row in T] Is there a more elegant and efficient solution to this? Any help is greatly appreciated!
[ "Here's one way:\nx[np.all((x == 0) == (x.cumprod(axis=1) == 0), axis=1)]\n\nThis calculates the row-wise cumulative product, matches the original array's zeros up with the cumprod array, then filters any rows where there's one or more False.\nWorkings:\nIn [3]: x\nOut[3]:\narray([[35, 25, 17],\n [12, 0, 0],\n [36, 2, 0],\n [ 8, 0, 9]])\n\nIn [4]: x == 0\nOut[4]:\narray([[False, False, False],\n [False, True, True],\n [False, False, True],\n [False, True, False]])\n\nIn [5]: x.cumprod(axis=1) == 0\nOut[5]:\narray([[False, False, False],\n [False, True, True],\n [False, False, True],\n [False, True, True]])\n\nIn [6]: (x == 0) == (x.cumprod(axis=1) == 0)\nOut[6]:\narray([[ True, True, True],\n [ True, True, True],\n [ True, True, True],\n [ True, True, False]]) # bad row!\n\nIn [7]: np.all((x == 0) == (x.cumprod(axis=1) == 0), axis=1)\nOut[7]: array([ True, True, True, False])\n\n" ]
[ 4 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074502782_numpy_python.txt
Q: Python Query - Arrays Create two arrays using numpy. One called students with as values. ['Janet', 'Adriana', 'Manual', 'Mohamed', 'Leann'] Another is called grades as values: [[93, 85], [78, 80], [94, 93], [75, 90], [92, 87]] Select all rows from grades where student is either 'Adriana' or 'Mohamed' How do i go about this problem? A: You can use numpy.isin. import numpy as np students = ['Janet', 'Adriana', 'Manual', 'Mohamed', 'Leann'] grades = [[93, 85], [78, 80], [94, 93], [75, 90], [92, 87]] arr_s = np.asarray(students) arr_g = np.asarray(grades) mask = np.isin(arr_s, ['Adriana', 'Mohamed']) res = arr_g[mask] print(res) Output: array([[78, 80], [75, 90]])
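`numpy.isin` generalizes nicely to longer name lists; for just two names, an explicit OR of element-wise comparisons yields the same boolean mask. Same toy data as the answer:

```python
import numpy as np

students = np.array(['Janet', 'Adriana', 'Manual', 'Mohamed', 'Leann'])
grades = np.array([[93, 85], [78, 80], [94, 93], [75, 90], [92, 87]])

# Each equality test produces a boolean array; | combines them into
# one row mask, which then selects the matching rows of grades.
mask = (students == 'Adriana') | (students == 'Mohamed')
print(grades[mask])  # -> [[78 80] [75 90]]
```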
Python Query - Arrays
Create two arrays using numpy. One called students with as values. ['Janet', 'Adriana', 'Manual', 'Mohamed', 'Leann'] Another is called grades as values: [[93, 85], [78, 80], [94, 93], [75, 90], [92, 87]] Select all rows from grades where student is either 'Adriana' or 'Mohamed' How do i go about this problem?
[ "You can use numpy.isin.\nimport numpy as np\nstudents = ['Janet', 'Adriana', 'Manual', 'Mohamed', 'Leann']\ngrades = [[93, 85], [78, 80], [94, 93], [75, 90], [92, 87]]\narr_s = np.asarray(students)\narr_g = np.asarray(grades)\nmask = np.isin(arr_s, ['Adriana', 'Mohamed'])\nres = arr_g[mask]\nprint(res)\n\nOutput:\narray([[78, 80],\n [75, 90]])\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "list", "numpy", "python" ]
stackoverflow_0074502830_arrays_list_numpy_python.txt
Q: Shell file not found inside docker compose/dockerfile containers I have multiple Python scripts from which I want to run a docker container. From a related question How to run multiple Python scripts and an executable files using Docker? , I found that the best way to do that is to have run.sh a shell file as follows: #!/bin/bash python3 producer.py & python3 consumer.py & python3 test_conn.py and then call this file from a Dockerfile as: FROM python:3.9 RUN mkdir -p /usr/src/app WORKDIR /usr/src/app COPY requirements.txt /usr/src/app RUN pip install --no-cache-dir -r requirements.txt COPY . /usr/src/app CMD ["./run.sh"] However, in the container logs the following error is prompting exec ./run.sh: no such file or directory, which makes no sense to me since I copied everything on the current directory, run.sh included, to /usr/src/app on my container via COPY . /usr/src/app Please, clone my repo and on the root directory call docker-compose up -d and check myapp container logs to help me. https://github.com/Quilograma/IES_Project Thank you! Can't run multiple python scripts in a single container. A: You should explicitly specify which shell interpreter should be used for running your script. Changing the last line to CMD ["bash", "-c", "./run.sh"] might solve your issue. A: You need to chmod run.sh to be executable: FROM python:3.9 RUN mkdir -p /usr/src/app WORKDIR /usr/src/app COPY requirements.txt /usr/src/app RUN pip install --no-cache-dir -r requirements.txt COPY . /usr/src/app RUN chmod +x run.sh CMD ["./run.sh"]
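One more cause worth checking, as a later comment on this question found: if run.sh was edited on Windows, CRLF line endings leave a `\r` at the end of the shebang line, and Docker reports the same `exec ./run.sh: no such file or directory`. A Dockerfile sketch that guards against both the permission and the line-ending problem (the `sed` call is a generic normalization step, not something from the question's repo):

```dockerfile
FROM python:3.9

WORKDIR /usr/src/app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Strip Windows carriage returns and make the entrypoint executable;
# either issue produces "no such file or directory" at run time.
RUN sed -i -e 's/\r$//' run.sh && chmod +x run.sh

CMD ["./run.sh"]
```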
Shell file not found inside docker compose/dockerfile containers
I have multiple Python scripts from which I want to run a docker container. From a related question How to run multiple Python scripts and an executable files using Docker? , I found that the best way to do that is to have run.sh a shell file as follows: #!/bin/bash python3 producer.py & python3 consumer.py & python3 test_conn.py and then call this file from a Dockerfile as: FROM python:3.9 RUN mkdir -p /usr/src/app WORKDIR /usr/src/app COPY requirements.txt /usr/src/app RUN pip install --no-cache-dir -r requirements.txt COPY . /usr/src/app CMD ["./run.sh"] However, in the container logs the following error is prompting exec ./run.sh: no such file or directory, which makes no sense to me since I copied everything on the current directory, run.sh included, to /usr/src/app on my container via COPY . /usr/src/app Please, clone my repo and on the root directory call docker-compose up -d and check myapp container logs to help me. https://github.com/Quilograma/IES_Project Thank you! Can't run multiple python scripts in a single container.
[ "You should explicitly specify what shell interpreter be used for running your script.\nChanging the last line to CMD [\"bash\", \"-c\", \"./run.sh\"] might solve your issue.\n", "You need to chmod run.sh to be excecuteable:\nFROM python:3.9\n\nRUN mkdir -p /usr/src/app\n\nWORKDIR /usr/src/app\n\nCOPY requirements.txt /usr/src/app\n\nRUN pip install --no-cache-dir -r requirements.txt\n\nCOPY . /usr/src/app\n\nRUN chmod +x run.sh\n\nCMD [\"./run.sh\"]\n\n" ]
[ 0, 0 ]
[ "If you need to run three separate long-running processes, do not try to orchestrate them from a shell script. Instead, launch three separate containers. If you're running this via Compose, this is straightforward: have three containers all running the same image, but override the command: to run different main processes.\nversion: '3.8'\nservices:\n producer:\n build: .\n command: ./producer.py\n consumer:\n build: .\n command: ./consumer.py\n test_conn:\n build: .\n command: ./test_conn.py\n\nMake sure the scripts are executable (run chmod +x producer.py on your host system, and commit that to source control) and begin with a \"shebang\" line #!/usr/bin/env python3 as the very first line.\n", "For those encountering the same problem adding RUN sed -i -e 's/\\r$//' run.sh on the Dokcerfile before CMD [\"bash\", \"-c\", \"./run.sh\"] is what worked for me. See Bash script – \"/bin/bash^M: bad interpreter: No such file or directory\" for further details.\n" ]
[ -1, -2 ]
[ "docker", "docker_compose", "dockerfile", "python" ]
stackoverflow_0074493795_docker_docker_compose_dockerfile_python.txt
Q: UnicodeEncodeError: 'charmap' codec can't encode characters I'm trying to scrape a website, but it gives me an error. I'm using the following code: import urllib.request from bs4 import BeautifulSoup get = urllib.request.urlopen("https://www.website.com/") html = get.read() soup = BeautifulSoup(html) print(soup) And I'm getting the following error: File "C:\Python34\lib\encodings\cp1252.py", line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_table)[0] UnicodeEncodeError: 'charmap' codec can't encode characters in position 70924-70950: character maps to <undefined> What can I do to fix this? A: I was getting the same UnicodeEncodeError when saving scraped web content to a file. To fix it I replaced this code: with open(fname, "w") as f: f.write(html) with this: with open(fname, "w", encoding="utf-8") as f: f.write(html) If you need to support Python 2, then use this: import io with io.open(fname, "w", encoding="utf-8") as f: f.write(html) If you want to use a different encoding than UTF-8, specify whatever your actual encoding is for encoding. A: I fixed it by adding .encode("utf-8") to soup. That means that print(soup) becomes print(soup.encode("utf-8")). A: In Python 3.7, and running Windows 10 this worked (I am not sure whether it will work on other platforms and/or other versions of Python) Replacing this line: with open('filename', 'w') as f: With this: with open('filename', 'w', encoding='utf-8') as f: The reason why it is working is because the encoding is changed to UTF-8 when using the file, so characters in UTF-8 are able to be converted to text, instead of returning an error when it encounters a UTF-8 character that is not suppord by the current encoding. A: set PYTHONIOENCODING=utf-8 set PYTHONLEGACYWINDOWSSTDIO=utf-8 You may or may not need to set that second environment variable PYTHONLEGACYWINDOWSSTDIO. 
Alternatively, this can be done in code (although it seems that doing it through env vars is recommended): sys.stdin.reconfigure(encoding='utf-8') sys.stdout.reconfigure(encoding='utf-8') Additionally: Reproducing this error was a bit of a pain, so leaving this here too in case you need to reproduce it on your machine: set PYTHONIOENCODING=windows-1252 set PYTHONLEGACYWINDOWSSTDIO=windows-1252 A: While saving the response of get request, same error was thrown on Python 3.7 on window 10. The response received from the URL, encoding was UTF-8 so it is always recommended to check the encoding so same can be passed to avoid such trivial issue as it really kills lots of time in production import requests resp = requests.get('https://en.wikipedia.org/wiki/NIFTY_50') print(resp.encoding) with open ('NiftyList.txt', 'w') as f: f.write(resp.text) When I added encoding="utf-8" with the open command it saved the file with the correct response with open ('NiftyList.txt', 'w', encoding="utf-8") as f: f.write(resp.text) A: Even I faced the same issue with the encoding that occurs when you try to print it, read/write it or open it. As others mentioned above adding .encoding="utf-8" will help if you are trying to print it. soup.encode("utf-8") If you are trying to open scraped data and maybe write it into a file, then open the file with (......,encoding="utf-8") with open(filename_csv , 'w', newline='',encoding="utf-8") as csv_file: A: For those still getting this error, adding encode("utf-8") to soup will also fix this. soup = BeautifulSoup(html_doc, 'html.parser').encode("utf-8") print(soup) A: There are multiple aspects to this problem. The fundamental question is which character set you want to output into. You may also have to figure out the input character set. Printing (with either print or write) into a file with an explicit encoding="..." will translate Python's internal Unicode representation into that encoding. 
If the output contains characters which are not supported by that encoding, you will get an UnicodeEncodeError. For example, you can't write Russian or Chinese or Indic or Hebrew or Arabic or emoji or ... anything except a restricted set of some 200+ Western characters to a file whose encoding is "cp1252" because this limited 8-bit character set has no way to represent these characters. Basically the same problem will occur with any 8-bit character set, including nearly all the legacy Windows code pages (437, 850, 1250, 1251, etc etc), though some of them support some additional script in addition to or instead of English (1251 supports Cyrillic, for example, so you can write Russian, Ukrainian, Serbian, Bulgarian, etc). An 8-bit encoding has only a maximum of 256 character codes and no way to represent a character which isn't among them. Perhaps now would be a good time to read Joel Spolsky's The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) On platforms where the terminal is not capable of printing Unicode (only Windows these days really, though if you're into retrocomputing, this problem was also prevalent on other platforms in the previous millennium) attempting to print Unicode strings can also produce this error, or output mojibake. If you see something like Héllö instead of Héllö, this is your issue. In short, then, you need to know: What is the character set of the page you scraped, or the data you received? Was it correctly scraped? Did the originator correctly identify its encoding, or are you able to otherwise obtain this information (or guess it)? Some web sites incorrectly declare a different character set than the page actually contains, some sites have incorrectly configured the connection between the web server and a back-end database. See e.g. scrape with correct character encoding (python requests + beautifulsoup) for a more detailed example with some solutions. 
What is the character set you want to write? If printing to the screen, is your terminal correctly configured, and is your Python interpreter configured identically? Perhaps see also How to display utf-8 in windows console If you are here, probably the answer to one of these questions is not "UTF-8". This is increasingly becoming the prevalent encoding for web pages, too, though the former standard was ISO-8859-1 (aka Latin-1) and more recently Windows code page 1252. Going forward, you basically want all your textual data to be Unicode, outside of a few fringe use cases. Generally, that means UTF-8, though on Windows (or if you need Java compatibility), UTF-16 is also vaguely viable, albeit somewhat cumbersome. (There are several other Unicode serialization formats, which may be useful in specialized circumstances. UTF-32 is technically trivial, but takes up a lot more memory; UTF-7 is used in a few network protocols where 7-bit ASCII is required for transport.) Perhaps see also https://utf8everywhere.org/ Naturally, if you are printing to a file, you also need to examine that file using a tool which can correctly display it. A common pilot error is to open the file using a tool which only displays the currently selected system encoding, or one which tries to guess the encoding, but guesses wrong. Again, a common symptom when viewing UTF-8 text using Windows code page 1252 would result, for example, in Héllö displaying as Héllö. If the encoding of character data is unknown, there is no simple way to automatically establish it. If you know what the text is supposed to represent, you can perhaps infer it, but this is typically a manual process with some guesswork involved. (Automatic tools like chardet and ftfy can help, but they get it wrong some of the time, too.) To establish which encoding you are looking at, it can be helpful if you can identify the individual bytes in a character which isn't displayed correctly. 
For example, if you are looking at H\x8ell\x9a but expect it to represent Héllö, you can look up the bytes in a translation table. I have published one such table at https://tripleee.github.io/8bit where you can see that in this example, it's probably one of the legacy Mac 8-bit character sets; with more data points, perhaps you can narrow it down to just one of them (and if not, any one of them will do in practice, since all the code points you care about map to the same Unicode characters). Python 3 on most platforms defaults to UTF-8 for all input and output, but on Windows, this is commonly not the case. It will then instead default to the system's default encoding (still misleadingly called "ANSI code page" in some Microsoft documentation), which depends on a number of factors. On Western systems, the default encoding out of the box is commonly Windows code page 1252. (Earlier Python versions had somewhat different expectations, and in Python 2, the internal string representation was not Unicode.) If you are on Windows and write UTF-8 to a text file, maybe specify encoding="utf-8-sig" which adds a BOM sequence at the beginning of the file. This is strictly speaking not necessary or correct, but some Windows tools need it to correctly identify the encoding. Several of the earlier answers here suggest blindly applying some encoding, but hopefully this should help you understand how that's not generally the correct approach, and how to figure out - rather than guess - which encoding to use. A: From Python 3.7 onwards, Set the the environment variable PYTHONUTF8 to 1 The following script included other useful variables too which set System Environment Variables. setx /m PYTHONUTF8 1 setx PATHEXT "%PATHEXT%;.PY" ; In CMD, Python file can be executed without extesnion. setx /m PY_PYTHON 3.10 ; To set default python version for py Source
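The encoding behaviour described throughout this thread can be demonstrated directly. A minimal sketch (the Greek sample string is illustrative; any character outside cp1252's 256-character repertoire triggers the same error):

```python
# Greek letters fall outside cp1252's repertoire, so encoding them
# reproduces the UnicodeEncodeError from the traceback; UTF-8 does not.
text = "Ελληνικά café"

try:
    text.encode("cp1252")
    raise AssertionError("expected UnicodeEncodeError")
except UnicodeEncodeError:
    pass

# UTF-8 round-trips any Unicode string.
assert text.encode("utf-8").decode("utf-8") == text

# If the target codec truly cannot change, errors="replace" degrades
# unmappable characters to "?" instead of raising.
assert text.encode("cp1252", errors="replace").decode("cp1252") == "???????? café"
```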
UnicodeEncodeError: 'charmap' codec can't encode characters
I'm trying to scrape a website, but it gives me an error. I'm using the following code: import urllib.request from bs4 import BeautifulSoup get = urllib.request.urlopen("https://www.website.com/") html = get.read() soup = BeautifulSoup(html) print(soup) And I'm getting the following error: File "C:\Python34\lib\encodings\cp1252.py", line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_table)[0] UnicodeEncodeError: 'charmap' codec can't encode characters in position 70924-70950: character maps to <undefined> What can I do to fix this?
[ "I was getting the same UnicodeEncodeError when saving scraped web content to a file. To fix it I replaced this code:\nwith open(fname, \"w\") as f:\n f.write(html)\n\nwith this:\nwith open(fname, \"w\", encoding=\"utf-8\") as f:\n f.write(html)\n\nIf you need to support Python 2, then use this:\nimport io\nwith io.open(fname, \"w\", encoding=\"utf-8\") as f:\n f.write(html)\n\nIf you want to use a different encoding than UTF-8, specify whatever your actual encoding is for encoding.\n", "I fixed it by adding .encode(\"utf-8\") to soup.\nThat means that print(soup) becomes print(soup.encode(\"utf-8\")).\n", "In Python 3.7, and running Windows 10 this worked (I am not sure whether it will work on other platforms and/or other versions of Python)\nReplacing this line:\nwith open('filename', 'w') as f:\nWith this:\nwith open('filename', 'w', encoding='utf-8') as f:\nThe reason why it is working is because the encoding is changed to UTF-8 when using the file, so characters in UTF-8 are able to be converted to text, instead of returning an error when it encounters a UTF-8 character that is not suppord by the current encoding.\n", "set PYTHONIOENCODING=utf-8\nset PYTHONLEGACYWINDOWSSTDIO=utf-8\n\nYou may or may not need to set that second environment variable PYTHONLEGACYWINDOWSSTDIO.\nAlternatively, this can be done in code (although it seems that doing it through env vars is recommended):\nsys.stdin.reconfigure(encoding='utf-8')\nsys.stdout.reconfigure(encoding='utf-8')\n\n\nAdditionally: Reproducing this error was a bit of a pain, so leaving this here too in case you need to reproduce it on your machine:\nset PYTHONIOENCODING=windows-1252\nset PYTHONLEGACYWINDOWSSTDIO=windows-1252\n\n", "While saving the response of get request, same error was thrown on Python 3.7 on window 10. 
The response received from the URL, encoding was UTF-8 so it is always recommended to check the encoding so same can be passed to avoid such trivial issue as it really kills lots of time in production\nimport requests\nresp = requests.get('https://en.wikipedia.org/wiki/NIFTY_50')\nprint(resp.encoding)\nwith open ('NiftyList.txt', 'w') as f:\n f.write(resp.text)\n\nWhen I added encoding=\"utf-8\" with the open command it saved the file with the correct response \nwith open ('NiftyList.txt', 'w', encoding=\"utf-8\") as f:\n f.write(resp.text)\n\n", "Even I faced the same issue with the encoding that occurs when you try to print it, read/write it or open it. As others mentioned above adding .encoding=\"utf-8\" will help if you are trying to print it. \n\nsoup.encode(\"utf-8\")\n\nIf you are trying to open scraped data and maybe write it into a file, then open the file with (......,encoding=\"utf-8\")\n\nwith open(filename_csv , 'w', newline='',encoding=\"utf-8\") as csv_file:\n\n", "For those still getting this error, adding encode(\"utf-8\") to soup will also fix this.\nsoup = BeautifulSoup(html_doc, 'html.parser').encode(\"utf-8\")\nprint(soup)\n\n", "There are multiple aspects to this problem. The fundamental question is which character set you want to output into. You may also have to figure out the input character set.\nPrinting (with either print or write) into a file with an explicit encoding=\"...\" will translate Python's internal Unicode representation into that encoding. If the output contains characters which are not supported by that encoding, you will get an UnicodeEncodeError. For example, you can't write Russian or Chinese or Indic or Hebrew or Arabic or emoji or ... 
anything except a restricted set of some 200+ Western characters to a file whose encoding is \"cp1252\" because this limited 8-bit character set has no way to represent these characters.\nBasically the same problem will occur with any 8-bit character set, including nearly all the legacy Windows code pages (437, 850, 1250, 1251, etc etc), though some of them support some additional script in addition to or instead of English (1251 supports Cyrillic, for example, so you can write Russian, Ukrainian, Serbian, Bulgarian, etc). An 8-bit encoding has only a maximum of 256 character codes and no way to represent a character which isn't among them.\nPerhaps now would be a good time to read Joel Spolsky's The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)\nOn platforms where the terminal is not capable of printing Unicode (only Windows these days really, though if you're into retrocomputing, this problem was also prevalent on other platforms in the previous millennium) attempting to print Unicode strings can also produce this error, or output mojibake. If you see something like Héllö instead of Héllö, this is your issue.\nIn short, then, you need to know:\n\nWhat is the character set of the page you scraped, or the data you received? Was it correctly scraped? Did the originator correctly identify its encoding, or are you able to otherwise obtain this information (or guess it)? Some web sites incorrectly declare a different character set than the page actually contains, some sites have incorrectly configured the connection between the web server and a back-end database. See e.g. scrape with correct character encoding (python requests + beautifulsoup) for a more detailed example with some solutions.\n\nWhat is the character set you want to write? 
If printing to the screen, is your terminal correctly configured, and is your Python interpreter configured identically?\nPerhaps see also How to display utf-8 in windows console\n\n\nIf you are here, probably the answer to one of these questions is not \"UTF-8\". This is increasingly becoming the prevalent encoding for web pages, too, though the former standard was ISO-8859-1 (aka Latin-1) and more recently Windows code page 1252.\nGoing forward, you basically want all your textual data to be Unicode, outside of a few fringe use cases. Generally, that means UTF-8, though on Windows (or if you need Java compatibility), UTF-16 is also vaguely viable, albeit somewhat cumbersome. (There are several other Unicode serialization formats, which may be useful in specialized circumstances. UTF-32 is technically trivial, but takes up a lot more memory; UTF-7 is used in a few network protocols where 7-bit ASCII is required for transport.)\nPerhaps see also https://utf8everywhere.org/\nNaturally, if you are printing to a file, you also need to examine that file using a tool which can correctly display it. A common pilot error is to open the file using a tool which only displays the currently selected system encoding, or one which tries to guess the encoding, but guesses wrong. Again, a common symptom when viewing UTF-8 text using Windows code page 1252 would result, for example, in Héllö displaying as Héllö.\nIf the encoding of character data is unknown, there is no simple way to automatically establish it. If you know what the text is supposed to represent, you can perhaps infer it, but this is typically a manual process with some guesswork involved. (Automatic tools like chardet and ftfy can help, but they get it wrong some of the time, too.)\nTo establish which encoding you are looking at, it can be helpful if you can identify the individual bytes in a character which isn't displayed correctly. 
For example, if you are looking at H\\x8ell\\x9a but expect it to represent Héllö, you can look up the bytes in a translation table. I have published one such table at https://tripleee.github.io/8bit where you can see that in this example, it's probably one of the legacy Mac 8-bit character sets; with more data points, perhaps you can narrow it down to just one of them (and if not, any one of them will do in practice, since all the code points you care about map to the same Unicode characters).\nPython 3 on most platforms defaults to UTF-8 for all input and output, but on Windows, this is commonly not the case. It will then instead default to the system's default encoding (still misleadingly called \"ANSI code page\" in some Microsoft documentation), which depends on a number of factors. On Western systems, the default encoding out of the box is commonly Windows code page 1252.\n(Earlier Python versions had somewhat different expectations, and in Python 2, the internal string representation was not Unicode.)\nIf you are on Windows and write UTF-8 to a text file, maybe specify encoding=\"utf-8-sig\" which adds a BOM sequence at the beginning of the file. This is strictly speaking not necessary or correct, but some Windows tools need it to correctly identify the encoding.\nSeveral of the earlier answers here suggest blindly applying some encoding, but hopefully this should help you understand how that's not generally the correct approach, and how to figure out - rather than guess - which encoding to use.\n", "From Python 3.7 onwards,\nSet the the environment variable PYTHONUTF8 to 1\nThe following script included other useful variables too which set System Environment Variables.\nsetx /m PYTHONUTF8 1\nsetx PATHEXT \"%PATHEXT%;.PY\" ; In CMD, Python file can be executed without extesnion.\nsetx /m PY_PYTHON 3.10 ; To set default python version for py\n\nSource\n" ]
[ 650, 247, 76, 47, 21, 14, 5, 5, 2 ]
[ "I got the same error so I use (encoding=\"utf-8\") and it solve the error.\nThis generally happens when we got some unidentified symbol or pattern in text data that our encoder does not understand.\nwith open(\"text.txt\", \"w\", encoding='utf-8') as f:\n f.write(data)\n\nThis will solve your problem.\n", "if you are using windows try to pass encoding='latin1', encoding='iso-8859-1' or encoding='cp1252'\nexample:\ncsv_data = pd.read_csv(csvpath,encoding='iso-8859-1')\nprint(print(soup.encode('iso-8859-1')))\n\n" ]
[ -1, -2 ]
[ "beautifulsoup", "python", "urllib" ]
stackoverflow_0027092833_beautifulsoup_python_urllib.txt
Q: Sum values ​from a treeview column Good, I'm trying to sum the values ​​of a column, while inputting it. Since I put a code in the entry and check if it exists and put it in columns in treeview, and I would like to add only the "price" values, but I can't do it, I get the data from the price column, but I can't get if This 5.99 I have entered another 5.99 add up and give me a total, as I add a price. What I can be doing wrong? or what I have wrong Any additional information would be appreciated. def Cesta(self): self.conex() self.b_codigo = self.s_Codigo.get() self.sql3 = "SELECT * FROM productos WHERE codigo = %s" self.mycursor.execute(self.sql3,[(self.b_codigo)]) self.r_codigo = self.mycursor.fetchall() self.row3 = [item['nombre'] for item in self.r_codigo] if self.s_Codigo.get() == "": MessageBox.showinfo("ERROR", "DEBES INTRODUCIR DATOS", icon="error") elif self.r_codigo: for self.x2 in self.r_codigo: print (self.x2["nombre"], self.x2["talla"], self.x2["precio"]+"€") self.tree.insert('', 'end', text=self.x2["nombre"], values=(self.x2["talla"],self.x2["precio"]+" €")) print(self.x2["fecha"]) for self.item in self.tree.get_children(): self.resultado = 0 self.celda = int(self.tree.set(self.item,"col2")) self.total = int(self.resultado) + int(float(self.celda)) print(self.total) else: MessageBox.showinfo("ERROR", "EL CODIGO INTRODUCIDO NO ES CORRECTO", icon="error") self.clear_entry() ` File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/tkinter/__init__.py", line 1921, in __call__ return self.func(*args) File "/Users/tomas/Downloads/PROYECTO/main.py", line 205, in Cesta self.celda = int(self.tree.set(self.item,"col2")) ValueError: invalid literal for int() with base 10: '134,99 €' [Finished in 6.7s] ` self.tree = ttk.Treeview(self.pagina1,columns=("col1","col2"), height=50) self.tree.grid(column=0, row=2, padx=50, pady=100) ### COLUMNAS ### self.tree.column("#0",width=250) self.tree.column("col1",width=150, anchor=CENTER) 
self.tree.column("col2",width=150, anchor=CENTER) ### NOMBRES COLUMNAS ### self.tree.heading("#0", text="Articulo", anchor=CENTER) self.tree.heading("col1", text="Talla", anchor=CENTER) self.tree.heading("col2", text="Precio", anchor=CENTER) Everything else is going well for me, but the part in which I want to add the results of the price column does not What am I doing wrong, to be able to add the values ​​of the prices column every time I insert a new product? A: I have already managed to solve the error, the new price is already added to the old one, thanks for making me reflect on it. for self.x2 in self.r_codigo: print (self.x2["nombre"], self.x2["talla"], self.x2["precio"]+"€") self.tree.insert('', 'end', text=self.x2["nombre"], values=(self.x2["talla"],self.x2["precio"])) self.total = 0 for self.item in self.tree.get_children(): self.celda = float(self.tree.set(self.item,"col2")) self.total+=self.celda print(self.total)
Sum values from a treeview column
Good, I'm trying to sum the values ​​of a column, while inputting it. Since I put a code in the entry and check if it exists and put it in columns in treeview, and I would like to add only the "price" values, but I can't do it, I get the data from the price column, but I can't get if This 5.99 I have entered another 5.99 add up and give me a total, as I add a price. What I can be doing wrong? or what I have wrong Any additional information would be appreciated. def Cesta(self): self.conex() self.b_codigo = self.s_Codigo.get() self.sql3 = "SELECT * FROM productos WHERE codigo = %s" self.mycursor.execute(self.sql3,[(self.b_codigo)]) self.r_codigo = self.mycursor.fetchall() self.row3 = [item['nombre'] for item in self.r_codigo] if self.s_Codigo.get() == "": MessageBox.showinfo("ERROR", "DEBES INTRODUCIR DATOS", icon="error") elif self.r_codigo: for self.x2 in self.r_codigo: print (self.x2["nombre"], self.x2["talla"], self.x2["precio"]+"€") self.tree.insert('', 'end', text=self.x2["nombre"], values=(self.x2["talla"],self.x2["precio"]+" €")) print(self.x2["fecha"]) for self.item in self.tree.get_children(): self.resultado = 0 self.celda = int(self.tree.set(self.item,"col2")) self.total = int(self.resultado) + int(float(self.celda)) print(self.total) else: MessageBox.showinfo("ERROR", "EL CODIGO INTRODUCIDO NO ES CORRECTO", icon="error") self.clear_entry() ` File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/tkinter/__init__.py", line 1921, in __call__ return self.func(*args) File "/Users/tomas/Downloads/PROYECTO/main.py", line 205, in Cesta self.celda = int(self.tree.set(self.item,"col2")) ValueError: invalid literal for int() with base 10: '134,99 €' [Finished in 6.7s] ` self.tree = ttk.Treeview(self.pagina1,columns=("col1","col2"), height=50) self.tree.grid(column=0, row=2, padx=50, pady=100) ### COLUMNAS ### self.tree.column("#0",width=250) self.tree.column("col1",width=150, anchor=CENTER) self.tree.column("col2",width=150, anchor=CENTER) ### 
NOMBRES COLUMNAS ### self.tree.heading("#0", text="Articulo", anchor=CENTER) self.tree.heading("col1", text="Talla", anchor=CENTER) self.tree.heading("col2", text="Precio", anchor=CENTER) Everything else is going well for me, but the part in which I want to add the results of the price column does not What am I doing wrong, to be able to add the values ​​of the prices column every time I insert a new product?
[ "I have already managed to solve the error, the new price is already added to the old one, thanks for making me reflect on it.\nfor self.x2 in self.r_codigo:\n print (self.x2[\"nombre\"], self.x2[\"talla\"], self.x2[\"precio\"]+\"€\")\n self.tree.insert('', 'end', text=self.x2[\"nombre\"], values=(self.x2[\"talla\"],self.x2[\"precio\"]))\n\n self.total = 0\n \n for self.item in self.tree.get_children(): \n self.celda = float(self.tree.set(self.item,\"col2\"))\n self.total+=self.celda\n print(self.total)\n\n" ]
[ 0 ]
[]
[]
[ "mysql", "python", "tkinter", "treeview" ]
stackoverflow_0074493068_mysql_python_tkinter_treeview.txt
Q: Python Inserting a string I need to insert a string (character by character) into another string at every 3rd position For example:- string_1:-wwwaabkccgkll String_2:- toadhp Now I need to insert string2 char by char into string1 at every third position So the output must be wwtaaobkaccdgkhllp Need in Python.. even Java is ok So i tried this Test_str="hiimdumbiknow" challenge="toadh" new_st=challenge [k] Last=list(test_str) K=0 For i in range(Len(test_str)): if(i%3==0): last.insert(i,new_st) K+=1 and the output i get thitimtdutmbtiknow A: You can split test_str into sub-strings to length 2, and then iterate merging them with challenge: def concat3(test_str, challenge): chunks = [test_str[i:i+2] for i in range(0,len(test_str),2)] result = [] i = j = 0 while i<len(chunks) or j<len(challenge): if i<len(chunks): result.append(chunks[i]) i += 1 if j<len(challenge): result.append(challenge[j]) j += 1 return ''.join(result) test_str = "hiimdumbiknow" challenge = "toadh" print(concat3(test_str, challenge)) # hitimoduambdikhnow This method works even if the lengths of test_str and challenge are mismatching. (The remaining characters in the longest string will be appended at the end.) 
A: You can split Test_str in to groups of two letters and then re-join with each letter from challenge in between as follows; import itertools print(''.join(f'{two}{letter}' for two, letter in itertools.zip_longest([Test_str[i:i+2] for i in range(0,len(Test_str),2)], challenge, fillvalue=''))) Output: hitimoduambdikhnow *edited to split in to groups of two rather than three as originally posted A: you can try this, make an iter above the second string and iterate over the first one and select which character should be part of the final string according the position def add3(s1, s2): def n(): try: k = iter(s2) for i,j in enumerate(s1): yield (j if (i==0 or (i+1)%3) else next(k)) except: try: yield s1[i+1:] except: pass return ''.join(n()) A: def insertstring(test_str,challenge): result = '' x = [x for x in test_str] y = [y for y in challenge] j = 0 for i in range(len(x)): if i % 2 != 0 or i == 0: result += x[i] else: if j < 5: result += y[j] result += x[i] j += 1 get_last_element = x[-1] return result + get_last_element print(insertstring(test_str,challenge)) #output: hitimoduambdikhnow
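The answers above can be condensed into one function that reproduces both worked examples (note the question's `wwwaabkccgkll` appears to contain a stray `w`: the stated output `wwtaaobkaccdgkhllp` matches the 12-character `wwaabkccgkll`):

```python
from itertools import zip_longest

def interleave(base, extra):
    # Two-character chunks of `base`, with one character of `extra`
    # woven in after each chunk; leftovers of either string are kept.
    chunks = [base[i:i + 2] for i in range(0, len(base), 2)]
    return ''.join(a + b for a, b in zip_longest(chunks, extra, fillvalue=''))
```

`zip_longest` with `fillvalue=''` is what makes mismatched lengths safe: whichever string runs out first simply contributes empty strings for the rest.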
Python Inserting a string
I need to insert a string (character by character) into another string at every 3rd position For example:- string_1:-wwwaabkccgkll String_2:- toadhp Now I need to insert string2 char by char into string1 at every third position So the output must be wwtaaobkaccdgkhllp Need in Python.. even Java is ok So i tried this Test_str="hiimdumbiknow" challenge="toadh" new_st=challenge [k] Last=list(test_str) K=0 For i in range(Len(test_str)): if(i%3==0): last.insert(i,new_st) K+=1 and the output i get thitimtdutmbtiknow
[ "You can split test_str into sub-strings to length 2, and then iterate merging them with challenge:\ndef concat3(test_str, challenge):\n chunks = [test_str[i:i+2] for i in range(0,len(test_str),2)]\n result = []\n i = j = 0\n while i<len(chunks) or j<len(challenge):\n if i<len(chunks):\n result.append(chunks[i])\n i += 1\n if j<len(challenge):\n result.append(challenge[j])\n j += 1\n return ''.join(result)\n\ntest_str = \"hiimdumbiknow\"\nchallenge = \"toadh\"\n\nprint(concat3(test_str, challenge))\n# hitimoduambdikhnow\n\nThis method works even if the lengths of test_str and challenge are mismatching. (The remaining characters in the longest string will be appended at the end.)\n", "You can split Test_str in to groups of two letters and then re-join with each letter from challenge in between as follows;\nimport itertools\nprint(''.join(f'{two}{letter}' for two, letter in itertools.zip_longest([Test_str[i:i+2] for i in range(0,len(Test_str),2)], challenge, fillvalue='')))\n\nOutput:\nhitimoduambdikhnow\n\n*edited to split in to groups of two rather than three as originally posted\n", "you can try this, make an iter above the second string and iterate over the first one and select which character should be part of the final string according the position\ndef add3(s1, s2):\n def n():\n try:\n k = iter(s2)\n for i,j in enumerate(s1):\n yield (j if (i==0 or (i+1)%3) else next(k))\n except:\n try:\n yield s1[i+1:]\n except:\n pass \n return ''.join(n()) \n\n", "def insertstring(test_str,challenge):\n result = ''\n x = [x for x in test_str]\n y = [y for y in challenge]\n j = 0\n for i in range(len(x)):\n if i % 2 != 0 or i == 0:\n result += x[i]\n else:\n if j < 5:\n result += y[j]\n result += x[i]\n j += 1\n get_last_element = x[-1]\n return result + get_last_element\n\n\nprint(insertstring(test_str,challenge))\n\n#output: hitimoduambdikhnow\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074499534_python.txt
Q: Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [585,1024,3], [batch]: [600,799,3]
I am trying to train a model. At first I had a dataset of 5000 images and training worked fine; now I have added a couple more images and my dataset contains 6,423 images. I am using Python 3.6.1 on Ubuntu 18.04, my tensorflow version is 1.15 & numpy version is 1.16 (had the same versions before and it worked fine). Now when I use:

python model_main.py --logtostderr --pipeline_config_path=training/faster_rcnn_resnet50_coco.config --model_dir=training

it starts setting up for a couple of minutes, and after these lines:

INFO:tensorflow:Saving checkpoints for 0 into training/model.ckpt.
I1123 10:26:21.548237 140482563244160 basic_session_run_hooks.py:606] Saving checkpoints for 0 into training/model.ckpt.
2019-11-23 10:28:30.801453: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0

I get the following errors:

2019-11-23 10:08:38.843259: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_3_hash_table_2/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.843323: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_1_hash_table_1/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.843345: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_2_hash_table/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.851405: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_3_hash_table_2/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.851488: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_1_hash_table_1/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.851512: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_2_hash_table/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.851807: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_1_hash_table_1/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.851848: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_2_hash_table/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.851899: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_3_hash_table_2/N10tensorflow6lookup15LookupInterfaceE does not exist.

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [585,1024,3], [batch]: [600,799,3]
     [[{{node IteratorGetNext}}]]
     [[ToAbsoluteCoordinates_118/Assert/AssertGuard/Assert/data_0/_5709]]
(1) Invalid argument: Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [585,1024,3], [batch]: [600,799,3]
     [[{{node IteratorGetNext}}]]
0 successful operations.
0 derived errors ignored.

and training stops.
A: It seems that the new images you've added have a resolution of 585x1024, which differs from the size that's expected by the model, i.e. 600x799. If so, then the solution is to resize these new images accordingly.
A: If you need batch size > 1, you can resize the images to a uniform size with the right image_resizer in the config, one of the ones defined in the image_resizer protobuf file, which I assume is what is used to parse that part of the config. For example (stolen from here):

image_resizer {
  fixed_shape_resizer {
    height: 600
    width: 800
  }
}

This seems to fix the problem for me.
A: Changing the batch_size to 1 fixed this issue for me.
A: In a minibatch, all images must have the same size, so you must resize all photos to the same size or set the batch size to 1.
A: Just removed the data augmentation and it worked for me. Also, if you want, you can try removing the data augmentations one after another... but removing all of them just worked for me.
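The error above comes from batching: every tensor stacked into a batch must have exactly the same shape. The sketch below reproduces the failure mode with plain NumPy (it is an illustration, not the actual TensorFlow pipeline) and shows how forcing all images to one size, as the image_resizer answers suggest, makes batching possible. The crop/pad helper is a hypothetical stand-in for a real resizer.

```python
import numpy as np

def batch_images(images):
    # Batching stacks images along a new leading axis; this requires
    # every image to have exactly the same (H, W, C) shape.
    return np.stack(images, axis=0)

def crop_or_pad(img, height, width):
    # Toy "resizer": crop (or zero-pad) each image to (height, width).
    # A real pipeline would use an image_resizer in the config instead.
    out = np.zeros((height, width, img.shape[2]), dtype=img.dtype)
    h = min(height, img.shape[0])
    w = min(width, img.shape[1])
    out[:h, :w] = img[:h, :w]
    return out

a = np.zeros((600, 799, 3))   # shape used by most of the dataset
b = np.zeros((585, 1024, 3))  # the newly added, differently sized image

try:
    batch_images([a, b])
    raised = False
except ValueError:
    # same failure class as "Cannot add tensor to the batch"
    raised = True

# After resizing everything to one shape, batching succeeds.
resized = [crop_or_pad(x, 600, 800) for x in (a, b)]
batch = batch_images(resized)
```

The same reasoning explains why a batch size of 1 also "fixes" it: a batch of one never has to match any other tensor's shape.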
Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [585,1024,3], [batch]: [600,799,3]
I am trying to train a model. At first I had a dataset of 5000 images and training worked fine; now I have added a couple more images and my dataset contains 6,423 images. I am using Python 3.6.1 on Ubuntu 18.04, my tensorflow version is 1.15 & numpy version is 1.16 (had the same versions before and it worked fine). Now when I use:

python model_main.py --logtostderr --pipeline_config_path=training/faster_rcnn_resnet50_coco.config --model_dir=training

it starts setting up for a couple of minutes, and after these lines:

INFO:tensorflow:Saving checkpoints for 0 into training/model.ckpt.
I1123 10:26:21.548237 140482563244160 basic_session_run_hooks.py:606] Saving checkpoints for 0 into training/model.ckpt.
2019-11-23 10:28:30.801453: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0

I get the following errors:

2019-11-23 10:08:38.843259: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_3_hash_table_2/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.843323: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_1_hash_table_1/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.843345: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_2_hash_table/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.851405: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_3_hash_table_2/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.851488: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_1_hash_table_1/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.851512: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_2_hash_table/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.851807: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_1_hash_table_1/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.851848: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_2_hash_table/N10tensorflow6lookup15LookupInterfaceE does not exist.
2019-11-23 10:08:38.851899: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at lookup_table_op.cc:788 : Not found: Resource localhost/_3_hash_table_2/N10tensorflow6lookup15LookupInterfaceE does not exist.

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [585,1024,3], [batch]: [600,799,3]
     [[{{node IteratorGetNext}}]]
     [[ToAbsoluteCoordinates_118/Assert/AssertGuard/Assert/data_0/_5709]]
(1) Invalid argument: Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [585,1024,3], [batch]: [600,799,3]
     [[{{node IteratorGetNext}}]]
0 successful operations.
0 derived errors ignored.

and training stops.
[ "It seems that the new images you've added have a resolution of 585x1024, which differs from the size that's expected by the model i.e. 600x799.\nIf so, then the solution is to resize these new images accordingly.\n", "If you need batch size > 1, you can resize the images to a uniform size with the right image_resizer in the config, one of the ones defined in the image_resizer protobuf file, which I assume is what is used to parse that part of the config.\nFor example (stolen from here):\nimage_resizer {\n fixed_shape_resizer {\n height: 600\n width: 800\n }\n}\n\nThis seems to fix the problem for me.\n", "Changing the batch_size to 1 fixed this issue for me.\n", "In minibatch, all images must have the same size, so you must resize all photos to the same size or set the batch size to 1\n", "Just removed the data augmentation and it worked for me.\nAlso if you want you can try removing one after another data augmentations... but removing all just worked for me.\n" ]
[ 4, 2, 0, 0, 0 ]
[]
[]
[ "python", "tensorflow" ]
stackoverflow_0059006696_python_tensorflow.txt
Q: How can i implement a code so the snake won't go opposite direction
Hello, I have been struggling to make it so that the snake head can't move to the left if it is moving to the right, and the same for up and down. I understand I need to track a direction for the snake so I can compare directions to each other; I just don't know how to implement this. code:

# Snake game.
import pygame
import random

pygame.init()

# Kleur voor slang / food.
white = (255, 255, 255)
black = (0, 0, 0)
red = (255, 0, 0)
blue = (50, 153, 213)
green = (0, 255, 0)

# Start screen.
win_width = 800
win_height = 600
window_screen = pygame.display.set_mode((win_width, win_height))
pygame.display.set_caption('Snake game by Smerfy')

snake_block = 10
snake_speed = 20

clock = pygame.time.Clock()

font_style = pygame.font.SysFont("bahnschrift", 25)
score_font = pygame.font.SysFont("comicsansms", 35)

def your_score(score):
    value = score_font.render(f"Your Score: {score}", True, red)
    window_screen.blit(value, [0, 0])

def snake(snake_blk, snake_list):
    for x in snake_list:
        pygame.draw.rect(window_screen, red, [x[0], x[1], snake_blk, snake_blk])

def message(msg, color):
    msg = font_style.render(msg, True, color)
    msg_rect = msg.get_rect(center=(win_width / 2, win_height / 2))
    window_screen.blit(msg, msg_rect)

def game_loop():
    game_over = False
    game_close = False

    # Start punt snake hoofd x,y.
    x1 = win_width / 2
    y1 = win_height / 2

    x1_change = 0
    y1_change = 0

    snake_list = []
    lenght_of_snake = 1

    foodx = round(random.randrange(0, win_width - snake_block) / 10.0) * 10.0
    foody = round(random.randrange(0, win_height - snake_block) / 10.0) * 10.0

    while not game_over:

        while game_close:
            window_screen.fill(blue)
            message("You Lost! Press C-Play Again or Q-Quit", red)
            your_score(lenght_of_snake - 1)
            pygame.display.update()

            for event in pygame.event.get():
                if event.key == pygame.K_q:
                    game_over = True
                    game_close = False
                if event.key == pygame.K_c:
                    game_loop()

        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                game_over = True
            if event.type == pygame.KEYDOWN:
                if event.key == pygame.K_LEFT:
                    x1_change = -snake_block
                    y1_change = 0
                elif event.key == pygame.K_RIGHT:
                    x1_change = snake_block
                    y1_change = 0
                elif event.key == pygame.K_UP:
                    y1_change = -snake_block
                    x1_change = 0
                elif event.key == pygame.K_DOWN:
                    y1_change = snake_block
                    x1_change = 0

        if x1 >= win_width or x1 < 0 or y1 >= win_height or y1 < 0:
            game_close = True
        x1 += x1_change
        y1 += y1_change
        window_screen.fill('black')
        pygame.draw.rect(window_screen, green, [foodx, foody, snake_block, snake_block])
        snake_head = [x1, y1]
        snake_list.append(snake_head)
        if len(snake_list) > lenght_of_snake:
            del snake_list[0]

        for x in snake_list[:-1]:
            if x == snake_head:
                game_close = True

        snake(snake_block, snake_list)
        your_score(lenght_of_snake - 1)

        pygame.display.update()

        if x1 == foodx and y1 == foody:
            foodx = round(random.randrange(0, win_width - snake_block) / 10.0) * 10.0
            foody = round(random.randrange(0, win_height - snake_block) / 10.0) * 10.0
            lenght_of_snake += 1

        # Snelheid van de slang.
        clock.tick(snake_speed)

    pygame.quit()
    quit()

game_loop()

I have no idea how to implement something so it can compare directions to each other, like the following:

if direction != 'down':
    direction = 'up'

if self.direction == 'left':
    self.x[0] -= size
if self.direction == 'right':
    self.x[0] += size
if self.direction == 'up':
    self.y[0] -= size
if self.direction == 'down':
    self.y[0] += size
self.draw()

A: You'll want to define a direction variable at the start of game_loop:

direction = 'right'

You'll then need to edit the input section of the code to something like this:

if event.type == pygame.KEYDOWN:
    if event.key == pygame.K_LEFT and direction != 'right':
        x1_change = -snake_block
        y1_change = 0
        direction = 'left'
    elif event.key == pygame.K_RIGHT and direction != 'left':
        x1_change = snake_block
        y1_change = 0
        direction = 'right'
    elif event.key == pygame.K_UP and direction != 'down':
        y1_change = -snake_block
        x1_change = 0
        direction = 'up'
    elif event.key == pygame.K_DOWN and direction != 'up':
        y1_change = snake_block
        x1_change = 0
        direction = 'down'
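An equivalent way to express the "no reversal" rule, sketched here as an alternative to the direction-string approach: keep the velocity as a (dx, dy) vector and reject any requested velocity that is the exact negation of the current one. The function name below is illustrative, not from the original code.

```python
def next_velocity(current, requested):
    """Return the requested (dx, dy) unless it would reverse the
    current direction, in which case keep moving as before."""
    cx, cy = current
    if requested == (-cx, -cy) and (cx, cy) != (0, 0):
        return current  # ignore a 180-degree turn
    return requested

# Example: snake moving right at 10 px per tick.
right = (10, 0)
after_left_press = next_velocity(right, (-10, 0))  # reversal, ignored
after_up_press = next_velocity(right, (0, -10))    # perpendicular, allowed
```

This works because in this game LEFT/RIGHT and UP/DOWN always set exactly one axis, so the only forbidden move is the negated vector; it also removes the need to keep a direction string in sync with x1_change/y1_change.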
How can i implement a code so the snake won't go opposite direction
Hello, I have been struggling to make it so that the snake head can't move to the left if it is moving to the right, and the same for up and down. I understand I need to track a direction for the snake so I can compare directions to each other; I just don't know how to implement this. code:

# Snake game.
import pygame
import random

pygame.init()

# Kleur voor slang / food.
white = (255, 255, 255)
black = (0, 0, 0)
red = (255, 0, 0)
blue = (50, 153, 213)
green = (0, 255, 0)

# Start screen.
win_width = 800
win_height = 600
window_screen = pygame.display.set_mode((win_width, win_height))
pygame.display.set_caption('Snake game by Smerfy')

snake_block = 10
snake_speed = 20

clock = pygame.time.Clock()

font_style = pygame.font.SysFont("bahnschrift", 25)
score_font = pygame.font.SysFont("comicsansms", 35)

def your_score(score):
    value = score_font.render(f"Your Score: {score}", True, red)
    window_screen.blit(value, [0, 0])

def snake(snake_blk, snake_list):
    for x in snake_list:
        pygame.draw.rect(window_screen, red, [x[0], x[1], snake_blk, snake_blk])

def message(msg, color):
    msg = font_style.render(msg, True, color)
    msg_rect = msg.get_rect(center=(win_width / 2, win_height / 2))
    window_screen.blit(msg, msg_rect)

def game_loop():
    game_over = False
    game_close = False

    # Start punt snake hoofd x,y.
    x1 = win_width / 2
    y1 = win_height / 2

    x1_change = 0
    y1_change = 0

    snake_list = []
    lenght_of_snake = 1

    foodx = round(random.randrange(0, win_width - snake_block) / 10.0) * 10.0
    foody = round(random.randrange(0, win_height - snake_block) / 10.0) * 10.0

    while not game_over:

        while game_close:
            window_screen.fill(blue)
            message("You Lost! Press C-Play Again or Q-Quit", red)
            your_score(lenght_of_snake - 1)
            pygame.display.update()

            for event in pygame.event.get():
                if event.key == pygame.K_q:
                    game_over = True
                    game_close = False
                if event.key == pygame.K_c:
                    game_loop()

        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                game_over = True
            if event.type == pygame.KEYDOWN:
                if event.key == pygame.K_LEFT:
                    x1_change = -snake_block
                    y1_change = 0
                elif event.key == pygame.K_RIGHT:
                    x1_change = snake_block
                    y1_change = 0
                elif event.key == pygame.K_UP:
                    y1_change = -snake_block
                    x1_change = 0
                elif event.key == pygame.K_DOWN:
                    y1_change = snake_block
                    x1_change = 0

        if x1 >= win_width or x1 < 0 or y1 >= win_height or y1 < 0:
            game_close = True
        x1 += x1_change
        y1 += y1_change
        window_screen.fill('black')
        pygame.draw.rect(window_screen, green, [foodx, foody, snake_block, snake_block])
        snake_head = [x1, y1]
        snake_list.append(snake_head)
        if len(snake_list) > lenght_of_snake:
            del snake_list[0]

        for x in snake_list[:-1]:
            if x == snake_head:
                game_close = True

        snake(snake_block, snake_list)
        your_score(lenght_of_snake - 1)

        pygame.display.update()

        if x1 == foodx and y1 == foody:
            foodx = round(random.randrange(0, win_width - snake_block) / 10.0) * 10.0
            foody = round(random.randrange(0, win_height - snake_block) / 10.0) * 10.0
            lenght_of_snake += 1

        # Snelheid van de slang.
        clock.tick(snake_speed)

    pygame.quit()
    quit()

game_loop()

I have no idea how to implement something so it can compare directions to each other, like the following:

if direction != 'down':
    direction = 'up'

if self.direction == 'left':
    self.x[0] -= size
if self.direction == 'right':
    self.x[0] += size
if self.direction == 'up':
    self.y[0] -= size
if self.direction == 'down':
    self.y[0] += size
self.draw()
[ "You'll want to define a direction variable at the start of game_loop:\ndirection = 'right'\n\nYou'll then need to edit the input section of the code to something like this:\nif event.type == pygame.KEYDOWN:\n if event.key == pygame.K_LEFT and direction != 'right':\n x1_change = -snake_block\n y1_change = 0\n direction = 'left'\n elif event.key == pygame.K_RIGHT and direction != 'left':\n x1_change = snake_block\n y1_change = 0\n direction = 'right'\n elif event.key == pygame.K_UP and direction != 'down':\n y1_change = -snake_block\n x1_change = 0\n direction = 'up'\n elif event.key == pygame.K_DOWN and direction != 'up':\n y1_change = snake_block\n x1_change = 0\n direction = 'down'\n\n" ]
[ 1 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074502879_pygame_python.txt
Q: In python, how do I make a string overlap a current string in the shell?
Each second, it prints a new line. Is there a way to have it print on top of the previous line?

while True:
    sec += 1
    if sec / 60 == sec_int:
        sec = 0
        mins += 1
    if mins / 60 == min_int:
        mins = 0
        hours += 1
    if hours / 24 == hour_int:
        hours = 0
        days += 1
    print(f"{days}d : {hours}h : {mins}m : {sec}s")
    time.sleep(1)

A: Replace your print statement with:

print(f"\r{days}d : {hours}h : {mins}m : {sec}s", end="", flush=True)

"\r" is a "control character" which moves the cursor to the beginning of the line ("carriage return"). flush=True is needed to make the display update right away--normally Python can buffer until a newline is written, which of course you'll never write (because there's no way to go back up to a previous line).
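As a side note on the counter logic in this question: the manual carry chain (seconds into minutes, minutes into hours, hours into days) can also be derived from a single running total with divmod. A small sketch, separate from the original code, that also shows the "\r" prefix from the answer:

```python
def split_elapsed(total_seconds):
    # Successively peel off seconds, minutes and hours with divmod;
    # whatever remains is whole days.
    total_minutes, sec = divmod(total_seconds, 60)
    total_hours, mins = divmod(total_minutes, 60)
    days, hours = divmod(total_hours, 24)
    return days, hours, mins, sec

def render(total_seconds):
    d, h, m, s = split_elapsed(total_seconds)
    # "\r" returns the cursor to the start of the line so each tick,
    # printed with end="" and flush=True, overwrites the previous one.
    return f"\r{d}d : {h}h : {m}m : {s}s"
```

Keeping one integer total and formatting it per tick avoids the separate sec_int/min_int/hour_int comparison variables entirely.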
In python, how do I make a string overlap a current string in the shell?
Each second, it prints a new line. Is there a way to have it print on top of the previous line?

while True:
    sec += 1
    if sec / 60 == sec_int:
        sec = 0
        mins += 1
    if mins / 60 == min_int:
        mins = 0
        hours += 1
    if hours / 24 == hour_int:
        hours = 0
        days += 1
    print(f"{days}d : {hours}h : {mins}m : {sec}s")
    time.sleep(1)
[ "Replace your print statement with:\nprint(f\"\\r{days}d : {hours}h : {mins}m : {sec}s\", end=\"\", flush=True)\n\n\"\\r\" is a \"control character\" which moves the cursor to the beginning of the line (\"carriage Return\"). flush=True is needed to make the display update right away--normally Python can buffer until a newline is written, which of course you'll never write (because there's no way to go back up to a previous line).\n" ]
[ 0 ]
[]
[]
[ "display", "loops", "python", "replace", "shell" ]
stackoverflow_0074502688_display_loops_python_replace_shell.txt
Q: Kernel size change in convolutional neural networks
I have been working on creating a convolutional neural network from scratch, and am a little confused about how to treat kernel size for hidden convolutional layers. For example, say I have an MNIST image as input (28 x 28) and put it through the following layers.

Convolutional layer with kernel_size = (5,5) with 32 output channels
    new dimension of throughput = (32, 28, 28)
Max Pooling layer with pool_size (2,2) and step (2,2)
    new dimension of throughput = (32, 14, 14)

If I now want to create a second convolutional layer with kernel size = (5x5) and 64 output channels, how do I proceed? Does this mean that I only need two new filters (2 x 32 existing channels) or does the kernel size change to be (32 x 5 x 5) since there are already 32 input channels? Since the initial input was a 2D image, I do not know how to conduct convolution for the hidden layer since the input is now 3 dimensional (32 x 14 x 14).
A: You need 64 kernels, each with the size (32, 5, 5). The depth (#channels) of a kernel, 32 in this case (or 3 for an RGB image, 1 for grayscale, etc.), should always match the input depth. For example, if you have a 3x3 kernel like this: [-1 0 1; -2 0 2; -1 0 1] and you want to convolve it with an input of depth N (i.e., N channels), you just copy this 3x3 kernel N times along the 3rd dimension. The math that follows is just like the 1-channel case: after multiplying the kernel values with the input values your window is currently on, you sum over all N channels and get the value of just one entry (pixel). So what you get as output in the end is a matrix with 1 channel :) How much depth do you want the matrix for the next layer to have? That's the number of kernels you should apply. Hence in your case it would be a kernel tensor of size (64 x 32 x 5 x 5), which is actually 64 kernels with 32 channels each, and in this fixed example the same 5x5 values in all channels.
("I am not a very confident English speaker; hope you get what I said. It would be nice if someone edited this :)")
A: You essentially answered your own question. YOU are building the network solver. It seems like your convolutional layer output is [channels out] = [channels in] * [number of kernels]. I had to infer this from the wording of your question. In general, this is how it works: you specify the kernel size of the layer and how many kernels to use. Since you have one input channel you are essentially saying that there are 32 kernels in your first convolution layer. That is 32 unique 5x5 kernels. Each of these kernels will be applied to the one input channel. More generally, each of the layer kernels (32 in your example) is applied to each of the input channels. And that is the key. If you build code to implement the convolution layer according to these generalities, then your subsequent convolution layers are done. In the next layer you specify two kernels per channel. In your example there would be 32 input channels, the hidden layer has 2 kernels per channel, and the output would be 64 channels.

You could then downsample by applying a pooling layer, then flatten the 64 channels [turn a matrix into a vector by stacking the columns or rows], and pass it as a column vector into a fully connected network. That is the basic scheme of convolutional networks.

The work comes when you try to code up backpropagation through the convolutional layers. But the OP didn't ask about that. I'll just say this: you will come to a place where you have the stored input matrix (one channel), you have a gradient from a lower layer in the form of a matrix that is the size of the layer kernel, and you need to backpropagate it up to the next convolutional layer.

The simple approach is to rotate your stored channel matrix by 180 degrees and then convolve it with the gradient. The explanation for this is long and tedious, too much to write here, and not a lot on the internet explains it well.

A more sophisticated approach is to apply "correlation" between the input gradient and the stored channel matrix. Note I specifically said "correlation" as opposed to "convolution", and that is key. If you think they are "almost" the same thing, then I recommend you take some time and learn about the differences.

If you would like to have a look at my CNN solver, here's a link to the project. It's C++ and has no documentation, sorry :) It's all in a header file called layer.h; find the class FilterLayer2D. I think the code is pretty readable (what programmer doesn't think his code is readable :) )
https://github.com/sraber/simplenet.git

I also wrote a paper on basic fully connected networks. I wrote it so that I wouldn't forget what I learned in my self-study. Maybe you'll get something out of it. It's at this link:
http://www.raberfamily.com/scottblog/scottblog.htm
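The shape bookkeeping described in the answers can be checked numerically. The sketch below implements a naive convolution with plain NumPy loops (no padding, stride 1) just to confirm the weight shape (C_out, C_in, k, k) and the fact that each kernel sums over all input channels to produce one output channel; it is an illustration, not an efficient implementation.

```python
import numpy as np

def conv2d(x, w):
    # x: (C_in, H, W), w: (C_out, C_in, k, k) -> (C_out, H-k+1, W-k+1)
    c_out, c_in, k, _ = w.shape
    assert x.shape[0] == c_in, "kernel depth must match input channels"
    h_out = x.shape[1] - k + 1
    w_out = x.shape[2] - k + 1
    y = np.zeros((c_out, h_out, w_out))
    for o in range(c_out):
        for i in range(h_out):
            for j in range(w_out):
                # each output value sums over ALL input channels at once
                y[o, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[o])
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 14, 14))    # output of the pooling layer
w = rng.standard_normal((64, 32, 5, 5))  # 64 kernels, each 32 x 5 x 5
y = conv2d(x, w)
```

So the second layer in the question needs 64 kernels of shape (32, 5, 5), i.e. 64*32*5*5 = 51,200 weights, and its output (before any padding) has 64 channels.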
Kernel size change in convolutional neural networks
I have been working on creating a convolutional neural network from scratch, and am a little confused about how to treat kernel size for hidden convolutional layers. For example, say I have an MNIST image as input (28 x 28) and put it through the following layers.

Convolutional layer with kernel_size = (5,5) with 32 output channels
    new dimension of throughput = (32, 28, 28)
Max Pooling layer with pool_size (2,2) and step (2,2)
    new dimension of throughput = (32, 14, 14)

If I now want to create a second convolutional layer with kernel size = (5x5) and 64 output channels, how do I proceed? Does this mean that I only need two new filters (2 x 32 existing channels) or does the kernel size change to be (32 x 5 x 5) since there are already 32 input channels? Since the initial input was a 2D image, I do not know how to conduct convolution for the hidden layer since the input is now 3 dimensional (32 x 14 x 14).
[ "you need 64 kernel, each with the size of (32,5,5) . \ndepth(#channels) of kernels, 32 in this case, or 3 for a RGB image, 1 for gray scale etc, should always match the input depth, but values are all the same.\ne.g. if you have a 3x3 kernel like this : [-1 0 1; -2 0 2; -1 0 1] and now you want to convolve it with an input with N as depth or say channel, you just copy this 3x3 kernel N times in 3rd dimension, the following math is just like the 1 channel case, you sum all values in all N channels which your kernel window is currently on them after multiplying the kernel values with them and get the value of just 1 entry or pixel. so what you get as output in the end is a matrix with 1 channel:) how much depth you want your matrix for next layer to have? that's the number of kernels you should apply. hence in your case it would be a kernel with this size (64 x 32 x 5 x 5) which is actually 64 kernels with 32 channels for each and same 5x5 values in all cahnnels.\n(\"I am not a very confident english speaker hope you get what I said, it would be nice if someone edit this :)\")\n", "You essentially answered your own question. YOU are building the network solver. It seems like your convolutional layer output is [channels out] = [channels in] * [number of kernels]. I had to infer this from the wording of your question. In general, this is how it works: you specify the kernel size of the layer and how many kernels to use. Since you have one input channel you are essentially saying that there are 32 kernels in your first convolution layer. That is 32 unique 5x5 kernels. Each of these kernels will be applied to the one input channel. More in general, each of the layer kernels (32 in your example) is applied to each of the input channels. And that is the key. If you build code to implement the convolution layer according to these generalities, then your subsequent convolution layers are done. In the next layer you specify two kernels per channel. 
In your example there would be 32 input channels, the hidden layer has 2 kernels per channel, and the output would be 64 channels.\nYou could then down sample by applying a pooling layer, then flatten the 64 channels [turn a matrix into a vector by stacking the columns or rows], and pass it as a column vector into a fully connected network. That is the basic scheme of convolutional networks.\nThe work comes when you try to code up backpropagation through the convolutional layers. But the OP didn’t ask about that. I’ll just say this, you will come to a place where you have the stored input matrix (one channel), you have a gradient from a lower layer in the form of a matrix and is the size of the layer kernel, and you need to backpropagate it up to the next convolutional layer.\nThe simple approach is to rotate your stored channel matrix by 180 degrees and then convolve it with the gradient. The explanation for this is long and tedious, too much to write here, and not a lot on the internet explains it well.\nA more sophisticated approach is to apply “correlation” between the input gradient and the stored channel matrix. Note I specifically said “correlation” as opposed to “convolution” and that is key. If you think they “almost” the same thing, then I recommend you take some time and learn about the differences.\nIf you would like to have a look at my CNN solver here's a link to the project. It's C++ and no documentation, sorry :) It's all in a header file called layer.h, find the class FilterLayer2D. I think the code is pretty readable (what programmer doesn't think his code is readable :) )\nhttps://github.com/sraber/simplenet.git\nI also wrote a paper on basic fully connected networks. I wrote it so that I would forget what I learned in my self study. Maybe you'll get something out of it. It's at this link:\nhttp://www.raberfamily.com/scottblog/scottblog.htm\n" ]
[ 0, 0 ]
[]
[]
[ "conv_neural_network", "convolution", "neural_network", "python", "tensorflow" ]
stackoverflow_0052997810_conv_neural_network_convolution_neural_network_python_tensorflow.txt
Q: Azure Python SDK to get cost of individual resources
I want to get the cost of individual resources using a Python script. Is there any way to get the price of a VM, database, etc.?
A: Use the Azure Billing library for Python.
https://learn.microsoft.com/en-us/python/api/overview/azure/cost-management-+-billing?view=azure-python
Azure Python SDK to get cost of individual resources
I want to get the cost of individual resources using a Python script. Is there any way to get the price of a VM, database, etc.?
[ "Use the Azure Billing library for Python.\nhttps://learn.microsoft.com/en-us/python/api/overview/azure/cost-management-+-billing?view=azure-python\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_python_sdk", "python" ]
stackoverflow_0074502642_azure_azure_python_sdk_python.txt
Q: Get mangled attribute value of a parent class outside of a class Imagine a parent class which has a mangled attribute, and a child class: class Foo: def __init__(self): self.__is_init = False async def init(self): # Some custom logic here, not important self.__is_init = True class Bar(Foo): ... # Create class instance. bar = Bar() # How access `__is_init` of the parent class from the child instance? How can I get a __is_init value from a parent (Foo) class? Obviously, I can bar._Foo__is_init in this example, but the problem is that class name is dynamic and I need a general purpose solution that will work with any passed class name. A: The solution I see now is iterating over parent classes, and building a mangled attribute name dynamically: from contextlib import suppress class MangledAttributeError(Exception): ... def getattr_mangled(object_: object, name: str) -> str: for cls_ in getattr(object_, "__mro__", None) or object_.__class__.__mro__: with suppress(AttributeError): return getattr(object_, f"_{cls_.__name__}{name}") raise MangledAttributeError(f"{type(object_).__name__} object has no attribute '{name}'") Checking that this works: class Foo: def __init__(self): self.__is_init = False async def init(self): self.__is_init = True class Bar(Foo): def __init__(self): super().__init__() bar = Bar() is_init = getattr_mangled(bar, "__is_init") print(f"is_init: {is_init}") # Will print `False` which is a correct value in this example A: class Foo: def __init__(self): self.__is_init = False async def init(self): self.__is_init = True class Bar(Foo): def getattr_mangled(self, attr:str): for i in self.__dict__.keys(): if attr in i: return getattr(self,i) # return self.__dict__[i] #or like this bar = Bar() print(bar.getattr_mangled('__is_init')) #False if there is a need in __init__ in Bar we should of course initiate Foo's init too by: super().__init__() When Foo's init is run, self namespace already has attribute name we need in the form we need it 
(like_PARENT_CLASS_NAME__attrname). And we can just get it from self namespace without even knowing what parent class name is.
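The MRO-walking helper described above can be sketched without the contextlib machinery. This is a minimal, hypothetical variant, not the poster's exact code:

```python
# Minimal sketch: walk the MRO and try each class's mangled spelling of the name.
class Foo:
    def __init__(self):
        self.__is_init = False  # stored on the instance as _Foo__is_init


class Bar(Foo):
    pass


def getattr_mangled(obj, name):
    for cls in type(obj).__mro__:
        try:
            return getattr(obj, f"_{cls.__name__}{name}")
        except AttributeError:
            pass
    raise AttributeError(f"{type(obj).__name__} object has no attribute {name!r}")


bar = Bar()
print(getattr_mangled(bar, "__is_init"))  # False
```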
Get mangled attribute value of a parent class outside of a class
Imagine a parent class which has a mangled attribute, and a child class: class Foo: def __init__(self): self.__is_init = False async def init(self): # Some custom logic here, not important self.__is_init = True class Bar(Foo): ... # Create class instance. bar = Bar() # How access `__is_init` of the parent class from the child instance? How can I get a __is_init value from a parent (Foo) class? Obviously, I can bar._Foo__is_init in this example, but the problem is that class name is dynamic and I need a general purpose solution that will work with any passed class name.
[ "The solution I see now is iterating over parent classes, and building a mangled attribute name dynamically:\nfrom contextlib import suppress\n\nclass MangledAttributeError(Exception):\n ...\n\ndef getattr_mangled(object_: object, name: str) -> str:\n for cls_ in getattr(object_, \"__mro__\", None) or object_.__class__.__mro__:\n with suppress(AttributeError):\n return getattr(object_, f\"_{cls_.__name__}{name}\")\n raise MangledAttributeError(f\"{type(object_).__name__} object has no attribute '{name}'\")\n\nChecking that this works:\nclass Foo:\n\n def __init__(self):\n self.__is_init = False\n\n async def init(self):\n self.__is_init = True\n\nclass Bar(Foo):\n\n def __init__(self):\n super().__init__()\n\nbar = Bar()\nis_init = getattr_mangled(bar, \"__is_init\")\nprint(f\"is_init: {is_init}\") # Will print `False` which is a correct value in this example\n\n", "class Foo:\n\n def __init__(self):\n self.__is_init = False\n\n async def init(self):\n self.__is_init = True\n\nclass Bar(Foo):\n\n def getattr_mangled(self, attr:str):\n for i in self.__dict__.keys():\n if attr in i:\n return getattr(self,i)\n # return self.__dict__[i] #or like this\n\n\n\nbar = Bar()\nprint(bar.getattr_mangled('__is_init')) #False\n\nif there is a need in __init__ in Bar we should of course initiate Foo's init too by: super().__init__()\nWhen Foo's init is run, self namespace already has attribute name we need in the form we need it (like_PARENT_CLASS_NAME__attrname).\nAnd we can just get it from self namespace without even knowing what parent class name is.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "python_3.x", "python_class" ]
stackoverflow_0074502700_python_python_3.x_python_class.txt
Q: Streamlit Doesn't start. AttributeError: Enum LabelVisibilityOptions has no value defined for name 'ValueType' I have a basic application. I have no experience working with streamlit. When I try streamlit run app.py I get the following error. Traceback (most recent call last): File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code exec(code, run_globals) File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\Scripts\streamlit.exe\__main__.py", line 4, in <module> File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\__init__.py", line 55, in <module> from streamlit.delta_generator import DeltaGenerator as _DeltaGenerator File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\delta_generator.py", line 45, in <module> from streamlit.elements.arrow_altair import ArrowAltairMixin File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\elements\arrow_altair.py", line 42, in <module> from streamlit.elements.utils import last_index_for_melted_dataframes File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\elements\utils.py", line 82, in <module> ) -> LabelVisibilityMessage.LabelVisibilityOptions.ValueType: File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\internal\enum_type_wrapper.py", line 114, in __getattr__ raise AttributeError('Enum {} has no value defined for name {!r}'.format( AttributeError: Enum LabelVisibilityOptions has no value defined for name 'ValueType' I have installed streamlit, mysql, mysql.connector app.py import streamlit as st from gui.login.login import login_Main login_Main() login.py inside gui/login/ import streamlit as st # from user import login # Third change in 
april #from controller import * headerSection = st.container() mainSection = st.container() loginSection = st.container() logOutSection = st.container() def login_Main(): login() class login(): def show_main_page(self): with mainSection: dataFile = st.text_input("Enter your Test file name: ") Topics = st.text_input("Enter your Model Name: ") ModelVersion = st.text_input("Enter your Model Version: ") processingClicked = st.button ("Start Processing", key="processing") if processingClicked: st.balloons() def LoggedOut_Clicked(self): st.session_state['loggedIn'] = False def show_logout_page(self): loginSection.empty(); with logOutSection: st.button ("Log Out", key="logout", on_click=self.LoggedOut_Clicked) def LoggedIn_Clicked(self,userName, password): if (userName - password): st.session_state['loggedIn'] = True else: st.session_state['loggedIn'] = False st.error("Invalid user name or password") def show_login_page(self): with loginSection: if st.session_state['loggedIn'] == False: userName = st.text_input (label="", value="", placeholder="Enter your user name") password = st.text_input (label="", value="",placeholder="Enter password", type="password") st.button ("Login", on_click=self.LoggedIn_Clicked, args= (userName, password)) def __init__(self): with headerSection: st.title("Streamlit Application") #first run will have nothing in session_state if 'loggedIn' not in st.session_state: st.session_state['loggedIn'] = False self.show_login_page() else: if st.session_state['loggedIn']: self.show_logout_page() self.show_main_page() else: self.show_login_page() This is just a login window. This works on another pc but not in mine. What could be wrong. I have python -V = 3.10.0 I have tried installing python version 3.11.0. regular python code files work just fine. 
A: I think you should revert to streamlit 1.14 for the moment; there is a problem with 1.15.0, and issues are already open: https://github.com/streamlit/streamlit/issues/5742, https://github.com/streamlit/streamlit/issues/5743
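Since the fix is a version pin, a small guard can detect the affected release before launching the app. This is a hedged sketch using only the standard library (streamlit itself does not need to be installed for the check to run):

```python
# Sketch: report whether a package is installed at the release affected by the
# LabelVisibilityOptions bug (streamlit 1.15.0, per the linked issues).
from importlib import metadata


def is_affected(pkg: str, bad_version: str = "1.15.0") -> bool:
    try:
        return metadata.version(pkg) == bad_version
    except metadata.PackageNotFoundError:
        return False


print(is_affected("streamlit"))
```

If it prints True, downgrading as suggested above (streamlit 1.14) avoids the crash.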
Streamlit Doesn't start. AttributeError: Enum LabelVisibilityOptions has no value defined for name 'ValueType'
I have a basic application. I have no experience working with streamlit. When I try streamlit run app.py I get the following error. Traceback (most recent call last): File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code exec(code, run_globals) File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\Scripts\streamlit.exe\__main__.py", line 4, in <module> File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\__init__.py", line 55, in <module> from streamlit.delta_generator import DeltaGenerator as _DeltaGenerator File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\delta_generator.py", line 45, in <module> from streamlit.elements.arrow_altair import ArrowAltairMixin File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\elements\arrow_altair.py", line 42, in <module> from streamlit.elements.utils import last_index_for_melted_dataframes File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\elements\utils.py", line 82, in <module> ) -> LabelVisibilityMessage.LabelVisibilityOptions.ValueType: File "C:\Users\joelm\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\internal\enum_type_wrapper.py", line 114, in __getattr__ raise AttributeError('Enum {} has no value defined for name {!r}'.format( AttributeError: Enum LabelVisibilityOptions has no value defined for name 'ValueType' I have installed streamlit, mysql, mysql.connector app.py import streamlit as st from gui.login.login import login_Main login_Main() login.py inside gui/login/ import streamlit as st # from user import login # Third change in april #from controller import * headerSection = st.container() mainSection = st.container() loginSection = 
st.container() logOutSection = st.container() def login_Main(): login() class login(): def show_main_page(self): with mainSection: dataFile = st.text_input("Enter your Test file name: ") Topics = st.text_input("Enter your Model Name: ") ModelVersion = st.text_input("Enter your Model Version: ") processingClicked = st.button ("Start Processing", key="processing") if processingClicked: st.balloons() def LoggedOut_Clicked(self): st.session_state['loggedIn'] = False def show_logout_page(self): loginSection.empty(); with logOutSection: st.button ("Log Out", key="logout", on_click=self.LoggedOut_Clicked) def LoggedIn_Clicked(self,userName, password): if (userName - password): st.session_state['loggedIn'] = True else: st.session_state['loggedIn'] = False st.error("Invalid user name or password") def show_login_page(self): with loginSection: if st.session_state['loggedIn'] == False: userName = st.text_input (label="", value="", placeholder="Enter your user name") password = st.text_input (label="", value="",placeholder="Enter password", type="password") st.button ("Login", on_click=self.LoggedIn_Clicked, args= (userName, password)) def __init__(self): with headerSection: st.title("Streamlit Application") #first run will have nothing in session_state if 'loggedIn' not in st.session_state: st.session_state['loggedIn'] = False self.show_login_page() else: if st.session_state['loggedIn']: self.show_logout_page() self.show_main_page() else: self.show_login_page() This is just a login window. This works on another pc but not in mine. What could be wrong. I have python -V = 3.10.0 I have tried installing python version 3.11.0. regular python code files work just fine.
[ "I think you should revert to streamlit 1.14 for the moment, there is a problem with 1.15.0, some issues are opened : https://github.com/streamlit/streamlit/issues/5742, https://github.com/streamlit/streamlit/issues/5743\n" ]
[ 0 ]
[]
[]
[ "python", "streamlit" ]
stackoverflow_0074494209_python_streamlit.txt
Q: (psycopg2.OperationalError) connection to server at "localhost" (::1), port 5432 failed: FATAL: database "players" does not exist This is a program that I had written a while ago and it had been working fine, but now when I run it I'm getting this error: sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "localhost" (::1), port 5432 failed: FATAL: database "players" does not exist from flask import Flask, render_template, request from models import * from flask_sqlalchemy import SQLAlchemy from flask_sqlalchemy import SQLAlchemy app.config["SQLALCHEMY_DATABASE_URI"] = 'postgres://postgres:%password%@localhost:5432/players' app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False db.init_app(app) models class Player(db.Model): __tablename__ = "players" id = db.Column(db.Integer, primary_key=True) player_name = db.Column(db.String, nullable=False) Initially when I ran this program I was getting the following error: Can't load plugin: sqlalchemy.dialects:postgres Then I found from another post The URI should start with postgresql:// instead of postgres://. SQLAlchemy used to accept both, but has removed support for the postgres name. I updated that part of the code and now it is giving me the error that the database doesn't exist. What else am I missing here? I tried using 5433 as well. I'm able to connect to the database through the terminal A: you have to create the database players. The database does not exist.
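Two things have to hold for the connection to succeed: the URI must use the postgresql:// scheme, and the players database must already exist (create it once with CREATE DATABASE players; or the createdb tool). Assembling the URI itself is plain string work; the credentials below are placeholders, and quote_plus guards against special characters in a real password:

```python
# Sketch: assemble a SQLAlchemy-style URI with the supported postgresql:// scheme.
from urllib.parse import quote_plus

user = "postgres"        # placeholder credentials
password = "p@ss word!"  # placeholder; percent-encoded below

uri = f"postgresql://{user}:{quote_plus(password)}@localhost:5432/players"
print(uri)
```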
(psycopg2.OperationalError) connection to server at "localhost" (::1), port 5432 failed: FATAL: database "players" does not exist
This is a program that I had written a while ago and it had been working fine, but now when I run it I'm getting this error: sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "localhost" (::1), port 5432 failed: FATAL: database "players" does not exist from flask import Flask, render_template, request from models import * from flask_sqlalchemy import SQLAlchemy from flask_sqlalchemy import SQLAlchemy app.config["SQLALCHEMY_DATABASE_URI"] = 'postgres://postgres:%password%@localhost:5432/players' app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False db.init_app(app) models class Player(db.Model): __tablename__ = "players" id = db.Column(db.Integer, primary_key=True) player_name = db.Column(db.String, nullable=False) Initially when I ran this program I was getting the following error: Can't load plugin: sqlalchemy.dialects:postgres Then I found from another post The URI should start with postgresql:// instead of postgres://. SQLAlchemy used to accept both, but has removed support for the postgres name. I updated that part of the code and now it is giving me the error that the database doesn't exist. What else am I missing here? I tried using 5433 as well. I'm able to connect to the database through the terminal
[ "you have to create the database players. The database does not exist.\n" ]
[ 1 ]
[]
[]
[ "flask", "python", "sqlalchemy" ]
stackoverflow_0073848498_flask_python_sqlalchemy.txt
Q: Validate user input of character and iterate how many times character exists in sentence The while loop and for loop works individually, but combining them does no generate the desired output. I want the user to enter a sentence, and then a character. The Character must be entered as a single 1 character, if not then the program should ask again. sentence = input("Type sentence: ") sentence = sentence.lower() singleCharacter = input("Type character: ") char = 0 while len(singleCharacter) != 1: singleCharacter = input('Enter a single character: ') for i in sentence: if i == singleCharacter: char += 1 print(singleCharacter,"appears",char,"times in your sentence") A: Something like: sentence = input("Type sentence: ") sentence = sentence.lower() singleCharacter = input("Type character: ") char = 0 while len(singleCharacter) != 1: singleCharacter = input('Enter a single character: ') print(sum([1 for c in sentence if c == singleCharacter])) Should do what you want. A: The problem you encounter in this block of code: while len(singleCharacter) != 1: singleCharacter = input('Enter a single character: ') for i in sentence: if i == singleCharacter: char += 1 It will only give you the desired output when you enter more than one character at first time, otherwise it will not enter for loop for counting the number of characters which inside of while loop condition len(singleCharacter) != 1 and then will return the initial characters count 0. Therefore to work as expected you should put for loop for counting the characters outside while loop: while len(singleCharacter) != 1: singleCharacter = input('Enter a single character: ') for i in sentence: if i == singleCharacter: char += 1 Output: Type sentence: Hello World! Type character: l l appears 3 times in your sentence A: Just get for-loop out of while loop and it will be working fine
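The accepted fix is simply to move the counting loop out of the while loop. With the interactive input() calls replaced by fixed placeholder values, the corrected flow can be sketched and run directly:

```python
# Sketch with hard-coded stand-ins for the input() calls.
sentence = "Hello World!".lower()
single_character = "l"  # already a single character here

count = 0
for ch in sentence:     # counting happens after validation, not inside it
    if ch == single_character:
        count += 1

print(single_character, "appears", count, "times in your sentence")  # 3 times
```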
Validate user input of character and iterate how many times character exists in sentence
The while loop and for loop works individually, but combining them does no generate the desired output. I want the user to enter a sentence, and then a character. The Character must be entered as a single 1 character, if not then the program should ask again. sentence = input("Type sentence: ") sentence = sentence.lower() singleCharacter = input("Type character: ") char = 0 while len(singleCharacter) != 1: singleCharacter = input('Enter a single character: ') for i in sentence: if i == singleCharacter: char += 1 print(singleCharacter,"appears",char,"times in your sentence")
[ "Something like:\nsentence = input(\"Type sentence: \")\nsentence = sentence.lower()\nsingleCharacter = input(\"Type character: \")\n\nchar = 0\n\nwhile len(singleCharacter) != 1:\n singleCharacter = input('Enter a single character: ')\n\nprint(sum([1 for c in sentence if c == singleCharacter]))\n\nShould do what you want.\n", "The problem you encounter in this block of code:\nwhile len(singleCharacter) != 1:\n singleCharacter = input('Enter a single character: ')\n for i in sentence:\n if i == singleCharacter:\n char += 1\n\nIt will only give you the desired output when you enter more than one character at first time, otherwise it will not enter for loop for counting the number of characters which inside of while loop condition len(singleCharacter) != 1 and then will return the initial characters count 0. Therefore to work as expected you should put for loop for counting the characters outside while loop:\nwhile len(singleCharacter) != 1:\n singleCharacter = input('Enter a single character: ')\nfor i in sentence:\n if i == singleCharacter:\n char += 1\n\nOutput:\nType sentence: Hello World!\nType character: l\nl appears 3 times in your sentence\n\n", "Just get for-loop out of while loop and it will be working fine\n" ]
[ 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074502810_python.txt
Q: Using Python how to get list of all files in a HDFS folder? I would like to return a listing of all files in a HDFS folder using Python or preferably Pandas in a data frame. I have looked at subprocess.Popen and that may be the best way but if so is there a way to parse out all the noise and only return the file names? the hdfs module is out as can't get the config options. Tried subprocess.Popen but it returns so much extranious stuff. A: Once you've named the path from pathlib import Path folder = Path("/tmp/favorite_folder/") then it's just a matter of globbing some pattern, like folder.glob("*.csv"). Use wildcard to get all names at single level: print(folder.glob("*")) To recurse through all levels, you might wish to rely on os.walk(). https://docs.python.org/3/library/os.html#os.walk Or, use a recursive glob pattern: folder.glob("**/*.csv")
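Note that pathlib only sees paths the operating system can resolve, so for real HDFS this assumes a mounted filesystem (otherwise an HDFS client library is needed). The globbing itself is easy to demonstrate against a throwaway local directory:

```python
# Sketch: list matching file names in a folder with pathlib.
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as d:
    folder = Path(d)
    for name in ("a.csv", "b.csv", "notes.txt"):
        (folder / name).write_text("demo")
    csv_names = sorted(p.name for p in folder.glob("*.csv"))

print(csv_names)  # ['a.csv', 'b.csv']
```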
Using Python how to get list of all files in a HDFS folder?
I would like to return a listing of all files in a HDFS folder using Python or preferably Pandas in a data frame. I have looked at subprocess.Popen and that may be the best way, but if so, is there a way to parse out all the noise and only return the file names? The hdfs module is out as I can't get the config options. Tried subprocess.Popen but it returns so much extraneous stuff.
[ "Once you've named the path\nfrom pathlib import Path\n\nfolder = Path(\"/tmp/favorite_folder/\")\n\nthen it's just a matter of globbing some pattern, like folder.glob(\"*.csv\").\nUse wildcard to get all names at single level:\nprint(folder.glob(\"*\"))\n\n\nTo recurse through all levels,\nyou might wish to rely on os.walk().\nhttps://docs.python.org/3/library/os.html#os.walk\nOr, use a recursive glob pattern: folder.glob(\"**/*.csv\")\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074502325_python.txt
Q: Create a column by groupby Pandas DataFrame based on tail(1).index I want create a boolean column that said if a match on first or second half for each match in the dataframe. Code #First Half firsthalf_index = df.groupby(['Date','Match']).apply(lambda x: x[(x.M >= 1) & (x.M <= 45)].tail(1).index) #Second Half secondhalf_index = df.groupby(['Date','Match']).apply(lambda x: x[(x.M >= 46) & (x.M <= 90)].tail(1).index) This code return only the referred.index for each game Output What I want to add in this code is df[df.index < firsthalfindex_index] and df[(df.index > firsthalfindex.index) & (df.index < secondhalf_index)] A: You could do firsthalf_index = ((df.M >= 1) & (df.M <= 45)).iloc[::-1].groupby([df['Date'],df['Match']]).transform('idxmax') secondhalf_index =((df.M >= 46) & (df.M <= 90)).iloc[::-1].groupby([df['Date'],df['Match']]).transform('idxmax') Then s = df.index.to_series() df[(s > firsthalf_index) & (s < secondhalf_index)]
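On a toy frame, the reversed-mask transform('idxmax') trick from the answer does mark the last row of each half per group. One adjustment is needed in practice: pandas refuses to compare Series whose indexes are in different orders, so the transform result is sorted back to the frame's row order first. The data here is hypothetical:

```python
# Sketch: one match, minutes 10..90; rows strictly between the first-half tail
# (M == 44) and the second-half tail (M == 90) should survive.
import pandas as pd

df = pd.DataFrame({
    "Date": ["d1"] * 6,
    "Match": ["m1"] * 6,
    "M": [10, 30, 44, 50, 70, 90],
})

def half_tail(lo, hi):
    mask = (df.M >= lo) & (df.M <= hi)
    idx = mask.iloc[::-1].groupby([df["Date"], df["Match"]]).transform("idxmax")
    return idx.sort_index()  # restore row order so comparisons align

firsthalf_index = half_tail(1, 45)    # 2 for every row
secondhalf_index = half_tail(46, 90)  # 5 for every row

s = df.index.to_series()
between = df[(s > firsthalf_index) & (s < secondhalf_index)]
print(between["M"].tolist())  # [50, 70]
```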
Create a column by groupby Pandas DataFrame based on tail(1).index
I want create a boolean column that said if a match on first or second half for each match in the dataframe. Code #First Half firsthalf_index = df.groupby(['Date','Match']).apply(lambda x: x[(x.M >= 1) & (x.M <= 45)].tail(1).index) #Second Half secondhalf_index = df.groupby(['Date','Match']).apply(lambda x: x[(x.M >= 46) & (x.M <= 90)].tail(1).index) This code return only the referred.index for each game Output What I want to add in this code is df[df.index < firsthalfindex_index] and df[(df.index > firsthalfindex.index) & (df.index < secondhalf_index)]
[ "You could do\nfirsthalf_index = ((df.M >= 1) & (df.M <= 45)).iloc[::-1].groupby([df['Date'],df['Match']]).transform('idxmax')\nsecondhalf_index =((df.M >= 46) & (df.M <= 90)).iloc[::-1].groupby([df['Date'],df['Match']]).transform('idxmax')\n\nThen\ns = df.index.to_series()\ndf[(s > firsthalf_index) & (s < secondhalf_index)]\n\n" ]
[ 1 ]
[]
[]
[ "group_by", "pandas", "python" ]
stackoverflow_0074502844_group_by_pandas_python.txt
Q: Python: Selenium xpath to find element with case insensitive characters? I am able to do this search = "View List" driver.find_elements_by_xpath("//*/text()[normalize-space(.)='%s']/parent::*" % search) but I need it to ignore and match all elements with text like: "VieW LiSt" or "view LIST" search = "View List" driver.find_elements_by_xpath("//*/lower-case(text())[normalize-space(.)='%s']/parent::*" % search.lower()) The above doesn't seem to work. lower-case() is in XPATH 1.0 A: The lower-case() function is only supported from XPath 2.0. For XPath 1.0 you will have to use translate(). Example code is given in this stackoverflow answer. Edit: The selenium python bindings site has a FAQ - Does Selenium 2 supports XPath 2.0 ?: Ref: http://seleniumhq.org/docs/03_webdriver.html#how-xpath-works-in-webdriver Selenium delegate XPath queries down to the browser’s own XPath engine, so Selenium support XPath supports whatever the browser supports. In browsers which don’t have native XPath engines (IE 6,7,8), Selenium support XPath 1.0 only. A: Since lower-case() is only supported in 2.0 I came up with this solution using translate() so I don't need to type the whole function manually everytime translate = "translate({value},'ABCDEFGHIJKLMNOPQRSTUVWXYZ','abcdefghijklmnopqrstuvwxyz')" driver.find_elements(By.XPATH, f"//*/{translate.format(value='text()')}[normalize-space(.)='{search.lower()}']/parent::*") Which computes to: >>> print(f"//*/{translate.format(value='text()')}[normalize-space(.)='{search.lower()}']/parent::*") "//*/translate(text(),'ABCDEFGHIJKLMNOPQRSTUVWXYZ','abcdefghijklmnopqrstuvwxyz')[normalize-space(.)='view list']/parent::*"
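Since Selenium delegates to the browser's XPath 1.0 engine, the translate() call has to live inside a predicate; a bare //*/translate(...) step is not valid XPath 1.0 syntax. Building the case-insensitive expression is plain string work (ASCII letters only, since translate() maps exactly the characters listed):

```python
# Sketch: build a case-insensitive text-match predicate for XPath 1.0.
UPPER = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
LOWER = UPPER.lower()


def ci_text_xpath(search: str) -> str:
    lowered = f"translate(normalize-space(text()),'{UPPER}','{LOWER}')"
    return f"//*[{lowered}='{search.lower()}']"


xp = ci_text_xpath("View List")
print(xp)
```

Passing the result to driver.find_elements(By.XPATH, ...) would then match "VieW LiSt" and "view LIST" alike.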
Python: Selenium xpath to find element with case insensitive characters?
I am able to do this search = "View List" driver.find_elements_by_xpath("//*/text()[normalize-space(.)='%s']/parent::*" % search) but I need it to ignore and match all elements with text like: "VieW LiSt" or "view LIST" search = "View List" driver.find_elements_by_xpath("//*/lower-case(text())[normalize-space(.)='%s']/parent::*" % search.lower()) The above doesn't seem to work. lower-case() is in XPATH 1.0
[ "The lower-case() function is only supported from XPath 2.0. For XPath 1.0 you will have to use translate().\nExample code is given in this stackoverflow answer.\nEdit:\nThe selenium python bindings site has a FAQ - Does Selenium 2 supports XPath 2.0 ?:\n\nRef:\n http://seleniumhq.org/docs/03_webdriver.html#how-xpath-works-in-webdriver\nSelenium delegate XPath queries down to the browser’s own XPath\n engine, so Selenium support XPath supports whatever the browser\n supports. In browsers which don’t have native XPath engines (IE\n 6,7,8), Selenium support XPath 1.0 only.\n\n", "Since lower-case() is only supported in 2.0 I came up with this solution using translate() so I don't need to type the whole function manually everytime\ntranslate = \"translate({value},'ABCDEFGHIJKLMNOPQRSTUVWXYZ','abcdefghijklmnopqrstuvwxyz')\"\n\ndriver.find_elements(By.XPATH, f\"//*/{translate.format(value='text()')}[normalize-space(.)='{search.lower()}']/parent::*\")\n\nWhich computes to:\n>>> print(f\"//*/{translate.format(value='text()')}[normalize-space(.)='{search.lower()}']/parent::*\")\n\"//*/translate(text(),'ABCDEFGHIJKLMNOPQRSTUVWXYZ','abcdefghijklmnopqrstuvwxyz')[normalize-space(.)='view list']/parent::*\"\n\n\n" ]
[ 3, 0 ]
[]
[]
[ "python", "selenium", "xpath" ]
stackoverflow_0020228962_python_selenium_xpath.txt
Q: Multithreading for similarity test in Python Hello I've been working on a huge csv file which needs similarity tests done. There is 1.16million rows and to test similarity between each rows it takes approximately 7 hours. I want to use multiple threads to reduce the time it takes to do so. My function which does the similarity test is: def similarity(): for i in range(0, 1000): for j in range(i+1, 1000): longestSentence = 0 commonWords = 0 row1 = dff['Product'].iloc[i] row2 = dff['Product'].iloc[j] wordsRow1 = row1.split() wordsRow2 = row2.split() # iki tumcedede esit olan sozcukler common = list(set(wordsRow1).intersection(wordsRow2)) if len(wordsRow1) > len(wordsRow2): longestSentence = len(wordsRow1) commonWords = calculate(common, wordsRow1) else: longestSentence = len(wordsRow2) commonWords = calculate(common, wordsRow2) print(i, j, (commonWords / longestSentence) * 100) def calculate(common, longestRow):#esit sozcuklerin bulunmasi sum = 0 for word in common: sum += longestRow.count(word) return sum I am using ThreadPoolExecutor to do multithreading and the code to do so is: with ThreadPoolExecutor(max_workers=500) as executor: for result in executor.map(similarity()): print(result) But even if I set max_workers to incredible amounts the code runs the same. How can I make it so the code runs faster? Is there any other way? I tried to do it with threading library but it doesn't work because it just starts the threads to do the same job over and over again. So if I do 10 threads it just starts the function 10 times to do the same thing. Thanks in advance for any help. A: ThreadPoolExecutor will not actually help a lot because ThreadPool is more for IO tasks. Let's say you would do 500 API calls this would work but since you are doing heavy CPU tasks it does not work. You should use ProcessPoolExecutor but also point attention that making max_workers numbers greater than the number of your cores will not do anything as well. 
Also, your syntax is incorrect because you are running the same function inside your pool. But I think you need to change your algorithm to make this work properly. There is definitely something wrong with your time complexity. from concurrent.futures import ProcessPoolExecutor from time import sleep values = [3,4,5,6] def cube(x): print(f'Cube of {x}:{x*x*x}') if __name__ == '__main__': result =[] with ProcessPoolExecutor(max_workers=5) as exe: exe.submit(cube,2) # Maps the method 'cube' with an iterable result = exe.map(cube,values) for r in result: print(r)
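The per-pair scoring is a pure CPU-bound function, which is exactly the shape of work ProcessPoolExecutor parallelizes well. Extracted on its own (a sketch of the original logic, not the poster's exact code), it can be tested directly and handed to executor.map over row pairs:

```python
# Sketch: the row-pair similarity as a standalone, side-effect-free function.
def similarity_pct(row1: str, row2: str) -> float:
    words1, words2 = row1.split(), row2.split()
    longest = words1 if len(words1) >= len(words2) else words2
    common = set(words1) & set(words2)
    matched = sum(longest.count(word) for word in common)
    return matched / len(longest) * 100

print(round(similarity_pct("red apple juice", "red apple"), 2))  # 66.67
```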
Multithreading for similarity test in Python
Hello I've been working on a huge csv file which needs similarity tests done. There is 1.16million rows and to test similarity between each rows it takes approximately 7 hours. I want to use multiple threads to reduce the time it takes to do so. My function which does the similarity test is: def similarity(): for i in range(0, 1000): for j in range(i+1, 1000): longestSentence = 0 commonWords = 0 row1 = dff['Product'].iloc[i] row2 = dff['Product'].iloc[j] wordsRow1 = row1.split() wordsRow2 = row2.split() # iki tumcedede esit olan sozcukler common = list(set(wordsRow1).intersection(wordsRow2)) if len(wordsRow1) > len(wordsRow2): longestSentence = len(wordsRow1) commonWords = calculate(common, wordsRow1) else: longestSentence = len(wordsRow2) commonWords = calculate(common, wordsRow2) print(i, j, (commonWords / longestSentence) * 100) def calculate(common, longestRow):#esit sozcuklerin bulunmasi sum = 0 for word in common: sum += longestRow.count(word) return sum I am using ThreadPoolExecutor to do multithreading and the code to do so is: with ThreadPoolExecutor(max_workers=500) as executor: for result in executor.map(similarity()): print(result) But even if I set max_workers to incredible amounts the code runs the same. How can I make it so the code runs faster? Is there any other way? I tried to do it with threading library but it doesn't work because it just starts the threads to do the same job over and over again. So if I do 10 threads it just starts the function 10 times to do the same thing. Thanks in advance for any help.
[ "ThreadPoolExecutor will not actually help a lot because ThreadPool is more for IO tasks. Let's say you would do 500 API calls this would work but since you are doing heavy CPU tasks it does not work. You should use ProcessPoolExecutor but also point attention that making max_workers numbers greater than the number of your cores will not do anything as well.\nAlso, your syntax is incorrect because you are running the same function inside your pool.\nBut I think you need to change your algorithm to make this work properly. There is definitely something wrong with your time compexity.\nfrom concurrent.futures import ProcessPoolExecutor\nfrom time import sleep\n \nvalues = [3,4,5,6]\ndef cube(x):\n print(f'Cube of {x}:{x*x*x}')\n \n \nif __name__ == '__main__':\n result =[]\n with ProcessPoolExecutor(max_workers=5) as exe:\n exe.submit(cube,2)\n \n # Maps the method 'cube' with a iterable\n result = exe.map(cube,values)\n \n for r in result:\n print(r)\n\n" ]
[ 0 ]
[]
[]
[ "csv", "multithreading", "python", "similarity" ]
stackoverflow_0074503005_csv_multithreading_python_similarity.txt
Q: Random Errors For No Reason For some reason I am experiencing a lot of errors regarding my indentation. I don't see anything wrong and I have re-typed the indentation multiple times. Maybe this has something to do with my other question here? Here is the code where the error is: @bot.command(description="See your balance or somebody else's balance.", aliases=['bal']) async def balance(ctx, member: discord.Member = None): if member: if not currency['balance'][member.id]: currency['balance'][member.id] = 0 save_data() ctx.message.reply(embed=discord.Embed( title=f"{member.name}'s Balance", description=f"{member.name}'s balance is `{currency['balance'][member.id]}`" )) else: if not currency['balance'][ctx.author.id]: currency['balance'][ctx.author.id] = 0 ctx.message.reply(embed=discord.Embed( title=f"Your Balance", description=f"Your balance is `{currency['balance'][ctx.author.id]}`" )) A: All the lines after the function definition need to be indented once more, since they have to belong to the function implementation. Like so: @bot.command(description="See your balance or somebody else's balance.", aliases=['bal']) async def balance(ctx, member: discord.Member = None): if member: if not currency['balance'][member.id]: currency['balance'][member.id] = 0 save_data() ctx.message.reply(embed=discord.Embed( title=f"{member.name}'s Balance", description=f"{member.name}'s balance is `{currency['balance'][member.id]}`" )) else: if not currency['balance'][ctx.author.id]: currency['balance'][ctx.author.id] = 0 ctx.message.reply(embed=discord.Embed( title=f"Your Balance", description=f"Your balance is `{currency['balance'][ctx.author.id]}`" ))
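The errors are not random: every line after the def has to be indented into the function body, and compile() reproduces the same failure on a reduced, hypothetical snippet:

```python
# Sketch: a body at column 0 right after a def raises IndentationError.
bad = "async def balance(ctx):\nif ctx:\n    pass\n"
try:
    compile(bad, "<snippet>", "exec")
    result = "compiled"
except IndentationError:
    result = "IndentationError"

print(result)  # IndentationError
```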
Random Errors For No Reason
For some reason I am experiencing a lot of errors regarding my indentation. I don't see anything wrong and I have re-typed the indentation multiple times. Maybe this has something to do with my other question here? Here is the code where the error is: @bot.command(description="See your balance or somebody else's balance.", aliases=['bal']) async def balance(ctx, member: discord.Member = None): if member: if not currency['balance'][member.id]: currency['balance'][member.id] = 0 save_data() ctx.message.reply(embed=discord.Embed( title=f"{member.name}'s Balance", description=f"{member.name}'s balance is `{currency['balance'][member.id]}`" )) else: if not currency['balance'][ctx.author.id]: currency['balance'][ctx.author.id] = 0 ctx.message.reply(embed=discord.Embed( title=f"Your Balance", description=f"Your balance is `{currency['balance'][ctx.author.id]}`" ))
[ "All the lines after the function definition need to be indented once more, since they have to belong to the function implementation. Like so:\n@bot.command(description=\"See your balance or somebody else's balance.\", aliases=['bal'])\nasync def balance(ctx, member: discord.Member = None):\n if member:\n if not currency['balance'][member.id]:\n currency['balance'][member.id] = 0\n save_data()\n ctx.message.reply(embed=discord.Embed(\n title=f\"{member.name}'s Balance\",\n description=f\"{member.name}'s balance is `{currency['balance'][member.id]}`\"\n ))\n else:\n if not currency['balance'][ctx.author.id]:\n currency['balance'][ctx.author.id] = 0\n ctx.message.reply(embed=discord.Embed(\n title=f\"Your Balance\",\n description=f\"Your balance is `{currency['balance'][ctx.author.id]}`\"\n ))\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074502977_python_python_3.x.txt