content: string (lengths 85 to 101k)
title: string (lengths 0 to 150)
question: string (lengths 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (lengths 35 to 137)
Q: Is there an easy way to print the :e format in 10^x format? I am writing some numerical value in a matplotlib textbox as textst = "$En_1={0:.4e}$".format( *popt) plt.text(.950, .100, textst, bbox=props, ha='right', va='bottom', transform=ax.transAxes) Problem is, I am getting the numerical value as, say, 5 e +5. I want the value as 5x10^5, i.e. proper superscript. Is there any easy way of doing this? (easy is the key here, I don't want a lot of regex etc to get the e->10 etc) A: You just have to do it yourself. Note that I'm assuming you want something that can be interpreted as LaTeX: import math def custom_number_format(n: float, precision: int = 3) -> str: exp = math.floor(math.log10(n)) decimal = round(n / 10**exp, precision) return rf"{decimal} \times 10^{{{exp}}}" # or f"{decimal}x10^{exp}" if you don't want LaTeX Demo: In [3]: for i in range(-11, 11): ...: print(custom_number_format(1.23456789 * 10**i, 4)) ...: 1.2346 \times 10^{-11} 1.2346 \times 10^{-10} 1.2346 \times 10^{-9} 1.2346 \times 10^{-8} 1.2346 \times 10^{-7} 1.2346 \times 10^{-6} 1.2346 \times 10^{-5} 1.2346 \times 10^{-4} 1.2346 \times 10^{-3} 1.2346 \times 10^{-2} 1.2346 \times 10^{-1} 1.2346 \times 10^{0} 1.2346 \times 10^{1} 1.2346 \times 10^{2} 1.2346 \times 10^{3} 1.2346 \times 10^{4} 1.2346 \times 10^{5} 1.2346 \times 10^{6} 1.2346 \times 10^{7} 1.2346 \times 10^{8} 1.2346 \times 10^{9} 1.2346 \times 10^{10} Or use regex, because the pattern is pretty straightforward, although you lose the ability to set the precision dynamically: import re p = re.compile(r"(\d\.\d{4})e([+-]\d+)") def custom_number_format(n: float) -> str: decimal, exponent = p.match(f"{n:.4e}").groups() return rf"{decimal} \times 10^{{{int(exponent)}}}" A: "proper superscript" (use an appropriate font) import math super = str.maketrans('-0123456789', '\N{SUPERSCRIPT MINUS}' '\N{SUPERSCRIPT ZERO}' '\N{SUPERSCRIPT ONE}' '\N{SUPERSCRIPT TWO}' '\N{SUPERSCRIPT THREE}' '\N{SUPERSCRIPT FOUR}' '\N{SUPERSCRIPT FIVE}' '\N{SUPERSCRIPT SIX}' '\N{SUPERSCRIPT SEVEN}' '\N{SUPERSCRIPT EIGHT}' '\N{SUPERSCRIPT NINE}') def myexp(n): exponent = math.floor(math.log10(n)) mantissa = n / 10**exponent return f'{mantissa:.4}\N{MULTIPLICATION SIGN}10{str(exponent).translate(super)}' for i in range(20): n = 1.23456 * 10**i print(myexp(n)) 1.2346×10⁰ 1.2346×10¹ 1.2346×10² 1.2346×10³ 1.2346×10⁴ 1.2346×10⁵ 1.2346×10⁶ 1.2346×10⁷ 1.2346×10⁸ 1.2346×10⁹ 1.2346×10¹⁰ 1.2346×10¹¹ 1.2346×10¹² 1.2346×10¹³ 1.2346×10¹⁴ 1.2346×10¹⁵ 1.2346×10¹⁶ 1.2346×10¹⁷ 1.2346×10¹⁸ 1.2346×10¹⁹
Is there an easy way to print the :e format in 10^x format?
I am writing some numerical value in a matplotlib textbox as textst = "$En_1={0:.4e}$".format( *popt) plt.text(.950, .100, textst, bbox=props, ha='right', va='bottom', transform=ax.transAxes) Problem is, I am getting the numerical value as, say, 5 e +5. I want the value as 5x10^5, i.e. proper superscript. Is there any easy way of doing this? (easy is the key here, I don't want a lot of regex etc to get the e->10 etc)
[ "You just have to do it yourself. Note that I'm assuming you want something that can be interpreted as LaTeX:\nimport math\n\n\ndef custom_number_format(n: float, precision: int = 3) -> str:\n exp = math.floor(math.log10(n))\n decimal = round(n / 10**exp, precision)\n return rf\"{decimal} \\times 10^{{{exp}}}\" # or f\"{decimal}x10^{exp}\" if you don't want LaTeX\n\nDemo:\nIn [3]: for i in range(-11, 11):\n ...: print(custom_number_format(1.23456789 * 10**i, 4))\n ...:\n1.2346 \\times 10^{-11}\n1.2346 \\times 10^{-10}\n1.2346 \\times 10^{-9}\n1.2346 \\times 10^{-8}\n1.2346 \\times 10^{-7}\n1.2346 \\times 10^{-6}\n1.2346 \\times 10^{-5}\n1.2346 \\times 10^{-4}\n1.2346 \\times 10^{-3}\n1.2346 \\times 10^{-2}\n1.2346 \\times 10^{-1}\n1.2346 \\times 10^{0}\n1.2346 \\times 10^{1}\n1.2346 \\times 10^{2}\n1.2346 \\times 10^{3}\n1.2346 \\times 10^{4}\n1.2346 \\times 10^{5}\n1.2346 \\times 10^{6}\n1.2346 \\times 10^{7}\n1.2346 \\times 10^{8}\n1.2346 \\times 10^{9}\n1.2346 \\times 10^{10}\n\nOr use regex, because the pattern is pretty straightforward, although you lose the ability to set the precision dynamically:\nimport re\n\np = re.compile(r\"(\\d\\.\\d{4})e([+-]\\d+)\")\n\n\ndef custom_number_format(n: float) -> str:\n decimal, exponent = p.match(f\"{n:.4e}\").groups()\n return rf\"{decimal} \\times 10^{{{int(exponent)}}}\"\n\n", "\"proper superscript\" (use an appropriate font)\nimport math\n\nsuper = str.maketrans('-0123456789',\n '\\N{SUPERSCRIPT MINUS}'\n '\\N{SUPERSCRIPT ZERO}'\n '\\N{SUPERSCRIPT ONE}'\n '\\N{SUPERSCRIPT TWO}'\n '\\N{SUPERSCRIPT THREE}'\n '\\N{SUPERSCRIPT FOUR}'\n '\\N{SUPERSCRIPT FIVE}'\n '\\N{SUPERSCRIPT SIX}'\n '\\N{SUPERSCRIPT SEVEN}'\n '\\N{SUPERSCRIPT EIGHT}'\n '\\N{SUPERSCRIPT NINE}')\n\ndef myexp(n):\n exponent = math.floor(math.log10(n))\n mantissa = n / 10**exponent\n return f'{mantissa:.4}\\N{MULTIPLICATION SIGN}10{str(exponent).translate(super)}'\n\nfor i in range(20):\n n = 1.23456 * 10**i\n print(myexp(n))\n\n1.2346×10⁰\n1.2346×10¹\n1.2346×10²\n1.2346×10³\n1.2346×10⁴\n1.2346×10⁵\n1.2346×10⁶\n1.2346×10⁷\n1.2346×10⁸\n1.2346×10⁹\n1.2346×10¹⁰\n1.2346×10¹¹\n1.2346×10¹²\n1.2346×10¹³\n1.2346×10¹⁴\n1.2346×10¹⁵\n1.2346×10¹⁶\n1.2346×10¹⁷\n1.2346×10¹⁸\n1.2346×10¹⁹\n\n" ]
[ 0, 0 ]
[]
[]
[ "format", "python" ]
stackoverflow_0074541759_format_python.txt
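A related sketch, not from the answers above: Python's own :e formatter can supply the mantissa and exponent, avoiding both the regex and math.log10 (which fails for zero or negative inputs). The helper name sci_notation is illustrative.

def sci_notation(n: float, precision: int = 4) -> str:
    # Let the builtin :e formatter do the work, then split mantissa from exponent.
    mantissa, exponent = f"{n:.{precision}e}".split("e")
    return rf"{mantissa} \times 10^{{{int(exponent)}}}"

print(sci_notation(5e5))    # 5.0000 \times 10^{5}
print(sci_notation(1.23456789e-7))  # 1.2346 \times 10^{-7}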
Q: I need help to find the correct output (convert word to lower case) Here is my program: def word_frequencies(words): l=[] l=words.split() wordfreq=[l.count(p) for p in l] return(dict(zip(l,wordfreq))) if __name__ == '__main__': words = input("Enter a sentence: ") your_dictionary = word_frequencies(words) sorted_keys = sorted(your_dictionary.keys()) for key in sorted_keys: print(key + ': ' + str(your_dictionary[key])) Here is my output: Enter a sentence: ZyBooks now zyBooks later zyBooks forever ZyBooks: 1 forever: 1 later: 1 now: 1 zyBooks: 2 Here is my expectation: Enter a sentence: ZyBooks now zyBooks later zyBooks forever forever: 1 later: 1 now: 1 zybooks: 3 A: Use the str.lower() function to make all words lowercase before counting them. For example: def word_frequencies(words): l = words.lower().split() # ... A: def word_frequencies(words): return {p: words.count(p) for p in set(words)} if __name__ == '__main__': words = input("Enter a sentence: ").lower().split() print(*sorted(f'{k}: {v}' for k, v in word_frequencies(words).items())) Transform the input to lowercase using lower() method and split it before calling the function, so your function is only responsible for one task. Print the sorted result by unpacking the return value from the function. Output: Enter a sentence: ZyBooks now zyBooks later zyBooks forever forever: 1 later: 1 now: 1 zybooks: 3
I need help to find the correct output (convert word to lower case)
Here is my program: def word_frequencies(words): l=[] l=words.split() wordfreq=[l.count(p) for p in l] return(dict(zip(l,wordfreq))) if __name__ == '__main__': words = input("Enter a sentence: ") your_dictionary = word_frequencies(words) sorted_keys = sorted(your_dictionary.keys()) for key in sorted_keys: print(key + ': ' + str(your_dictionary[key])) Here is my output: Enter a sentence: ZyBooks now zyBooks later zyBooks forever ZyBooks: 1 forever: 1 later: 1 now: 1 zyBooks: 2 Here is my expectation: Enter a sentence: ZyBooks now zyBooks later zyBooks forever forever: 1 later: 1 now: 1 zybooks: 3
[ "Use the str.lower() function to make all words lowercase before counting them.\nFor example:\ndef word_frequencies(words):\n l = words.lower().split()\n # ...\n\n", "def word_frequencies(words):\n return {p: words.count(p) for p in set(words)}\n\nif __name__ == '__main__':\n words = input(\"Enter a sentence: \").lower().split()\n print(*sorted(f'{k}: {v}' for k, v in word_frequencies(words).items()))\n\nTransform the input to lowercase using lower() method and split it before calling the function, so your function is only responsible for one task. Print the sorted result by unpacking the return value from the function.\nOutput:\nEnter a sentence: ZyBooks now zyBooks later zyBooks forever\nforever: 1 later: 1 now: 1 zybooks: 3\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074540930_python.txt
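For reference, the standard library already has a counting container; this sketch assumes the same lowercase-then-split input handling as the answers above:

from collections import Counter

def word_frequencies(words):
    # Counter builds the word -> count mapping in one pass.
    return Counter(words.lower().split())

print(word_frequencies("ZyBooks now zyBooks later zyBooks forever"))
# Counter({'zybooks': 3, 'now': 1, 'later': 1, 'forever': 1})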
Q: How to check if more than one model/library is installed, and install it if not? I want python to install model1 & model2 if they are not already installed if model1 or model2 doesn't exist then !pip install model1 & !pip install model2 A: The easiest way is to ensure that all modules can be loaded on all systems. Enclosing each import statement in a try block is the best solution and not un-Pythonic at all. > try: > import model1 > print("module 'model1' is installed") > except ModuleNotFoundError: > print("module 'model1' is not installed") > install("model1") # the install function from the question Updated section: if you can convert it into the code below, then I think there is no need to repeat this for each model/library > try: > #importing all packages > #importing model1 > #importing model2 > except ExceptionType1: > #install model1 > except ExceptionType2: > #install model2
How to check if more than one model/library is installed, and install it if not?
I want python to install model1 & model2 if they are not already installed if model1 or model2 doesn't exist then !pip install model1 & !pip install model2
[ "The easiest way is to ensure that all modules can be loaded on all systems. Enclosing each import statement in a try block is the best solution and not un-Pythonic at all.\n> try:\n> import model1\n> print(\"module 'model1' is installed\") \n\n> except ModuleNotFoundError:\n> print(\"module 'model1' is not installed\")\n> install(\"model1\") # the install function from the question\n\nUpdated section:\nIf you can convert it into the code below, then I think there is no need to repeat this for each model/library\n> try:\n> #importing all packages\n> #importing model1\n> #importing model2 \n> except ExceptionType1: \n> #install model1 \n> except ExceptionType2: \n> #install model2\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074541938_python.txt
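A runnable sketch of the same check using only the standard library. Note that importlib looks up import names, which do not always match the pip distribution names, so the package names below are illustrative:

import importlib.util
import subprocess
import sys

def ensure_installed(*modules):
    for module in modules:
        # find_spec returns None when the module cannot be imported.
        if importlib.util.find_spec(module) is None:
            subprocess.check_call([sys.executable, "-m", "pip", "install", module])

ensure_installed("numpy", "requests")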
Q: Bypassing recaptcha v2 using python requests this is a web scraping project I'm working on. I need to send the response of this v2 recaptcha but it's not bringing the data I need ` headers = { 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' } url = 'https://www2.detran.rn.gov.br/externo/consultarveiculo.asp' session = requests.session() fazer_get = session.get(url, headers=headers) cookie = fazer_get.cookies html = fazer_get.text try: rgxCaptchaKey = re.search(r'<div\s*class="g-recaptcha"\s*data-\s*sitekey="([^\"]*?)"></div>', html, re.IGNORECASE) captchaKey = rgxCaptchaKey.group(1) except: print('erro') resposta_captcha = captcha(captchaKey, url, KEY) placa = 'pcj90' renavam = '57940' payload = { 'oculto:' 'AvancarC' 'placa': placa, 'renavam': renavam, 'g-recaptcha-response': resposta_captcha['code'], 'btnConsultaPlaca': '' } fazerPost = session.post( url, payload, headers=headers, cookies=cookie) ` I tried to send the captcha response in the payload but I couldn't get to the page I want A: If the website you're trying to scrape is reCaptcha protected, your best bet is to use a stealthy method for scraping. That means either Selenium (with at least selenium-stealth) or a third party web scraper, such as WebScrapingAPI, where I'm an engineer. The advantage of using the third party service is that it usually comes packed with reCaptcha solving, IP rotation systems and other various features to prevent bot detection, so you can focus on handling the scraped data, rather than building the scraper. In order to have a better view on both options, here are two implementation examples you can compare: 1. Python With Stealthy Selenium from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium_stealth import stealth from bs4 import BeautifulSoup URL = 'https://www2.detran.rn.gov.br/externo/consultarveiculo.asp' options = Options() options.add_argument("start-maximized") options.add_experimental_option("excludeSwitches", ["enable-automation"]) options.add_experimental_option('useAutomationExtension', False) driver = webdriver.Chrome(options=options) stealth(driver, languages=["en-US", "en"], vendor="Google Inc.", platform="Win32", webgl_vendor="Intel Inc.", renderer="Intel Iris OpenGL Engine", fix_hairline=True) driver.get(URL) html = driver.page_source driver.quit() You should also look into integrating a captcha solver (like 2captcha) with this code. 2. Python With WebScrapingAPI import requests URL = 'https://www2.detran.rn.gov.br/externo/consultarveiculo.asp' API_KEY = '<YOUR_API_KEY>' SCRAPER_URL = 'https://api.webscrapingapi.com/v1' params = { "api_key":API_KEY, "url": URL, "render_js":"1", "js_instructions":''' [{ "action":"value", "selector":"input#placa", "timeout": 5000, "value":"<YOUR_EMAIL_OR_USERNAME>" }, { "action":"value", "selector":"input#renavam", "timeout": 5000, "value":"<YOUR_PASSWORD>" }, { "action":"submit", "selector":"button#btnConsultaPlaca", "timeout": 5000 }] ''' } res = requests.get(SCRAPER_URL, params=params) print(res.text)
Bypassing recaptcha v2 using python requests
this is a web scraping project I'm working on. I need to send the response of this v2 recaptcha but it's not bringing the data I need ` headers = { 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' } url = 'https://www2.detran.rn.gov.br/externo/consultarveiculo.asp' session = requests.session() fazer_get = session.get(url, headers=headers) cookie = fazer_get.cookies html = fazer_get.text try: rgxCaptchaKey = re.search(r'<div\s*class="g-recaptcha"\s*data-\s*sitekey="([^\"]*?)"></div>', html, re.IGNORECASE) captchaKey = rgxCaptchaKey.group(1) except: print('erro') resposta_captcha = captcha(captchaKey, url, KEY) placa = 'pcj90' renavam = '57940' payload = { 'oculto:' 'AvancarC' 'placa': placa, 'renavam': renavam, 'g-recaptcha-response': resposta_captcha['code'], 'btnConsultaPlaca': '' } fazerPost = session.post( url, payload, headers=headers, cookies=cookie) ` I tried to send the captcha response in the payload but I couldn't get to the page I want
[ "If the website you're trying to scrape is reCaptcha protected, your best bet is to use a stealthy method for scraping. That means either Selenium (with at least selenium-stealth) or a third party web scraper, such as WebScrapingAPI, where I'm an engineer.\nThe advantage of using the third party service is that it usually comes packed with reCaptcha solving, IP rotation systems and other various features to prevent bot detection, so you can focus on handling the scraped data, rather than building the scraper.\nIn order to have a better view on both options, here are two implementation examples you can compare:\n1. Python With Stealthy Selenium\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium_stealth import stealth\nfrom bs4 import BeautifulSoup\n\nURL = 'https://www2.detran.rn.gov.br/externo/consultarveiculo.asp'\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\noptions.add_experimental_option(\"excludeSwitches\", [\"enable-automation\"])\noptions.add_experimental_option('useAutomationExtension', False)\ndriver = webdriver.Chrome(options=options)\n\nstealth(driver,\n languages=[\"en-US\", \"en\"],\n vendor=\"Google Inc.\",\n platform=\"Win32\",\n webgl_vendor=\"Intel Inc.\",\n renderer=\"Intel Iris OpenGL Engine\",\n fix_hairline=True)\n\ndriver.get(URL) \nhtml = driver.page_source\ndriver.quit()\n\nYou should also look into integrating a captcha solver (like 2captcha) with this code.\n2. Python With WebScrapingAPI\nimport requests\n\nURL = 'https://www2.detran.rn.gov.br/externo/consultarveiculo.asp'\n\nAPI_KEY = '<YOUR_API_KEY>'\nSCRAPER_URL = 'https://api.webscrapingapi.com/v1'\n\nparams = {\n \"api_key\":API_KEY,\n \"url\": URL,\n \"render_js\":\"1\",\n \"js_instructions\":'''\n [{\n \"action\":\"value\",\n \"selector\":\"input#placa\",\n \"timeout\": 5000,\n \"value\":\"<YOUR_EMAIL_OR_USERNAME>\"\n },\n { \n \"action\":\"value\",\n \"selector\":\"input#renavam\",\n \"timeout\": 5000,\n \"value\":\"<YOUR_PASSWORD>\"\n },\n {\n \"action\":\"submit\",\n \"selector\":\"button#btnConsultaPlaca\",\n \"timeout\": 5000\n }]\n '''\n}\n\nres = requests.get(SCRAPER_URL, params=params)\nprint(res.text)\n\n" ]
[ 0 ]
[]
[]
[ "2captcha", "python", "python_requests", "web_scraping" ]
stackoverflow_0074540568_2captcha_python_python_requests_web_scraping.txt
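As a side note, the sitekey extraction in the question is fragile because the regex depends on exact attribute order. Assuming BeautifulSoup is available, a sketch of a more tolerant lookup:

from bs4 import BeautifulSoup

# html is the page source from the GET request, as in the question's code
soup = BeautifulSoup(html, "html.parser")
div = soup.find("div", class_="g-recaptcha")
captcha_key = div["data-sitekey"] if div else None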
Q: Unable to import numpy/pandas/matplotlib packages in VScode I have used widely used packages (installed via pip) for a while in Jupyter notebook without any issues. I tried to do Python coding in VScode, but it somehow cannot load those packages. I have tried changing the Python interpreter, but it did not solve the issue. Does anyone know how to resolve this issue? A: First make sure that you have the python interpreter installed on your computer. In your vscode UI you should see a terminal. You can install and upgrade pip through there if needed by using these commands: pip install --upgrade pip From here you should be able to install the packages with pip and then import them.
Unable to import numpy/pandas/matplotlib packages in VScode
I have used widely used packages (installed via pip) for a while in Jupyter notebook without any issues. I tried to do Python coding in VScode, but it somehow cannot load those packages. I have tried changing the Python interpreter, but it did not solve the issue. Does anyone know how to resolve this issue?
[ "First make sure that you have the python interpreter installed on your computer. In your vscode UI you should see a terminal. You can install and upgrade pip through there if needed by using these commands:\npip install --upgrade pip\n\nFrom here you should be able to install the packages with pip and then import them.\n" ]
[ 0 ]
[ "Hi, you can use the terminal for installation.\nOtherwise you can use the Anaconda IDE; it's a very good tool and user friendly.\n" ]
[ -1 ]
[ "package", "python", "visual_studio_code" ]
stackoverflow_0074541851_package_python_visual_studio_code.txt
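When this happens, it usually means VS Code is running a different interpreter than the one pip installed into. A quick way to confirm, sketched below, is to print the interpreter path from the VS Code terminal and install against that exact interpreter (the path in the comment is illustrative):

import sys

print(sys.executable)  # the interpreter VS Code is actually using
print(sys.version)

# Then, in the terminal, install against that interpreter explicitly:
#   /path/to/that/python -m pip install numpy pandas matplotlib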
Q: How to format a JSON in python create a json file the file should be formatted like this: {"name": "YOU", "items": { "item 1": "bread", "quantity of item 1": 2, "price of item1": "0.60", "item 2": "milk", "quantity of item 2": 10, "price of item2": "6.00" } } //I tried to use f.write( When I tried to create a dictionary and add the new line operator it would just add \n to the file order = {"name": name, "items": {}} a= 1 for i in items: order["items"].update({f'item {a}': i.item, +'\n' f'quantity of item {a}': i.quantity,+'\n' f'price of item{a}': i.price()}) a+=1 A: Python has the json library for formatting dictionaries into json formats: import json json_string = json.dumps(items) print(json_string) Another issue is that you're defining key-value pairs within a list - you want to use {} instead of [] if you're giving values keys as well, and you need to add the commas that are missing on some lines: { "name": "name", "items": { "item name": "item1", "quantity": 1, "price": 1.00 } }
How to format a JSON in python
create a json file the file should be formatted like this: {"name": "YOU", "items": { "item 1": "bread", "quantity of item 1": 2, "price of item1": "0.60", "item 2": "milk", "quantity of item 2": 10, "price of item2": "6.00" } } //I tried to use f.write( When I tried to create a dictionary and add the new line operator it would just add \n to the file order = {"name": name, "items": {}} a= 1 for i in items: order["items"].update({f'item {a}': i.item, +'\n' f'quantity of item {a}': i.quantity,+'\n' f'price of item{a}': i.price()}) a+=1
[ "Python has the json library for formatting dictionaries into json formats:\nimport json\njson_string = json.dumps(items)\nprint(json_string)\n\nAnother issue is that you're defining key-value pairs within a list - you want to use {} instead of [] if you're giving values keys as well, and you need to add the commas that are missing on some lines:\n{\n \"name\": \"name\",\n \"items\": {\n \"item name\": \"item1\",\n \"quantity\": 1,\n \"price\": 1.00\n }\n}\n\n" ]
[ 1 ]
[]
[]
[ "dictionary", "json", "python" ]
stackoverflow_0074542057_dictionary_json_python.txt
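The literal '\n' showed up in the file because the newlines were concatenated into dictionary values instead of being emitted by the serializer. json.dump's indent parameter handles line breaks on its own, as this sketch shows (the file name and values are illustrative):

import json

order = {"name": "YOU", "items": {"item 1": "bread", "quantity of item 1": 2, "price of item1": "0.60"}}
with open("order.json", "w") as f:
    json.dump(order, f, indent=4)  # indent adds the newlines; no manual '\n' needed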
Q: How to change text when a button is pressed with PySimpleGUI I'm trying to make a calculator with PySimpleGUI as a school project and I have made a basic GUI with it but I am struggling to make the buttons functional. I made functions for all the buttons. import PySimpleGUI as sg def pressed_button_0(): button0 = 0 def pressed_button_1(): button1 = 1 def pressed_button_2(): button2 = 2 def pressed_button_3(): button3 = 3 def pressed_button_4(): button4 = 4 def pressed_button_5(): button5 = 5 def pressed_button_6(): button6 = 6 def pressed_button_7(): button7 = 7 def pressed_button_8(): button8 = 8 def pressed_button_9(): button9 = 9 problem = '' layout_1 = [ [sg.Text('Calculator')], [sg.Text(str(problem))], [sg.Button('1'), sg.Button('2'), sg.Button('3'), sg.Button('÷')], [sg.Button('4'), sg.Button('5'), sg.Button('6'), sg.Button('×')], [sg.Button('7'), sg.Button('8'), sg.Button('9'), sg.Button('+')], [sg.Button('.'), sg.Button('0'), sg.Button('='), sg.Button('-')] ] sg.theme('dark grey 13') window = sg.Window('Calculator', layout_1) problem = '' while True: event, values = window.read() if event == sg.WINDOW_CLOSED: break if event == '0': pressed_button_0() window.close() i tried setting a text element as a variable which i thought would update when i pressed a button but that didnt seem to work, not sure what i did wrong A: Variable buttonX is just a variable and has nothing to do with the GUI; you have to call element.update(value=something) where the element can be found by window[element_key]. import PySimpleGUI as sg keys = ['123÷', '456×', '789+', '.0=-'] all_keys = ''.join(keys) sg.theme('DarkGrey13') sg.set_options(font=('Courier New', 16)) layout = [ [sg.Text('Calculator', expand_x=True, justification='center')], [sg.Input(size=5, expand_x=True, key='-INPUT-')]] + [ [sg.Button(key, size=3) for key in row] for row in keys] + [ [sg.Push(), sg.Button('Submit')], ] window = sg.Window('Calculator', layout) while True: event, values = window.read() if event == sg.WINDOW_CLOSED: break elif event in all_keys: problem = values['-INPUT-'] window['-INPUT-'].update(problem + event) window['-INPUT-'].widget.xview_moveto(1) elif event == 'Submit': problem = values['-INPUT-'] print(problem) window.close()
How to change text when a button is pressed with PySimpleGUI
I'm trying to make a calculator with PySimpleGUI as a school project and I have made a basic GUI with it but I am struggling to make the buttons functional. I made functions for all the buttons. import PySimpleGUI as sg def pressed_button_0(): button0 = 0 def pressed_button_1(): button1 = 1 def pressed_button_2(): button2 = 2 def pressed_button_3(): button3 = 3 def pressed_button_4(): button4 = 4 def pressed_button_5(): button5 = 5 def pressed_button_6(): button6 = 6 def pressed_button_7(): button7 = 7 def pressed_button_8(): button8 = 8 def pressed_button_9(): button9 = 9 problem = '' layout_1 = [ [sg.Text('Calculator')], [sg.Text(str(problem))], [sg.Button('1'), sg.Button('2'), sg.Button('3'), sg.Button('÷')], [sg.Button('4'), sg.Button('5'), sg.Button('6'), sg.Button('×')], [sg.Button('7'), sg.Button('8'), sg.Button('9'), sg.Button('+')], [sg.Button('.'), sg.Button('0'), sg.Button('='), sg.Button('-')] ] sg.theme('dark grey 13') window = sg.Window('Calculator', layout_1) problem = '' while True: event, values = window.read() if event == sg.WINDOW_CLOSED: break if event == '0': pressed_button_0() window.close() i tried setting a text element as a variable which i thought would update when i pressed a button but that didnt seem to work, not sure what i did wrong
[ "Variable buttonX is just a variable and has nothing to do with the GUI; you have to call element.update(value=something) where the element can be found by window[element_key].\nimport PySimpleGUI as sg\n\nkeys = ['123÷', '456×', '789+', '.0=-']\nall_keys = ''.join(keys)\n\nsg.theme('DarkGrey13')\nsg.set_options(font=('Courier New', 16))\nlayout = [\n [sg.Text('Calculator', expand_x=True, justification='center')],\n [sg.Input(size=5, expand_x=True, key='-INPUT-')]] + [\n [sg.Button(key, size=3) for key in row] for row in keys] + [\n [sg.Push(), sg.Button('Submit')],\n]\nwindow = sg.Window('Calculator', layout)\n\nwhile True:\n\n event, values = window.read()\n\n if event == sg.WINDOW_CLOSED:\n break\n\n elif event in all_keys:\n problem = values['-INPUT-']\n window['-INPUT-'].update(problem + event)\n window['-INPUT-'].widget.xview_moveto(1)\n\n elif event == 'Submit':\n problem = values['-INPUT-']\n print(problem)\n\nwindow.close()\n\n" ]
[ 0 ]
[]
[]
[ "pysimplegui", "python", "python_3.10", "user_interface" ]
stackoverflow_0074454414_pysimplegui_python_python_3.10_user_interface.txt
Q: Read flat list into multidimensional array/matrix in python I have a list of numbers that represent the flattened output of a matrix or array produced by another program, I know the dimensions of the original array and want to read the numbers back into either a list of lists or a NumPy matrix. There could be more than 2 dimensions in the original array. e.g. data = [0, 2, 7, 6, 3, 1, 4, 5] shape = (2,4) print some_func(data, shape) Would produce: [[0,2,7,6], [3,1,4,5]] Cheers in advance A: Use numpy.reshape: >>> import numpy as np >>> data = np.array( [0, 2, 7, 6, 3, 1, 4, 5] ) >>> shape = ( 2, 4 ) >>> data.reshape( shape ) array([[0, 2, 7, 6], [3, 1, 4, 5]]) You can also assign directly to the shape attribute of data if you want to avoid copying it in memory: >>> data.shape = shape A: If you dont want to use numpy, there is a simple oneliner for the 2d case: group = lambda flat, size: [flat[i:i+size] for i in range(0,len(flat), size)] And can be generalized for multidimensions by adding recursion: import operator def shape(flat, dims): subdims = dims[1:] subsize = reduce(operator.mul, subdims, 1) if dims[0]*subsize!=len(flat): raise ValueError("Size does not match or invalid") if not subdims: return flat return [shape(flat[i:i+subsize], subdims) for i in range(0,len(flat), subsize)] A: For those one liners out there: >>> data = [0, 2, 7, 6, 3, 1, 4, 5] >>> col = 4 # just grab the number of columns here >>> [data[i:i+col] for i in range(0, len(data), col)] [[0, 2, 7, 6],[3, 1, 4, 5]] >>> # for pretty print, use either np.array or np.asmatrix >>> np.array([data[i:i+col] for i in range(0, len(data), col)]) array([[0, 2, 7, 6], [3, 1, 4, 5]]) A: Without Numpy we can do as below as well.. l1 = [1,2,3,4,5,6,7,8,9] def convintomatrix(x): sqrt = int(len(x) ** 0.5) matrix = [] while x != []: matrix.append(x[:sqrt]) x = x[sqrt:] return matrix print (convintomatrix(l1)) A: [list(x) for x in zip(*[iter(data)]*shape[1])] (found this post searching for how this works)
Read flat list into multidimensional array/matrix in python
I have a list of numbers that represent the flattened output of a matrix or array produced by another program, I know the dimensions of the original array and want to read the numbers back into either a list of lists or a NumPy matrix. There could be more than 2 dimensions in the original array. e.g. data = [0, 2, 7, 6, 3, 1, 4, 5] shape = (2,4) print some_func(data, shape) Would produce: [[0,2,7,6], [3,1,4,5]] Cheers in advance
[ "Use numpy.reshape:\n>>> import numpy as np\n>>> data = np.array( [0, 2, 7, 6, 3, 1, 4, 5] )\n>>> shape = ( 2, 4 )\n>>> data.reshape( shape )\narray([[0, 2, 7, 6],\n [3, 1, 4, 5]])\n\nYou can also assign directly to the shape attribute of data if you want to avoid copying it in memory:\n>>> data.shape = shape\n\n", "If you dont want to use numpy, there is a simple oneliner for the 2d case:\ngroup = lambda flat, size: [flat[i:i+size] for i in range(0,len(flat), size)]\n\nAnd can be generalized for multidimensions by adding recursion:\nimport operator\ndef shape(flat, dims):\n subdims = dims[1:]\n subsize = reduce(operator.mul, subdims, 1)\n if dims[0]*subsize!=len(flat):\n raise ValueError(\"Size does not match or invalid\")\n if not subdims:\n return flat\n return [shape(flat[i:i+subsize], subdims) for i in range(0,len(flat), subsize)]\n\n", "For those one liners out there: \n>>> data = [0, 2, 7, 6, 3, 1, 4, 5]\n>>> col = 4 # just grab the number of columns here\n\n>>> [data[i:i+col] for i in range(0, len(data), col)]\n[[0, 2, 7, 6],[3, 1, 4, 5]]\n\n>>> # for pretty print, use either np.array or np.asmatrix\n>>> np.array([data[i:i+col] for i in range(0, len(data), col)]) \narray([[0, 2, 7, 6],\n [3, 1, 4, 5]])\n\n", "Without Numpy we can do as below as well..\nl1 = [1,2,3,4,5,6,7,8,9]\n\ndef convintomatrix(x):\n\n sqrt = int(len(x) ** 0.5)\n matrix = []\n while x != []:\n matrix.append(x[:sqrt])\n x = x[sqrt:]\n return matrix\n\nprint (convintomatrix(l1))\n\n", "[list(x) for x in zip(*[iter(data)]*shape[1])]\n\n(found this post searching for how this works)\n" ]
[ 26, 6, 5, 0, 0 ]
[]
[]
[ "multidimensional_array", "numpy", "python" ]
stackoverflow_0003636344_multidimensional_array_numpy_python.txt
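One detail worth adding to the reshape answer above: NumPy can infer one dimension for you, so you only need to know the other sizes. A minimal sketch:

import numpy as np

data = [0, 2, 7, 6, 3, 1, 4, 5]
arr = np.reshape(data, (2, -1))  # -1 tells NumPy to infer the remaining axis (here 4)
print(arr)
# [[0 2 7 6]
#  [3 1 4 5]]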
Q: How to get '7' attached to each string in a list in Python if it doesn't have 7 already in it? I have been trying to solve a problem where I am given a list as input and I need to show an output with 7 attached to each string value if it doesn't contain a 7 already. I have created a list and for the case of 7 not included I have attached the '7' using the for loop. So for example: for the input ["a7", "g", "u"], I expect output as ["a7","g7","u7"] but I am getting the output as follows ['a7', 'g', 'u', ['a77', 'g7', 'u7']] I have tried to put the values in a new list using append but I am not sure how to remove the old values and replace it with new ones in existing list. Following is my code class Solution(object): def jazz(self, list=[]): for i in range(len(list)): if '7' not in list[i]: li = [i + '7' for i in list] list.append(li) return list if __name__ == "__main__": p = Solution() lt = ['a7', 'g', 'u'] print(p.jazz(lt)) A: You can use a cleaner and more pythonic solution, no classes required, and much more concise: def jazz(items): return [item if '7' in item else item+'7' for item in items] if __name__ == "__main__": lt = ['a7', 'g', 'u'] p = jazz(lt) print(p) If you want to modify the original list you can use: if __name__ == "__main__": lt = ['a7', 'g', 'u'] lt[:] = jazz(lt) print(lt) A: @AviTurner already showed you the simplest way of doing it. I'm just gonna write some points about your solution: You don't need to inherit from object in Python 3. check here Don't use mutable objects for your parameters' default values unless you know what you're doing. check here. Don't use built-in names like list for your variables. You create a list li and then you append that. This appends the whole list as a single item. Instead you either want to append individual items or .extend() it. It's perfectly ok to iterate this way for i in range(len(something)) but there is a better approach, if you need only the items, you can directly get those items this way : for item in something. If you also need the indexes: for i, item in enumerate(something)
How to get '7' attached to each string in a list in Python if it doesn't have 7 already in it?
I have been trying to solve a problem where I am given a list as input and I need to show an output with 7 attached to each string value if it doesn't contain a 7 already. I have created a list and for the case of 7 not included I have attached the '7' using the for loop. So for example: for the input ["a7", "g", "u"], I expect output as ["a7","g7","u7"] but I am getting the output as follows ['a7', 'g', 'u', ['a77', 'g7', 'u7']] I have tried to put the values in a new list using append but I am not sure how to remove the old values and replace it with new ones in existing list. Following is my code class Solution(object): def jazz(self, list=[]): for i in range(len(list)): if '7' not in list[i]: li = [i + '7' for i in list] list.append(li) return list if __name__ == "__main__": p = Solution() lt = ['a7', 'g', 'u'] print(p.jazz(lt))
[ "You can use a cleaner and more pythonic solution, no classes required, and much more concise:\n\ndef jazz(items):\n return [item if '7' in item else item+'7' for item in items]\n\nif __name__ == \"__main__\":\n lt = ['a7', 'g', 'u']\n p = jazz(lt)\n print(p)\n\n\nIf you want to modify the original list you can use:\nif __name__ == \"__main__\":\n lt = ['a7', 'g', 'u']\n lt[:] = jazz(lt)\n print(lt)\n\n\n", "@AviTurner already showed you the simplest way of doing it. I'm just gonna write some points about your solution:\n\nYou don't need to inherit from object in Python 3. check here\nDon't use mutable objects for your parameters' default values unless you know what you're doing. check here.\nDon't use built-in names like list for your variables.\nYou create a list li and then you append that. This appends the whole list as a single item. Instead you either want to append individual items or .extend() it.\nIt's perfectly ok to iterate this way for i in range(len(something)) but there is a better approach, if you need only the items, you can directly get those items this way : for item in something. If you also need the indexes: for i, item in enumerate(something)\n\n" ]
[ 2, 2 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074542021_list_python.txt
Q: Replace the last line of a file in python I am working on a project where i use a text file to store the data. I have a label for the user to enter the name and i want the user's name to be saved on line 41 of the file, which is the last line. I tried append but that just keeps adding a last line so if the user types another name it wont replace it but add another line. Can you please help me modify the code so it writes the name in line 41 of the text file and if there is already something on the text file, it just replaces line 41 based on the input. Until now i have this code but its not working i dont know why def addUser(self): global name global splitname name = self.inputBox.text() splitname = name.split() print("Splitname {}".format(splitname)) print(len(splitname)) self.usernameLbl.setText(name) self.inputBox.clear() # self.congratulations() if name != "": if len(splitname) == 2: with open('UpdatedCourseInfo.txt', 'r', encoding='utf-8') as f: data1 = f.readlines() data1[40]= [f'\n{splitname[0]}, {splitname[1]}, 0, None, None'] with open('UpdatedCourseInfo.txt', 'w', encoding='utf-8') as f: f.writelines() f.close() else: with open('UpdatedCourseInfo.txt', 'r', encoding='utf-8') as f: data1 = f.readlines() data1[40]= [f'\n{splitname[0]}, 0, 0, None, None'] with open('UpdatedCourseInfo.txt', 'w', encoding='utf-8') as f: f.writelines() f.close() print(name) return name A: Here you go: # == Ignore this part ========================================================== # `create_fake_course_info_file`, `FakeInputBox` and `FakeUsernameLabel` are just # placeholder classes to simulate the objects that `FakeCls.addUser` method # interacts with. def create_fake_course_info_file(filepath: str): """Create a fake file to test ``FakeCls`` class implementation. Function creates a text file, and populates it with 50 blank lines. Parameters ---------- filepath : str Path to the file to be created. """ print(f"Saving fake data to: {filepath}") with open(filepath, "w") as fh: fh.write("\n" * 50) class FakeInputBox: """ Mock class with necessary methods to run ``FakeCls.addUser`` method. """ def __init__(self, text): self._text = text def text(self): return self._text def clear(self): self._text = "" class FakeUsernameLabel: """ Mock class with necessary methods to run ``FakeCls.addUser`` method. """ def setText(self, text): self.text = text # == Actual Code =============================================================== class FakeCls: def __init__(self, name): self.inputBox = FakeInputBox(name) self.usernameLbl = FakeUsernameLabel() def addUser(self): global name global split_name name = self.inputBox.text() split_name = name.split() print(f"Split Name: {split_name} | Length: {len(split_name)}") self.usernameLbl.setText(name) self.inputBox.clear() # self.congratulations() if name != "": # Read current contents of the file and save each line of text as # an element in a list. with open("UpdatedCourseInfo.txt", mode="r", encoding="utf-8") as fh: data = fh.readlines() # Replace the 41st line with the user's name. if len(split_name) == 2: data[40] = f"\n{split_name[0]}, {split_name[1]}, 0, None, None" else: data[40] = f"\n{split_name[0]}, 0, 0, None, None" # Write the updated list to the file. with open("UpdatedCourseInfo.txt", mode="w", encoding="utf-8") as fh: fh.writelines(data) print(name) return name Example Here's the code in action: Notes I've made some changes to your original implementation, to make it a little bit cleaner and more "Pythonic". You can ignore the code inside the create_fake_course_info_file function, FakeInputBox and FakeUsernameLabel classes as they're only placeholders to your actual code that was not provided in the question.
Replace the last line of a file in python
I am working on a project where i use a text file to store the data. I have a label for the user to enter the name and i want the user's name to be saved on line 41 of the file, which is the last line. I tried append but that just keeps adding a last line so if the user types another name it wont replace it but add another line. Can you please help me modify the code so it writes the name in line 41 of the text file and if there is already something on the text file, it just replaces line 41 based on the input. Until now i have this code but its not working i dont know why def addUser(self): global name global splitname name = self.inputBox.text() splitname = name.split() print("Splitname {}".format(splitname)) print(len(splitname)) self.usernameLbl.setText(name) self.inputBox.clear() # self.congratulations() if name != "": if len(splitname) == 2: with open('UpdatedCourseInfo.txt', 'r', encoding='utf-8') as f: data1 = f.readlines() data1[40]= [f'\n{splitname[0]}, {splitname[1]}, 0, None, None'] with open('UpdatedCourseInfo.txt', 'w', encoding='utf-8') as f: f.writelines() f.close() else: with open('UpdatedCourseInfo.txt', 'r', encoding='utf-8') as f: data1 = f.readlines() data1[40]= [f'\n{splitname[0]}, 0, 0, None, None'] with open('UpdatedCourseInfo.txt', 'w', encoding='utf-8') as f: f.writelines() f.close() print(name) return name
[ "Here you go:\n\n# == Ignore this part ==========================================================\n# `create_fake_course_info_file`, `FakeInputBox` and `FakeUsernameLabel` are just\n# placeholder classes to simulate the objects that `FakeCls.addUser` method\n# interacts with.\n\ndef create_fake_course_info_file(filepath: str):\n \"\"\"Create a fake file to test ``FakeCls`` class implementation.\n\n Function creates a text file, and populates it with 50 blank lines.\n\n Parameters\n ----------\n filepath : str\n Path to the file to be created.\n \"\"\"\n print(f\"Saving fake data to: {filepath}\")\n with open(filepath, \"w\") as fh:\n fh.write(\"\\n\" * 50)\n\n\nclass FakeInputBox:\n \"\"\"\n Mock class with necessary methods to run ``FakeCls.addUser`` method.\n \"\"\"\n def __init__(self, text):\n self._text = text\n\n def text(self):\n return self._text\n\n def clear(self):\n self._text = \"\"\n\n\nclass FakeUsernameLabel:\n \"\"\"\n Mock class with necessary methods to run ``FakeCls.addUser`` method.\n \"\"\"\n def setText(self, text):\n self.text = text\n\n# == Actual Code ===============================================================\n\nclass FakeCls:\n\n def __init__(self, name):\n\n self.inputBox = FakeInputBox(name)\n self.usernameLbl = FakeUsernameLabel()\n\n def addUser(self):\n\n global name\n global split_name\n\n name = self.inputBox.text()\n split_name = name.split()\n print(f\"Split Name: {split_name} | Length: {len(split_name)}\")\n self.usernameLbl.setText(name)\n self.inputBox.clear()\n\n # self.congratulations()\n\n if name != \"\":\n # Read current contents of the file and save each line of text as\n # an element in a list.\n with open(\"UpdatedCourseInfo.txt\", mode=\"r\", encoding=\"utf-8\") as fh:\n data = fh.readlines()\n # Replace the 41st line with the user's name.\n if len(split_name) == 2:\n data[40] = f\"\\n{split_name[0]}, {split_name[1]}, 0, None, None\"\n else:\n data[40] = f\"\\n{split_name[0]}, 0, 0, None, None\"\n # Write the updated list to the file.\n with open(\"UpdatedCourseInfo.txt\", mode=\"w\", encoding=\"utf-8\") as fh:\n fh.writelines(data)\n print(name)\n return name\n\n\nExample\nHere's the code in action:\n\nNotes\nI've made some changes to your original implementation, to make it a little bit cleaner and more \"Pythonic\". You can ignore the code inside the create_fake_course_info_file function, FakeInputBox and FakeUsernameLabel classes as they're only placeholders to your actual code that was not provided in the question.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074541990_python.txt
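The read-modify-write pattern in the answer above can be compressed with pathlib; a hedged sketch, with an illustrative replacement value for line 41:

from pathlib import Path

path = Path("UpdatedCourseInfo.txt")
lines = path.read_text(encoding="utf-8").splitlines()
lines[40] = "Spock, 0, 0, None, None"  # line 41 is index 40 (0-based); assumes the file has at least 41 lines
path.write_text("\n".join(lines) + "\n", encoding="utf-8")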
Q: Tensorflow: How can I use tf.roll without wrapping? I want to independently shift the columns or rows of a 2-D tensor like: a = tf.constant([[1,2,3], [4,5,6]]) shift = tf.constant([2, -1]) b = shift_fn(a, shift) which gives me: b = [[0, 0, 1], [5, 6, 0]] I find that tf.roll() can do similar things but will wrap the elements. How can I pad zeros using it? A: One not-so-nice solution is to first pad the tensor using tf.pad, then use tf.roll inside tf.map_fn to independently shift each row (or column) of the padded tensor. And then finally, you can take the proper slice of the result. For example: a = tf.constant([[1,2,3], [4,5,6]]) shift = tf.constant([2, -1]) cols = a.shape[1] paddings = tf.constant([[0, 0], [cols, cols]]) padded_a = tf.pad(a, paddings) tf.map_fn( lambda x: tf.roll(x[0], x[1], axis=-1), elems=(padded_a, shift), # The following argument is required here fn_output_signature=a.dtype )[:,cols:cols*2] """ <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[0, 0, 1], [5, 6, 0]], dtype=int32)> """ Alternatively, to help save some memory, both padding and slicing could be done inside the fn function passed to tf.map_fn. A: @tf.function def shift(inputs, shift, axes, pad_val = 0): assert len(shift) == len(axes) axis_shift = zip(axes, shift) axis2shift = dict(axis_shift) old_shape = inputs.shape for axis in axis2shift: pad_shape = list(inputs.shape) pad_shape[axis] = abs(axis2shift[axis]) input_pad = tf.fill(pad_shape, pad_val) inputs = tf.concat((inputs, input_pad), axis = axis) input_roll = tf.roll(inputs, shift, axes) ret = tf.slice(input_roll, [0 for _ in range(len(old_shape))], old_shape) return ret The above function implements shift with tf.roll like interface* (shift and axes are expected to be lists rather than tensors). As told by @today in a previous answer, I performed the pad-roll-slice operation. Hope this helps
Tensorflow: How can I use tf.roll without wrapping?
I want to independently shift the columns or rows of a 2-D tensor like: a = tf.constant([[1,2,3], [4,5,6]]) shift = tf.constant([2, -1]) b = shift_fn(a, shift) which gives me: b = [[0, 0, 1], [5, 6, 0]] I find that tf.roll() can do similar things but will wrap the elements. How can I pad zeros using it?
[ "One not-so-nice solution is to first pad the tensor using tf.pad, then use tf.roll inside tf.map_fn to independently shift each row (or column) of the padded tensor. And then finally, you can take the proper slice of the result. For example:\na = tf.constant([[1,2,3], [4,5,6]])\nshift = tf.constant([2, -1])\n\ncols = a.shape[1]\npaddings = tf.constant([[0, 0], [cols, cols]])\npadded_a = tf.pad(a, paddings)\n\ntf.map_fn(\n lambda x: tf.roll(x[0], x[1], axis=-1),\n elems=(padded_a, shift),\n # The following argument is required here\n fn_output_signature=a.dtype\n)[:,cols:cols*2]\n\n\"\"\"\n<tf.Tensor: shape=(2, 3), dtype=int32, numpy=\narray([[0, 0, 1],\n [5, 6, 0]], dtype=int32)>\n\"\"\"\n\nAlternatively, to help save some memory, both padding and slicing could be done inside the fn function passed to tf.map_fn.\n", "@tf.function\ndef shift(inputs, shift, axes, pad_val = 0):\n assert len(shift) == len(axes)\n axis_shift = zip(axes, shift)\n axis2shift = dict(axis_shift) \n old_shape = inputs.shape\n\n for axis in axis2shift:\n pad_shape = list(inputs.shape)\n pad_shape[axis] = abs(axis2shift[axis])\n input_pad = tf.fill(pad_shape, pad_val)\n inputs = tf.concat((inputs, input_pad), axis = axis) \n \n \n input_roll = tf.roll(inputs, shift, axes)\n ret = tf.slice(input_roll, [0 for _ in range(len(old_shape))], old_shape)\n\n return ret\n\nThe above function implements shift with tf.roll like interface* (shift and axes are expected to be lists rather than tensors). As told by @today in a previous answer, I performed the pad-roll-slice operation.\nHope this helps\n" ]
[ 0, 0 ]
[]
[]
[ "python", "tensor", "tensorflow" ]
stackoverflow_0063347897_python_tensor_tensorflow.txt
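For checking the TensorFlow implementations above against a reference, here is a plain-NumPy sketch of the same per-row shift with zero padding (not part of either answer; the helper name shift_rows is illustrative):

import numpy as np

def shift_rows(a, shifts):
    out = np.zeros_like(a)
    for i, s in enumerate(shifts):
        if s > 0:
            out[i, s:] = a[i, :-s]   # shift right, pad left with zeros
        elif s < 0:
            out[i, :s] = a[i, -s:]   # shift left, pad right with zeros
        else:
            out[i] = a[i]
    return out

print(shift_rows(np.array([[1, 2, 3], [4, 5, 6]]), [2, -1]))
# [[0 0 1]
#  [5 6 0]]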
Q: How to quickly calculate the sympy symbol within the data frame I use pandas, numpy, sympy library in python. Is there a way to calculate the below for statement faster? import pandas as pd import numpy as np import sympy as sp df = pd.DataFrame(np.zeros(100 ** 2).reshape(100,100)) x = sp.symbols('x',real = True) df.loc[99,99] = x for j in range(99,0,-1): for k in range(j-1,-1,-1): df.loc[k,j] = df.loc[k+1,j] ** (1/2) * sp.exp(1.5) df.loc[j-1,j-1] = df.loc[0,j] I used threading, multiprocessing, numba library for speed improvement. But an error always appears. A: Let's run your code, but with a reasonable size 4 (instead of 100): In [7]: df = pd.DataFrame(np.zeros(4 ** 2).reshape(4,4)) ...: x = sp.symbols('x',real = True) ...: df.loc[3,3] = x ...: ...: for j in range(3,0,-1): ...: for k in range(j-1,-1,-1): ...: df.loc[k,j] = df.loc[k+1,j] ** (1/2) * sp.exp(1.5) ...: df.loc[j-1,j-1] = df.loc[0,j] ...: In [8]: df Out[8]: 0 1 \ 0 19.1657532216759*x**0.015625 19.1657532216759*x**0.015625 1 0.0 18.2880894824436*x**0.03125 2 0.0 0.0 3 0.0 0.0 2 3 0 18.2880894824436*x**0.03125 13.8045741860671*x**0.125 1 16.6514949636101*x**0.0625 9.48773583635853*x**0.25 2 13.8045741860671*x**0.125 4.48168907033806*x**0.5 3 0.0 x In [9]: df.dtypes Out[9]: 0 object 1 object 2 object 3 object dtype: object As I commented, this is an object dtype frame. As a numpy array: In [10]: df.to_numpy() Out[10]: array([[19.1657532216759*x**0.015625, 19.1657532216759*x**0.015625, 18.2880894824436*x**0.03125, 13.8045741860671*x**0.125], [0.0, 18.2880894824436*x**0.03125, 16.6514949636101*x**0.0625, 9.48773583635853*x**0.25], [0.0, 0.0, 13.8045741860671*x**0.125, 4.48168907033806*x**0.5], [0.0, 0.0, 0.0, x]], dtype=object) It would be simpler (and probably faster) to use arrays directly, rather than pandas: In [11]: arr = np.zeros((4,4),object) ...: x = sp.symbols('x',real = True) ...: arr[3,3] = x ...: ...: for j in range(3,0,-1): ...: for k in range(j-1,-1,-1): ...: arr[k,j] = arr[k+1,j] ** (1/2) * sp.exp(1.5) ...: arr[j-1,j-1] = arr[0,j] But I don't know what's the value of this array. You can't apply any sympy functions/methods to a numpy array (e.g. no subs). You could do some rudimentary math with the array, so long as x implements it, such as 2*arr or arr+arr. Even arr@arr.T, which ends up using the + and * properties of x. But all that's done at iterative Python speeds (like lists).
How to quickly calculate the sympy symbol within the data frame
I use pandas, numpy, sympy library in python. Is there a way to calculate the below for statement faster? import pandas as pd import numpy as np import sympy as sp df = pd.DataFrame(np.zeros(100 ** 2).reshape(100,100)) x = sp.symbols('x',real = True) df.loc[99,99] = x for j in range(99,0,-1): for k in range(j-1,-1,-1): df.loc[k,j] = df.loc[k+1,j] ** (1/2) * sp.exp(1.5) df.loc[j-1,j-1] = df.loc[0,j] I used threading, multiprocessing, numba library for speed improvement. But an error always appears.
[ "Let's run your code, but with a reasonable size 4 (instead of 100):\nIn [7]: df = pd.DataFrame(np.zeros(4 ** 2).reshape(4,4))\n ...: x = sp.symbols('x',real = True)\n ...: df.loc[3,3] = x\n ...: \n ...: for j in range(3,0,-1):\n ...: for k in range(j-1,-1,-1):\n ...: df.loc[k,j] = df.loc[k+1,j] ** (1/2) * sp.exp(1.5)\n ...: df.loc[j-1,j-1] = df.loc[0,j]\n ...: \n\nIn [8]: df\nOut[8]: \n 0 1 \\\n0 19.1657532216759*x**0.015625 19.1657532216759*x**0.015625 \n1 0.0 18.2880894824436*x**0.03125 \n2 0.0 0.0 \n3 0.0 0.0 \n\n 2 3 \n0 18.2880894824436*x**0.03125 13.8045741860671*x**0.125 \n1 16.6514949636101*x**0.0625 9.48773583635853*x**0.25 \n2 13.8045741860671*x**0.125 4.48168907033806*x**0.5 \n3 0.0 x \n\nIn [9]: df.dtypes\nOut[9]: \n0 object\n1 object\n2 object\n3 object\ndtype: object\n\nAs I commented, this is an object dtype frame.\nAs a numpy array:\nIn [10]: df.to_numpy()\nOut[10]: \narray([[19.1657532216759*x**0.015625, 19.1657532216759*x**0.015625,\n 18.2880894824436*x**0.03125, 13.8045741860671*x**0.125],\n [0.0, 18.2880894824436*x**0.03125, 16.6514949636101*x**0.0625,\n 9.48773583635853*x**0.25],\n [0.0, 0.0, 13.8045741860671*x**0.125, 4.48168907033806*x**0.5],\n [0.0, 0.0, 0.0, x]], dtype=object)\n\nIt would be simpler (and probably faster) to use arrays directly, rather than pandas:\nIn [11]: arr = np.zeros((4,4),object)\n ...: x = sp.symbols('x',real = True)\n ...: arr[3,3] = x\n ...: \n ...: for j in range(3,0,-1):\n ...: for k in range(j-1,-1,-1):\n ...: arr[k,j] = arr[k+1,j] ** (1/2) * sp.exp(1.5)\n ...: arr[j-1,j-1] = arr[0,j]\n\nBut I don't know what's the value of this array. You can't apply any sympy functions/methods to a numpy array (e.g. no subs). You could do some rudimentary math with the array, so long as x implements it, such as 2*arr or arr+arr. Even arr@arr.T, which ends up using the + and * properties of x. But all that's done at iterative Python speeds (like lists).\n" ]
[ 0 ]
[]
[]
[ "numpy", "pandas", "python", "sympy" ]
stackoverflow_0074541290_numpy_pandas_python_sympy.txt
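If the end goal is numeric evaluation rather than symbolic manipulation, sympy.lambdify typically gives the speedup the question is after; a sketch using one of the expressions built above (the specific coefficient is taken from the answer's output):

import numpy as np
import sympy as sp

x = sp.symbols("x", real=True)
expr = 4.48168907033806 * x**0.5       # one entry from the array above
f = sp.lambdify(x, expr, "numpy")      # compile the symbolic expression to a NumPy function
print(f(np.linspace(1, 10, 5)))        # fast vectorised evaluation, no symbolic overhead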
Q: How to move Jupyter notebook cells up/down using keyboard shortcut? Does anyone know a keyboard shortcut to move cells up or down in Jupyter notebook? I cannot find the shortcut; any clues? A: The following solution works on JupyterLab (I currently have version 2.2.6): You must first open the Keyboard Shortcuts configuration file. In JupyterLab you can find it in Settings -> Advanced Settings Editor then selecting the "Keyboard Shortcuts" option in the left panel and then editing the "User Preferences" tab at the right. Expanding on sherdim's answer, you must add two json objects (one for each direction) within the "shortcuts" json array. Here I have chosen the shortcuts Ctrl + Shift + ↓ and Ctrl + Shift + ↑. { "shortcuts": [ { <<other items you may have>> }, { "command": "notebook:move-cell-up", "keys": [ "Ctrl Shift ArrowUp" ], "selector": ".jp-Notebook:focus" }, { "command": "notebook:move-cell-down", "keys": [ "Ctrl Shift ArrowDown" ], "selector": ".jp-Notebook:focus" }, ] } Finally, press Ctrl + S to save changes. Now, when you are in the command mode, you should be able to move one or more selected cells up or down. The shortcuts will even appear in the menu Edit -> Move Cells Up and Edit -> Move Cells Down. A: Further to honeybadger's response, you can see when you open up the Edit Command Mode shortcuts dialog box that there are no shortcuts defined for moving a cell up and down, by default: I simply typed in my preferred combination Ctrl-Shift-Down and Ctrl-Shift-Up in the 'add shortcut' field, and pressed Enter. This is the same in Windows/Mac. Cheers! A: David's answer above was helpful, but didn't work for me in Firefox on Xubuntu. I had to make the following change for the selector: { "shortcuts": [ { "command": "notebook:move-cell-up", "keys": [ "Ctrl Alt Shift ArrowUp" ], "selector": "body" }, { "command": "notebook:move-cell-down", "keys": [ "Ctrl Alt Shift ArrowDown" ], "selector": "body" } ] } A: This is from the official Jupyter Notebook documentation - Starting with Jupyter Notebook 5.0, you can customize the command mode shortcuts from within the Notebook Application itself. Head to the Help menu and select the Edit Keyboard Shortcuts item. A dialog will guide you through the process of adding custom keyboard shortcuts. Keyboard shortcuts set from within the Notebook Application will be persisted to your configuration file. A single action may have several shortcuts attached to it. A: Steps to add a shortcut to move cells up or down Open any Jupyter Notebook click on Help Click on Edit Keyboard Shortcuts Find move cell down and tap on 'add a shortcut', give any shortcut like "D" for example Click on the '+' icon Do the same for move cell up, you can give "U" for this click on Ok Now use these shortcuts to move your cells above or below
How to move Jupyter notebook cells up/down using keyboard shortcut?
Does anyone know a keyboard shortcut to move cells up or down in Jupyter notebook? I cannot find the shortcut; any clues?
[ "The following solution works on JupyterLab (I currently have version 2.2.6):\nYou must first open the Keyboard Shortcuts configuration file. In JupyterLab you can find it in Settings -> Advanced Settings Editor then selecting the \"Keyboard Shortcuts\" option in the left panel and then editing the \"User Preferences\" tab at the right.\nExpanding on sherdim's answer, you must add two json objects (one for each direction) within the \"shortcuts\" json array. Here I have chosen the shortcuts Ctrl + Shift + ↓ and Ctrl + Shift + ↑.\n{\n \"shortcuts\": [\n {\n <<other items you may have>>\n },\n {\n \"command\": \"notebook:move-cell-up\",\n \"keys\": [\n \"Ctrl Shift ArrowUp\"\n ],\n \"selector\": \".jp-Notebook:focus\"\n },\n {\n \"command\": \"notebook:move-cell-down\",\n \"keys\": [\n \"Ctrl Shift ArrowDown\"\n ],\n \"selector\": \".jp-Notebook:focus\"\n },\n ]\n}\n\nFinally, press Ctrl + S to save changes.\nNow, when you are in the command mode, you should be able to move one or more selected cells up or down. The shortcuts will even appear in the menu Edit -> Move Cells Up and Edit -> Move Cells Down.\n", "Further to honeybadger's response, you can see when you open up the Edit Command Mode shortcuts dialog box that there are no shortcuts defined for moving a cell up and down, by default:\n\nI simply typed in my preferred combination Ctrl-Shift-Down and Ctrl-Shift-Up in the 'add shortcut' field, and pressed Enter. This is the same in Windows/Mac.\nCheers!\n", "David's answer above was helpful, but didn't work for me in Firefox on Xubuntu. I had to make the following change for the selector:\n{\n \"shortcuts\": [ \n\n{\n \"command\": \"notebook:move-cell-up\",\n \"keys\": [\n \"Ctrl Alt Shift ArrowUp\"\n ],\n \"selector\": \"body\"\n },\n {\n \"command\": \"notebook:move-cell-down\",\n \"keys\": [\n \"Ctrl Alt Shift ArrowDown\"\n ],\n \"selector\": \"body\"\n }\n ]\n}\n\n", "This is from the official Jupyter Notebook documentation -\n\nStarting with Jupyter Notebook 5.0, you can customize the command mode\nshortcuts from within the Notebook Application itself.\nHead to the Help menu and select the Edit Keyboard Shortcuts item.\nA dialog will guide you through the process of adding custom keyboard shortcuts.\nKeyboard shortcuts set from within the Notebook Application will be persisted to your configuration file.\nA single action may have several shortcuts attached to it.\n\n", "Steps to add a shortcut to move cells up or down\n\nOpen any Jupyter Notebook\nclick on Help\nClick on Edit Keyboard Shortcuts\nFind move cell down and tap on 'add a shortcut', give any shortcut like \"D\" for example\nClick on the '+' icon\nDo the same for move cell up, you can give \"U\" for this\nclick on Ok\nNow use these shortcuts to move your cells above or below\n\n" ]
[ 18, 6, 5, 3, 0 ]
[ "Tab + arrow keys works for me in Windows.\n" ]
[ -3 ]
[ "jupyter_lab", "jupyter_notebook", "keyboard_shortcuts", "python" ]
stackoverflow_0062453756_jupyter_lab_jupyter_notebook_keyboard_shortcuts_python.txt
Q: Update a dict with duplicate keys and keeping the index of each key the same in Python I am trying to update the json payload with a dict type info and keeping the key position the same as before as it is required by the task I am working on. Note I understand that the implementation of type dict does not allow duplicate keys, but I do need this done, so any work-around or hacky approach would help. I have a payload which I loaded from a json file payload.json { "name": "", "address": "", "age": " ", "ethnicities": "", "select": "", "sub-ethnicities": "", "select": "", "option1": "", "option2": "" } loading it payload = json.load(open("payload.json")) Then I have the info: info = { "name": "Spock", "ethnicities": "Vulcan", "select": "paternal", "sub-ethnicities": "human", "select": "maternal", } I am trying to insert the above info into the payload and keeping the key indexes the way they were. Expected result would be { "name": "Spock", "address": "", "age": "", "ethnicities": "Vulcan", "select": "paternal", "sub-ethnicities": "human", "select": "maternal", "option1": "", "option2": "" } Thank you in advance. Using data | info or data.update(info) will remove the duplicate keys, which defeats the goal I am trying to achieve. For example expected result is: { "name": "Spock", "address": "", "age": "", "ethnicities": "Vulcan", "select": "paternal", "sub-ethnicities": "human", "select": "maternal", "Extra-ethnicities": "Asian", "select": "Asian" } but what I got was the duplicate keys removed. "name": "Spock", "address": "", "age": "", "ethnicities": "Vulcan", "select": "paternal", "sub-ethnicities": "human", "Extra-ethnicities": "Asian", } A: An operator for this was added in Python 3.9 as the Union operator: payload_with_info = payload | info print(payload_with_info) >>> { 'address': '', 'age': ' ', 'ethnicities': 'Vulcan', 'name': 'Spock', 'option1': '', 'option2': '', 'select': 'maternal', 'sub-ethnicities': 'human' }
Update a dict with duplicate keys while keeping the position of each key the same in Python
I am trying to update a JSON payload with a dict of info while keeping the key positions the same as before, as required by the task I am working on. Note: I understand that the dict type does not allow duplicate keys, but I do need this done, so any work-around or hacky approach would help. I have a payload which I loaded from a json file payload.json { "name": "", "address": "", "age": " ", "ethnicities": "", "select": "", "sub-ethnicities": "", "select": "", "option1": "", "option2": "" } loading it payload = json.load(open("payload.json")) Then I have the info: info = { "name": "Spock", "ethnicities": "Vulcan", "select": "paternal", "sub-ethnicities": "human", "select": "maternal", } I am trying to insert the above info into the payload while keeping the key positions the way they were. The expected result would be { "name": "Spock", "address": "", "age": "", "ethnicities": "Vulcan", "select": "paternal", "sub-ethnicities": "human", "select": "maternal", "option1": "", "option2": "" } Thank you in advance. Using data | info or data.update(info) removes the duplicate keys, which defeats the goal I am trying to achieve. For example, the expected result is: { "name": "Spock", "address": "", "age": "", "ethnicities": "Vulcan", "select": "paternal", "sub-ethnicities": "human", "select": "maternal", "Extra-ethnicities": "Asian", "select": "Asian" } but what I got was the duplicate keys removed: "name": "Spock", "address": "", "age": "", "ethnicities": "Vulcan", "select": "paternal", "sub-ethnicities": "human", "Extra-ethnicities": "Asian", }
[ "An operator for this was added in Python 3.9 as the Union operator:\npayload_with_info = payload | info\nprint(payload_with_info)\n>>>\n{\n 'address': '',\n 'age': ' ',\n 'ethnicities': 'Vulcan',\n 'name': 'Spock',\n 'option1': '',\n 'option2': '',\n 'select': 'maternal',\n 'sub-ethnicities': 'human'\n}\n\n" ]
[ 0 ]
[]
[]
[ "dictionary", "json", "python" ]
stackoverflow_0074542230_dictionary_json_python.txt
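Since a Python dict simply cannot hold two identical keys, one workaround for the question above is to keep the payload as a list of (key, value) pairs instead, via json.load's object_pairs_hook, and serialize it by hand. A minimal sketch, assuming the payload.json from the question; note the pair-based info list is an assumption, since a dict literal would already collapse the duplicate "select" keys:

import json

# load the payload as ordered (key, value) pairs so duplicates survive
with open("payload.json") as f:
    payload = json.load(f, object_pairs_hook=lambda pairs: pairs)

# info as pairs too; a dict literal would drop the second "select"
info = [
    ("name", "Spock"),
    ("ethnicities", "Vulcan"),
    ("select", "paternal"),
    ("sub-ethnicities", "human"),
    ("select", "maternal"),
]

remaining = list(info)
updated = []
for key, value in payload:
    # consume matching info pairs in order, so the first "select" in the
    # payload takes the first "select" from info, and so on
    for i, (k, v) in enumerate(remaining):
        if k == key:
            value = v
            del remaining[i]
            break
    updated.append((key, value))

# serialize manually; json.dumps on a dict would collapse duplicates again
body = ",\n".join("    {}: {}".format(json.dumps(k), json.dumps(v)) for k, v in updated)
print("{\n" + body + "\n}")

Running this prints the expected JSON with both "select" entries intact.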
Q: VSCode issue with setting python version to 3.10 on Azure function sample I am trying to run the Azure python function with Fast API locally and hit into this issue https://github.com/Azure-Samples/fastapi-on-azure-functions/issues/7 The last one suggests upgrading to the 3.10 version of python to solve the issue. However when i try to upgrade in vs code , i get the errors below When i try to manually add the path I am not that familiar with all python ENV setups, any suggestions will be helpful EDIT: Yes, Here are all the versions I have on the machine A: Below are the python versions installed in my windows system: When creating the Azure Function Python App in the VS code, it is not showing the Python 3.10.x version interpreter: In this step, click on Skip virtual environment and create the required trigger function. You can select the Python 3.10.x version interpret after creating the trigger function: py -m pip install --user virtualenv py -m venv env .\env\Scripts\activate py -m pip install -r requirements.txt Run the above cmdlets to activate virtual environment taken, taken from the Python Packages Source. So that you can be able to see the current python version for the current Azure Function in VS Code:
VSCode issue with setting python version to 3.10 on Azure function sample
I am trying to run the Azure Python function with FastAPI locally and ran into this issue: https://github.com/Azure-Samples/fastapi-on-azure-functions/issues/7 The last comment suggests upgrading to Python 3.10 to solve the issue. However, when I try to upgrade in VS Code, I get the errors below, and likewise when I try to manually add the path. I am not that familiar with Python environment setups, so any suggestions would be helpful. EDIT: Yes, here are all the versions I have on the machine
[ "Below are the python versions installed in my windows system:\n\nWhen creating the Azure Function Python App in the VS code, it is not showing the Python 3.10.x version interpreter:\n\nIn this step, click on Skip virtual environment and create the required trigger function.\nYou can select the Python 3.10.x version interpret after creating the trigger function:\n\npy -m pip install --user virtualenv\npy -m venv env\n.\\env\\Scripts\\activate\npy -m pip install -r requirements.txt\n\nRun the above cmdlets to activate virtual environment taken, taken from the Python Packages Source.\nSo that you can be able to see the current python version for the current Azure Function in VS Code:\n\n" ]
[ 1 ]
[]
[]
[ "azure_function_async", "azure_functions", "python", "python_3.x", "visual_studio_code" ]
stackoverflow_0074536796_azure_function_async_azure_functions_python_python_3.x_visual_studio_code.txt
Q: Calculate number of function calls for any size N I'm trying to understand a way to write how many times the print statement for fun1 will be called for any size N. Written in summation form. This is more of an analysis question. I know I could just setup a count variable and print the result. S is an array of N items. N is the size. def myAlg(S,n): for i in range(1,n+1): for j in range(1,i+1): for k in range(1,j+1): if j > k: print('fun1 called, and count is now', count) else: print('fun2 called') Im honestly a little lost on how to approach this. Any explanation would be greatly appreciated. A: For two first loops we have sum of arithmetic progression 1+2+3+...+n, and result is T(n) = n*(n+1)/2 known as trianglular numbers (1,3,6,10,15,21...) So loop for k is executed T(n) times, and inner part is executed Q(n) = sum(T(i),i=1..n) = n*(n+1)*(n+2)/6 times, sequence is known as tetrahedral numbers (1,4,10,20,35,56...) But we have to subtract T(n) to exclude fun2 calls (one per loop) Result = n*(n+1)*(n+2)/6 - n*(n+1)/2 = (n-1)*n*(n+1)/6 This is the same Q sequence without the last term, so Result(n) = Q(n-1) = (n-1)*n*(n+1)/6
Calculate number of function calls for any size N
I'm trying to understand how to write, in summation form, how many times the print statement for fun1 will be called for any size N. This is more of an analysis question; I know I could just set up a count variable and print the result. S is an array of N items and N is the size. def myAlg(S,n): for i in range(1,n+1): for j in range(1,i+1): for k in range(1,j+1): if j > k: print('fun1 called, and count is now', count) else: print('fun2 called') I'm honestly a little lost on how to approach this. Any explanation would be greatly appreciated.
[ "For two first loops we have sum of arithmetic progression 1+2+3+...+n, and result is\nT(n) = n*(n+1)/2\n\nknown as trianglular numbers (1,3,6,10,15,21...)\nSo loop for k is executed T(n) times, and inner part is executed\nQ(n) = sum(T(i),i=1..n) = n*(n+1)*(n+2)/6\n\ntimes, sequence is known as tetrahedral numbers (1,4,10,20,35,56...)\nBut we have to subtract T(n) to exclude fun2 calls (one per loop)\nResult = n*(n+1)*(n+2)/6 - n*(n+1)/2 = (n-1)*n*(n+1)/6\n\nThis is the same Q sequence without the last term, so\nResult(n) = Q(n-1) = (n-1)*n*(n+1)/6\n\n" ]
[ 1 ]
[]
[]
[ "algorithm", "analysis", "python" ]
stackoverflow_0074541959_algorithm_analysis_python.txt
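A quick brute-force check of the closed form above — a minimal sketch that mirrors myAlg's loops and tallies the j > k branch (the print arguments are dropped since only the count matters):

def count_fun1_calls(n):
    # count how often the fun1 branch (j > k) runs
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            for k in range(1, j + 1):
                if j > k:
                    count += 1
    return count

for n in range(1, 10):
    assert count_fun1_calls(n) == (n - 1) * n * (n + 1) // 6

which agrees with Result(n) = (n-1)*n*(n+1)/6 for every n tried.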
Q: detect and count the number of star symbol in opencv-python I need to count the occurrence of the stars at the right bottom corner in the image, I read this article Template Matching and used the following code to find the stars but My code doesn't work for detecting the stars in the image. What changes should I make in the code? import cv2 as cv import numpy as np from matplotlib import pyplot as plt img_rgb = cv.imread('page.png') img_gray = cv.cvtColor(img_rgb, cv.COLOR_BGR2GRAY) template = cv.imread('star_temp.png', 0) w, h = template.shape[::-1] res = cv.matchTemplate(img_gray, template, cv.TM_CCOEFF_NORMED) threshold = 0.8 loc = np.where(res >= threshold) print(res) for pt in zip(*loc[::-1]): cv.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0, 0, 255), 2) cv.imwrite('res.png', img_rgb) star_temp.png page.png A: It's because your template is bigger than the actual star on the image. Template matching is not scale invariant, so you need to be careful and match an almost same-size image. I cropped this from your target image: This is the full working snippet: import cv2 # image path path = "D://opencvImages//" # Reading images in default mode: targetImage = cv2.imread(path + "d6tu1.png") templateImage = cv2.imread(path + "starTemplate.png") # Deep copy for results: targetImageCopy = targetImage.copy() # Convert RGB to grayscale: grayscaleImage = cv2.cvtColor(targetImage, cv2.COLOR_BGR2GRAY) grayscaleTemplate = cv2.cvtColor(templateImage, cv2.COLOR_BGR2GRAY) # Perform template match: matchResult = cv2.matchTemplate(grayscaleImage, grayscaleTemplate, cv2.TM_CCOEFF_NORMED) # Set matching minimum score: threshold = 0.9 loc = np.where( matchResult >= threshold) # Look for possible matches: matchesCounter = 0 w, h = grayscaleTemplate.shape[::-1] for pt in zip(*loc[::-1]): cv2.rectangle(targetImageCopy, pt, (pt[0] + w, pt[1] + h), (0,0,255), 2) # increase number of matches: matchesCounter +=1 cv2.imshow("Matches", targetImageCopy) cv2.waitKey(0) print("Number of matches: "+str(matchesCounter)) You end up with this result: Number of matches: 3
detect and count the number of star symbol in opencv-python
I need to count the occurrences of the stars in the bottom-right corner of the image. I read this article on Template Matching and used the following code to find the stars, but my code doesn't detect the stars in the image. What changes should I make to the code? import cv2 as cv import numpy as np from matplotlib import pyplot as plt img_rgb = cv.imread('page.png') img_gray = cv.cvtColor(img_rgb, cv.COLOR_BGR2GRAY) template = cv.imread('star_temp.png', 0) w, h = template.shape[::-1] res = cv.matchTemplate(img_gray, template, cv.TM_CCOEFF_NORMED) threshold = 0.8 loc = np.where(res >= threshold) print(res) for pt in zip(*loc[::-1]): cv.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0, 0, 255), 2) cv.imwrite('res.png', img_rgb) star_temp.png page.png
[ "It's because your template is bigger than the actual star on the image. Template matching is not scale invariant, so you need to be careful and match an almost same-size image. I cropped this from your target image:\n\nThis is the full working snippet:\nimport cv2\n\n# image path\npath = \"D://opencvImages//\"\n\n# Reading images in default mode:\ntargetImage = cv2.imread(path + \"d6tu1.png\")\ntemplateImage = cv2.imread(path + \"starTemplate.png\")\n\n# Deep copy for results:\ntargetImageCopy = targetImage.copy()\n\n# Convert RGB to grayscale:\ngrayscaleImage = cv2.cvtColor(targetImage, cv2.COLOR_BGR2GRAY)\ngrayscaleTemplate = cv2.cvtColor(templateImage, cv2.COLOR_BGR2GRAY)\n\n# Perform template match:\nmatchResult = cv2.matchTemplate(grayscaleImage, grayscaleTemplate, cv2.TM_CCOEFF_NORMED)\n# Set matching minimum score:\nthreshold = 0.9\nloc = np.where( matchResult >= threshold)\n\n# Look for possible matches:\nmatchesCounter = 0\nw, h = grayscaleTemplate.shape[::-1]\n\nfor pt in zip(*loc[::-1]):\n cv2.rectangle(targetImageCopy, pt, (pt[0] + w, pt[1] + h), (0,0,255), 2)\n\n # increase number of matches:\n matchesCounter +=1\n\n cv2.imshow(\"Matches\", targetImageCopy)\n cv2.waitKey(0)\n\nprint(\"Number of matches: \"+str(matchesCounter))\n\nYou end up with this result:\n\nNumber of matches: 3\n\n" ]
[ 3 ]
[]
[]
[ "computer_vision", "object_detection", "opencv", "python" ]
stackoverflow_0074542090_computer_vision_object_detection_opencv_python.txt
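Two caveats on the snippet above: it uses np.where, so it also needs import numpy as np, and template matching is not scale invariant, as the answer notes. If cropping a same-size template is not an option, a rough multi-scale sketch (assuming the grayscale images from the answer are already loaded) is to resize the template over a range of scales and keep the best score:

import cv2
import numpy as np

best = None
for scale in np.linspace(0.5, 1.5, 11):
    resized = cv2.resize(grayscaleTemplate, None, fx=scale, fy=scale)
    th, tw = resized.shape[:2]
    # skip scales where the template no longer fits in the image
    if th > grayscaleImage.shape[0] or tw > grayscaleImage.shape[1]:
        continue
    result = cv2.matchTemplate(grayscaleImage, resized, cv2.TM_CCOEFF_NORMED)
    _, maxVal, _, maxLoc = cv2.minMaxLoc(result)
    if best is None or maxVal > best[0]:
        best = (maxVal, maxLoc, scale)

The winning scale can then be reused for the thresholded np.where pass from the answer.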
Q: How to instal Python packages for Spyder I am using the IDE called Spyder for learning Python. I would like to know in how to go about in installing Python packages for Spyder? A: step 1. First open Spyder and click Tools --> Open command prompt. For more details click visit this link, https://miamioh.instructure.com/courses/38817/pages/downloading-and-installing-packages A: I am running Spyder 4.2.4 and for me following solution turned out to be working: open tools-> preferences -> python interpreter click 'use the following python interpreter' point the location to local python installation, in my case : C:\Users\MYUSER\AppData\Local\Programs\Python\Python37\python.exe Click OK and restart the kernel. Now the pip started to work and I was able to import any package I previously installed on the cmd/python CLI. A: Spyder is a package too, you can install packages using pip or conda, and spyder will access them using your python path in environment. Spyder is not a package manager like conda,, but an IDE like jupyter notebook and VS Code. A: For the latest versions of Spyder use this console at right bottom Note: Once you hit enter it may take some time to install and you can't see the progress until it finishes. Else: Open anaconda command prompt Activate your environment: conda activate env-name Install the package: conda install your-package-name A: I have not checked if the ways described by people here before me work or not. I am running Spyder 5.0.5, and for me below steps worked: Step 1: Open anaconda prompt (I had my Spyder opened parallelly) Step 2: write - "pip install package-name" Note: I got my Spyder 5.0.5 up and running after installing the whole Anaconda Navigator 2.0.3. A: I installed Basic Python IDLE(python 3.9) As I used to Spyder. I installed a standalone Spyder from https://www.spyder-ide.org/ Then I faced problems for packages I tried this one pip install spyder spyder-terminal
How to install Python packages for Spyder
I am using the IDE called Spyder for learning Python. I would like to know how to go about installing Python packages for Spyder.
[ "step 1. First open Spyder and click Tools --> Open command prompt.\n\n\n\nFor more details click visit this link,\nhttps://miamioh.instructure.com/courses/38817/pages/downloading-and-installing-packages\n", "I am running Spyder 4.2.4 and for me following solution turned out to be working:\n\nopen tools-> preferences -> python interpreter\nclick 'use the following python interpreter'\n\npoint the location to local python installation, in my case : C:\\Users\\MYUSER\\AppData\\Local\\Programs\\Python\\Python37\\python.exe\nClick OK and restart the kernel.\n\nNow the pip started to work and I was able to import any package I previously installed on the cmd/python CLI.\n", "Spyder is a package too, you can install packages using pip or conda, and spyder will access them using your python path in environment.\nSpyder is not a package manager like conda,, but an IDE like jupyter notebook and VS Code.\n", "For the latest versions of Spyder use this console\nat right bottom\nNote: Once you hit enter it may take some time to install and you can't see the progress until it finishes.\nElse:\n\nOpen anaconda command prompt\n\nActivate your environment: conda activate env-name\n\nInstall the package: conda install your-package-name\n\n\n", "I have not checked if the ways described by people here before me work or not.\nI am running Spyder 5.0.5, and for me below steps worked:\n\nStep 1: Open anaconda prompt (I had my Spyder opened parallelly)\nStep 2: write - \"pip install package-name\"\n\nNote: I got my Spyder 5.0.5 up and running after installing the whole Anaconda Navigator 2.0.3.\n", "I installed Basic Python IDLE(python 3.9)\nAs I used to Spyder. I installed a standalone Spyder from https://www.spyder-ide.org/\nThen I faced problems for packages\nI tried this one\npip install spyder spyder-terminal\n\n" ]
[ 7, 5, 3, 0, 0, 0 ]
[]
[]
[ "installation", "package", "python", "spyder" ]
stackoverflow_0063109860_installation_package_python_spyder.txt
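Whichever route above is used, a quick way to confirm which interpreter (and therefore which site-packages) Spyder is actually running is to execute this in its IPython console:

import sys
print(sys.executable)  # path of the interpreter Spyder is using
print(sys.path)        # directories searched for installed packages

If sys.executable is not the environment where the package was installed, point Spyder at that interpreter as described in the answers above.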
Q: How to access value in QuerySet Django I am creating a simple Pizza Delivery website and trying to add an option to choose a topping. When I want to print Ingredients it returns QuerySet and Values in it separated by a comma. Is there any option how can I get values based on their variable names (ex. ingredients.all[0].toppingName -> cheese) or is there any methods which allows to get values separately. Of course, kludge with .split works but it is awful UPD: As for admin panel the provided solution worked. But it does not work in html template, so I found this code. Maybe it will useful for somebody. A: In Django, if you want to get only a specific column value you can use Model.objects.all().values('column_name') You can also filter the queryset and get values as Model.objects.filter(condition).values('column_name')
How to access value in QuerySet Django
I am creating a simple pizza delivery website and trying to add an option to choose a topping. When I print Ingredients, it returns a QuerySet with the values in it separated by commas. Is there a way to get values by their field names (e.g. ingredients.all[0].toppingName -> cheese), or is there any method that allows getting the values separately? Of course, a kludge with .split works, but it is awful. UPD: For the admin panel the provided solution worked, but it does not work in an HTML template, so I found this code. Maybe it will be useful for somebody.
[ "In Django, if you want to get only a specific column value you can use\nModel.objects.all().values('column_name')\n\nYou can also filter the queryset and get values as\nModel.objects.filter(condition).values('column_name')\n\n" ]
[ 1 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0074540818_django_django_models_python.txt
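If only the raw values are needed rather than dicts, values_list with flat=True is a convenient variant — a sketch assuming a hypothetical Topping model with a toppingName field, matching the question's example:

# flat QuerySet of plain values instead of dicts
names = Topping.objects.values_list("toppingName", flat=True)
first = names[0]  # e.g. "cheese"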
Q: Replace Nested for loop in list-comprehension I want to combine my id list with my status list and I used list comprehension to do it: # id id_list = [ 1, # UAE1S 2, # UAE2S 3, # UAE3S ] # status status_list = [ 'okay', 'not okay', 'unknown', ] result = [ { 'id':id, 'status':status, } for id in id_list for status in status_list ] print(result) [{'id': 1, 'status': 'okay'}, {'id': 1, 'status': 'not okay'}, {'id': 1, 'status': 'unknown'}, {'id': 2, 'status': 'okay'}, {'id': 2, 'status': 'not okay'}, {'id': 2, 'status': 'unknown'}, {'id': 3, 'status': 'okay'}, {'id': 3, 'status': 'not okay'}, {'id': 3, 'status': 'unknown'}] It's outputting the correct list but is there a way to remove the nested for loop? A: itertools.product gives the cartesian product, import itertools id_list = [1, 2, 3] status_list = ['okay','not okay','unknown',] [{'id': item[0], 'status': item[1]} for item in itertools.product(id_list, status_list)] [{'status': 'okay', 'id': 1}, {'status': 'not okay', 'id': 1}, {'status': 'unknown', 'id': 1}, {'status': 'okay', 'id': 2}, {'status': 'not okay', 'id': 2}, {'status': 'unknown', 'id': 2}, {'status': 'okay', 'id': 3}, {'status': 'not okay', 'id': 3}, {'status': 'unknown', 'id': 3}]
Replace Nested for loop in list-comprehension
I want to combine my id list with my status list and I used list comprehension to do it: # id id_list = [ 1, # UAE1S 2, # UAE2S 3, # UAE3S ] # status status_list = [ 'okay', 'not okay', 'unknown', ] result = [ { 'id':id, 'status':status, } for id in id_list for status in status_list ] print(result) [{'id': 1, 'status': 'okay'}, {'id': 1, 'status': 'not okay'}, {'id': 1, 'status': 'unknown'}, {'id': 2, 'status': 'okay'}, {'id': 2, 'status': 'not okay'}, {'id': 2, 'status': 'unknown'}, {'id': 3, 'status': 'okay'}, {'id': 3, 'status': 'not okay'}, {'id': 3, 'status': 'unknown'}] It's outputting the correct list but is there a way to remove the nested for loop?
[ "itertools.product gives the cartesian product,\nimport itertools\nid_list = [1, 2, 3]\nstatus_list = ['okay','not okay','unknown',]\n[{'id': item[0], 'status': item[1]} for item in itertools.product(id_list, status_list)]\n\n[{'status': 'okay', 'id': 1}, {'status': 'not okay', 'id': 1}, {'status': 'unknown', 'id': 1}, {'status': 'okay', 'id': 2}, {'status': 'not okay', 'id': 2}, {'status': 'unknown', 'id': 2}, {'status': 'okay', 'id': 3}, {'status': 'not okay', 'id': 3}, {'status': 'unknown', 'id': 3}]\n" ]
[ 0 ]
[]
[]
[ "list_comprehension", "loops", "python" ]
stackoverflow_0074542197_list_comprehension_loops_python.txt
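The same result reads slightly cleaner with tuple unpacking in the comprehension, avoiding the item[0]/item[1] indexing:

from itertools import product

result = [{"id": i, "status": s} for i, s in product(id_list, status_list)]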
Q: multiple csv files data to single json file I had two csv files named mortality1 and mortality2 and i want to insert these two csv files data into a single json file...when i am inserting these data, i am unable give the two files at the same time to json file.and this is my code import csv import json import pandas as pd from glob import glob csvfile1 = open('C:/Users/DELL/Desktop/data/mortality1.csv', 'r') csvfile2 = open('C:/Users/DELL/Desktop/data/mortality2.csv', 'r') jsonfile = open('C:/Users/DELL/Desktop/data/cvstojson.json', 'w') df = pd.read_csv(csvfile1) df.to_json(jsonfile) i want insert the 2 csv files data at the same time to the json file A: If both of your csv data are having similar structure, then you can append the data frames to one another, and then convert it to a JSON. Like import csv import json import pandas as pd from glob import glob csvfile1 = open('C:/Users/DELL/Desktop/data/mortality1.csv', 'r') csvfile2 = open('C:/Users/DELL/Desktop/data/mortality2.csv', 'r') jsonfile = open('C:/Users/DELL/Desktop/data/cvstojson.json', 'w') # Read and append both dataframes to single one df = pd.read_csv(csvfile1).append(pd.read_csv(csvfile2)) # Create the json representation of all rows together. df.to_json(jsonfile, orient="records")
Combine data from multiple CSV files into a single JSON file
I have two csv files named mortality1 and mortality2, and I want to write the data from both csv files into a single json file. When I try, I am unable to give the two files to the json file at the same time. This is my code: import csv import json import pandas as pd from glob import glob csvfile1 = open('C:/Users/DELL/Desktop/data/mortality1.csv', 'r') csvfile2 = open('C:/Users/DELL/Desktop/data/mortality2.csv', 'r') jsonfile = open('C:/Users/DELL/Desktop/data/cvstojson.json', 'w') df = pd.read_csv(csvfile1) df.to_json(jsonfile) I want to insert the data from both csv files into the json file at the same time.
[ "If both of your csv data are having similar structure, then you can append the data frames to one another, and then convert it to a JSON.\nLike\nimport csv\nimport json\nimport pandas as pd\nfrom glob import glob\ncsvfile1 = open('C:/Users/DELL/Desktop/data/mortality1.csv', 'r')\ncsvfile2 = open('C:/Users/DELL/Desktop/data/mortality2.csv', 'r')\njsonfile = open('C:/Users/DELL/Desktop/data/cvstojson.json', 'w')\n\n# Read and append both dataframes to single one\ndf = pd.read_csv(csvfile1).append(pd.read_csv(csvfile2))\n\n# Create the json representation of all rows together.\ndf.to_json(jsonfile, orient=\"records\")\n\n" ]
[ 0 ]
[]
[]
[ "csv", "json", "pandas", "python" ]
stackoverflow_0074542208_csv_json_pandas_python.txt
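One caveat on the answer above: DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, so on current pandas the same idea is written with pd.concat — a sketch assuming the same file paths:

import pandas as pd

paths = [
    "C:/Users/DELL/Desktop/data/mortality1.csv",
    "C:/Users/DELL/Desktop/data/mortality2.csv",
]
# stack both csv files into one frame, then write a single json file
df = pd.concat((pd.read_csv(p) for p in paths), ignore_index=True)
df.to_json("C:/Users/DELL/Desktop/data/cvstojson.json", orient="records")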
Q: I need to subtract 1 from each digit in the list .is It any easy way? This is the list I have : list_input = [432567,876323,124356] This is the Output I need : List_output = [321456,765212,013245] like so, for index, number in enumerate(list_input): one_number = list_lnput(index) one_digit_list = list(one_number[0]) and I don't have Idea after this step A: This can be solved in a time complexity of O(1) since you're basically asking to subtract a number of 1's from an integer i, where the number is equal to the number of digits of that integer, which can be obtained by calculating int(math.log10(i)) + 1, with which you can produce the same number of 1's with (10 ** (int(math.log10(i)) + 1) - 1) // 9: import math def decrement_digits(i): return i - (10 ** (int(math.log10(i)) + 1) - 1) // 9 so that decrement_digits(432567), for example, would return: 321456 so you can then map the input list to the function for output: List_output = list(map(decrement_digits, list_input)) A: The only way to get the output with leading zeros is to cast the intS to strS. list_input = [432567,876323,124356] list_output = [''.join(str(int(digit)-1) for digit in str(s)) for s in list_input] Note that this will result in a ValueError for input with negative numbers: list_input = [-4306] list_output = [''.join(str(int(digit)-1) for digit in str(s)) for s in list_input] print(list_output) Traceback (most recent call last): File "/Users/michael.ruth/SO/solution.py", line 2, in <module> list_output = [''.join(str(int(digit)-1) for digit in str(s)) File "/Users/michael.ruth/SO/solution.py", line 2, in <listcomp> list_output = [''.join(str(int(digit)-1) for digit in str(s)) File "/Users/michael.ruth/SO/solution.py", line 2, in <genexpr> list_output = [''.join(str(int(digit)-1) for digit in str(s)) ValueError: invalid literal for int() with base 10: '-' A: divmod can be used to isolate each digit in turn. Remember the decimal positions (1's, 10's, 100's, etc...) to add it back in correctly. This will be messy for zeroes. But we don't have any definition what should happen in that case, so I'm sticking with it. Putting the logic into its own function makes it easier to write the process as a list comprehension. I think its also easier to read than trying to maintain an index. def digit_subtract(num): result = 0 base = 1 while num: num, remain = divmod(num, 10) result += (remain-1) * base base *= 10 return result list_input = [432567,876323,124356] List_output = [321456,765212,13245] test = [digit_subtract(num) for num in list_input] print(test) assert test == List_output
Is there an easy way to subtract 1 from each digit in a list of numbers?
This is the list I have: list_input = [432567,876323,124356] This is the output I need: List_output = [321456,765212,013245] I started like so: for index, number in enumerate(list_input): one_number = list_lnput(index) one_digit_list = list(one_number[0]) and I have no idea what to do after this step.
[ "This can be solved in a time complexity of O(1) since you're basically asking to subtract a number of 1's from an integer i, where the number is equal to the number of digits of that integer, which can be obtained by calculating int(math.log10(i)) + 1, with which you can produce the same number of 1's with (10 ** (int(math.log10(i)) + 1) - 1) // 9:\nimport math\n\ndef decrement_digits(i):\n return i - (10 ** (int(math.log10(i)) + 1) - 1) // 9\n\nso that decrement_digits(432567), for example, would return:\n321456\n\nso you can then map the input list to the function for output:\nList_output = list(map(decrement_digits, list_input))\n\n", "The only way to get the output with leading zeros is to cast the intS to strS.\nlist_input = [432567,876323,124356]\nlist_output = [''.join(str(int(digit)-1) for digit in str(s)) \n for s in list_input]\n\nNote that this will result in a ValueError for input with negative numbers:\nlist_input = [-4306]\nlist_output = [''.join(str(int(digit)-1) for digit in str(s)) \n for s in list_input]\nprint(list_output)\n\nTraceback (most recent call last):\n File \"/Users/michael.ruth/SO/solution.py\", line 2, in <module>\n list_output = [''.join(str(int(digit)-1) for digit in str(s))\n File \"/Users/michael.ruth/SO/solution.py\", line 2, in <listcomp>\n list_output = [''.join(str(int(digit)-1) for digit in str(s))\n File \"/Users/michael.ruth/SO/solution.py\", line 2, in <genexpr>\n list_output = [''.join(str(int(digit)-1) for digit in str(s))\nValueError: invalid literal for int() with base 10: '-'\n\n", "divmod can be used to isolate each digit in turn. Remember the decimal positions (1's, 10's, 100's, etc...) to add it back in correctly. This will be messy for zeroes. But we don't have any definition what should happen in that case, so I'm sticking with it.\nPutting the logic into its own function makes it easier to write the process as a list comprehension. I think its also easier to read than trying to maintain an index.\ndef digit_subtract(num):\n result = 0\n base = 1\n while num:\n num, remain = divmod(num, 10)\n result += (remain-1) * base\n base *= 10\n return result\n\nlist_input = [432567,876323,124356]\nList_output = [321456,765212,13245]\n\ntest = [digit_subtract(num) for num in list_input]\nprint(test)\nassert test == List_output\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "list", "loops", "python", "python_3.x" ]
stackoverflow_0074542144_list_loops_python_python_3.x.txt
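Since the expected output keeps the leading zero (013245), a string-based approach is the natural fit, and str.translate gives a compact variant. Note digit 0 is deliberately left unmapped here (the same "messy for zeroes" caveat as in the answers above), so this sketch assumes the inputs contain no 0 digits:

# map each digit to the digit one below it; "0" is intentionally unmapped
table = str.maketrans("123456789", "012345678")

list_input = [432567, 876323, 124356]
list_output = [str(n).translate(table) for n in list_input]
print(list_output)  # ['321456', '765212', '013245']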
Q: How to fix image_url error in odoo website template I get this error raise QWebException("Error to render compiling AST", e, path, node and etree.tostring(node[0], encoding='unicode'), name) odoo.addons.base.models.qweb.QWebException: 'NoneType' object has no attribute 'image_url' Traceback (most recent call last): File "/home/akoh/isodir/odoo/odoo/addons/base/models/qweb.py", line 331, in _compiled_fn return compiled(self, append, new, options, log) File "<template>", line 1, in template_web_layout_1075 AttributeError: 'NoneType' object has no attribute 'image_url' Error to render compiling AST AttributeError: 'NoneType' object has no attribute 'image_url' Template: web.layout Path: /t/html/t[2] Node: <t t-set="x_icon" t-value="website.image_url(website, 'favicon')"/> Anytime I add <t t-call="portal.portal_layout"> to my template like this <template id="mass_sale_order_portal_template" name="Mass Sales Order Portal Template"> <t t-call="portal.portal_layout"> <!-- <div class="col-12 col-lg justify-content-end"> --> <h2><span t-esc="test"/></h2> <!-- modal relative to the actions sign and pay --> <div role="dialog" class="modal fade" id="modalaccept"> <div class="modal-dialog" > <div class="modal-content"> <header class="modal-header"> <h4 class="modal-title">Validate Order</h4> <button type="button" class="close" data-dismiss="modal" aria-label="Close">×</button> </header> <main class="modal-body" id="sign-dialog"> <p> <span>By paying this proposal, I agree to the following terms:</span> <ul> <li><span>Accepted on the behalf of:</span> <b t-field="sale_order.partner_id.commercial_partner_id"/></li> <li><span>For an amount of:</span> <b data-id="total_amount" t-field="sale_order.amount_total"/></li> <li t-if="sale_order.payment_term_id"><span>With payment terms:</span> <b t-field="sale_order.payment_term_id.note"/></li> </ul> </p> <div t-if="pms or acquirers" id="payment_method" class="text-left"> <h3 class="mb24">Pay with</h3> <t t-call="payment.payment_tokens_list"> <t t-set="mode" t-value="'payment'"/> <t t-set="submit_txt">Pay &amp; Confirm</t> <t t-set="icon_class" t-value="'fa-lock'"/> <t t-set="form_action" t-value="sale_order.get_portal_url(suffix='/transaction/token')"/> <t t-set="prepare_tx_url" t-value="sale_order.get_portal_url(suffix='/transaction/')"/> <t t-set="access_token" t-value="sale_order.access_token"/> </t> </div> </main> </div> </div> </div> <!-- modal relative to the action reject --> <div role="dialog" class="modal fade" id="modaldecline"> <div class="modal-dialog"> <form id="decline" method="POST" t-attf-action="/my/orders/#{sale_order.id}/decline?access_token=#{sale_order.access_token}" class="modal-content"> <input type="hidden" name="csrf_token" t-att-value="request.csrf_token()"/> <header class="modal-header"> <h4 class="modal-title">Reject This Quotation</h4> <button type="button" class="close" data-dismiss="modal" aria-label="Close">×</button> </header> <main class="modal-body"> <p> Tell us why you are refusing this quotation, this will help us improve our services. </p> <textarea rows="4" name="decline_message" required="" placeholder="Your feedback..." 
class="form-control" /> </main> <footer class="modal-footer"> <button type="submit" t-att-id="sale_order.id" class="btn btn-danger"><i class="fa fa-times"></i> Reject</button> <button type="button" class="btn btn-primary" data-dismiss="modal">Cancel</button> </footer> </form> </div> </div> <!-- main content --> <t t-foreach="orders" t-as="sale_order"> <div id="introduction" t-attf-class="pb-2 pt-3 #{'card-header bg-white' if report_type == 'html' else ''}"> <h2 class="my-0"> <t t-esc="sale_order.type_name"/> <em t-esc="sale_order.name"/> </h2> </div> </t> </t> </template> Here is the controller rendering this template. ```python @route(['/my/mass/orders/<int:order_id>'], type='http', auth="public", website=True) def portal_mass_order_page(self, order_id, orders, report_type=None, access_token=None, message=False, download=False, **kw): try: order_sudo = self._document_check_access('sale.order', order_id, access_token=access_token) except (AccessError, MissingError): return request.redirect('/my') if report_type in ('html', 'pdf', 'text'): return self._show_report(model=order_sudo, report_type=report_type, report_ref='sale.action_report_saleorder', download=download) # use sudo to allow accessing/viewing orders for public user # only if he knows the private token # Log only once a day if order_sudo: # store the date as a string in the session to allow serialization now = fields.Date.today().isoformat() session_obj_date = request.session.get('view_quote_%s' % order_sudo.id) if session_obj_date != now and request.env.user.share and access_token: request.session['view_quote_%s' % order_sudo.id] = now body = _('Quotation viewed by customer %s', order_sudo.partner_id.name) _message_post_helper( "sale.order", order_sudo.id, body, token=order_sudo.access_token, message_type="notification", subtype_xmlid="mail.mt_note", partner_ids=order_sudo.user_id.sudo().partner_id.ids, ) values = self._order_get_page_view_values(order_sudo, access_token, **kw) values['message'] = message # values = {} values['orders'] = orders values['test'] = "testing" sum_total = sum([vals.amount_total for vals in orders]) domain = expression.AND([ ['&', ('state', 'in', ['enabled', 'test']), ('company_id', '=', order_sudo.company_id.id)], ['|', ('country_ids', '=', False), ('country_ids', 'in', [order_sudo.partner_id.country_id.id])] ]) acquirers = request.env['payment.acquirer'].sudo().search(domain) values['acquirers'] = acquirers.filtered(lambda acq: (acq.payment_flow == 'form' and acq.view_template_id) or (acq.payment_flow == 's2s' and acq.registration_view_template_id)) values['pms'] = request.env['payment.token'].search([('partner_id', '=', order_sudo.partner_id.id)]) values['acq_extra_fees'] = acquirers.get_acquirer_extra_fees(sum_total, order_sudo.currency_id, order_sudo.partner_id.country_id.id) _logger.info("Testing values: %s", values) return request.render('main_membership_dashboard.mass_sale_order_portal_template', values) If I remove <t t-call="portal.portal_layout"> then it works but without css styling, I really can't trace the error or understand whats going on A: My issue was in the controller file that was rendering the template. I misspelled the keyword "website='True'". If it is not there then add the keyword, if it is then check that you wrote it properly. Hope it helps.
How to fix image_url error in odoo website template
I get this error raise QWebException("Error to render compiling AST", e, path, node and etree.tostring(node[0], encoding='unicode'), name) odoo.addons.base.models.qweb.QWebException: 'NoneType' object has no attribute 'image_url' Traceback (most recent call last): File "/home/akoh/isodir/odoo/odoo/addons/base/models/qweb.py", line 331, in _compiled_fn return compiled(self, append, new, options, log) File "<template>", line 1, in template_web_layout_1075 AttributeError: 'NoneType' object has no attribute 'image_url' Error to render compiling AST AttributeError: 'NoneType' object has no attribute 'image_url' Template: web.layout Path: /t/html/t[2] Node: <t t-set="x_icon" t-value="website.image_url(website, 'favicon')"/> Anytime I add <t t-call="portal.portal_layout"> to my template like this <template id="mass_sale_order_portal_template" name="Mass Sales Order Portal Template"> <t t-call="portal.portal_layout"> <!-- <div class="col-12 col-lg justify-content-end"> --> <h2><span t-esc="test"/></h2> <!-- modal relative to the actions sign and pay --> <div role="dialog" class="modal fade" id="modalaccept"> <div class="modal-dialog" > <div class="modal-content"> <header class="modal-header"> <h4 class="modal-title">Validate Order</h4> <button type="button" class="close" data-dismiss="modal" aria-label="Close">×</button> </header> <main class="modal-body" id="sign-dialog"> <p> <span>By paying this proposal, I agree to the following terms:</span> <ul> <li><span>Accepted on the behalf of:</span> <b t-field="sale_order.partner_id.commercial_partner_id"/></li> <li><span>For an amount of:</span> <b data-id="total_amount" t-field="sale_order.amount_total"/></li> <li t-if="sale_order.payment_term_id"><span>With payment terms:</span> <b t-field="sale_order.payment_term_id.note"/></li> </ul> </p> <div t-if="pms or acquirers" id="payment_method" class="text-left"> <h3 class="mb24">Pay with</h3> <t t-call="payment.payment_tokens_list"> <t t-set="mode" t-value="'payment'"/> <t t-set="submit_txt">Pay &amp; Confirm</t> <t t-set="icon_class" t-value="'fa-lock'"/> <t t-set="form_action" t-value="sale_order.get_portal_url(suffix='/transaction/token')"/> <t t-set="prepare_tx_url" t-value="sale_order.get_portal_url(suffix='/transaction/')"/> <t t-set="access_token" t-value="sale_order.access_token"/> </t> </div> </main> </div> </div> </div> <!-- modal relative to the action reject --> <div role="dialog" class="modal fade" id="modaldecline"> <div class="modal-dialog"> <form id="decline" method="POST" t-attf-action="/my/orders/#{sale_order.id}/decline?access_token=#{sale_order.access_token}" class="modal-content"> <input type="hidden" name="csrf_token" t-att-value="request.csrf_token()"/> <header class="modal-header"> <h4 class="modal-title">Reject This Quotation</h4> <button type="button" class="close" data-dismiss="modal" aria-label="Close">×</button> </header> <main class="modal-body"> <p> Tell us why you are refusing this quotation, this will help us improve our services. </p> <textarea rows="4" name="decline_message" required="" placeholder="Your feedback..." 
class="form-control" /> </main> <footer class="modal-footer"> <button type="submit" t-att-id="sale_order.id" class="btn btn-danger"><i class="fa fa-times"></i> Reject</button> <button type="button" class="btn btn-primary" data-dismiss="modal">Cancel</button> </footer> </form> </div> </div> <!-- main content --> <t t-foreach="orders" t-as="sale_order"> <div id="introduction" t-attf-class="pb-2 pt-3 #{'card-header bg-white' if report_type == 'html' else ''}"> <h2 class="my-0"> <t t-esc="sale_order.type_name"/> <em t-esc="sale_order.name"/> </h2> </div> </t> </t> </template> Here is the controller rendering this template. ```python @route(['/my/mass/orders/<int:order_id>'], type='http', auth="public", website=True) def portal_mass_order_page(self, order_id, orders, report_type=None, access_token=None, message=False, download=False, **kw): try: order_sudo = self._document_check_access('sale.order', order_id, access_token=access_token) except (AccessError, MissingError): return request.redirect('/my') if report_type in ('html', 'pdf', 'text'): return self._show_report(model=order_sudo, report_type=report_type, report_ref='sale.action_report_saleorder', download=download) # use sudo to allow accessing/viewing orders for public user # only if he knows the private token # Log only once a day if order_sudo: # store the date as a string in the session to allow serialization now = fields.Date.today().isoformat() session_obj_date = request.session.get('view_quote_%s' % order_sudo.id) if session_obj_date != now and request.env.user.share and access_token: request.session['view_quote_%s' % order_sudo.id] = now body = _('Quotation viewed by customer %s', order_sudo.partner_id.name) _message_post_helper( "sale.order", order_sudo.id, body, token=order_sudo.access_token, message_type="notification", subtype_xmlid="mail.mt_note", partner_ids=order_sudo.user_id.sudo().partner_id.ids, ) values = self._order_get_page_view_values(order_sudo, access_token, **kw) values['message'] = message # values = {} values['orders'] = orders values['test'] = "testing" sum_total = sum([vals.amount_total for vals in orders]) domain = expression.AND([ ['&', ('state', 'in', ['enabled', 'test']), ('company_id', '=', order_sudo.company_id.id)], ['|', ('country_ids', '=', False), ('country_ids', 'in', [order_sudo.partner_id.country_id.id])] ]) acquirers = request.env['payment.acquirer'].sudo().search(domain) values['acquirers'] = acquirers.filtered(lambda acq: (acq.payment_flow == 'form' and acq.view_template_id) or (acq.payment_flow == 's2s' and acq.registration_view_template_id)) values['pms'] = request.env['payment.token'].search([('partner_id', '=', order_sudo.partner_id.id)]) values['acq_extra_fees'] = acquirers.get_acquirer_extra_fees(sum_total, order_sudo.currency_id, order_sudo.partner_id.country_id.id) _logger.info("Testing values: %s", values) return request.render('main_membership_dashboard.mass_sale_order_portal_template', values) If I remove <t t-call="portal.portal_layout"> then it works but without css styling, I really can't trace the error or understand whats going on
[ "My issue was in the controller file that was rendering the template. I misspelled the keyword \"website='True'\". If it is not there then add the keyword, if it is then check that you wrote it properly. Hope it helps.\n" ]
[ 0 ]
[]
[]
[ "odoo", "odoo_14", "python" ]
stackoverflow_0070158835_odoo_odoo_14_python.txt
Q: How do I set up my Django urlpatterns within my app (not project) Let's say I've got the classic "School" app within my Django project. My school/models.py contains models for both student and course. All my project files live within a directory I named config. How do I write an include statement(s) within config/urls.py that references two separate endpoints within school/urls.py? And then what do I put in schools/urls.py? For example, if I were trying to define an endpoint just for students, in config/urls.py I would do something like this: from django.urls import path, include urlpatterns = [ ... path("students/", include("school.urls"), name="students"), ... ] And then in school/urls.py I would do something like this: from django.urls import path from peakbagger.views import StudentCreateView, StudentDetailView, StudentListView, StudentUpdateView, StudentDeleteView urlpatterns = [ # ... path("", StudentListView.as_view(), name="student-list"), path("add/", StudentCreateView.as_view(), name="student-add"), path("<int:pk>/", StudentDetailView.as_view(), name="student-detail"), path("<int:pk>/update/", StudentUpdateView.as_view(), name="student-update"), path("<int:pk>/delete/", StudentDeleteView.as_view(), name="student-delete"), ] But how do I do I add another urlpattern to config/urls.py along the lines of something like this? The include statement needs some additional info/parameters, no? from django.urls import path, include urlpatterns = [ ... path("students/", include("school.urls"), name="students"), path("courses/", include("school.urls"), name="courses"), ... ] And then what happens inside of school/urls.py? I'm open to suggestions, and definitely am a neophyte when it comes to the Django philosophy. Do I need an additional urls.py somewhere? I'd prefer not to put everything in config/urls.py and I'd prefer not to build a separate app for both students and courses. A: I would rather create two (or more) urls.py files and then point them separately. # directory structure school/ ├── admin.py ├── apps.py ├── __init__.py ├── migrations │   └── __init__.py ├── models.py ├── tests.py ├── urls │   ├── course.py │   ├── __init__.py │   └── student.py └── views.py # school/urls/course.py from django.urls import path from school.views import CourseListView urlpatterns = [ path("", CourseListView.as_view(), name="course_list"), # other URLs ] # school/urls/student.py from django.urls import path from school.views import StudentListView urlpatterns = [ path("", StudentListView.as_view(), name="student_list"), # other URLs ] # config/urls.py from django.urls import include, path urlpatterns = [ path("student/", include("school.urls.student")), path("course/", include("school.urls.course")), # other URLs ] A: The best solution for you is to make separate urls directory inside your app For example if you have school as app then app ├── School │ ├── views.py │ └── models.py | └── urls | └── __init__.py | └── urls.py | └── school_urls.py | └── course_urls.py Now in your main project urls you can set this way urlpatterns = [ ... path("", include("school.urls"), name="students"), ... ] and in urls.py of your school urls folder you can do this way urlpatterns = [ ... path("students/", include("school.urls.school_urls"), name="students"), path("course/", include("school.urls.course_urls"), name="course"), ... ] and you can do add course view in course url folder and another student view in student urls file
How do I set up my Django urlpatterns within my app (not project)
Let's say I've got the classic "School" app within my Django project. My school/models.py contains models for both student and course. All my project files live within a directory I named config. How do I write an include statement (or statements) within config/urls.py that references two separate endpoints within school/urls.py? And then what do I put in school/urls.py? For example, if I were trying to define an endpoint just for students, in config/urls.py I would do something like this: from django.urls import path, include urlpatterns = [ ... path("students/", include("school.urls"), name="students"), ... ] And then in school/urls.py I would do something like this: from django.urls import path from peakbagger.views import StudentCreateView, StudentDetailView, StudentListView, StudentUpdateView, StudentDeleteView urlpatterns = [ # ... path("", StudentListView.as_view(), name="student-list"), path("add/", StudentCreateView.as_view(), name="student-add"), path("<int:pk>/", StudentDetailView.as_view(), name="student-detail"), path("<int:pk>/update/", StudentUpdateView.as_view(), name="student-update"), path("<int:pk>/delete/", StudentDeleteView.as_view(), name="student-delete"), ] But how do I add another urlpattern to config/urls.py along the lines of something like this? The include statement needs some additional info/parameters, no? from django.urls import path, include urlpatterns = [ ... path("students/", include("school.urls"), name="students"), path("courses/", include("school.urls"), name="courses"), ... ] And then what happens inside school/urls.py? I'm open to suggestions, and am definitely a neophyte when it comes to the Django philosophy. Do I need an additional urls.py somewhere? I'd prefer not to put everything in config/urls.py and I'd prefer not to build a separate app for both students and courses.
[ "I would rather create two (or more) urls.py files and then point them separately.\n# directory structure\nschool/\n├── admin.py\n├── apps.py\n├── __init__.py\n├── migrations\n│   └── __init__.py\n├── models.py\n├── tests.py\n├── urls\n│   ├── course.py\n│   ├── __init__.py\n│   └── student.py\n└── views.py\n\n\n# school/urls/course.py\nfrom django.urls import path\nfrom school.views import CourseListView\n\nurlpatterns = [\n path(\"\", CourseListView.as_view(), name=\"course_list\"),\n # other URLs\n]\n\n\n\n# school/urls/student.py\nfrom django.urls import path\nfrom school.views import StudentListView\n\nurlpatterns = [\n path(\"\", StudentListView.as_view(), name=\"student_list\"),\n # other URLs\n]\n\n\n# config/urls.py\nfrom django.urls import include, path\n\nurlpatterns = [\n path(\"student/\", include(\"school.urls.student\")),\n path(\"course/\", include(\"school.urls.course\")),\n # other URLs\n]\n\n", "The best solution for you is to make separate urls directory inside your app\nFor example if you have school as app then\napp\n├── School\n│ ├── views.py\n│ └── models.py\n| └── urls\n| └── __init__.py\n| └── urls.py\n| └── school_urls.py\n| └── course_urls.py\n\nNow in your main project urls you can set this way\nurlpatterns = [\n ...\n path(\"\", include(\"school.urls\"), name=\"students\"),\n ...\n]\n\nand in urls.py of your school urls folder you can do this way\n urlpatterns = [\n ...\n path(\"students/\", include(\"school.urls.school_urls\"), name=\"students\"),\n path(\"course/\", include(\"school.urls.course_urls\"), name=\"course\"),\n ...\n ]\n\nand you can do add course view in course url folder and another student view in student urls file\n" ]
[ 1, 1 ]
[]
[]
[ "django", "python", "url_pattern" ]
stackoverflow_0074542115_django_python_url_pattern.txt
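With either layout above, a quick sanity check that the split URL modules resolve is to reverse the names (assuming the URL names and prefixes from the first answer):

from django.urls import reverse

reverse("student_list")  # -> "/student/"
reverse("course_list")   # -> "/course/"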
Q: TypeError: Function() missing 1 required positional argument: 'self' there is some problem I cant execute the function in main.py from another file it give's the error self argument missing The code from here I import the car Manger class store it in a object called car and and use car.create() in the while loop MAIN.PY < import time from turtle import Screen from player import Player from car_manager import CarManager # from scoreboard import Scoreboard screen = Screen() screen.setup(width=600, height=600) screen.tracer(0) player = Player() car = CarManager screen.listen() screen.onkeypress(player.moveup,"w") screen.onkeypress(player.moveup,"Up") counter = 0 game_is_on = True while game_is_on: time.sleep(0.1) screen.update() if counter % 6 == 0: car.create() # the error counter += 1 screen.exitonclick() car manger.py < from turtle import Turtle import random # COLORS = ["red", "orange", "yellow", "green", "blue", "purple"] STARTING_MOVE_DISTANCE = 5 MOVE_INCREMENT = 10 class CarManager(Turtle): def __init__(self): super().__init__() self.create() def create(self): self.shape("Square") self.shapesize(stretch_wid=2,stretch_len=1) self.penup() self.Y = random.randint(-250,250) self.goto(300,self.Y) self.setheading(180) def move(self): self.forward(STARTING_MOVE_DISTANCE) A: Instead of car = CarManager, which assigns the class CarManager itself to be the value of car, you wanted car = CarManager(), which creates an instance of type CarManager and assigns that to car. You then don't need to call .create() since __init__() already calls it. Consider just putting that code in __init__, unless you want a factory method like this: class CarManager(Turtle): def __init__(self): super().__init__() self.shape("Square") self.shapesize(stretch_wid=2,stretch_len=1) self.penup() self.Y = random.randint(-250,250) self.goto(300,self.Y) self.setheading(180) @classmethod def create(cls): return cls() def move(self): self.forward(STARTING_MOVE_DISTANCE) car = CarManager.create()
TypeError: Function() missing 1 required positional argument: 'self'
There is a problem: I can't execute a function in main.py that comes from another file; it gives the error that the self argument is missing. In the code below I import the CarManager class, store it in an object called car, and use car.create() in the while loop. MAIN.PY < import time from turtle import Screen from player import Player from car_manager import CarManager # from scoreboard import Scoreboard screen = Screen() screen.setup(width=600, height=600) screen.tracer(0) player = Player() car = CarManager screen.listen() screen.onkeypress(player.moveup,"w") screen.onkeypress(player.moveup,"Up") counter = 0 game_is_on = True while game_is_on: time.sleep(0.1) screen.update() if counter % 6 == 0: car.create() # the error counter += 1 screen.exitonclick() car_manager.py < from turtle import Turtle import random # COLORS = ["red", "orange", "yellow", "green", "blue", "purple"] STARTING_MOVE_DISTANCE = 5 MOVE_INCREMENT = 10 class CarManager(Turtle): def __init__(self): super().__init__() self.create() def create(self): self.shape("Square") self.shapesize(stretch_wid=2,stretch_len=1) self.penup() self.Y = random.randint(-250,250) self.goto(300,self.Y) self.setheading(180) def move(self): self.forward(STARTING_MOVE_DISTANCE)
[ "Instead of car = CarManager, which assigns the class CarManager itself to be the value of car, you wanted car = CarManager(), which creates an instance of type CarManager and assigns that to car. You then don't need to call .create() since __init__() already calls it.\nConsider just putting that code in __init__, unless you want a factory method like this:\nclass CarManager(Turtle):\n def __init__(self):\n super().__init__()\n self.shape(\"Square\")\n self.shapesize(stretch_wid=2,stretch_len=1)\n self.penup()\n self.Y = random.randint(-250,250)\n self.goto(300,self.Y)\n self.setheading(180)\n \n @classmethod\n def create(cls):\n return cls()\n\n def move(self):\n self.forward(STARTING_MOVE_DISTANCE)\n\n\n\ncar = CarManager.create()\n\n" ]
[ 0 ]
[]
[]
[ "oop", "python", "turtle_graphics", "user_interface" ]
stackoverflow_0074542309_oop_python_turtle_graphics_user_interface.txt
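A minimal reproduction of the error, stripped of the turtle details, shows why the missing parentheses matter:

class Demo:
    def create(self):
        return "ok"

d = Demo               # the class itself, not an instance
try:
    d.create()         # no instance, so nothing is bound to self
except TypeError as exc:
    print(exc)         # create() missing 1 required positional argument: 'self'

d = Demo()             # an instance; self is bound automatically
print(d.create())      # ok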
Q: Filtering a dataframe according to datetime column of other dataframe I have two dataframes, denoted by df1 and df2. The df1 has 6 columns and df2 has 4 columns. The df1 has a column date that the smallest unit is second, but in the df2 is the hour. I am going to filter the df1 according to the df2. It means, I need to extract all records in a df1 that has the same hour as the df2. Sample of data for more clarification df1: df2: Date (yyyy-mm-dd hh:mm:ss) Date (yyyy-mm-dd hh:--:--) 2016-03-01 1:02:03 2016-03-01 1:00:00 2016-04-01 1:03:04 2016-04-01 2:00:00 2016-05-01 10:04:05 2016-05-01 3:00:00 2016-05-01 11:07:08 2016-05-01 4:00:00 The desired output is: df1: 2016-03-01 1:02:03 2016-04-01 1:03:04 Only the first two rows in the df1 are extracted because their hours exist in the df2. Thank you in advance A: Use boolean indexing with Series.dt.hour for extract hours with Series.isin: df1['Date'] = pd.to_datetime(df1['Date']) df2['Date'] = pd.to_datetime(df2['Date']) df = df1[df1['Date'].dt.hour.isin(df2['Date'].dt.hour)] print (df) Date 0 2016-03-01 01:02:03 1 2016-04-01 01:03:04 If need match dates with hours use Series.dt.floor for match df2['Date']: df3 = df1[df1['Date'].dt.floor('H').isin(df2['Date'])] print (df3) Date 0 2016-03-01 01:02:03 EDIT: For check how working hours and floor function create helper columns: print (df1.assign(hour=df1['Date'].dt.hour, floor=df1['Date'].dt.floor('H'))) Date hour floor 0 2016-03-01 01:02:03 1 2016-03-01 01:00:00 1 2016-04-01 01:03:04 1 2016-04-01 01:00:00 2 2016-05-01 10:04:05 10 2016-05-01 10:00:00 3 2016-05-01 11:07:08 11 2016-05-01 11:00:00 print (df2.assign(hour=df2['Date'].dt.hour)) Date hour 0 2016-03-01 01:00:00 1 1 2016-04-01 02:00:00 2 2 2016-05-01 03:00:00 3 3 2016-05-01 04:00:00 4 EDIT1: Problem was with timezones in df2['Date', solution is remove them by Series.dt.tz_localize: df3 = df2[df2['Date'].dt.floor('H').dt.tz_localize(None).isin(df1['Date'])]
Filtering a dataframe according to datetime column of other dataframe
I have two dataframes, denoted df1 and df2. df1 has 6 columns and df2 has 4 columns. df1 has a date column whose smallest unit is the second, but in df2 it is the hour. I want to filter df1 according to df2; that is, I need to extract all records in df1 that have the same hour as df2. Sample data for clarification df1: df2: Date (yyyy-mm-dd hh:mm:ss) Date (yyyy-mm-dd hh:--:--) 2016-03-01 1:02:03 2016-03-01 1:00:00 2016-04-01 1:03:04 2016-04-01 2:00:00 2016-05-01 10:04:05 2016-05-01 3:00:00 2016-05-01 11:07:08 2016-05-01 4:00:00 The desired output is: df1: 2016-03-01 1:02:03 2016-04-01 1:03:04 Only the first two rows of df1 are extracted because their hours exist in df2. Thank you in advance
[ "Use boolean indexing with Series.dt.hour for extract hours with Series.isin:\ndf1['Date'] = pd.to_datetime(df1['Date'])\ndf2['Date'] = pd.to_datetime(df2['Date'])\n\n\ndf = df1[df1['Date'].dt.hour.isin(df2['Date'].dt.hour)]\nprint (df)\n Date\n0 2016-03-01 01:02:03\n1 2016-04-01 01:03:04\n\nIf need match dates with hours use Series.dt.floor for match df2['Date']:\ndf3 = df1[df1['Date'].dt.floor('H').isin(df2['Date'])]\nprint (df3)\n Date\n0 2016-03-01 01:02:03\n\nEDIT: For check how working hours and floor function create helper columns:\nprint (df1.assign(hour=df1['Date'].dt.hour, floor=df1['Date'].dt.floor('H')))\n\n Date hour floor\n0 2016-03-01 01:02:03 1 2016-03-01 01:00:00\n1 2016-04-01 01:03:04 1 2016-04-01 01:00:00\n2 2016-05-01 10:04:05 10 2016-05-01 10:00:00\n3 2016-05-01 11:07:08 11 2016-05-01 11:00:00\n\nprint (df2.assign(hour=df2['Date'].dt.hour))\n\n Date hour\n0 2016-03-01 01:00:00 1\n1 2016-04-01 02:00:00 2\n2 2016-05-01 03:00:00 3\n3 2016-05-01 04:00:00 4\n\nEDIT1: Problem was with timezones in df2['Date', solution is remove them by Series.dt.tz_localize:\ndf3 = df2[df2['Date'].dt.floor('H').dt.tz_localize(None).isin(df1['Date'])]\n\n" ]
[ 2 ]
[]
[]
[ "filtering", "pandas", "python" ]
stackoverflow_0074542358_filtering_pandas_python.txt
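A merge-based alternative to the isin filter, in case columns from df2 should be carried along as well — a sketch assuming both Date columns are already datetimes:

# join on the hour-floored timestamp instead of filtering
out = (
    df1.assign(hour_key=df1["Date"].dt.floor("H"))
       .merge(df2.rename(columns={"Date": "hour_key"}), on="hour_key")
       .drop(columns="hour_key")
)

For the sample data this keeps only the 2016-03-01 01:02:03 row, the same result as the floor-based filter above.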
Q: VScode jupyer not loading ipython instance installed in a conda environment I have noticed this both on Linux and MacOS. I have a conda environment for data science stuff, which I have installed ipython, ipykernel, jupyer, and a bunch of other data science dependencies. In VSCode, when I try to select a python interpreter, it shows just fine. I have been able to run regular python files without issue. However, in Jupyter notebooks, when I try to select a kernel, it is only showing the system installed python interpreter (/usr/bin/python and such). Oddly, sometimes if I click the 'Select kernel' button early enough, it will temporarily show the conda environment as an option, but then in a second it will disappear. If I click the option fast enough, it also just resets to the system python and when I try again to select the kernel, it only shows the system options. A: This problem occurs in python 3.11. Open the extension store and change the jupyter extension to pre-release version. Use command python -m pip install jupyter in the terminal. Use shortcuts "Ctrl+Shift+P" and search the following option:
VScode jupyter not loading ipython instance installed in a conda environment
I have noticed this both on Linux and MacOS. I have a conda environment for data science stuff, in which I have installed ipython, ipykernel, jupyter, and a bunch of other data science dependencies. In VSCode, when I try to select a python interpreter, it shows just fine. I have been able to run regular python files without issue. However, in Jupyter notebooks, when I try to select a kernel, it is only showing the system installed python interpreter (/usr/bin/python and such). Oddly, sometimes if I click the 'Select kernel' button early enough, it will temporarily show the conda environment as an option, but then in a second it will disappear. If I click the option fast enough, it also just resets to the system python and when I try again to select the kernel, it only shows the system options.
[ "This problem occurs in python 3.11.\n\nOpen the extension store and change the jupyter extension to pre-release version.\n\n\n\nUse command python -m pip install jupyter in the terminal.\nUse shortcuts \"Ctrl+Shift+P\" and search the following option:\n\n\n" ]
[ 0 ]
[]
[]
[ "ipython", "jupyter_notebook", "python", "visual_studio_code" ]
stackoverflow_0074541174_ipython_jupyter_notebook_python_visual_studio_code.txt
Q: Azure SDK ARM Template deployment: Could not find member 'id' I'm trying to deploy a vm through the python azure sdk with an arm template. I'm using the code provided by microsoft from here: https://learn.microsoft.com/en-us/samples/azure-samples/resource-manager-python-template-deployment/resource-manager-python-template-deployment/ But I get an error when trying to use the template. parameters = my parameters as a python dict parameters = {k: {'value': v} for k, v in parameters.items()} template = self.ts_client.template_specs.get('test-rg', 'deploy-vm.test').as_dict() deployment_properties = {'mode': DeploymentMode.incremental, 'template': template, 'parameters': parameters} self.client.deployments.create_or_update(self.resource_group,'azure-sample', {'properties': deployment_properties, 'tags': []}) The only part that's different from the example code is that I'm not reading the template from a file but I'm getting it through the sdk and converting it into a dictionary, and I pass the deployment_properties into the begin_create_or_update method as a dict. If I don't pass it like this it gives the exception: Parameter 'Deployment.properties' can not be None. However I get this error: azure.core.exceptions.HttpResponseError: (InvalidRequestContent) The request content was invalid and could not be deserialized: 'Could not find member 'id' on object of type 'Template'. Path 'properties.template.id', line 1, position 34.'. Any idea what this could be? A: I tried in my environment and got the same type of error. Console: Make sure you are passing a correct ARM template (template.json) and also check it is in a correct state. Provide a valid id or correct the templates according to the Azure-VM templates using this MS-Docs. After I validated my templates using the document, the virtual machine was deployed successfully using Python with ARM templates. Code: from azure.mgmt.resource import ResourceManagementClient from azure.mgmt.resource.resources.models import DeploymentMode from azure.identity import DefaultAzureCredential import json subscription_id = '<subscription id>' resource_group = '<your rg name >' creds = DefaultAzureCredential() client = ResourceManagementClient(creds, '<sub id>') parameters = { 'virtualMachineName': '', 'location': '', 'virtualMachineRG': '' } parameters = {k: {'value': v} for k, v in parameters.items()} with open(r'<path of file >', 'r') as template_file_fd: template = json.load(template_file_fd) deployment_properties = { 'mode': DeploymentMode.incremental, 'template': template, 'parameters': parameters } deployment_async_operation = client.deployments.begin_create_or_update( resource_group, 'azure-sample', {'properties': deployment_properties, 'tags': []}) deployment_async_operation.wait() Console: Portal: Reference: Microsoft.Compute/virtualMachines - Bicep, ARM template & Terraform AzAPI reference | Microsoft Learn
Azure SDK ARM Template deployment: Could not find member 'id'
I'm trying to deploy a vm through the python azure sdk with an arm template. I'm using the code provided by microsoft from here: https://learn.microsoft.com/en-us/samples/azure-samples/resource-manager-python-template-deployment/resource-manager-python-template-deployment/ But I get an error when trying to use the template. parameters = my parameters as a python dict parameters = {k: {'value': v} for k, v in parameters.items()} template = self.ts_client.template_specs.get('test-rg', 'deploy-vm.test').as_dict() deployment_properties = {'mode': DeploymentMode.incremental, 'template': template, 'parameters': parameters} self.client.deployments.create_or_update(self.resource_group,'azure-sample', {'properties': deployment_properties, 'tags': []}) The only part that's different from the example code is that I'm not reading the template from a file but I'm getting it through the sdk and converting it into a dictionary, and I pass the deployment_properties into the begin_create_or_update method as a dict. If I don't pass it like this it gives the exception: Parameter 'Deployment.properties' can not be None. However I get this error: azure.core.exceptions.HttpResponseError: (InvalidRequestContent) The request content was invalid and could not be deserialized: 'Could not find member 'id' on object of type 'Template'. Path 'properties.template.id', line 1, position 34.'. Any idea what this could be?
[ "I tried in my environment and got same type of error.\nConsole:\n\nMake sure you are passing correct Arm template template.json and also check it is in correct state.\nProvide the valid id or correct the templates according to Azure-VM templates using this MS-Docs.\nAfter I validated my templates using document the virtual machine is deployed successfully using python with Arm templates.\nCode:\nfrom azure.mgmt.resource import ResourceManagementClient\nfrom azure.mgmt.resource.resources.models import DeploymentMode\nfrom azure.identity import DefaultAzureCredential\nimport json\n\n\nsubscription_id = '<subscription id>'\nresource_group = '<your rg name >'\ncreds = DefaultAzureCredential()\nclient = ResourceManagementClient( creds, '<sub id>')\n\nparameters = {\n'virtualMachineName': '',\n'location': '',\n'virtualMachineRG':''\n}\n\nparameters = {k: {'value': v} for k, v in parameters.items()}\nwith open(r'<path of file >', 'r') as template_file_fd:\ntemplate = json.load(template_file_fd)\ndeployment_properties = {\n'mode': DeploymentMode.incremental,\n'template': template,\n'parameters': parameters\n}\ndeployment_async_operation = client.deployments.begin_create_or_update(\nresource_group, 'azure-sample', {'properties': deployment_properties, 'tags': []})\ndeployment_async_operation.wait()\n\nConsole:\n\nPortal:\n\nReference:\nMicrosoft.Compute/virtualMachines - Bicep, ARM template & Terraform AzAPI reference | Microsoft Learn\n" ]
[ 0 ]
[]
[]
[ "arm_template", "azure", "azure_sdk", "json", "python" ]
stackoverflow_0074534876_arm_template_azure_azure_sdk_json_python.txt
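A hedged alternative sketch for the template-spec case: instead of converting the spec resource to a dict (whose extra fields such as id the deployment endpoint rejects), reference the spec version by resource id through template_link. The subscription id, spec name, and version '1.0' below are placeholders, not values confirmed by the question:

spec_version_id = (
    "/subscriptions/<subscription-id>/resourceGroups/test-rg"
    "/providers/Microsoft.Resources/templateSpecs/deploy-vm.test/versions/1.0"
)
deployment_properties = {
    'mode': DeploymentMode.incremental,
    'template_link': {'id': spec_version_id},  # instead of 'template': <spec dict>
    'parameters': parameters,
}
client.deployments.begin_create_or_update(
    resource_group, 'azure-sample', {'properties': deployment_properties})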
Q: How can I destroy the surrounding of a collision? I have a protective wall of stacked rectangles that the player is behind. If the protective wall collides with a bomb, I not only want to destroy the one rectangle but also the side and bottom neighbors. Does anyone have an idea how to get the coordinates of the neighbors? I create the wall with this code: for j in range(int(bodenebenen)): for i in range(int(bodenspalten)): m = Boden(int(i)*bodenbreite,(int(j)*bodenhoehe) ,int(bodenbreite),int(bodenhoehe),620,schutzcolor[random.randint(0,len(schutzcolor) - 1)]) protectivewall.add(m) alle_sprites.add(m) hits = pygame.sprite.groupcollide(bombs,protectivewall,True,True) A: I suggest to inflate the rectangles of the bombs before collision detection. A larger rectangle hits more objects. Use inflate_ip to inflate the rectangles in place and shrink (inverse inflate) the remaining bombs after collision detection. You just need to find a good size by which you want to enlarge the rectangles. I use 10 here just as an example: for b in bombs: b.rect.inflate_ip(10, 10) hits = pygame.sprite.groupcollide(bombs, protectivewall, True, True) for b in bombs: b.rect.inflate_ip(-10, -10)
How can I destroy the surrounding of a collision?
I have a protective wall of stacked rectangles that the player is behind. If the protective wall collides with a bomb, I not only want to destroy the one rectangle but also the side and bottom neighbors. Does anyone have an idea how to get the coordinates of the neighbors? I create the wall with this code: for j in range(int(bodenebenen)): for i in range(int(bodenspalten)): m = Boden(int(i)*bodenbreite,(int(j)*bodenhoehe) ,int(bodenbreite),int(bodenhoehe),620,schutzcolor[random.randint(0,len(schutzcolor) - 1)]) protectivewall.add(m) alle_sprites.add(m) hits = pygame.sprite.groupcollide(bombs,protectivewall,True,True)
[ "I suggest to inflate the rectangles of the bombs before collision detection. A larger rectangle, hits more objects. Use inflate_ip to inflate the rectangles in place and shrink (inverse inflate) the remaining bombs after collision detection. You just need to find a good size by which you want to enlarge the rectangles. I use 10 here just as an example:\nfor b in bombs:\n b.rect.inflate_ip(10, 10)\nhits = pygame.sprite.groupcollide(bombs, protectivewall, True, True)\nfor b in bombs:\n b.rect.inflate_ip(-10, -10)\n\n" ]
[ 1 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074541962_pygame_python.txt
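If the goal is to remove the hit block's side and bottom neighbours specifically (rather than enlarging the bombs' hitboxes), a hedged grid-based sketch is possible, assuming each Boden rect sits exactly on the bodenbreite/bodenhoehe grid as in the creation loop above:

hits = pygame.sprite.groupcollide(bombs, protectivewall, True, True)
for destroyed in hits.values():
    for block in destroyed:
        for other in list(protectivewall):
            dx = other.rect.x - block.rect.x
            dy = other.rect.y - block.rect.y
            # left/right neighbour in the same row, or the block directly below
            if (abs(dx) == bodenbreite and dy == 0) or (dx == 0 and dy == bodenhoehe):
                other.kill()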
Q: Pivoting column while retaining all other columns I have many columns in a table, but only one column that needs to be pivoted with its values. It looks like this: OrderNumber Item YearMonth Total 1 1 2019_01 20 1 2 2019_01 40 1 1 2019_02 30 2 1 2019_02 50 The resulting output should be: OrderNumber Item 2019_01 2019_02 Total 1 1 60 30 20 1 2 60 30 40 1 1 60 30 30 2 1 0 50 50 Basically, sum all the total for each month's order number while retaining all columns. Is there a way to do this? I'm using Pandas A: IIUC, you need a pivot_table + merge: out = (df .merge(df.pivot_table(index='OrderNumber', columns='YearMonth', values='Total', aggfunc='sum', fill_value=0), on='OrderNumber') #.drop(columns='YearMonth') # uncomment to drop unused 'YearMonth' ) Output: OrderNumber Item YearMonth Total 2019_01 2019_02 0 1 1 2019_01 20 60 30 1 1 2 2019_01 40 60 30 2 1 1 2019_02 30 60 30 3 2 1 2019_02 50 0 50 A: df.join( df.groupby(['OrderNumber', 'YearMonth'])['Total'].sum() .unstack(level=1,fill_value=0) , on='OrderNumber') OrderNumber Item YearMonth Total 2019_01 2019_02 0 1 1 2019_01 20 60.0 30.0 1 1 2 2019_01 40 60.0 30.0 2 1 1 2019_02 30 60.0 30.0 3 2 1 2019_02 50 0.0 50.0
Pivoting column while retaining all other columns
I have many columns in a table, but only one column that needs to be pivoted with its values. It looks like this: OrderNumber Item YearMonth Total 1 1 2019_01 20 1 2 2019_01 40 1 1 2019_02 30 2 1 2019_02 50 The resulting output should be: OrderNumber Item 2019_01 2019_02 Total 1 1 60 30 20 1 2 60 30 40 1 1 60 30 30 2 1 0 50 50 Basically, sum all the total for each month's order number while retaining all columns. Is there a way to do this? I'm using Pandas
[ "IIUC, you need a pivot_table + merge:\nout = (df\n .merge(df.pivot_table(index='OrderNumber', columns='YearMonth',\n values='Total', aggfunc='sum', fill_value=0),\n on='OrderNumber')\n #.drop(columns='YearMonth') # uncomment to drop unused 'YearMonth'\n )\n\nOutput:\n OrderNumber Item YearMonth Total 2019_01 2019_02\n0 1 1 2019_01 20 60 30\n1 1 2 2019_01 40 60 30\n2 1 1 2019_02 30 60 30\n3 2 1 2019_02 50 0 50\n\n", "df.join(\n df.groupby(['OrderNumber', 'YearMonth'])['Total'].sum()\n .unstack(level=1,fill_value=0)\n , on='OrderNumber')\n \n OrderNumber Item YearMonth Total 2019_01 2019_02\n 0 1 1 2019_01 20 60.0 30.0\n 1 1 2 2019_01 40 60.0 30.0\n 2 1 1 2019_02 30 60.0 30.0\n 3 2 1 2019_02 50 0.0 50.0\n\n" ]
[ 3, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0071582184_pandas_python.txt
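A self-contained sketch reproducing the accepted merge + pivot_table answer on the sample table from the question:

import pandas as pd

df = pd.DataFrame({'OrderNumber': [1, 1, 1, 2],
                   'Item': [1, 2, 1, 1],
                   'YearMonth': ['2019_01', '2019_01', '2019_02', '2019_02'],
                   'Total': [20, 40, 30, 50]})

# one column per YearMonth, holding each OrderNumber's monthly sum
wide = df.pivot_table(index='OrderNumber', columns='YearMonth',
                      values='Total', aggfunc='sum', fill_value=0)
out = df.merge(wide, on='OrderNumber')
print(out)  # adds 2019_01 and 2019_02 columns (60/30 for order 1, 0/50 for order 2)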
Q: Slow Kalman Filter - How to speed up calculating inverse of 2x2 matrix (np.linalg.inv())? I am currently working on an image processing project and I am using a Kalman filter for the algorithm, among other things. However, the computation time of the Kalman filter is very slow compared to other software components, despite the use of numpy. The predict function is very fast. The update function, however, is not. I think the reason for that could be the calculation of the inverse of the 2x2 matrix np.linalg.inv(). Does anyone have an idea for a faster calculation? Possibly by hardcoding it, or by rearranging the equation to avoid the inverse calculation? I also appreciate other comments on how to get the code faster. I may have overlooked something as well. Thank you very much in advance! KalmanFilter.py: class KalmanFilter(object): def __init__(self, dt, u_x,u_y, std_acc, x_std_meas, y_std_meas): """ :param dt: sampling time (time for 1 cycle) :param u_x: acceleration in x-direction :param u_y: acceleration in y-direction :param std_acc: process noise magnitude :param x_std_meas: standard deviation of the measurement in x-direction :param y_std_meas: standard deviation of the measurement in y-direction """ # Define sampling time self.dt = dt # Define the control input variables self.u = np.array([u_x,u_y]) # Initial State self.x = np.array([0.,0.,0.,0.,0.,0.]) # x, x', x'', y, y', y'' # Define the State Transition Matrix A self.A = np.array([[1., self.dt, 0.5*self.dt**2., 0., 0., 0.], [0., 1., self.dt, 0., 0., 0.], [0., 0., 1., 0., 0., 0.], [0., 0., 0., 1., self.dt, 0.5*self.dt**2.], [0., 0., 0., 0., 1., self.dt], [0., 0., 0., 0., 0., 1.]]) # Define the Control Input Matrix B self.B = np.array([[(self.dt**2.)/2., 0.], [0.,(self.dt**2.)/2.], [self.dt,0.], [0.,self.dt]]) # Define Measurement Mapping Matrix self.H = np.array([[1., 0., 0., 0., 0., 0.], [0., 0., 0., 1., 0., 0.]]) # Initial Process Noise Covariance self.Q = np.array([[(self.dt**4.)/4., (self.dt**3.)/2., (self.dt**2.)/2., 0., 0., 0.], [(self.dt**3.)/2., self.dt**2., self.dt, 0., 0., 0.], [(self.dt**2.)/2., self.dt, 1., 0., 0., 0.], [0., 0., 0., (self.dt**4.)/4., (self.dt**3.)/2., (self.dt**2.)/2.], [0., 0., 0., (self.dt**3.)/2., self.dt**2., self.dt], [0., 0., 0., (self.dt**2.)/2., self.dt, 1.]]) * std_acc**2. # Initial Measurement Noise Covariance self.R = np.array([[x_std_meas**2.,0.], [0., y_std_meas**2.]]) # Initial Covariance Matrix self.P = np.array([[500., 0., 0., 0., 0., 0.], [0., 500., 0., 0., 0., 0.], [0., 0., 500., 0., 0., 0.], [0., 0., 0., 500., 0., 0.], [0., 0., 0., 0., 500., 0.], [0., 0., 0., 0., 0., 500.]]) # Initial Kalman Gain self.K = np.zeros((6, 2)) # Initial System uncertainty self.S = np.zeros((2, 2)) def predict(self): # Update time state if self.A is not None and self.u[0] is not None: self.x = np.dot(self.A, self.x) + np.dot(self.B, self.u) else: self.x = np.dot(self.A, self.x) # Calculate error covariance self.P = np.dot(np.dot(self.A, self.P), self.A.T) + self.Q return self.x[0], self.x[3] def update(self, z): self.S = np.dot(self.H, np.dot(self.P, self.H.T)) + self.R # Calculate the Kalman Gain self.K = np.dot(np.dot(self.P, self.H.T), np.linalg.inv(self.S)) self.x = self.x + np.dot(self.K, (z - np.dot(self.H, self.x))) I = np.eye(self.H.shape[1]) # Update error covariance matrix helper = I - np.dot(self.K, self.H) self.P = np.dot(np.dot(helper, self.P), helper.T) + np.dot(self.K, np.dot(self.R, self.K.T)) return self.x[0], self.x[3] I replaced all existing functions with numpy, but not much changed. In my previous search on the internet, I found that np.linalg.inv() could be the reason why it's slow. But I can't find a solution to get rid of it. A: You might get some speedup this way: def __init__(self, dt, u_x,u_y, std_acc, x_std_meas, y_std_meas): # your existing code self.I = np.eye(self.H.shape[1]) And, def update(self, z): # you can cut down on 1 dot product if you save P@H.T in an intermediate variable P_HT = np.dot(self.P, self.H.T) self.S = np.dot(self.H, P_HT) + self.R # Calculate the Kalman Gain self.K = np.dot(P_HT, np.linalg.inv(self.S)) self.x = self.x + np.dot(self.K, (z - np.dot(self.H, self.x))) # Update error covariance matrix >>> use self.I helper = self.I - np.dot(self.K, self.H) self.P = np.dot(np.dot(helper, self.P), helper.T) + np.dot(self.K, np.dot(self.R, self.K.T)) return self.x[0], self.x[3]
Slow Kalman Filter - How to speed up calculating inverse of 2x2 matrix (np.linalg.inv())?
I am currently working on an image processing project and I am using a Kalman filter for the algorithm, among other things. However, the computation time of the Kalman filter is very slow compared to other software components, despite the use of numpy. The predict function is very fast. The update function, however, is not. I think the reason for that could be the calculation of the inverse of the 2x2 matrix np.linalg.inv(). Does anyone have an idea for a faster calculation? Possibly by hardcoding it, or by rearranging the equation to avoid the inverse calculation? I also appreciate other comments on how to get the code faster. I may have overlooked something as well. Thank you very much in advance! KalmanFilter.py: class KalmanFilter(object): def __init__(self, dt, u_x,u_y, std_acc, x_std_meas, y_std_meas): """ :param dt: sampling time (time for 1 cycle) :param u_x: acceleration in x-direction :param u_y: acceleration in y-direction :param std_acc: process noise magnitude :param x_std_meas: standard deviation of the measurement in x-direction :param y_std_meas: standard deviation of the measurement in y-direction """ # Define sampling time self.dt = dt # Define the control input variables self.u = np.array([u_x,u_y]) # Initial State self.x = np.array([0.,0.,0.,0.,0.,0.]) # x, x', x'', y, y', y'' # Define the State Transition Matrix A self.A = np.array([[1., self.dt, 0.5*self.dt**2., 0., 0., 0.], [0., 1., self.dt, 0., 0., 0.], [0., 0., 1., 0., 0., 0.], [0., 0., 0., 1., self.dt, 0.5*self.dt**2.], [0., 0., 0., 0., 1., self.dt], [0., 0., 0., 0., 0., 1.]]) # Define the Control Input Matrix B self.B = np.array([[(self.dt**2.)/2., 0.], [0.,(self.dt**2.)/2.], [self.dt,0.], [0.,self.dt]]) # Define Measurement Mapping Matrix self.H = np.array([[1., 0., 0., 0., 0., 0.], [0., 0., 0., 1., 0., 0.]]) # Initial Process Noise Covariance self.Q = np.array([[(self.dt**4.)/4., (self.dt**3.)/2., (self.dt**2.)/2., 0., 0., 0.], [(self.dt**3.)/2., self.dt**2., self.dt, 0., 0., 0.], [(self.dt**2.)/2., self.dt, 1., 0., 0., 0.], [0., 0., 0., (self.dt**4.)/4., (self.dt**3.)/2., (self.dt**2.)/2.], [0., 0., 0., (self.dt**3.)/2., self.dt**2., self.dt], [0., 0., 0., (self.dt**2.)/2., self.dt, 1.]]) * std_acc**2. # Initial Measurement Noise Covariance self.R = np.array([[x_std_meas**2.,0.], [0., y_std_meas**2.]]) # Initial Covariance Matrix self.P = np.array([[500., 0., 0., 0., 0., 0.], [0., 500., 0., 0., 0., 0.], [0., 0., 500., 0., 0., 0.], [0., 0., 0., 500., 0., 0.], [0., 0., 0., 0., 500., 0.], [0., 0., 0., 0., 0., 500.]]) # Initial Kalman Gain self.K = np.zeros((6, 2)) # Initial System uncertainty self.S = np.zeros((2, 2)) def predict(self): # Update time state if self.A is not None and self.u[0] is not None: self.x = np.dot(self.A, self.x) + np.dot(self.B, self.u) else: self.x = np.dot(self.A, self.x) # Calculate error covariance self.P = np.dot(np.dot(self.A, self.P), self.A.T) + self.Q return self.x[0], self.x[3] def update(self, z): self.S = np.dot(self.H, np.dot(self.P, self.H.T)) + self.R # Calculate the Kalman Gain self.K = np.dot(np.dot(self.P, self.H.T), np.linalg.inv(self.S)) self.x = self.x + np.dot(self.K, (z - np.dot(self.H, self.x))) I = np.eye(self.H.shape[1]) # Update error covariance matrix helper = I - np.dot(self.K, self.H) self.P = np.dot(np.dot(helper, self.P), helper.T) + np.dot(self.K, np.dot(self.R, self.K.T)) return self.x[0], self.x[3] I replaced all existing functions with numpy, but not much changed. In my previous search on the internet, I found that np.linalg.inv() could be the reason why it's slow. But I can't find a solution to get rid of it.
[ "You might get some speedup this way:\ndef __init__(self, dt, u_x,u_y, std_acc, x_std_meas, y_std_meas):\n \n # your existing code\n\n self.I = np.eye(self.H.shape[1])\n\nAnd,\ndef update(self, z):\n # you can cut down on 1 dot product if you save P@H.T in an intermediate variable\n P_HT = np.dot(self.P, self.H.T)\n\n self.S = np.dot(self.H, P_HT) + self.R\n\n # Calculate the Kalman Gain\n self.K = np.dot(P_HT, np.linalg.inv(self.S))\n\n self.x = self.x + np.dot(self.K, (z - np.dot(self.H, self.x)))\n\n # Update error covariance matrix >>> use self.I\n helper = self.I - np.dot(self.K, self.H)\n self.P = np.dot(np.dot(helper, self.P), helper.T) + np.dot(self.K, np.dot(self.R, self.K.T))\n\n return self.x[0], self.x[3]\n\n" ]
[ 0 ]
[]
[]
[ "kalman_filter", "numpy", "performance", "python" ]
stackoverflow_0074541691_kalman_filter_numpy_performance_python.txt
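Since S in this filter is always 2x2, np.linalg.inv can be avoided entirely with the closed-form inverse of a 2x2 matrix; a sketch (inv_2x2 is a helper name, not part of the original class):

import numpy as np

def inv_2x2(m):
    # [[a, b], [c, d]]^-1 = [[d, -b], [-c, a]] / (a*d - b*c)
    a, b, c, d = m[0, 0], m[0, 1], m[1, 0], m[1, 1]
    return np.array([[d, -b], [-c, a]]) / (a * d - b * c)

# in update(), after P_HT = np.dot(self.P, self.H.T):
# self.K = np.dot(P_HT, inv_2x2(self.S))

Another standard option is to solve the linear system instead of inverting, e.g. self.K = np.linalg.solve(self.S.T, P_HT.T).T.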
Q: Why does my Python run in the VS Code terminal but not in VS Code I keep getting a syntax error when using print(f"Addition: {num1} + {num2} = {num1 + num2}") in my code. The code also doesn't run when I double click and select 'Run python file in terminal' but it runs when I double click and select 'Run selection/line in Python terminal'. I have the latest python installed through homebrew. A: Features like f-strings were introduced in Python 3.6. Therefore, in order to solve this problem, you need to update to Python 3.6 or higher.
Why does my Python run in the VS Code terminal but not in VS Code
I keep getting a syntax error when using print(f"Addition: {num1} + {num2} = {num1 + num2}") in my code. The code also doesn't run when I double click and select 'Run python file in terminal' but it runs when I double click and select 'Run selection/line in Python terminal'. I have the latest python installed through homebrew.
[ "Functions like f-string is introduced from Python 3.6.\nTherefore, in order to solve this problem, you need to update to Python 3.6 or higher.\n" ]
[ 0 ]
[]
[]
[ "python", "terminal", "visual_studio_code" ]
stackoverflow_0074533488_python_terminal_visual_studio_code.txt
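As a quick check that the interpreter version is the culprit, the same line can be written without an f-string; this str.format form runs on Python 2.7 as well as 3.x:

num1, num2 = 2, 3
print("Addition: {0} + {1} = {2}".format(num1, num2, num1 + num2))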
Q: Pyvis graph won't stop moving I'm trying to make a project where I create a graph from a python project. I have this code import os import sys import re import networkx as nx from pyvis.physics import Physics from radon.visitors import ComplexityVisitor from pyvis.network import Network rootDir ="/home/ask/Git/Zeeguu-API" depth = int(sys.argv[1]) class directory: def __init__(self,path, ParentDir = None,ChildrenDirs = [] , PyChildren = []) -> None: self.path = path self.parentDir = ParentDir self.pyChildren = ChildrenDirs self.pyChildren = PyChildren def getComplexityoffile(file : str): f = open(file, "r") s = f.read() return ComplexityVisitor.from_code(s).total_complexity def getParentOfDir(dir: str): cutlast = dir.split("/")[:-1] join = "/".join(cutlast) if join: return join else: return "./" def extract_importandClass_from_line(unline): x = re.search("^import (\S+)", unline) x = re.search("^from (\S+)", unline) return x.group(1)#, c.group(1).split('(')[0] def getimportsforfile(file): lines = [line for line in open(file)] classes = [] all_imports = [] for line in lines: try: imports = extract_importandClass_from_line(line) tmp = imports.rsplit('.',1) importEnd = tmp[-1] # importsimports importsFormatted = imports.replace('.', '/') finalimport = importsFormatted[1:] if importsFormatted.startswith('/') else importsFormatted all_imports.append(importsFormatted) except: continue return all_imports NodesAndComplexity = {} # (node/complexity in folder) # ting jeg vil bruge til at holdestyr på dependencies Map_Dirs_And_Files_To_Displaybledirs = {} pythonFile_to_imports = {} # (Fille importing, file/dir imported) dirsForDisplay = set() # mapping files to parent directories parenDirToChildDir = {} # (parent, [list of children]) G = nx.DiGraph() isRoot = True for root, dirs, files in os.walk(rootDir): pyfiles = list(filter(lambda a : a.endswith('.py'), files)) thisDir = root.replace(rootDir, '') splitDIR = thisDir[1:].split("/")[:depth] if not isRoot: displayableDir = "/" + "/".join(splitDIR) else: displayableDir = "./" isRoot = False # if there is python files on this directory referentialDIr = thisDir[1:] if thisDir.startswith('/') else thisDir Map_Dirs_And_Files_To_Displaybledirs[referentialDIr] = displayableDir if (pyfiles): accumulateComplexity = 0 for f in pyfiles: filepath = root + "/"+ f imports = getimportsforfile(filepath) logFile = thisDir + "/" + f[:-3] accumulateComplexity = accumulateComplexity + getComplexityoffile(filepath) removedslashFromLogfile = logFile[1:] if logFile.startswith('/') else logFile Map_Dirs_And_Files_To_Displaybledirs[removedslashFromLogfile] = displayableDir pythonFile_to_imports[removedslashFromLogfile] = imports if displayableDir not in NodesAndComplexity: NodesAndComplexity[displayableDir] = accumulateComplexity else: NodesAndComplexity[displayableDir] = NodesAndComplexity[displayableDir] + accumulateComplexity if (displayableDir not in dirsForDisplay): dirsForDisplay.add(thisDir) G.add_node(displayableDir, Physics=False) if not isRoot and displayableDir != "./": parent = getParentOfDir(displayableDir) G.add_edge(parent, displayableDir) # setting node sizes for importingfile, importlist in pythonFile_to_imports.items(): for importt in importlist: if importt in Map_Dirs_And_Files_To_Displaybledirs: fromf = Map_Dirs_And_Files_To_Displaybledirs[importingfile] to = Map_Dirs_And_Files_To_Displaybledirs[importt] if fromf != to: G.add_edge(Map_Dirs_And_Files_To_Displaybledirs[importingfile],Map_Dirs_And_Files_To_Displaybledirs[importt], color="red") for node, complexity in NodesAndComplexity.items(): complexixtyDisplay = complexity / 2 G.nodes[node]["size"] = complexixtyDisplay Displayer = Network(directed=True, height="1500px", width="100%") Displayer.from_nx(G) Displayer.barnes_hut(overlap=1) Displayer.show_buttons(filter_=["physics"]) Displayer.show("pik.html") This creates the graph just fine. However, when I create it, the graph is flying around my screen, and it is impossible to actually get a look at it. If I remove Displayer.barnes_hut(overlap=1), then it doesn't move, but then the nodes are all just bunched up on top of each other, and again it is impossible to decipher the graph. How do I get a graph that is both standing (reasonably) still and readable? A: In the show_buttons function add all the buttons, and after creating the pik.html file, open the html file in Google Chrome. In the button options there will be a font category; there you can disable the physics option. From then on the nodes will not move and you can distribute the nodes as you want by moving them.
Pyvis graph won't stop moving
I'm trying to make a project where I create a graph from a python project. I have this code import os import sys import re import networkx as nx from pyvis.physics import Physics from radon.visitors import ComplexityVisitor from pyvis.network import Network rootDir ="/home/ask/Git/Zeeguu-API" depth = int(sys.argv[1]) class directory: def __init__(self,path, ParentDir = None,ChildrenDirs = [] , PyChildren = []) -> None: self.path = path self.parentDir = ParentDir self.pyChildren = ChildrenDirs self.pyChildren = PyChildren def getComplexityoffile(file : str): f = open(file, "r") s = f.read() return ComplexityVisitor.from_code(s).total_complexity def getParentOfDir(dir: str): cutlast = dir.split("/")[:-1] join = "/".join(cutlast) if join: return join else: return "./" def extract_importandClass_from_line(unline): x = re.search("^import (\S+)", unline) x = re.search("^from (\S+)", unline) return x.group(1)#, c.group(1).split('(')[0] def getimportsforfile(file): lines = [line for line in open(file)] classes = [] all_imports = [] for line in lines: try: imports = extract_importandClass_from_line(line) tmp = imports.rsplit('.',1) importEnd = tmp[-1] # importsimports importsFormatted = imports.replace('.', '/') finalimport = importsFormatted[1:] if importsFormatted.startswith('/') else importsFormatted all_imports.append(importsFormatted) except: continue return all_imports NodesAndComplexity = {} # (node/complexity in folder) # ting jeg vil bruge til at holdestyr på dependencies Map_Dirs_And_Files_To_Displaybledirs = {} pythonFile_to_imports = {} # (Fille importing, file/dir imported) dirsForDisplay = set() # mapping files to parent directories parenDirToChildDir = {} # (parent, [list of children]) G = nx.DiGraph() isRoot = True for root, dirs, files in os.walk(rootDir): pyfiles = list(filter(lambda a : a.endswith('.py'), files)) thisDir = root.replace(rootDir, '') splitDIR = thisDir[1:].split("/")[:depth] if not isRoot: displayableDir = "/" + "/".join(splitDIR) else: displayableDir = "./" isRoot = False # if there is python files on this directory referentialDIr = thisDir[1:] if thisDir.startswith('/') else thisDir Map_Dirs_And_Files_To_Displaybledirs[referentialDIr] = displayableDir if (pyfiles): accumulateComplexity = 0 for f in pyfiles: filepath = root + "/"+ f imports = getimportsforfile(filepath) logFile = thisDir + "/" + f[:-3] accumulateComplexity = accumulateComplexity + getComplexityoffile(filepath) removedslashFromLogfile = logFile[1:] if logFile.startswith('/') else logFile Map_Dirs_And_Files_To_Displaybledirs[removedslashFromLogfile] = displayableDir pythonFile_to_imports[removedslashFromLogfile] = imports if displayableDir not in NodesAndComplexity: NodesAndComplexity[displayableDir] = accumulateComplexity else: NodesAndComplexity[displayableDir] = NodesAndComplexity[displayableDir] + accumulateComplexity if (displayableDir not in dirsForDisplay): dirsForDisplay.add(thisDir) G.add_node(displayableDir, Physics=False) if not isRoot and displayableDir != "./": parent = getParentOfDir(displayableDir) G.add_edge(parent, displayableDir) # setting node sizes for importingfile, importlist in pythonFile_to_imports.items(): for importt in importlist: if importt in Map_Dirs_And_Files_To_Displaybledirs: fromf = Map_Dirs_And_Files_To_Displaybledirs[importingfile] to = Map_Dirs_And_Files_To_Displaybledirs[importt] if fromf != to: G.add_edge(Map_Dirs_And_Files_To_Displaybledirs[importingfile],Map_Dirs_And_Files_To_Displaybledirs[importt], color="red") for node, complexity in NodesAndComplexity.items(): complexixtyDisplay = complexity / 2 G.nodes[node]["size"] = complexixtyDisplay Displayer = Network(directed=True, height="1500px", width="100%") Displayer.from_nx(G) Displayer.barnes_hut(overlap=1) Displayer.show_buttons(filter_=["physics"]) Displayer.show("pik.html") This creates the graph just fine. However, when I create it, the graph is flying around my screen, and it is impossible to actually get a look at it. If I remove Displayer.barnes_hut(overlap=1), then it doesn't move, but then the nodes are all just bunched up on top of each other, and again it is impossible to decipher the graph. How do I get a graph that is both standing (reasonably) still and readable?
[ "In the show_buttons function add all the buttons, and after creating the pik.html file, open the html file in Google Chrome. In the buttons option\nthere will be font category, there you can disable the physics option.\nFrom then on the nodes will not move and you can distribute the nodes as you want by moving them.\n" ]
[ 0 ]
[]
[]
[ "python", "pyvis" ]
stackoverflow_0067548160_python_pyvis.txt
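A hedged programmatic sketch of the same idea: precompute positions with networkx so the nodes are spread out, then freeze physics. get_node and toggle_physics are pyvis Network methods, and the scale/seed values below are arbitrary choices, not from the question:

import networkx as nx

pos = nx.spring_layout(G, seed=42, scale=1000)  # static layout computed up front
Displayer = Network(directed=True, height="1500px", width="100%")
Displayer.from_nx(G)
for node, (x, y) in pos.items():
    Displayer.get_node(node)['x'] = x
    Displayer.get_node(node)['y'] = y
Displayer.toggle_physics(False)  # nothing moves in the browser
Displayer.show("pik.html")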
Q: 'DataFrame' object has no attribute 'flush' I'm trying to solve Boston house price prediction problem, but it has this error AttributeError: 'DataFrame' object has no attribute 'flush' and this: ` Cell In [53], line 7, in load_data() 5 def load_data(): 6 datafile= pd.read_csv("housing.csv",sep=',') ----> 7 data = np.fromfile(datafile) 8 feature_names = ['RM', 'LSTAT', 'PTRATIO', 'MEDV'] 9 feature_num = len(feature_names) ` here's a part of my code ` import numpy as np import matplotlib.pyplot as plt import pandas as pd def load_data(): datafile= pd.read_csv("housing.csv",sep=',') data = np.fromfile(datafile) feature_names = ['RM', 'LSTAT', 'PTRATIO', 'MEDV'] feature_num = len(feature_names) data = data.reshape(data.shape[0] // feature_num, feature_num) ratio = 0.8 offset = int(data.shape[0] * ratio) training = data[:offset] maximums, minimums, avge = training.max(axis=0), training.min(axis=0), training.sum(axis=0) / training.shape[0] ` the word "flush" doesn't appear in my code or in my data can anyone give me some idea? A: You are reading the housing.csv file with pd.read_csv, which converts it to a Dataframe object. This leads to the error, because np.fromfile expects a file (str or path), not a Dataframe. To get rid of the error, replace the first two statements in the load_data function with a single suitable numpy function such as np.genfromtxt. import numpy as np import matplotlib.pyplot as plt import pandas as pd def load_data(): data = np.genfromtxt('housing.csv', delimiter=',') feature_names = ['RM', 'LSTAT', 'PTRATIO', 'MEDV'] # [...]
'DataFrame' object has no attribute 'flush'
I'm trying to solve Boston house price prediction problem, but it has this error AttributeError: 'DataFrame' object has no attribute 'flush' and this: ` Cell In [53], line 7, in load_data() 5 def load_data(): 6 datafile= pd.read_csv("housing.csv",sep=',') ----> 7 data = np.fromfile(datafile) 8 feature_names = ['RM', 'LSTAT', 'PTRATIO', 'MEDV'] 9 feature_num = len(feature_names) ` here's a part of my code ` import numpy as np import matplotlib.pyplot as plt import pandas as pd def load_data(): datafile= pd.read_csv("housing.csv",sep=',') data = np.fromfile(datafile) feature_names = ['RM', 'LSTAT', 'PTRATIO', 'MEDV'] feature_num = len(feature_names) data = data.reshape(data.shape[0] // feature_num, feature_num) ratio = 0.8 offset = int(data.shape[0] * ratio) training = data[:offset] maximums, minimums, avge = training.max(axis=0), training.min(axis=0), training.sum(axis=0) / training.shape[0] ` the word "flush" doesn't appear in my code or in my data can anyone give me some idea?
[ "You are reading the housing.csv file with pd.read_csv, which converts it to a Dataframe object. This leads to the error, because np.fromfile expects a file (str or path), not a Dataframe.\nTo get rid of the error, replace the first to statements in the load_data function with a single suitable numpy function such as np.genfromtext.\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\ndef load_data():\n data = np.genfromtxt('housing.csv', delimiter=',')\n feature_names = ['RM', 'LSTAT', 'PTRATIO', 'MEDV']\n # [...]\n\n" ]
[ 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074532226_numpy_python.txt
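If the rest of the pipeline should stay in pandas, a hedged alternative sketch is to keep read_csv and convert the frame, instead of mixing read_csv with np.fromfile; read_csv already yields a 2-D table, so the reshape step becomes unnecessary:

import pandas as pd

def load_data():
    frame = pd.read_csv('housing.csv', sep=',')
    data = frame.to_numpy()             # plain 2-D numeric array
    feature_names = list(frame.columns)
    ratio = 0.8
    offset = int(data.shape[0] * ratio)
    training = data[:offset]
    return training, feature_names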
Q: Is there a way to assign keyboard shortcuts to specific audio outputs on macOS? As far as functionality, I'd just like to assign multiple keyboard shortcuts to individual audio outputs. For instance: cmd+F12 --> Airpods cmd+F11 --> Macbook speakers cmd+F10 --> Headphones I'm very new to this and learning so I'm not looking for a specific answer on how to write it - I'm more interested in the concept and libraries to research to see how far I can get on my own. I wrote something with Python using pynput keyboard and mouse, but it's not exactly what I had in mind since it's taking control of the mouse and using coordinates on my display it's one layout change from not working. (Python preferred as that's what I'm learning, but open to all ideas and suggestions) Thanks in advance! A: Install switchaudio-osx e.g. with brew install switchaudio-osx Use SwitchAudioSource -a to show me exactly how all my speakers are named. Create some 1-line AppleScripts, saving them as applications: do shell script "/usr/local/bin/SwitchAudioSource -s 'MacBook Pro Speakers'" Use Apptivate to give the saved AppleScript app a keyboard shortcut. Note : As mentioned in other answers, there are other popular alternatives to Apptivate for assigning a keyboard shortcut to an AppleScript application. Using SwitchAudio has the advantage of not relying on UI list positions. A big help if your list changes. Run SwitchAudioSource with no options to see help text.
Is there a way to assign keyboard shortcuts to specific audio outputs on macOS?
As far as functionality, I'd just like to assign multiple keyboard shortcuts to individual audio outputs. For instance: cmd+F12 --> Airpods cmd+F11 --> Macbook speakers cmd+F10 --> Headphones I'm very new to this and learning so I'm not looking for a specific answer on how to write it - I'm more interested in the concept and libraries to research to see how far I can get on my own. I wrote something with Python using pynput keyboard and mouse, but it's not exactly what I had in mind since it's taking control of the mouse and using coordinates on my display it's one layout change from not working. (Python preferred as that's what I'm learning, but open to all ideas and suggestions) Thanks in advance!
[ "\nInstall switchaudio-osx e.g. with brew install switchaudio-osx\n\nUse SwitchAudioSource -a to show me exactly how all my speakers are named.\n\nCreate some 1-line AppleScripts, saving them as applications:\n\n\ndo shell script \"/usr/local/bin/SwitchAudioSource -s 'MacBook Pro Speakers'\"\n\nUse Apptivate to give the saved AppleScript app a keyboard shortcut.\n\nNote :\nAs mentioned in other answers, there are other popular alternatives to Apptivate for assigning a keyboard shortcut to an AppleScript application.\nUsing SwitchAudio has the advantage of not relying on UI list positions. A big help if your list changes. Run SwitchAudioSource with no options to see help text.\n" ]
[ 0 ]
[]
[]
[ "macos", "operating_system", "python", "user_interface" ]
stackoverflow_0074542511_macos_operating_system_python_user_interface.txt
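Staying in Python as the question prefers, a hedged sketch is to bind global hotkeys with pynput and shell out to the SwitchAudioSource CLI installed above; the device names and key combinations are assumptions to adapt to your own SwitchAudioSource -a output:

import subprocess
from pynput import keyboard

def switch_to(device):
    # returns a zero-argument callback that switches the output device
    return lambda: subprocess.run(['SwitchAudioSource', '-s', device], check=False)

hotkeys = keyboard.GlobalHotKeys({
    '<cmd>+<f12>': switch_to('AirPods'),
    '<cmd>+<f11>': switch_to('MacBook Pro Speakers'),
    '<cmd>+<f10>': switch_to('External Headphones'),
})
hotkeys.start()
hotkeys.join()  # keep the listener thread alive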
Q: data only alternately gets fetched properly (inconsistently fetched) from a website I'm trying to get the data from a website, and here are the codes of what I did: These are the modules import bs4 import pandas as pd import numpy as np import random import requests from lxml import etree import time from tqdm.notebook import tqdm from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.chrome.options import Options from selenium.webdriver.support import expected_conditions as EC from selenium.common.exceptions import ElementNotInteractableException from time import sleep from webdriver_manager.chrome import ChromeDriverManager Here is for getting the urls of each target product: driver = webdriver.Chrome(ChromeDriverManager().install()) for page in tqdm(range(5, 10)): driver.get("https://shopee.ph/Makeup-Fragrances-cat.11021036?facet=100664&page="+str(page)+"&sortBy=pop") skincare = driver.find_elements(By.XPATH, '//div[@class="col-xs-2-4 shopee-search-item-result__item"]//a[@data-sqe="link"]') for _skincare in tqdm(skincare): urls.append({"url":_skincare.get_attribute('href')}) driver.quit() It was successfully fetched. And here's what I did next: data_final = pd.DataFrame(urls) driver = webdriver.Chrome(ChromeDriverManager().install()) skincares = [] for product in tqdm(data_final["url"]): driver.get(product) try: company = driver.find_element(By.XPATH,"//div[@class='CKGyuW']//div[@class='_1Yaflp page-product__shop']//div[@class='_1YY3XU']//div[@class='zYQ1eS']//div[@class='_3LoNDM']").text except: company = 'none' try: product_name = driver.find_element(By.XPATH,"//div[@class='flex flex-auto eTjGTe']//div[@class='flex-auto flex-column _1Kkkb-']//div[@class='_2rQP1z']//span").text except: product_name = 'none' try: rating = driver.find_element(By.XPATH,"//div[@class='flex _3tkSsu']//div[@class='flex _3T9OoL']//div[@class='_3y5XOB _14izon']").text except: rating = 'none' try: number_of_ratings = driver.find_element(By.XPATH,"//div[@class='flex _3tkSsu']//div[@class='flex _3T9OoL']//div[@class='_3y5XOB']").text except: number_of_ratings = 'none' try: sold = driver.find_element(By.XPATH,"//div[@class='flex _3tkSsu']//div[@class='flex _3EOMd6']//div[@class='HmRxgn']").text except: sold = 'none' try: price = driver.find_element(By.XPATH,"//div[@class='_2Shl1j']").text except: price = 'none' try: description = driver.find_element(By.XPATH,"//div[@class='_1MqcWX']//p[@class='_2jrvqA']").text except: description = 'none' skincares.append({ "url": product, "company": company, "product name": product_name, "rating": rating, "number of ratings": number_of_ratings, "sold": sold, "price": price, "description": description, }) time.sleep(5) I put time.sleep(x) to avoid getting blocked, and I tried x = 1, 1.5, 2, 5, 15. What the code above got was not consistent. Calling skincares_data = pd.DataFrame(skincares) skincares_data I get enter image description here Which is a bunch of blank or not properly fetched data. One thing, though, is that if I rerun the code, I get another set of data in which some of those which are blank now have data, and some of those that were properly fetched are now blank. Running it for another time the same problem occurs. I think being "blocked" by the website isn't the problem here (I just used the time.sleep() to make it sure). Any comments? I tried to get data from a website, I successfully got the urls but the details of each product are not properly fetched. There are a lot of blank data. And alternately they either go blank or get properly fetched. A: Page is being loaded dynamically, as you scroll it down. The following code should solve your issue: [..] wait = WebDriverWait(driver, 15) url='https://shopee.ph/Makeup-Fragrances-cat.11021036?facet=100664&page=1&sortBy=pop' driver.get(url) rows= wait.until(EC.presence_of_all_elements_located((By.XPATH, '//div[contains(@class, "shopee-search-item-result__item")]'))) for r in rows: r.location_once_scrolled_into_view t.sleep(5) products = wait.until(EC.presence_of_all_elements_located((By.XPATH, '//div[@data-sqe="item"]'))) for p in products: name = p.find_element(By.XPATH, './/div[@data-sqe="name"]').text.strip() some_id = p.find_element(By.XPATH, './/a[@data-sqe="link"]').get_attribute('href').split('?sp_atk=')[0].split('-i.')[1] print(name, some_id) All items will be printed in terminal: ORIG M.Q. Cosmetics MACAROON LIP THERAPY LIPBALM WITH SPATULA | MQ wholesale 10092844.9115684791 Magic Lip Therapy Balm in 10g jar (FREE Spatula) Rebranding NO STICKER! 286498185.11511633880 BIOAQUA COLLAGEN Nourish Lips Membrane Moisturizing Lip Mask moisture nourishing skin care soft 295464315.8585504678 Lip therapy Cosmetic Potion lipbalm ₱5 off Free Gift 11055729.11663828134 VASELINE Rosy Lip Stick 4.8g 92328166.8130605004 Collagen Crystal lip mask lips plump gel personal care hydrating lip whitening a smacker wrinkle gel 386726777.2925165359 blk cosmetics fresh lip scrub coco crush 62677292.5532509493 [...] Selenium documentation can be found here
data only alternately gets fetched properly (inconsistently fetched) from a website
I'm trying to get the data from a website, and here are the codes of what I did: These are the modules import bs4 import pandas as pd import numpy as np import random import requests from lxml import etree import time from tqdm.notebook import tqdm from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.chrome.options import Options from selenium.webdriver.support import expected_conditions as EC from selenium.common.exceptions import ElementNotInteractableException from time import sleep from webdriver_manager.chrome import ChromeDriverManager Here is for getting the urls of each target product: driver = webdriver.Chrome(ChromeDriverManager().install()) for page in tqdm(range(5, 10)): driver.get("https://shopee.ph/Makeup-Fragrances-cat.11021036?facet=100664&page="+str(page)+"&sortBy=pop") skincare = driver.find_elements(By.XPATH, '//div[@class="col-xs-2-4 shopee-search-item-result__item"]//a[@data-sqe="link"]') for _skincare in tqdm(skincare): urls.append({"url":_skincare.get_attribute('href')}) driver.quit() It was successfully fetched. And here's what I did next: data_final = pd.DataFrame(urls) driver = webdriver.Chrome(ChromeDriverManager().install()) skincares = [] for product in tqdm(data_final["url"]): driver.get(product) try: company = driver.find_element(By.XPATH,"//div[@class='CKGyuW']//div[@class='_1Yaflp page-product__shop']//div[@class='_1YY3XU']//div[@class='zYQ1eS']//div[@class='_3LoNDM']").text except: company = 'none' try: product_name = driver.find_element(By.XPATH,"//div[@class='flex flex-auto eTjGTe']//div[@class='flex-auto flex-column _1Kkkb-']//div[@class='_2rQP1z']//span").text except: product_name = 'none' try: rating = driver.find_element(By.XPATH,"//div[@class='flex _3tkSsu']//div[@class='flex _3T9OoL']//div[@class='_3y5XOB _14izon']").text except: rating = 'none' try: number_of_ratings = driver.find_element(By.XPATH,"//div[@class='flex _3tkSsu']//div[@class='flex _3T9OoL']//div[@class='_3y5XOB']").text except: number_of_ratings = 'none' try: sold = driver.find_element(By.XPATH,"//div[@class='flex _3tkSsu']//div[@class='flex _3EOMd6']//div[@class='HmRxgn']").text except: sold = 'none' try: price = driver.find_element(By.XPATH,"//div[@class='_2Shl1j']").text except: price = 'none' try: description = driver.find_element(By.XPATH,"//div[@class='_1MqcWX']//p[@class='_2jrvqA']").text except: description = 'none' skincares.append({ "url": product, "company": company, "product name": product_name, "rating": rating, "number of ratings": number_of_ratings, "sold": sold, "price": price, "description": description, }) time.sleep(5) I put time.sleep(x) to avoid getting blocked, and I tried x = 1, 1.5, 2, 5, 15. What the code above got was not consistent. Calling skincares_data = pd.DataFrame(skincares) skincares_data I get enter image description here Which is a bunch of blank or not properly fetched data. One thing, though, is that if I rerun the code, I get another set of data in which some of those which are blank now have data, and some of those that were properly fetched are now blank. Running it for another time the same problem occurs. I think being "blocked" by the website isn't the problem here (I just used the time.sleep() to make it sure). Any comments? I tried to get data from a website, I successfully got the urls but the details of each product are not properly fetched. There are a lot of blank data. And alternately they either go blank or get properly fetched.
[ "Page is being loaded dynamically, as you scroll it down. The following code should solve your issue:\n[..]\nwait = WebDriverWait(driver, 15)\nurl='https://shopee.ph/Makeup-Fragrances-cat.11021036?facet=100664&page=1&sortBy=pop'\ndriver.get(url)\nrows= wait.until(EC.presence_of_all_elements_located((By.XPATH, '//div[contains(@class, \"shopee-search-item-result__item\")]')))\nfor r in rows:\n r.location_once_scrolled_into_view\nt.sleep(5)\nproducts = wait.until(EC.presence_of_all_elements_located((By.XPATH, '//div[@data-sqe=\"item\"]')))\nfor p in products:\n name = p.find_element(By.XPATH, './/div[@data-sqe=\"name\"]').text.strip()\n some_id = p.find_element(By.XPATH, './/a[@data-sqe=\"link\"]').get_attribute('href').split('?sp_atk=')[0].split('-i.')[1]\n print(name, some_id)\n\nAll items will be printed in terminal:\nORIG M.Q. Cosmetics MACAROON LIP THERAPY LIPBALM WITH SPATULA | MQ\nwholesale 10092844.9115684791\nMagic Lip Therapy Balm in 10g jar (FREE Spatula) Rebranding NO STICKER! 286498185.11511633880\nBIOAQUA COLLAGEN Nourish Lips Membrane Moisturizing Lip Mask moisture nourishing skin care soft 295464315.8585504678\nLip therapy Cosmetic Potion lipbalm\n₱5 off\nFree Gift 11055729.11663828134\nVASELINE Rosy Lip Stick 4.8g 92328166.8130605004\nCollagen Crystal lip mask lips plump gel personal care hydrating lip whitening a smacker wrinkle gel 386726777.2925165359\nblk cosmetics fresh lip scrub coco crush 62677292.5532509493\n[...]\n\nSelenium documentation can be found here\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "python", "selenium", "web_scraping" ]
stackoverflow_0074540393_beautifulsoup_python_selenium_web_scraping.txt
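If scrolling every card into view one by one is too slow, a hedged variant is to scroll the whole page in steps with execute_script before collecting the elements; the step count and pause are arbitrary tuning values:

import time

for _ in range(10):
    driver.execute_script("window.scrollBy(0, document.body.scrollHeight / 10);")
    time.sleep(0.5)  # give the lazy-loaded cards time to render
products = wait.until(EC.presence_of_all_elements_located(
    (By.XPATH, '//div[@data-sqe="item"]')))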
Q: Can I add a non-editable field to the class based view UpdateView in Django class EmployeeView(generic.edit.UpdateView): model = Employee fields = '__all__' template_name = 'wfp/employee.html' def get_object(self, queryset=None): return Employee.objects.get(uuid=self.kwargs.get("employee_uuid")) has everything I need except the UUID that is on the employee which is non-editable. I'd really like to include that in the HTTPResponse so I can use elsewhere a link to another page. (Employees have allocations of things) Ideas? Thanks A: Create a EmployeeModelForm class then you can control the process with ease. # forms.py from django import forms class EmployeeModelForm(forms.ModelForm): class Meta: model = Employee exclude = ["your_uuid_field"] and then use the EmployeeModelForm class in your view with the help of form_class attribute # views.py class EmployeeView(generic.edit.UpdateView): model = Employee form_class = EmployeeModelForm template_name = 'wfp/employee.html' def get_object(self, queryset=None): return Employee.objects.get(uuid=self.kwargs.get("employee_uuid"))
Can I add a non-editable field to the class based view UpdateView in Django
class EmployeeView(generic.edit.UpdateView): model = Employee fields = '__all__' template_name = 'wfp/employee.html' def get_object(self, queryset=None): return Employee.objects.get(uuid=self.kwargs.get("employee_uuid")) has everything I need except the UUID that is on the employee which is non-editable. I'd really like to include that in the HTTPResponse so I can use elsewhere a link to another page. (Employees have allocations of things) Ideas? Thanks
[ "Create a EmployeeModelForm class then you can control the process with ease.\n# forms.py\n\nfrom django import forms\n\n\nclass EmployeeModelForm(forms.ModelForm):\n class Meta:\n model = Employee\n exclude = [\"your_uuid_field\"]\n\nand then use the EmployeeModelForm class in your view with the help of form_class attribute\n# views.py\n\nclass EmployeeView(generic.edit.UpdateView):\n model = Employee\n form_class = EmployeeModelForm\n template_name = 'wfp/employee.html'\n\n def get_object(self, queryset=None):\n return Employee.objects.get(uuid=self.kwargs.get(\"employee_uuid\"))\n\n" ]
[ 0 ]
[]
[]
[ "django", "forms", "python" ]
stackoverflow_0074542551_django_forms_python.txt
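To also display the non-editable uuid on the page, one hedged sketch is a disabled, display-only form field; uuid_display is an invented name, and disabled fields render read-only and ignore submitted values:

class EmployeeModelForm(forms.ModelForm):
    uuid_display = forms.UUIDField(disabled=True, required=False, label='UUID')

    class Meta:
        model = Employee
        exclude = ['uuid']

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if self.instance.pk:
            # show the existing value without making it editable
            self.fields['uuid_display'].initial = self.instance.uuid

Alternatively, the template already has the instance, so {{ form.instance.uuid }} works without any form changes.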
Q: Send JSON from curl by POST to Python FastAPI I'm running following script: from fastapi import FastAPI from fastapi import Request import os import uvicorn app = FastAPI() @app.post("/") async def root(data: Request): try: res = await data.json() except Exception as ex: res = str(ex) return res if __name__ == "__main__": prog = os.path.basename(__file__).replace(".py","") uvicorn.run("%s:app" % prog, host="127.0.0.1", port=5000, log_level="debug",reload=True) and try to test it by: curl -d '{"text":"Foo Bar"}' -H "Content-Type: application/json" -X POST http://localhost:5000 What I get is always: "Expecting value: line 1 column 1 (char 0)" What's wrong here? Windows 11, Python 3.9.9 A: On Windows, using single quotes around data (and in general) would not work, and you would thus need to escape double quotes. For example (adjust the port number as required): curl -X "POST" \ "http://127.0.0.1:8000/" \ -H "accept: application/json" \ -H "Content-Type: application/json" \ -d "{\"foo\": \"bar\"}" ^ ^^ ^^ ^^ ^^ ^ The above in a single line: curl -X "POST" "http://127.0.0.1:8000/" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"foo\": \"bar\"}" Note that you can also use the interactive API documentation, provided by Swagger UI at /docs, which allows you to test your API directly from the browser, as well as provides you with cURL command, after submitting the data, which you can copy and test it on your own. For Swagger UI to provide you with the request body area (where you can type the data you would like to send), you need to define a body parameter in your endpoint. Since you seem to be sending arbitrary JSON data, you can use the following (please have a look at this answer and this answer for more details on how to send JSON data to a FastAPI backend). Example: from typing import Dict, Any @app.post('/') def main(payload: Dict[Any, Any]): return payload
Send JSON from curl by POST to Python FastAPI
I'm running following script: from fastapi import FastAPI from fastapi import Request import os import uvicorn app = FastAPI() @app.post("/") async def root(data: Request): try: res = await data.json() except Exception as ex: res = str(ex) return res if __name__ == "__main__": prog = os.path.basename(__file__).replace(".py","") uvicorn.run("%s:app" % prog, host="127.0.0.1", port=5000, log_level="debug",reload=True) and try to test it by: curl -d '{"text":"Foo Bar"}' -H "Content-Type: application/json" -X POST http://localhost:5000 What I get is always: "Expecting value: line 1 column 1 (char 0)" What's wrong here? Windows 11, Python 3.9.9
[ "On Windows, using single quotes around data (and in general) would not work, and you would thus need to escape double quotes. For example (adjust the port number as required):\ncurl -X \"POST\" \\\n \"http://127.0.0.1:8000/\" \\\n -H \"accept: application/json\" \\\n -H \"Content-Type: application/json\" \\\n -d \"{\\\"foo\\\": \\\"bar\\\"}\"\n ^ ^^ ^^ ^^ ^^ ^\n\nThe above in a single line:\ncurl -X \"POST\" \"http://127.0.0.1:8000/\" -H \"accept: application/json\" -H \"Content-Type: application/json\" -d \"{\\\"foo\\\": \\\"bar\\\"}\"\n\nNote that you can also use the interactive API documentation, provided by Swagger UI at /docs, which allows you to test your API directly from the browser, as well as provides you with cURL command, after submitting the data, which you can copy and test it on your own. For Swagger UI to provide you with the request body area (where you can type the data you would like to send), you need to define a body parameter in your endpoint. Since you seem to be sending arbitrary JSON data, you can use the following (please have a look at this answer and this answer for more details on how to send JSON data to a FastAPI backend). Example:\nfrom typing import Dict, Any\n\n@app.post('/')\ndef main(payload: Dict[Any, Any]):\n return payload\n\n" ]
[ 1 ]
[]
[]
[ "curl", "fastapi", "post", "python" ]
stackoverflow_0074537444_curl_fastapi_post_python.txt
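The same request, free of any shell quoting issues, can also be sent from Python itself:

import requests

r = requests.post('http://127.0.0.1:5000', json={'text': 'Foo Bar'})
print(r.status_code, r.json())  # expect the echoed {'text': 'Foo Bar'}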
Q: How to update django database with a list of dictionary items? I have a list of key value pairs here. stat = [{'id': 1, 'status': 'Not Fixed'}, {'id': 2, 'status': 'Not Fixed'}, {'id': 4, 'status': 'Not Fixed'}, {'id': 5, 'status': 'Not Fixed'}, {'id': 6, 'status': 'Not Fixed'}, {'id': 7, 'status': 'Not Fixed'}] The id in this list represents the id(primary key) of my django model. How can I update the existing records in my database with this list? Models.py file class bug(models.Model): ....... ....... status = models.CharField(max_length=25, choices=status_choice, default="Pending") A: EDIT: As it's selected as correct answer I want to copy Hemal's answer https://stackoverflow.com/a/74541837/2281853 to use bulk_update, it's better for DB performance as it runs 1 query only update_objects = [] for update_item in stat: update_objects.append(bug(**update_item)) bug.objects.bulk_update(update_objects, [field for field in stat[0].keys() if field != 'id']) Original answer: for update_item in stat: bug_id = update_item.pop('id') bug.objects.filter(id=bug_id).update(**update_item) With this code, you can update any data in your list not limited to status as long as the columns are defined in your model. A: You can do bulk_update: update_obj = [] for item in stat: update_obj.append(bug(id=item['id'], status=item['status'])) bug.objects.bulk_update(update_obj, ['status']) A: stat = [{'id': 1, 'status': 'Not Fixed'}, {'id': 2, 'status': 'Not Fixed'}, {'id': 4, 'status': 'Not Fixed'}, {'id': 5, 'status': 'Not Fixed'}, {'id': 6, 'status': 'Not Fixed'}, {'id': 7, 'status': 'Not Fixed'}] for record in stat: bug.objects.update_or_create( id=record['id'], defaults={'status': record['status']}, ) We can use update_or_create, which updates existing records and creates any that are not yet in the database.
How to update django database with a list of dictionary items?
I have a list of key value pairs here. stat = [{'id': 1, 'status': 'Not Fixed'}, {'id': 2, 'status': 'Not Fixed'}, {'id': 4, 'status': 'Not Fixed'}, {'id': 5, 'status': 'Not Fixed'}, {'id': 6, 'status': 'Not Fixed'}, {'id': 7, 'status': 'Not Fixed'}] The id in this list represents the id(primary key) of my django model. How can I update the existing records in my database with this list? Models.py file class bug(models.Model): ....... ....... status = models.CharField(max_length=25, choices=status_choice, default="Pending")
[ "EDIT:\nAs it's selected as correct answer I want to copy Hemal's answer https://stackoverflow.com/a/74541837/2281853 to use bulk_update, it's better for DB performance as it runs 1 query only\nupdate_objects = []\nfor update_item in stat:\n update_objects.append(bug(**update_item))\n\nbug.objects.bulk_update(update_objects, [update_field in stat[0].keys() if update_field != 'id'])\n\nOriginal answer:\nfor update_item in stat:\n bug_id = update_item.pop('id')\n bug.objects.filter(id=bug_id).update(**update_item)\n\nWith this code, you can update any data in your list not limited to status as long as the columns are defined in your model.\n", "You can do bulk_update:\nupdate_obj = []\nfor item in stat:\n update_obj.append(bug(id=item['id'], status=item['status']))\n \nbug.objects.bulk_update(update_obj, ['status'])\n\n\n", "\n\nstat = [{'id': 1, 'status': 'Not Fixed'}, {'id': 2, 'status': 'Not Fixed'}, {'id': 4, 'status': 'Not Fixed'}, {'id': 5, 'status': 'Not Fixed'}, {'id': 6, 'status': 'Not Fixed'}, {'id': 7, 'status': 'Not Fixed'}]\nfor record in stat:\n bug.objects.update_or_create(\n id=record['id'],\n defaults={'status': record['status']},\n )\n\n\n\nWe can use an update or create for updating existing and create for not in the database.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "django", "django_forms", "django_models", "django_views", "python" ]
stackoverflow_0074541772_django_django_forms_django_models_django_views_python.txt
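For completeness, a runnable sketch of the dynamic-fields variant from the accepted answer above, assuming every dict in stat carries the same keys and that 'id' is the model's primary key:

# Derive the columns to write from the first dict, excluding the primary key.
update_fields = [f for f in stat[0].keys() if f != 'id']
update_objects = [bug(**item) for item in stat]
# bulk_update issues a single UPDATE for all rows, touching only update_fields.
bug.objects.bulk_update(update_objects, update_fields)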
Q: ruamel.yaml cannot handle NamedTuple So I have the following piece of code: import sys from typing import NamedTuple import ruamel.yaml as ryaml class Loc(NamedTuple): lat: float long: float data = { "APAC": { "rating": 5, "leads": ["Jane", "John"], "locs": [Loc(1.0, 1.0), Loc(2.0, 2.0)], }, "EMEA": { "rating": 5, "leads": ["Jane", "Jack"], "locs": [Loc(3.0, 3.0), Loc(4.0, 4.0)], } } def main(): # Just to check that Loc is indeed recognized as a tuple assert all(map(lambda o: isinstance(o, tuple), data["APAC"]["locs"])) assert all(map(lambda o: isinstance(o, tuple), data["EMEA"]["locs"])) yml = ryaml.YAML() yml.register_class(Loc) yml.dump(data, sys.stdout) if __name__ == '__main__': main() But whenever I execute the code, I end up with a series of exceptions ending with: File "C:\Repos\@Venv\myproj-cpy3.11-1\Lib\site-packages\ruamel\yaml\representer.py", line 1090, in represent_yaml_object anchor = state.pop(Anchor.attrib, None) ^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'pop' If I use a 'bare' tuple instead of Loc(), the error doesn't appear. How do I make ruamel.yaml understand that Loc, a NamedTuple, is a tuple? I'm using CPython 3.11 and ruamel.yaml==0.17.21 A: For some reason you register the Loc class, but you don't tell ruamel.yaml how to dump that class, and that information is not automatically added, and relatively new features (like NamedTuple and e.g. DataClasses) are not explicitly recognised and handled in a special way by the ruamel.yaml codebase (if they were, they probably would not even have to be registered for dumping, but they probably would have to be for loading). The example in the documentation shows that that is done for the registered class, namely by adding a to_yaml class method, so you need to do that for Loc as well. import sys from typing import NamedTuple import ruamel.yaml class Loc(NamedTuple): lat: float long: float @classmethod def to_yaml(cls, representer, node): return representer.represent_sequence('!Loc', node) data = { "APAC": { "rating": 5, "leads": ["Jane", "John"], "locs": [Loc(1.0, 1.0), Loc(2.0, 2.0)], }, "EMEA": { "rating": 5, "leads": ["Jane", "Jack"], "locs": [Loc(3.0, 3.0), Loc(4.0, 4.0)], } } def main(): assert all(map(lambda o: isinstance(o, tuple), data["APAC"]["locs"])) assert all(map(lambda o: isinstance(o, tuple), data["EMEA"]["locs"])) yaml = ruamel.yaml.YAML() yaml.register_class(Loc) yaml.dump(data, sys.stdout) main() which gives: APAC: rating: 5 leads: - Jane - John locs: - !Loc - 1.0 - 1.0 - !Loc - 2.0 - 2.0 EMEA: rating: 5 leads: - Jane - Jack locs: - !Loc - 3.0 - 3.0 - !Loc - 4.0 - 4.0
ruamel.yaml cannot handle NamedTuple
So I have the following piece of code: import sys from typing import NamedTuple import ruamel.yaml as ryaml class Loc(NamedTuple): lat: float long: float data = { "APAC": { "rating": 5, "leads": ["Jane", "John"], "locs": [Loc(1.0, 1.0), Loc(2.0, 2.0)], }, "EMEA": { "rating": 5, "leads": ["Jane", "Jack"], "locs": [Loc(3.0, 3.0), Loc(4.0, 4.0)], } } def main(): # Just to check that Loc is indeed recognized as a tuple assert all(map(lambda o: isinstance(o, tuple), data["APAC"]["locs"])) assert all(map(lambda o: isinstance(o, tuple), data["EMEA"]["locs"])) yml = ryaml.YAML() yml.register_class(Loc) yml.dump(data, sys.stdout) if __name__ == '__main__': main() But whenever I execute the code, I end up with a series of exceptions ending with: File "C:\Repos\@Venv\myproj-cpy3.11-1\Lib\site-packages\ruamel\yaml\representer.py", line 1090, in represent_yaml_object anchor = state.pop(Anchor.attrib, None) ^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'pop' If I use a 'bare' tuple instead of Loc(), the error doesn't appear. How do I make ruamel.yaml understand that Loc, a NamedTuple, is a tuple? I'm using CPython 3.11 and ruamel.yaml==0.17.21
[ "For some reason\nyou register the Loc class, but you don't tell ruamel.yaml how to dump that class, and that information is not automatically added, and relatively new features (like NamedTuple and e.g. DataClasses( are not explicitly recognised and handled in a special way by the ruamel.yaml codebase (if they were, they probably would not even have to be registered for dumping, but they probably would have to be for loading).\nThe example in the documentation shows that that is done for\nthe registered class, namely by adding a to_yaml class method,\nso you need to do that for Loc as well.\nimport sys\nfrom typing import NamedTuple\nimport ruamel.yaml\n\nclass Loc(NamedTuple):\n lat: float\n long: float\n\n @classmethod\n def to_yaml(cls, representer, node):\n return representer.represent_sequence('!Loc', node)\n \ndata = {\n \"APAC\": {\n \"rating\": 5,\n \"leads\": [\"Jane\", \"John\"],\n \"locs\": [Loc(1.0, 1.0), Loc(2.0, 2.0)],\n },\n \"EMEA\": {\n \"rating\": 5,\n \"leads\": [\"Jane\", \"Jack\"],\n \"locs\": [Loc(3.0, 3.0), Loc(4.0, 4.0)],\n }\n}\n\ndef main():\n assert all(map(lambda o: isinstance(o, tuple), data[\"APAC\"][\"locs\"]))\n assert all(map(lambda o: isinstance(o, tuple), data[\"EMEA\"][\"locs\"]))\n\n yaml = ruamel.yaml.YAML()\n yaml.register_class(Loc)\n yaml.dump(data, sys.stdout)\n\nmain()\n\nwhich gives:\nAPAC:\n rating: 5\n leads:\n - Jane\n - John\n locs:\n - !Loc\n - 1.0\n - 1.0\n - !Loc\n - 2.0\n - 2.0\nEMEA:\n rating: 5\n leads:\n - Jane\n - Jack\n locs:\n - !Loc\n - 3.0\n - 3.0\n - !Loc\n - 4.0\n - 4.0\n\n" ]
[ 1 ]
[]
[]
[ "python", "ruamel.yaml" ]
stackoverflow_0074541869_python_ruamel.yaml.txt
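For loading the tagged output back, the answer's to_yaml has a natural counterpart; a sketch (the deep=True flag is an assumption, meant to force child scalars to be constructed before the tuple is built):

class Loc(NamedTuple):
    lat: float
    long: float

    @classmethod
    def to_yaml(cls, representer, node):
        return representer.represent_sequence('!Loc', node)

    @classmethod
    def from_yaml(cls, constructor, node):
        # Rebuild the namedtuple from the two-element YAML sequence.
        return cls(*constructor.construct_sequence(node, deep=True))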
Q: getting the text from attribute value in html I want to get the country code of the products from this website: https://www.skincarisma.com/products/olay/fresh-effects-s-wipe-out-refreshing-make-up-removal-cloths here is the HTML. I tried country = driver.find_element(By.XPATH,'//div[@class="card-subtitle mb-2"]//img[@alt]').text and country = driver.find_element(By.XPATH,'//div[@class="card-subtitle mb-2"]//img').text but it failed to fetch (apparently because my code is wrong) What should be the path? trying: getting the country code A: The text is contained in the alt attribute, therefore: country = driver.find_element(By.XPATH,'//div[@class="card-subtitle mb-2"]//img').get_attribute("alt") A: A more reliable way of getting that information would be: [...] wait = WebDriverWait(driver, 25) url = 'https://www.skincarisma.com/products/olay/fresh-effects-s-wipe-out-refreshing-make-up-removal-cloths' driver.get(url) country = wait.until(EC.presence_of_element_located((By.XPATH, '//product-info//div[@class="card-subtitle mb-2"]//img'))).get_attribute('alt') print(country) Result: US
getting the text from attribute value in html
I want to get the country code of the products from this website: https://www.skincarisma.com/products/olay/fresh-effects-s-wipe-out-refreshing-make-up-removal-cloths here is the HTML. I tried country = driver.find_element(By.XPATH,'//div[@class="card-subtitle mb-2"]//img[@alt]').text and country = driver.find_element(By.XPATH,'//div[@class="card-subtitle mb-2"]//img').text but it failed to fetch (apparently because my code is wrong) What should be the path? trying: getting the country code
[ "The text is contained in the alt attribute, therefore:\ncountry = driver.find_element(By.XPATH,'//div[@class=\"card-subtitle mb-2\"]//img').get_attribute(\"alt\")\n\n", "A more reliable way of getting that information would be:\n[...]\nwait = WebDriverWait(driver, 25)\n\nurl = 'https://www.skincarisma.com/products/olay/fresh-effects-s-wipe-out-refreshing-make-up-removal-cloths'\n\ndriver.get(url)\ncountry = wait.until(EC.presence_of_element_located((By.XPATH, '//product-info//div[@class=\"card-subtitle mb-2\"]//img'))).get_attribute('alt')\nprint(country)\n\nResult:\nUS\n\n" ]
[ 0, 0 ]
[]
[]
[ "beautifulsoup", "python", "selenium", "web_scraping" ]
stackoverflow_0074542630_beautifulsoup_python_selenium_web_scraping.txt
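A small sketch of a related Selenium 4 call, for cases where the attribute exactly as written in the markup is wanted rather than the resolved value (assumes Selenium 4+, with the element lookup from the question):

img = driver.find_element(By.XPATH, '//div[@class="card-subtitle mb-2"]//img')
print(img.get_attribute('alt'))      # resolved attribute/property value
print(img.get_dom_attribute('alt'))  # value exactly as declared in the HTML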
Q: Parallelize pandas apply New to pandas, I already want to parallelize a row-wise apply operation. So far I found Parallelize apply after pandas groupby However, that only seems to work for grouped data frames. My use case is different: I have a list of holidays and for my current row/date want to find the no-of-days before and after this day to the next holiday. This is the function I call via apply: def get_nearest_holiday(x, pivot): nearestHoliday = min(x, key=lambda x: abs(x - pivot)) difference = abs(nearestHoliday - pivot) return difference / np.timedelta64(1, 'D') How can I speed it up? Edit: I experimented a bit with Python's pools - but it was neither nice code, nor did I get my computed results. A: For the parallel approach this is the answer based on Parallelize apply after pandas groupby: from joblib import Parallel, delayed import multiprocessing def get_nearest_dateParallel(df): df['daysBeforeHoliday'] = df.myDates.apply(lambda x: get_nearest_date(holidays.day[holidays.day < x], x)) df['daysAfterHoliday'] = df.myDates.apply(lambda x: get_nearest_date(holidays.day[holidays.day > x], x)) return df def applyParallel(dfGrouped, func): retLst = Parallel(n_jobs=multiprocessing.cpu_count())(delayed(func)(group) for name, group in dfGrouped) return pd.concat(retLst) print ('parallel version: ') # 4 min 30 seconds %time result = applyParallel(datesFrame.groupby(datesFrame.index), get_nearest_dateParallel) but I prefer @NinjaPuppy's approach because it does not require O(n * number_of_holidays) A: I think going down the route of trying stuff in parallel is probably over complicating this. I haven't tried this approach on a large sample so your mileage may vary, but it should give you an idea... Let's just start with some dates... import pandas as pd dates = pd.to_datetime(['2016-01-03', '2016-09-09', '2016-12-12', '2016-03-03']) We'll use some holiday data from pandas.tseries.holiday - note that in effect we want a DatetimeIndex... from pandas.tseries.holiday import USFederalHolidayCalendar holiday_calendar = USFederalHolidayCalendar() holidays = holiday_calendar.holidays('2016-01-01') This gives us: DatetimeIndex(['2016-01-01', '2016-01-18', '2016-02-15', '2016-05-30', '2016-07-04', '2016-09-05', '2016-10-10', '2016-11-11', '2016-11-24', '2016-12-26', ... '2030-01-01', '2030-01-21', '2030-02-18', '2030-05-27', '2030-07-04', '2030-09-02', '2030-10-14', '2030-11-11', '2030-11-28', '2030-12-25'], dtype='datetime64[ns]', length=150, freq=None) Now we find the indices of the nearest nearest holiday for the original dates using searchsorted: indices = holidays.searchsorted(dates) # array([1, 6, 9, 3]) next_nearest = holidays[indices] # DatetimeIndex(['2016-01-18', '2016-10-10', '2016-12-26', '2016-05-30'], dtype='datetime64[ns]', freq=None) Then take the difference between the two: next_nearest_diff = pd.to_timedelta(next_nearest.values - dates.values).days # array([15, 31, 14, 88]) You'll need to be careful about the indices so you don't wrap around, and for the previous date, do the calculation with the indices - 1 but it should act as (I hope) a relatively good base. A: I think that the pandarallel package makes it way easier to do this now. Have not looked into it much, but should do the trick. A: You can also easily parallelize your calculations using the parallel-pandas library. Only two additional lines of code! # pip install parallel-pandas import pandas as pd import numpy as np from parallel_pandas import ParallelPandas #initialize parallel-pandas ParallelPandas.initialize(n_cpu=8, disable_pr_bar=True) def foo(x): """Your awesome function""" return np.sqrt(np.sum(x ** 2)) df = pd.DataFrame(np.random.random((1000, 1000))) %%time res = df.apply(foo, raw=True) Wall time: 5.3 s # p_apply - is parallel analogue of apply method %%time res = df.p_apply(foo, raw=True, executor='processes') Wall time: 1.2 s
Parallelize pandas apply
New to pandas, I already want to parallelize a row-wise apply operation. So far I found Parallelize apply after pandas groupby However, that only seems to work for grouped data frames. My use case is different: I have a list of holidays and for my current row/date want to find the no-of-days before and after this day to the next holiday. This is the function I call via apply: def get_nearest_holiday(x, pivot): nearestHoliday = min(x, key=lambda x: abs(x - pivot)) difference = abs(nearestHoliday - pivot) return difference / np.timedelta64(1, 'D') How can I speed it up? Edit: I experimented a bit with Python's pools - but it was neither nice code, nor did I get my computed results.
[ "For the parallel approach this is the answer based on Parallelize apply after pandas groupby:\nfrom joblib import Parallel, delayed\nimport multiprocessing\n\ndef get_nearest_dateParallel(df):\n df['daysBeforeHoliday'] = df.myDates.apply(lambda x: get_nearest_date(holidays.day[holidays.day < x], x))\n df['daysAfterHoliday'] = df.myDates.apply(lambda x: get_nearest_date(holidays.day[holidays.day > x], x))\n return df\n\ndef applyParallel(dfGrouped, func):\n retLst = Parallel(n_jobs=multiprocessing.cpu_count())(delayed(func)(group) for name, group in dfGrouped)\n return pd.concat(retLst)\n\nprint ('parallel version: ')\n# 4 min 30 seconds\n%time result = applyParallel(datesFrame.groupby(datesFrame.index), get_nearest_dateParallel)\n\nbut I prefer @NinjaPuppy's approach because it does not require O(n * number_of_holidays) \n", "I think going down the route of trying stuff in parallel is probably over complicating this. I haven't tried this approach on a large sample so your mileage may vary, but it should give you an idea...\nLet's just start with some dates...\nimport pandas as pd\n\ndates = pd.to_datetime(['2016-01-03', '2016-09-09', '2016-12-12', '2016-03-03'])\n\nWe'll use some holiday data from pandas.tseries.holiday - note that in effect we want a DatetimeIndex...\nfrom pandas.tseries.holiday import USFederalHolidayCalendar\n\nholiday_calendar = USFederalHolidayCalendar()\nholidays = holiday_calendar.holidays('2016-01-01')\n\nThis gives us:\nDatetimeIndex(['2016-01-01', '2016-01-18', '2016-02-15', '2016-05-30',\n '2016-07-04', '2016-09-05', '2016-10-10', '2016-11-11',\n '2016-11-24', '2016-12-26',\n ...\n '2030-01-01', '2030-01-21', '2030-02-18', '2030-05-27',\n '2030-07-04', '2030-09-02', '2030-10-14', '2030-11-11',\n '2030-11-28', '2030-12-25'],\n dtype='datetime64[ns]', length=150, freq=None)\n\nNow we find the indices of the nearest nearest holiday for the original dates using searchsorted:\nindices = holidays.searchsorted(dates)\n# array([1, 6, 9, 3])\nnext_nearest = holidays[indices]\n# DatetimeIndex(['2016-01-18', '2016-10-10', '2016-12-26', '2016-05-30'], dtype='datetime64[ns]', freq=None)\n\nThen take the difference between the two:\nnext_nearest_diff = pd.to_timedelta(next_nearest.values - dates.values).days\n# array([15, 31, 14, 88])\n\nYou'll need to be careful about the indices so you don't wrap around, and for the previous date, do the calculation with the indices - 1 but it should act as (I hope) a relatively good base.\n", "I think that the pandarallel package makes it way easier to do this now. Have not looked into it much, but should do the trick.\n", "You can also easily parallelize your calculations using the parallel-pandas library. Only two additional lines of code!\n# pip install parallel-pandas\nimport pandas as pd\nimport numpy as np\nfrom parallel_pandas import ParallelPandas\n\n#initialize parallel-pandas\nParallelPandas.initialize(n_cpu=8, disable_pr_bar=True)\n\ndef foo(x):\n \"\"\"Your awesome function\"\"\"\n return np.sqrt(np.sum(x ** 2)) \n\ndf = pd.DataFrame(np.random.random((1000, 1000)))\n\n%%time\nres = df.apply(foo, raw=True)\n\nWall time: 5.3 s\n\n# p_apply - is parallel analogue of apply method\n%%time\nres = df.p_apply(foo, raw=True, executor='processes')\n\nWall time: 1.2 s\n\n" ]
[ 6, 4, 4, 0 ]
[]
[]
[ "apply", "embarrassingly_parallel", "pandas", "parallel_processing", "python" ]
stackoverflow_0039284989_apply_embarrassingly_parallel_pandas_parallel_processing_python.txt
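A dependency-free sketch of the same chunk-and-apply idea using only the standard library (the row-wise function is a hypothetical placeholder and must be defined at module level so it can be pickled):

import multiprocessing as mp
import numpy as np
import pandas as pd

def apply_chunk(chunk):
    return chunk.apply(my_row_func, axis=1)  # my_row_func is a placeholder

def parallel_apply(df, n_jobs=None):
    n_jobs = n_jobs or mp.cpu_count()
    # Split by positional index so each worker gets a contiguous slice.
    parts = [df.iloc[idx] for idx in np.array_split(np.arange(len(df)), n_jobs)]
    with mp.Pool(n_jobs) as pool:
        return pd.concat(pool.map(apply_chunk, parts))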
Q: Dataframe fill rows with values based on condition Let's say I have this dataframe: A | B | C --------- n | b | c n | b | c n | b | c s | b | c n | b | c n | b | c n | b | c e | b | c n | b | c n | b | c s | b | c n | b | c n | b | c n | b | c e | b | c I want to fill and replace the column A rows values with 'x'. The rows to fill are the ones before 's' and after 'e' but not in between. So the result would be something like this: A | B | C --------- x | b | c x | b | c x | b | c s | b | c n | b | c n | b | c n | b | c e | b | c x | b | c x | b | c s | b | c n | b | c n | b | c n | b | c e | b | c Here's what I have tried: def applyFunc(s): if 's' in str(s): return 'x' return '' df['A'] = df['A'].apply(applyFunc) But this only replaces rows where there is 's'. A: First find the rows where a value is after 'e' or 's' with: A = df['A'] # enables shorter reference to df['A'] A.where(A.isin(['e', 's'])).ffill().fillna('e') ['e', 'e', 'e', 's', 's', 's', 's', 'e', 'e', 'e', 's', 's', 's', 's', 'e'] Then find the 'n' where it is after an 's' and replace with 'x': df['new_A'] = A.mask((A.where(A.isin(['e', 's'])).ffill().fillna('e').eq('e')&A.eq('n')), 'x') output: A B C new_A 0 n b c x 1 n b c x 2 n b c x 3 s b c s 4 n b c n 5 n b c n 6 n b c n 7 e b c e 8 n b c x 9 n b c x 10 s b c s 11 n b c n 12 n b c n 13 n b c n 14 e b c e NB. I saved the output in a new column for clarity, but the real code should be df['A'] = … A: Assuming that there are no duplicate s or e within groups, we can Series.mask the n values between s and e. We can track if we're between s and e by comparing where the Series.cumsum of s and e are equal: df['A'] = df['A'].mask( df['A'].eq('s').cumsum().eq(df['A'].eq('e').cumsum()) & df['A'].eq('n'), 'x' ) df: A B C 0 x b c 1 x b c 2 x b c 3 s b c 4 n b c 5 n b c 6 n b c 7 e b c 8 x b c 9 x b c 10 s b c 11 n b c 12 n b c 13 n b c 14 e b c Breakdown of steps as columns: # See Where S are df['S cumsum'] = df['A'].eq('s').cumsum() # See where E are df['E cumsum'] = df['A'].eq('e').cumsum() # See where S and E are the same meaning we have seen both or neither but # not one or the other df['S == E cumsum'] = df['S cumsum'].eq(df['E cumsum']) # See where A is n df['S == E cumsum AND A == n'] = df['S == E cumsum'] & df['A'].eq('n') A B C S cumsum E cumsum S == E cumsum S == E cumsum AND A == n 0 n b c 0 0 True True 1 n b c 0 0 True True 2 n b c 0 0 True True 3 s b c 1 0 False False 4 n b c 1 0 False False 5 n b c 1 0 False False 6 n b c 1 0 False False 7 e b c 1 1 True False 8 n b c 1 1 True True 9 n b c 1 1 True True 10 s b c 2 1 False False 11 n b c 2 1 False False 12 n b c 2 1 False False 13 n b c 2 1 False False 14 e b c 2 2 True False DataFrame and imports: import pandas as pd df = pd.DataFrame({ 'A': ['n', 'n', 'n', 's', 'n', 'n', 'n', 'e', 'n', 'n', 's', 'n', 'n', 'n', 'e'], 'B': ['b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b'], 'C': ['c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c'] }) If there are duplicates we can filter out the desired start and end values (s and e) and take only even groups: df: df = pd.DataFrame({ 'A': ['n', 'n', 'n', 's', 's', 'n', 'n', 'e', 'n', 'n', 's', 'n', 'n', 'e', 'e'], 'B': ['b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b'], 'C': ['c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c'] }) A B C 0 n b c 1 n b c 2 n b c 3 s b c 4 s b c # Duplicate S 5 n b c 6 n b c 7 e b c 8 n b c 9 n b c 10 s b c 11 n b c 12 n b c 13 e b c 14 e b c # Duplicate E Find s and e and filter to keep only even groups: s = df.loc[df['A'].isin(['s', 'e']), 'A'] df['A'] = df['A'].mask( ((df.index.isin(s[s.ne(s.shift())].index).cumsum() % 2) == 0) & df['A'].eq('n'), 'x' ) df: A B C 0 x b c 1 x b c 2 x b c 3 s b c 4 s b c 5 n b c 6 n b c 7 e b c 8 x b c 9 x b c 10 s b c 11 n b c 12 n b c 13 e b c 14 e b c A: df1.assign(s=df1.A.str.contains('s').cumsum())\ .assign(e=df1.A.str.contains('e').cumsum().mul(-1).shift())\ .assign(A=lambda dd:np.where((dd.s+dd.e).isin([0,np.NaN]),'x',dd.A)) A B C s e 0 x b c 0 NaN 1 x b c 0 0.0 2 x b c 0 0.0 3 s b c 1 0.0 4 n b c 1 0.0 5 n b c 1 0.0 6 n b c 1 0.0 7 e b c 1 0.0 8 x b c 1 -1.0 9 x b c 1 -1.0 10 s b c 2 -1.0 11 n b c 2 -1.0 12 n b c 2 -1.0 13 n b c 2 -1.0 14 e b c 2 -1.0
Dataframe fill rows with values based on condition
Let's say I have this dataframe: A | B | C --------- n | b | c n | b | c n | b | c s | b | c n | b | c n | b | c n | b | c e | b | c n | b | c n | b | c s | b | c n | b | c n | b | c n | b | c e | b | c I want to fill and replace the column A rows values with 'x'. The rows to fill are the ones before 's' and after 'e' but not in between. So the result would be something like this: A | B | C --------- x | b | c x | b | c x | b | c s | b | c n | b | c n | b | c n | b | c e | b | c x | b | c x | b | c s | b | c n | b | c n | b | c n | b | c e | b | c Here's what I have tried: def applyFunc(s): if 's' in str(s): return 'x' return '' df['A'] = df['A'].apply(applyFunc) But this only replaces rows where there is 's'.
[ "First find the rows where a value is after 'e' or 's' with:\nA = d['A'] # enables shorter reference to df['A']\nA.where(A.isin(['e', 's'])).ffill().fillna('e')\n\n['e', 'e', 'e', 's', 's', 's', 's', 'e', 'e', 'e', 's', 's', 's', 's', 'e']\n\nThen find the 'n' where is it after a 's' and replace with 'x':\ndf['new_A'] = A.mask((A.where(A.isin(['e', 's'])).ffill().fillna('e').eq('e')&A.eq('n')), 'x')\n\noutput:\n A B C new_A\n0 n b c x\n1 n b c x\n2 n b c x\n3 s b c s\n4 n b c n\n5 n b c n\n6 n b c n\n7 e b c e\n8 n b c x\n9 n b c x\n10 s b c s\n11 n b c n\n12 n b c n\n13 n b c n\n14 e b c e\n\nNB. I saved the output in a new column for clarity, but the real code should be df['A'] = …\n", "Assuming that there are no duplicate s or e within groups, we can Series.mask the n values between s and e. We can track if we're between s and e by comparing there the Series.cumsum of s and e are equal:\ndf['A'] = df['A'].mask(\n df['A'].eq('s').cumsum().eq(df['A'].eq('e').cumsum()) & df['A'].eq('n'),\n 'x'\n)\n\ndf:\n A B C\n0 x b c\n1 x b c\n2 x b c\n3 s b c\n4 n b c\n5 n b c\n6 n b c\n7 e b c\n8 x b c\n9 x b c\n10 s b c\n11 n b c\n12 n b c\n13 n b c\n14 e b c\n\n\nBreakdown of steps as columns:\n# See Where S are\ndf['S cumsum'] = df['A'].eq('s').cumsum()\n# See where E are\ndf['E cumsum'] = df['A'].eq('e').cumsum()\n# See where S and E are the same meaning we have seen both or neither but\n# not one or the other\ndf['S == E cumsum'] = df['S cumsum'].eq(df['E cumsum'])\n# See where A is n\ndf['S == E cumsum AND A == n'] = df['S == E cumsum'] & df['A'].eq('n')\n\n A B C S cumsum E cumsum S == E cumsum S == E cumsum AND A == n\n0 n b c 0 0 True True\n1 n b c 0 0 True True\n2 n b c 0 0 True True\n3 s b c 1 0 False False\n4 n b c 1 0 False False\n5 n b c 1 0 False False\n6 n b c 1 0 False False\n7 e b c 1 1 True False\n8 n b c 1 1 True True\n9 n b c 1 1 True True\n10 s b c 2 1 False False\n11 n b c 2 1 False False\n12 n b c 2 1 False False\n13 n b c 2 1 False False\n14 e b c 2 2 True False\n\n\nDataFrame and imports:\nimport pandas as pd\n\ndf = pd.DataFrame({\n 'A': ['n', 'n', 'n', 's', 'n', 'n', 'n', 'e', 'n', 'n', 's', 'n', 'n', 'n',\n 'e'],\n 'B': ['b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b',\n 'b'],\n 'C': ['c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c',\n 'c']\n})\n\n\nIf there are duplicates we can filter out the desired start and end values (s and e) and take only even groups:\ndf:\ndf = pd.DataFrame({\n 'A': ['n', 'n', 'n', 's', 's', 'n', 'n', 'e', 'n', 'n', 's', 'n', 'n', 'e',\n 'e'],\n 'B': ['b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b',\n 'b'],\n 'C': ['c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c', 'c',\n 'c']\n})\n\n A B C\n0 n b c\n1 n b c\n2 n b c\n3 s b c\n4 s b c # Duplicate S\n5 n b c\n6 n b c\n7 e b c\n8 n b c\n9 n b c\n10 s b c\n11 n b c\n12 n b c\n13 e b c\n14 e b c # Duplicate E\n\nFind s and e and filter to keep only even groups:\ns = df.loc[df['A'].isin(['s', 'e']), 'A']\ndf['A'] = df['A'].mask(\n ((df.index.isin(s[s.ne(s.shift())].index).cumsum() % 2) == 0)\n & df['A'].eq('n'),\n 'x'\n)\n\ndf:\n A B C\n0 x b c\n1 x b c\n2 x b c\n3 s b c\n4 s b c\n5 n b c\n6 n b c\n7 e b c\n8 x b c\n9 x b c\n10 s b c\n11 n b c\n12 n b c\n13 e b c\n14 e b c\n\n", " df1.assign(s=df1.A.str.contains('s').cumsum())\\\n .assign(e=df1.A.str.contains('e').cumsum().mul(-1).shift())\\\n .assign(A=lambda dd:np.where((dd.s+dd.e).isin([0,np.NaN]),'x',dd.A))\n \n\n A B C s e\n0 x b c 0 NaN\n1 x b c 0 0.0\n2 x b c 0 0.0\n3 
s b c 1 0.0\n4 n b c 1 0.0\n5 n b c 1 0.0\n6 n b c 1 0.0\n7 e b c 1 0.0\n8 x b c 1 -1.0\n9 x b c 1 -1.0\n10 s b c 2 -1.0\n11 n b c 2 -1.0\n12 n b c 2 -1.0\n13 n b c 2 -1.0\n14 e b c 2 -1.0\n\n" ]
[ 3, 2, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0068926636_pandas_python.txt
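The cumsum trick from the second answer above generalizes into a small helper; a sketch assuming the start/end markers never overlap:

def mask_outside(series, start='s', end='e', target='n', fill='x'):
    # Outside an s..e block, the running counts of starts and ends are equal.
    outside = series.eq(start).cumsum().eq(series.eq(end).cumsum())
    return series.mask(outside & series.eq(target), fill)

df['A'] = mask_outside(df['A'])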
Q: Pydroid3 opencv -215 assertion failed (permission issue) I have 2 xiaomi smartphones: Xiaomi Redmi 3 (lineageOS, Android 11) and Xiaomi Mi9 lite (MIUI, Android 10). (The goal is to use the Redmi 3 on my pet project). I tried to run the same piece of code on both devices, but it works only with the Mi9 lite. import cv2 cam = cv2.VideoCapture(0) s, img = cam.read() cv2.imwrite('qqq.jpg', img) On the Redmi 3 I've got the error: it looks like some permission issue, because OpenCV can't get the image from the camera. And I don't know how to solve that; I already got the Pydroid permission plugin but that doesn't work. A: Looks like Pydroid can work properly only with the Camera 2 API. But the Redmi 3 camera does not have that technology.
Pydroid3 opencv -215 assertion failed (permission issue)
I have 2 xiaomi smartphones: Xiaomi Redmi 3 (lineageOS, Android 11) and Xiaomi Mi9 lite (MIUI, Android 10). (The goal is to use the Redmi 3 on my pet project). I tried to run the same piece of code on both devices, but it works only with the Mi9 lite. import cv2 cam = cv2.VideoCapture(0) s, img = cam.read() cv2.imwrite('qqq.jpg', img) On the Redmi 3 I've got the error: it looks like some permission issue, because OpenCV can't get the image from the camera. And I don't know how to solve that; I already got the Pydroid permission plugin but that doesn't work.
[ "Looks like pydroid can work properly only with Camera 2 API.\n\nBut redmi 3 camera has not that technology:\n\n" ]
[ 0 ]
[]
[]
[ "android", "mobile", "opencv", "pydroid", "python" ]
stackoverflow_0074542697_android_mobile_opencv_pydroid_python.txt
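Whatever the underlying camera API, the -215 assertion from imwrite can at least be turned into a readable failure by checking the capture before using the frame; a minimal sketch:

import cv2

cam = cv2.VideoCapture(0)
if not cam.isOpened():
    raise RuntimeError('camera could not be opened (permissions or backend)')
ok, img = cam.read()
if ok and img is not None:
    cv2.imwrite('qqq.jpg', img)
cam.release()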
Q: Generate a normal distribution of dates within a range I have a date range - say between 1925-01-01 and 1992-01-01. I'd like to generate a list of x dates between that range, and have those x dates generated follow a 'normal' (bell curve - see image) distribution. There are many many answers on stackoverflow about doing this with integers (using numpy, scipy, etc), but I can't find a solid example with dates A: As per @sascha's comment, a conversion from the dates to a time value does the job: #!/usr/bin/env python3 import time import numpy _DATE_RANGE = ('1925-01-01', '1992-01-01') _DATE_FORMAT = '%Y-%m-%d' _EMPIRICAL_SCALE_RATIO = 0.15 _DISTRIBUTION_SIZE = 1000 def main(): time_range = tuple(time.mktime(time.strptime(d, _DATE_FORMAT)) for d in _DATE_RANGE) distribution = numpy.random.normal( loc=(time_range[0] + time_range[1]) * 0.5, scale=(time_range[1] - time_range[0]) * _EMPIRICAL_SCALE_RATIO, size=_DISTRIBUTION_SIZE ) date_range = tuple(time.strftime(_DATE_FORMAT, time.localtime(t)) for t in numpy.sort(distribution)) print(date_range) if __name__ == '__main__': main() Note that instead of the _EMPIRICAL_SCALE_RATIO, you could (should?) use scipy.stats.truncnorm to generate a truncated normal distribution. A: Here is an implementation using the datetime module that also allows you to generate hours, minutes and seconds, and uses a NumPy/Pandas-friendly date format. from datetime import datetime import numpy as np def main(start, end, date_format, distribution_size, scale_ratio): # Converting to timestamp start = datetime.strptime(start, date_format).timestamp() end = datetime.strptime(end, date_format).timestamp() # Generate Normal Distribution mu = datetime.strptime('1958-01-01T00:00:00', date_format).timestamp() sigma = (end - start) * scale_ratio total_distribution = np.random.normal(loc=mu, scale=sigma, size=distribution_size) # Sort and Convert back to datetime sorted_distribution = np.sort(total_distribution) date_range = tuple(datetime.fromtimestamp(t) for t in sorted_distribution) print(date_range) start = '1925-01-01T00:00:00' end = '1992-01-01T00:00:00' date_format = '%Y-%m-%dT%H:%M:%S' main(start=start, end=end, date_format=date_format, distribution_size=1000, scale_ratio=0.05) Results: You can also blend multiple distributions like this: dist_1 = np.random.normal(loc=mu_1, scale=sigma_1, size=size_1) dist_2 = np.random.normal(loc=mu_2, scale=sigma_2, size=size_2) all_distributions = np.concatenate([dist_1, dist_2])
Generate a normal distribution of dates within a range
I have a date range - say between 1925-01-01 and 1992-01-01. I'd like to generate a list of x dates between that range, and have those x dates generated follow a 'normal' (bell curve - see image) distribution. There are many many answers on stackoverflow about doing this with integers (using numpy, scipy, etc), but I can't find a solid example with dates
[ "As per @sascha's comment, a conversion from the dates to a time value does the job:\n#!/usr/bin/env python3\n\nimport time\nimport numpy\n\n_DATE_RANGE = ('1925-01-01', '1992-01-01')\n_DATE_FORMAT = '%Y-%m-%d'\n_EMPIRICAL_SCALE_RATIO = 0.15\n_DISTRIBUTION_SIZE = 1000\n\ndef main():\n time_range = tuple(time.mktime(time.strptime(d, _DATE_FORMAT))\n for d in _DATE_RANGE)\n distribution = numpy.random.normal(\n loc=(time_range[0] + time_range[1]) * 0.5,\n scale=(time_range[1] - time_range[0]) * _EMPIRICAL_SCALE_RATIO,\n size=_DISTRIBUTION_SIZE\n )\n date_range = tuple(time.strftime(_DATE_FORMAT, time.localtime(t))\n for t in numpy.sort(distribution))\n print(date_range)\n\nif __name__ == '__main__':\n main()\n\nNote that instead of the _EMPIRICAL_SCALE_RATIO, you could (should?) use scipy.stats.truncnorm to generate a truncated normal distribution.\n", "Here is an implementation using datetime module that also allows to generate hours, minutes, seconds & is using Numpy/Pandas friendly date format.\nfrom datetime import datetime\nimport numpy\n\ndef main(start, end, date_format, distribution_size, scale_ratio):\n # Converting to timestamp\n start = datetime.strptime(start, date_format).timestamp()\n end = datetime.strptime(end, date_format).timestamp()\n \n # Generate Normal Distribution\n mu = datetime.strptime('1958-01-01T00:00:00', date_format).timestamp()\n sigma = (end - start) * scale_ratio\n total_distribution = np.random.normal(loc=mu, scale=sigma, size=distribution_size)\n \n # Sort and Convert back to datetime\n sorted_distribution = numpy.sort(total_distribution)\n date_range = tuple(datetime.fromtimestamp(t) for t in sorted_distribution)\n print(date_range)\n\n\n\nstart = '1925-01-01T00:00:00'\nend = '1992-01-01T00:00:00'\ndate_format = '%Y-%m-%dT%H:%M:%S'\n\nmain(start=start, end=end, date_format=date_format, distribution_size=1000, scale_ratio=0.05)\n\nResults:\n\nYou can also blend multiple distributions like this:\ndist_1 = np.random.normal(loc=mu_1, scale=sigma_1, size=size_1)\ndist_2 = np.random.normal(loc=mu_2, scale=sigma_2, size=size_2)\nall_distributions = np.concatenate([dist_1, dist_2])\n\n" ]
[ 5, 0 ]
[]
[]
[ "date", "gaussian", "normal_distribution", "numpy", "python" ]
stackoverflow_0039260616_date_gaussian_normal_distribution_numpy_python.txt
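The truncnorm route mentioned in the first answer, sketched out so no sample can fall outside the range (the 0.15 scale ratio is the same empirical assumption used there):

import time
import numpy as np
from scipy.stats import truncnorm

lo = time.mktime(time.strptime('1925-01-01', '%Y-%m-%d'))
hi = time.mktime(time.strptime('1992-01-01', '%Y-%m-%d'))
mu, sigma = (lo + hi) / 2, (hi - lo) * 0.15
# truncnorm expects its bounds in standard-deviation units around loc.
a, b = (lo - mu) / sigma, (hi - mu) / sigma
samples = truncnorm(a, b, loc=mu, scale=sigma).rvs(1000)
dates = [time.strftime('%Y-%m-%d', time.localtime(t)) for t in np.sort(samples)]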
Q: Cannot resolve keyword 'slug' into field I'm making a comment and reply system in my blog using Django. Now I'm trying to get a queryset of comments that don't have reply comments (if I don't do this, reply comments will be displayed on the page as regular comments). Here is the error that I got: FieldError at /post/fourh-news Cannot resolve keyword 'slug' into field. Choices are: comm_to_repl, comm_to_repl_id, comment_content, created, id, post, post_of_comment, post_of_comment_id, replies, user_created_comment, user_created_comment_id Request Method: GET Request URL: http://127.0.0.1:8000/post/fourh-news Django Version: 4.1.2 Exception Type: FieldError Exception Value: Cannot resolve keyword 'slug' into field. Choices are: comm_to_repl, comm_to_repl_id, comment_content, created, id, post, post_of_comment, post_of_comment_id, replies, user_created_comment, user_created_comment_id Exception Location: D:\pythonProject28django_pet_project\venv\lib\site-packages\django\db\models\sql\query.py, line 1709, in names_to_path Raised during: blog.views.ShowSingleNews Python Version: 3.10.4 Model: class Post(models.Model): title = models.CharField(max_length=150, verbose_name='Название') slug = models.CharField(max_length=100, unique=True, verbose_name='Url slug') content = models.TextField(verbose_name='Контент') created_at = models.DateTimeField(auto_now=True, verbose_name='Дата добавления') updated_at = models.DateTimeField(auto_now=True, verbose_name='Дата обновления') posted_by = models.CharField(max_length=100, verbose_name='Кем добавлено') photo = models.ImageField(upload_to='photos/%Y/%m/%d', verbose_name='Фото', blank=True) views = models.IntegerField(default=0) category = models.ForeignKey('Category', on_delete=models.PROTECT, verbose_name='Категория') tag = models.ManyToManyField('Tags', verbose_name='Тэг', blank=True) comment = models.ForeignKey('Comment', verbose_name='Комментарий', on_delete=models.CASCADE, null=True, blank=True) def __str__(self): return self.title def get_absolute_url(self): return reverse('single_news', kwargs={'slug': self.slug}) class Meta: ordering = ['-created_at'] class Category(models.Model): title = models.CharField(max_length=150, verbose_name='Название') slug = models.CharField(max_length=100, unique=True, verbose_name='category_url_slug') def __str__(self): return self.title def get_absolute_url(self): return reverse('category', kwargs={'slug': self.slug}) class Meta: ordering = ['title'] class Tags(models.Model): title = models.CharField(max_length=150, verbose_name='Название') slug = models.CharField(max_length=100, unique=True, verbose_name='tag_url_slug') def get_absolute_url(self): return reverse('news_by_tag', kwargs={'slug': self.slug}) def __str__(self): return self.title class Meta: ordering = ['title'] class Comment(models.Model): user_created_comment = models.ForeignKey(User, related_name='user', on_delete=models.CASCADE, null=True) post_of_comment = models.ForeignKey(Post, related_name='comments', on_delete=models.CASCADE, null=True) comment_content = models.TextField(verbose_name='Текст комментария') created = models.DateTimeField(auto_now=True) comm_to_repl = models.ForeignKey("self", on_delete=models.CASCADE, null=True, blank=True, related_name='replies') def get_absolute_url(self): return reverse('single_news', kwargs={'slug': self.post_of_comment.slug}) class Meta: ordering = ['-created'] View: class ShowSingleNews(DetailView): model = Post template_name = 'blog/single_news.html' context_object_name = 'post' raise_exception = True form = CommentForm def post(self, request, *args, **kwargs): form = CommentForm(request.POST) if form.is_valid(): post = self.get_object() form.instance.user_created_comment = request.user form.instance.post_of_comment = post commrepl = request.POST.get("commentID") form.instance.comm_to_repl_id = int(commrepl) form.save() else: print("some error with form happened") print(form.errors.as_data()) return redirect(reverse("single_news", kwargs={ "slug": self.get_object().slug })) def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context['title'] = Post.objects.get(slug=self.kwargs['slug']) context['form'] = self.form self.object.views = F('views') + 1 self.object.save() self.object.refresh_from_db() return context def get_queryset(self): return Comment.objects.filter(replies=None) Template: {% extends 'base.html' %} {% load static %} {% load sidebar %} {% block title %} {{ title }} {% endblock %} {% block header %} {% include 'inc/_header.html'%} {% endblock %} {% block content %} <section class="single-blog-area"> <div class="container"> <div class="row"> <div class="col-md-12"> <div class="border-top"> <div class="col-md-8"> <div class="blog-area"> <div class="blog-area-part"> <h2>{{ post.title}}</h2> <h5> {{ post.created_at }}</h5> <img src="{{ post.photo.url }}"> <div> <span>Category: {{ post.category }}</span> <br> <span>Posted by: {{ post.posted_by }}</span> <br> </div> <h5>Views: {{ post.views }}</h5> <p>{{ post.content|safe }}</p> <div class="commententries"> <h3>Comments</h3> {% if user.is_authenticated %} <form method="POST" action="{% url 'single_news' slug=post.slug %}"> {% csrf_token %} <input type="hidden" id="commentID"> <div class="comment"> <input type="text" name="comment_content" placeholder="Comment" class="comment"> </div> <div class="post"> <input type="submit" value="Post"> </div> </form> {% else %} <h5><a href="{% url 'login' %}">Login</a> in order to leave a comment</h5> {% endif %} <ul class="commentlist"> {% if not post.comments.all %} </br> <h5>No comments yet...</h5> {% else %} {% for comment in post.comments.all %} <li> <article class="comment"> <header class="comment-author"> <img src="{{ user.image.url }}" alt=""> </header> <section class="comment-details"> <div class="author-name"> <h5><a href="#">{{ comment.user_created_comment.username }}</a></h5> <p>{{ comment.created }}</p> </div> <div class="comment-body"> <p>{{ comment.comment_content }} </p> </div> <div class="reply"> <p><span><a href="#"><i class="fa fa-thumbs-up" aria-hidden="true"></i></a>12</span><span><button class="fa fa-reply" aria-hidden="true"></button>7</span></p> <form method="POST" action="{% url 'single_news' slug=post.slug %}"> {% csrf_token %} <input type="hidden" name="commentID" value="{{ comment.id }}"> <div class="comment"> <input type="text" name="comment_content" placeholder="Comment" class="replyComment"> </div> <div class="post"> <input type="submit" value="Reply"> </div> </form> </div> </section> {% if comment.replies.all %} {% for reply in comment.replies.all %} <ul class="children"> <li> <article class="comment"> <header class="comment-author"> <img src="{% static 'img/author-2.jpg' %}" alt=""> </header> <section class="comment-details"> <div class="author-name"> <h5><a href="#">{{ reply.user_created_comment.username }}</a></h5> <p>Reply to - {{ reply.comm_to_repl.user_created_comment }}</p> <p>{{ reply.created }}</p> </div> <div class="comment-body"> <p>{{ reply.comment_content}}</p> </div> <div class="reply"> <p><span><a href="#"><i class="fa fa-thumbs-up" aria-hidden="true"></i></a>12</span><span><a href="#"><i class="fa fa-reply" aria-hidden="true"></i></a>7</span></p> <form method="POST" action="{% url 'single_news' slug=post.slug %}"> {% csrf_token %} <input type="hidden" name="commentID" value="{{ reply.id }}"> <div class="comment"> <input type="text" name="comment_content" placeholder="Comment" class="replyComment"> </div> <div class="post"> <input type="submit" value="Reply"> </div> </form> </div> </section> </article> </li> </ul> {% endfor %} {% endif %} </article> {% endfor %} {% endif %} </ul> </div> </div> </div> </div> <div class="col-md-4"> <div class="newsletter"> <h2 class="sidebar-title">Search for the news</h2> <form action="{% url 'search' %}" method="get"> <input type="text" name="s" placeholder="Search..."> <input type="submit" value="Search"> </form> </div> {% get_popular_posts 5 %} <div class="tags" style=""> <h2 class="sidebar-title">Tags</h2> {% for ta in post.tag.all %} <p><a href="{{ ta.get_absolute_url }}">{{ ta.title }}</a></p> {% endfor %} </div> </div> </div> </div> </div> </div> </div> </section> {% endblock %} {% block footer %} {% include 'inc/_footer.html' %} {% endblock %} Urls: urlpatterns = [ path('', HomePage.as_view(), name='home'), path('category/<str:slug>/', GetCategory.as_view(), name='category'), path('post/<str:slug>', ShowSingleNews.as_view(), name='single_news'), path('tag/<str:slug>', GetNewsByTag.as_view(), name='news_by_tag'), path('search/', Search.as_view(), name='search'), path('registration/', registration, name='registration'), path('login/', loginn, name='login'), path('logout/', logoutt, name='logout'), forms: class CommentForm(forms.ModelForm): class Meta: model = Comment fields = ['comment_content'] A: You need to make a small change in how you pass the URL in the HTML, like this... <form method="POST" action="{% url 'single_news' post.slug %}"> {% csrf_token %} <input type="hidden" id="commentID"> <div class="comment"> <input type="text" name="comment_content" placeholder="Comment" class="comment"> </div> <div class="post"> <input type="submit" value="Post"> </div> </form> NOTE: If you want to pass the URL with a keyword argument you can do it like this <form method="POST" action="{% url 'single_news' slug=post.slug %}"> {% csrf_token %} <input type="hidden" id="commentID"> <div class="comment"> <input type="text" name="comment_content" placeholder="Comment" class="comment"> </div> <div class="post"> <input type="submit" value="Post"> </div> </form>
Cannot resolve keyword 'slug' into field
I'm making a comment and reply system in my blog using Django. Now I'm trying to get a queryset of comments that don't have reply comments (if I don't do this, reply comments will be displayed on the page as regular comments). Here is the error that I got: FieldError at /post/fourh-news Cannot resolve keyword 'slug' into field. Choices are: comm_to_repl, comm_to_repl_id, comment_content, created, id, post, post_of_comment, post_of_comment_id, replies, user_created_comment, user_created_comment_id Request Method: GET Request URL: http://127.0.0.1:8000/post/fourh-news Django Version: 4.1.2 Exception Type: FieldError Exception Value: Cannot resolve keyword 'slug' into field. Choices are: comm_to_repl, comm_to_repl_id, comment_content, created, id, post, post_of_comment, post_of_comment_id, replies, user_created_comment, user_created_comment_id Exception Location: D:\pythonProject28django_pet_project\venv\lib\site-packages\django\db\models\sql\query.py, line 1709, in names_to_path Raised during: blog.views.ShowSingleNews Python Version: 3.10.4 Model: class Post(models.Model): title = models.CharField(max_length=150, verbose_name='Название') slug = models.CharField(max_length=100, unique=True, verbose_name='Url slug') content = models.TextField(verbose_name='Контент') created_at = models.DateTimeField(auto_now=True, verbose_name='Дата добавления') updated_at = models.DateTimeField(auto_now=True, verbose_name='Дата обновления') posted_by = models.CharField(max_length=100, verbose_name='Кем добавлено') photo = models.ImageField(upload_to='photos/%Y/%m/%d', verbose_name='Фото', blank=True) views = models.IntegerField(default=0) category = models.ForeignKey('Category', on_delete=models.PROTECT, verbose_name='Категория') tag = models.ManyToManyField('Tags', verbose_name='Тэг', blank=True) comment = models.ForeignKey('Comment', verbose_name='Комментарий', on_delete=models.CASCADE, null=True, blank=True) def __str__(self): return self.title def get_absolute_url(self): return reverse('single_news', kwargs={'slug': self.slug}) class Meta: ordering = ['-created_at'] class Category(models.Model): title = models.CharField(max_length=150, verbose_name='Название') slug = models.CharField(max_length=100, unique=True, verbose_name='category_url_slug') def __str__(self): return self.title def get_absolute_url(self): return reverse('category', kwargs={'slug': self.slug}) class Meta: ordering = ['title'] class Tags(models.Model): title = models.CharField(max_length=150, verbose_name='Название') slug = models.CharField(max_length=100, unique=True, verbose_name='tag_url_slug') def get_absolute_url(self): return reverse('news_by_tag', kwargs={'slug': self.slug}) def __str__(self): return self.title class Meta: ordering = ['title'] class Comment(models.Model): user_created_comment = models.ForeignKey(User, related_name='user', on_delete=models.CASCADE, null=True) post_of_comment = models.ForeignKey(Post, related_name='comments', on_delete=models.CASCADE, null=True) comment_content = models.TextField(verbose_name='Текст комментария') created = models.DateTimeField(auto_now=True) comm_to_repl = models.ForeignKey("self", on_delete=models.CASCADE, null=True, blank=True, related_name='replies') def get_absolute_url(self): return reverse('single_news', kwargs={'slug': self.post_of_comment.slug}) class Meta: ordering = ['-created'] View: class ShowSingleNews(DetailView): model = Post template_name = 'blog/single_news.html' context_object_name = 'post' raise_exception = True form = CommentForm def post(self, request, *args, **kwargs): form = CommentForm(request.POST) if form.is_valid(): post = self.get_object() form.instance.user_created_comment = request.user form.instance.post_of_comment = post commrepl = request.POST.get("commentID") form.instance.comm_to_repl_id = int(commrepl) form.save() else: print("some error with form happened") print(form.errors.as_data()) return redirect(reverse("single_news", kwargs={ "slug": self.get_object().slug })) def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context['title'] = Post.objects.get(slug=self.kwargs['slug']) context['form'] = self.form self.object.views = F('views') + 1 self.object.save() self.object.refresh_from_db() return context def get_queryset(self): return Comment.objects.filter(replies=None) Template: {% extends 'base.html' %} {% load static %} {% load sidebar %} {% block title %} {{ title }} {% endblock %} {% block header %} {% include 'inc/_header.html'%} {% endblock %} {% block content %} <section class="single-blog-area"> <div class="container"> <div class="row"> <div class="col-md-12"> <div class="border-top"> <div class="col-md-8"> <div class="blog-area"> <div class="blog-area-part"> <h2>{{ post.title}}</h2> <h5> {{ post.created_at }}</h5> <img src="{{ post.photo.url }}"> <div> <span>Category: {{ post.category }}</span> <br> <span>Posted by: {{ post.posted_by }}</span> <br> </div> <h5>Views: {{ post.views }}</h5> <p>{{ post.content|safe }}</p> <div class="commententries"> <h3>Comments</h3> {% if user.is_authenticated %} <form method="POST" action="{% url 'single_news' slug=post.slug %}"> {% csrf_token %} <input type="hidden" id="commentID"> <div class="comment"> <input type="text" name="comment_content" placeholder="Comment" class="comment"> </div> <div class="post"> <input type="submit" value="Post"> </div> </form> {% else %} <h5><a href="{% url 'login' %}">Login</a> in order to leave a comment</h5> {% endif %} <ul class="commentlist"> {% if not post.comments.all %} </br> <h5>No comments yet...</h5> {% else %} {% for comment in post.comments.all %} <li> <article class="comment"> <header class="comment-author"> <img src="{{ user.image.url }}" alt=""> </header> <section class="comment-details"> <div class="author-name"> <h5><a href="#">{{ comment.user_created_comment.username }}</a></h5> <p>{{ comment.created }}</p> </div> <div class="comment-body"> <p>{{ comment.comment_content }} </p> </div> <div class="reply"> <p><span><a href="#"><i class="fa fa-thumbs-up" aria-hidden="true"></i></a>12</span><span><button class="fa fa-reply" aria-hidden="true"></button>7</span></p> <form method="POST" action="{% url 'single_news' slug=post.slug %}"> {% csrf_token %} <input type="hidden" name="commentID" value="{{ comment.id }}"> <div class="comment"> <input type="text" name="comment_content" placeholder="Comment" class="replyComment"> </div> <div class="post"> <input type="submit" value="Reply"> </div> </form> </div> </section> {% if comment.replies.all %} {% for reply in comment.replies.all %} <ul class="children"> <li> <article class="comment"> <header class="comment-author"> <img src="{% static 'img/author-2.jpg' %}" alt=""> </header> <section class="comment-details"> <div class="author-name"> <h5><a href="#">{{ reply.user_created_comment.username }}</a></h5> <p>Reply to - {{ reply.comm_to_repl.user_created_comment }}</p> <p>{{ reply.created }}</p> </div> <div class="comment-body"> <p>{{ reply.comment_content}}</p> </div> <div class="reply"> <p><span><a href="#"><i class="fa fa-thumbs-up" aria-hidden="true"></i></a>12</span><span><a href="#"><i class="fa fa-reply" aria-hidden="true"></i></a>7</span></p> <form method="POST" action="{% url 'single_news' slug=post.slug %}"> {% csrf_token %} <input type="hidden" name="commentID" value="{{ reply.id }}"> <div class="comment"> <input type="text" name="comment_content" placeholder="Comment" class="replyComment"> </div> <div class="post"> <input type="submit" value="Reply"> </div> </form> </div> </section> </article> </li> </ul> {% endfor %} {% endif %} </article> {% endfor %} {% endif %} </ul> </div> </div> </div> </div> <div class="col-md-4"> <div class="newsletter"> <h2 class="sidebar-title">Search for the news</h2> <form action="{% url 'search' %}" method="get"> <input type="text" name="s" placeholder="Search..."> <input type="submit" value="Search"> </form> </div> {% get_popular_posts 5 %} <div class="tags" style=""> <h2 class="sidebar-title">Tags</h2> {% for ta in post.tag.all %} <p><a href="{{ ta.get_absolute_url }}">{{ ta.title }}</a></p> {% endfor %} </div> </div> </div> </div> </div> </div> </div> </section> {% endblock %} {% block footer %} {% include 'inc/_footer.html' %} {% endblock %} Urls: urlpatterns = [ path('', HomePage.as_view(), name='home'), path('category/<str:slug>/', GetCategory.as_view(), name='category'), path('post/<str:slug>', ShowSingleNews.as_view(), name='single_news'), path('tag/<str:slug>', GetNewsByTag.as_view(), name='news_by_tag'), path('search/', Search.as_view(), name='search'), path('registration/', registration, name='registration'), path('login/', loginn, name='login'), path('logout/', logoutt, name='logout'), forms: class CommentForm(forms.ModelForm): class Meta: model = Comment fields = ['comment_content']
[ "You need to be a bit of change in passing URL in HTML like this...\n<form method=\"POST\" action=\"{% url 'single_news' post.slug %}\">\n {% csrf_token %}\n <input type=\"hidden\" id=\"commentID\">\n <div class=\"comment\">\n <input type=\"text\" name=\"comment_content\" placeholder=\"Comment\" class=\"comment\">\n </div>\n <div class=\"post\">\n <input type=\"submit\" value=\"Post\">\n </div>\n</form>\n\nNOTE:- If you want to pass url with key you can do like this\n<form method=\"POST\" action=\"{% url 'single_news'?slug=post.slug %}\">\n {% csrf_token %}\n <input type=\"hidden\" id=\"commentID\">\n <div class=\"comment\">\n <input type=\"text\" name=\"comment_content\" placeholder=\"Comment\" class=\"comment\">\n </div>\n <div class=\"post\">\n <input type=\"submit\" value=\"Post\">\n </div>\n</form>\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_forms", "django_models", "django_views", "python" ]
stackoverflow_0074502179_django_django_forms_django_models_django_views_python.txt
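The traceback in this record points at get_queryset: DetailView resolves the slug against whatever queryset that method returns, and here it returns Comment objects, which have no slug field. A sketch of an alternative fix that keeps the view on Post and filters top-level comments separately (comm_to_repl is null for non-replies in the question's models):

class ShowSingleNews(DetailView):
    model = Post  # leave get_queryset alone so the slug lookup hits Post
    ...
    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context['top_level_comments'] = self.object.comments.filter(
            comm_to_repl__isnull=True)
        return context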
Q: How to solve "error: Microsoft Visual C++ 14.0 or greater is required" when installing Python packages? I'm trying to install a package on Python, but Python is throwing an error on installing packages. I'm getting an error every time I tried to install pip install google-search-api. Here is the error how can I successfully install it? error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ I already updated that and have the latest version of 14.27 but the problem is throwing the same error. A: Go to this link and download Microsoft C++ Build Tools: https://visualstudio.microsoft.com/visual-cpp-build-tools/ Open the installer, then follow the steps. You might have something like this, just download it or resume. If updating above doesn't work then you need to configure or make some updates here. You can make some updates here too by clicking "Modify". Check that and download what you need there or you might find that you just need to update Microsoft Visual C++ as stated on the error, but I also suggest updating everything there because you might still need it on your future programs. I think those with the C++ as I've done that before and had a similar problem just like that when installing a python package for creating WorldCloud visualization. UPDATE: December 28, 2020 You can also follow these steps here: Select: Workloads → Desktop development with C++ Then for Individual Components, select only: Windows 10 SDK C++ x64/x86 build tools You can also achieve the same automatically using the following command: vs_buildtools.exe --norestart --passive --downloadThenInstall --includeRecommended --add Microsoft.VisualStudio.Workload.NativeDesktop --add Microsoft.VisualStudio.Workload.VCTools --add Microsoft.VisualStudio.Workload.MSBuildTools Reference: https://www.scivision.dev/python-windows-visual-c-14-required A: 2020 - redist/build tools for Visual C++ silent installs can be done using the following two commands : vs_buildtools__370953915.1537938681.exe --quiet --add Microsoft.VisualStudio.Workload.VCTools and VC_redist.x64.exe /q /norestart A: Upgrade your pip with: python -m pip install --upgrade pip Upgrade your wheel with: pip install --upgrade wheel Upgrade your setuptools with: pip install --upgrade setuptools close the terminal try installing the pacakage again. Boom !!! it works. A: check if no older version of Microsoft Visual C++ are installed. If so uninstall them. A: I tried everything and then finally, downgrading from python 3.10 to 3.9 is what worked. (I noticed it in this comment, but it is a bit different scenario: https://stackoverflow.com/a/70617749/17664284 ) A: In addition to the verified answer by @ice bear, just make sure to reboot your system after downloading and installing the latest visual studio build tools. And then the error you might be getting would go! A: here is my error ERROR: Could not build wheels for multidict, which is required to install pyproject.toml-based projects download whl https://www.lfd.uci.edu/~gohlke/pythonlibs/#multidict pip install multidict-6.0.2-py3-none-any.whl pip install httpie
How to solve "error: Microsoft Visual C++ 14.0 or greater is required" when installing Python packages?
I'm trying to install a package on Python, but Python is throwing an error on installing packages. I'm getting an error every time I try to run pip install google-search-api. Here is the error; how can I successfully install it? error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ I already updated that and have the latest version of 14.27 but it still throws the same error.
[ "Go to this link and download Microsoft C++ Build Tools:\nhttps://visualstudio.microsoft.com/visual-cpp-build-tools/\n\nOpen the installer, then follow the steps.\nYou might have something like this, just download it or resume.\n\nIf updating above doesn't work then you need to configure or make some updates here. You can make some updates here too by clicking \"Modify\".\nCheck that and download what you need there or you might find that you just need to update Microsoft Visual C++ as stated on the error, but I also suggest updating everything there because you might still need it on your future programs. I think those with the C++ as I've done that before and had a similar problem just like that when installing a python package for creating WorldCloud visualization.\n\n\nUPDATE: December 28, 2020\nYou can also follow these steps here:\n\nSelect: Workloads → Desktop development with C++\nThen for Individual Components, select only:\n\nWindows 10 SDK\nC++ x64/x86 build tools\n\n\n\nYou can also achieve the same automatically using the following command:\nvs_buildtools.exe --norestart --passive --downloadThenInstall --includeRecommended --add Microsoft.VisualStudio.Workload.NativeDesktop --add Microsoft.VisualStudio.Workload.VCTools --add Microsoft.VisualStudio.Workload.MSBuildTools\n\nReference:\nhttps://www.scivision.dev/python-windows-visual-c-14-required\n", "2020 - redist/build tools for Visual C++\nsilent installs can be done using the following two commands :\nvs_buildtools__370953915.1537938681.exe --quiet --add Microsoft.VisualStudio.Workload.VCTools\n\nand\nVC_redist.x64.exe /q /norestart\n\n", "\nUpgrade your pip with: python -m pip install --upgrade pip\n\nUpgrade your wheel with: pip install --upgrade wheel\n\nUpgrade your setuptools with: pip install --upgrade setuptools\n\nclose the terminal\n\ntry installing the pacakage again.\n\n\nBoom !!! it works.\n", "check if no older version of Microsoft Visual C++ are installed. If so uninstall them.\n", "I tried everything and then finally, downgrading from python 3.10 to 3.9 is what worked. (I noticed it in this comment, but it is a bit different scenario: https://stackoverflow.com/a/70617749/17664284 )\n", "In addition to the verified answer by @ice bear, just make sure to reboot your system after downloading and installing the latest visual studio build tools. And then the error you might be getting would go!\n", "\nhere is my error ERROR: Could not build wheels for multidict, which is required to install pyproject.toml-based projects\n\n\ndownload whl https://www.lfd.uci.edu/~gohlke/pythonlibs/#multidict\n\n\n\n\npip install multidict-6.0.2-py3-none-any.whl\n\npip install httpie\n\n\n" ]
[ 122, 3, 2, 0, 0, 0, 0 ]
[ "Tried Prason's approach. Also tried the fix suggested here\n\nconda install -c conda-forge implicit\npip install --upgrade gensim\n\n", "I encounered the above-mentionned problem when using virtualenv. Using conda environment instead solved the problem. Conda automatically installs vs2015_runtime which compiles the wheels with no problem.\n" ]
[ -1, -1 ]
[ "python", "python_3.x", "visual_studio" ]
stackoverflow_0064261546_python_python_3.x_visual_studio.txt
Q: How to remove root element from xml file using python I have a number of XML files with me, whose format is:
<objects>
    <object>
        <record>
            <invoice_source>EMAIL</invoice_source>
            <invoice_capture_date>2022-11-18</invoice_capture_date>
            <document_type>INVOICE</document_type>
            <data_capture_provider_code>00001</data_capture_provider_code>
            <data_capture_provider_reference>1264</data_capture_provider_reference>
            <document_capture_provide_code>00002</document_capture_provide_code>
            <document_capture_provider_ref>1264</document_capture_provider_ref>
            <rows/>
        </record>
    </object>
</objects>
There are two wrapper elements around each record in this XML; I want to remove one of them using Python. I want the XML to look like this:
<objects>
    <record>
        <invoice_source>EMAIL</invoice_source>
        <invoice_capture_date>2022-11-18</invoice_capture_date>
        <document_type>INVOICE</document_type>
        <data_capture_provider_code>00001</data_capture_provider_code>
        <data_capture_provider_reference>1264</data_capture_provider_reference>
        <document_capture_provide_code>00002</document_capture_provide_code>
        <document_capture_provider_ref>1264</document_capture_provider_ref>
        <rows/>
    </record>
</objects>
I have a folder full of these files, and I want to do it using Python. Is there any way?
A: The direct way is shown below. If your real files are more complicated than one-object/one-record you'll have to be more specific with examples:
from xml.etree import ElementTree as et

xml = '''\
<objects>
    <object>
        <record>
            <invoice_source>EMAIL</invoice_source>
            <invoice_capture_date>2022-11-18</invoice_capture_date>
            <document_type>INVOICE</document_type>
            <data_capture_provider_code>00001</data_capture_provider_code>
            <data_capture_provider_reference>1264</data_capture_provider_reference>
            <document_capture_provide_code>00002</document_capture_provide_code>
            <document_capture_provider_ref>1264</document_capture_provider_ref>
            <rows/>
        </record>
    </object>
</objects>
'''

objects = et.fromstring(xml)
objects.append(objects[0][0]) # move "record" out of "object" and append as child to "objects"
objects.remove(objects[0]) # remove empty "object"
et.indent(objects) # reformat indentation (Python 3.9+)
et.dump(objects) # show result

Output:
<objects>
    <record>
        <invoice_source>EMAIL</invoice_source>
        <invoice_capture_date>2022-11-18</invoice_capture_date>
        <document_type>INVOICE</document_type>
        <data_capture_provider_code>00001</data_capture_provider_code>
        <data_capture_provider_reference>1264</data_capture_provider_reference>
        <document_capture_provide_code>00002</document_capture_provide_code>
        <document_capture_provider_ref>1264</document_capture_provider_ref>
        <rows />
    </record>
</objects>

Another option that would handle any nested content in object:
objects = et.fromstring(xml)
objects = objects[0] # extract "object" (lose "objects" layer)
objects.tag = 'objects' # rename "object" tag
et.indent(objects) # reformat indentation (Python 3.9+)
et.dump(objects) # show result (same output)
A: My approach is to iterate over the children of <objects>, which is <object>, then move the <record> nodes up one level. After which, I can remove the <object> nodes.
import xml.etree.ElementTree as ET doc = ET.parse("input.xml") objects = doc.getroot() for obj in objects: for record in obj: objects.append(record) objects.remove(obj) doc.write("output.xml") Here is the contents of output.xml: <objects> <record> <invoice_source>EMAIL</invoice_source> <invoice_capture_date>2022-11-18</invoice_capture_date> <document_type>INVOICE</document_type> <data_capture_provider_code>00001</data_capture_provider_code> <data_capture_provider_reference>1264</data_capture_provider_reference> <document_capture_provide_code>00002</document_capture_provide_code> <document_capture_provider_ref>1264</document_capture_provider_ref> <rows /> </record> </objects>
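Since the question asks about a whole folder of such files, here is a sketch that applies the same unwrap to every .xml file in a directory (the folder name is an assumption; the list() copies avoid mutating the tree while iterating it, which would otherwise skip elements):
from pathlib import Path
import xml.etree.ElementTree as ET

for path in Path("xml_files").glob("*.xml"):   # hypothetical folder name
    tree = ET.parse(path)
    objects = tree.getroot()
    for obj in list(objects):                  # copy before mutating
        for record in list(obj):
            objects.append(record)             # hoist <record> up one level
        objects.remove(obj)                    # drop the now-empty <object>
    tree.write(path, encoding="utf-8")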
How to remove root element from xml file using python
I have a number of XML files with me, whose format is:
<objects>
    <object>
        <record>
            <invoice_source>EMAIL</invoice_source>
            <invoice_capture_date>2022-11-18</invoice_capture_date>
            <document_type>INVOICE</document_type>
            <data_capture_provider_code>00001</data_capture_provider_code>
            <data_capture_provider_reference>1264</data_capture_provider_reference>
            <document_capture_provide_code>00002</document_capture_provide_code>
            <document_capture_provider_ref>1264</document_capture_provider_ref>
            <rows/>
        </record>
    </object>
</objects>
There are two wrapper elements around each record in this XML; I want to remove one of them using Python. I want the XML to look like this:
<objects>
    <record>
        <invoice_source>EMAIL</invoice_source>
        <invoice_capture_date>2022-11-18</invoice_capture_date>
        <document_type>INVOICE</document_type>
        <data_capture_provider_code>00001</data_capture_provider_code>
        <data_capture_provider_reference>1264</data_capture_provider_reference>
        <document_capture_provide_code>00002</document_capture_provide_code>
        <document_capture_provider_ref>1264</document_capture_provider_ref>
        <rows/>
    </record>
</objects>
I have a folder full of these files, and I want to do it using Python. Is there any way?
[ "The direct way is shown below. If your real files are more complicated than one-object/one-record you'll have to be more specific with examples:\nfrom xml.etree import ElementTree as et\n\nxml = '''\\\n<objects>\n <object>\n <record>\n <invoice_source>EMAIL</invoice_source>\n <invoice_capture_date>2022-11-18</invoice_capture_date>\n <document_type>INVOICE</document_type>\n <data_capture_provider_code>00001</data_capture_provider_code>\n <data_capture_provider_reference>1264</data_capture_provider_reference>\n <document_capture_provide_code>00002</document_capture_provide_code>\n <document_capture_provider_ref>1264</document_capture_provider_ref>\n <rows/>\n </record>\n </object>\n</objects>\n'''\n\nobjects = et.fromstring(xml)\nobjects.append(objects[0][0]) # move \"record\" out of \"object\" and append as child to \"objects\"\nobjects.remove(objects[0]) # remove empty \"object\"\net.indent(objects) # reformat indentation (Python 3.9+)\net.dump(objects) # show result\n\nOutput:\n<objects>\n <record>\n <invoice_source>EMAIL</invoice_source>\n <invoice_capture_date>2022-11-18</invoice_capture_date>\n <document_type>INVOICE</document_type>\n <data_capture_provider_code>00001</data_capture_provider_code>\n <data_capture_provider_reference>1264</data_capture_provider_reference>\n <document_capture_provide_code>00002</document_capture_provide_code>\n <document_capture_provider_ref>1264</document_capture_provider_ref>\n <rows />\n </record>\n</objects>\n\nAnother option that would handle any nested content in object:\nobjects = et.fromstring(xml)\nobjects = objects[0] # extract \"object\" (lose \"objects\" layer)\nobjects.tag = 'objects' # rename \"object\" tag\net.indent(objects) # reformat indentation (Python 3.9+)\net.dump(objects) # show result (same output)\n\n", "My approach is to iterate over the children of <objects>, which is <object>, then move the <record> nodes up one level. After which, I can remove the <object> nodes.\nimport xml.etree.ElementTree as ET\n\ndoc = ET.parse(\"input.xml\")\nobjects = doc.getroot()\n\nfor obj in objects:\n for record in obj:\n objects.append(record)\n objects.remove(obj)\n\ndoc.write(\"output.xml\")\n\nHere is the contents of output.xml:\n<objects>\n <record>\n <invoice_source>EMAIL</invoice_source>\n <invoice_capture_date>2022-11-18</invoice_capture_date>\n <document_type>INVOICE</document_type>\n <data_capture_provider_code>00001</data_capture_provider_code>\n <data_capture_provider_reference>1264</data_capture_provider_reference>\n <document_capture_provide_code>00002</document_capture_provide_code>\n <document_capture_provider_ref>1264</document_capture_provider_ref>\n <rows />\n </record>\n </objects>\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0074542597_python_xml.txt
Q: Why is Pycharm not highlighting TODO's? In my settings, I have the TODO bound to highlight in yellow, yet in the actual code it does not highlight. Here is a screenshot of my settings: Editor -> TODO Does anyone know how to fix this? EDIT: I even tried re-installing Pycharm and I still have the issue. EDIT 2: In the TODO Window, it is saying "0 TODO items found in 0 files". I believe this means it is looking in the wrong files to check for TODO items. However, when I try to find TODO items in "this file" it still doesn't work. Does anyone know why this is? A: Go to Preferences (or Settings), Project Structure, and make sure the folder with your files is not in the "Excluded" tab's list. Click the folder you want to include and click on the "Sources" tab. Click Apply, then OK! It should work. A: I recently updated PyCharm Professional and my TODOs no longer worked. I went into settings and changed the alert icon, then saved, and retyped them and they worked. I imagine for my case, there was a delay in the new version picking them up. Might just need to retype them to get them working again, though the reboot should have addressed this. Not sure if your pattern is causing this, but mine is set up like so, with two separate patterns: \btodo\b.* \bfixme\b.* Neither is case sensitive, BTW... Perhaps try some other patterns to see if you can get those to work. A: I think the problem for me was the same as explained by @theBrownCoder but I couldn't find the project structure settings. Apart from not showing TODO's another symptom was impossibility to go to function definitions defined in other files and inability to rename python files with the error: "Selected element is used from non-project files. These usages won't be renamed." Googling for this the solution that worked for me was to delete the .idea folder (make sure to back it up just in case, you will lose the configurations). A: I had the exact same problem, and the solution suggested by theBrownCoder worked perfectly. For those who cannot find which menu theBrownCoder is referring to, go to File > Settings > Project: "Title of Project" > Project Structure. It is in the dropdown of Project in Settings where you can also select your Python interpreter. A: It might be the file type. Right click, Override File Type. I had this issue with a text file and it's copy, only the first one would use #TODO
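As a sanity check on the patterns suggested in the answers, Python's own re module shows what a case-insensitive \btodo\b.* matcher should and should not pick up (a small sketch):
import re

todo = re.compile(r"\btodo\b.*", re.IGNORECASE)
fixme = re.compile(r"\bfixme\b.*", re.IGNORECASE)

print(bool(todo.search("# TODO: highlight me")))   # True
print(bool(fixme.search("# fixme later")))         # True
print(bool(todo.search("# method todos()")))       # False - \b requires a word boundary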
Why is Pycharm not highlighting TODO's?
In my settings, I have the TODO bound to highlight in yellow, yet in the actual code it does not highlight. Here is a screenshot of my settings: Editor -> TODO Does anyone know how to fix this? EDIT: I even tried re-installing Pycharm and I still have the issue. EDIT 2: In the TODO Window, it is saying "0 TODO items found in 0 files". I believe this means it is looking in the wrong files to check for TODO items. However, when I try to find TODO items in "this file" it still doesn't work. Does anyone know why this is?
[ "Go to Preferences (or Settings), Project Structure, and make sure the folder with your files is not in the \"Excluded\" tab's list.\nClick the folder you want to include and click on the \"Sources\" tab. Click Apply, then OK!\nIt should work.\n", "I recently updated PyCharm Professional and my TODOs no longer worked. I went into settings and changed the alert icon, then saved, and retyped them and they worked. I imagine for my case, there was a delay in the new version picking them up. Might just need to retype them to get them working again, though the reboot should have addressed this.\nNot sure if your pattern is causing this, but mine is set up like so, with two separate patterns:\n\\btodo\\b.*\n\\bfixme\\b.*\nNeither is case sensitive, BTW...\nPerhaps try some other patterns to see if you can get those to work.\n", "I think the problem for me was the same as explained by @theBrownCoder but I couldn't find the project structure settings.\nApart from not showing TODO's another symptom was impossibility to go to function definitions defined in other files and inability to rename python files with the error: \"Selected element is used from non-project files. These usages won't be renamed.\"\nGoogling for this the solution that worked for me was to delete the .idea folder (make sure to back it up just in case, you will lose the configurations).\n", "I had the exact same problem, and the solution suggested by theBrownCoder worked perfectly.\nFor those who cannot find which menu theBrownCoder is referring to, go to File > Settings > Project: \"Title of Project\" > Project Structure.\nIt is in the dropdown of Project in Settings where you can also select your Python interpreter.\n", "It might be the file type.\nRight click, Override File Type.\nI had this issue with a text file and it's copy, only the first one would use #TODO\n" ]
[ 3, 2, 1, 0, 0 ]
[]
[]
[ "highlight", "pycharm", "python", "todo" ]
stackoverflow_0061678338_highlight_pycharm_python_todo.txt
Q: Scrape data by sending payload with an API in python I want to fetch a list of articles from GeeksforGeeks (gfg) based on a query. We can achieve this by using the search box present on this site https://www.geeksforgeeks.org/ . They are using this API to display results, "https://api.geeksforgeeks.org/post/api/googlesearch/", and they are passing the search query in the payload. This is the approach I have tried:
import requests
d = {'page':3, 'sort':'relevance', 'type':'premium', 'query':'nump'}
r=requests.get('https://api.geeksforgeeks.org/post/api/googlesearch/', data = d).json()
print(r)
but I am getting this error:
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))
Any solution to this problem will be highly appreciated.
A: The API takes a POST request but you are sending a GET request; try changing to this:
import requests
d = {'page':3, 'sort':'relevance', 'type':'premium', 'query':'nump'}
r=requests.post('https://api.geeksforgeeks.org/post/api/googlesearch/', data = d).json()
print(r)
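A slightly more defensive variant of that fix (same endpoint and payload as in the question; raise_for_status() surfaces a 4xx/5xx clearly instead of a confusing JSON decode error, and the timeout value is an added assumption):
import requests

payload = {'page': 3, 'sort': 'relevance', 'type': 'premium', 'query': 'nump'}
r = requests.post('https://api.geeksforgeeks.org/post/api/googlesearch/',
                  data=payload, timeout=30)
r.raise_for_status()   # fail loudly on an HTTP error status
print(r.json())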
Scrape data by sending payload with an API in python
I want to fetch a list of articles from GeeksforGeeks (gfg) based on a query. We can achieve this by using the search box present on this site https://www.geeksforgeeks.org/ . They are using this API to display results, "https://api.geeksforgeeks.org/post/api/googlesearch/", and they are passing the search query in the payload. This is the approach I have tried:
import requests
d = {'page':3, 'sort':'relevance', 'type':'premium', 'query':'nump'}
r=requests.get('https://api.geeksforgeeks.org/post/api/googlesearch/', data = d).json()
print(r)
but I am getting this error:
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))
Any solution to this problem will be highly appreciated.
[ "The api takes a POST request you are sending a GET request try changing to this:\nimport requests\nd = {'page':3, 'sort':'relevance', 'type':'premium', 'query':'nump'}\nr=requests.post('https://api.geeksforgeeks.org/post/api/googlesearch/', data = d).json()\nprint(r)\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_requests", "web_scraping" ]
stackoverflow_0074542992_python_python_requests_web_scraping.txt
Q: Is there a Python equivalent of the C# null-coalescing operator? In C# there's a null-coalescing operator (written as ??) that allows for easy (short) null checking during assignment: string s = null; var other = s ?? "some default value"; Is there a python equivalent? I know that I can do: s = None other = s if s else "some default value" But is there an even shorter way (where I don't need to repeat s)? A: other = s or "some default value" Ok, it must be clarified how the or operator works. It is a boolean operator, so it works in a boolean context. If the values are not boolean, they are converted to boolean for the purposes of the operator. Note that the or operator does not return only True or False. Instead, it returns the first operand if the first operand evaluates to true, and it returns the second operand if the first operand evaluates to false. In this case, the expression x or y returns x if it is True or evaluates to true when converted to boolean. Otherwise, it returns y. For most cases, this will serve for the very same purpose of C♯'s null-coalescing operator, but keep in mind: 42 or "something" # returns 42 0 or "something" # returns "something" None or "something" # returns "something" False or "something" # returns "something" "" or "something" # returns "something" If you use your variable s to hold something that is either a reference to the instance of a class or None (as long as your class does not define members __nonzero__() and __len__()), it is secure to use the same semantics as the null-coalescing operator. In fact, it may even be useful to have this side-effect of Python. Since you know what values evaluates to false, you can use this to trigger the default value without using None specifically (an error object, for example). In some languages this behavior is referred to as the Elvis operator. A: Strictly, other = s if s is not None else "default value" Otherwise, s = False will become "default value", which may not be what was intended. If you want to make this shorter, try: def notNone(s,d): if s is None: return d else: return s other = notNone(s, "default value") A: Here's a function that will return the first argument that isn't None: def coalesce(*arg): return reduce(lambda x, y: x if x is not None else y, arg) # Prints "banana" print coalesce(None, "banana", "phone", None) reduce() might needlessly iterate over all the arguments even if the first argument is not None, so you can also use this version: def coalesce(*arg): for el in arg: if el is not None: return el return None A: In case you need to chain more than one null-conditional operation such as: model?.data()?.first() This is not a problem easily solved with or. It also cannot be solved with .get() which requires a dictionary type or similar (and cannot be nested anyway) or getattr() which will throw an exception when NoneType doesn't have the attribute. The relevant PEP considering adding null coalescing to the language is PEP 505 and the discussion relevant to the document is in the python-ideas thread. A: I realize this is answered, but there is another option when you're dealing with dict-like objects. If you have an object that might be: { name: { first: "John", last: "Doe" } } You can use: obj.get(property_name, value_if_null) Like: obj.get("name", {}).get("first", "Name is missing") By adding {} as the default value, if "name" is missing, an empty object is returned and passed through to the next get. This is similar to null-safe-navigation in C#, which would be like obj?.name?.first. 
A: Additionally to @Bothwell's answer (which I prefer for single values), for null-checking assignment of function return values you can use the new walrus operator (since Python 3.8):
def test():
    return

a = 2 if (x:= test()) is None else x

Thus, the test function does not need to be evaluated twice (as in a = 2 if test() is None else test())
A: In addition to Juliano's answer about the behavior of "or":
it's "fast"
>>> 1 or 5/0
1

So sometimes it might be a useful shortcut for things like
object = getCachedVersion() or getFromDB()

A: Regarding answers by @Hugh Bothwell, @mortehu and @glglgl.
Setup Dataset for testing
import random

dataset = [random.randint(0,15) if random.random() > .6 else None for i in range(1000)]

Define implementations (on Python 3, reduce needs from functools import reduce)
def not_none(x, y=None):
    if x is None:
        return y
    return x

def coalesce1(*arg):
    return reduce(lambda x, y: x if x is not None else y, arg)

def coalesce2(*args):
    return next((i for i in args if i is not None), None)

Make test function
def test_func(dataset, func):
    default = 1
    for i in dataset:
        func(i, default)

Results on mac i7 @2.7Ghz using python 2.7
>>> %timeit test_func(dataset, not_none)
1000 loops, best of 3: 224 µs per loop

>>> %timeit test_func(dataset, coalesce1)
1000 loops, best of 3: 471 µs per loop

>>> %timeit test_func(dataset, coalesce2)
1000 loops, best of 3: 782 µs per loop

Clearly the not_none function answers the OP's question correctly and handles the "falsy" problem. It is also the fastest and easiest to read. If applying the logic in many places, it is clearly the best way to go.
If you have a problem where you want to find the 1st non-null value in an iterable, then @mortehu's response is the way to go. But it is a solution to a different problem than the OP's, although it can partially handle that case. It cannot take an iterable AND a default value. The last argument would be the default value returned, but then you wouldn't be passing in an iterable in that case, and it isn't explicit that the last argument is a default value.
You could then do the below, but I'd still use not_none for the single-value use case.
def coalesce(*args, **kwargs):
    default = kwargs.get('default')
    return next((a for a in args if a is not None), default)

A: To take care of possible exceptions:
def default_val(expr, default=None):
    try:
        tmp = expr()
    except Exception:
        tmp = default
    return tmp

Use it like this:
default_val(lambda: some['complex'].expression('with', 'possible')['exceptions'], '')
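For the chained case raised earlier (model?.data()?.first(), PEP 505), a minimal helper can emulate attribute-by-attribute null-safe navigation (a sketch; it walks plain attributes only and does not call methods):
def maybe(obj, *names):
    # return None as soon as any link in the chain is None
    for name in names:
        if obj is None:
            return None
        obj = getattr(obj, name)
    return obj

class Point:
    def __init__(self, x):
        self.x = x

p = Point(Point(42))
print(maybe(p, "x", "x"))      # 42
print(maybe(None, "x", "x"))   # None, no AttributeError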
Is there a Python equivalent of the C# null-coalescing operator?
In C# there's a null-coalescing operator (written as ??) that allows for easy (short) null checking during assignment: string s = null; var other = s ?? "some default value"; Is there a python equivalent? I know that I can do: s = None other = s if s else "some default value" But is there an even shorter way (where I don't need to repeat s)?
[ "other = s or \"some default value\"\n\nOk, it must be clarified how the or operator works. It is a boolean operator, so it works in a boolean context. If the values are not boolean, they are converted to boolean for the purposes of the operator.\nNote that the or operator does not return only True or False. Instead, it returns the first operand if the first operand evaluates to true, and it returns the second operand if the first operand evaluates to false.\nIn this case, the expression x or y returns x if it is True or evaluates to true when converted to boolean. Otherwise, it returns y. For most cases, this will serve for the very same purpose of C♯'s null-coalescing operator, but keep in mind:\n42 or \"something\" # returns 42\n0 or \"something\" # returns \"something\"\nNone or \"something\" # returns \"something\"\nFalse or \"something\" # returns \"something\"\n\"\" or \"something\" # returns \"something\"\n\nIf you use your variable s to hold something that is either a reference to the instance of a class or None (as long as your class does not define members __nonzero__() and __len__()), it is secure to use the same semantics as the null-coalescing operator.\nIn fact, it may even be useful to have this side-effect of Python. Since you know what values evaluates to false, you can use this to trigger the default value without using None specifically (an error object, for example).\nIn some languages this behavior is referred to as the Elvis operator.\n", "Strictly,\nother = s if s is not None else \"default value\"\n\nOtherwise, s = False will become \"default value\", which may not be what was intended.\nIf you want to make this shorter, try:\ndef notNone(s,d):\n if s is None:\n return d\n else:\n return s\n\nother = notNone(s, \"default value\")\n\n", "Here's a function that will return the first argument that isn't None:\ndef coalesce(*arg):\n return reduce(lambda x, y: x if x is not None else y, arg)\n\n# Prints \"banana\"\nprint coalesce(None, \"banana\", \"phone\", None)\n\nreduce() might needlessly iterate over all the arguments even if the first argument is not None, so you can also use this version:\ndef coalesce(*arg):\n for el in arg:\n if el is not None:\n return el\n return None\n\n", "In case you need to chain more than one null-conditional operation such as:\nmodel?.data()?.first()\nThis is not a problem easily solved with or. It also cannot be solved with .get() which requires a dictionary type or similar (and cannot be nested anyway) or getattr() which will throw an exception when NoneType doesn't have the attribute.\nThe relevant PEP considering adding null coalescing to the language is PEP 505 and the discussion relevant to the document is in the python-ideas thread.\n", "I realize this is answered, but there is another option when you're dealing with dict-like objects.\nIf you have an object that might be:\n{\n name: {\n first: \"John\",\n last: \"Doe\"\n }\n}\n\nYou can use:\nobj.get(property_name, value_if_null)\n\nLike:\nobj.get(\"name\", {}).get(\"first\", \"Name is missing\") \n\nBy adding {} as the default value, if \"name\" is missing, an empty object is returned and passed through to the next get. 
This is similar to null-safe-navigation in C#, which would be like obj?.name?.first.\n", "Addionally to @Bothwells answer (which I prefer) for single values, in order to null-checking assingment of function return values, you can use new walrus-operator (since python3.8):\ndef test():\n return\n\na = 2 if (x:= test()) is None else x\n\nThus, test function does not need to be evaluated two times (as in a = 2 if test() is None else test())\n", "In addition to Juliano's answer about behavior of \"or\": \nit's \"fast\"\n>>> 1 or 5/0\n1\n\nSo sometimes it's might be a useful shortcut for things like\nobject = getCachedVersion() or getFromDB()\n\n", "Regarding answers by @Hugh Bothwell, @mortehu and @glglgl.\nSetup Dataset for testing\nimport random\n\ndataset = [random.randint(0,15) if random.random() > .6 else None for i in range(1000)]\n\nDefine implementations\ndef not_none(x, y=None):\n if x is None:\n return y\n return x\n\ndef coalesce1(*arg):\n return reduce(lambda x, y: x if x is not None else y, arg)\n\ndef coalesce2(*args):\n return next((i for i in args if i is not None), None)\n\nMake test function\ndef test_func(dataset, func):\n default = 1\n for i in dataset:\n func(i, default)\n\nResults on mac i7 @2.7Ghz using python 2.7\n>>> %timeit test_func(dataset, not_none)\n1000 loops, best of 3: 224 µs per loop\n\n>>> %timeit test_func(dataset, coalesce1)\n1000 loops, best of 3: 471 µs per loop\n\n>>> %timeit test_func(dataset, coalesce2)\n1000 loops, best of 3: 782 µs per loop\n\nClearly the not_none function answers the OP's question correctly and handles the \"falsy\" problem. It is also the fastest and easiest to read. If applying the logic in many places, it is clearly the best way to go.\nIf you have a problem where you want to find the 1st non-null value in a iterable, then @mortehu's response is the way to go. But it is a solution to a different problem than OP, although it can partially handle that case. It cannot take an iterable AND a default value. The last argument would be the default value returned, but then you wouldn't be passing in an iterable in that case as well as it isn't explicit that the last argument is a default to value. \nYou could then do below, but I'd still use not_null for the single value use case.\ndef coalesce(*args, **kwargs):\n default = kwargs.get('default')\n return next((a for a in arg if a is not None), default)\n\n", "to take care of possible exceptions:\ndef default_val(expr, default=None):\n try:\n tmp = expr()\n except Exception:\n tmp = default\n return tmp\n\nuse it like that:\ndefault_val(lambda: some['complex'].expression('with', 'possible')['exceptions'], '')\n\n" ]
[ 607, 120, 58, 22, 12, 6, 2, 0, 0 ]
[ "For those like me that stumbled here looking for a viable solution to this issue, when the variable might be undefined, the closest i got is:\nif 'variablename' in globals() and ((variablename or False) == True):\n print('variable exists and it\\'s true')\nelse:\n print('variable doesn\\'t exist, or it\\'s false')\n\nNote that a string is needed when checking in globals, but afterwards the actual variable is used when checking for value.\nMore on variable existence:\nHow do I check if a variable exists?\n", "Python has a get function that its very useful to return a value of an existent key, if the key exist;\nif not it will return a default value.\n\ndef main():\n names = ['Jack','Maria','Betsy','James','Jack']\n names_repeated = dict()\n default_value = 0\n\n for find_name in names:\n names_repeated[find_name] = names_repeated.get(find_name, default_value) + 1\n\nif you cannot find the name inside the dictionary, it will return the default_value, \nif the name exist then it will add any existing value with 1.\nhope this can help\n", "The two functions below I have found to be very useful when dealing with many variable testing cases. \ndef nz(value, none_value, strict=True):\n ''' This function is named after an old VBA function. It returns a default\n value if the passed in value is None. If strict is False it will\n treat an empty string as None as well.\n\n example:\n x = None\n nz(x,\"hello\")\n --> \"hello\"\n nz(x,\"\")\n --> \"\"\n y = \"\" \n nz(y,\"hello\")\n --> \"\"\n nz(y,\"hello\", False)\n --> \"hello\" '''\n\n if value is None and strict:\n return_val = none_value\n elif strict and value is not None:\n return_val = value\n elif not strict and not is_not_null(value):\n return_val = none_value\n else:\n return_val = value\n return return_val \n\ndef is_not_null(value):\n ''' test for None and empty string '''\n return value is not None and len(str(value)) > 0\n\n" ]
[ -1, -3, -5 ]
[ "null_coalescing_operator", "python" ]
stackoverflow_0004978738_null_coalescing_operator_python.txt
Q: How should I share data between CLI commands in Python? I want to make a CLI application in Python, but I find that I can't share data between commands. A global variable can't help. As an example, I get some videos with "xxx search", and then I want to run "xxx download 1" to download the first listed video, but by then the data is gone. I have tried saving the data to a file with pickle: when I run another command, I read the file and get the data back. But I'm wondering: is that the correct way to do it?
A: Your solution is perfectly valid; you can even use tempfile to store the file in a clean manner.
Another option is to make a shell within the application, so it won't exit after every user prompt, and create an interface somewhat like this:
$ xxx

>>> search
###
### Your output here
###
>>> download 1
### Done

$
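The standard library's cmd module gives exactly that kind of in-process shell, so search results stay alive between commands (a sketch; the search body is a stand-in for the real video lookup):
import cmd

class VideoShell(cmd.Cmd):
    prompt = ">>> "

    def __init__(self):
        super().__init__()
        self.results = []   # shared state for the whole session

    def do_search(self, query):
        self.results = [f"video {i}: {query}" for i in range(1, 4)]  # stand-in search
        for i, r in enumerate(self.results, 1):
            print(i, r)

    def do_download(self, index):
        print("downloading", self.results[int(index) - 1])

    def do_quit(self, _):
        return True   # returning True ends cmdloop()

if __name__ == "__main__":
    VideoShell().cmdloop()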
How should I share data between CLI commands in Python?
I want to make a CLI application in Python, but I find that I can't share data between commands. A global variable can't help. As an example, I get some videos with "xxx search", and then I want to run "xxx download 1" to download the first listed video, but by then the data is gone. I have tried saving the data to a file with pickle: when I run another command, I read the file and get the data back. But I'm wondering: is that the correct way to do it?
[ "Your solution is perfectly valid, you can even use tempfile to store the file in a clean manner.\nAnother option is to make a shell within the application, so the it won't exit after every user prompt and create an interface like somewhat like this:\n$ xxx\n\n>>> search\n###\n### Your output here\n###\n>>> download 1\n### Done\n\n$\n\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074542649_python.txt
Q: Can't find hrefs of interest with BeautifulSoup I am trying to collect a list of hrefs from the Netflix careers site: https://jobs.netflix.com/search. Each job listing on this site has an anchor and a class: <a class=css-2y5mtm essqqm81>. To be thorough here, the entire anchor is: <a class="css-2y5mtm essqqm81" role="link" href="/jobs/244837014" aria-label="Manager, Written Communications"\>\ <span tabindex="-1" class="css-1vbg17 essqqm80"\>\<h4 class="css-hl3xbb e1rpdjew0"\>Manager, Written Communications\</h4\>\</span\>\</a\> Again, the information of interest here is the hrefs of the form href="/jobs/244837014". However, when I perform the standard BS commands to read the HTML: html_page = urllib.request.urlopen("https://jobs.netflix.com/search") soup = BeautifulSoup(html_page) I don't see any of the hrefs that I'm interested in inside of soup. Running the following loop does not show the hrefs of interest: for link in soup.findAll('a'): print(link.get('href')) What am I doing wrong? A: That information is being fed dynamically in page, via XHR calls. You need to scrape the API endpoint to get jobs info. The following code will give you a dataframe with all jobs currently listed by Netflix: import requests from bs4 import BeautifulSoup as bs import pandas as pd from tqdm import tqdm ## if Jupyter: from tqdm.notebook import tqdm pd.set_option('display.max_columns', None) pd.set_option('display.max_colwidth', None) headers = { 'referer': 'https://jobs.netflix.com/search', 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' } big_df = pd.DataFrame() s = requests.Session() s.headers.update(headers) for x in tqdm(range(1, 20)): url = f'https://jobs.netflix.com/api/search?page={x}' r = s.get(url) df = pd.json_normalize(r.json()['records']['postings']) big_df = pd.concat([big_df, df], axis=0, ignore_index=True) print(big_df[['text', 'team', 'external_id', 'updated_at', 'created_at', 'location', 'organization' ]]) Result: 100% 19/19 [00:29<00:00, 1.42s/it] text team external_id updated_at created_at location organization 0 Events Manager - SEA [Publicity] 244936062 2022-11-23T07:20:16+00:00 2022-11-23T04:47:29Z Bangkok, Thailand [Marketing and PR] 1 Manager, Written Communications [Publicity] 244837014 2022-11-23T07:20:16+00:00 2022-11-22T17:30:06Z Los Angeles, California [Marketing and Publicity] 2 Manager, Creative Marketing - Korea [Marketing] 244740829 2022-11-23T07:20:16+00:00 2022-11-22T07:39:56Z Seoul, South Korea [Marketing and PR] 3 Administrative Assistant - Philippines [Netflix Technology Services] 244683946 2022-11-23T07:20:16+00:00 2022-11-22T01:26:08Z Manila, Philippines [Corporate Functions] 4 Associate, Studio FP&A - APAC [Finance] 244680097 2022-11-23T07:20:16+00:00 2022-11-22T01:01:17Z Seoul, South Korea [Corporate Functions] ... ... ... ... ... ... ... ... 
365 Software Engineer (L4/L5) - Content Engineering [Core Engineering, Studio Technologies] 77239837 2022-11-23T07:20:31+00:00 2021-04-22T07:46:29Z Mexico City, Mexico [Product] 366 Distributed Systems Engineer (L5) - Data Platform [Data Platform] 201740355 2022-11-23T07:20:31+00:00 2021-03-12T22:18:57Z Remote, United States [Product] 367 Senior Research Scientist, Computer Graphics / Computer Vision / Machine Learning [Data Science and Engineering] 227665988 2022-11-23T07:20:31+00:00 2021-02-04T18:54:10Z Los Gatos, California [Product] 368 Counsel, Content - Japan [Legal and Public Policy] 228338138 2022-11-23T07:20:31+00:00 2020-11-12T03:08:04Z Tokyo, Japan [Corporate Functions] 369 Associate, FP&A [Financial Planning and Analysis] 46317422 2022-11-23T07:20:31+00:00 2017-12-26T19:38:32Z Los Angeles, California [Corporate Functions] 370 rows × 7 columns ​ For each job, the url would be https://jobs.netflix.com/jobs/{external_id}
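That last note translates to one extra line on the dataframe built above (assumes big_df from the answer's snippet):
big_df["url"] = "https://jobs.netflix.com/jobs/" + big_df["external_id"].astype(str)
print(big_df[["text", "url"]].head())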
Can't find hrefs of interest with BeautifulSoup
I am trying to collect a list of hrefs from the Netflix careers site: https://jobs.netflix.com/search. Each job listing on this site has an anchor and a class: <a class=css-2y5mtm essqqm81>. To be thorough here, the entire anchor is: <a class="css-2y5mtm essqqm81" role="link" href="/jobs/244837014" aria-label="Manager, Written Communications"\>\ <span tabindex="-1" class="css-1vbg17 essqqm80"\>\<h4 class="css-hl3xbb e1rpdjew0"\>Manager, Written Communications\</h4\>\</span\>\</a\> Again, the information of interest here is the hrefs of the form href="/jobs/244837014". However, when I perform the standard BS commands to read the HTML: html_page = urllib.request.urlopen("https://jobs.netflix.com/search") soup = BeautifulSoup(html_page) I don't see any of the hrefs that I'm interested in inside of soup. Running the following loop does not show the hrefs of interest: for link in soup.findAll('a'): print(link.get('href')) What am I doing wrong?
[ "That information is being fed dynamically in page, via XHR calls. You need to scrape the API endpoint to get jobs info. The following code will give you a dataframe with all jobs currently listed by Netflix:\nimport requests\nfrom bs4 import BeautifulSoup as bs\nimport pandas as pd\nfrom tqdm import tqdm ## if Jupyter: from tqdm.notebook import tqdm\n\npd.set_option('display.max_columns', None)\npd.set_option('display.max_colwidth', None)\n\nheaders = {\n 'referer': 'https://jobs.netflix.com/search',\n 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'\n}\nbig_df = pd.DataFrame()\ns = requests.Session()\ns.headers.update(headers)\nfor x in tqdm(range(1, 20)):\n url = f'https://jobs.netflix.com/api/search?page={x}'\n r = s.get(url)\n df = pd.json_normalize(r.json()['records']['postings'])\n big_df = pd.concat([big_df, df], axis=0, ignore_index=True)\n\nprint(big_df[['text', 'team', 'external_id', 'updated_at', 'created_at', 'location', 'organization' ]])\n\nResult:\n100%\n19/19 [00:29<00:00, 1.42s/it]\ntext team external_id updated_at created_at location organization\n0 Events Manager - SEA [Publicity] 244936062 2022-11-23T07:20:16+00:00 2022-11-23T04:47:29Z Bangkok, Thailand [Marketing and PR]\n1 Manager, Written Communications [Publicity] 244837014 2022-11-23T07:20:16+00:00 2022-11-22T17:30:06Z Los Angeles, California [Marketing and Publicity]\n2 Manager, Creative Marketing - Korea [Marketing] 244740829 2022-11-23T07:20:16+00:00 2022-11-22T07:39:56Z Seoul, South Korea [Marketing and PR]\n3 Administrative Assistant - Philippines [Netflix Technology Services] 244683946 2022-11-23T07:20:16+00:00 2022-11-22T01:26:08Z Manila, Philippines [Corporate Functions]\n4 Associate, Studio FP&A - APAC [Finance] 244680097 2022-11-23T07:20:16+00:00 2022-11-22T01:01:17Z Seoul, South Korea [Corporate Functions]\n... ... ... ... ... ... ... ...\n365 Software Engineer (L4/L5) - Content Engineering [Core Engineering, Studio Technologies] 77239837 2022-11-23T07:20:31+00:00 2021-04-22T07:46:29Z Mexico City, Mexico [Product]\n366 Distributed Systems Engineer (L5) - Data Platform [Data Platform] 201740355 2022-11-23T07:20:31+00:00 2021-03-12T22:18:57Z Remote, United States [Product]\n367 Senior Research Scientist, Computer Graphics / Computer Vision / Machine Learning [Data Science and Engineering] 227665988 2022-11-23T07:20:31+00:00 2021-02-04T18:54:10Z Los Gatos, California [Product]\n368 Counsel, Content - Japan [Legal and Public Policy] 228338138 2022-11-23T07:20:31+00:00 2020-11-12T03:08:04Z Tokyo, Japan [Corporate Functions]\n369 Associate, FP&A [Financial Planning and Analysis] 46317422 2022-11-23T07:20:31+00:00 2017-12-26T19:38:32Z Los Angeles, California [Corporate Functions]\n370 rows × 7 columns\n\n​\nFor each job, the url would be https://jobs.netflix.com/jobs/{external_id}\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "html", "python", "urllib" ]
stackoverflow_0074541937_beautifulsoup_html_python_urllib.txt
Q: When converting a list of dicts to a DataFrame I am getting a different DataFrame format I am converting a list of dictionaries to a DataFrame to store in a database, but I am not getting the proper DataFrame format.
my_list=[{'A': '1111', 'B': '2222', 'C': '3333'}, {'A': '4444', 'B': '5555', 'C': '6666'}]
This is the DataFrame format I am getting
This is the DataFrame I want
The code I am using:
df=pd.DataFrame(my_list)
Thanks in advance
A: You can try
pd.DataFrame.from_dict(my_list)
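For reference, a current pandas already produces the wanted column-per-key layout from this exact input, so if you see something else it is worth checking the installed pandas version (a minimal repro):
import pandas as pd

my_list = [{'A': '1111', 'B': '2222', 'C': '3333'},
           {'A': '4444', 'B': '5555', 'C': '6666'}]
print(pd.DataFrame(my_list))
#       A     B     C
# 0  1111  2222  3333
# 1  4444  5555  6666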
When converting a list of dicts to a DataFrame I am getting a different DataFrame format
I am converting a list of dictionaries to a DataFrame to store in a database, but I am not getting the proper DataFrame format.
my_list=[{'A': '1111', 'B': '2222', 'C': '3333'}, {'A': '4444', 'B': '5555', 'C': '6666'}]
This is the DataFrame format I am getting
This is the DataFrame I want
The code I am using:
df=pd.DataFrame(my_list)
Thanks in advance
[ "You can try\npd.DataFrame.from_dict(my_list)\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "dictionary", "python" ]
stackoverflow_0074542893_dataframe_dictionary_python.txt
Q: cdktf grafana alerts python sample required I am looking for "working"/"syntactically correct" (Python) samples for provisioning unified alerts to Grafana. I have a pure Terraform config file, provided by Grafana; however, the Python syntax complicates it further.
A: I haven't had time to post my full code yet; however, the only tricky part is defining the model of RuleGroupRuleData.
You may want to copy the model of a GUI-defined alert from here:
/api/ruler/grafana/api/v1/rules

Just paste it in as a Python triple-quoted ("heredoc") string:
grafana_RuleGroupRuleData = [RuleGroupRuleData(
    ref_id = "A",
    query_type = "",
    relative_time_range = dict(from_ = 600, to = 0),
    datasource_uid = grafana_dataSource.uid,
    model = """{
    "alias": "$col",
    "datasource": {
        "type": "influxdb",
        "uid": "datasource_influxdb"
    },
    "groupBy": [ ...
    """
)]  # closing brackets added; the model JSON continues as in your rule export
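To grab that rule JSON programmatically instead of from the browser, the endpoint named in the answer can be queried with requests (a sketch; the host and basic-auth credentials are placeholders for your Grafana instance):
import requests

resp = requests.get(
    "http://localhost:3000/api/ruler/grafana/api/v1/rules",  # placeholder host
    auth=("admin", "admin"),                                 # placeholder credentials
    timeout=30,
)
resp.raise_for_status()
print(resp.json())   # contains the "model" blocks for GUI-defined alerts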
cdktf grafana alerts python sample required
I am looking for "working"/"syntactically correct" (python) samples for provisioning unified alerts to grafana. A have a pure terraform config file, provided by grafana, however, the python syntax complicates it further.
[ "I have not the time to post my code yet, however the only problem is defining the model of RuleGroupRuleData.\nYou may want to copy the model of a GUI-defined alert from here:\n/api/ruler/grafana/api/v1/rules\n\nJust copy it using a heredoc-python string:\ngrafana_RuleGroupRuleData = [RuleGroupRuleData( \nref_id = \"A\",\nquery_type = \"\",\nrelative_time_range = dict(from_ = 600, to = 0), \ndatasource_uid = grafana_dataSource.uid, \nmodel = \"\"\"{\n\"alias\": \"$col\",\n\"datasource\": {\n \"type\": \"influxdb\",\n \"uid\": \"datasource_influxdb\"\n},\n\"groupBy\": [ ...\n\"\"\"\n\n" ]
[ 0 ]
[]
[]
[ "alerts", "grafana", "python", "terraform_cdk" ]
stackoverflow_0074520622_alerts_grafana_python_terraform_cdk.txt
Q: OSError: Could not load shared object file: llvmlite.dll (SHAP related. What could be missing?) I want to use SHAP with Anaconda.
Prerequisites: llvmlite is installed:
pip install llvmlite
Requirement already satisfied: llvmlite in c:\users...\anaconda3\lib\site-packages (0.34.0)
However, I get the error message from the subject line — llvmlite.dll could not be loaded:
from sklearn.model_selection import train_test_split
import xgboost
import shap
import numpy as np
import matplotlib.pylab as pl

# print the JS visualization code to the notebook
shap.initjs()
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-1-1cffb01788c0> in <module>
1 from sklearn.model_selection import train_test_split
2 import xgboost
----> 3 import shap
4 import numpy as np
5 import matplotlib.pylab as pl
~\Anaconda3\lib\site-packages\shap\__init__.py in <module>
10 warnings.warn("As of version 0.29.0 shap only supports Python 3 (not 2)!")
11
---> 12 from ._explanation import Explanation
13
14 # explainers
~\Anaconda3\lib\site-packages\shap\_explanation.py in <module>
8 from slicer import Slicer, Alias
9 # from ._order import Order
---> 10 from .utils._general import OpChain
11
12 # slicer confuses pylint...
~\Anaconda3\lib\site-packages\shap\utils\__init__.py in <module>
----> 1 from ._clustering import hclust_ordering, partition_tree, partition_tree_shuffle, delta_minimization_order, hclust
2 from ._general import approximate_interactions, potential_interactions, sample, safe_isinstance, assert_import, record_import_error
3 from ._general import shapley_coefficients, convert_name, format_value, ordinal_str, OpChain
4 from ._show_progress import show_progress
5 from ._masked_model import MaskedModel, make_masks
~\Anaconda3\lib\site-packages\shap\utils\_clustering.py in <module>
2 import scipy as sp
3 from scipy.spatial.distance import pdist
----> 4 from numba import jit
5 import sklearn
6 import warnings
~\Anaconda3\lib\site-packages\numba\__init__.py in <module>
12 del get_versions
13
---> 14 from numba.core import config
15 from numba.testing import _runtests as runtests
16 from numba.core import types, errors
~\Anaconda3\lib\site-packages\numba\core\config.py in <module>
14
15
---> 16 import llvmlite.binding as ll
17
18 IS_WIN32 = sys.platform.startswith('win32')
~\Anaconda3\lib\site-packages\llvmlite\binding\__init__.py in <module>
2 Things that rely on the LLVM library
3 """
----> 4 from .dylib import *
5 from .executionengine import *
6 from .initfini import *
~\Anaconda3\lib\site-packages\llvmlite\binding\dylib.py in <module>
1 from ctypes import c_void_p, c_char_p, c_bool, POINTER
2
----> 3 from llvmlite.binding import ffi
4 from llvmlite.binding.common import _encode_string
5
~\Anaconda3\lib\site-packages\llvmlite\binding\ffi.py in <module>
151 break
152 else:
--> 153 raise OSError("Could not load shared object file: {}".format(_lib_name))
154
155
OSError: Could not load shared object file: llvmlite.dll
Does anybody have an idea what the root cause may be and what else I could try?
Thx, Marcus
A: In my case, the following actions worked:
conda uninstall llvmlite
and then
conda install llvmlite
A: I did both
conda uninstall llvmlite
pip install llvmlite

and
conda uninstall llvmlite
conda install llvmlite

but couldn't get it to work.
I got it to work with
conda install -c numba numba
conda install -c numba llvmlite
A: I had the same llvmlite missing-DLL issue on Windows 10 with Python 3.8 in a Jupyter notebook when trying to import numba, and solved it by installing with pip instead of conda:
conda uninstall llvmlite
pip install llvmlite

If it still does not work, try installing llvmlite from a wheel by downloading your wheel from this page:
https://www.lfd.uci.edu/~gohlke/pythonlibs/#llvmlite
pip install llvmlite-0.34.0-cp38-cp38-win_amd64.whl (replace with your wheel)
A: You need the VC runtime to use llvmlite.
llvmlite: a lightweight LLVM binding for writing JIT compilers.
Requires the Visual C++ Redistributable Packages for Visual Studio 2017.
Check this: https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170
or, for x64, the direct link:
https://aka.ms/vs/17/release/vc_redist.x64.exe
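Whichever of these routes is taken, a short import check confirms whether the DLL now loads, since the OSError in the question is raised at import time:
try:
    import llvmlite.binding   # this import is what loads llvmlite.dll
    import llvmlite
    print("llvmlite", llvmlite.__version__, "loaded fine")
except OSError as exc:
    print("still broken:", exc)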
OSError: Could not load shared object file: llvmlite.dll (SHAP related. What could be missing?)
I want to use SHAP with Anaconda. Prequisites: llvmlite is installed: pip install llvmlite Requirement already satisfied: llvmlite in c:\users...\anaconda3\lib\site-packages (0.34.0) However, I get the error message in the supject, that llvmlite.dll could not be loaded: from sklearn.model_selection import train_test_split import xgboost import shap import numpy as np import matplotlib.pylab as pl ​ # print the JS visualization code to the notebook shap.initjs() --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-1-1cffb01788c0> in <module> 1 from sklearn.model_selection import train_test_split 2 import xgboost ----> 3 import shap 4 import numpy as np 5 import matplotlib.pylab as pl ~\Anaconda3\lib\site-packages\shap\__init__.py in <module> 10 warnings.warn("As of version 0.29.0 shap only supports Python 3 (not 2)!") 11 ---> 12 from ._explanation import Explanation 13 14 # explainers ~\Anaconda3\lib\site-packages\shap\_explanation.py in <module> 8 from slicer import Slicer, Alias 9 # from ._order import Order ---> 10 from .utils._general import OpChain 11 12 # slicer confuses pylint... ~\Anaconda3\lib\site-packages\shap\utils\__init__.py in <module> ----> 1 from ._clustering import hclust_ordering, partition_tree, partition_tree_shuffle, delta_minimization_order, hclust 2 from ._general import approximate_interactions, potential_interactions, sample, safe_isinstance, assert_import, record_import_error 3 from ._general import shapley_coefficients, convert_name, format_value, ordinal_str, OpChain 4 from ._show_progress import show_progress 5 from ._masked_model import MaskedModel, make_masks ~\Anaconda3\lib\site-packages\shap\utils\_clustering.py in <module> 2 import scipy as sp 3 from scipy.spatial.distance import pdist ----> 4 from numba import jit 5 import sklearn 6 import warnings ~\Anaconda3\lib\site-packages\numba\__init__.py in <module> 12 del get_versions 13 ---> 14 from numba.core import config 15 from numba.testing import _runtests as runtests 16 from numba.core import types, errors ~\Anaconda3\lib\site-packages\numba\core\config.py in <module> 14 15 ---> 16 import llvmlite.binding as ll 17 18 IS_WIN32 = sys.platform.startswith('win32') ~\Anaconda3\lib\site-packages\llvmlite\binding\__init__.py in <module> 2 Things that rely on the LLVM library 3 """ ----> 4 from .dylib import * 5 from .executionengine import * 6 from .initfini import * ~\Anaconda3\lib\site-packages\llvmlite\binding\dylib.py in <module> 1 from ctypes import c_void_p, c_char_p, c_bool, POINTER 2 ----> 3 from llvmlite.binding import ffi 4 from llvmlite.binding.common import _encode_string 5 ~\Anaconda3\lib\site-packages\llvmlite\binding\ffi.py in <module> 151 break 152 else: --> 153 raise OSError("Could not load shared object file: {}".format(_lib_name)) 154 155 OSError: Could not load shared object file: llvmlite.dll Does anybody have an idea what the root cause may be and whatelse I could try? THx, Marcus
[ "in my case, following actions worked:\nconda uninstall llvmlite\nand then\nconda install llvmlite\n", "I did both\nconda uninstall llvmlite\npip install llvmlite\n\nand\nconda uninstall llvmlite\nconda install llvmlite\n\nbut can't get it work.\nI got it work with\nconda install -c numba numba\nconda install -c numba llvmlite\n\n", "I had the same llvmlite missing DLL issue on Windows 10 with Python 3.8 in jupyter notebook when trying to import numba and solved it by installing with pip instead of conda :\nconda uninstall llvmlite\npip install llvmlite\n\nIf it still does not wortk try installing llvmlite from wheel by downloading your wheel on this page :\nhttps://www.lfd.uci.edu/~gohlke/pythonlibs/#llvmlite\npip install llvmlite-0.34.0-cp38-cp38-win_amd64.whl (replace with your wheel)\n", "You need VC runtime to use Llvmlite\nLlvmlite: a lightweight LLVM binding for writing JIT compilers.\nRequires the Visual C++ Redistributable Packages for Visual Studio 2017.\ncheck this https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170\nor for x64 direct link:\nhttps://aka.ms/vs/17/release/vc_redist.x64.exe\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "feature_extraction", "machine_learning", "python", "python_3.x", "shap" ]
stackoverflow_0064541502_feature_extraction_machine_learning_python_python_3.x_shap.txt
Q: How to convert datetime.datetime to array? I have an array that contains datetime.datetime objects. They are as follows: array([datetime.datetime(2011, 1, 1, 0, 3, 32, 262000), datetime.datetime(2011, 1, 1, 0, 5, 7, 290000), datetime.datetime(2011, 1, 1, 0, 6, 45, 383000), datetime.datetime(2011, 1, 1, 0, 8, 23, 335000)], dtype=object) I am trying to save this data as a .mat file using savemat through this command. save_structure = {'Time':time_array} savemat(path + '\2011.mat',save_structure) Here .mat is the matlab format file. I have used the following libraries: from spacepy import pycdf from scipy.io import savemat,loadmat import datetime It gives me the following error: TypeError: Could not convert 2011-01-01 00:03:32.262000 (type <class 'datetime.datetime'>) to array One of the ways through which I was getting through was by converting the following data into Unix but now I need these particular data for ease in my further data analysis. Is there a way to achieve what I am trying to? A: Convert the datetime.datetime to numpy.datetime64 first, then scipy seems to know how to handle the conversion: import datetime import numpy as np from scipy.io import savemat time_array = np.array([datetime.datetime(2011, 1, 1, 0, 3, 32, 262000), datetime.datetime(2011, 1, 1, 0, 5, 7, 290000), datetime.datetime(2011, 1, 1, 0, 6, 45, 383000), datetime.datetime(2011, 1, 1, 0, 8, 23, 335000)]) time_array_np = time_array.astype(np.datetime64) save_structure = {'Time': time_array_np} savemat('2011.mat', save_structure)
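The Unix-time route the question mentions is also viable and keeps the .mat contents as plain float64, which MATLAB reads without any datetime handling (a sketch reusing the same time_array; note that .timestamp() treats naive datetimes as local time):
import numpy as np
from scipy.io import savemat

# seconds since the Unix epoch, as ordinary doubles
timestamps = np.array([t.timestamp() for t in time_array])
savemat('2011.mat', {'Time': timestamps})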
How to convert datetime.datetime to array?
I have an array that contains datetime.datetime objects. They are as follows: array([datetime.datetime(2011, 1, 1, 0, 3, 32, 262000), datetime.datetime(2011, 1, 1, 0, 5, 7, 290000), datetime.datetime(2011, 1, 1, 0, 6, 45, 383000), datetime.datetime(2011, 1, 1, 0, 8, 23, 335000)], dtype=object) I am trying to save this data as a .mat file using savemat through this command. save_structure = {'Time':time_array} savemat(path + '\2011.mat',save_structure) Here .mat is the matlab format file. I have used the following libraries: from spacepy import pycdf from scipy.io import savemat,loadmat import datetime It gives me the following error: TypeError: Could not convert 2011-01-01 00:03:32.262000 (type <class 'datetime.datetime'>) to array One of the ways through which I was getting through was by converting the following data into Unix but now I need these particular data for ease in my further data analysis. Is there a way to achieve what I am trying to?
[ "Convert the datetime.datetime to numpy.datetime64 first, then scipy seems to know how to handle the conversion:\nimport datetime\n\nimport numpy as np\nfrom scipy.io import savemat\n\ntime_array = np.array([datetime.datetime(2011, 1, 1, 0, 3, 32, 262000),\n datetime.datetime(2011, 1, 1, 0, 5, 7, 290000),\n datetime.datetime(2011, 1, 1, 0, 6, 45, 383000),\n datetime.datetime(2011, 1, 1, 0, 8, 23, 335000)])\n\ntime_array_np = time_array.astype(np.datetime64)\nsave_structure = {'Time': time_array_np}\nsavemat('2011.mat', save_structure)\n\n" ]
[ 1 ]
[]
[]
[ "datetime", "mat_file", "numpy", "python", "scipy" ]
stackoverflow_0074542391_datetime_mat_file_numpy_python_scipy.txt
Q: Python selenium error: no such element: Unable to locate element; clicking a button on the screen I've been trying to figure out this error for several hours now. I tried to click the red button on the screen, but somehow I can't use the xpath method. The error message was the following:
Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="mapContainer"]"} (Session info: chrome=107.0.5304.107)
My goal is to crawl the information of the popup screen shown when the button is clicked.
from selenium import webdriver # activate Selenium
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
import time
import pandas as pd
import numpy as np
import re
from datetime import datetime

dr = webdriver.Chrome() # assign the Chrome driver instance to dr
dr.get('https://rawris-am.ekr.or.kr/wrms/') # open the web page at the url through the driver
time.sleep(2) # wait 2 seconds
act = ActionChains(dr) # assign the action-chain runner for the driver to act
dr.switch_to.frame('DivMapOpenLayers')
time.sleep(1)
element1 = dr.find_element(By.XPATH, '//*[@id="xcontainer_reservoir"]/table[2]/tbody/tr/td/div[1]/label[1]/input')
CHOICE = {'경기':2, '강원':3, '충북':4, '충남':5, '전북':6, '전남':7, '경북':8, '경남':9, '제주':10}
CHOICE = CHOICE.get('전남')
dr.find_element(By.XPATH, f'//*[@id="xcontainer_reservoir"]/table[3]/tbody/tr[{CHOICE}]/td[2]').click()
res_detail = dr.find_element(By.XPATH, f'//*[@id="xcontainer_reservoir"]/table[3]/tbody/tr[2]/td[2]').click()
res_detail = dr.find_element(By.XPATH, f'//*[@id="xcontainer_reservoir"]/table[3]/tbody/tr[2]/td[2]').click()
ee = dr.find_element(By.XPATH, '//*[@id="mapContainer"]') # error raised here
ee.get_attribute('style')
dr.execute_script("arguments[0].setAttribute('style','display: block;')", ee);
ee.get_attribute('style')
1. Before putting the mouse cursor on the point 2. When I put the mouse cursor on the red point site url : https://rawris-am.ekr.or.kr/wrms/ Thanks in advance!!
A: I don't think you are doing much wrong. Adding explicit waits will always improve reliability, as you are less vulnerable to timing issues (note that WebDriverWait and expected_conditions need importing, and the driver variable here is dr):
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

iframe = WebDriverWait(dr, 10).until(EC.presence_of_element_located((By.ID, "DivMapOpenLayers")))
dr.switch_to.frame(iframe)
CHOICE = {'경기':2, '강원':3, '충북':4, '충남':5, '전북':6, '전남':7, '경북':8, '경남':9, '제주':10}
CHOICE = CHOICE.get('전남')
WebDriverWait(dr, 10).until(EC.presence_of_element_located((By.XPATH, f'//*[@id="xcontainer_reservoir"]/table[3]/tbody/tr[{CHOICE}]/td[2]'))).click()
WebDriverWait(dr, 10).until(EC.presence_of_element_located((By.XPATH, f'//*[@id="xcontainer_reservoir"]/table[3]/tbody/tr[2]/td[2]'))).click()
ee = WebDriverWait(dr, 10).until(EC.presence_of_element_located((By.XPATH, '//*[@id="mapContainer"]')))
print(ee.get_attribute("textContent"))
Python selenium error: no such element: Unable to locate element; clicking a button on the screen
I've been trying to figure out this error for several hours now. I tried to click the red button on the screen, but somehow I can't use the xpath method. The error message given was this:
Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="mapContainer"]"} (Session info: chrome=107.0.5304.107)
My goal is to crawl the information of a popup screen shown when the button is clicked.
from selenium import webdriver # activate Selenium
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
import time
import pandas as pd
import numpy as np
import re
from datetime import datetime

dr = webdriver.Chrome() # assign the Chrome driver instance to dr
dr.get('https://rawris-am.ekr.or.kr/wrms/') # open the web page at the URL via the driver
time.sleep(2) # wait 2 seconds
act = ActionChains(dr) # assign an ActionChains helper for the driver to act

dr.switch_to.frame('DivMapOpenLayers')
time.sleep(1)

element1 = dr.find_element(By.XPATH, '//*[@id="xcontainer_reservoir"]/table[2]/tbody/tr/td/div[1]/label[1]/input')

CHOICE = {'경기':2, '강원':3, '충북':4, '충남':5, '전북':6, '전남':7, '경북':8, '경남':9, '제주':10}
CHOICE = CHOICE.get('전남')
dr.find_element(By.XPATH, f'//*[@id="xcontainer_reservoir"]/table[3]/tbody/tr[{CHOICE}]/td[2]').click()

res_detail = dr.find_element(By.XPATH, f'//*[@id="xcontainer_reservoir"]/table[3]/tbody/tr[2]/td[2]').click()
res_detail = dr.find_element(By.XPATH, f'//*[@id="xcontainer_reservoir"]/table[3]/tbody/tr[2]/td[2]').click()

ee = dr.find_element(By.XPATH, '//*[@id="mapContainer"]') # error caused
ee.get_attribute('style')
dr.execute_script("arguments[0].setAttribute('style','display: block;')", ee);
ee.get_attribute('style')
1. before putting the mouse cursor on the point
2. when I put the mouse cursor on the red point
site url : https://rawris-am.ekr.or.kr/wrms/
Thanks in advance!
[ "I don't think you are doing much wrong. Adding explicit waits will always improve reliability as you are less vulnerable to timing issues:\niframe = WebDriverWait(dr, 10).until(EC.presence_of_element_located((By.ID, \"DivMapOpenLayers\")))\ndriver.switch_to.frame(iframe)\nCHOICE = {'경기':2, '강원':3, '충북':4, '충남':5, '전북':6, '전남':7, '경북':8, '경남':9, '제주':10}\nCHOICE = CHOICE.get('전남') \nWebDriverWait(dr, 10).until(EC.presence_of_element_located((By.XPATH, f'//*[@id=\"xcontainer_reservoir\"]/table[3]/tbody/tr[{CHOICE}]/td[2]'))).click()\nWebDriverWait(dr, 10).until(EC.presence_of_element_located((By.XPATH, f'//*[@id=\"xcontainer_reservoir\"]/table[3]/tbody/tr[2]/td[2]'))).click()\nee = WebDriverWait(dr, 10).until(EC.presence_of_element_located((By.XPATH, '//*[@id=\"mapContainer\"]')))\nprint(ee.get_attribute(\"textContent\"))\n\n" ]
[ 0 ]
[]
[]
[ "python", "selenium", "web_crawler", "xpath" ]
stackoverflow_0074542058_python_selenium_web_crawler_xpath.txt
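For completeness, here is a minimal, self-contained version of the waiting approach from the answer, with the imports it relies on spelled out. The frame and element ids are copied from the question and may change on the live site:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

dr = webdriver.Chrome()
dr.get('https://rawris-am.ekr.or.kr/wrms/')
wait = WebDriverWait(dr, 10)

# Enter the map iframe before locating anything inside it.
iframe = wait.until(EC.presence_of_element_located((By.ID, 'DivMapOpenLayers')))
dr.switch_to.frame(iframe)

ee = wait.until(EC.presence_of_element_located((By.ID, 'mapContainer')))
print(ee.get_attribute('textContent'))

# Return to the top-level document when done inside the iframe.
dr.switch_to.default_content()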
Q: Receiving permission denied error with Docker, nginx, uwsgi setup. I can manually write files inside the container I'm trying to setup a flask application to run in production using docker, nginx, and uwsgi. Docker file: # syntax=docker/dockerfile:1 FROM python:3.8-slim-buster WORKDIR /flask_app RUN apt-get clean \ && apt-get -y update RUN apt-get -y install nginx \ && apt-get -y install python3-dev \ && apt-get -y install build-essential EXPOSE 8080 COPY requirements.txt requirements.txt RUN pip install -r requirements.txt --src /usr/local/src COPY . . COPY nginx.conf /etc/nginx RUN chmod +x ./start.sh CMD ["./start.sh"] # CMD [ "python3", "-m" , "flask", "--app", "main", "run", "--host=0.0.0.0", "-p", "5001"] uwsgi.ini `[uwsgi] module = main:app uid = www-data gid = www-data master = true processes = 5 #socket = /tmp/uwsgi.socket socket = 127.0.0.1:8080 chmod-socket = 666 vacuum = true die-on-term = true` nginx.conf user www-data; worker_processes auto; pid /run/nginx.pid; events { worker_connections 1024; use epoll; multi_accept on; } http { access_log /dev/stdout; error_log /dev/stdout; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; index index.html index.htm; # Configuration containing list of application servers upstream uwsgicluster { server 127.0.0.1:8080; # server 127.0.0.1:8081; # .. # . } server { listen 8888 default_server; listen [::]:8888 default_server; server_name localhost; root /var/www/html; location / { include uwsgi_params; uwsgi_pass uwsgicluster; uwsgi_read_timeout 1h; uwsgi_send_timeout 1h; proxy_send_timeout 1h; proxy_read_timeout 1h; } } } The web application runs successfully until I try to upload an image to display. Saving it to an uploads folder under static/uploads returns this error: PermissionError: [Errno 13] Permission denied: 'static/uploads/timyoutube.jpg' I expect to not receive this error as it works using flask run in development mode. A: The issue turned out to be a permissions issue with the user www-data not having write permissions. I changed the owner of the workdir to www-data and that fixed the issue RUN chown -R www-data:www-data /flask_app
Receiving permission denied error with Docker, nginx, uwsgi setup. I can manually write files inside the container
I'm trying to setup a flask application to run in production using docker, nginx, and uwsgi. Docker file: # syntax=docker/dockerfile:1 FROM python:3.8-slim-buster WORKDIR /flask_app RUN apt-get clean \ && apt-get -y update RUN apt-get -y install nginx \ && apt-get -y install python3-dev \ && apt-get -y install build-essential EXPOSE 8080 COPY requirements.txt requirements.txt RUN pip install -r requirements.txt --src /usr/local/src COPY . . COPY nginx.conf /etc/nginx RUN chmod +x ./start.sh CMD ["./start.sh"] # CMD [ "python3", "-m" , "flask", "--app", "main", "run", "--host=0.0.0.0", "-p", "5001"] uwsgi.ini `[uwsgi] module = main:app uid = www-data gid = www-data master = true processes = 5 #socket = /tmp/uwsgi.socket socket = 127.0.0.1:8080 chmod-socket = 666 vacuum = true die-on-term = true` nginx.conf user www-data; worker_processes auto; pid /run/nginx.pid; events { worker_connections 1024; use epoll; multi_accept on; } http { access_log /dev/stdout; error_log /dev/stdout; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; index index.html index.htm; # Configuration containing list of application servers upstream uwsgicluster { server 127.0.0.1:8080; # server 127.0.0.1:8081; # .. # . } server { listen 8888 default_server; listen [::]:8888 default_server; server_name localhost; root /var/www/html; location / { include uwsgi_params; uwsgi_pass uwsgicluster; uwsgi_read_timeout 1h; uwsgi_send_timeout 1h; proxy_send_timeout 1h; proxy_read_timeout 1h; } } } The web application runs successfully until I try to upload an image to display. Saving it to an uploads folder under static/uploads returns this error: PermissionError: [Errno 13] Permission denied: 'static/uploads/timyoutube.jpg' I expect to not receive this error as it works using flask run in development mode.
[ "The issue turned out to be a permissions issue with the user www-data not having write permissions.\nI changed the owner of the workdir to www-data and that fixed the issue\nRUN chown -R www-data:www-data /flask_app\n\n" ]
[ 1 ]
[]
[]
[ "docker", "dockerfile", "flask", "nginx", "python" ]
stackoverflow_0074540333_docker_dockerfile_flask_nginx_python.txt
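A companion pitfall to the ownership fix above: 'static/uploads' is a relative path, so under uwsgi it resolves against the process working directory rather than the app root. A hedged sketch (the route name and upload field are illustrative, not from the question) that anchors the path at the app root and creates the directory at startup:

import os
from flask import Flask, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
# Resolve uploads relative to the application package, not the CWD.
UPLOAD_DIR = os.path.join(app.root_path, 'static', 'uploads')
os.makedirs(UPLOAD_DIR, exist_ok=True)

@app.route('/upload', methods=['POST'])
def upload():
    f = request.files['file']
    f.save(os.path.join(UPLOAD_DIR, secure_filename(f.filename)))
    return 'saved'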
Q: when record created, it wont autofill record field I'm trying to make autofill record from bank.account.account to account.journal but it seems it didn't work. I wonder where is the mistake in this code class BankAccounAccount(models.Model): _name = 'bank.account.account' _description = "Bank Account Account" _rec_name = 'acc_number' acc_number = fields.Char(string="Account Number", required=True) bank_id = fields.Many2one('res.bank', string="Bank") bank_bic = fields.Char(string="Bank Identifier Code") company_id = fields.Many2one('res.company', string="Company", default=lambda self: self.env.user.company_id.id) branch_id = fields.Many2one('res.branch', string="Branch") @api.model def create(self, values): res = super(BankAccounAccount, self).create(values) for rec in self: values = { 'acc_number': rec.acc_number.id, 'bank_id': rec.bank_id.id, 'bank_bic': rec.bank_bic.id, 'company_id': rec.company_id.id, 'branch_id': rec.branch_id.id, } self.env['account.journal'].create(values) return res class AccountJournal(models.Model): _inherit = "account.journal" name = fields.Char(string='Journal Name', required=True, tracking=True) code = fields.Char(string='Short Code', size=5, required=True, help="Shorter name used for display. The journal entries of this journal will also be named using this prefix by default.", tracking=True) active = fields.Boolean(default=True, help="Set active to false to hide the Journal without removing it.", tracking=True) type = fields.Selection([ ('sale', 'Sales'), ('purchase', 'Purchase'), ('cash', 'Cash'), ('bank', 'Bank'), ('general', 'Miscellaneous'), ], required=True, help="Select 'Sale' for customer invoices journals.\n"\ "Select 'Purchase' for vendor bills journals.\n"\ "Select 'Cash' or 'Bank' for journals that are used in customer or vendor payments.\n"\ "Select 'General' for miscellaneous operations journals.", tracking=True) type_control_ids = fields.Many2many('account.account.type', 'journal_account_type_control_rel', 'journal_id', 'type_id', string='Allowed account types', tracking=True) account_control_ids = fields.Many2many('account.account', 'journal_account_control_rel', 'journal_id', 'account_id', string='Allowed accounts', check_company=True, domain="[('deprecated', '=', False), ('company_id', '=', company_id), ('is_off_balance', '=', False)]", tracking=True) default_account_type = fields.Many2one('account.account.type', compute="_compute_default_account_type", tracking=True) default_account_id = fields.Many2one( comodel_name='account.account', check_company=True, copy=False, ondelete='restrict', string='Default Account', domain="[('deprecated', '=', False), ('company_id', '=', company_id)," "'|', ('user_type_id', '=', default_account_type), ('user_type_id', 'in', type_control_ids)," "('user_type_id.type', 'not in', ('receivable', 'payable'))]", tracking=True) payment_debit_account_id = fields.Many2one( comodel_name='account.account', check_company=True, copy=False, ondelete='restrict', help="Incoming payments entries triggered by invoices/refunds will be posted on the Outstanding Receipts Account " "and displayed as blue lines in the bank reconciliation widget. 
During the reconciliation process, concerned " "transactions will be reconciled with entries on the Outstanding Receipts Account instead of the " "receivable account.", string='Outstanding Receipts Account', domain=lambda self: "[('deprecated', '=', False), ('company_id', '=', company_id), \ ('user_type_id.type', 'not in', ('receivable', 'payable')), \ '|', ('user_type_id', '=', %s), ('id', '=', default_account_id)]" % self.env.ref('account.data_account_type_current_assets').id, tracking=True) payment_credit_account_id = fields.Many2one( comodel_name='account.account', check_company=True, copy=False, ondelete='restrict', help="Outgoing payments entries triggered by bills/credit notes will be posted on the Outstanding Payments Account " "and displayed as blue lines in the bank reconciliation widget. During the reconciliation process, concerned " "transactions will be reconciled with entries on the Outstanding Payments Account instead of the " "payable account.", string='Outstanding Payments Account', domain=lambda self: "[('deprecated', '=', False), ('company_id', '=', company_id), \ ('user_type_id.type', 'not in', ('receivable', 'payable')), \ '|', ('user_type_id', '=', %s), ('id', '=', default_account_id)]" % self.env.ref('account.data_account_type_current_assets').id, tracking=True) suspense_account_id = fields.Many2one( comodel_name='account.account', check_company=True, ondelete='restrict', readonly=False, store=True, compute='_compute_suspense_account_id', help="Bank statements transactions will be posted on the suspense account until the final reconciliation " "allowing finding the right account.", string='Suspense Account', domain=lambda self: "[('deprecated', '=', False), ('company_id', '=', company_id), \ ('user_type_id.type', 'not in', ('receivable', 'payable')), \ ('user_type_id', '=', %s)]" % self.env.ref('account.data_account_type_current_liabilities').id, tracking=True) restrict_mode_hash_table = fields.Boolean(string="Lock Posted Entries with Hash", help="If ticked, the accounting entry or invoice receives a hash as soon as it is posted and cannot be modified anymore.", tracking=True) sequence = fields.Integer(help='Used to order Journals in the dashboard view', default=10, tracking=True) invoice_reference_type = fields.Selection(string='Communication Type', required=True, selection=[('none', 'Free'), ('partner', 'Based on Customer'), ('invoice', 'Based on Invoice')], default='invoice', help='You can set here the default communication that will appear on customer invoices, once validated, to help the customer to refer to that particular invoice when making the payment.', tracking=True) invoice_reference_model = fields.Selection(string='Communication Standard', required=True, selection=[('odoo', 'Odoo'),('euro', 'European')], default=_default_invoice_reference_model, help="You can choose different models for each type of reference. 
The default one is the Odoo reference.", tracking=True) #groups_id = fields.Many2many('res.groups', 'account_journal_group_rel', 'journal_id', 'group_id', string='Groups') currency_id = fields.Many2one('res.currency', help='The currency used to enter statement', string="Currency", tracking=True) company_id = fields.Many2one('res.company', string='Company', required=True, readonly=True, index=True, default=lambda self: self.env.company, help="Company related to this journal", tracking=True) country_code = fields.Char(related='company_id.country_id.code', readonly=True, tracking=True) refund_sequence = fields.Boolean(string='Dedicated Credit Note Sequence', help="Check this box if you don't want to share the same sequence for invoices and credit notes made from this journal", default=False, tracking=True) sequence_override_regex = fields.Text(help="Technical field used to enforce complex sequence composition that the system would normally misunderstand.\n"\ "This is a regex that can include all the following capture groups: prefix1, year, prefix2, month, prefix3, seq, suffix.\n"\ "The prefix* groups are the separators between the year, month and the actual increasing sequence number (seq).\n"\ "e.g: ^(?P<prefix1>.*?)(?P<year>\d{4})(?P<prefix2>\D*?)(?P<month>\d{2})(?P<prefix3>\D+?)(?P<seq>\d+)(?P<suffix>\D*?)$", tracking=True) inbound_payment_method_ids = fields.Many2many( comodel_name='account.payment.method', relation='account_journal_inbound_payment_method_rel', column1='journal_id', column2='inbound_payment_method', domain=[('payment_type', '=', 'inbound')], string='Inbound Payment Methods', compute='_compute_inbound_payment_method_ids', store=True, readonly=False, help="Manual: Get paid by cash, check or any other method outside of Odoo.\n" "Electronic: Get paid automatically through a payment acquirer by requesting a transaction" " on a card saved by the customer when buying or subscribing online (payment token).\n" "Batch Deposit: Encase several customer checks at once by generating a batch deposit to" " submit to your bank. When encoding the bank statement in Odoo, you are suggested to" " reconcile the transaction with the batch deposit. Enable this option from the settings.", tracking=True ) outbound_payment_method_ids = fields.Many2many( comodel_name='account.payment.method', relation='account_journal_outbound_payment_method_rel', column1='journal_id', column2='outbound_payment_method', domain=[('payment_type', '=', 'outbound')], string='Outbound Payment Methods', compute='_compute_outbound_payment_method_ids', store=True, readonly=False, help="Manual: Pay bill by cash or any other method outside of Odoo.\n" "Check: Pay bill by check and print it from Odoo.\n" "SEPA Credit Transfer: Pay bill from a SEPA Credit Transfer file you submit to your" " bank. 
Enable this option from the settings.", tracking=True ) at_least_one_inbound = fields.Boolean(compute='_methods_compute', store=True, tracking=True) at_least_one_outbound = fields.Boolean(compute='_methods_compute', store=True, tracking=True) profit_account_id = fields.Many2one( comodel_name='account.account', check_company=True, help="Used to register a profit when the ending balance of a cash register differs from what the system computes", string='Profit Account', domain=lambda self: "[('deprecated', '=', False), ('company_id', '=', company_id), \ ('user_type_id.type', 'not in', ('receivable', 'payable')), \ ('user_type_id', 'in', %s)]" % [self.env.ref('account.data_account_type_revenue').id, self.env.ref('account.data_account_type_other_income').id], tracking=True) loss_account_id = fields.Many2one( comodel_name='account.account', check_company=True, help="Used to register a loss when the ending balance of a cash register differs from what the system computes", string='Loss Account', domain=lambda self: "[('deprecated', '=', False), ('company_id', '=', company_id), \ ('user_type_id.type', 'not in', ('receivable', 'payable')), \ ('user_type_id', '=', %s)]" % self.env.ref('account.data_account_type_expenses').id, tracking=True) # Bank journals fields company_partner_id = fields.Many2one('res.partner', related='company_id.partner_id', string='Account Holder', readonly=True, store=False) bank_account_id = fields.Many2one('res.partner.bank', string="Bank Account", ondelete='restrict', copy=False, check_company=True, domain="[('partner_id','=', company_partner_id), '|', ('company_id', '=', False), ('company_id', '=', company_id)]", tracking=True) bank_statements_source = fields.Selection(selection=_get_bank_statements_available_sources, string='Bank Feeds', default='undefined', help="Defines how the bank statements will be registered", tracking=True) bank_acc_number = fields.Char(related='bank_account_id.acc_number', readonly=False, tracking=True) bank_id = fields.Many2one('res.bank', related='bank_account_id.bank_id', readonly=False, tracking=True) # Sale journals fields sale_activity_type_id = fields.Many2one('mail.activity.type', string='Schedule Activity', default=False, help="Activity will be automatically scheduled on payment due date, improving collection process.", tracking=True) sale_activity_user_id = fields.Many2one('res.users', string="Activity User", help="Leave empty to assign the Salesperson of the invoice.", tracking=True) sale_activity_note = fields.Text('Activity Summary', tracking=True) # alias configuration for journals alias_id = fields.Many2one('mail.alias', string='Email Alias', help="Send one separate email for each invoice.\n\n" "Any file extension will be accepted.\n\n" "Only PDF and XML files will be interpreted by Odoo", copy=False, tracking=True) alias_domain = fields.Char('Alias domain', compute='_compute_alias_domain', default=_default_alias_domain, compute_sudo=True, tracking=True) alias_name = fields.Char('Alias Name', copy=False, related='alias_id.alias_name', help="It creates draft invoices and bills by sending an email.", readonly=False, tracking=True) journal_group_ids = fields.Many2many('account.journal.group', domain="[('company_id', '=', company_id)]", check_company=True, string="Journal Groups", tracking=True) secure_sequence_id = fields.Many2one('ir.sequence', help='Sequence to use to ensure the securisation of data', check_company=True, readonly=True, copy=False, tracking=True) Thanks I've been stuck for weeks and already tried manipulating but none 
of them are working A: 1- Perhaps you should try not to use for rec in self: vals = {'acc_number': self.acc_number, 'bank_id': self.bank_id.id, e.t.c} self.env['account.journal'].create(vals) 2- Rename the second values. It might help I am not sure when you create a new record in account.journal you are covering all required fields. A: To create account.journal record with bank type you need at least the below required fields (name, code, type, company_id). The code field in journal should be unique with size of 5 chars. I have added code field to your custom model and use it in create method. class BankAccounAccount(models.Model): _name = 'bank.account.account' _description = "Bank Account Account" _rec_name = 'acc_number' acc_number = fields.Char(string="Account Number", required=True) bank_id = fields.Many2one('res.bank', string="Bank") bank_bic = fields.Char(string="Bank Identifier Code") company_id = fields.Many2one('res.company', string="Company", default=lambda self: self.env.user.company_id.id) branch_id = fields.Many2one('res.branch', string="Branch") code = fields.Char(string='Short Code', size=5, required=True); # You need to make sure that the entered code not already used by another journal @api.model def create(self, values): res = super(BankAccounAccount, self).create(values) for rec in res: values = { 'company_id': rec.company_id.id, 'name': rec.acc_number, 'type':'bank', 'code': rec.code, #'acc_number': rec.acc_number.id, # the acc_number field is not exists in 'account.journal' so you canot use it #'bank_id': rec.bank_id.id, # this bank_id field is related to bank_account_id.bank_id and bank_account_id is Many2One from 'res.partner.bank' #'bank_bic': rec.bank_bic.id, # the bank_bic field is not exists in 'account.journal' so you canot use it #'branch_id': rec.branch_id.id, # the branch_id field is not exists in 'account.journal' unless } self.env['account.journal'].create(values) # make sure that the user you are using has access to account.journal unless use the below commented code self.env['account.journal'].sudo().create(values) return res
when a record is created, it won't autofill the record field
I'm trying to make autofill record from bank.account.account to account.journal but it seems it didn't work. I wonder where is the mistake in this code class BankAccounAccount(models.Model): _name = 'bank.account.account' _description = "Bank Account Account" _rec_name = 'acc_number' acc_number = fields.Char(string="Account Number", required=True) bank_id = fields.Many2one('res.bank', string="Bank") bank_bic = fields.Char(string="Bank Identifier Code") company_id = fields.Many2one('res.company', string="Company", default=lambda self: self.env.user.company_id.id) branch_id = fields.Many2one('res.branch', string="Branch") @api.model def create(self, values): res = super(BankAccounAccount, self).create(values) for rec in self: values = { 'acc_number': rec.acc_number.id, 'bank_id': rec.bank_id.id, 'bank_bic': rec.bank_bic.id, 'company_id': rec.company_id.id, 'branch_id': rec.branch_id.id, } self.env['account.journal'].create(values) return res class AccountJournal(models.Model): _inherit = "account.journal" name = fields.Char(string='Journal Name', required=True, tracking=True) code = fields.Char(string='Short Code', size=5, required=True, help="Shorter name used for display. The journal entries of this journal will also be named using this prefix by default.", tracking=True) active = fields.Boolean(default=True, help="Set active to false to hide the Journal without removing it.", tracking=True) type = fields.Selection([ ('sale', 'Sales'), ('purchase', 'Purchase'), ('cash', 'Cash'), ('bank', 'Bank'), ('general', 'Miscellaneous'), ], required=True, help="Select 'Sale' for customer invoices journals.\n"\ "Select 'Purchase' for vendor bills journals.\n"\ "Select 'Cash' or 'Bank' for journals that are used in customer or vendor payments.\n"\ "Select 'General' for miscellaneous operations journals.", tracking=True) type_control_ids = fields.Many2many('account.account.type', 'journal_account_type_control_rel', 'journal_id', 'type_id', string='Allowed account types', tracking=True) account_control_ids = fields.Many2many('account.account', 'journal_account_control_rel', 'journal_id', 'account_id', string='Allowed accounts', check_company=True, domain="[('deprecated', '=', False), ('company_id', '=', company_id), ('is_off_balance', '=', False)]", tracking=True) default_account_type = fields.Many2one('account.account.type', compute="_compute_default_account_type", tracking=True) default_account_id = fields.Many2one( comodel_name='account.account', check_company=True, copy=False, ondelete='restrict', string='Default Account', domain="[('deprecated', '=', False), ('company_id', '=', company_id)," "'|', ('user_type_id', '=', default_account_type), ('user_type_id', 'in', type_control_ids)," "('user_type_id.type', 'not in', ('receivable', 'payable'))]", tracking=True) payment_debit_account_id = fields.Many2one( comodel_name='account.account', check_company=True, copy=False, ondelete='restrict', help="Incoming payments entries triggered by invoices/refunds will be posted on the Outstanding Receipts Account " "and displayed as blue lines in the bank reconciliation widget. 
During the reconciliation process, concerned " "transactions will be reconciled with entries on the Outstanding Receipts Account instead of the " "receivable account.", string='Outstanding Receipts Account', domain=lambda self: "[('deprecated', '=', False), ('company_id', '=', company_id), \ ('user_type_id.type', 'not in', ('receivable', 'payable')), \ '|', ('user_type_id', '=', %s), ('id', '=', default_account_id)]" % self.env.ref('account.data_account_type_current_assets').id, tracking=True) payment_credit_account_id = fields.Many2one( comodel_name='account.account', check_company=True, copy=False, ondelete='restrict', help="Outgoing payments entries triggered by bills/credit notes will be posted on the Outstanding Payments Account " "and displayed as blue lines in the bank reconciliation widget. During the reconciliation process, concerned " "transactions will be reconciled with entries on the Outstanding Payments Account instead of the " "payable account.", string='Outstanding Payments Account', domain=lambda self: "[('deprecated', '=', False), ('company_id', '=', company_id), \ ('user_type_id.type', 'not in', ('receivable', 'payable')), \ '|', ('user_type_id', '=', %s), ('id', '=', default_account_id)]" % self.env.ref('account.data_account_type_current_assets').id, tracking=True) suspense_account_id = fields.Many2one( comodel_name='account.account', check_company=True, ondelete='restrict', readonly=False, store=True, compute='_compute_suspense_account_id', help="Bank statements transactions will be posted on the suspense account until the final reconciliation " "allowing finding the right account.", string='Suspense Account', domain=lambda self: "[('deprecated', '=', False), ('company_id', '=', company_id), \ ('user_type_id.type', 'not in', ('receivable', 'payable')), \ ('user_type_id', '=', %s)]" % self.env.ref('account.data_account_type_current_liabilities').id, tracking=True) restrict_mode_hash_table = fields.Boolean(string="Lock Posted Entries with Hash", help="If ticked, the accounting entry or invoice receives a hash as soon as it is posted and cannot be modified anymore.", tracking=True) sequence = fields.Integer(help='Used to order Journals in the dashboard view', default=10, tracking=True) invoice_reference_type = fields.Selection(string='Communication Type', required=True, selection=[('none', 'Free'), ('partner', 'Based on Customer'), ('invoice', 'Based on Invoice')], default='invoice', help='You can set here the default communication that will appear on customer invoices, once validated, to help the customer to refer to that particular invoice when making the payment.', tracking=True) invoice_reference_model = fields.Selection(string='Communication Standard', required=True, selection=[('odoo', 'Odoo'),('euro', 'European')], default=_default_invoice_reference_model, help="You can choose different models for each type of reference. 
The default one is the Odoo reference.", tracking=True) #groups_id = fields.Many2many('res.groups', 'account_journal_group_rel', 'journal_id', 'group_id', string='Groups') currency_id = fields.Many2one('res.currency', help='The currency used to enter statement', string="Currency", tracking=True) company_id = fields.Many2one('res.company', string='Company', required=True, readonly=True, index=True, default=lambda self: self.env.company, help="Company related to this journal", tracking=True) country_code = fields.Char(related='company_id.country_id.code', readonly=True, tracking=True) refund_sequence = fields.Boolean(string='Dedicated Credit Note Sequence', help="Check this box if you don't want to share the same sequence for invoices and credit notes made from this journal", default=False, tracking=True) sequence_override_regex = fields.Text(help="Technical field used to enforce complex sequence composition that the system would normally misunderstand.\n"\ "This is a regex that can include all the following capture groups: prefix1, year, prefix2, month, prefix3, seq, suffix.\n"\ "The prefix* groups are the separators between the year, month and the actual increasing sequence number (seq).\n"\ "e.g: ^(?P<prefix1>.*?)(?P<year>\d{4})(?P<prefix2>\D*?)(?P<month>\d{2})(?P<prefix3>\D+?)(?P<seq>\d+)(?P<suffix>\D*?)$", tracking=True) inbound_payment_method_ids = fields.Many2many( comodel_name='account.payment.method', relation='account_journal_inbound_payment_method_rel', column1='journal_id', column2='inbound_payment_method', domain=[('payment_type', '=', 'inbound')], string='Inbound Payment Methods', compute='_compute_inbound_payment_method_ids', store=True, readonly=False, help="Manual: Get paid by cash, check or any other method outside of Odoo.\n" "Electronic: Get paid automatically through a payment acquirer by requesting a transaction" " on a card saved by the customer when buying or subscribing online (payment token).\n" "Batch Deposit: Encase several customer checks at once by generating a batch deposit to" " submit to your bank. When encoding the bank statement in Odoo, you are suggested to" " reconcile the transaction with the batch deposit. Enable this option from the settings.", tracking=True ) outbound_payment_method_ids = fields.Many2many( comodel_name='account.payment.method', relation='account_journal_outbound_payment_method_rel', column1='journal_id', column2='outbound_payment_method', domain=[('payment_type', '=', 'outbound')], string='Outbound Payment Methods', compute='_compute_outbound_payment_method_ids', store=True, readonly=False, help="Manual: Pay bill by cash or any other method outside of Odoo.\n" "Check: Pay bill by check and print it from Odoo.\n" "SEPA Credit Transfer: Pay bill from a SEPA Credit Transfer file you submit to your" " bank. 
Enable this option from the settings.", tracking=True ) at_least_one_inbound = fields.Boolean(compute='_methods_compute', store=True, tracking=True) at_least_one_outbound = fields.Boolean(compute='_methods_compute', store=True, tracking=True) profit_account_id = fields.Many2one( comodel_name='account.account', check_company=True, help="Used to register a profit when the ending balance of a cash register differs from what the system computes", string='Profit Account', domain=lambda self: "[('deprecated', '=', False), ('company_id', '=', company_id), \ ('user_type_id.type', 'not in', ('receivable', 'payable')), \ ('user_type_id', 'in', %s)]" % [self.env.ref('account.data_account_type_revenue').id, self.env.ref('account.data_account_type_other_income').id], tracking=True) loss_account_id = fields.Many2one( comodel_name='account.account', check_company=True, help="Used to register a loss when the ending balance of a cash register differs from what the system computes", string='Loss Account', domain=lambda self: "[('deprecated', '=', False), ('company_id', '=', company_id), \ ('user_type_id.type', 'not in', ('receivable', 'payable')), \ ('user_type_id', '=', %s)]" % self.env.ref('account.data_account_type_expenses').id, tracking=True) # Bank journals fields company_partner_id = fields.Many2one('res.partner', related='company_id.partner_id', string='Account Holder', readonly=True, store=False) bank_account_id = fields.Many2one('res.partner.bank', string="Bank Account", ondelete='restrict', copy=False, check_company=True, domain="[('partner_id','=', company_partner_id), '|', ('company_id', '=', False), ('company_id', '=', company_id)]", tracking=True) bank_statements_source = fields.Selection(selection=_get_bank_statements_available_sources, string='Bank Feeds', default='undefined', help="Defines how the bank statements will be registered", tracking=True) bank_acc_number = fields.Char(related='bank_account_id.acc_number', readonly=False, tracking=True) bank_id = fields.Many2one('res.bank', related='bank_account_id.bank_id', readonly=False, tracking=True) # Sale journals fields sale_activity_type_id = fields.Many2one('mail.activity.type', string='Schedule Activity', default=False, help="Activity will be automatically scheduled on payment due date, improving collection process.", tracking=True) sale_activity_user_id = fields.Many2one('res.users', string="Activity User", help="Leave empty to assign the Salesperson of the invoice.", tracking=True) sale_activity_note = fields.Text('Activity Summary', tracking=True) # alias configuration for journals alias_id = fields.Many2one('mail.alias', string='Email Alias', help="Send one separate email for each invoice.\n\n" "Any file extension will be accepted.\n\n" "Only PDF and XML files will be interpreted by Odoo", copy=False, tracking=True) alias_domain = fields.Char('Alias domain', compute='_compute_alias_domain', default=_default_alias_domain, compute_sudo=True, tracking=True) alias_name = fields.Char('Alias Name', copy=False, related='alias_id.alias_name', help="It creates draft invoices and bills by sending an email.", readonly=False, tracking=True) journal_group_ids = fields.Many2many('account.journal.group', domain="[('company_id', '=', company_id)]", check_company=True, string="Journal Groups", tracking=True) secure_sequence_id = fields.Many2one('ir.sequence', help='Sequence to use to ensure the securisation of data', check_company=True, readonly=True, copy=False, tracking=True) Thanks I've been stuck for weeks and already tried manipulating but none 
of them are working
[ "1- Perhaps you should try not to use for rec in self:\nvals = {'acc_number': self.acc_number, 'bank_id': self.bank_id.id, e.t.c} \nself.env['account.journal'].create(vals)\n\n2- Rename the second values. It might help\nI am not sure when you create a new record in account.journal you are covering all required fields.\n", "To create account.journal record with bank type you need at least the below required fields (name, code, type, company_id).\nThe code field in journal should be unique with size of 5 chars. I have added code field to your custom model and use it in create method.\nclass BankAccounAccount(models.Model):\n _name = 'bank.account.account'\n _description = \"Bank Account Account\"\n _rec_name = 'acc_number'\n\n acc_number = fields.Char(string=\"Account Number\", required=True)\n bank_id = fields.Many2one('res.bank', string=\"Bank\")\n bank_bic = fields.Char(string=\"Bank Identifier Code\")\n company_id = fields.Many2one('res.company', string=\"Company\", default=lambda self: self.env.user.company_id.id)\n branch_id = fields.Many2one('res.branch', string=\"Branch\")\n code = fields.Char(string='Short Code', size=5, required=True); # You need to make sure that the entered code not already used by another journal\n \n @api.model\n def create(self, values):\n res = super(BankAccounAccount, self).create(values)\n for rec in res:\n values = {\n 'company_id': rec.company_id.id,\n 'name': rec.acc_number,\n 'type':'bank',\n 'code': rec.code,\n #'acc_number': rec.acc_number.id, # the acc_number field is not exists in 'account.journal' so you canot use it\n #'bank_id': rec.bank_id.id, # this bank_id field is related to bank_account_id.bank_id and bank_account_id is Many2One from 'res.partner.bank' \n #'bank_bic': rec.bank_bic.id, # the bank_bic field is not exists in 'account.journal' so you canot use it\n #'branch_id': rec.branch_id.id, # the branch_id field is not exists in 'account.journal' unless \n }\n self.env['account.journal'].create(values) # make sure that the user you are using has access to account.journal unless use the below commented code\n self.env['account.journal'].sudo().create(values)\n return res\n\n\n" ]
[ 0, -1 ]
[]
[]
[ "odoo", "odoo_14", "python" ]
stackoverflow_0074528459_odoo_odoo_14_python.txt
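A hedged sketch of the fix both answers converge on: iterating over the created records (res, not self) and adding the duplicate-code guard the second answer mentions. Model and field names follow the question's Odoo 14 setup; the ValidationError and the reduced field set are my additions for illustration:

from odoo import api, fields, models
from odoo.exceptions import ValidationError

class BankAccounAccount(models.Model):
    _name = 'bank.account.account'
    _rec_name = 'acc_number'

    acc_number = fields.Char(string="Account Number", required=True)
    code = fields.Char(string="Short Code", size=5, required=True)
    company_id = fields.Many2one('res.company',
                                 default=lambda self: self.env.company)

    @api.model
    def create(self, values):
        res = super().create(values)
        Journal = self.env['account.journal'].sudo()
        for rec in res:
            # account.journal requires a unique short code per company.
            if Journal.search_count([('code', '=', rec.code),
                                     ('company_id', '=', rec.company_id.id)]):
                raise ValidationError(
                    'Journal code %s is already in use.' % rec.code)
            Journal.create({
                'name': rec.acc_number,   # name, code, type, company_id are
                'code': rec.code,         # the minimum required fields
                'type': 'bank',
                'company_id': rec.company_id.id,
            })
        return res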
Q: GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1 I am using the OpenCV package with the face_recognition package to detect faces on my laptop webcam. Whenever I run it, the code runs fine but I run into the same GStreamer error. from imutils.video import VideoStream import face_recognition import pickle import argparse import time import cv2 import imutils ap = argparse.ArgumentParser() ap.add_argument("-o", "--output", type=str, help="path to output video") ap.add_argument("-y", "--display", type=int, default=1, help="0 to prevent display of frames to screen") ap.add_argument("-d", "--detection", default="hog", type=str, help="Detection method (hog/cnn") args = vars(ap.parse_args()) print("[INFO] loading encodings...") data = pickle.load(open("encodings.p", "rb")) print("[INFO] starting video stream...") vs = VideoStream().start() writer = None time.sleep(3) while True: frame = vs.read() rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) rgb = imutils.resize(frame, width=750) r = frame.shape[1] / float(rgb.shape[1]) boxes = face_recognition.face_locations(rgb, model=args["detection"]) encodings = face_recognition.face_encodings(rgb, boxes) for encoding, locations in zip(encodings, boxes): matches = face_recognition.compare_faces(data["encodings"], encoding) name = "Unknown" names = {} if True in matches: ids = [i for (i, b) in enumerate(matches) if b] for i in ids: name = data["names"][i] names[name] = names.get(name, 0) + 1 name = max(names, key=names.get) for (top, right, bottom, left) in boxes: top = int(top * r) right = int(right * r) bottom = int(bottom * r) left = int(left * r) cv2.rectangle(frame, (left, top), (right, bottom), (255, 0, 0), 3) y = top - 15 if top - 15 > 15 else top + 15 cv2.putText(frame, name, (left, y), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (255, 0, 0), 2) if writer is None and args["output"] is not None: fourcc = cv2.VideoWriter_fourcc(*"MJPG") writer = cv2.VideoWriter( args["output"], fourcc, 20, (frame.shape[1], frame.shape[2]), True) if writer is not None: writer.write(frame) if args["display"] == 1: cv2.imshow("frame", frame) key = cv2.waitKey(1) if key == ord("q"): break cv2.destroyAllWindows() vs.stop() if writer is not None: writer.release() I can't find any problems, yet I always get this error: [ WARN:0] global /home/azazel/opencv/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1 The camera still displays and the facial recog works but what does the error signify and how can I fix it? A: This is a bug from trying to use Gstreamer with OpenCV. It is mentioned in https://github.com/opencv/opencv/issues/10324 and https://github.com/opencv/opencv/pull/14834 (fix in the second link) Essentially, it is a problem that arised due to the way Gstreamer reads in frames and the way OpenCV tracks video frames. I think it is worth a try to add the line of code and rebuild OpenCV to see if it solves the problem. A: I am using ROS noetic I came across the same problem. I solved it by installing OpenCV again. pip install opencv-python I was working on publishing cam data as a topic. 
import rospy from sensor_msgs.msg import Image from cv_bridge import CvBridge, CvBridgeError import cv2 cap=cv2.VideoCapture(0) if not cap.isOpened(): print("cannot open camera ") bridge= CvBridge() def talker(): pub=rospy.Publisher('/web_cam',Image, queue_size=1) rospy.init_node('image',anonymous=True) rate=rospy.Rate(10) while not rospy.is_shutdown(): ret,frame = cap.read() if not ret: break msg=bridge.cv2_to_imgmsg(frame,"bgr8") pub.publish(msg) if cv2.waitKey(1) & 0xFF == ord('q'): break if rospy.is_shutdown(): cap.release() talker()
GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
I am using the OpenCV package with the face_recognition package to detect faces on my laptop webcam. Whenever I run it, the code runs fine but I run into the same GStreamer error. from imutils.video import VideoStream import face_recognition import pickle import argparse import time import cv2 import imutils ap = argparse.ArgumentParser() ap.add_argument("-o", "--output", type=str, help="path to output video") ap.add_argument("-y", "--display", type=int, default=1, help="0 to prevent display of frames to screen") ap.add_argument("-d", "--detection", default="hog", type=str, help="Detection method (hog/cnn") args = vars(ap.parse_args()) print("[INFO] loading encodings...") data = pickle.load(open("encodings.p", "rb")) print("[INFO] starting video stream...") vs = VideoStream().start() writer = None time.sleep(3) while True: frame = vs.read() rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) rgb = imutils.resize(frame, width=750) r = frame.shape[1] / float(rgb.shape[1]) boxes = face_recognition.face_locations(rgb, model=args["detection"]) encodings = face_recognition.face_encodings(rgb, boxes) for encoding, locations in zip(encodings, boxes): matches = face_recognition.compare_faces(data["encodings"], encoding) name = "Unknown" names = {} if True in matches: ids = [i for (i, b) in enumerate(matches) if b] for i in ids: name = data["names"][i] names[name] = names.get(name, 0) + 1 name = max(names, key=names.get) for (top, right, bottom, left) in boxes: top = int(top * r) right = int(right * r) bottom = int(bottom * r) left = int(left * r) cv2.rectangle(frame, (left, top), (right, bottom), (255, 0, 0), 3) y = top - 15 if top - 15 > 15 else top + 15 cv2.putText(frame, name, (left, y), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (255, 0, 0), 2) if writer is None and args["output"] is not None: fourcc = cv2.VideoWriter_fourcc(*"MJPG") writer = cv2.VideoWriter( args["output"], fourcc, 20, (frame.shape[1], frame.shape[2]), True) if writer is not None: writer.write(frame) if args["display"] == 1: cv2.imshow("frame", frame) key = cv2.waitKey(1) if key == ord("q"): break cv2.destroyAllWindows() vs.stop() if writer is not None: writer.release() I can't find any problems, yet I always get this error: [ WARN:0] global /home/azazel/opencv/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1 The camera still displays and the facial recog works but what does the error signify and how can I fix it?
[ "This is a bug from trying to use Gstreamer with OpenCV. It is mentioned in https://github.com/opencv/opencv/issues/10324 and https://github.com/opencv/opencv/pull/14834 (fix in the second link)\nEssentially, it is a problem that arised due to the way Gstreamer reads in frames and the way OpenCV tracks video frames. I think it is worth a try to add the line of code and rebuild OpenCV to see if it solves the problem.\n", "I am using ROS noetic I came across the same problem. I solved it by installing OpenCV again.\npip install opencv-python \n\nI was working on publishing cam data as a topic.\nimport rospy \nfrom sensor_msgs.msg import Image \nfrom cv_bridge import CvBridge, CvBridgeError\nimport cv2\n\ncap=cv2.VideoCapture(0)\nif not cap.isOpened():\n print(\"cannot open camera \")\n\nbridge= CvBridge()\n\ndef talker():\n pub=rospy.Publisher('/web_cam',Image, queue_size=1)\n rospy.init_node('image',anonymous=True)\n rate=rospy.Rate(10)\n while not rospy.is_shutdown():\n ret,frame = cap.read()\n if not ret:\n break\n msg=bridge.cv2_to_imgmsg(frame,\"bgr8\")\n pub.publish(msg)\n\n if cv2.waitKey(1) & 0xFF == ord('q'):\n break\n\n if rospy.is_shutdown():\n cap.release()\ntalker()\n\n" ]
[ 1, 0 ]
[]
[]
[ "cv2", "face_recognition", "gstreamer", "opencv", "python" ]
stackoverflow_0063091548_cv2_face_recognition_gstreamer_opencv_python.txt
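If the warning itself is the nuisance rather than a functional failure, one common sidestep (an assumption: it depends on how your OpenCV build and camera stack are set up) is to request a specific capture backend so OpenCV does not route through GStreamer:

import cv2

# Prefer the V4L2 backend on Linux; fall back to OpenCV's default choice.
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
if not cap.isOpened():
    cap = cv2.VideoCapture(0, cv2.CAP_ANY)

ret, frame = cap.read()
print('frame grabbed:', ret)
cap.release()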
Q: Django: Question regarding queries over a junction table/intermediary model My question concerns the many-to-many section of the django models docs: It is mentioned there that by using an intermediary model it is possible to query on the intermediary model's attributes like so: Person.objects.filter( group__name='The Beatles', membership__date_joined__gt=date(1961,1,1)) However for the other many-to-many model (Group) a similar query results in a FieldError: # which groups have a person name containing 'Paul'? Group.objects.filter(person__name__contains="Paul") Yet queries that reference the junction table explicity do work: Person.objects.filter(membership__group__name__contains="The Beatles") Group.objects.filter(membership__person__name__contains="Paul") Shouldn't Group therefore have access to Person via the junction model? Models: from django.db import models class Person(models.Model): name = models.CharField(max_length=128) def __str__(self): return self.name class Group(models.Model): name = models.CharField(max_length=128) members = models.ManyToManyField(Person, through='Membership') def __str__(self): return self.name class Membership(models.Model): person = models.ForeignKey(Person, on_delete=models.CASCADE) group = models.ForeignKey(Group, on_delete=models.CASCADE) date_joined = models.DateField() invite_reason = models.CharField(max_length=64) A: "The model that defines the ManyToManyField uses the attribute name of that field itself, whereas the “reverse” model uses the lowercased model name of the original model, plus '_set' (just like reverse one-to-many relationships)." (docs: Many-to-many relationships) So instead of Group.objects.filter(person__name__contains="Paul") the correct query is Group.objects.filter(members__name__contains="Paul") since the related model is accessible via the name of the field attribute (not the model).
Django: Question regarding queries over a junction table/intermediary model
My question concerns the many-to-many section of the Django models docs:
It is mentioned there that by using an intermediary model it is possible to query on the intermediary model's attributes like so:
Person.objects.filter(
    group__name='The Beatles',
    membership__date_joined__gt=date(1961,1,1))

However, for the other many-to-many model (Group) a similar query results in a FieldError:
# which groups have a person name containing 'Paul'?
Group.objects.filter(person__name__contains="Paul")

Yet queries that reference the junction table explicitly do work:
Person.objects.filter(membership__group__name__contains="The Beatles")
Group.objects.filter(membership__person__name__contains="Paul")

Shouldn't Group therefore have access to Person via the junction model?
Models:
from django.db import models

class Person(models.Model):
    name = models.CharField(max_length=128)

    def __str__(self):
        return self.name

class Group(models.Model):
    name = models.CharField(max_length=128)
    members = models.ManyToManyField(Person, through='Membership')

    def __str__(self):
        return self.name

class Membership(models.Model):
    person = models.ForeignKey(Person, on_delete=models.CASCADE)
    group = models.ForeignKey(Group, on_delete=models.CASCADE)
    date_joined = models.DateField()
    invite_reason = models.CharField(max_length=64)
[ "\n\"The model that defines the ManyToManyField uses the attribute name of\nthat field itself, whereas the “reverse” model uses the lowercased\nmodel name of the original model, plus '_set' (just like reverse\none-to-many relationships).\" (docs: Many-to-many relationships)\n\nSo instead of\nGroup.objects.filter(person__name__contains=\"Paul\")\n\nthe correct query is\nGroup.objects.filter(members__name__contains=\"Paul\")\n\nsince the related model is accessible via the name of the field attribute (not the model).\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074411349_django_python.txt
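A short sketch (assuming the Person/Group/Membership models from the question, run in a Django shell) showing the equivalent entry points into the same relation, including the instance-level reverse manager the quoted docs describe:

# from myapp.models import Person, Group   (the app name is illustrative)

# Forward, via the ManyToManyField's attribute name on Group:
groups_with_paul = Group.objects.filter(members__name__contains="Paul")

# Reverse, via the lowercased related model name on Person:
beatles_members = Person.objects.filter(group__name="The Beatles")

# Explicitly through the junction model, as in the question:
same_groups = Group.objects.filter(membership__person__name__contains="Paul")

# On an instance, the reverse manager is <modelname>_set:
person = Person.objects.filter(name__contains="Paul").first()
if person is not None:
    print(person.group_set.all())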
Q: Cannot update sklearn on Jupyter notebook I am on a Sagemaker Jupyter notebook and I need to use version 0.22 or above to train and pickle my model. However, I cannot update the version of sklearn. Updating via pip !pip3 install sklearn --upgrade Output: WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip. Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue. To avoid this problem you can invoke Python with '-m pip' instead of running pip directly. Requirement already up-to-date: sklearn in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (0.0) Requirement already satisfied, skipping upgrade: scikit-learn in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from sklearn) (0.22.1) Requirement already satisfied, skipping upgrade: numpy>=1.11.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from scikit-learn->sklearn) (1.18.1) Requirement already satisfied, skipping upgrade: joblib>=0.11 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from scikit-learn->sklearn) (0.15.1) Requirement already satisfied, skipping upgrade: scipy>=0.17.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from scikit-learn->sklearn) (1.4.1) import sklearn print('The scikit-learn version is {}.'.format(sklearn.__version__)) # The scikit-learn version is 0.20.3. <---- still 0.20 Updating via Conda !conda update scikit-learn -y or !conda update -n base scikit-learn -y Output: Solving environment: done ==> WARNING: A newer version of conda exists. <== current version: 4.5.12 latest version: 4.8.3 Please update conda by running $ conda update -n base -c defaults conda # All requested packages already installed. import sklearn print('The scikit-learn version is {}.'.format(sklearn.__version__)) # The scikit-learn version is 0.20.3. <---- still 0.20 I have also run conda update -n base -c defaults conda or conda update all but still getting the same version. A: From the error message it seems the following should work: python3 -m pip3 install --upgrade sklearn
Cannot update sklearn on Jupyter notebook
I am on a Sagemaker Jupyter notebook and I need to use version 0.22 or above to train and pickle my model. However, I cannot update the version of sklearn. Updating via pip !pip3 install sklearn --upgrade Output: WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip. Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue. To avoid this problem you can invoke Python with '-m pip' instead of running pip directly. Requirement already up-to-date: sklearn in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (0.0) Requirement already satisfied, skipping upgrade: scikit-learn in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from sklearn) (0.22.1) Requirement already satisfied, skipping upgrade: numpy>=1.11.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from scikit-learn->sklearn) (1.18.1) Requirement already satisfied, skipping upgrade: joblib>=0.11 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from scikit-learn->sklearn) (0.15.1) Requirement already satisfied, skipping upgrade: scipy>=0.17.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from scikit-learn->sklearn) (1.4.1) import sklearn print('The scikit-learn version is {}.'.format(sklearn.__version__)) # The scikit-learn version is 0.20.3. <---- still 0.20 Updating via Conda !conda update scikit-learn -y or !conda update -n base scikit-learn -y Output: Solving environment: done ==> WARNING: A newer version of conda exists. <== current version: 4.5.12 latest version: 4.8.3 Please update conda by running $ conda update -n base -c defaults conda # All requested packages already installed. import sklearn print('The scikit-learn version is {}.'.format(sklearn.__version__)) # The scikit-learn version is 0.20.3. <---- still 0.20 I have also run conda update -n base -c defaults conda or conda update all but still getting the same version.
[ "From the error message it seems the following should work:\npython3 -m pip3 install --upgrade sklearn\n\n" ]
[ 2 ]
[ "pip install sklearn --upgrade \n\nor\npip install sklearn -U\n\n" ]
[ -2 ]
[ "amazon_sagemaker", "jupyter_notebook", "python", "scikit_learn" ]
stackoverflow_0062185684_amazon_sagemaker_jupyter_notebook_python_scikit_learn.txt
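One more detail worth adding to this exchange: on SageMaker (and Jupyter generally), the pip on PATH is not always the one backing the running kernel, which is exactly how a 0.22 install can coexist with a kernel that still reports 0.20.3. A hedged sketch that upgrades through the kernel's own interpreter:

import sys
import subprocess

# Install into the interpreter this kernel is actually running on.
subprocess.check_call([sys.executable, '-m', 'pip', 'install',
                       '--upgrade', 'scikit-learn'])

# Restart the kernel afterwards, then verify:
import sklearn
print(sklearn.__version__)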
Q: Problem with multiple decimal points (Python) I have having a bit of a problem: I am trying to convert these numbers: -0.2179, -8.742.754.508, 1.698.516.678, to -0.22, -8.74, 1.70, But I am really not sure how I do this, when the number of decimal points is different? I have tried .split('.') but its difficult with changing decimal points. I was wondering if you guys had any pointers for this small problem? Kind regard. for number in data.fundreturn: new_number = number.split('.')[0] fund.append(new_number) for number in data.bitcoinreturn: new_number = number.split('.')[0] bitcoin.append(new_number) but then I get 0, 8, and 1 The code snippet basically is me going through each column and trying to covert the values. A: If I understood your problem correctly, the split function would be enough as a solution if used like this: data = ["-0.2179", "-8.742.754.508", "1.698.516.678"] fund = [] for number in data: split = number.split('.') integer_part = split[0] fractional_part = ''.join([split[i] for i in range(1, len(split))]) new_number = float('.'.join((integer_part, fractional_part))) # rounding to two decimal points new_number = round(new_number, 2) fund.append(new_number) print(fund) A: This solution takes into account the decimal part after the first dot and ignores the rest of the string. def convert(number: str) -> float: n = number.split(".") return float(n[0]) if len(n)<2 else float(f"{n[0]}.{n[1]}") convert("32") # 32.0 convert("-8.742.754.508") # -8.742
Problem with multiple decimal points (Python)
I am having a bit of a problem: I am trying to convert these numbers:
-0.2179, -8.742.754.508, 1.698.516.678,
to
-0.22, -8.74, 1.70.
But I am really not sure how to do this when the number of decimal points varies. I have tried .split('.') but it's difficult with a varying number of decimal points. I was wondering if you had any pointers for this small problem? Kind regards.
for number in data.fundreturn:
    new_number = number.split('.')[0]
    fund.append(new_number)

for number in data.bitcoinreturn:
    new_number = number.split('.')[0]
    bitcoin.append(new_number)

but then I get 0, 8, and 1. The code snippet is basically me going through each column and trying to convert the values.
[ "If I understood your problem correctly, the split function would be enough as a solution if used like this:\ndata = [\"-0.2179\", \"-8.742.754.508\", \"1.698.516.678\"]\nfund = []\nfor number in data:\n split = number.split('.')\n integer_part = split[0]\n fractional_part = ''.join([split[i] for i in range(1, len(split))])\n new_number = float('.'.join((integer_part, fractional_part)))\n # rounding to two decimal points\n new_number = round(new_number, 2)\n fund.append(new_number)\n\nprint(fund)\n\n", "This solution takes into account the decimal part after the first dot and ignores the rest of the string.\ndef convert(number: str) -> float:\n n = number.split(\".\")\n return float(n[0]) if len(n)<2 else float(f\"{n[0]}.{n[1]}\")\n\nconvert(\"32\") # 32.0\nconvert(\"-8.742.754.508\") # -8.742\n\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074543069_python.txt
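A third compact route, complementary to the two answers: str.partition splits on the first dot only, which keeps sign handling trivial and matches the question's expected outputs:

def to_rounded(number: str, ndigits: int = 2) -> float:
    head, _, tail = number.partition('.')
    # Keep only the digits up to the next dot, if any; '0' covers integers.
    frac = tail.split('.')[0] or '0'
    return round(float(f"{head}.{frac}"), ndigits)

values = ["-0.2179", "-8.742.754.508", "1.698.516.678"]
print([to_rounded(v) for v in values])  # [-0.22, -8.74, 1.7]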
Q: What's the difference between heapq and PriorityQueue in python? In python there's a built-in heapq algorithm that gives you push, pop, nlargest, nsmallest... etc that you can apply to lists. However, there's also the queue.PriorityQueue class that seems to support more or less the same functionality. What's the difference, and when would you use one over the other? A: Queue.PriorityQueue is a thread-safe class, while the heapq module makes no thread-safety guarantees. From the Queue module documentation: The Queue module implements multi-producer, multi-consumer queues. It is especially useful in threaded programming when information must be exchanged safely between multiple threads. The Queue class in this module implements all the required locking semantics. It depends on the availability of thread support in Python; see the threading module. The heapq module offers no locking, and operates on standard list objects, which are not meant to be thread-safe. In fact, the PriorityQueue implementation uses heapq under the hood to do all prioritisation work, with the base Queue class providing the locking to make this thread-safe. See the source code for details. This makes the heapq module faster; there is no locking overhead. In addition, you are free to use the various heapq functions in different, novel ways, the PriorityQueue only offers the straight-up queueing functionality. A: queue.PriorityQueue is a partial wrapper around the heapq class. In other words, a queue.PriorityQueue is actually a heapq, placed in the queue module with a couple of renamed methods to make the heapq easier to use, much like a regular queue. In heapq, you use use the method heappush() to add a new item and the method heappop() to remove one. That is not very queue-like, so queue.PriorityQueue let you use the usual queue methods such as put and get to do the same thing. There are some features of heapq that are not carried over into queue.PriorityQueue, such as heappushpop() and heapreplace(), but you are less likely to use those. If you need them (and I do in my current project), perhaps you should use heapq rather than queue.PriorityQueue. Also, since heapq is specialized for its purpose, it is not thread safe (as noted in another answer here.) A: Yes, I have found the code. It is clear that PriorityQueue uses heapq. class PriorityQueue(Queue): '''Variant of Queue that retrieves open entries in priority order (lowest first). Entries are typically tuples of the form: (priority number, data). ''' def _init(self, maxsize): self.queue = [] def _qsize(self): return len(self.queue) def _put(self, item): heappush(self.queue, item) def _get(self): return heappop(self.queue)
What's the difference between heapq and PriorityQueue in python?
In python there's a built-in heapq algorithm that gives you push, pop, nlargest, nsmallest... etc that you can apply to lists. However, there's also the queue.PriorityQueue class that seems to support more or less the same functionality. What's the difference, and when would you use one over the other?
[ "Queue.PriorityQueue is a thread-safe class, while the heapq module makes no thread-safety guarantees. From the Queue module documentation:\n\nThe Queue module implements multi-producer, multi-consumer queues. It is especially useful in threaded programming when information must be exchanged safely between multiple threads. The Queue class in this module implements all the required locking semantics. It depends on the availability of thread support in Python; see the threading module.\n\nThe heapq module offers no locking, and operates on standard list objects, which are not meant to be thread-safe.\nIn fact, the PriorityQueue implementation uses heapq under the hood to do all prioritisation work, with the base Queue class providing the locking to make this thread-safe. See the source code for details.\nThis makes the heapq module faster; there is no locking overhead. In addition, you are free to use the various heapq functions in different, novel ways, the PriorityQueue only offers the straight-up queueing functionality.\n", "queue.PriorityQueue is a partial wrapper around the heapq class.\nIn other words, a queue.PriorityQueue is actually a heapq, placed in the queue module with a couple of renamed methods to make the heapq easier to use, much like a regular queue.\nIn heapq, you use use the method heappush() to add a new item and the method heappop() to remove one. That is not very queue-like, so queue.PriorityQueue let you use the usual queue methods such as put and get to do the same thing.\nThere are some features of heapq that are not carried over into queue.PriorityQueue, such as heappushpop() and heapreplace(), but you are less likely to use those. If you need them (and I do in my current project), perhaps you should use heapq rather than queue.PriorityQueue.\nAlso, since heapq is specialized for its purpose, it is not thread safe (as noted in another answer here.)\n", "Yes, I have found the code. It is clear that PriorityQueue uses heapq.\nclass PriorityQueue(Queue):\n '''Variant of Queue that retrieves open entries in priority order (lowest first).\n\n Entries are typically tuples of the form: (priority number, data).\n '''\n\n def _init(self, maxsize):\n self.queue = []\n\n def _qsize(self):\n return len(self.queue)\n\n def _put(self, item):\n heappush(self.queue, item)\n\n def _get(self):\n return heappop(self.queue)\n\n" ]
[ 110, 22, 0 ]
[]
[]
[ "data_structures", "heap", "priority_queue", "python" ]
stackoverflow_0036991716_data_structures_heap_priority_queue_python.txt
Q: Training MNIST by loading my own img (with label of answer, load img 6 tell AI is 6) I have read through the following discussion (it does not say how to load pictures into the MNIST database): MNIST trained network tested with my own samples. I am also planning to train my own MNIST model by inputting images, but most of the tutorials don't teach how to load our personal images (with the answer, to teach the AI to recognize them), such as loading all "5" images into the MNIST database and teaching it that they are the number 5. How can we do so? The following .py script trains on MNIST's own database (credit to student_DC), then recognizes, but the accuracy result is not ideal (about 10%), so I want to train on my sample images too before doing the MNIST step. But if I store the images in a local folder, how do I load them into the MNIST model for training? Script that, after training (both train and test from the MNIST image database), can load my own image to predict: import keras from keras.datasets import mnist import matplotlib.pyplot as plt import PIL from PIL import Image (train_images,train_labels),(test_images,test_labels) = mnist.load_data() train_images.shape len(train_labels) train_labels test_images.shape len(test_labels) test_labels from keras import models from keras import layers network = models.Sequential() network.add(layers.Dense(512,activation='relu',input_shape=(28*28,))) network.add(layers.Dense(10,activation='softmax')) network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) train_images = train_images.reshape((60000,28*28)) train_images = train_images.astype('float32')/255 test_images = test_images.reshape((10000,28*28)) test_images = test_images.astype('float32')/255 from keras.utils import to_categorical train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) network.fit(train_images,train_labels,epochs= 3 ,batch_size=128) test_loss , test_acc = network.evaluate(test_images,test_labels) print('test_acc:',test_acc) network.save('m_lenet.h5') ######### import numpy as np from keras.models import load_model import matplotlib.pyplot as plt from PIL import Image model = load_model('/content/m_lenet.h5') picPath = '/content/02_a.png' img = Image.open(picPath) reIm = img.resize((28,28),Image.ANTIALIAS) plt.imshow(reIm) plt.savefig('/content/result.png') im1 = np.array(reIm.convert("L")) im1 = im1.reshape((1,28*28)) im1 = im1.astype('float32')/255 # predict = model.predict_classes(im1) predict_x=model.predict(im1) classes_x=np.argmax(predict_x,axis=1) print ("---------------------------------") print ('predict as:') print (predict_x) print ("") print ("") print ('predict number as:') print (classes_x) print ("---------------------------------") print ("Original img : ") The sample image screenshot: (screenshot omitted). By expert suggestion: passing images to model.fit (in my case, network.fit(train_images,train_labels,epochs= 3 ,batch_size=128)) and letting the AI train on my images should logically solve the problem. I am stuck on how to create an array of training images and an array of corresponding labels for now; searching online, I don't find a similar tutorial (on the topic: training MNIST by loading my own images).
A: Credit to this site; this seems to be what you want for loading your own images to train: https://blog.tanka.la/2018/10/28/build-the-mnist-model-with-your-own-handwritten-digits-using-tensorflow-keras-and-python/ from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense from keras.layers import Dropout from keras.layers import Flatten from keras.layers.convolutional import Conv2D from keras.layers.convolutional import MaxPooling2D from keras.optimizers import Adam from keras.utils import np_utils from PIL import Image import numpy as np import os import matplotlib.pyplot as plt # load data (X_train, y_train), (X_test, y_test) = mnist.load_data() # Reshaping to format which CNN expects (batch, height, width, channels) X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], X_train.shape[2], 1).astype('float32') X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], X_test.shape[2], 1).astype('float32') # To load images to features and labels def load_images_to_data(image_label, image_directory, features_data, label_data): list_of_files = os.listdir(image_directory) for file in list_of_files: image_file_name = os.path.join(image_directory, file) if ".png" in image_file_name: img = Image.open(image_file_name).convert("L") img = np.resize(img, (28,28,1)) im2arr = np.array(img) im2arr = im2arr.reshape(1,28,28,1) features_data = np.append(features_data, im2arr, axis=0) label_data = np.append(label_data, [image_label], axis=0) return features_data, label_data # Load your own images to training and test data X_train, y_train = load_images_to_data('1', '/content/01', X_train, y_train) X_test, y_test = load_images_to_data('1', '/content/01', X_test, y_test) # normalize inputs from 0-255 to 0-1 X_train/=255 X_test/=255 # one hot encode number_of_classes = 10 y_train = np_utils.to_categorical(y_train, number_of_classes) y_test = np_utils.to_categorical(y_test, number_of_classes) # create model model = Sequential() model.add(Conv2D(32, (5, 5), input_shape=(X_train.shape[1], X_train.shape[2], 1), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(32, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.5)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(number_of_classes, activation='softmax')) # Compile model model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy']) # Fit the model model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=1, batch_size=200) img = Image.open('/content/01/userlmn_01.png').convert("L") plt.imshow(img) img = np.resize(img, (28,28,1)) im2arr = np.array(img) im2arr = im2arr.reshape(1,28,28,1) # y_pred = model.predict_classes(im2arr) predict_x=model.predict(im2arr) classes_x=np.argmax(predict_x,axis=1) print(classes_x)
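The sticking point named in the question, building an array of training images plus a matching array of labels, can also be handled in one pass if the images are sorted into folders named after their digit (a hypothetical layout such as digits/5/img001.png). A minimal sketch under that assumption, separate from the linked blog's code:

import os
import numpy as np
from PIL import Image

def load_labeled_folder(root):
    # Build (images, labels) arrays from root/<digit>/<file>.png folders
    images, labels = [], []
    for label in sorted(os.listdir(root)):
        folder = os.path.join(root, label)
        if not os.path.isdir(folder):
            continue
        for name in os.listdir(folder):
            if name.endswith(".png"):
                img = Image.open(os.path.join(folder, name)).convert("L").resize((28, 28))
                images.append(np.array(img).reshape(28, 28, 1))
                labels.append(int(label))  # folder name doubles as the label
    return np.array(images, dtype="float32") / 255.0, np.array(labels)

# X_own, y_own = load_labeled_folder("digits")  # then concatenate with the MNIST arrays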
Training MNIST by loading my own img (with label of answer, load img 6 tell AI is 6)
I have read through the following discussion (not saying how to load pic to MNIST database) MNIST trained network tested with my own samples I also planning to train my own mnist by input img, but most of the tutorial doen't teach how to load our personal img (with answer, teach AI to reconize) such as load all img "5" image into MNIST database, and teach them that number 5 how can we do so? the following .py script is training by MNIST own database (credit by student_DC), then reconize, but accuracy result is not ideal (about 10%), so I come up training my samply img too, before doing the MNIST but if I store the img on local file how to load them to MNIST model to train? script after trained (both train and test from MNIST img databse) can load my own img to pridict import keras from keras.datasets import mnist import matplotlib.pyplot as plt import PIL from PIL import Image (train_images,train_labels),(test_images,test_labels) = mnist.load_data() train_images.shape len(train_labels) train_labels test_images.shape len(test_labels) test_labels from keras import models from keras import layers network = models.Sequential() network.add(layers.Dense(512,activation='relu',input_shape=(28*28,))) network.add(layers.Dense(10,activation='softmax')) network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) train_images = train_images.reshape((60000,28*28)) train_images = train_images.astype('float32')/255 test_images = test_images.reshape((10000,28*28)) test_images = test_images.astype('float32')/255 from keras.utils import to_categorical train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) network.fit(train_images,train_labels,epochs= 3 ,batch_size=128) test_loss , test_acc = network.evaluate(test_images,test_labels) print('test_acc:',test_acc) network.save('m_lenet.h5') ######### import numpy as np from keras.models import load_model import matplotlib.pyplot as plt from PIL import Image model = load_model('/content/m_lenet.h5') picPath = '/content/02_a.png' img = Image.open(picPath) reIm = img.resize((28,28),Image.ANTIALIAS) plt.imshow(reIm) plt.savefig('/content/result.png') im1 = np.array(reIm.convert("L")) im1 = im1.reshape((1,28*28)) im1 = im1.astype('float32')/255 # predict = model.predict_classes(im1) predict_x=model.predict(im1) classes_x=np.argmax(predict_x,axis=1) print ("---------------------------------") print ('predict as:') print (predict_x) print ("") print ("") print ('predict number as:') print (classes_x) print ("---------------------------------") print ("Original img : ") the sample img screenshot: by export suggestion: to pass img to model.fit , that mine network.fit(train_images,train_labels,epochs= 3 ,batch_size=128) and let AI train my img logically can solve the problem I am stocking on how to create an array of train images and an array of corresponding labels for now, I search online don't get similar tutorial (with topic: Training MNIST by loading my own img )
[ "cridet by this site, seems this is what you what for loading your own img to train\nhttps://blog.tanka.la/2018/10/28/build-the-mnist-model-with-your-own-handwritten-digits-using-tensorflow-keras-and-python/\nfrom keras.datasets import mnist\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import Dropout\nfrom keras.layers import Flatten\nfrom keras.layers.convolutional import Conv2D\nfrom keras.layers.convolutional import MaxPooling2D\nfrom keras.optimizers import Adam\nfrom keras.utils import np_utils\nfrom PIL import Image\nimport numpy as np\nimport os\nimport matplotlib.pyplot as plt\n\n# load data\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n\n\n# Reshaping to format which CNN expects (batch, height, width, channels)\nX_train = X_train.reshape(X_train.shape[0], X_train.shape[1], X_train.shape[2], 1).astype('float32')\nX_test = X_test.reshape(X_test.shape[0], X_test.shape[1], X_test.shape[2], 1).astype('float32')\n\n\n# To load images to features and labels\ndef load_images_to_data(image_label, image_directory, features_data, label_data):\n list_of_files = os.listdir(image_directory)\n for file in list_of_files:\n image_file_name = os.path.join(image_directory, file)\n if \".png\" in image_file_name:\n img = Image.open(image_file_name).convert(\"L\")\n img = np.resize(img, (28,28,1))\n im2arr = np.array(img)\n im2arr = im2arr.reshape(1,28,28,1)\n features_data = np.append(features_data, im2arr, axis=0)\n label_data = np.append(label_data, [image_label], axis=0)\n return features_data, label_data\n\n\n# Load your own images to training and test data\nX_train, y_train = load_images_to_data('1', '/content/01', X_train, y_train)\nX_test, y_test = load_images_to_data('1', '/content/01', X_test, y_test)\n\n# normalize inputs from 0-255 to 0-1\nX_train/=255\nX_test/=255\n\n\n# one hot encode\nnumber_of_classes = 10\ny_train = np_utils.to_categorical(y_train, number_of_classes)\ny_test = np_utils.to_categorical(y_test, number_of_classes)\n\n# create model\nmodel = Sequential()\nmodel.add(Conv2D(32, (5, 5), input_shape=(X_train.shape[1], X_train.shape[2], 1), activation='relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Conv2D(32, (3, 3), activation='relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Dropout(0.5))\nmodel.add(Flatten())\nmodel.add(Dense(128, activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(number_of_classes, activation='softmax'))\n\n# Compile model\nmodel.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])\n\n# Fit the model\nmodel.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=1, batch_size=200)\n\n\nimg = Image.open('/content/01/userlmn_01.png').convert(\"L\")\nplt.imshow(img)\nimg = np.resize(img, (28,28,1))\n\nim2arr = np.array(img)\nim2arr = im2arr.reshape(1,28,28,1)\n\n\n# y_pred = model.predict_classes(im2arr)\n\npredict_x=model.predict(im2arr) \nclasses_x=np.argmax(predict_x,axis=1)\n\nprint(classes_x)\n\n\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "keras", "numpy", "python", "tensorflow" ]
stackoverflow_0074528059_deep_learning_keras_numpy_python_tensorflow.txt
Q: Java Python Integration I have a Java app that needs to integrate with a 3rd party library. The library is written in Python, and I don't have any say over that. I'm trying to figure out the best way to integrate with it. I'm trying out JEPP (Java Embedded Python) - has anyone used that before? My other thought is to use JNI to communicate with the C bindings for Python. Any thoughts on the best way to do this would be appreciated. Thanks. A: Why not use Jython? The only downside I can immediately think of is if your library uses CPython native extensions. EDIT: If you can use Jython now but think you may have problems with a later version of the library, I suggest you try to isolate the library from your app (e.g. some sort of adapter interface). Go with the simplest thing that works for the moment, then consider JNI/CPython/etc if and when you ever need to. There's little to be gained by going the (painful) JNI route unless you really have to. A: Frankly, most ways to somehow run Python directly from within the JVM don't work. They are either not-quite-compatible (a new release of your third-party library can use Python 2.6 features and will not work with Jython 2.5) or hacky (they will break with a cryptic JVM stacktrace that doesn't really lead to a solution). My preferred way to integrate the two would use RPC. XML-RPC is not a bad choice here, if you have moderate amounts of data. It is pretty well supported — Python has it in its standard library. Java libraries are also easy to find. Now, depending on your setup, either the Java or the Python part would be a server accepting connections from the other language. A less popular but worth-considering alternative way to do RPCs is Google protobuffers, which have about two-thirds of the support needed for nice RPC. You just need to provide your transport layer. Not that much work, and the convenience of writing is reasonable. Another option is to write a C wrapper around the pieces of Python functionality that you need to expose to Java and use it via JVM native plugins. You can ease the pain by going with SWIG. Essentially in your case it works like that: Create a SWIG interface for all method calls from Java to C++. Create C/C++ code that will receive your calls and internally call the Python interpreter with the right params. Convert the response you get from Python and send it via SWIG back to your Java code. This solution is fairly complex, a bit of an overkill in most cases. Still, it is worth doing if you (for some reason) cannot afford RPCs. RPC still would be my preferred choice, though. A: Many years later, just to add an option which is more popular these days... If you need CPython functionality, py4j is a good option. py4j has seen frequent updates in 2016, 2017, 2018, 2019, and 2020 and has gained some popularity, because it is used e.g. by Apache Spark to achieve CPython interoperability. A: The best solution is to use the Python programs through a REST API. You define your services and call them. You perhaps need to learn some new modules, but you will be more flexible for future changes. Here is a small list of useful modules for this purpose: Python modules Flask Flask-SQLAlchemy Flask-Restful SQLite3 Jsonify Java modules (for calling the REST API) Jersey or Apache CXF There will be a small learning curve, but later you will get more productivity and modularity and even elasticity... A: My other thought is to use JNI to communicate with the C bindings for Python. 
I very much like JNA: JNA provides Java programs easy access to native shared libraries (DLLs on Windows) without writing anything but Java code—no JNI or native code is required. This functionality is comparable to Windows' Platform/Invoke and Python's ctypes. Access is dynamic at runtime without code generation. My 0.02$ :) A: You could use a messaging service like ActiveMQ. It has both Python and Java support. This way, you can leave the complicated JNI or C bindings as they are and deal solely with what I consider a simple interface. Moreover, when the library gets updated, you don't need to change much, if anything. A: Have you considered running Jython on the Java VM? A: I've investigated a similar setup with JNI. Maybe this will help if you haven't seen it yet: http://wiki.cacr.caltech.edu/danse/index.php/Communication_between_Java_and_Python http://jpe.sourceforge.net/ A: These are some of the tools which make it easier to bridge the gap between Python and Java: 1. Jython: Python implemented in Java. 2. JPype: Allows Python to run Java commands. 3. Jepp: Java embedded Python. 4. JCC: a C++ code generator for calling Java from C++/Python. 5. Javabridge: a package for running and interacting with the JVM from CPython. 6. py4j: Allows Python to run Java commands. 7. voc: Part of the BeeWare suite. Converts Python code to Java bytecode. 8. p2j: Converts Python code to Java. No longer developed. A: If you can get your Python code to work in Jython, then you should be able to use that to call it from Java: http://jython.sourceforge.net/cgi-bin/faqw.py?req=show&file=faq06.001.htp A: I also think that running the command line from Java wouldn't be bad practice (stackoverflow question here). Potentially share the data through some database. I like the idea of connecting the two apps via a bash pipe, but I have no practice with this, so I'm wondering how difficult it is to write the logic to handle this on both the Python and Java sides. Another productive way could be to use Remote Procedure Call (RPC), which supports procedural programming. Using RPC you can invoke methods in shared environments. As an example, you can call a function on a remote machine from the local computer using RPC. We can define RPC as a communication type in distributed systems. (Mentioned above by Marcin.) Or, a very naive way would be to communicate through a common file. But for the sake of simplicity and speed, my vote is to use a shared database / REST API / socket communication. Also, I like the XML-RPC approach, as Marcin wrote. I would recommend avoiding the complications of running Python under the JVM or via C++ bindings. Better to use today's trends, which are obviously web technologies. As a shared database, MongoDB may be a good solution, or even better, Redis as an in-memory database. A: Use pipes to communicate with Python (a Python script calls the Python library) via subprocesses; the whole process is like Java <-> pipes <-> Python. If using JNI, you should be familiar with Python's bindings (not recommended) and compiled files like *.so to work with Java. The whole process is like py -> c -> .so/.pyd -> JNI -> jar. Here is a good example of the stdio practice: https://github.com/JULIELab/java-stdio-ipc
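As a concrete illustration of the RPC route recommended in several answers, here is a minimal sketch of the Python side using only the standard library (the function name analyze and port 8000 are illustrative assumptions; the Java side could use a client library such as Apache XML-RPC):

from xmlrpc.server import SimpleXMLRPCServer

def analyze(text):
    # Call into the Python-only third-party library here;
    # upper() is just a placeholder result.
    return text.upper()

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(analyze, "analyze")
print("Python RPC bridge listening on port 8000")
server.serve_forever()

The Java app then calls analyze over HTTP, so the Python process can be developed, upgraded, and restarted independently of the JVM.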
Java Python Integration
I have a Java app that needs to integrate with a 3rd party library. The library is written in Python, and I don't have any say over that. I'm trying to figure out the best way to integrate with it. I'm trying out JEPP (Java Embedded Python) - has anyone used that before? My other thought is to use JNI to communicate with the C bindings for Python. Any thoughts on the best way to do this would be appreciated. Thanks.
[ "Why not use Jython? The only downside I can immediately think of is if your library uses CPython native extensions.\nEDIT: If you can use Jython now but think you may have problems with a later version of the library, I suggest you try to isolate the library from your app (e.g. some sort of adapter interface). Go with the simplest thing that works for the moment, then consider JNI/CPython/etc if and when you ever need to. There's little to be gained by going the (painful) JNI route unless you really have to.\n", "Frankly most ways to somehow run Python directly from within JVM don't work. They are either not-quite-compatible (new release of your third party library can use python 2.6 features and will not work with Jython 2.5) or hacky (it will break with cryptic JVM stacktrace not really leading to solution).\nMy preferred way to integrate the two would use RPC. XML RPC is not a bad choice here, if you have moderate amounts of data. It is pretty well supported — Python has it in its standard library. Java libraries are also easy to find. Now depending on your setup either Java or Python part would be a server accepting connection from other language.\nA less popular but worth considering alternative way to do RPCs is Google protobuffers, which have 2/3 of support for nice rpc. You just need to provide your transport layer. Not that much work and the convenience of writing is reasonable.\nAnother option is to write a C wrapper around that pieces of Python functionality that you need to expose to Java and use it via JVM native plugins. You can ease the pain by going with SWIG SWIG.\nEssentially in your case it works like that: \n\nCreate a SWIG interface for all method calls from Java to C++.\nCreate C/C++ code that will receive your calls and internally call python interpreter with right params.\nConvert response you get from python and send it via swig back to your Java code.\n\nThis solution is fairly complex, a bit of an overkill in most cases. Still it is worth doing if you (for some reason) cannot afford RPCs. RPC still would be my preferred choice, though.\n", "Many years later, just to add an option which is more popular these days...\nIf you need CPython functionality, py4j is a good option. py4j has seen seen frequent updates in 2016 2017 2018 2019 2020 and has gained some popularity, because it is used e.g. by Apache Spark to achieve CPython interoperability.\n", "The best solutions, is to use Python programs throw REST API. You define your services and call them. You perhaps need to learn some new modules. But you will be more flexible for futures changes.\nHere a small list of use full modules for this purpose:\nPython modules\n\nFlask\nFlask-SQLAlchemy\nFlask-Restful\nSQlite3\nJsonify\n\nJava modules (for calling rest api)\nJersey or Apache CXF\nYou will need a small Learning curve, but later you will get more productivity and modularity and even elasticity...\n", "\nMy other thought is to use JNI to communicate with the C bindings for Python.\n\nI like very much JNA:\n\nJNA provides Java programs easy access to native shared libraries (DLLs on Windows) without writing anything but Java code—no JNI or native code is required. This functionality is comparable to Windows' Platform/Invoke and Python's ctypes. Access is dynamic at runtime without code generation.\n\nMy 0.02$ :)\n", "You could use a messaging service like ActiveMQ. It has both Python and Java support. 
This way, you can leave the complicated JNI or C bindings as they are and deal solely with what I consider a simple interface. Moreover, when the library gets updated, you don't need to change much, if anything.\n", "Have you considered running Jython on the Java VM?\n", "I've investigated a similar setup with JNI. Maybe this will help if haven't seen it yet:\nhttp://wiki.cacr.caltech.edu/danse/index.php/Communication_between_Java_and_Python\nhttp://jpe.sourceforge.net/ \n", "These are some of the tools which make it easier to bridge the gap between Python and Java:\n1.Jython\n Python implemented in Java\n2.JPype\n Allows Python to run java commands\n3.Jepp\n Java embedded Python\n4.JCC\n a C++ code generator for calling Java from C++/Python\n5.Javabridge\n a package for running and interacting with the JVM from CPython\n6.py4j\n Allows Python to run java commands.\n7.voc\n Part of BeeWare suite. Converts python code to Java bytecode.\n8.p2j\n Converts Python code to Java. No longer developed.\n", "If you can get your Python code to work in Jython, then you should be able to use that to call it from Java:\n\nhttp://jython.sourceforge.net/cgi-bin/faqw.py?req=show&file=faq06.001.htp\n\n", "I also think that run command line in the Java wouldn't by bad practice (stackoverflow question here).\nPotentially share the data through some database.\nI like the way to connect two apps via the bash pipe, but I have not practice in this, so I'm wondering how difficult is to write logic to handle this on both python/java sides.\nOr other productive way could be to use the Remote Procedure Call (RPC) which supports procedural programming. Using RPC you can invokes methods in shared environments. As an example you can call a function in a remote machine from the local computer using RPC. We can define RPC as a communication type in distributed systems.\n(mentioned above by Marcin)\nOr, very naive way would to be to communicate by the common file. But for sake of simplicity and speed, my vote is to use the shared database x rest API x socket communication.\nAlso I like the XML RPC as Marcin wrote.\n\nI would like to recommend to avoid any complication to run Python under JVM or C++ binding. Better to use today trends which are obviously web technologies.\n\nAs a shared database the MongoDB may be good solution or even better the Redis as in memory database.\n", "Use pipes to communicate with python(a python script calls python library) via subprocesses, the whole process is like java<-> Pipes <-> py.\nIf JNI, you should be familiar with python's bindings(not recommended) and compiled files like *.so to work with java.\nThe whole process is like py -> c -> .so/.pyd -> JNI -> jar.\nHere is a good practice of stdio.\nhttps://github.com/JULIELab/java-stdio-ipc\n" ]
[ 37, 21, 21, 7, 6, 4, 3, 3, 3, 2, 0, 0 ]
[]
[]
[ "integration", "java", "python" ]
stackoverflow_0001119696_integration_java_python.txt
Q: How to remove duplicate days with multiple tickers in a single dataframe? Imagine I have a dataframe that contains minute data for different symbols: timestamp open high low close volume trade_count vwap symbol volume_10_day 0 2022-09-26 08:20:00+00:00 1.58 1.59 1.34 1.34 972 15 1.433220 ADA 2889145.1 1 2022-09-26 08:25:00+00:00 1.45 1.66 1.41 1.66 3778 25 1.551821 ADA 2889145.1 2 2022-09-26 08:30:00+00:00 1.70 1.70 1.39 1.47 13683 59 1.499826 ADA 2889145.1 3 2022-09-26 08:35:00+00:00 1.43 1.50 1.37 1.37 3627 10 1.406485 ADA 2889145.1 4 2022-09-26 08:40:00+00:00 1.40 1.44 1.40 1.44 1352 9 1.408365 ADA 2889145.1 -- 100 2022-09-26 08:20:00+00:00 1.58 1.59 1.34 1.34 972 15 1.433220 ADD 2889145.1 101 2022-09-26 08:25:00+00:00 1.45 1.66 1.41 1.66 3778 25 1.551821 ADD 2889145.1 102 2022-09-26 08:30:00+00:00 1.70 1.70 1.39 1.47 13683 59 1.499826 ADD 2889145.1 103 2022-09-26 08:35:00+00:00 1.43 1.50 1.37 1.37 3627 10 1.406485 ADD 2889145.1 104 2022-09-26 08:40:00+00:00 1.40 1.44 1.40 1.44 1352 9 1.408365 ADD 2889145.1 I want to be able to filter the list, so that it only returns a single dataframe with multiple days, but that no days are repeated (like in the example above where ADA and ADD both appear for the date 2022-09-26). How can I filter out duplicate days like this? I don't care how it's done - it could be just keeping whatever symbol appears first for a given date, like this for example: timestamp open high low close volume trade_count vwap symbol volume_10_day 0 2022-09-26 08:20:00+00:00 1.58 1.59 1.34 1.34 972 15 1.433220 ADA 2889145.1 1 2022-09-26 08:25:00+00:00 1.45 1.66 1.41 1.66 3778 25 1.551821 ADA 2889145.1 2 2022-09-26 08:30:00+00:00 1.70 1.70 1.39 1.47 13683 59 1.499826 ADA 2889145.1 3 2022-09-26 08:35:00+00:00 1.43 1.50 1.37 1.37 3627 10 1.406485 ADA 2889145.1 4 2022-09-26 08:40:00+00:00 1.40 1.44 1.40 1.44 1352 9 1.408365 ADA 2889145.1 -- 100 2022-09-27 08:20:00+00:00 1.58 1.59 1.34 1.34 972 15 1.433220 ADB 2889145.1 101 2022-09-27 08:25:00+00:00 1.45 1.66 1.41 1.66 3778 25 1.551821 ADB 2889145.1 102 2022-09-27 08:30:00+00:00 1.70 1.70 1.39 1.47 13683 59 1.499826 ADB 2889145.1 103 2022-09-27 08:35:00+00:00 1.43 1.50 1.37 1.37 3627 10 1.406485 ADB 2889145.1 104 2022-09-27 08:40:00+00:00 1.40 1.44 1.40 1.44 1352 9 1.408365 ADB 2889145.1 How can I achieve this? Update, tried drop_duplicates as suggested by Lukas, like so: Read from db in a df: df = pd.read_sql_query("SELECT * from ohlc_minutes", conn) Get the length (4769): print(len(df)) And then: df['timestamp'] = pd.to_datetime(df['timestamp']) df.drop_duplicates(subset=['symbol', 'timestamp']) print(len(df)) But it returns the same length. How can I get my drop_duplicates to work with minute data? A: You can use pd.drop_duplicates: df.drop_duplicates(subset=['timestamp', 'symbol']) By default, it will take the first appearance of the combination of the values in the timestamp and symbol columns, but you can change this behavior.
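Two details are worth noting about the update in the question. First, drop_duplicates returns a new DataFrame rather than modifying df in place, so the result must be assigned back; that alone explains why len(df) never changes. Second, deduplicating on ['symbol', 'timestamp'] cannot shrink this data anyway, because every (symbol, minute) pair is already unique. A sketch that keeps only the rows of whichever symbol appears first on each calendar day (column names as in the question) might look like this:

import pandas as pd

df["timestamp"] = pd.to_datetime(df["timestamp"])

# pick the first symbol observed on each calendar day, then keep only its rows
first_symbol = df.groupby(df["timestamp"].dt.date)["symbol"].transform("first")
df = df[df["symbol"] == first_symbol]  # note the assignment back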
How to remove duplicate days with multiple tickers in a single dataframe?
Imagine I have a dataframe that contains minute data for different symbols: timestamp open high low close volume trade_count vwap symbol volume_10_day 0 2022-09-26 08:20:00+00:00 1.58 1.59 1.34 1.34 972 15 1.433220 ADA 2889145.1 1 2022-09-26 08:25:00+00:00 1.45 1.66 1.41 1.66 3778 25 1.551821 ADA 2889145.1 2 2022-09-26 08:30:00+00:00 1.70 1.70 1.39 1.47 13683 59 1.499826 ADA 2889145.1 3 2022-09-26 08:35:00+00:00 1.43 1.50 1.37 1.37 3627 10 1.406485 ADA 2889145.1 4 2022-09-26 08:40:00+00:00 1.40 1.44 1.40 1.44 1352 9 1.408365 ADA 2889145.1 -- 100 2022-09-26 08:20:00+00:00 1.58 1.59 1.34 1.34 972 15 1.433220 ADD 2889145.1 101 2022-09-26 08:25:00+00:00 1.45 1.66 1.41 1.66 3778 25 1.551821 ADD 2889145.1 102 2022-09-26 08:30:00+00:00 1.70 1.70 1.39 1.47 13683 59 1.499826 ADD 2889145.1 103 2022-09-26 08:35:00+00:00 1.43 1.50 1.37 1.37 3627 10 1.406485 ADD 2889145.1 104 2022-09-26 08:40:00+00:00 1.40 1.44 1.40 1.44 1352 9 1.408365 ADD 2889145.1 I want to be able to filter the list, so that it only returns a single dataframe with multiple days, but that no days are repeated (like in the example above where ADA and ADD both appear for the date 2022-09-26). How can I filter out duplicate days like this? I don't care how it's done - it could be just keeping whatever symbol appears first for a given date, like this for example: timestamp open high low close volume trade_count vwap symbol volume_10_day 0 2022-09-26 08:20:00+00:00 1.58 1.59 1.34 1.34 972 15 1.433220 ADA 2889145.1 1 2022-09-26 08:25:00+00:00 1.45 1.66 1.41 1.66 3778 25 1.551821 ADA 2889145.1 2 2022-09-26 08:30:00+00:00 1.70 1.70 1.39 1.47 13683 59 1.499826 ADA 2889145.1 3 2022-09-26 08:35:00+00:00 1.43 1.50 1.37 1.37 3627 10 1.406485 ADA 2889145.1 4 2022-09-26 08:40:00+00:00 1.40 1.44 1.40 1.44 1352 9 1.408365 ADA 2889145.1 -- 100 2022-09-27 08:20:00+00:00 1.58 1.59 1.34 1.34 972 15 1.433220 ADB 2889145.1 101 2022-09-27 08:25:00+00:00 1.45 1.66 1.41 1.66 3778 25 1.551821 ADB 2889145.1 102 2022-09-27 08:30:00+00:00 1.70 1.70 1.39 1.47 13683 59 1.499826 ADB 2889145.1 103 2022-09-27 08:35:00+00:00 1.43 1.50 1.37 1.37 3627 10 1.406485 ADB 2889145.1 104 2022-09-27 08:40:00+00:00 1.40 1.44 1.40 1.44 1352 9 1.408365 ADB 2889145.1 How can I achieve this? Update, tried drop_duplicates as suggested by Lukas, like so: Read from db in a df: df = pd.read_sql_query("SELECT * from ohlc_minutes", conn) Get the length (4769): print(len(df)) And then: df['timestamp'] = pd.to_datetime(df['timestamp']) df.drop_duplicates(subset=['symbol', 'timestamp']) print(len(df)) But it returns the same length. How can I get my drop_duplicates to work with minute data?
[ "You can use pd.drop_duplicates:\ndf.drop_duplicates(subset=['timestamp', 'symbol'])\n\nBy default, it will take the first appearance of the combination of the values in the timestamp and symbol columns, but you can change this behavior.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "numpy", "pandas", "python" ]
stackoverflow_0074540576_dataframe_numpy_pandas_python.txt
Q: Is there a way to prevent ray.init() from hanging when using Python on Apple silicon (the M1 Max)? So I am trying to run ray[rllib] in a Jupyter notebook (in a Miniforge virtual environment) on Apple silicon (the M1 Max). Although I can import ray normally into the notebook, the very next step (of running ray.init()) causes the notebook to hang. No error is returned--ray.init() never completes. Is there a fix for this? This is my first time using Ray. I don't think the notebook or the commands I am entering is the issue because the notebook came pre-made from an instructor, and I have managed to get an identical notebook to run normally in a Miniforge environment on Windows 10. I followed advice from developers at Ray M1 Mac (Apple Silicon) Support to install Miniforge for the M1 and create a virtual environment. I also leveraged this thread What is the proper way to install TensorFlow on Apple M1 in 2022 to devise a strategy for installing applications I need for a reinforcement learning application. Here are the contents of an environment.yml file I used to set up the Miniforge virtual environment: name: tf-metal channels: - apple - conda-forge dependencies: - python=3.9 - gym-all=0.21.0 - pip - tensorflow-deps ## uncommented for use with Jupyter - ipykernel ## PyPI packages - pip: - jupyterlab - ray[rllib]==1.11 - tensorflow-macos - tensorflow-metal The steps I used in Terminal for creating the virtual environment were these: # Download Miniforge3-MacOSX-arm64.sh and make it executable: chmod u+x ./Miniforge3-MacOSX-arm64.sh # run Miniforge ./Miniforge3-MacOSX-arm64.sh # (or update it) ./Miniforge3-MacOSX-arm64.sh -u # accept terms and conditions... # run 'conda init' by entering 'yes' # configure conda (then close and reopen Terminal): conda config --set auto_activate_base false # confirm '~/.bash_profile' reflects miniforge settings # good-to-go... 
# set up virtual environment conda create --name rl_course2 # (choose any name you want) # confirm acceptability of location (enter 'yes') # activate env: conda activate rl_course2 # configure channels (settings recommended by an instructor) conda config --env --add channels conda-forge conda config --env --set channel_priority strict # install dependencies using environment.yml file shown above: conda env update --name rl_course2 --file '/Users/.../environment.yml' # check output for errors...(none found via text search) So I created the virtual environment and installed all the dependencies with no errors, as far as I could tell: Successfully installed MarkupSafe-2.1.1 PyWavelets-1.4.1 Send2Trash-1.8.0 absl-py-1.3.0 anyio-3.6.2 argon2-cffi-21.3.0 argon2-cffi-bindings-21.2.0 astunparse-1.6.3 async-timeout-4.0.2 attrs-22.1.0 babel-2.11.0 beautifulsoup4-4.11.1 bleach-5.0.1 cachetools-5.2.0 certifi-2022.9.24 cffi-1.15.1 charset-normalizer-2.1.1 click-8.1.3 contourpy-1.0.6 cycler-0.11.0 defusedxml-0.7.1 dm-tree-0.1.7 fastjsonschema-2.16.2 filelock-3.8.0 flatbuffers-22.10.26 fonttools-4.38.0 gast-0.4.0 google-auth-2.14.1 google-auth-oauthlib-0.4.6 google-pasta-0.2.0 grpcio-1.43.0 idna-3.4 imageio-2.22.4 importlib-metadata-5.0.0 ipython-genutils-0.2.0 jinja2-3.1.2 json5-0.9.10 jsonschema-4.17.1 jupyter-server-1.23.3 jupyterlab-3.5.0 jupyterlab-pygments-0.2.2 jupyterlab-server-2.16.3 keras-2.10.0 keras-preprocessing-1.1.2 kiwisolver-1.4.4 libclang-14.0.6 markdown-3.4.1 matplotlib-3.6.2 mistune-2.0.4 msgpack-1.0.4 nbclassic-0.4.8 nbclient-0.7.0 nbconvert-7.2.5 nbformat-5.7.0 networkx-2.8.8 notebook-6.5.2 notebook-shim-0.2.2 oauthlib-3.2.2 opt-einsum-3.3.0 pandas-1.5.1 pandocfilters-1.5.0 pillow-9.3.0 prometheus-client-0.15.0 pyasn1-0.4.8 pyasn1-modules-0.2.8 pycparser-2.21 pyrsistent-0.19.2 pytz-2022.6 pyyaml-6.0 ray-1.11.0 redis-4.3.5 requests-2.28.1 requests-oauthlib-1.3.1 rsa-4.9 scikit-image-0.19.3 sniffio-1.3.0 soupsieve-2.3.2.post1 tabulate-0.9.0 tensorboard-2.10.1 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.1 tensorboardX-2.5.1 tensorflow-estimator-2.10.0 tensorflow-macos-2.10.0 tensorflow-metal-0.6.0 termcolor-2.1.1 terminado-0.17.0 tifffile-2022.10.10 tinycss2-1.2.1 tomli-2.0.1 typing-extensions-4.4.0 urllib3-1.26.12 webencodings-0.5.1 websocket-client-1.4.2 werkzeug-2.2.2 wrapt-1.14.1 zipp-3.10.0 Last step (while working in the rl_course2 environment) using Terminal: launch Jupyter... (rl_course2) MacBook-Pro ~$ jupyter notebook Now, in the Jupyter/Python notebook (Chrome browser): import ray # works! ray.init() # never completes (no errors)! So I tried similar steps in the same environment using Terminal (no notebook): (rl_course2) MacBook-Pro ~$ python3 Python 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:48:25) [Clang 14.0.6 ] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow >>> import ray >>> ray.init() [no errors, but never completes] Is there a way to fix this and run Ray normally in my Jupyter environment? Update 1: Just now, I was able to run the simple TensorFlow test script recommended by Apple (see Get started with tensorflow-metal) using the virtual environment discussed above, and five epochs of training completed with no errors in about two minutes on an M1 Max with 64 GB of memory, so the environment appears to be working fine. Perhaps the issue involves Ray? A: I have found one of possibly several answers to my question. 
Changing the environment.yml file (described above) slightly to import ray[rllib] rather than ray[rllib]==1.11 enabled the Jupyter notebook to run ray.init() normally and execute the remainder of the code in the notebook. It appears there was a bug in ray[rllib] version 1.11 that prevented ray.init() from running on the M1 Max under some circumstances. So to summarize: I was able to overcome the hang involving ray.init() on Apple silicon (M1 Max) by modifying the environment.yml file to this: name: tf-metal channels: - apple - conda-forge dependencies: - python=3.9 - gym-all=0.21.0 - pip - tensorflow-deps ## uncommented for use with Jupyter - ipykernel ## PyPI packages - pip: - jupyterlab - ray[rllib] - tensorflow-macos - tensorflow-metal I subsequently created a Miniforge environment using the procedure described above. Python version 3.9.15 and Ray version 2.1.0 were installed in the notebook automatically, and the notebook ran normally on the M1 Max.
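For anyone repeating this fix, a minimal smoke test (illustrative only) confirms the repaired environment actually schedules work:

import ray

ray.init()  # should return within a few seconds on a working install

@ray.remote
def square(x):
    return x * x

print(ray.get([square.remote(i) for i in range(4)]))  # [0, 1, 4, 9]
ray.shutdown()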
Is there a way to prevent ray.init() from hanging when using Python on Apple silicon (the M1 Max)?
So I am trying to run ray[rllib] in a Jupyter notebook (in a Miniforge virtual environment) on Apple silicon (the M1 Max). Although I can import ray normally into the notebook, the very next step (of running ray.init()) causes the notebook to hang. No error is returned--ray.init() never completes. Is there a fix for this? This is my first time using Ray. I don't think the notebook or the commands I am entering is the issue because the notebook came pre-made from an instructor, and I have managed to get an identical notebook to run normally in a Miniforge environment on Windows 10. I followed advice from developers at Ray M1 Mac (Apple Silicon) Support to install Miniforge for the M1 and create a virtual environment. I also leveraged this thread What is the proper way to install TensorFlow on Apple M1 in 2022 to devise a strategy for installing applications I need for a reinforcement learning application. Here are the contents of an environment.yml file I used to set up the Miniforge virtual environment: name: tf-metal channels: - apple - conda-forge dependencies: - python=3.9 - gym-all=0.21.0 - pip - tensorflow-deps ## uncommented for use with Jupyter - ipykernel ## PyPI packages - pip: - jupyterlab - ray[rllib]==1.11 - tensorflow-macos - tensorflow-metal The steps I used in Terminal for creating the virtual environment were these: # Download Miniforge3-MacOSX-arm64.sh and make it executable: chmod u+x ./Miniforge3-MacOSX-arm64.sh # run Miniforge ./Miniforge3-MacOSX-arm64.sh # (or update it) ./Miniforge3-MacOSX-arm64.sh -u # accept terms and conditions... # run 'conda init' by entering 'yes' # configure conda (then close and reopen Terminal): conda config --set auto_activate_base false # confirm '~/.bash_profile' reflects miniforge settings # good-to-go... 
# set up virtual environment conda create --name rl_course2 # (choose any name you want) # confirm acceptability of location (enter 'yes') # activate env: conda activate rl_course2 # configure channels (settings recommended by an instructor) conda config --env --add channels conda-forge conda config --env --set channel_priority strict # install dependencies using environment.yml file shown above: conda env update --name rl_course2 --file '/Users/.../environment.yml' # check output for errors...(none found via text search) So I created the virtual environment and installed all the dependencies with no errors, as far as I could tell: Successfully installed MarkupSafe-2.1.1 PyWavelets-1.4.1 Send2Trash-1.8.0 absl-py-1.3.0 anyio-3.6.2 argon2-cffi-21.3.0 argon2-cffi-bindings-21.2.0 astunparse-1.6.3 async-timeout-4.0.2 attrs-22.1.0 babel-2.11.0 beautifulsoup4-4.11.1 bleach-5.0.1 cachetools-5.2.0 certifi-2022.9.24 cffi-1.15.1 charset-normalizer-2.1.1 click-8.1.3 contourpy-1.0.6 cycler-0.11.0 defusedxml-0.7.1 dm-tree-0.1.7 fastjsonschema-2.16.2 filelock-3.8.0 flatbuffers-22.10.26 fonttools-4.38.0 gast-0.4.0 google-auth-2.14.1 google-auth-oauthlib-0.4.6 google-pasta-0.2.0 grpcio-1.43.0 idna-3.4 imageio-2.22.4 importlib-metadata-5.0.0 ipython-genutils-0.2.0 jinja2-3.1.2 json5-0.9.10 jsonschema-4.17.1 jupyter-server-1.23.3 jupyterlab-3.5.0 jupyterlab-pygments-0.2.2 jupyterlab-server-2.16.3 keras-2.10.0 keras-preprocessing-1.1.2 kiwisolver-1.4.4 libclang-14.0.6 markdown-3.4.1 matplotlib-3.6.2 mistune-2.0.4 msgpack-1.0.4 nbclassic-0.4.8 nbclient-0.7.0 nbconvert-7.2.5 nbformat-5.7.0 networkx-2.8.8 notebook-6.5.2 notebook-shim-0.2.2 oauthlib-3.2.2 opt-einsum-3.3.0 pandas-1.5.1 pandocfilters-1.5.0 pillow-9.3.0 prometheus-client-0.15.0 pyasn1-0.4.8 pyasn1-modules-0.2.8 pycparser-2.21 pyrsistent-0.19.2 pytz-2022.6 pyyaml-6.0 ray-1.11.0 redis-4.3.5 requests-2.28.1 requests-oauthlib-1.3.1 rsa-4.9 scikit-image-0.19.3 sniffio-1.3.0 soupsieve-2.3.2.post1 tabulate-0.9.0 tensorboard-2.10.1 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.1 tensorboardX-2.5.1 tensorflow-estimator-2.10.0 tensorflow-macos-2.10.0 tensorflow-metal-0.6.0 termcolor-2.1.1 terminado-0.17.0 tifffile-2022.10.10 tinycss2-1.2.1 tomli-2.0.1 typing-extensions-4.4.0 urllib3-1.26.12 webencodings-0.5.1 websocket-client-1.4.2 werkzeug-2.2.2 wrapt-1.14.1 zipp-3.10.0 Last step (while working in the rl_course2 environment) using Terminal: launch Jupyter... (rl_course2) MacBook-Pro ~$ jupyter notebook Now, in the Jupyter/Python notebook (Chrome browser): import ray # works! ray.init() # never completes (no errors)! So I tried similar steps in the same environment using Terminal (no notebook): (rl_course2) MacBook-Pro ~$ python3 Python 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:48:25) [Clang 14.0.6 ] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow >>> import ray >>> ray.init() [no errors, but never completes] Is there a way to fix this and run Ray normally in my Jupyter environment? Update 1: Just now, I was able to run the simple TensorFlow test script recommended by Apple (see Get started with tensorflow-metal) using the virtual environment discussed above, and five epochs of training completed with no errors in about two minutes on an M1 Max with 64 GB of memory, so the environment appears to be working fine. Perhaps the issue involves Ray?
[ "I have found one of possibly several answers to my question. Changing the environment.yml file (described above) slightly to import ray[rllib] rather than ray[rllib]==1.11 enabled Jupyter notebook to run ray.init() normally and execute the remainder of the code in the notebook. It appears there was a bug in ray[rllib] version 1.11 that prevented ray.init() from running on the M1 Max under some circumstances.\nSo to summarize: to overcome a hang involving ray.init() on Apple Silicon (M1 Max), I was able to solve it by modifying the environment.yml file to this:\nname: tf-metal\nchannels:\n - apple\n - conda-forge\ndependencies:\n - python=3.9\n - gym-all=0.21.0\n - pip\n - tensorflow-deps\n\n ## uncommented for use with Jupyter\n - ipykernel\n\n ## PyPI packages\n - pip:\n - jupyterlab\n - ray[rllib]\n - tensorflow-macos\n - tensorflow-metal\n\nI subsequently created a Miniforge environment using the procedure described above. Python version 3.9.15 and Ray version 2.1.0 were installed in the notebook automatically, and the notebook ran normally on the M1 Max.\n" ]
[ 0 ]
[]
[]
[ "apple_m1", "mini_forge", "python", "ray", "tensorflow" ]
stackoverflow_0074541573_apple_m1_mini_forge_python_ray_tensorflow.txt
Q: How to scrape related searches on google? I'm trying to scrape Google for related searches when given a list of keywords, and then output these related searches into a CSV file. My problem is getting Beautiful Soup to identify the related-searches HTML tags. Here is an example HTML tag in the source code: <div data-ved="2ahUKEwitr8CPkLT3AhVRVsAKHVF-C80QmoICKAV6BAgEEBE">iphone xr</div> Here are my webdriver settings: from selenium import webdriver user_agent = 'Chrome/100.0.4896.60' webdriver_options = webdriver.ChromeOptions() webdriver_options.add_argument('user-agent={0}'.format(user_agent)) capabilities = webdriver_options.to_capabilities() capabilities["acceptSslCerts"] = True capabilities["acceptInsecureCerts"] = True Here is my code as is: queries = ["iphone"] driver = webdriver.Chrome(options=webdriver_options, desired_capabilities=capabilities, port=4444) df2 = [] driver.get("https://google.com") time.sleep(3) driver.find_element(By.CSS_SELECTOR, "[aria-label='Agree to the use of cookies and other data for the purposes described']").click() # get_current_related_searches for query in queries: driver.get("https://google.com/search?q=" + query) time.sleep(3) soup = BeautifulSoup(driver.page_source, 'html.parser') p = soup.find_all('div data-ved') print(p) d = pd.DataFrame({'loop': 1, 'source': query, 'from': query, 'to': [s.text for s in p]}) terms = d["to"] df2.append(d) time.sleep(3) df = pd.concat(df2).reset_index(drop=False) df.to_csv("related_searches.csv") It's the p = soup.find_all(...) call which is incorrect; I'm just not sure how to get BS to identify these specific HTML tags. Any help would be great :) A: @jakecohensol, as you've pointed out, the selector in p = soup.find_all is wrong. The correct CSS selector: .y6Uyqe .AB4Wff. The Chrome/100.0.4896.60 User-Agent header is incorrect. Google blocks requests with such an agent string. With the full User-Agent string Google returns a proper HTML response. Google Related Searches can be scraped without a browser. It will be faster and more reliable. 
Here's your fixed code snippet (link to the full code in online IDE) import time import requests from bs4 import BeautifulSoup import pandas as pd headers = { "User-Agent": "Mozilla/5.0 (X11; CrOS x86_64 14526.89.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.133 Safari/537.36" } queries = ["iphone", "pixel", "samsung"] df2 = [] # get_current_related_searches for query in queries: params = {"q": query} response = requests.get("https://google.com/search", params=params, headers=headers) soup = BeautifulSoup(response.text, "html.parser") p = soup.select(".y6Uyqe .AB4Wff") d = pd.DataFrame( {"loop": 1, "source": query, "from": query, "to": [s.text for s in p]} ) terms = d["to"] df2.append(d) time.sleep(3) df = pd.concat(df2).reset_index(drop=False) df.to_csv("related_searches.csv") Sample output: ,index,loop,source,from,to 0,0,1,iphone,iphone,iphone 13 1,1,1,iphone,iphone,iphone 12 2,2,1,iphone,iphone,iphone x 3,3,1,iphone,iphone,iphone 8 4,4,1,iphone,iphone,iphone 7 5,5,1,iphone,iphone,iphone xr 6,6,1,iphone,iphone,find my iphone 7,0,1,pixel,pixel,pixel 6 8,1,1,pixel,pixel,google pixel 9,2,1,pixel,pixel,pixel phone 10,3,1,pixel,pixel,pixel 6 pro 11,4,1,pixel,pixel,pixel 3 12,5,1,pixel,pixel,google pixel price 13,6,1,pixel,pixel,pixel 6 release date 14,0,1,samsung,samsung,samsung galaxy 15,1,1,samsung,samsung,samsung tv 16,2,1,samsung,samsung,samsung tablet 17,3,1,samsung,samsung,samsung account 18,4,1,samsung,samsung,samsung mobile 19,5,1,samsung,samsung,samsung store 20,6,1,samsung,samsung,samsung a21s 21,7,1,samsung,samsung,samsung login A: Have a look at SelectorGadget Chrome extension to get CSS selector by clicking on desired element in your browser that returns a HTML element. Check out what's your user agent, or find multiple user agents for mobile, tablet, PC, or different OS in order to rotate user agents which reduces the chance of being blocked a little bit. The ideal scenario is to combine rotating user agents with rotated proxies (ideally residential), and CAPTCHA solver to solve Google CAPTCHA that will appear eventually. As an alternative, there's a Google Search Engine Results API to scrape Google search results if you don't want to figure out how to create and maintain the parser from scratch, or how bypass blocks from Google (or other search engines). Example code to integrate: import os from serpapi import GoogleSearch queries = [ 'banana', 'minecraft', 'apple stock', 'how to create a apple pie' ] def serpapi_scrape_related_queries(): related_searches = [] for query in queries: print(f'extracting related queries from query: {query}') params = { 'api_key': os.getenv('API_KEY'), # your serpapi api key 'device': 'desktop', # device to retrive results from 'engine': 'google', # serpapi parsing engine 'q': query, # search query 'gl': 'us', # country of the search 'hl': 'en' # language of the search } search = GoogleSearch(params) # where data extracts on the backend results = search.get_dict() # JSON -> dict for result in results['related_searches']: query = result['query'] link = result['link'] related_searches.append({ 'query': query, 'link': link }) pd.DataFrame(data=related_searches).to_csv('serpapi_related_queries.csv', index=False) serpapi_scrape_related_queries() Part of the dataframe output: query link 0 banana benefits https://www.google.com/search?gl=us&hl=en&q=Ba... 1 banana republic https://www.google.com/search?gl=us&hl=en&q=Ba... 2 banana tree https://www.google.com/search?gl=us&hl=en&q=Ba... 
3 banana meaning https://www.google.com/search?gl=us&hl=en&q=Ba... 4 banana plant https://www.google.com/search?gl=us&hl=en&q=Ba...
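The second answer mentions rotating user agents without showing code; a minimal way to do it (the agent strings below are examples, not an exhaustive or current list) is:

import random
import requests

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.133 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.4 Safari/605.1.15",
]

# each request presents a different full browser User-Agent string
headers = {"User-Agent": random.choice(USER_AGENTS)}
response = requests.get("https://google.com/search", params={"q": "iphone"}, headers=headers)

This pairs naturally with the proxy rotation described above.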
How to scrape related searches on google?
I'm trying to scrape google for related searches when given a list of keywords, and then output these related searches into a csv file. My problem is getting beautiful soup to identify the related searches html tags. Here is an example html tag in the source code: <div data-ved="2ahUKEwitr8CPkLT3AhVRVsAKHVF-C80QmoICKAV6BAgEEBE">iphone xr</div> Here are my webdriver settings: from selenium import webdriver user_agent = 'Chrome/100.0.4896.60' webdriver_options = webdriver.ChromeOptions() webdriver_options.add_argument('user-agent={0}'.format(user_agent)) capabilities = webdriver_options.to_capabilities() capabilities["acceptSslCerts"] = True capabilities["acceptInsecureCerts"] = True Here is my code as is: queries = ["iphone"] driver = webdriver.Chrome(options=webdriver_options, desired_capabilities=capabilities, port=4444) df2 = [] driver.get("https://google.com") time.sleep(3) driver.find_element(By.CSS_SELECTOR, "[aria-label='Agree to the use of cookies and other data for the purposes described']").click() # get_current_related_searches for query in queries: driver.get("https://google.com/search?q=" + query) time.sleep(3) soup = BeautifulSoup(driver.page_source, 'html.parser') p = soup.find_all('div data-ved') print(p) d = pd.DataFrame({'loop': 1, 'source': query, 'from': query, 'to': [s.text for s in p]}) terms = d["to"] df2.append(d) time.sleep(3) df = pd.concat(df2).reset_index(drop=False) df.to_csv("related_searches.csv") Its the p=soup.find_all which is incorrect I'm just not sure how to get BS to identify these specific html tags. Any help would be great :)
[ "@jakecohensol, as you've pointed out, the selector in p = soup.find_all is wrong. The correct CSS selector: .y6Uyqe .AB4Wff.\nChrome/100.0.4896.60 User-Agent header is incorrect. Google blocks requests with such an agent string. With the full User-Agent string Google returns a proper HTML response.\nGoogle Related Searches can be scraped without a browser. It will be faster and more reliable.\nHere's your fixed code snippet (link to the full code in online IDE)\nimport time\nimport requests\nfrom bs4 import BeautifulSoup\nimport pandas as pd\n\nheaders = {\n \"User-Agent\": \"Mozilla/5.0 (X11; CrOS x86_64 14526.89.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.133 Safari/537.36\"\n}\n\nqueries = [\"iphone\", \"pixel\", \"samsung\"]\n\ndf2 = []\n\n# get_current_related_searches\nfor query in queries:\n params = {\"q\": query}\n response = requests.get(\"https://google.com/search\", params=params, headers=headers)\n\n soup = BeautifulSoup(response.text, \"html.parser\")\n\n p = soup.select(\".y6Uyqe .AB4Wff\")\n\n d = pd.DataFrame(\n {\"loop\": 1, \"source\": query, \"from\": query, \"to\": [s.text for s in p]}\n )\n\n terms = d[\"to\"]\n df2.append(d)\n\n time.sleep(3)\n\ndf = pd.concat(df2).reset_index(drop=False)\n\ndf.to_csv(\"related_searches.csv\")\n\nSample output:\n,index,loop,source,from,to\n0,0,1,iphone,iphone,iphone 13\n1,1,1,iphone,iphone,iphone 12\n2,2,1,iphone,iphone,iphone x\n3,3,1,iphone,iphone,iphone 8\n4,4,1,iphone,iphone,iphone 7\n5,5,1,iphone,iphone,iphone xr\n6,6,1,iphone,iphone,find my iphone\n7,0,1,pixel,pixel,pixel 6\n8,1,1,pixel,pixel,google pixel\n9,2,1,pixel,pixel,pixel phone\n10,3,1,pixel,pixel,pixel 6 pro\n11,4,1,pixel,pixel,pixel 3\n12,5,1,pixel,pixel,google pixel price\n13,6,1,pixel,pixel,pixel 6 release date\n14,0,1,samsung,samsung,samsung galaxy\n15,1,1,samsung,samsung,samsung tv\n16,2,1,samsung,samsung,samsung tablet\n17,3,1,samsung,samsung,samsung account\n18,4,1,samsung,samsung,samsung mobile\n19,5,1,samsung,samsung,samsung store\n20,6,1,samsung,samsung,samsung a21s\n21,7,1,samsung,samsung,samsung login\n\n", "Have a look at SelectorGadget Chrome extension to get CSS selector by clicking on desired element in your browser that returns a HTML element.\nCheck out what's your user agent, or find multiple user agents for mobile, tablet, PC, or different OS in order to rotate user agents which reduces the chance of being blocked a little bit.\nThe ideal scenario is to combine rotating user agents with rotated proxies (ideally residential), and CAPTCHA solver to solve Google CAPTCHA that will appear eventually.\nAs an alternative, there's a Google Search Engine Results API to scrape Google search results if you don't want to figure out how to create and maintain the parser from scratch, or how bypass blocks from Google (or other search engines).\nExample code to integrate:\nimport os\nfrom serpapi import GoogleSearch\n\nqueries = [\n 'banana',\n 'minecraft',\n 'apple stock',\n 'how to create a apple pie'\n]\n\ndef serpapi_scrape_related_queries():\n\n related_searches = []\n\n for query in queries:\n print(f'extracting related queries from query: {query}')\n\n params = {\n 'api_key': os.getenv('API_KEY'), # your serpapi api key\n 'device': 'desktop', # device to retrive results from\n 'engine': 'google', # serpapi parsing engine\n 'q': query, # search query\n 'gl': 'us', # country of the search\n 'hl': 'en' # language of the search\n }\n\n search = GoogleSearch(params) # where data extracts on the backend\n results = search.get_dict() # JSON -> 
dict\n\n for result in results['related_searches']:\n query = result['query']\n link = result['link']\n\n related_searches.append({\n 'query': query,\n 'link': link\n })\n\n pd.DataFrame(data=related_searches).to_csv('serpapi_related_queries.csv', index=False)\n\nserpapi_scrape_related_queries()\n\nPart of the dataframe output:\n query link\n0 banana benefits https://www.google.com/search?gl=us&hl=en&q=Ba...\n1 banana republic https://www.google.com/search?gl=us&hl=en&q=Ba...\n2 banana tree https://www.google.com/search?gl=us&hl=en&q=Ba...\n3 banana meaning https://www.google.com/search?gl=us&hl=en&q=Ba...\n4 banana plant https://www.google.com/search?gl=us&hl=en&q=Ba...\n\n" ]
[ 2, 1 ]
[]
[]
[ "beautifulsoup", "google_chrome", "python", "selenium", "web_scraping" ]
stackoverflow_0072028100_beautifulsoup_google_chrome_python_selenium_web_scraping.txt
Q: Creating DataFrame based on two or more non equal lists I have two lists let's say list1 = ["apple","banana"] list2 = ["M","T","W","TR","F","S"] I want to create a data frame of two columns fruit and day so that the result will look something like this fruit day apple M apple T apple W apple TR apple F apple S banana M and so on... currently, my actual data is columnar meaning items in list2 are in columns, but I want them in rows, any help would be appreciated. A: try this: from itertools import product import pandas as pd list1 = ["apple","banana"] list2 = ["M","T","W","TR","F","S"] df = pd.DataFrame( product(list1, list2), columns=['fruit', 'day'] ) print(df) >>> fruit day 0 apple M 1 apple T 2 apple W 3 apple TR 4 apple F 5 apple S 6 banana M 7 banana T 8 banana W 9 banana TR 10 banana F 11 banana S A: same result with merge: df = pd.merge(pd.Series(list1,name='fruit'), pd.Series(list2,name='day'),how='cross') print(df) ''' fruit day 0 apple M 1 apple T 2 apple W 3 apple TR 4 apple F 5 apple S 6 banana M 7 banana T 8 banana W 9 banana TR 10 banana F 11 banana S
Creating DataFrame based on two or more non equal lists
I have two lists let's say list1 = ["apple","banana"] list2 = ["M","T","W","TR","F","S"] I want to create a data frame of two columns fruit and day so that the result will look something like this fruit day apple M apple T apple W apple TR apple F apple S banana M and so on... currently, my actual data is columnar meaning items in list2 are in columns, but I want them in rows, any help would be appreciated.
[ "try this:\nfrom itertools import product\nimport pandas as pd\n\n\nlist1 = [\"apple\",\"banana\"]\nlist2 = [\"M\",\"T\",\"W\",\"TR\",\"F\",\"S\"]\ndf = pd.DataFrame(\n product(list1, list2),\n columns=['fruit', 'day']\n)\nprint(df)\n>>>\n fruit day\n0 apple M\n1 apple T\n2 apple W\n3 apple TR\n4 apple F\n5 apple S\n6 banana M\n7 banana T\n8 banana W\n9 banana TR\n10 banana F\n11 banana S\n\n", "same result with merge:\ndf = pd.merge(pd.Series(list1,name='fruit'),\n pd.Series(list2,name='day'),how='cross')\n\nprint(df)\n'''\n fruit day\n0 apple M\n1 apple T\n2 apple W\n3 apple TR\n4 apple F\n5 apple S\n6 banana M\n7 banana T\n8 banana W\n9 banana TR\n10 banana F\n11 banana S\n\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python", "python_3.x" ]
stackoverflow_0074540976_pandas_python_python_3.x.txt
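A note on generality for this entry: the title asks about "two or more" lists, while both answers show exactly two. itertools.product accepts any number of iterables, so the same idea extends by unpacking a list of lists. A minimal sketch (the third list here is a made-up example, not from the question):

from itertools import product
import pandas as pd

list1 = ["apple", "banana"]
list2 = ["M", "T", "W", "TR", "F", "S"]
list3 = ["am", "pm"]  # hypothetical third list, just to illustrate the general case

# product(*lists) yields one tuple per combination, in row-major order
lists = [list1, list2, list3]
df = pd.DataFrame(product(*lists), columns=["fruit", "day", "slot"])
print(df.head())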
Q: write formula to an excel column with for loop and if statement I intent to use python to write excel formula to a particular column 'M'. But i want to skip/ignore if another column name "Status" contain the word Closed. I'm open to all other method that works. My codes: for row in range(len(df["Status"])): for x in range(len(df["Status"])): if str(df["Status"]) == "Closed": break else: dateFmt = workbook.add_format({'num_format': 'dd mmm yyyy'}) formula1 = f'=IF(M{row}="Pending", N{row}-D{row}, IF(M{row}="Closed", VALUE(L{row}), IF(M{row}="Select...", "NA")))' formula2 = f'=IF(M{row}="Closed", VALUE(N{row}), IF(M{row}="Pending", TODAY()))' worksheet.write_formula(f'N{row}', formula2, dateFmt) worksheet.write_formula(f'L{row}', formula1) I expect it to be like this: expected output with formula applied by python But my codes give me this: not the outcome i wanted Additional Details I wanted to use Python to look through the whole column M "Status" in my Excel Data. If column M contain the word 'Closed', skip that row and do nothing. If contain the word 'Pending', input a formula to another column L and column N. My code did not skip the row that contain the word Closed, and reset data value to Zero, date to year 1900(see photo) and my heading become FALSE. However, it work perfectly for rows that contain the word Pending(see photo). A: I cannot verify your excel formulas, however, this is the logic that I would apply to your code. Only write the formula to rows which contain the Pending status Ignore the lines when the status is Closed for row, status in enumerate(df.Status): if status == "Pending": dateFmt = workbook.add_format({'num_format': 'dd mmm yyyy'}) formula1 = f'=IF(M{row}="Pending", N{row}-D{row}, IF(M{row}="Closed", VALUE(L{row}), IF(M{row}="Select...", "NA")))' formula2 = f'=IF(M{row}="Closed", VALUE(N{row}), IF(M{row}="Pending", TODAY()))' worksheet.write_formula(f'N{row}', formula2, dateFmt) worksheet.write_formula(f'L{row}', formula1)
write formula to an excel column with for loop and if statement
I intend to use Python to write Excel formulas to a particular column 'M'. But I want to skip/ignore rows where another column named "Status" contains the word Closed. I'm open to any other method that works. My code: for row in range(len(df["Status"])): for x in range(len(df["Status"])): if str(df["Status"]) == "Closed": break else: dateFmt = workbook.add_format({'num_format': 'dd mmm yyyy'}) formula1 = f'=IF(M{row}="Pending", N{row}-D{row}, IF(M{row}="Closed", VALUE(L{row}), IF(M{row}="Select...", "NA")))' formula2 = f'=IF(M{row}="Closed", VALUE(N{row}), IF(M{row}="Pending", TODAY()))' worksheet.write_formula(f'N{row}', formula2, dateFmt) worksheet.write_formula(f'L{row}', formula1) I expect it to be like this: expected output with formula applied by Python But my code gives me this: not the outcome I wanted Additional Details I wanted to use Python to look through the whole column M "Status" in my Excel data. If column M contains the word 'Closed', skip that row and do nothing. If it contains the word 'Pending', input a formula into another column L and column N. My code did not skip the rows that contain the word Closed; it reset the data values to zero, set the dates to the year 1900 (see photo), and my heading became FALSE. However, it works perfectly for rows that contain the word Pending (see photo).
[ "I cannot verify your excel formulas, however, this is the logic that I would apply to your code.\n\nOnly write the formula to rows which contain the Pending status\nIgnore the lines when the status is Closed\n\nfor row, status in enumerate(df.Status):\n if status == \"Pending\":\n dateFmt = workbook.add_format({'num_format': 'dd mmm yyyy'})\n formula1 = f'=IF(M{row}=\"Pending\", N{row}-D{row}, IF(M{row}=\"Closed\", VALUE(L{row}), IF(M{row}=\"Select...\", \"NA\")))'\n formula2 = f'=IF(M{row}=\"Closed\", VALUE(N{row}), IF(M{row}=\"Pending\", TODAY()))'\n worksheet.write_formula(f'N{row}', formula2, dateFmt)\n worksheet.write_formula(f'L{row}', formula1)\n\n\n" ]
[ 0 ]
[]
[]
[ "excel", "excel_formula", "python" ]
stackoverflow_0074517043_excel_excel_formula_python.txt
Q: List comprehension with complex conditions in python I was looking up for ways to make my loop fast,then I found about list comprehensions. I tried it on my own, but I don't fully understand it yet. From what I learned researching about list comprehensions, the code I like to execute would be on the left side, followed by the conditions then the for loop. So, it would basically look like this. ["Something I'd like to execute" Some conditions for loop] Following this style, I did it like this. The code I was trying to turn into a one liner: graph = [] for g in range(M): satisfy = [] graph_count = 0 for i in range(N-1): count = 0 for j in range(N): if i < j and count < 1: if graph_count < g: count += 1 graph_count += 1 satisfy.append("1") else: satisfy.append("0") elif i < j: satisfy.append("0") graph.append("".join(map(str,satisfy))) My Attempt graph = [[count+=1,graph_count+=1,satisfy.append("1") if graph_count < g else satisfy.append("0") and if i<j and count<1 else satisfy.append("0") if i<j for j in range(N) count=0 for i in range(N-1)] graph_count=0, "".join(map(str,satisfy)) for g in range(M)] What am I doing wrong? A: There are two kinds of optimization: micro-optimization (statement level, e.g. f-strings are faster than format function) macro optimization (algorithm, used data structures, etc) Optimizing algorithms may have much higher returns than spending the same effort on micro-optimizations. Here is my solution to your problem. After analyzing the expected output, I found some patterns and reduced the number of loops and allocations: from itertools import accumulate graph = [] size = N * (N - 1) // 2 s = list(reversed(list(accumulate(range(N))))) for g in range(M): satisfy = ["0"] * size for i in range(N - 1): if g > i: satisfy[size - s[i]] = "1" graph.append("".join(satisfy))
List comprehension with complex conditions in python
I was looking for ways to make my loop fast, then I found out about list comprehensions. I tried it on my own, but I don't fully understand it yet. From what I learned researching list comprehensions, the code I'd like to execute would be on the left side, followed by the conditions and then the for loop. So, it would basically look like this. ["Something I'd like to execute" Some conditions for loop] Following this style, I did it like this. The code I was trying to turn into a one-liner: graph = [] for g in range(M): satisfy = [] graph_count = 0 for i in range(N-1): count = 0 for j in range(N): if i < j and count < 1: if graph_count < g: count += 1 graph_count += 1 satisfy.append("1") else: satisfy.append("0") elif i < j: satisfy.append("0") graph.append("".join(map(str,satisfy))) My attempt: graph = [[count+=1,graph_count+=1,satisfy.append("1") if graph_count < g else satisfy.append("0") and if i<j and count<1 else satisfy.append("0") if i<j for j in range(N) count=0 for i in range(N-1)] graph_count=0, "".join(map(str,satisfy)) for g in range(M)] What am I doing wrong?
[ "There are two kinds of optimization:\n\nmicro-optimization (statement level, e.g. f-strings are faster than format function)\nmacro optimization (algorithm, used data structures, etc)\n\nOptimizing algorithms may have much higher returns than spending the same effort on micro-optimizations.\nHere is my solution to your problem. After analyzing the expected output, I found some patterns and reduced the number of loops and allocations:\nfrom itertools import accumulate\n\ngraph = []\nsize = N * (N - 1) // 2\ns = list(reversed(list(accumulate(range(N)))))\nfor g in range(M):\n satisfy = [\"0\"] * size\n for i in range(N - 1):\n if g > i:\n satisfy[size - s[i]] = \"1\"\n\n graph.append(\"\".join(satisfy))\n\n" ]
[ 1 ]
[]
[]
[ "list_comprehension", "performance", "python", "python_3.x" ]
stackoverflow_0074406232_list_comprehension_performance_python_python_3.x.txt
Q: Showing an error of __init__() missing 6 required positional arguments: import time delay = 1.5 class Lawyers: I set up my constructor with the following code: def __init__(self, someName, someAge, someExperience, someCity, someCollege, someTotalCase): self.name = someName self.age = someAge self.experience = someExperience self.city = someCity self.college = someCollege self.salary = 500 self.totalcases = someTotalCase Then added some methods: def addLawyer(self): name = input("Enter their full name: ") age = int(input("Enter their age: ")) experince = int(input ("Enter the years of experience in the field: ")) city = input ("Enter which city they are from: ") college = input ("Enter the college they attended: ") totalcases = int(input("Please enter the total number of cases done: ")) self.name = name self.age = age self.experience = experince self.city = city self.college = college self.totalcases = totalcases #new_info = Lawyers(name, age, experince, city,college) print ("Employee added succesfully") def promotion(self): self.salary = self.salary + 200; def averagecases(self): self.averagecasesperyear = int(self.totalcases/self.experience) print(self.name, "does", self.averagecasesperyear, "cases every year" ) Here I do my main function def main(): lawyer1 = Lawyers("John Smith", 21, 3, "Ohio", "Georgetown", 72) lawyer2 = Lawyers("Adam Jones", 24, 2, "Atlanta", "Emory", 60) lawyer3 = Lawyers("John Doe", 26, 4, "Seatle", "UCLA", 93) lawyer4 = Lawyers("Jessica Adams", 25, 3, "Boston", "Stanford", 83) listofEmployees = [lawyer1, lawyer2, lawyer3, lawyer4] keeplooping = True while keeplooping: time.sleep(delay) print("Type 1 to see all the available lawyers") print("Type 2 to add a new employee") print ("Type 3 to promote an employee") print("Type 4 to finish the program") userChoice = int(input("Please choose one of the options above: ")) if userChoice == 1: for eachEmployee in listofEmployees: eachEmployee.averagecases() elif userChoice == 2: lawyer5 = Lawyers() lawyer5().addLawyer() listofEmployees.append(lawyer5) for i in listofEmployees: print(i.name) else: break main() The code breaks when it comes to lawyer5 = Lawyers() When it comes to lawyer5 = Lawyers() it is giving an error of init() missing 6 required positional arguments: 'someName', 'someAge', 'someExperience', 'someCity', 'someCollege', and 'someTotalCase' I am trying to create a code that will collect the infromation about the new employee(new lawyer) and adds it to the list together with the information of other employees how can i fix that A: You need to init the object with the variables someName, someAge, someExperience, someCity, someCollege, someTotalCase like this lawyers5 = Lawyers("Mac", 12, 1, "Annonay", "Cambridge", 82) if you want to be able to fill these informations later on you need to fill default values: class Lawyers: def __init__(self, someName="", someAge="", someExperience="", someCity="", someCollege="", someTotalCase=""): ... You have another error: lawyer5().addLawyer() should be: lawyer5.addLawyer()
Showing an error of __init__() missing 6 required positional arguments:
import time delay = 1.5 class Lawyers: I set up my constructor with the following code: def __init__(self, someName, someAge, someExperience, someCity, someCollege, someTotalCase): self.name = someName self.age = someAge self.experience = someExperience self.city = someCity self.college = someCollege self.salary = 500 self.totalcases = someTotalCase Then added some methods: def addLawyer(self): name = input("Enter their full name: ") age = int(input("Enter their age: ")) experince = int(input ("Enter the years of experience in the field: ")) city = input ("Enter which city they are from: ") college = input ("Enter the college they attended: ") totalcases = int(input("Please enter the total number of cases done: ")) self.name = name self.age = age self.experience = experince self.city = city self.college = college self.totalcases = totalcases #new_info = Lawyers(name, age, experince, city,college) print ("Employee added successfully") def promotion(self): self.salary = self.salary + 200; def averagecases(self): self.averagecasesperyear = int(self.totalcases/self.experience) print(self.name, "does", self.averagecasesperyear, "cases every year" ) Here I do my main function def main(): lawyer1 = Lawyers("John Smith", 21, 3, "Ohio", "Georgetown", 72) lawyer2 = Lawyers("Adam Jones", 24, 2, "Atlanta", "Emory", 60) lawyer3 = Lawyers("John Doe", 26, 4, "Seatle", "UCLA", 93) lawyer4 = Lawyers("Jessica Adams", 25, 3, "Boston", "Stanford", 83) listofEmployees = [lawyer1, lawyer2, lawyer3, lawyer4] keeplooping = True while keeplooping: time.sleep(delay) print("Type 1 to see all the available lawyers") print("Type 2 to add a new employee") print ("Type 3 to promote an employee") print("Type 4 to finish the program") userChoice = int(input("Please choose one of the options above: ")) if userChoice == 1: for eachEmployee in listofEmployees: eachEmployee.averagecases() elif userChoice == 2: lawyer5 = Lawyers() lawyer5().addLawyer() listofEmployees.append(lawyer5) for i in listofEmployees: print(i.name) else: break main() The code breaks when it comes to lawyer5 = Lawyers() When it comes to lawyer5 = Lawyers() it gives an error of __init__() missing 6 required positional arguments: 'someName', 'someAge', 'someExperience', 'someCity', 'someCollege', and 'someTotalCase'. I am trying to create code that will collect the information about the new employee (new lawyer) and add it to the list together with the information of the other employees. How can I fix that?
[ "You need to init the object with the variables someName, someAge, someExperience, someCity, someCollege, someTotalCase\nlike this\nlawyers5 = Lawyers(\"Mac\", 12, 1, \"Annonay\", \"Cambridge\", 82)\n\nif you want to be able to fill these informations later on you need to fill default values:\nclass Lawyers:\n \n def __init__(self, someName=\"\", someAge=\"\", someExperience=\"\", someCity=\"\", someCollege=\"\", someTotalCase=\"\"):\n ...\n\nYou have another error:\nlawyer5().addLawyer()\nshould be:\nlawyer5.addLawyer()\n" ]
[ 1 ]
[]
[]
[ "class", "function", "python" ]
stackoverflow_0074543457_class_function_python.txt
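A common alternative to default arguments for this entry is an alternate constructor: a classmethod that gathers the input and then calls __init__ with all six values, so Lawyers() never has to be created empty. A sketch, assuming the rest of the class stays as posted:

class Lawyers:
    def __init__(self, someName, someAge, someExperience, someCity, someCollege, someTotalCase):
        self.name = someName
        self.age = someAge
        self.experience = someExperience
        self.city = someCity
        self.college = someCollege
        self.salary = 500
        self.totalcases = someTotalCase

    @classmethod
    def from_input(cls):
        # collect the six required values, then build the object in one call
        name = input("Enter their full name: ")
        age = int(input("Enter their age: "))
        experience = int(input("Enter the years of experience in the field: "))
        city = input("Enter which city they are from: ")
        college = input("Enter the college they attended: ")
        totalcases = int(input("Please enter the total number of cases done: "))
        return cls(name, age, experience, city, college, totalcases)

Inside main(), the broken branch then becomes lawyer5 = Lawyers.from_input(), with no empty construction and no lawyer5() call.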
Q: shifting up the column basis groupby Existing Dataframe : Id event time_spent_in_sec A in 0 A step_1 2.2 A step_2 3 A done 3 B in 0 B step_1 5 B step_2 8 B step_3 15 B done 7 Expected Dataframe : Id event time_spent_in_sec A in 2.2 A step_1 3 A step_2 3 A done 0 B in 5 B step_1 8 B step_2 15 B step_3 7 B done 0 I am looking to shift the value in a column time_spent_in_sec and fill last row of each unique Id by 0. I tried using shift(1) but stuck with filling the last row with 0 A: You can use numpy.roll df['time_spent_in_sec'] = np.roll(df['time_spent_in_sec'], -1) A: You can use .fillna() to fill it with the original first number: df.time_spent_in_sec.shift(-1).fillna(df.time_spent_in_sec[0]) Or: df.time_spent_in_sec.shift(-1, fill_value = df.time_spent_in_sec[0]) A: Other way to do it: Transforming into a list and shift if from there. list_col = list(df["time_spent_in_sec"]) list_col.append(list_col.pop(0)) df["time_spent_in_sec"] = list_col.copy()
Shifting up a column based on groupby
Existing Dataframe : Id event time_spent_in_sec A in 0 A step_1 2.2 A step_2 3 A done 3 B in 0 B step_1 5 B step_2 8 B step_3 15 B done 7 Expected Dataframe : Id event time_spent_in_sec A in 2.2 A step_1 3 A step_2 3 A done 0 B in 5 B step_1 8 B step_2 15 B step_3 7 B done 0 I am looking to shift up the values in the column time_spent_in_sec and fill the last row of each unique Id with 0. I tried using shift(1) but got stuck filling the last row with 0.
[ "You can use numpy.roll\ndf['time_spent_in_sec'] = np.roll(df['time_spent_in_sec'], -1)\n\n", "You can use .fillna() to fill it with the original first number:\ndf.time_spent_in_sec.shift(-1).fillna(df.time_spent_in_sec[0])\n\nOr:\ndf.time_spent_in_sec.shift(-1, fill_value = df.time_spent_in_sec[0])\n\n", "Other way to do it: Transforming into a list and shift if from there.\nlist_col = list(df[\"time_spent_in_sec\"])\nlist_col.append(list_col.pop(0))\ndf[\"time_spent_in_sec\"] = list_col.copy()\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074543383_dataframe_pandas_python.txt
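A caveat on the answers above: all of them shift the column as a whole, which lines up with the expected output here only because every Id's first value happens to be 0. If the shift must restart at each Id so that the last row of every group gets 0, a groupby-based sketch like this should behave correctly:

import pandas as pd

df = pd.DataFrame({
    "Id": ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "event": ["in", "step_1", "step_2", "done", "in", "step_1", "step_2", "step_3", "done"],
    "time_spent_in_sec": [0, 2.2, 3, 3, 0, 5, 8, 15, 7],
})

# shift(-1) within each group moves the values up one row; the last row
# per Id becomes NaN, which fillna turns into the requested 0
df["time_spent_in_sec"] = df.groupby("Id")["time_spent_in_sec"].shift(-1).fillna(0)
print(df)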
Q: Finding the intersection of a plane and a line without libraries Me and my friend are trying to make a 3D renderer without external libraries. We are trying to find the plane with the direction the User looking at as a normal vector and position of the User translated in the direction of the normal vector. So the plane Will always be in frot of the User with some distance. We planned to define a line from the vertecies of the shape to the User and find the intersection with the plane and draw the shape. And we are avoiding to use any libraries becouse it is something of a challenge to try to make it ourselfes. But We were not able to define the plane or line since it would be something like; 3x+4z=7y And We dont know how can We communicate that to Python. A: It is not clear how your plane is defined. If you have normal vector n and some point p in plane, you can get free coefficient of generap plane equation (d) substituting p coordinates into this: d = nx * px + nx * py + nz * pz Now plane equation is nx * x + nx * y + nz * z - d = 0 Define line using parametric equation x = x0 + dx * t y = y0 + dy * t z = z0 + dz * t where (x0,y0,z0) is some point at the line and (dx,dy,dz) is direction vector (if you have another point at the line, than dx=x1-x0 and so on) Now substitute these expressions into plane equation and solve equation for unknown t, then get x,y,z - intersection point. Nnote that sometimes equation has no solution - when line is parallel to the plane, or infinite number of solutions - when line lies in the plane.
Finding the intersection of a plane and a line without libraries
My friend and I are trying to make a 3D renderer without external libraries. We are trying to find the plane whose normal vector is the direction the user is looking in, positioned at the user's location translated along that normal vector, so the plane will always be in front of the user at some distance. We planned to define a line from each vertex of the shape to the user, find its intersection with the plane, and draw the shape from that. We are avoiding libraries because it is something of a challenge to try to make it ourselves. But we were not able to define the plane or the line, since it would be something like 3x+4z=7y, and we don't know how we can communicate that to Python.
[ "It is not clear how your plane is defined.\nIf you have normal vector n and some point p in plane, you can get free coefficient of generap plane equation (d) substituting p coordinates into this:\nd = nx * px + nx * py + nz * pz\n\nNow plane equation is\nnx * x + nx * y + nz * z - d = 0\n\nDefine line using parametric equation\nx = x0 + dx * t\ny = y0 + dy * t\nz = z0 + dz * t\n\nwhere (x0,y0,z0) is some point at the line and (dx,dy,dz) is direction vector (if you have another point at the line, than dx=x1-x0 and so on)\nNow substitute these expressions into plane equation and solve equation for unknown t, then get x,y,z - intersection point.\nNnote that sometimes equation has no solution - when line is parallel to the plane, or infinite number of solutions - when line lies in the plane.\n" ]
[ 0 ]
[]
[]
[ "3d", "geometry", "intersection", "plane", "python" ]
stackoverflow_0074543327_3d_geometry_intersection_plane_python.txt
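Putting the answer's algebra into code: a small dependency-free sketch that takes a plane as (normal n, point p) and a line as (point p0, direction d), returning the intersection point, or None when the line is parallel to the plane. The function name and the tolerance eps are illustrative assumptions, not from the question:

def plane_line_intersection(n, p, p0, d, eps=1e-9):
    # free coefficient of the plane equation: plane_d = n . p
    plane_d = n[0] * p[0] + n[1] * p[1] + n[2] * p[2]
    # substitute the parametric line x = p0 + d*t into the plane equation
    denom = n[0] * d[0] + n[1] * d[1] + n[2] * d[2]
    if abs(denom) < eps:
        return None  # line parallel to the plane (or lying inside it)
    t = (plane_d - (n[0] * p0[0] + n[1] * p0[1] + n[2] * p0[2])) / denom
    return (p0[0] + d[0] * t, p0[1] + d[1] * t, p0[2] + d[2] * t)

# example: plane z = 1, line through the origin pointing along +z
print(plane_line_intersection((0, 0, 1), (0, 0, 1), (0, 0, 0), (0, 0, 1)))  # (0.0, 0.0, 1.0)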
Q: How to sort Pivot Table with values? I am trying to transfer Google Search Console data into a Pivot Table in Pandas and sort it. I use the searchconsole module in Python to request this data from the API. Code report = webproperty.query.range(DATA).get().to_dataframe() #Name columns report.columns=['zoekwoord','pagina','klikken','vertoningen','ctr','positie'] #Make Pivot pivot = report.pivot_table(index=['pagina','zoekwoord'], values=['klikken','vertoningen','ctr','positie']) #Define output writer = pd.ExcelWriter(r'~/Downloads/gsc_output.xlsx', engine='xlsxwriter') # Write each dataframe to a different worksheet. report.to_excel(writer, sheet_name='Data', index=False) pivot.to_excel(writer, sheet_name='Draaitabel') # Close the Pandas Excel writer and output the Excel file. writer.save() Example Data Page Query Clicks /page-a query 1 20 /page-b query 2 40 /page-a query 3 40 I want to see the queries per page and sort the amount of clicks, like: Page Query Clicks /page-a query 3 40 query 1 20 /page-b query 2 40 If I use .sort_values I don't get the data I want: Page Query Clicks /page-a query 3 40 /page-b query 2 40 /page-a query 1 20 How to do this? :) A: You can use pandas.DataFrame.sort_values. Try this : ( df.sort_values(by=["Page", "Query", "Clicks"], ascending=[True, False, False], inplace=True, ignore_index=True) ) df.loc[df["Page"].duplicated(), "Page"]= "" # Output : print(df) Page Query Clicks 0 /page-a query 3 40 1 query 1 20 2 /page-b query 2 40 # Input used: df= pd.DataFrame({'Page': ['/page-a', '/page-b', '/page-a'], 'Query': ['query 1', 'query 2', 'query 3'], 'Clicks': [20, 40, 40]})
How to sort Pivot Table with values?
I am trying to transfer Google Search Console data into a Pivot Table in Pandas and sort it. I use the searchconsole module in Python to request this data from the API. Code report = webproperty.query.range(DATA).get().to_dataframe() #Name columns report.columns=['zoekwoord','pagina','klikken','vertoningen','ctr','positie'] #Make Pivot pivot = report.pivot_table(index=['pagina','zoekwoord'], values=['klikken','vertoningen','ctr','positie']) #Define output writer = pd.ExcelWriter(r'~/Downloads/gsc_output.xlsx', engine='xlsxwriter') # Write each dataframe to a different worksheet. report.to_excel(writer, sheet_name='Data', index=False) pivot.to_excel(writer, sheet_name='Draaitabel') # Close the Pandas Excel writer and output the Excel file. writer.save() Example Data Page Query Clicks /page-a query 1 20 /page-b query 2 40 /page-a query 3 40 I want to see the queries per page and sort the amount of clicks, like: Page Query Clicks /page-a query 3 40 query 1 20 /page-b query 2 40 If I use .sort_values I don't get the data I want: Page Query Clicks /page-a query 3 40 /page-b query 2 40 /page-a query 1 20 How to do this? :)
[ "You can use pandas.DataFrame.sort_values.\nTry this :\n(\n df.sort_values(by=[\"Page\", \"Query\", \"Clicks\"],\n ascending=[True, False, False],\n inplace=True,\n ignore_index=True)\n)\n\ndf.loc[df[\"Page\"].duplicated(), \"Page\"]= \"\"\n\n# Output :\nprint(df)\n\n Page Query Clicks\n0 /page-a query 3 40\n1 query 1 20\n2 /page-b query 2 40\n\n# Input used:\ndf= pd.DataFrame({'Page': ['/page-a', '/page-b', '/page-a'],\n 'Query': ['query 1', 'query 2', 'query 3'],\n 'Clicks': [20, 40, 40]})\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074543490_pandas_python.txt
Q: How can I use .format() with list in python? import excel2img import os filelist = [ '1R 1R 1R', '24 54 994', '9d 13 885', ] file_name = "C:/Users/3315/Desktop/fdifndfd.xlsx" img_name ='C:/Users/3315/Desktop/Weekly/company/{}.png' .format(filelist[1]) excel2img.export_img(file_name, img_name, "", "'{}'!A1:HO29") .format(filelist[1]) i want to use list element in format function to use this code as repeatable with for i in filelist:. but it keeps fail to use list data although print(filelist[1]) is possible. I look through many papers but it doesn't deal with list in format function.Can anyone give me the solution? here's error message Exception: Failed locating range '{}'!A1:HO29 A: You're formatting the output of your export _img function, not the string you put into it. That can't work. It's also fundamentally different than what you do in the lines above.
How can I use .format() with list in python?
import excel2img import os filelist = [ '1R 1R 1R', '24 54 994', '9d 13 885', ] file_name = "C:/Users/3315/Desktop/fdifndfd.xlsx" img_name ='C:/Users/3315/Desktop/Weekly/company/{}.png' .format(filelist[1]) excel2img.export_img(file_name, img_name, "", "'{}'!A1:HO29") .format(filelist[1]) I want to use a list element in the format function so that I can run this code repeatedly with for i in filelist:. But it keeps failing to use the list data, although print(filelist[1]) works. I have looked through many articles, but they don't deal with lists in the format function. Can anyone give me the solution? Here's the error message: Exception: Failed locating range '{}'!A1:HO29
[ "You're formatting the output of your export _img function, not the string you put into it.\nThat can't work. It's also fundamentally different than what you do in the lines above.\n" ]
[ 1 ]
[]
[]
[ "format", "formatting", "grammar", "python", "python_3.x" ]
stackoverflow_0074543594_format_formatting_grammar_python_python_3.x.txt
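To make the answer concrete: build the formatted strings first, then pass them to the call. A sketch, assuming export_img keeps the call shape used in the question and that the workbook and sheet names actually exist:

import excel2img

filelist = ['1R 1R 1R', '24 54 994', '9d 13 885']
file_name = "C:/Users/3315/Desktop/fdifndfd.xlsx"

for sheet in filelist:
    # format the strings before the call, not after it
    img_name = 'C:/Users/3315/Desktop/Weekly/company/{}.png'.format(sheet)
    cell_range = "'{}'!A1:HO29".format(sheet)
    excel2img.export_img(file_name, img_name, "", cell_range)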
Q: How to get a file from an S3 bucket then send it over email using Python? I've been working on a service where I must grab a csv file from an S3 bucket, then send it all using python. I've tried various different methods such as MIMEapplication, however all have encountered problems : ( I think the biggest issue is that most examples define a path to a local directory, opposed to accessing the file through s3, so any help would be appreciated !!! A: You would first need to download the file from the Amazon S3 bucket to the local disk. For example: import boto3 s3 = boto3.client('s3') s3.download_file('my-bucket', 'object-name', '/tmp/filename') Alternatively, you might be able to use smart-open · PyPI, which gives the ability to open() a file in Amazon S3 as if it were on a local disk.
How to get a file from an S3 bucket then send it over email using Python?
I've been working on a service where I must grab a CSV file from an S3 bucket, then send it as an email attachment, all using Python. I've tried various methods such as MIMEApplication; however, all have encountered problems :( I think the biggest issue is that most examples define a path to a local directory, as opposed to accessing the file through S3, so any help would be appreciated!
[ "You would first need to download the file from the Amazon S3 bucket to the local disk.\nFor example:\nimport boto3\n\ns3 = boto3.client('s3')\ns3.download_file('my-bucket', 'object-name', '/tmp/filename')\n\nAlternatively, you might be able to use smart-open · PyPI, which gives the ability to open() a file in Amazon S3 as if it were on a local disk.\n" ]
[ 2 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "boto3", "mime", "python" ]
stackoverflow_0074543448_amazon_s3_amazon_web_services_boto3_mime_python.txt
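For the emailing half of this question, one possible sketch with the standard library: download the object to a temporary path, attach it, and send via SMTP. The bucket, addresses, host, and credentials below are placeholders to fill in:

import smtplib
from email.message import EmailMessage

import boto3

s3 = boto3.client('s3')
s3.download_file('my-bucket', 'report.csv', '/tmp/report.csv')

msg = EmailMessage()
msg['Subject'] = 'Daily report'
msg['From'] = 'sender@example.com'
msg['To'] = 'recipient@example.com'
msg.set_content('Report attached.')

with open('/tmp/report.csv', 'rb') as f:
    # attach the raw bytes; octet-stream keeps the content untouched
    msg.add_attachment(f.read(), maintype='application',
                       subtype='octet-stream', filename='report.csv')

with smtplib.SMTP('smtp.example.com', 587) as server:
    server.starttls()
    server.login('user', 'password')
    server.send_message(msg)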
Q: How to display and edit all Jupyter shortcuts in vscode (similar to typical `jupyter-notebook`)? Within Visual Studio Code, I would like to be view and customize my Jupyter-notebook shortcuts e.g. ctrl-shift-c to clear cell content, etc. similar to what is available using the typical browser-based interface. However, I did not manage to find a way to do so. https://code.visualstudio.com/docs/python/jupyter-support-py A: To extend on Oliver.R's answer. Go to the keyboard shortcuts (with Ctrl+K, Ctrl+S or Ctrl+Shift+P and search for Open Keyboard Shortcuts and then search for notebook which should list all the available bindings for Jupyter. Alternatively, you could search for the shortcuts themselves (using the record key feature) or just typing ctrl+shift+c which shows you all bindings for that particular keys. Once you have found the keybinding you want to change. You can click on it and push Enter (or use the Change Keybinding button to the left of the column) to change it. However, I am not sure, if all the listed shortcuts work / are implemented. For me, only a fraction of the listed keybindings actually work. A: In VSCode, the keyboard shortcuts (accessed by ctrl+K, then ctrl+S) in the namespace python.datascience (search it in the shortcuts window) relate to notebook management such as running cells. I believe these are the only available built-in options however, and your example of "clear cell content" does not appear to be available. A: The support for jupyter shortcuts improved greatly. It is hard to search for available shortcuts in vscode as quite a few are reusing generic actions and names. For example, selecting a cell is called "list.expandSelectionDown", and moving coursor down "list.down", but moving cell down is "notebook.cell.moveDown". The best I've come up with is to search for 'notebookEditorFocused' it is almost always used in 'when' section. I've also looked up through github issues and code for some shortcuts. Here is how I've found a way to navigate headers with cmd|ctrl + shift + . + up/down. And an alternative to ctrl + m that merges cells. I've composed a list of Jupyter notebook shortcuts, that currently work in VSCode, looking from the angle of using nbdev with vscode.
How to display and edit all Jupyter shortcuts in vscode (similar to typical `jupyter-notebook`)?
Within Visual Studio Code, I would like to view and customize my Jupyter notebook shortcuts, e.g. ctrl-shift-c to clear cell content, etc., similar to what is available using the typical browser-based interface. However, I did not manage to find a way to do so. https://code.visualstudio.com/docs/python/jupyter-support-py
[ "To extend on Oliver.R's answer.\nGo to the keyboard shortcuts (with Ctrl+K, Ctrl+S or Ctrl+Shift+P and search for Open Keyboard Shortcuts and then search for notebook which should list all the available bindings for Jupyter.\nAlternatively, you could search for the shortcuts themselves (using the record key feature) or just typing ctrl+shift+c which shows you all bindings for that particular keys.\nOnce you have found the keybinding you want to change. You can click on it and push Enter (or use the Change Keybinding button to the left of the column) to change it.\nHowever, I am not sure, if all the listed shortcuts work / are implemented. For me, only a fraction of the listed keybindings actually work.\n", "In VSCode, the keyboard shortcuts (accessed by ctrl+K, then ctrl+S) in the namespace python.datascience (search it in the shortcuts window) relate to notebook management such as running cells. I believe these are the only available built-in options however, and your example of \"clear cell content\" does not appear to be available.\n", "The support for jupyter shortcuts improved greatly.\nIt is hard to search for available shortcuts in vscode as quite a few are reusing generic actions and names. For example, selecting a cell is called \"list.expandSelectionDown\", and moving coursor down \"list.down\", but moving cell down is \"notebook.cell.moveDown\".\nThe best I've come up with is to search for 'notebookEditorFocused' it is almost always used in 'when' section.\n\nI've also looked up through github issues and code for some shortcuts. Here is how I've found a way to navigate headers with cmd|ctrl + shift + . + up/down. And an alternative to ctrl + m that merges cells.\nI've composed a list of Jupyter notebook shortcuts, that currently work in VSCode, looking from the angle of using nbdev with vscode.\n" ]
[ 4, 3, 0 ]
[]
[]
[ "jupyter_notebook", "python", "visual_studio_code", "vscode_settings" ]
stackoverflow_0059743718_jupyter_notebook_python_visual_studio_code_vscode_settings.txt
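If you prefer editing the JSON directly, the command palette entry "Preferences: Open Keyboard Shortcuts (JSON)" opens keybindings.json, where an entry looks roughly like the following. The command id here is an assumption; confirm the exact id in the shortcuts UI of your VS Code build before relying on it:

// keybindings.json (hypothetical example)
[
    {
        "key": "ctrl+shift+c",
        "command": "notebook.cell.clearOutputs",
        "when": "notebookEditorFocused"
    }
]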
Q: How to check the status of docker-compose up -d command When we run docker-compose up-d command to run dockers using docker-compose.yml file, it starts building images or pulling images from the registry. We can see each and every step of this command on the terminal. I am trying to run this command from a python script. The command starts successfully but after the command, I do not have any idea of how much the process has been completed. Is there any way I can monitor the status of docker-compose up -d command so that script can let the user (who is using the script) know how much the process has completed or if the docker-compose command has failed due to some reasons.? Thanks CODE: from pexpect import pxssh session = pxssh.pxssh() if not session.login(ip_address,<USERNAME>, <PASSWORD>): print("SSH session failed on login") print(str(session)) else: print("SSH session login successfull") session.sendline("sudo docker-compose up -d") session.prompt() resp = session.before print(resp) A: You can view docker compose logs with following ways Use docker compose up -d to start all services in detached mode (-d) (you won't see any logs in detached mode) Use docker compose logs -f -t to attach yourself to the logs of all running services, whereas -f means you follow the log output and the -t option gives you nice timestamps (Docs) credit EDIT: Docker Compose is now available as part of the core Docker CLI. docker-compose is still supported for now but most documentation I have seen now refers to docker compose as standard. See https://docs.docker.com/compose/#compose-v2-and-the-new-docker-compose-command for more. A: What I do to debug small issues is to run: docker-compose up {service_name} This way I get to see the output for an individual service. If the service has a dependency you can always start multiple services like so: docker-compose up {service_name1} {service_name2} Additionally I use: docker-compose logs -f -t {service_name1} To see the logs of an already running service or alternatively: docker logs -t -f {container_name} Notice that the command above needs the container name and not the service name This way you can make sure service by service that everything works as expected and then you can launch them all in detached mode as suggested in the other answers A: I think should use the command docker-compose top, could check the result, It shoul not be empty when the container is running. If the containers is stop or exit or Create, it should return empty A: If you need a programmatic way: sleep 2 seconds check the container was up several seconds ago => Mean you've just successfully deployed it docker ps will look like: a6f088b1567e lc_fe_isr-app "docker-entrypoint.s…" 2 seconds ago Up 2 seconds 0.0.0.0:10001->3000/tcp lc_fe_isr-app-1 # # Check if the a single container was started successfully # CONTAINER_NAME="lc_fe_isr-app-1" sleep 2 docker ps | grep $CONTAINER_NAME UP_SECONDS_AGO=`docker ps | grep $CONTAINER_NAME | grep ' seconds'` echo $UP_SECONDS_AGO if [ -n "$UP_SECONDS_AGO" ] then echo "Deploy successfully" else echo "Deploy FAILED" exit 1 fi
How to check the status of docker-compose up -d command
When we run the docker-compose up -d command to run containers using a docker-compose.yml file, it starts building images or pulling images from the registry. We can see each and every step of this command on the terminal. I am trying to run this command from a Python script. The command starts successfully, but after the command I have no idea how much of the process has been completed. Is there any way I can monitor the status of the docker-compose up -d command, so that the script can let the user (who is using the script) know how much of the process has completed, or whether the docker-compose command has failed for some reason? Thanks CODE: from pexpect import pxssh session = pxssh.pxssh() if not session.login(ip_address,<USERNAME>, <PASSWORD>): print("SSH session failed on login") print(str(session)) else: print("SSH session login successful") session.sendline("sudo docker-compose up -d") session.prompt() resp = session.before print(resp)
[ "You can view docker compose logs with following ways\n\nUse docker compose up -d to start all services in detached mode (-d)\n(you won't see any logs in detached mode)\nUse docker compose logs -f -t to attach yourself to the logs of all\nrunning services, whereas -f means you follow the log output and the\n-t option gives you nice timestamps (Docs)\n\ncredit\nEDIT: Docker Compose is now available as part of the core Docker CLI. docker-compose is still supported for now but most documentation I have seen now refers to docker compose as standard. See https://docs.docker.com/compose/#compose-v2-and-the-new-docker-compose-command for more.\n", "What I do to debug small issues is to run:\ndocker-compose up {service_name} \n\nThis way I get to see the output for an individual service. If the service has a dependency you can always start multiple services like so: \ndocker-compose up {service_name1} {service_name2}\n\nAdditionally I use:\ndocker-compose logs -f -t {service_name1}\n\nTo see the logs of an already running service or alternatively: \ndocker logs -t -f {container_name}\n\nNotice that the command above needs the container name and not the service name\nThis way you can make sure service by service that everything works as expected and then you can launch them all in detached mode as suggested in the other answers\n", "I think should use the command docker-compose top, could check the result, It shoul not be empty when the container is running.\nIf the containers is stop or exit or Create, it should return empty\n", "If you need a programmatic way:\n\nsleep 2 seconds\ncheck the container was up several seconds ago => Mean you've just successfully deployed it\n\ndocker ps will look like:\na6f088b1567e lc_fe_isr-app \"docker-entrypoint.s…\" 2 seconds ago Up 2 seconds 0.0.0.0:10001->3000/tcp lc_fe_isr-app-1\n\n#\n# Check if the a single container was started successfully\n#\nCONTAINER_NAME=\"lc_fe_isr-app-1\"\nsleep 2\ndocker ps | grep $CONTAINER_NAME\nUP_SECONDS_AGO=`docker ps | grep $CONTAINER_NAME | grep ' seconds'`\necho $UP_SECONDS_AGO\n\nif [ -n \"$UP_SECONDS_AGO\" ]\nthen\n echo \"Deploy successfully\"\nelse\n echo \"Deploy FAILED\"\n exit 1\nfi\n\n" ]
[ 6, 2, 2, 0 ]
[]
[]
[ "docker", "docker_compose", "pexpect", "python" ]
stackoverflow_0048783546_docker_docker_compose_pexpect_python.txt
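For a machine-readable signal instead of scraping terminal output, a sketch that runs locally with subprocess (rather than pexpect over SSH) can rely on the exit code, since docker-compose returns non-zero on failure:

import subprocess

# run detached; capture output so it can be logged or shown to the user
result = subprocess.run(["docker-compose", "up", "-d"],
                        capture_output=True, text=True)

if result.returncode == 0:
    print("docker-compose up succeeded")
    # 'ps' summarizes the state of each service's container
    status = subprocess.run(["docker-compose", "ps"],
                            capture_output=True, text=True)
    print(status.stdout)
else:
    print("docker-compose up failed:")
    print(result.stderr)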
Q: How can I convert list of string to pandas DataFrame in Python I have .txt file containing data like this. The first element is the column names sepparated by whitespace, and the next element is the data. ['n Au[%] Ag[%] Cu[%] Zn[%] Ni[%] Pd[%] Fe[%] Cd[%] mq[ ]', '1 71.085 4.6578 22.468 1.6971 0.0292 0.0000 0.0627 0.0000 1.1019', '2 71.444 4.0611 22.946 1.4333 0.0400 0.0000 0.0763 0.0000 1.1298', '3 71.845 4.2909 22.308 1.4234 0.0293 0.0000 0.1031 0.0000 1.0750', '4 71.842 4.2794 22.290 1.4686 0.0339 0.0000 0.0856 0.0000 1.1334'] How can i convert this list of text into Pandas DataFrame? A: Use pandas.read_csv() with the delim_whitespace option :-) Input file data.txt n Au[%] Ag[%] Cu[%] Zn[%] Ni[%] Pd[%] Fe[%] Cd[%] mq[ ] 1 71.085 4.6578 22.468 1.6971 0.0292 0.0000 0.0627 0.0000 1.1019 2 71.444 4.0611 22.946 1.4333 0.0400 0.0000 0.0763 0.0000 1.1298 3 71.845 4.2909 22.308 1.4234 0.0293 0.0000 0.1031 0.0000 1.0750 4 71.842 4.2794 22.290 1.4686 0.0339 0.0000 0.0856 0.0000 1.1334 Processing import pandas as pd file = "/path/to/file" df = pd.read_csv(file, delim_whitespace=True) Output n Au[%] Ag[%] Cu[%] Zn[%] Ni[%] Pd[%] Fe[%] Cd[%] mq[ ] 0 1 71.085 4.6578 22.468 1.6971 0.0292 0.0 0.0627 0.0 1.1019 NaN 1 2 71.444 4.0611 22.946 1.4333 0.0400 0.0 0.0763 0.0 1.1298 NaN 2 3 71.845 4.2909 22.308 1.4234 0.0293 0.0 0.1031 0.0 1.0750 NaN 3 4 71.842 4.2794 22.290 1.4686 0.0339 0.0 0.0856 0.0 1.1334 NaN A: Given the information you have provided, I have written a few lines of basic Python code. # Import needed dependencies import pandas as pd Below is your data as shown above. I kept it in its original format, but added '%' in the last column value for consistency sake. mylist = [ 'n Au[%] Ag[%] Cu[%] Zn[%] Ni[%] Pd[%] Fe[%] Cd[%] mq[%]', '1 71.085 4.6578 22.468 1.6971 0.0292 0.0000 0.0627 0.0000 1.1019', '2 71.444 4.0611 22.946 1.4333 0.0400 0.0000 0.0763 0.0000 1.1298', '3 71.845 4.2909 22.308 1.4234 0.0293 0.0000 0.1031 0.0000 1.0750', '4 71.842 4.2794 22.290 1.4686 0.0339 0.0000 0.0856 0.0000 1.1334' ] Extract the first list element as it contains the values that will be the column values. # Extract the column values from the first row col_values = mylist[0] col_values = col_values.split() del col_values[0] Take each list element and brake it into it string components as well as delete the first element. # Loop through each row of the file. a_list = [] for row in mylist[1:]: row_values = row row_values = row_values.split() del row_values[0] a_list.append(row_values) Collect all column values into a primary list called main_list. # Count variable count = 0 main_list = [] for col in col_values: temp_list = [] for _list in a_list: temp_list.append(_list[count]) main_list.append(temp_list) count += 1 Now let's create a dictionary and use it to make a dataframe. my_dct = {} # Create custom dictionary based on dim's of main_list for iteration in range(len(main_list)): my_dct.update({col_values[iteration]:main_list[iteration]}) my_df = pd.DataFrame(dct) A quick screen capture of the above code run within a Kaggle notebook Hopefully, you find this useful.
How can I convert list of string to pandas DataFrame in Python
I have a .txt file containing data like this. The first element is the column names separated by whitespace, and the following elements are the data rows. ['n Au[%] Ag[%] Cu[%] Zn[%] Ni[%] Pd[%] Fe[%] Cd[%] mq[ ]', '1 71.085 4.6578 22.468 1.6971 0.0292 0.0000 0.0627 0.0000 1.1019', '2 71.444 4.0611 22.946 1.4333 0.0400 0.0000 0.0763 0.0000 1.1298', '3 71.845 4.2909 22.308 1.4234 0.0293 0.0000 0.1031 0.0000 1.0750', '4 71.842 4.2794 22.290 1.4686 0.0339 0.0000 0.0856 0.0000 1.1334'] How can I convert this list of text into a pandas DataFrame?
[ "Use pandas.read_csv() with the delim_whitespace option :-)\nInput file data.txt\n n Au[%] Ag[%] Cu[%] Zn[%] Ni[%] Pd[%] Fe[%] Cd[%] mq[ ]\n 1 71.085 4.6578 22.468 1.6971 0.0292 0.0000 0.0627 0.0000 1.1019 \n 2 71.444 4.0611 22.946 1.4333 0.0400 0.0000 0.0763 0.0000 1.1298 \n 3 71.845 4.2909 22.308 1.4234 0.0293 0.0000 0.1031 0.0000 1.0750 \n 4 71.842 4.2794 22.290 1.4686 0.0339 0.0000 0.0856 0.0000 1.1334 \n\nProcessing\nimport pandas as pd\n\nfile = \"/path/to/file\"\n\ndf = pd.read_csv(file, delim_whitespace=True)\n\nOutput\n n Au[%] Ag[%] Cu[%] Zn[%] Ni[%] Pd[%] Fe[%] Cd[%] mq[ ]\n0 1 71.085 4.6578 22.468 1.6971 0.0292 0.0 0.0627 0.0 1.1019 NaN\n1 2 71.444 4.0611 22.946 1.4333 0.0400 0.0 0.0763 0.0 1.1298 NaN\n2 3 71.845 4.2909 22.308 1.4234 0.0293 0.0 0.1031 0.0 1.0750 NaN\n3 4 71.842 4.2794 22.290 1.4686 0.0339 0.0 0.0856 0.0 1.1334 NaN\n\n", "Given the information you have provided, I have written a few lines of basic Python code.\n# Import needed dependencies\nimport pandas as pd\n\nBelow is your data as shown above. I kept it in its original format, but added '%' in the last column value for consistency sake.\nmylist = [\n'n Au[%] Ag[%] Cu[%] Zn[%] Ni[%] Pd[%] Fe[%] Cd[%] mq[%]', \n'1 71.085 4.6578 22.468 1.6971 0.0292 0.0000 0.0627 0.0000 1.1019', \n'2 71.444 4.0611 22.946 1.4333 0.0400 0.0000 0.0763 0.0000 1.1298', \n'3 71.845 4.2909 22.308 1.4234 0.0293 0.0000 0.1031 0.0000 1.0750', \n'4 71.842 4.2794 22.290 1.4686 0.0339 0.0000 0.0856 0.0000 1.1334'\n]\n\nExtract the first list element as it contains the values that will be the column values.\n# Extract the column values from the first row\ncol_values = mylist[0]\ncol_values = col_values.split()\ndel col_values[0]\n\nTake each list element and brake it into it string components as well as delete the first element.\n# Loop through each row of the file.\n\na_list = []\n\nfor row in mylist[1:]:\n \n row_values = row\n row_values = row_values.split()\n \n del row_values[0]\n a_list.append(row_values)\n\nCollect all column values into a primary list called main_list.\n# Count variable\ncount = 0\nmain_list = []\n\nfor col in col_values:\n\n temp_list = []\n for _list in a_list:\n temp_list.append(_list[count])\n \n main_list.append(temp_list)\n\n count += 1\n\nNow let's create a dictionary and use it to make a dataframe.\nmy_dct = {}\n\n# Create custom dictionary based on dim's of main_list\n\nfor iteration in range(len(main_list)):\n my_dct.update({col_values[iteration]:main_list[iteration]})\n\nmy_df = pd.DataFrame(dct)\n\n\nA quick screen capture of the above code run within a Kaggle notebook\nHopefully, you find this useful.\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074541705_dataframe_pandas_python.txt
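If the data is already in memory as the list shown (rather than sitting in a file), one sketch is to join it back into a text block and let read_csv parse it through an in-memory buffer:

import io
import pandas as pd

mylist = [
    'n Au[%] Ag[%] Cu[%] Zn[%] Ni[%] Pd[%] Fe[%] Cd[%] mq[ ]',
    '1 71.085 4.6578 22.468 1.6971 0.0292 0.0000 0.0627 0.0000 1.1019',
    '2 71.444 4.0611 22.946 1.4333 0.0400 0.0000 0.0763 0.0000 1.1298',
]

# join the rows into one block and split on whitespace; note that the stray
# space in 'mq[ ]' makes pandas see an extra header token, which is why the
# first answer's output shows a trailing NaN column
df = pd.read_csv(io.StringIO("\n".join(mylist)), delim_whitespace=True)
print(df)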
Q: Adding Foreign Key to model - Django class Plans(models.Model): id = models.IntegerField(primary_key=True) name = models.CharField(max_length=255) plan_type = models.CharField(max_length=255) class Order(models.Model): id = models.IntegerField(primary_key=True) selected_plan_id = models.IntegerField(primary_key=True) Order's selected_plan_id is Plans's id. Which model should I add a foreign key to? How? A: First of all there are some bad ways to pointout: two fields cannot be primary keys in a table also django as default includes primary key id in every table, so no need to add id field. You should be doing this way: class Order(models.Model): selected_plan_id = models.ForeignKey(Plans, on_delete=models.CASCADE) A: The solution that you are looking for class Order(models.Model): id = models.IntegerField(primary_key=True) selected_plan_id = models.ForeignKey(Plans, on_delete=models.CASCADE) The purpose of using models.CASCADE is that when the referenced object is deleted, also delete the objects that have references to it. Also i dont suggest to you add 'id' keyword to your property, django makes automatically it. If you add the 'id' keyword to end of the your property like this case, you gonna see the column called 'selected_plan_id_id' in your table. A: class Order(models.Model): id = models.IntegerField(primary_key=True) selected_plan_id = models.IntegerField(primary_key=True) Plain= models.ForeignKey(Plain) Check the dependence of the table and after getting that made one key as foreign like in this one plain is not depend on the order. But the order depends on the plan.
Adding Foreign Key to model - Django
class Plans(models.Model): id = models.IntegerField(primary_key=True) name = models.CharField(max_length=255) plan_type = models.CharField(max_length=255) class Order(models.Model): id = models.IntegerField(primary_key=True) selected_plan_id = models.IntegerField(primary_key=True) Order's selected_plan_id is Plans's id. Which model should I add a foreign key to? How?
[ "First of all there are some bad ways to pointout:\n\ntwo fields cannot be primary keys in a table\nalso django as default includes primary key id in every table, so no need to add id field.\n\nYou should be doing this way:\nclass Order(models.Model):\n selected_plan_id = models.ForeignKey(Plans, on_delete=models.CASCADE)\n\n", "The solution that you are looking for\nclass Order(models.Model):\n id = models.IntegerField(primary_key=True)\n selected_plan_id = models.ForeignKey(Plans, on_delete=models.CASCADE)\n\nThe purpose of using models.CASCADE is that when the referenced object is deleted, also delete the objects that have references to it.\nAlso i dont suggest to you add 'id' keyword to your property, django makes automatically it. If you add the 'id' keyword to end of the your property like this case, you gonna see the column called 'selected_plan_id_id' in your table.\n", "class Order(models.Model):\n id = models.IntegerField(primary_key=True)\n selected_plan_id = models.IntegerField(primary_key=True)\nPlain= models.ForeignKey(Plain)\n\nCheck the dependence of the table and after getting that made one key as foreign like in this one plain is not depend on the order. But the order depends on the plan.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "database", "django", "foreign_keys", "model", "python" ]
stackoverflow_0074542097_database_django_foreign_keys_model_python.txt
Q: How to add "orderd data" wity apply method in pandas (not use for-loop) ID A B C D Orderd No1 8 9 5 2 D:2 C:5 A:8 B:9 No2 3 1 7 9 B:1 A:3 C:7 D:9 No3 29 34 5 294 C:5 A:29 B:34 D:294 I would like to add "Orderd" column with column of A, B, C and D. If I use for loop, I can do it as like for n in range(len(df)): df['Orderd'][n] = df.T.sort_values(by=n,ascending=True)[n].to_string() However, this method is too slow. I would like to do like this with "df.apply" method for doing speedy. A: you can use apply directly on your dataframe, indicating the axis = 1 import pandas as pd columns = ["ID","A","B","C","D"] data = [["No1",8,9,5,2], ["No2",3,1,7,9], ["No3",29,34,5,294]] df = pd.DataFrame(data=data, columns=columns) df = df.set_index("ID") # important to avoid having an error df["Orderd"] = df.apply(lambda x: x.sort_values().to_dict(), axis=1) outputs: A B C D Orderd ID No1 8 9 5 2 {'D': 2, 'C': 5, 'A': 8, 'B': 9} No2 3 1 7 9 {'B': 1, 'A': 3, 'C': 7, 'D': 9} No3 29 34 5 294 {'C': 5, 'A': 29, 'B': 34, 'D': 294} A: I managed to do it like this: df['Ordered'] = df.apply(lambda row: ' '.join([':'.join(s) for s in dict(row[1:].sort_values().astype('str')).items()]), axis=1) Basically, I take all values in the row excluding the first one, which gives you a series. I sort it and convert to string.Then I convert the series to an dict and retrieve the items. I then use two list comprehensions to first join the Letter-Value pairs with a colon and then join the pair strings with a space.
How to add "orderd data" wity apply method in pandas (not use for-loop)
ID A B C D Orderd No1 8 9 5 2 D:2 C:5 A:8 B:9 No2 3 1 7 9 B:1 A:3 C:7 D:9 No3 29 34 5 294 C:5 A:29 B:34 D:294 I would like to add an "Orderd" column built from columns A, B, C and D. If I use a for loop, I can do it like this: for n in range(len(df)): df['Orderd'][n] = df.T.sort_values(by=n,ascending=True)[n].to_string() However, this method is too slow. I would like to do this with the df.apply method to speed it up.
[ "you can use apply directly on your dataframe, indicating the axis = 1\nimport pandas as pd\n\ncolumns = [\"ID\",\"A\",\"B\",\"C\",\"D\"]\ndata = [[\"No1\",8,9,5,2],\n [\"No2\",3,1,7,9],\n [\"No3\",29,34,5,294]]\n\ndf = pd.DataFrame(data=data, columns=columns)\ndf = df.set_index(\"ID\") # important to avoid having an error\n\ndf[\"Orderd\"] = df.apply(lambda x: x.sort_values().to_dict(), axis=1)\n\noutputs:\n A B C D Orderd\nID \nNo1 8 9 5 2 {'D': 2, 'C': 5, 'A': 8, 'B': 9}\nNo2 3 1 7 9 {'B': 1, 'A': 3, 'C': 7, 'D': 9}\nNo3 29 34 5 294 {'C': 5, 'A': 29, 'B': 34, 'D': 294}\n\n", "I managed to do it like this:\ndf['Ordered'] = df.apply(lambda row: ' '.join([':'.join(s) for s in dict(row[1:].sort_values().astype('str')).items()]), axis=1)\n\nBasically, I take all values in the row excluding the first one, which gives you a series. I sort it and convert to string.Then I convert the series to an dict and retrieve the items. I then use two list comprehensions to first join the Letter-Value pairs with a colon and then join the pair strings with a space.\n" ]
[ 0, 0 ]
[]
[]
[ "apply", "pandas", "python" ]
stackoverflow_0074543468_apply_pandas_python.txt
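If the goal is the exact 'D:2 C:5 A:8 B:9' strings from the question rather than dicts, a variant of the first answer's apply can join the sorted items. A sketch, assuming ID is set as the index as in that answer:

import pandas as pd

df = pd.DataFrame(
    {"A": [8, 3, 29], "B": [9, 1, 34], "C": [5, 7, 5], "D": [2, 9, 294]},
    index=["No1", "No2", "No3"],
)

def row_order(row):
    # sort the row's values ascending, then render "name:value" pairs
    return " ".join(f"{k}:{v}" for k, v in row.sort_values().items())

df["Orderd"] = df.apply(row_order, axis=1)
print(df)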
Q: mysql.connector.errors.NotSupportedError: Authentication plugin 'mysql_native_password' is not supported only with pyinstaller exe I am fighting to find a solution for my problem: When I start my Python application in my IDE, the database connection is working fine. But when I build an exe with pyinstaller with the following command python3 -m PyInstaller .\home.py and start the application and trigger the connection to the db it gives me the following error: Previously I had the same error with "caching_sha2_password" instead of "mysql_native_password", then I changed the db plugin to "mysql_native_password" but it still doesn't work in the exe. My database is running in a Docker Container. The root user, which I use for the connection has also mysql_native_password as the authentication plugin. However, somehow the connection to the db works every time when I start my application from my IDE. This problem only occurs, after I have exported my application into an exe with pyinstaller. The connection to the db looks like this: mysql.connector.connect( host="localhost", user="user", passwd="password", database="db_name" ) And yes, I have already checked, that I only have mysql-connector-python installed. I would be very glad if you could help me out, as this is the final step of my application to be ready for shipment. Thank you in advance! A: After I could not find an answer for my problem, I just switched to Postgres and used the corresponding Python driver. Now it works!
mysql.connector.errors.NotSupportedError: Authentication plugin 'mysql_native_password' is not supported only with pyinstaller exe
I am struggling to find a solution to my problem: When I start my Python application in my IDE, the database connection works fine. But when I build an exe with PyInstaller with the following command python3 -m PyInstaller .\home.py and start the application and trigger the connection to the DB, it gives me the following error: Previously I had the same error with "caching_sha2_password" instead of "mysql_native_password"; then I changed the DB plugin to "mysql_native_password", but it still doesn't work in the exe. My database is running in a Docker container. The root user, which I use for the connection, also has mysql_native_password as its authentication plugin. However, the connection to the DB works every time I start my application from my IDE. This problem only occurs after I have exported my application into an exe with PyInstaller. The connection to the DB looks like this: mysql.connector.connect( host="localhost", user="user", passwd="password", database="db_name" ) And yes, I have already checked that I only have mysql-connector-python installed. I would be very glad if you could help me out, as this is the final step before my application is ready for shipment. Thank you in advance!
[ "After I could not find an answer for my problem, I just switched to Postgres and used the corresponding Python driver. Now it works!\n" ]
[ 0 ]
[]
[]
[ "authentication", "docker", "mysql_connector_python", "pyinstaller", "python" ]
stackoverflow_0074476907_authentication_docker_mysql_connector_python_pyinstaller_python.txt
Q: Delete all columns for which value repents consecutively more than 3 times I have adf that looks like this: date stock1 stock2 stock3 stock4 stock5 stock6 stock7 stock8 stock9 stock10 10/20 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.9 11/20 0.1 0.9 0.3 0.4 0.3 0.5 0.3 0.2 0.4 0.1 12/20 0.1 0.6 0.9 0.5 0.6 0.7 0.8 0.7 0.9 0.1 10/20 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.9 11/20 0.8 0.9 0.3 0.4 0.3 0.5 0.3 0.2 0.9 0.1 12/20 0.3 0.6 0.9 0.5 0.6 0.7 0.8 0.7 0.9 0.1 10/20 0.1 0.2 0.3 0.4 0.5 0.7 0.7 0.8 0.9 0.9 11/20 0.8 0.9 0.3 0.4 0.3 0.7 0.3 0.2 0.4 0.1 12/20 0.3 0.6 0.9 0.5 0.6 0.7 0.8 0.7 0.9 0.1 I want to delete all columns for which the same value repeats, consecutively, more than 3 times. In this example, the columns "stock1", "stock6" and "stock9" should be deleted. In the other columns, we have repeating values more than 3 times, but not one after the other. I think I can adapt the code from that question Removing values that repeat more than 5 times in Pandas DataFrame, but I could not manage to do that yet. A: You can set "date" aside as index, then check if the rows are different from the next one as use it to groupby+cumcount. Then compute the max count per column, if greater than N-1, drop the column: df2 = df.set_index('date') N = 3 df2.loc[:, df2.apply(lambda c: c.groupby(c.ne(c.shift()).cumsum()).cumcount()).max().lt(N-1)] output: stock2 stock3 stock4 stock5 stock7 stock8 stock10 date 10/20 0.2 0.3 0.4 0.5 0.7 0.8 0.9 11/20 0.9 0.3 0.4 0.3 0.3 0.2 0.1 12/20 0.6 0.9 0.5 0.6 0.8 0.7 0.1 10/20 0.2 0.3 0.4 0.5 0.7 0.8 0.9 11/20 0.9 0.3 0.4 0.3 0.3 0.2 0.1 12/20 0.6 0.9 0.5 0.6 0.8 0.7 0.1 10/20 0.2 0.3 0.4 0.5 0.7 0.8 0.9 11/20 0.9 0.3 0.4 0.3 0.3 0.2 0.1 12/20 0.6 0.9 0.5 0.6 0.8 0.7 0.1 intermediate count of successive values: >>> df2.apply(lambda c: c.groupby(c.ne(c.shift()).cumsum()).cumcount()) stock1 stock2 stock3 stock4 stock5 stock6 stock7 stock8 stock9 stock10 date 10/20 0 0 0 0 0 0 0 0 0 0 11/20 1 0 1 1 0 0 0 0 0 0 12/20 2 0 0 0 0 0 0 0 0 1 10/20 3 0 0 0 0 0 0 0 1 0 11/20 0 0 1 1 0 0 0 0 2 0 12/20 0 0 0 0 0 0 0 0 3 1 10/20 0 0 0 0 0 1 0 0 4 0 11/20 0 0 1 1 0 2 0 0 0 0 12/20 0 0 0 0 0 3 0 0 0 1 A: You could want avoid apply here: N = 3 df.loc[:, df.set_index('date') .ne(df.shift()).cumsum() .stack() .groupby(level=1) .value_counts() .max(level=0).le(N)] A: df1[['date']].join( df1.iloc[:,1:].loc[:,df1.iloc[:,1:].apply(lambda ss:(ss.diff()!=0).cumsum().value_counts().max())<3] ) date stock2 stock3 stock4 stock5 stock7 stock8 stock10 0 10/20 0.2 0.3 0.4 0.5 0.7 0.8 0.9 1 11/20 0.9 0.3 0.4 0.3 0.3 0.2 0.1 2 12/20 0.6 0.9 0.5 0.6 0.8 0.7 0.1 3 10/20 0.2 0.3 0.4 0.5 0.7 0.8 0.9 4 11/20 0.9 0.3 0.4 0.3 0.3 0.2 0.1 5 12/20 0.6 0.9 0.5 0.6 0.8 0.7 0.1 6 10/20 0.2 0.3 0.4 0.5 0.7 0.8 0.9 7 11/20 0.9 0.3 0.4 0.3 0.3 0.2 0.1 8 12/20 0.6 0.9 0.5 0.6 0.8 0.7 0.1
Delete all columns for which value repeats consecutively more than 3 times
I have a df that looks like this: date stock1 stock2 stock3 stock4 stock5 stock6 stock7 stock8 stock9 stock10 10/20 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.9 11/20 0.1 0.9 0.3 0.4 0.3 0.5 0.3 0.2 0.4 0.1 12/20 0.1 0.6 0.9 0.5 0.6 0.7 0.8 0.7 0.9 0.1 10/20 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.9 11/20 0.8 0.9 0.3 0.4 0.3 0.5 0.3 0.2 0.9 0.1 12/20 0.3 0.6 0.9 0.5 0.6 0.7 0.8 0.7 0.9 0.1 10/20 0.1 0.2 0.3 0.4 0.5 0.7 0.7 0.8 0.9 0.9 11/20 0.8 0.9 0.3 0.4 0.3 0.7 0.3 0.2 0.4 0.1 12/20 0.3 0.6 0.9 0.5 0.6 0.7 0.8 0.7 0.9 0.1 I want to delete all columns for which the same value repeats, consecutively, more than 3 times. In this example, the columns "stock1", "stock6" and "stock9" should be deleted. In the other columns, we have repeating values more than 3 times, but not one after the other. I think I can adapt the code from that question Removing values that repeat more than 5 times in Pandas DataFrame, but I could not manage to do that yet.
[ "You can set \"date\" aside as index, then check if the rows are different from the next one as use it to groupby+cumcount.\nThen compute the max count per column, if greater than N-1, drop the column:\ndf2 = df.set_index('date')\nN = 3\ndf2.loc[:, df2.apply(lambda c: c.groupby(c.ne(c.shift()).cumsum()).cumcount()).max().lt(N-1)]\n\noutput:\n stock2 stock3 stock4 stock5 stock7 stock8 stock10\ndate \n10/20 0.2 0.3 0.4 0.5 0.7 0.8 0.9\n11/20 0.9 0.3 0.4 0.3 0.3 0.2 0.1\n12/20 0.6 0.9 0.5 0.6 0.8 0.7 0.1\n10/20 0.2 0.3 0.4 0.5 0.7 0.8 0.9\n11/20 0.9 0.3 0.4 0.3 0.3 0.2 0.1\n12/20 0.6 0.9 0.5 0.6 0.8 0.7 0.1\n10/20 0.2 0.3 0.4 0.5 0.7 0.8 0.9\n11/20 0.9 0.3 0.4 0.3 0.3 0.2 0.1\n12/20 0.6 0.9 0.5 0.6 0.8 0.7 0.1\n\nintermediate count of successive values:\n>>> df2.apply(lambda c: c.groupby(c.ne(c.shift()).cumsum()).cumcount())\n\n stock1 stock2 stock3 stock4 stock5 stock6 stock7 stock8 stock9 stock10\ndate \n10/20 0 0 0 0 0 0 0 0 0 0\n11/20 1 0 1 1 0 0 0 0 0 0\n12/20 2 0 0 0 0 0 0 0 0 1\n10/20 3 0 0 0 0 0 0 0 1 0\n11/20 0 0 1 1 0 0 0 0 2 0\n12/20 0 0 0 0 0 0 0 0 3 1\n10/20 0 0 0 0 0 1 0 0 4 0\n11/20 0 0 1 1 0 2 0 0 0 0\n12/20 0 0 0 0 0 3 0 0 0 1\n\n", "You could want avoid apply here:\nN = 3\ndf.loc[:, \n df.set_index('date')\n .ne(df.shift()).cumsum()\n .stack()\n .groupby(level=1)\n .value_counts()\n .max(level=0).le(N)]\n\n", "df1[['date']].join(\n df1.iloc[:,1:].loc[:,df1.iloc[:,1:].apply(lambda ss:(ss.diff()!=0).cumsum().value_counts().max())<3]\n)\n \n \n date stock2 stock3 stock4 stock5 stock7 stock8 stock10\n0 10/20 0.2 0.3 0.4 0.5 0.7 0.8 0.9\n1 11/20 0.9 0.3 0.4 0.3 0.3 0.2 0.1\n2 12/20 0.6 0.9 0.5 0.6 0.8 0.7 0.1\n3 10/20 0.2 0.3 0.4 0.5 0.7 0.8 0.9\n4 11/20 0.9 0.3 0.4 0.3 0.3 0.2 0.1\n5 12/20 0.6 0.9 0.5 0.6 0.8 0.7 0.1\n6 10/20 0.2 0.3 0.4 0.5 0.7 0.8 0.9\n7 11/20 0.9 0.3 0.4 0.3 0.3 0.2 0.1\n8 12/20 0.6 0.9 0.5 0.6 0.8 0.7 0.1\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0071206247_pandas_python.txt
Q: Type hinting for scipy sparse matrices How do you type hint scipy sparse matrices, such as CSR, CSC, LIL etc.? Below is what I have been doing, but it doesn't feel right: def foo(mat: scipy.sparse.csr.csr_matrix): # Do whatever What do we do if our function can accept multiple types of scipy sparse matrices (i.e any of them)? A: All of csr, csc, lil are types of scipy.sparse.base.spmatrix: from scipy import sparse c1 = sparse.lil.lil_matrix c2 = sparse.csr.csr_matrix c3 = sparse.csc.csc_matrix print(c1.__bases__[0]) print(c2.__base__.__base__.__base__) print(c3.__base__.__base__.__base__) Output: <class 'scipy.sparse.base.spmatrix'> <class 'scipy.sparse.base.spmatrix'> <class 'scipy.sparse.base.spmatrix'> So you have an option to: def foo(mat: scipy.sparse.base.spmatrix): # Do whatever A: spicy.sparse.base is now deprecated. You should use spicy.sparse.spmatrix instead, which provides a base class for all sparse matrices. def foo(mat: spicy.sparse.spmatrix): # Whatever you want pass
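For the follow-up about accepting multiple sparse types, one hedged option is a Union of the public classes; the alias name SparseMatrix below is illustrative, not part of scipy.

from typing import Union

from scipy import sparse

SparseMatrix = Union[sparse.csr_matrix, sparse.csc_matrix, sparse.lil_matrix]

def foo(mat: SparseMatrix) -> None:
    # Any of the listed formats is accepted by the type checker
    print(mat.shape, mat.nnz)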
Type hinting for scipy sparse matrices
How do you type hint scipy sparse matrices, such as CSR, CSC, LIL etc.? Below is what I have been doing, but it doesn't feel right: def foo(mat: scipy.sparse.csr.csr_matrix): # Do whatever What do we do if our function can accept multiple types of scipy sparse matrices (i.e. any of them)?
[ "All of csr, csc, lil are types of scipy.sparse.base.spmatrix:\nfrom scipy import sparse\nc1 = sparse.lil.lil_matrix\nc2 = sparse.csr.csr_matrix\nc3 = sparse.csc.csc_matrix\n\nprint(c1.__bases__[0])\nprint(c2.__base__.__base__.__base__)\nprint(c3.__base__.__base__.__base__)\n\nOutput:\n<class 'scipy.sparse.base.spmatrix'>\n<class 'scipy.sparse.base.spmatrix'>\n<class 'scipy.sparse.base.spmatrix'>\n\nSo you have an option to:\ndef foo(mat: scipy.sparse.base.spmatrix):\n # Do whatever\n\n", "spicy.sparse.base is now deprecated. You should use spicy.sparse.spmatrix instead, which provides a base class for all sparse matrices.\ndef foo(mat: spicy.sparse.spmatrix):\n # Whatever you want\n pass\n\n" ]
[ 4, 0 ]
[]
[]
[ "python", "scipy" ]
stackoverflow_0071501140_python_scipy.txt
Q: openpyxl write another new sheet when a sheet reached 1048576 rows wb = openpyxl.Workbook() ws = wb.active ws.title = 'sheet_name_1' sheet_number = 1 for row_count in range(1,5242880): if row_count > 1000000: sheet_number = sheet_number + 1 wb.create_sheet(sheet_number) # maybe add code to switch to new sheet when row is over # 1000000 row_count = row_count - 1000000 else: ws.cell(row= row_count , column=1,value=row_count) wb.save('test.xlsx') Above is the script I have 1048576*5=5242880 rows of data to write in a single .xlsx file. Is there some openpyxl script when using openpyxl to create another new worksheets when a sheet reached 1048576 rows. Therefore,the result is a .xlsx file with at least 6 worksheets to store more than 5242880 rows. Thanks a lot. A: Not tested yet but you can try this : excel_limit = 1048576 with pd.ExcelWriter('Final_ExcelFile.xlsx') as wr: for i in range(0, df.shape[0], excel_limit): df.iloc[i:i+excel_limit, :].to_excel(wr, sheet_name=f'Sheet_Number_{i}', index=False)
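A hedged openpyxl-only sketch of the rollover the question asks for, with no pandas dependency; the MAX_ROWS constant and the sheet names are illustrative.

import openpyxl

MAX_ROWS = 1048576   # Excel's per-sheet row limit
TOTAL_ROWS = 5242880

wb = openpyxl.Workbook()
sheet_number = 1
ws = wb.active
ws.title = f"sheet_name_{sheet_number}"
row_in_sheet = 0

for value in range(1, TOTAL_ROWS + 1):
    if row_in_sheet == MAX_ROWS:  # current sheet is full, switch to a fresh one
        sheet_number += 1
        ws = wb.create_sheet(title=f"sheet_name_{sheet_number}")
        row_in_sheet = 0
    row_in_sheet += 1
    ws.cell(row=row_in_sheet, column=1, value=value)

wb.save("test.xlsx")

For a file this size, openpyxl's write-only mode (openpyxl.Workbook(write_only=True) together with ws.append([...])) should be much lighter on memory, since the default mode keeps every cell object in RAM.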
openpyxl write another new sheet when a sheet reaches 1048576 rows
wb = openpyxl.Workbook() ws = wb.active ws.title = 'sheet_name_1' sheet_number = 1 for row_count in range(1,5242880): if row_count > 1000000: sheet_number = sheet_number + 1 wb.create_sheet(sheet_number) # maybe add code to switch to new sheet when row is over # 1000000 row_count = row_count - 1000000 else: ws.cell(row= row_count , column=1,value=row_count) wb.save('test.xlsx') Above is the script. I have 1048576*5=5242880 rows of data to write in a single .xlsx file. Is there a way in openpyxl to create another new worksheet when a sheet reaches 1048576 rows? The result would then be a .xlsx file with at least 6 worksheets to store more than 5242880 rows. Thanks a lot.
[ "Not tested yet but you can try this :\nexcel_limit = 1048576\n\nwith pd.ExcelWriter('Final_ExcelFile.xlsx') as wr:\n for i in range(0, df.shape[0], excel_limit):\n df.iloc[i:i+excel_limit, :].to_excel(wr, sheet_name=f'Sheet_Number_{i}', index=False)\n\n" ]
[ 0 ]
[]
[]
[ "excel", "limit", "openpyxl", "python", "xlsx" ]
stackoverflow_0074543741_excel_limit_openpyxl_python_xlsx.txt
Q: Copy keyword breaks numpy's copy/view philosophy I have noticed that none of the methods that are used to convert between types of sparse matrices are using copy kwarg, supplied in the the method. Even though, copying in most cases actually happens, the data array (where it is valid) always has a base set, which means that it shows up as a view in the code. However, de facto the copy has been made. Is this an intentional behavior? For instance, here are examples of with csr and csc arrays. As you can see, all of them have bases, no matter what. In [1]: import numpy as np ...: from scipy import sparse ...: ...: a = np.arange(20).reshape(4, 5) ...: csr = sparse.csr_array(a, copy=True) ...: print('csr.data.base', id(csr.data.base) if csr.data.base is not None else None) ...: ...: csr_copy = csr.copy() ...: print('csr_copy.data.base', id(csr_copy.data.base) if csr_copy.data.base is not None else None) ...: ...: csc_copy = csr.tocsc(copy=True) ...: print('csc_copy.data.base', id(csc_copy.data.base) if csc_copy.data.base is not None else None) ...: ...: csc_copy_2 = csr.tocsc() ...: print('csc_copy_2.data.base', id(csc_copy_2.data.base) if csc_copy_2.data.base is not None else None) csr.data.base 4392865488 csr_copy.data.base 4392866448 csc_copy.data.base 4392866640 csc_copy_2.data.base 4392867120 While it makes sense for csr_copy to have the same base as csr.data, I don't understand why any other objects have base attribute set for data to begin with. In particular, this behavior prevents user from direct manipulation with data and indices parameters of the array. For instance, it becomes, impossible extend csr matrix, by adding rows to it using inplace resize method: In [2]: old_nnz = csr.nnz ...: row = [1, 2, 3, 4, 5] # Lets append row of 5 elements to csr ...: ...: csr.resize(5, 5) ...: ...: print(id(csr.data)) ...: print(csr.data) ...: ...: print(id(csr.data.base)) ...: print(csr.data.base) ...: ...: csr.data.resize((old_nnz + len(row),), refcheck=True) 4757413808 [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19] 4757413520 [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19] Traceback (most recent call last): File "/opt/homebrew/Caskroom/miniforge/base/envs/dev/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3433, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-34-c52e3457494e>", line 12, in <module> csr.data.resize((old_nnz + len(row),), refcheck=True) ValueError: cannot resize this array: it does not own its data While using np.resize might work, I am not sure how inplace it is: In [3]: old_nnz = csr.nnz ...: row = [1, 2, 3, 4, 5] # Let's append row of 5 elements to csr ...: ...: csr.resize(5, 5) ...: ...: print('Data') ...: print(id(csr.data)) ...: print(csr.data) ...: ...: print("Data's Base") ...: print(id(csr.data.base)) ...: print(csr.data.base) ...: ...: print('New Data') ...: new_data = np.resize(csr.data, (old_nnz + len(row),)) ...: print(id(new_data)) ...: print(new_data) ...: ...: print("New Data's Base") ...: print(id(new_data.base)) ...: print(new_data.base) ...: ...: new_indices = np.resize(csr.indices, (old_nnz + len(row),)) Data 5256251600 [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19] Data's Base 5256250736 [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19] New Data 5256250928 [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 1 2 3 4 5] New Data's Base 5256253040 [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19] I have been reading the source code for these 
functions and I don't see copy even used in some of those. For instance, in _csr.py: def tocsc(self, copy=False): idx_dtype = get_index_dtype((self.indptr, self.indices), maxval=max(self.nnz, self.shape[0])) indptr = np.empty(self.shape[1] + 1, dtype=idx_dtype) indices = np.empty(self.nnz, dtype=idx_dtype) data = np.empty(self.nnz, dtype=upcast(self.dtype)) csr_tocsc(self.shape[0], self.shape[1], self.indptr.astype(idx_dtype), self.indices.astype(idx_dtype), self.data, indptr, indices, data) A = self._csc_container((data, indices, indptr), shape=self.shape) A.has_sorted_indices = True return A Even though I see that new array (data) is created, somewhere down the line, maybe somewhere between C/Python interface it is put into base. A: I only have scipy v 1.7.3, so don't have access to the major rewrite of the sparse module in 1.8 (e.g. not csr_array or _data.py file). Whether something has a base or not is not a reliable measure of whether a copy was made. Take your first example: In [74]: a = np.arange(20).reshape(4, 5) ...: ...: csr = sparse.csr_matrix(a, copy=True) In [75]: a Out[75]: array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]) In [76]: a.base Out[76]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]) a is a view of the 1d array produced by the arange. That array is not accessible - except as the base. In [77]: csr Out[77]: <4x5 sparse matrix of type '<class 'numpy.intc'>' with 19 stored elements in Compressed Sparse Row format> The data attribute has a base - that looks the same as itself. id is different. But we'd have to study the code to see how data was derived from its base. In [78]: csr.data Out[78]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19], dtype=int32) In [79]: csr.data.base Out[79]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19], dtype=int32) It isn't a view of a or a.base, as we can prove by modifying an element. In [82]: csr.data[0] = 100 In [83]: csr.A Out[83]: array([[ 0, 100, 2, 3, 4], [ 5, 6, 7, 8, 9], [ 10, 11, 12, 13, 14], [ 15, 16, 17, 18, 19]], dtype=int32) In [84]: a Out[84]: array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]) The copy parameter makes most sense when keeping the same format. Changing format can involve reordering the data (csr to csc), or summing duplicates (coo to csr), etc. Let's try making a new csr: In [87]: csr1 = sparse.csr_matrix(csr, copy=False) In [88]: csr2 = sparse.csr_matrix(csr, copy=True) As with csr, both of these have data.base and different ids. But if I modify an element of csr, that change only appears in csr1. csr2 is indeed a copy. In [93]: csr.data[1] = 200 In [97]: csr1.data[1] Out[97]: 200 In [98]: csr2.data[1] Out[98]: 2 resize I haven't used resize before for sparse, and rarely use it for numpy. But playing with the csr, it's evident it's quite a different operation. csr.resize(5,5) appears to just change the indptr (and shape), without change to data or indices. csr.resize(5,6) just seems to change the shape. I don't see a change in main attributes. Neither adds nonzero values, so "padding" with 0s doesn't change much. You don't want to do csr.data.resize(...). Such a change would also require changing indices and indptr (to maintain a consistent csr format). data can have 0s, but it should be cleaned up with a call to eliminate_zeros. ravel The sparse code could be doing something as harmless as ravel. 
In [129]: x = np.array([1,2,3]).ravel() In [130]: x.base Out[130]: array([1, 2, 3]) In [131]: x.resize(4) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Input In [131], in <cell line: 1>() ----> 1 x.resize(4) ValueError: cannot resize this array: it does not own its data Another example where base is not a reliable indicator of copy/view, is when indexing columns (of a 2d array): Make a 2d array, which has its own data: In [144]: arr = np.array([[1,2],[3,4]]) In [145]: arr.base # None A row selection also has a None base: In [146]: arr[[1]].base But a column selection does not - even though it is a copy: In [147]: arr[:,[1]].base Out[147]: array([[2, 4]]) In [148]: arr[:,[1]] Out[148]: array([[2], [4]]) Evidently the indexing operation selects a (1,2) array, which is then reshaped to (2,1). Actually, looking a strides, I think it's doing a transpose. arr[:,[1]].base.T.
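A short addendum to the answer's point that .base is not a reliable copy/view indicator: numpy's np.shares_memory is the direct test. A hedged sketch, based on the behavior reported in the question (the converted data array has a base set even though a fresh buffer was allocated):

import numpy as np
from scipy import sparse

a = np.arange(20).reshape(4, 5)
csr = sparse.csr_matrix(a, copy=True)
csc = csr.tocsc(copy=True)

print(csc.data.base is not None)             # True, per the question...
print(np.shares_memory(csr.data, csc.data))  # ...but False: a genuine copy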
Copy keyword breaks numpy's copy/view philosophy
I have noticed that none of the methods that are used to convert between types of sparse matrices are using copy kwarg, supplied in the the method. Even though, copying in most cases actually happens, the data array (where it is valid) always has a base set, which means that it shows up as a view in the code. However, de facto the copy has been made. Is this an intentional behavior? For instance, here are examples of with csr and csc arrays. As you can see, all of them have bases, no matter what. In [1]: import numpy as np ...: from scipy import sparse ...: ...: a = np.arange(20).reshape(4, 5) ...: csr = sparse.csr_array(a, copy=True) ...: print('csr.data.base', id(csr.data.base) if csr.data.base is not None else None) ...: ...: csr_copy = csr.copy() ...: print('csr_copy.data.base', id(csr_copy.data.base) if csr_copy.data.base is not None else None) ...: ...: csc_copy = csr.tocsc(copy=True) ...: print('csc_copy.data.base', id(csc_copy.data.base) if csc_copy.data.base is not None else None) ...: ...: csc_copy_2 = csr.tocsc() ...: print('csc_copy_2.data.base', id(csc_copy_2.data.base) if csc_copy_2.data.base is not None else None) csr.data.base 4392865488 csr_copy.data.base 4392866448 csc_copy.data.base 4392866640 csc_copy_2.data.base 4392867120 While it makes sense for csr_copy to have the same base as csr.data, I don't understand why any other objects have base attribute set for data to begin with. In particular, this behavior prevents user from direct manipulation with data and indices parameters of the array. For instance, it becomes, impossible extend csr matrix, by adding rows to it using inplace resize method: In [2]: old_nnz = csr.nnz ...: row = [1, 2, 3, 4, 5] # Lets append row of 5 elements to csr ...: ...: csr.resize(5, 5) ...: ...: print(id(csr.data)) ...: print(csr.data) ...: ...: print(id(csr.data.base)) ...: print(csr.data.base) ...: ...: csr.data.resize((old_nnz + len(row),), refcheck=True) 4757413808 [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19] 4757413520 [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19] Traceback (most recent call last): File "/opt/homebrew/Caskroom/miniforge/base/envs/dev/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3433, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-34-c52e3457494e>", line 12, in <module> csr.data.resize((old_nnz + len(row),), refcheck=True) ValueError: cannot resize this array: it does not own its data While using np.resize might work, I am not sure how inplace it is: In [3]: old_nnz = csr.nnz ...: row = [1, 2, 3, 4, 5] # Let's append row of 5 elements to csr ...: ...: csr.resize(5, 5) ...: ...: print('Data') ...: print(id(csr.data)) ...: print(csr.data) ...: ...: print("Data's Base") ...: print(id(csr.data.base)) ...: print(csr.data.base) ...: ...: print('New Data') ...: new_data = np.resize(csr.data, (old_nnz + len(row),)) ...: print(id(new_data)) ...: print(new_data) ...: ...: print("New Data's Base") ...: print(id(new_data.base)) ...: print(new_data.base) ...: ...: new_indices = np.resize(csr.indices, (old_nnz + len(row),)) Data 5256251600 [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19] Data's Base 5256250736 [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19] New Data 5256250928 [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 1 2 3 4 5] New Data's Base 5256253040 [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19] I have been reading the source code for these functions and I don't see copy even used in some of 
those. For instance, in _csr.py: def tocsc(self, copy=False): idx_dtype = get_index_dtype((self.indptr, self.indices), maxval=max(self.nnz, self.shape[0])) indptr = np.empty(self.shape[1] + 1, dtype=idx_dtype) indices = np.empty(self.nnz, dtype=idx_dtype) data = np.empty(self.nnz, dtype=upcast(self.dtype)) csr_tocsc(self.shape[0], self.shape[1], self.indptr.astype(idx_dtype), self.indices.astype(idx_dtype), self.data, indptr, indices, data) A = self._csc_container((data, indices, indptr), shape=self.shape) A.has_sorted_indices = True return A Even though I see that new array (data) is created, somewhere down the line, maybe somewhere between C/Python interface it is put into base.
[ "I only have scipy v 1.7.3, so don't have access to the major rewrite of the sparse module in 1.8 (e.g. not csr_array or _data.py file).\nWhether something has a base or not is not a reliable measure of whether a copy was made. Take your first example:\nIn [74]: a = np.arange(20).reshape(4, 5)\n ...: ...: csr = sparse.csr_matrix(a, copy=True)\n\nIn [75]: a\nOut[75]: \narray([[ 0, 1, 2, 3, 4],\n [ 5, 6, 7, 8, 9],\n [10, 11, 12, 13, 14],\n [15, 16, 17, 18, 19]])\n\nIn [76]: a.base\nOut[76]: \narray([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,\n 17, 18, 19])\n\na is a view of the 1d array produced by the arange. That array is not accessible - except as the base.\nIn [77]: csr\nOut[77]: \n<4x5 sparse matrix of type '<class 'numpy.intc'>'\n with 19 stored elements in Compressed Sparse Row format>\n\nThe data attribute has a base - that looks the same as itself. id is different. But we'd have to study the code to see how data was derived from its base.\nIn [78]: csr.data\nOut[78]: \narray([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,\n 18, 19], dtype=int32)\n\nIn [79]: csr.data.base\nOut[79]: \narray([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,\n 18, 19], dtype=int32)\n\nIt isn't a view of a or a.base, as we can prove by modifying an element.\nIn [82]: csr.data[0] = 100\n\nIn [83]: csr.A\nOut[83]: \narray([[ 0, 100, 2, 3, 4],\n [ 5, 6, 7, 8, 9],\n [ 10, 11, 12, 13, 14],\n [ 15, 16, 17, 18, 19]], dtype=int32)\n\nIn [84]: a\nOut[84]: \narray([[ 0, 1, 2, 3, 4],\n [ 5, 6, 7, 8, 9],\n [10, 11, 12, 13, 14],\n [15, 16, 17, 18, 19]])\n\nThe copy parameter makes most sense when keeping the same format. Changing format can involve reordering the data (csr to csc), or summing duplicates (coo to csr), etc.\nLet's try making a new csr:\nIn [87]: csr1 = sparse.csr_matrix(csr, copy=False)\nIn [88]: csr2 = sparse.csr_matrix(csr, copy=True)\n\nAs with csr, both of these have data.base and different ids. But if I modify an element of csr, that change only appears in csr1. csr2 is indeed a copy.\nIn [93]: csr.data[1] = 200\nIn [97]: csr1.data[1]\nOut[97]: 200\nIn [98]: csr2.data[1]\nOut[98]: 2\n\nresize\nI haven't used resize before for sparse, and rarely use it for numpy. But playing with the csr, it's evident it's quite a different operation.\ncsr.resize(5,5) appears to just change the indptr (and shape), without change to data or indices.\ncsr.resize(5,6) just seems to change the shape. I don't see a change in main attributes. Neither adds nonzero values, so \"padding\" with 0s doesn't change much.\nYou don't want to do csr.data.resize(...). Such a change would also require changing indices and indptr (to maintain a consistent csr format). 
data can have 0s, but it should be cleaned up with a call to eliminate_zeros.\nravel\nThe sparse code could be doing something as harmless as ravel.\nIn [129]: x = np.array([1,2,3]).ravel()\nIn [130]: x.base\nOut[130]: array([1, 2, 3])\nIn [131]: x.resize(4)\n---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\nInput In [131], in <cell line: 1>()\n----> 1 x.resize(4)\n\nValueError: cannot resize this array: it does not own its data\n\nAnother example where base is not a reliable indicator of copy/view, is when indexing columns (of a 2d array):\nMake a 2d array, which has its own data:\nIn [144]: arr = np.array([[1,2],[3,4]])\nIn [145]: arr.base # None\n\nA row selection also has a None base:\nIn [146]: arr[[1]].base\n\nBut a column selection does not - even though it is a copy:\nIn [147]: arr[:,[1]].base\nOut[147]: array([[2, 4]]) \nIn [148]: arr[:,[1]]\nOut[148]: \narray([[2],\n [4]])\n\nEvidently the indexing operation selects a (1,2) array, which is then reshaped to (2,1). Actually, looking a strides, I think it's doing a transpose. arr[:,[1]].base.T.\n" ]
[ 0 ]
[]
[]
[ "copy", "python", "scipy", "sparse_matrix" ]
stackoverflow_0074542785_copy_python_scipy_sparse_matrix.txt
Q: How to scroll the element until a certain word appears? I'm scraping Google Maps and I need to know how to scroll the query column until the word appears "You've reached the end of the list". I am using selenium for scraping. Code I currently use: for a in range(100): barraRolagem = wait.until(EC.presence_of_element_located((By.XPATH, "//div[@role='main']//div[contains(@aria-label,'" + procurar + "')]"))) driver.execute_script("arguments[0].scroll(0, arguments[0].scrollHeight);", barraRolagem) this code works, however the range is variable and is not certain, that is, it can scroll a lot and it can also scroll a little, for this reason I need to stop scrolling when the code finds the phrase "You've reached the end of the list" Link https://www.google.com.br/maps/search/contabilidade+balneario+camboriu/@-26.9905418,-48.6289914,15z A: You can scroll in a loop until "You've reached the end of the list" text is visible. When text is found visible - break the loop. Otherwise do a scroll. Since in case element not visible exception is thrown try-except block is needed here. Additional scroll is added after the element is found visible since Selenium detects that element visible while it is still not actually visible, so one more scroll should be done finally. The following code works correctly: import time from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC options = Options() options.add_argument("start-maximized") options.add_argument('--disable-notifications') webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(options=options, service=webdriver_service) wait = WebDriverWait(driver, 1) url = "https://www.google.com.br/maps/search/contabilidade+balneario+camboriu/@-26.9905418,-48.6289914,15z" driver.get(url) while True: try: wait.until(EC.visibility_of_element_located((By.XPATH, "//span[contains(text(),'reached the end')]"))) barraRolagem = wait.until(EC.presence_of_element_located((By.XPATH, "//div[@role='main']//div[@aria-label]"))) driver.execute_script("arguments[0].scroll(0, arguments[0].scrollHeight);", barraRolagem) break except: barraRolagem = wait.until(EC.presence_of_element_located((By.XPATH, "//div[@role='main']//div[@aria-label]"))) driver.execute_script("arguments[0].scroll(0, arguments[0].scrollHeight);", barraRolagem) time.sleep(0.5)
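A hedged variant of the answer's loop that avoids the try/except, reusing the same imports and wait object: find_elements returns an empty list instead of raising when nothing matches, so it can drive the loop condition directly.

while not driver.find_elements(By.XPATH, "//span[contains(text(),'reached the end')]"):
    barraRolagem = wait.until(EC.presence_of_element_located(
        (By.XPATH, "//div[@role='main']//div[@aria-label]")))
    driver.execute_script("arguments[0].scroll(0, arguments[0].scrollHeight);", barraRolagem)
    time.sleep(0.5)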
How to scroll the element until a certain word appears?
I'm scraping Google Maps and I need to know how to scroll the query column until the phrase "You've reached the end of the list" appears. I am using selenium for scraping. Code I currently use: for a in range(100): barraRolagem = wait.until(EC.presence_of_element_located((By.XPATH, "//div[@role='main']//div[contains(@aria-label,'" + procurar + "')]"))) driver.execute_script("arguments[0].scroll(0, arguments[0].scrollHeight);", barraRolagem) This code works; however, the range is variable and not certain, that is, it can scroll too much or too little. For this reason I need to stop scrolling when the code finds the phrase "You've reached the end of the list". Link https://www.google.com.br/maps/search/contabilidade+balneario+camboriu/@-26.9905418,-48.6289914,15z
[ "You can scroll in a loop until \"You've reached the end of the list\" text is visible.\nWhen text is found visible - break the loop. Otherwise do a scroll.\nSince in case element not visible exception is thrown try-except block is needed here.\nAdditional scroll is added after the element is found visible since Selenium detects that element visible while it is still not actually visible, so one more scroll should be done finally.\nThe following code works correctly:\nimport time\n\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\noptions.add_argument('--disable-notifications')\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 1)\n\nurl = \"https://www.google.com.br/maps/search/contabilidade+balneario+camboriu/@-26.9905418,-48.6289914,15z\"\ndriver.get(url)\nwhile True:\n try:\n wait.until(EC.visibility_of_element_located((By.XPATH, \"//span[contains(text(),'reached the end')]\")))\n barraRolagem = wait.until(EC.presence_of_element_located((By.XPATH, \"//div[@role='main']//div[@aria-label]\")))\n driver.execute_script(\"arguments[0].scroll(0, arguments[0].scrollHeight);\", barraRolagem)\n break\n except:\n barraRolagem = wait.until(EC.presence_of_element_located((By.XPATH, \"//div[@role='main']//div[@aria-label]\")))\n driver.execute_script(\"arguments[0].scroll(0, arguments[0].scrollHeight);\", barraRolagem)\n time.sleep(0.5)\n\n" ]
[ 1 ]
[]
[]
[ "python", "selenium", "try_catch", "web_scraping", "webdriverwait" ]
stackoverflow_0074541474_python_selenium_try_catch_web_scraping_webdriverwait.txt
Q: Separate text between square brackets as a separate column in python I have the following the following columns, column_1 = ["Northern Rockies, British Columbia [9A87]", "Northwest Territories [2H89]", "Canada [00052A]", "Division No. 1, Newfoundland and Labrador [52A]"] column_2 = ["aa", "bb", "cc", "dd"] column_3 = [4, 4.5, 23, 1] zipped = list(zip(column_1 , column_2, column_3)) df = pd.DataFrame(zipped, columns=['column_1' , 'column_2', 'column_3']) I want to extract the text between the square brackets from the first column as a separate column. Below is the output I am looking for, column_1 = ["Northern Rockies, British Columbia", "Northwest Territories", "Canada", "Division No. 1, Newfoundland and Labrador"] column_2 = ["aa", "bb", "cc", "dd"] column_3 = [4, 4.5, 23, 1] column_4 = ["9A87", "2H89", "00052A", "52A"] zipped = list(zip(column_1 , column_2, column_3, column_4)) df = pd.DataFrame(zipped, columns=['column_1' , 'column_2', 'column_3', 'column_4']) I am using square bracket here but I think the solution should apply to any type of bracket. A: Hello, Salahuddin! From the column_1, I assumed that square brackets or any kind of brackets come always after the text. eg. Northern Rockies, British Columbia [9A87] df[['column_1','column_4']] = df['column_1'].str.extract(r'(.*)[\[\{\(](.*)[\]\}\)]',flags=re.IGNORECASE) This will work for any kind of brackets like [], (), {} A: I would use str.extract and str.repalce here: df["column_4"] = df["column_1"].str.extract(r'\[(.*?)\]') df["column_1"] = df["column_1"].str.replace(r'\s*\[.*?\]$', '', regex=True)
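For the "any type of bracket" requirement, a hedged single str.extract with two capture groups covers square, round, and curly brackets at the end of the value; the pattern is a sketch, so test it on your real data.

import pandas as pd

# group 1: the leading text, group 2: the code inside the trailing bracket pair
pattern = r"^(.*?)\s*[\[({]([^\])}]*)[\])}]\s*$"
df[["column_1", "column_4"]] = df["column_1"].str.extract(pattern)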
Separate text between square brackets as a separate column in python
I have the following columns, column_1 = ["Northern Rockies, British Columbia [9A87]", "Northwest Territories [2H89]", "Canada [00052A]", "Division No. 1, Newfoundland and Labrador [52A]"] column_2 = ["aa", "bb", "cc", "dd"] column_3 = [4, 4.5, 23, 1] zipped = list(zip(column_1 , column_2, column_3)) df = pd.DataFrame(zipped, columns=['column_1' , 'column_2', 'column_3']) I want to extract the text between the square brackets from the first column as a separate column. Below is the output I am looking for, column_1 = ["Northern Rockies, British Columbia", "Northwest Territories", "Canada", "Division No. 1, Newfoundland and Labrador"] column_2 = ["aa", "bb", "cc", "dd"] column_3 = [4, 4.5, 23, 1] column_4 = ["9A87", "2H89", "00052A", "52A"] zipped = list(zip(column_1 , column_2, column_3, column_4)) df = pd.DataFrame(zipped, columns=['column_1' , 'column_2', 'column_3', 'column_4']) I am using square brackets here but I think the solution should apply to any type of bracket.
[ "\nHello, Salahuddin!\nFrom the column_1, I assumed that square brackets or any kind of brackets come always after the text. eg. Northern Rockies, British Columbia [9A87]\ndf[['column_1','column_4']] = df['column_1'].str.extract(r'(.*)[\\[\\{\\(](.*)[\\]\\}\\)]',flags=re.IGNORECASE)\n\nThis will work for any kind of brackets like [], (), {}\n", "I would use str.extract and str.repalce here:\ndf[\"column_4\"] = df[\"column_1\"].str.extract(r'\\[(.*?)\\]')\ndf[\"column_1\"] = df[\"column_1\"].str.replace(r'\\s*\\[.*?\\]$', '', regex=True)\n\n" ]
[ 2, 1 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074543221_python_regex.txt
Q: Return one field after creation via post-request in django rest framework This is my class-based view: class PurchaseAPICreate(generics.CreateAPIView): serializer_class = PurchaseSerializer and serializer: class PurchaseSerializer(serializers.ModelSerializer): class Meta: model = Purchase fields = "__all__" Get-request return me all fields, but I need only id. I tried fields = ('id',). But post request needs all fields for serialization. I did this, but think it shouldn't work in this way. class PurchaseAPICreate(generics.CreateAPIView): serializer_class = PurchaseSerializer queryset = Shop.objects.all() def create(self, request, *args, **kwargs): queryset = self.get_queryset() serializer = PurchaseSerializer(queryset, many=True) return Response({ 'id': serializer.data[len(serializer.data)-1]['id'] }) How I can get only id in right way? A: You can set write or read only in extra_kwargs in the Meta class of a serializer. E.g. class PurchaseSerializer(serializers.ModelSerializer): class Meta: model = Purchase fields = "__all__" extra_kwargs = { 'field_1': {'write_only': True}, 'field...N': {'write_only': True}, 'id': {'read_only': True} }
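An alternative hedged sketch to the extra_kwargs approach in the answer: keep the serializer unchanged for validation and override create() on the view so the response body carries only the id. Only standard DRF calls are used.

from rest_framework import generics, status
from rest_framework.response import Response

class PurchaseAPICreate(generics.CreateAPIView):
    serializer_class = PurchaseSerializer

    def create(self, request, *args, **kwargs):
        serializer = self.get_serializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        instance = serializer.save()  # ModelSerializer.save() returns the created object
        return Response({"id": instance.id}, status=status.HTTP_201_CREATED)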
Return one field after creation via post-request in django rest framework
This is my class-based view: class PurchaseAPICreate(generics.CreateAPIView): serializer_class = PurchaseSerializer and serializer: class PurchaseSerializer(serializers.ModelSerializer): class Meta: model = Purchase fields = "__all__" A GET request returns me all fields, but I need only the id. I tried fields = ('id',). But a POST request needs all fields for serialization. I did this, but I think it shouldn't work this way. class PurchaseAPICreate(generics.CreateAPIView): serializer_class = PurchaseSerializer queryset = Shop.objects.all() def create(self, request, *args, **kwargs): queryset = self.get_queryset() serializer = PurchaseSerializer(queryset, many=True) return Response({ 'id': serializer.data[len(serializer.data)-1]['id'] }) How can I get only the id in the right way?
[ "You can set write or read only in extra_kwargs in the Meta class of a serializer.\nE.g.\nclass PurchaseSerializer(serializers.ModelSerializer):\n class Meta:\n model = Purchase\n fields = \"__all__\"\n extra_kwargs = {\n 'field_1': {'write_only': True},\n 'field...N': {'write_only': True},\n 'id': {'read_only': True}\n }\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_rest_framework", "python", "serialization" ]
stackoverflow_0071006750_django_django_rest_framework_python_serialization.txt
Q: Pandas : How to align/center a date column & aggregate other column on either direction of the date? How to align/center date-column of a dataframe (and its assoicated rows) based on an event (another column value). Explaining with example: I have a data frame as below. What I'm trying to do is the center the date column based on event column. In this case 3/12/12 is the center. Then I need the average of values from center - 2months (21) and center + 2months (30.5) df=pd.DataFrame([ ['1/10/12',No, 20], ['2/11/12',No, 22], ['3/12/12',Yes, 29], ['4/14/12',No, 30], ['5/14/12',No, 31] ], columns=['Time', 'event', 'value']) In the above case the resulting dataframe will be: df=pd.DataFrame([ ['pre_center', 20], ['center', 22], ['post_center', 30.5] ], columns=['Range', 'average_value']) A: You can use: # convert to datetime s = pd.to_datetime(df['Time']) # identify the center center = s[df['event'].eq('Yes')].iloc[0] # identify if the date is before/center/after group = (np.sign(s.sub(center).dt.days.astype(int)) .map({-1: 'pre_center', 0: 'center', 1: 'post_center'}) ) # aggregate out = df.groupby(group)['value'].agg(average_value='sum') Output: Range average_value 0 center 29 1 post_center 61 2 pre_center 42 If you want to include a threshold: s = pd.to_datetime(df['Time']) center = s[df['event'].eq('Yes')].iloc[0] diff = pd.DateOffset(months=2) m1 = s.between(center-diff, center) m2 = s.between(center, center+diff) group = np.select([m1&m2, m1, m2], ['center', 'pre_center', 'post_center'], np.nan) out = (df.groupby(group)['value'] .agg(average_value='sum') .drop('nan', errors='ignore') .rename_axis('Range').reset_index() ) Output: Range average_value 0 center 29 1 post_center 30 2 pre_center 22
Pandas : How to align/center a date column & aggregate another column on either side of the date?
How to align/center the date column of a dataframe (and its associated rows) based on an event (another column's value). Explaining with an example: I have a data frame as below. What I'm trying to do is to center the date column based on the event column. In this case 3/12/12 is the center. Then I need the average of the values from center - 2 months (21) and center + 2 months (30.5) df=pd.DataFrame([ ['1/10/12','No', 20], ['2/11/12','No', 22], ['3/12/12','Yes', 29], ['4/14/12','No', 30], ['5/14/12','No', 31] ], columns=['Time', 'event', 'value']) In the above case the resulting dataframe will be: df=pd.DataFrame([ ['pre_center', 21], ['center', 29], ['post_center', 30.5] ], columns=['Range', 'average_value'])
[ "You can use:\n# convert to datetime\ns = pd.to_datetime(df['Time'])\n\n# identify the center\ncenter = s[df['event'].eq('Yes')].iloc[0]\n\n# identify if the date is before/center/after\ngroup = (np.sign(s.sub(center).dt.days.astype(int))\n .map({-1: 'pre_center', 0: 'center', 1: 'post_center'})\n )\n\n# aggregate\nout = df.groupby(group)['value'].agg(average_value='sum')\n\nOutput:\n Range average_value\n0 center 29\n1 post_center 61\n2 pre_center 42\n\nIf you want to include a threshold:\ns = pd.to_datetime(df['Time'])\n\ncenter = s[df['event'].eq('Yes')].iloc[0]\n\ndiff = pd.DateOffset(months=2)\nm1 = s.between(center-diff, center)\nm2 = s.between(center, center+diff)\ngroup = np.select([m1&m2, m1, m2], ['center', 'pre_center', 'post_center'], np.nan)\n\nout = (df.groupby(group)['value']\n .agg(average_value='sum')\n .drop('nan', errors='ignore')\n .rename_axis('Range').reset_index()\n )\n\nOutput:\n Range average_value\n0 center 29\n1 post_center 30\n2 pre_center 22\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python", "time_series" ]
stackoverflow_0074543893_pandas_python_time_series.txt
Q: No module named urllib3 I wrote a script to call an API and ran it successfully last week. This week, it won't run. I get back the following error message: Traceback (most recent call last): File "user_audit.py", line 2, in <module> import requests File "c:\Python27\lib\site-packages\requests\__init__.py", line 60, in <module> from .packages.urllib3.exceptions import DependencyWarning File "c:\Python27\lib\site-packages\requests\packages\__init__.py", line 29, in <module> import urllib3 ImportError: No module named urllib3 I've confirmed that packages is up to date, tried uninstalling and reinstalling it, but nothing has worked so far. Can someone help? ADDENDUM I installed urllib3 as suggested by @MSHossain, but then got another error message. The new message referenced another file that I'd written, which had created a Python compiled file. The other file was using smptlib to attempt to send an email. I don't understand how this would happen, but I deleted the other file and my script ran without any problems. I've accepted the answer below as I was able to pip install urllib3, but it should have already been included in the requests module. A: Either urllib3 is not imported or not installed. To import, use import urllib3 at the top of the file. To install write: pip install urllib3 into terminal. It could be that you did not activate the environment variable correctly. To activate the environment variable, write source env/bin/activate into terminal. Here env is the environment variable name. A: pip install urllib3 The reason it broke is that I had installed an incompatible version of urllib3 as a transient dependency of awscli. You'll see such conflicts when you rerun the install. A: I solved it by running pip install --upgrade requests A: set you environment by writing source env/bin/activate if env not found write virtualenv env first then source env/bin/activate , then check pip freeze if urllib3 not found there then reinstall urllib3, hope it helps. A: I already had it installed. Solved it by running pip install --upgrade urllib3 Hope it helps someone :) A: Few minutes back, I faced the same issue. And this was because, I used virtual environment. I believe that due to venv directory, the pip installed might have stopped working. Fortunately, I have setup downloaded in my directory. I ran the setup and chose the option to repair, and now, everything works fine. A: For me in PyCharm I had to put import urllib3 at the top of the file as mentioned earlier then PyCharm gave the option to import. Even after installing it with pip A: Reinstalling urllib3 solves my problem. Run: pip uninstall urllib3 pip install urllib3 A: For the sake of completness. It means the python installation you are using does not have the package installed, be sure you are using the same python installation as the one where you are installing the package. For me what was happening is that I had a virtual environment made with pyenv, even when the virtual environment had the package installed and in its latest version, it was not found because somehow the underlaying python install was used and not the one where I had the urllib3 installed. Solution: Use the absolute path to the python binary: /home/[username]/.pyenv/versions/[envname]/bin/python python-script.py
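A hedged diagnostic for threads like this: an ImportError right after a successful pip install often means pip targeted a different interpreter than the one running the script. Printing the running interpreter and its search path from the failing script makes the mismatch visible.

import sys

print(sys.executable)  # the interpreter actually running user_audit.py
print(sys.path)        # the directories it searches for urllib3

If they point somewhere unexpected, installing with that exact interpreter, for example c:\Python27\python.exe -m pip install urllib3, targets the right site-packages.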
No module named urllib3
I wrote a script to call an API and ran it successfully last week. This week, it won't run. I get back the following error message: Traceback (most recent call last): File "user_audit.py", line 2, in <module> import requests File "c:\Python27\lib\site-packages\requests\__init__.py", line 60, in <module> from .packages.urllib3.exceptions import DependencyWarning File "c:\Python27\lib\site-packages\requests\packages\__init__.py", line 29, in <module> import urllib3 ImportError: No module named urllib3 I've confirmed that packages is up to date, tried uninstalling and reinstalling it, but nothing has worked so far. Can someone help? ADDENDUM I installed urllib3 as suggested by @MSHossain, but then got another error message. The new message referenced another file that I'd written, which had created a Python compiled file. The other file was using smtplib to attempt to send an email. I don't understand how this would happen, but I deleted the other file and my script ran without any problems. I've accepted the answer below as I was able to pip install urllib3, but it should have already been included in the requests module.
[ "Either urllib3 is not imported or not installed.\nTo import, use\nimport urllib3\n\nat the top of the file. To install write:\npip install urllib3\n\ninto terminal.\nIt could be that you did not activate the environment variable correctly.\nTo activate the environment variable, write\nsource env/bin/activate\n\ninto terminal. Here env is the environment variable name.\n", "pip install urllib3 \n\nThe reason it broke is that I had installed an incompatible version of urllib3 as a transient dependency of awscli. You'll see such conflicts when you rerun the install.\n", "I solved it by running\npip install --upgrade requests\n\n", "set you environment by writing source env/bin/activate if env not found write virtualenv env first then source env/bin/activate , then check pip freeze if urllib3 not found there then reinstall urllib3, hope it helps.\n", "I already had it installed. Solved it by running pip install --upgrade urllib3\nHope it helps someone :)\n", "Few minutes back, I faced the same issue. And this was because, I used virtual environment. I believe that due to venv directory, the pip installed might have stopped working.\nFortunately, I have setup downloaded in my directory. I ran the setup and chose the option to repair, and now, everything works fine.\n", "For me in PyCharm I had to put import urllib3 at the top of the file as mentioned earlier then PyCharm gave the option to import. Even after installing it with pip\n", "Reinstalling urllib3 solves my problem. Run:\npip uninstall urllib3\npip install urllib3\n\n", "For the sake of completness. It means the python installation you are using does not have the package installed, be sure you are using the same python installation as the one where you are installing the package.\nFor me what was happening is that I had a virtual environment made with pyenv, even when the virtual environment had the package installed and in its latest version, it was not found because somehow the underlaying python install was used and not the one where I had the urllib3 installed.\nSolution: Use the absolute path to the python binary:\n/home/[username]/.pyenv/versions/[envname]/bin/python python-script.py\n\n" ]
[ 17, 5, 3, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "python", "python_2.7", "urllib3", "xml" ]
stackoverflow_0042651145_python_python_2.7_urllib3_xml.txt
Q: How to separate flask files when two files depend on each other? I'm trying to develop a database driven flask app. I have an api.py file which has the flask app, api and SQLAlchemy db objects and a users.py file which contains the routes ands code to create a database table. In the users.py file, there's a UserManager Resource which has the routes. I have to add this resource to the API from this file. So users.py needs to import the db and the api from the api.py file and api.py needs to serve the flask app so it has all the variables users.py needs, but api.py also needs to import users.py in order to use it as a flask blueprint. api.py: ... from user import user app = Flask(__name__) app.register_blueprint(user) api = Api(app) ... db = SQLAlchemy(app) ma = Marshmallow(app) ... if __name__ == '__main__': app.run(debug=True) users.py: from api import api user = Blueprint('user', __name__, template_folder='templates') db = SQLAlchemy(api.app) ma = Marshmallow(api.app) class User(db.Model): user_id = db.Column(db.Integer, primary_key=True) email = db.Column(db.String(120), unique=True, nullable=False) username = db.Column(db.String(32), unique=True, nullable=False) password = db.Column(db.String(32)) first_name = db.Column(db.String(32)) last_name = db.Column(db.String(32)) ... ... class UserManager(Resource): @user.route('/get/<user_id>', methods = ['GET']) def get_user(user_id): ... This of course results in errors due to circular imports. Question is, how do I separate the flask files with the routes from the api file when the blueprint has dependencies from the api (the db and the api objects) I also somehow have to do something like api.add_resource(UserManager, '/api/users') but I'm not sure where that'd go given the circular import. Tried reducing dependencies between two files, but couldn't achieve described goal without doing a 2 way import. Either trying to get 2 way import to work or still have same structure of separate files with routes but using 1 way import. A: This is a well known problem in flask. The solution is to use application factories. Cookiecutter Flask does this really well and offers a good template. It is well worth to check out their repo and try to understand what they are doing. Assuming you have a folder app and this folder contains a file __init__.py and your other files user.py, etc. Create a file app/extensions.py with this content and any other extension you need to initialize. ... from flask_sqlalchemy import SQLAlchemy db = SQLAlchemy() ... Import the db object (which was created, but not initialized) in the files where you need it. from app.extensions import db In your app/api.py file from flask import Flask from app.extensions import db from app.user import user as user_bp def create_app(): app = Flask(__name__) register_extensions(app) register_blueprints(app) return app def register_extensions(app): """Register Flask extensions.""" db.init_app(app) return None def register_blueprints(app): """Register Flask blueprints.""" app.register_blueprint(user_bp) return None if __name__ == '__main__': app = create_app() app.run(debug=True) Add stuff based on this approach as needed.
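A hedged sketch of how the user module from the question could look under the factory pattern described in the answer: the blueprint imports the shared, uninitialized db from app/extensions.py instead of creating its own, which breaks the circular import. The field list is trimmed for brevity.

from flask import Blueprint

from app.extensions import db

user = Blueprint("user", __name__, template_folder="templates")

class User(db.Model):
    user_id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(32), unique=True, nullable=False)

@user.route("/get/<int:user_id>", methods=["GET"])
def get_user(user_id):
    found = User.query.get_or_404(user_id)
    return {"user_id": found.user_id, "username": found.username}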
How to separate flask files when two files depend on each other?
I'm trying to develop a database driven flask app. I have an api.py file which has the flask app, api and SQLAlchemy db objects and a users.py file which contains the routes and code to create a database table. In the users.py file, there's a UserManager Resource which has the routes. I have to add this resource to the API from this file. So users.py needs to import the db and the api from the api.py file, and api.py needs to serve the flask app so it has all the variables users.py needs, but api.py also needs to import users.py in order to use it as a flask blueprint. api.py: ... from user import user app = Flask(__name__) app.register_blueprint(user) api = Api(app) ... db = SQLAlchemy(app) ma = Marshmallow(app) ... if __name__ == '__main__': app.run(debug=True) users.py: from api import api user = Blueprint('user', __name__, template_folder='templates') db = SQLAlchemy(api.app) ma = Marshmallow(api.app) class User(db.Model): user_id = db.Column(db.Integer, primary_key=True) email = db.Column(db.String(120), unique=True, nullable=False) username = db.Column(db.String(32), unique=True, nullable=False) password = db.Column(db.String(32)) first_name = db.Column(db.String(32)) last_name = db.Column(db.String(32)) ... ... class UserManager(Resource): @user.route('/get/<user_id>', methods = ['GET']) def get_user(user_id): ... This of course results in errors due to circular imports. The question is: how do I separate the flask files with the routes from the api file when the blueprint has dependencies from the api (the db and the api objects)? I also somehow have to do something like api.add_resource(UserManager, '/api/users') but I'm not sure where that'd go given the circular import. I tried reducing the dependencies between the two files, but couldn't achieve the described goal without a two-way import. I'm looking either to get the two-way import to work, or to keep the same structure of separate route files while using only one-way imports.
[ "This is a well known problem in flask. The solution is to use application factories.\nCookiecutter Flask does this really well and offers a good template. It is well worth to check out their repo and try to understand what they are doing.\nAssuming you have a folder app and this folder contains a file __init__.py and your other files user.py, etc.\n\nCreate a file app/extensions.py with this content and any other extension you need to initialize.\n\n...\nfrom flask_sqlalchemy import SQLAlchemy\n\ndb = SQLAlchemy()\n...\n\n\nImport the db object (which was created, but not initialized) in the files where you need it.\n\nfrom app.extensions import db\n\n\nIn your app/api.py file\n\nfrom flask import Flask\nfrom app.extensions import db\nfrom app.user import user as user_bp\n\ndef create_app():\n app = Flask(__name__)\n register_extensions(app)\n register_blueprints(app)\n return app\n\ndef register_extensions(app):\n \"\"\"Register Flask extensions.\"\"\"\n db.init_app(app)\n return None\n\ndef register_blueprints(app):\n \"\"\"Register Flask blueprints.\"\"\"\n app.register_blueprint(user_bp)\n return None\n\nif __name__ == '__main__':\n app = create_app()\n app.run(debug=True)\n\nAdd stuff based on this approach as needed.\n" ]
[ 0 ]
[]
[]
[ "api", "flask", "python", "rest", "sqlalchemy" ]
stackoverflow_0074542819_api_flask_python_rest_sqlalchemy.txt
Q: Replace list comprehension with vectorized method to build new features I have this dataframe, data. data = pd.DataFrame({'group':['A', 'A', 'B', 'C', 'C', 'B'], 'value':[0.2, 0.21, 0.54, 0.02, 0.001, 0.19]}) I want to build three new features. Below is my target output. pd.DataFrame({'group':['A', 'A', 'B', 'C', 'C', 'B'], 'value':[0.2, 0.21, 0.54, 0.02, 0.001, 0.19], 'group_A':[0.2, 0.21, 0,0,0,0], 'group_B':[0,0,0.54, 0, 0, 0.19], 'group_C':[0,0,0,0.02, 0.001,0]}) What is the most efficient way to perform such a task? The code below solves the problem. But perhaps there is a vectorized way to do it on my very large real world data set? for g in data.group.unique(): tmp= [0 if j==g else i for i, j in zip(data.value, data.group)] data['group_{}'.format(g)]=tmp A: Use DataFrame.join with DataFrame.pivot, DataFrame.add_prefix and DataFrame.fillna: df = (data.join(data.reset_index() .pivot('index','group','value') .add_prefix('group_') .fillna(0))) print (df) group value group_A group_B group_C 0 A 0.200 0.20 0.00 0.000 1 A 0.210 0.21 0.00 0.000 2 B 0.540 0.00 0.54 0.000 3 C 0.020 0.00 0.00 0.020 4 C 0.001 0.00 0.00 0.001 5 B 0.190 0.00 0.19 0.000 Alternative solution: df = (data.join(data.set_index('group', append=True)['value'] .unstack(fill_value=0) .add_prefix('group_'))) print (df) group value group_A group_B group_C 0 A 0.200 0.20 0.00 0.000 1 A 0.210 0.21 0.00 0.000 2 B 0.540 0.00 0.54 0.000 3 C 0.020 0.00 0.00 0.020 4 C 0.001 0.00 0.00 0.001 5 B 0.190 0.00 0.19 0.000
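A hedged, fully vectorized alternative to the loop in the question: one-hot encode the group column with get_dummies, scale each indicator column by value, and join the result back on.

import pandas as pd

dummies = pd.get_dummies(data["group"], prefix="group")
# group_A becomes value where group == 'A' and 0.0 elsewhere, matching the target
data = data.join(dummies.mul(data["value"], axis=0))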
Replace list comprehension with vectorized method to build new features
I have this dataframe, data. data = pd.DataFrame({'group':['A', 'A', 'B', 'C', 'C', 'B'], 'value':[0.2, 0.21, 0.54, 0.02, 0.001, 0.19]}) I want to build three new features. Below is my target output. pd.DataFrame({'group':['A', 'A', 'B', 'C', 'C', 'B'], 'value':[0.2, 0.21, 0.54, 0.02, 0.001, 0.19], 'group_A':[0.2, 0.21, 0,0,0,0], 'group_B':[0,0,0.54, 0, 0, 0.19], 'group_C':[0,0,0,0.02, 0.001,0]}) What is the most efficient way to perform such a task? The code below solves the problem. But perhaps there is a vectorized way to do it on my very large real world data set? for g in data.group.unique(): tmp = [i if j == g else 0 for i, j in zip(data.value, data.group)] data['group_{}'.format(g)] = tmp
[ "Use DataFrame.join with DataFrame.pivot, DataFrame.add_prefix and DataFrame.fillna:\ndf = (data.join(data.reset_index()\n .pivot('index','group','value')\n .add_prefix('group_')\n .fillna(0)))\nprint (df)\n group value group_A group_B group_C\n0 A 0.200 0.20 0.00 0.000\n1 A 0.210 0.21 0.00 0.000\n2 B 0.540 0.00 0.54 0.000\n3 C 0.020 0.00 0.00 0.020\n4 C 0.001 0.00 0.00 0.001\n5 B 0.190 0.00 0.19 0.000\n\nAlternative solution:\ndf = (data.join(data.set_index('group', append=True)['value']\n .unstack(fill_value=0)\n .add_prefix('group_')))\nprint (df)\n group value group_A group_B group_C\n0 A 0.200 0.20 0.00 0.000\n1 A 0.210 0.21 0.00 0.000\n2 B 0.540 0.00 0.54 0.000\n3 C 0.020 0.00 0.00 0.020\n4 C 0.001 0.00 0.00 0.001\n5 B 0.190 0.00 0.19 0.000\n\n" ]
[ 2 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074543974_numpy_pandas_python.txt
Q: How to set x and y axis columns in python subplot matplotlib

def plot(self):
    plt.figure(figsize=(20, 5))

    ax1 = plt.subplot(211)
    ax1.plot(self.signals['CLOSE'])
    ax1.set_title('Price')

    ax2 = plt.subplot(212, sharex=ax1)
    ax2.set_title('RSI')
    ax2.plot(self.signals[['RSI']])
    ax2.axhline(30, linestyle='--', alpha=0.5, color='#ff0000')
    ax2.axhline(70, linestyle='--', alpha=0.5, color='#ff0000')

    plt.show()

I am plotting two charts in a Python application, but the x-axis values are just the row indexes 1, 2, 3, ....
My dataframe has a column self.signals['DATA'], so how can I use it as the x-axis values?

A: With set_xticks you set where the tick positions are, for example:
ax1.set_xticks([1, 2, 3])

and with set_xticklabels you define what is shown at those positions:
ax1.set_xticklabels(['one', 'two', 'three'])

see https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.set_xticklabels.html for more options.

A: 

But the x axis values are indexes like 1,2,3,...
But my dataframe has a column self.signals['DATA'] so how can I use it as x axis values?

I assume you are using pandas and matplotlib.
According to the matplotlib documentation, you can simply pass the X and Y values to the plot function.
So instead of calling
ax1.plot(self.signals['CLOSE'])

you can for instance do:
ax1.plot(self.signals['DATA'], self.signals['CLOSE'])

Depending on other arguments, this will do a scatter plot or a line plot.
See the matplotlib documentation for more fine-tuning of your charts.
Or you could even try:
plot('DATA', 'CLOSE', data=self.signals)


Quoting the documentation:

Call signatures:
plot([x], y, [fmt], *, data=None, **kwargs)
plot([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs)
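A sketch applying the second answer to the full plot() method from the question, assuming self.signals['DATA'] holds the desired x values (for example dates); passing it on both axes keeps the shared x-axis aligned:

def plot(self):
    fig = plt.figure(figsize=(20, 5))

    ax1 = plt.subplot(211)
    ax1.plot(self.signals['DATA'], self.signals['CLOSE'])
    ax1.set_title('Price')

    ax2 = plt.subplot(212, sharex=ax1)
    ax2.set_title('RSI')
    ax2.plot(self.signals['DATA'], self.signals['RSI'])
    ax2.axhline(30, linestyle='--', alpha=0.5, color='#ff0000')
    ax2.axhline(70, linestyle='--', alpha=0.5, color='#ff0000')

    fig.autofmt_xdate()  # optional: slants the tick labels nicely if DATA holds dates
    plt.show()

Note the use of self.signals['RSI'] (a Series) rather than self.signals[['RSI']] (a one-column DataFrame); both plot, but the Series form pairs naturally with an explicit x argument.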
How to set x and y axis columns in python subplot matplotlib
def plot(self):
    plt.figure(figsize=(20, 5))

    ax1 = plt.subplot(211)
    ax1.plot(self.signals['CLOSE'])
    ax1.set_title('Price')

    ax2 = plt.subplot(212, sharex=ax1)
    ax2.set_title('RSI')
    ax2.plot(self.signals[['RSI']])
    ax2.axhline(30, linestyle='--', alpha=0.5, color='#ff0000')
    ax2.axhline(70, linestyle='--', alpha=0.5, color='#ff0000')

    plt.show()

I am plotting two charts in a Python application, but the x-axis values are just the row indexes 1, 2, 3, ....
My dataframe has a column self.signals['DATA'], so how can I use it as the x-axis values?
[ "With set_xticks you set, where the positions are, for example:\nax1.set_xticks([1, 2, 3])\n\nand with set_xticklabels you can define what it says there:\nax1.set_xticklabels(['one', 'two', 'three'])\n\nsee https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.set_xticklabels.html for more options.\n", "\nBut the x axis values are indexes like 1,2,3,...\nBut my dataframe has a column self.signals['DATA'] so how can I use it as x axis values?\n\nI assume you are using pandas and matplotlib.\nAccording to matplotlib documentation, you can simply pass the X and Y values to the plot function.\nSo instead of calling\nax1.plot(self.signals['CLOSE'])\n\nyou can for instance do:\nax1.plot(self.signals['DATA'], self.signals['CLOSE'])\n\nDepending on other arguments, this will do a scatter plot or a line plot.\nsee matplotlib documentation for more fine tuning of your charts.\nor you could even try:\nplot('DATA', 'CLOSE', data=self.signals)\n\n\nQuoting documentation:\n\nCall signatures:\nplot([x], y, [fmt], *, data=None, **kwargs)\nplot([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs)\n\n" ]
[ 0, 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0074538722_matplotlib_python.txt
Q: Speed up Boto3 file transfer across buckets

I want to copy a sub-subfolder in an S3 bucket into a different bucket using Python (boto3). However, the process is painfully slow.
If I copy the folder "by hand" straight on S3 from the browser, the process takes 72 seconds (for a folder with around 140 objects, total size roughly 1.0 GB).
However, if I try to copy it with boto3, it takes 9 times longer (653 seconds).
This is the code that I am using, re-adapted from the boto3 documentation and various answers here in SO:
import boto3

s3 = boto3.resource('s3')

# define source bucket
src_bucket_name = 'bucket_1'
prefix = 'folder_1/' 
client = boto3.client('s3')
src_bucket = s3.Bucket(src_bucket_name)

# define destination bucket
dest_bucket_name = 'bucket_2'
dest_bucket = s3.Bucket(dest_bucket_name)

folder = "folder_1/subfolder_1"
response_sub = client.list_objects_v2(Bucket=src_bucket_name, Prefix = folder)

# list files to be copied (select only images, but in this folder there are only images anyway)
files_src = [prefix['Key'] for prefix in response_sub['Contents'] if prefix['Key'].split('.')[-1].lower() in ['jpg','jpeg','png','tiff'] ]

# list of file names after copy
dest_prefix = 'folder_1/subfolder_1/'
files_dest = [dest_prefix+i for i in files_src]

for src,dest in zip(files_src,files_dest):
    
    copy_source = {
        'Bucket': src_bucket_name,
        'Key': src
    }

    dest_bucket.copy(copy_source, dest)

Note that up to the last for loop, the code takes a couple of seconds only to run.
Any idea of how to speed up this? Am I doing something stupid/should use some other way of copying files/entire folders?

A: Thanks to @Suyog Shimpi (who pointed to a similar SO post), I was able to significantly speed up the copying process.
Here is the code, slightly readapted from the other post:
import os
import boto3
import botocore
import boto3.s3.transfer as s3transfer
import tqdm

s3 = boto3.resource('s3')

# define source bucket
src_bucket_name = 'bucket_1'
prefix = 'folder_1/' 
client = boto3.client('s3')
src_bucket = s3.Bucket(src_bucket_name)

# define destination bucket
dest_bucket_name = 'bucket_2'
dest_bucket = s3.Bucket(dest_bucket_name)

folder = "folder_1/subfolder_1"
response_sub = client.list_objects_v2(Bucket=src_bucket_name, Prefix = folder)

# list files to be copied (select only images, but in this folder there are only images anyway)
files_src = [prefix['Key'] for prefix in response_sub['Contents'] if prefix['Key'].split('.')[-1].lower() in ['jpg','jpeg','png','tiff'] ]

# list of file names after copy
dest_prefix = 'folder_1/subfolder_1/'
files_dest = [dest_prefix+i for i in files_src]

botocore_config = botocore.config.Config(max_pool_connections=20)
s3client = boto3.client('s3', config=botocore_config)

transfer_config = s3transfer.TransferConfig(
    use_threads=True,
    max_concurrency=20,
)

# note that timing the process is optional
# total_size of the files can be obtained with boto3, or on the browser 
%time
progress = tqdm.tqdm(
    desc='upload',
    total=total_size, unit='B', unit_scale=1,
    position=0,
    bar_format='{desc:<10}{percentage:3.0f}%|{bar:10}{r_bar}')

s3t = s3transfer.create_transfer_manager(s3client, transfer_config)


for src,dest in zip(files_src,files_dest):
    
    copy_source = {
        'Bucket': src_bucket_name,
        'Key': src
    }

    s3t.copy(copy_source=copy_source,
             bucket = dest_bucket_name,
             key = dest,
             subscribers=[s3transfer.ProgressCallbackInvoker(progress.update),],
             )

# close transfer job
s3t.shutdown() 
progress.close();

A: Thanks Fraccalo for your solution, it helped me a lot!
I adjusted it a little so that we can copy more than 1000 files:
import boto3
import botocore
import boto3.s3.transfer as s3transfer
import tqdm

s3 = boto3.resource('s3')

# define source bucket
src_bucket_name = 'bucket_1'
prefix = 'folder_1/' 
client = boto3.client('s3')
src_bucket = s3.Bucket(src_bucket_name)

# define destination bucket
dest_bucket_name = 'bucket_2'
dest_bucket = s3.Bucket(dest_bucket_name)

folder = "folder_1/subfolder_1"
files_src = []
bucket_size = 0
# use paginator to read more than 1000 files
paginator = client.get_paginator('list_objects_v2')
operation_parameters = {'Bucket': src_bucket_name,
                        'Prefix': folder}
page_iterator = paginator.paginate(**operation_parameters)
for page in page_iterator:
    if page.get('Contents', None):
        files_src.extend([prefix['Key'] for prefix in page['Contents']])
        bucket_size += sum(obj['Size'] for obj in page['Contents'])



# list of file names after copy
dest_prefix = 'folder_1/subfolder_1/'
files_dest = [dest_prefix+i for i in files_src]

botocore_config = botocore.config.Config(max_pool_connections=20)
s3client = boto3.client('s3', config=botocore_config)

transfer_config = s3transfer.TransferConfig(
    use_threads=True,
    max_concurrency=20,
)

progress = tqdm.tqdm(
    desc='upload',
    total=bucket_size, unit='B', unit_scale=1,
    position=0,
    bar_format='{desc:<10}{percentage:3.0f}%|{bar:10}{r_bar}')

s3t = s3transfer.create_transfer_manager(s3client, transfer_config)


for src,dest in zip(files_src,files_dest):
    
    copy_source = {
        'Bucket': src_bucket_name,
        'Key': src
    }

    s3t.copy(copy_source=copy_source,
             bucket = dest_bucket_name,
             key = dest,
             subscribers=[s3transfer.ProgressCallbackInvoker(progress.update),],
             )

# close transfer job
s3t.shutdown() 
progress.close();
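One gap in the first answer above: total_size is passed to tqdm but never defined. A sketch of computing it from the ListObjectsV2 response the answer already fetched, restricted to the keys actually being copied:

# sum the sizes of the objects that were selected into files_src
selected = set(files_src)
total_size = sum(obj['Size'] for obj in response_sub['Contents']
                 if obj['Key'] in selected)

(The second answer sidesteps this by accumulating bucket_size while paginating.)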
Speed up Boto3 file transfer across buckets
I want to copy a sub-subfolder in an S3 bucket into a different bucket using Python (boto3). However, the process is painfully slow. If I copy the folder "by hand" straight on S3 from the browser, the process takes 72 seconds (for a folder with around 140 objects, total size roughly 1.0 GB). However, if I try to copy it with boto3, it takes 9 times longer (653 seconds). This is the code that I am using, re-adapted from the boto3 documentation and various answers here in SO: import boto3 s3 = boto3.resource('s3') # define source bucket src_bucket_name = 'bucket_1' prefix = 'folder_1/' client = boto3.client('s3') src_bucket = s3.Bucket(src_bucket_name) # define destination bucket dest_bucket_name = 'bucket_2' dest_bucket = s3.Bucket(dest_bucket_name) folder = "folder_1/subfolder_1" response_sub = client.list_objects_v2(Bucket=src_bucket_name, Prefix = folder) # list files to be copied (select only images, but in this folder there are only images anyway) files_src = [prefix['Key'] for prefix in response_sub['Contents'] if prefix['Key'].split('.')[-1].lower() in ['jpg','jpeg','png','tiff'] ] # list of file names after copy dest_prefix = 'folder_1/subfolder_1/' files_dest = [dest_prefix+i for i in files_src] for src,dest in zip(files_src,files_dest): copy_source = { 'Bucket': src_bucket_name, 'Key': src } dest_bucket.copy(copy_source, dest) Note that up to the last for loop, the code takes a couple of seconds only to run. Any idea of how to speed up this? Am I doing something stupid/should use some other way of copying files/entire folders?
[ "Thanks to @Suyog Shimpi (who pointed to a similar SO post), I was able to significantly speed up the copying process.\nHere the code slightly readapted from the other post:\nimport os\nimport boto3\nimport botocore\nimport boto3.s3.transfer as s3transfer\nimport tqdm\n\ns3 = boto3.resource('s3')\n\n# define source bucket\nsrc_bucket_name = 'bucket_1'\nprefix = 'folder_1/' \nclient = boto3.client('s3')\nsrc_bucket = s3.Bucket(src_bucket_name)\n\n# define destination bucket\ndest_bucket_name = 'bucket_2'\ndest_bucket = s3.Bucket(dest_bucket_name)\n\nfolder = \"folder_1/subfolder_1\"\nresponse_sub = client.list_objects_v2(Bucket=src_bucket_name, Prefix = folder)\n\n# list files to be copied (select only images, but in this folder there are only images anyway)\nfiles_src = [prefix['Key'] for prefix in response_sub['Contents'] if prefix['Key'].split('.')[-1].lower() in ['jpg','jpeg','png','tiff'] ]\n\n# list of file names after copy\ndest_prefix = 'folder_1/subfolder_1/'\nfiles_dest = [dest_prefix+i for i in files_src]\n\nbotocore_config = botocore.config.Config(max_pool_connections=20)\ns3client = boto3.client('s3', config=botocore_config)\n\ntransfer_config = s3transfer.TransferConfig(\n use_threads=True,\n max_concurrency=20,\n)\n\n# note that timing the process is optional\n# total_size of the files can be obtained with boto3, or on the browser \n%time\nprogress = tqdm.tqdm(\n desc='upload',\n total=total_size, unit='B', unit_scale=1,\n position=0,\n bar_format='{desc:<10}{percentage:3.0f}%|{bar:10}{r_bar}')\n\ns3t = s3transfer.create_transfer_manager(s3client, transfer_config)\n\n\nfor src,dest in zip(files_src,files_dest):\n \n copy_source = {\n 'Bucket': src_bucket_name,\n 'Key': src\n }\n\n s3t.copy(copy_source=copy_source,\n bucket = dest_bucket_name,\n key = dest,\n subscribers=[s3transfer.ProgressCallbackInvoker(progress.update),],\n )\n\n# close transfer job\ns3t.shutdown() \nprogress.close();\n\n", "Thanks Fraccalo for your solution, it helped me a lot!\nI adjusted it a little so that we can copy more than 1000 files:\nimport boto3\nimport botocore\nimport boto3.s3.transfer as s3transfer\nimport tqdm\n\ns3 = boto3.resource('s3')\n\n# define source bucket\nsrc_bucket_name = 'bucket_1'\nprefix = 'folder_1/' \nclient = boto3.client('s3')\nsrc_bucket = s3.Bucket(src_bucket_name)\n\n# define destination bucket\ndest_bucket_name = 'bucket_2'\ndest_bucket = s3.Bucket(dest_bucket_name)\n\nfolder = \"folder_1/subfolder_1\"\nfiles_src = []\nbucket_size = 0\n# use paginator to read more than 1000 files\npaginator = client.get_paginator('list_objects_v2')\noperation_parameters = {'Bucket': src_bucket_name,\n 'Prefix': folder}\npage_iterator = paginator.paginate(**operation_parameters)\nfor page in page_iterator:\n if page.get('Contents', None):\n files_src.extend([prefix['Key'] for prefix in page['Contents']])\n bucket_size += sum(obj['Size'] for obj in page['Contents'])\n\n\n\n# list of file names after copy\ndest_prefix = 'folder_1/subfolder_1/'\nfiles_dest = [dest_prefix+i for i in files_src]\n\nbotocore_config = botocore.config.Config(max_pool_connections=20)\ns3client = boto3.client('s3', config=botocore_config)\n\ntransfer_config = s3transfer.TransferConfig(\n use_threads=True,\n max_concurrency=20,\n)\n\nprogress = tqdm.tqdm(\n desc='upload',\n total=bucket_size, unit='B', unit_scale=1,\n position=0,\n bar_format='{desc:<10}{percentage:3.0f}%|{bar:10}{r_bar}')\n\ns3t = s3transfer.create_transfer_manager(s3client, transfer_config)\n\n\nfor src,dest in zip(files_src,files_dest):\n \n 
copy_source = {\n 'Bucket': src_bucket_name,\n 'Key': src\n }\n\n s3t.copy(copy_source=copy_source,\n bucket = dest_bucket_name,\n key = dest,\n subscribers=[s3transfer.ProgressCallbackInvoker(progress.update),],\n )\n\n# close transfer job\ns3t.shutdown() \nprogress.close();\n\n" ]
[ 3, 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "boto3", "python" ]
stackoverflow_0069223091_amazon_s3_amazon_web_services_boto3_python.txt
Q: Use Data Subscription through Python in TDengine

I'm trying the data subscription feature of TDengine. I tested its Python demo.
from taos.tmq import TaosConsumer

# Syntax: `consumer = TaosConsumer(*topics, **args)`
#
# Example:
consumer = TaosConsumer('topic1', 'topic2', td_connect_ip = "127.0.0.1", group_id = "local")
...

When executing the script, there is an error message:
ImportError: cannot import name 'TaosConsumer'

Did I miss some steps?

A: I think you need to update the version of taospy on your system - taospy is the TDengine connector for Python.
Try running pip install -U taospy to update it and then do your test again.
Make sure you're running Python 3.7 or later.
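A quick way to confirm which connector version is installed before retrying the import; this sketch uses only the standard library (Python 3.8+), and taospy is the package name the answer refers to:

from importlib.metadata import version, PackageNotFoundError

try:
    print(version('taospy'))  # TaosConsumer needs a reasonably recent release
except PackageNotFoundError:
    print('taospy is not installed; run: pip install -U taospy')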
Use Data Subscription through Python in TDengine
I'm trying the data subscription feature of TDengine. I tested its Python demo.
from taos.tmq import TaosConsumer

# Syntax: `consumer = TaosConsumer(*topics, **args)`
#
# Example:
consumer = TaosConsumer('topic1', 'topic2', td_connect_ip = "127.0.0.1", group_id = "local")
...

When executing the script, there is an error message:
ImportError: cannot import name 'TaosConsumer'

Did I miss some steps?
[ "I think you need to update the version of taospy on your system - taospy is the TDengine connector for Python.\nTry running pip install -U taospy to update it and then do your test again.\nMake sure you're running Python 3.7 or later.\n" ]
[ 0 ]
[]
[]
[ "database", "python", "tdengine" ]
stackoverflow_0074543552_database_python_tdengine.txt
Q: How to fix this python syntax error for the code segment given below

# Define output GEE Asset names
change_primary_asset_name = f'users/{"Annanya"}/{"vegetation-change"}/vegetation_change_primary'
change_secondary_asset_name = f'users/{"Annanya"}/{"vegetation-change"}/vegetation_change_secondary'

# Check if GEE Asset already exists prior to export; primary change
if(change_primary_asset:= ee.FeatureCollection(change_primary_asset_name)):

A: The error does give the reason.
if(change_primary_asset:= ee.FeatureCollection(change_primary_asset_name))

Is not valid syntax. If you are checking if two variables are equal you need to use == so in your example:
if(change_primary_asset == ee.FeatureCollection(change_primary_asset_name)):

I suspect this is a simple typo but if not then I'd suggest checking out the Python docs here on flow control statements.
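One caveat on the answer above: := (the walrus operator) is itself legal inside an if condition on Python 3.8 and later, so this SyntaxError can also simply mean the script is running on an older interpreter. Both forms below are sketches mirroring the question's intent:

# Python 3.8+: bind and test in one expression
if (change_primary_asset := ee.FeatureCollection(change_primary_asset_name)):
    pass

# Portable to older interpreters: assign first, then test
change_primary_asset = ee.FeatureCollection(change_primary_asset_name)
if change_primary_asset:
    pass

If a comparison was actually intended, == as the answer suggests is the right operator.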
How to fix this python syntax error for the code segment given below
# Define output GEE Asset names
change_primary_asset_name = f'users/{"Annanya"}/{"vegetation-change"}/vegetation_change_primary'
change_secondary_asset_name = f'users/{"Annanya"}/{"vegetation-change"}/vegetation_change_secondary'

# Check if GEE Asset already exists prior to export; primary change
if(change_primary_asset:= ee.FeatureCollection(change_primary_asset_name)):
[ "The error does give the reason.\nif(change_primary_asset:= ee.FeatureCollection(change_primary_asset_name))\n\nIs not valid syntax. If you are checking if two variables are equal you need to use == so in your example:\nif(change_primary_asset == ee.FeatureCollection(change_primary_asset_name)):\n\nI suspect this is a simple typo but if not then I'd suggest checking out the Python docs here on flow control statements.\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074543946_python_python_3.x.txt