| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,740,916
| 18,616,461
|
How do I make my Discord ban function work?
|
<p>I am trying to make my Discord bot able to ban people, but the function is not working:</p>
<p>The relevant code is as follows:</p>
<pre><code>banned_member = get(bot.get_all_members(), id=int_offender_id)
banned_member.ban(reason="Test")
</code></pre>
<p>It can find the banned_member object, but it cannot issue a ban. In the server, I created a role that allows the bot to ban people.</p>
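<p>A likely cause (an assumption based on the snippet, not confirmed by the question): in discord.py, <code>Member.ban</code> is a coroutine, so calling it without <code>await</code> creates a coroutine object but never executes the ban. A minimal stdlib sketch of the pitfall, using a hypothetical <code>FakeMember</code> stand-in for <code>discord.Member</code>:</p>

```python
import asyncio

# FakeMember stands in for discord.Member; its ban() is a coroutine,
# just like the real Member.ban in discord.py.
class FakeMember:
    def __init__(self):
        self.banned = False

    async def ban(self, reason=None):
        self.banned = True

member = FakeMember()
member.ban(reason="Test")          # creates a coroutine, never runs it
unawaited_result = member.banned   # still False: nothing happened

asyncio.run(member.ban(reason="Test"))  # awaiting actually performs the ban
awaited_result = member.banned          # now True
```

<p>Inside an async command the fix would be <code>await banned_member.ban(reason="Test")</code>; the bot also needs the "Ban Members" permission and the members intent enabled, in addition to the server role.</p>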
|
<python><python-3.x><discord><discord.py>
|
2023-03-15 05:41:51
| 1
| 1,086
|
Alan Shiah
|
75,740,766
| 8,481,155
|
Publish Pandas Dataframe rows as PubSub Message
|
<p>I have a task to publish the rows of a <a href="/questions/tagged/pandas" class="post-tag" title="show questions tagged 'pandas'" aria-label="show questions tagged 'pandas'" rel="tag" aria-labelledby="tag-pandas-tooltip-container">pandas</a> <a href="/questions/tagged/dataframe" class="post-tag" title="show questions tagged 'dataframe'" aria-label="show questions tagged 'dataframe'" rel="tag" aria-labelledby="tag-dataframe-tooltip-container">dataframe</a> as a <a href="/questions/tagged/pubsub" class="post-tag" title="show questions tagged 'pubsub'" aria-label="show questions tagged 'pubsub'" rel="tag" aria-labelledby="tag-pubsub-tooltip-container">pubsub</a> message.
What is the best way to do this? My <a href="/questions/tagged/pandas" class="post-tag" title="show questions tagged 'pandas'" aria-label="show questions tagged 'pandas'" rel="tag" aria-labelledby="tag-pandas-tooltip-container">pandas</a> <a href="/questions/tagged/dataframe" class="post-tag" title="show questions tagged 'dataframe'" aria-label="show questions tagged 'dataframe'" rel="tag" aria-labelledby="tag-dataframe-tooltip-container">dataframe</a> could consist of around 1 million records.</p>
<p>This is an example I have found to publish.</p>
<pre><code>for n in range(1, 10):
    data_str = f"Message number {n}"
    # Data must be a bytestring
    data = data_str.encode("utf-8")
    # When you publish a message, the client returns a future.
    future = publisher.publish(topic_path, data)
    print(future.result())
</code></pre>
<p>Can I loop through each row, convert the row to string, encode and publish like below?</p>
<pre><code>for row_dict in df.to_dict(orient="records"):
    data = str(row_dict).encode("utf-8")
    future = publisher.publish(topic_path, data)
    print(future.result())
</code></pre>
<p>Or am I missing something simpler?
I would like the <a href="/questions/tagged/pubsub" class="post-tag" title="show questions tagged 'pubsub'" aria-label="show questions tagged 'pubsub'" rel="tag" aria-labelledby="tag-pubsub-tooltip-container">pubsub</a> message to be in a dict format with each message for a row like below.</p>
<pre><code>{'col1': '123','col2': 'abc'}
{'col1': '124','col2': 'def'}
{'col1': '125','col2': 'ghi'}
{'col1': '126','col2': 'jkl'}
</code></pre>
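<p>A sketch of the per-row encoding (hedged: topic and publisher setup are omitted; <code>json.dumps</code> is used instead of <code>str(dict)</code> so the payload is valid JSON for downstream consumers):</p>

```python
import json
import pandas as pd

df = pd.DataFrame({"col1": ["123", "124"], "col2": ["abc", "def"]})

# One bytestring payload per row. With ~1M rows, prefer the client
# library's background batching over blocking on each future.result().
payloads = [json.dumps(row).encode("utf-8") for row in df.to_dict(orient="records")]
# each payload would then go to publisher.publish(topic_path, payload)
```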
|
<python><python-3.x><pandas><dataframe><google-cloud-pubsub>
|
2023-03-15 05:11:15
| 1
| 701
|
Ashok KS
|
75,740,655
| 10,613,037
|
Access state within nested utility functions
|
<p>I've created a <code>metadata</code> object in my <code>transformations.py</code>. I want to access it in utility functions such as <code>utils/some_fn</code>. What is the best way to do so?</p>
<pre class="lang-py prettyprint-override"><code># metadata.py
class Metadata:
    def __init__(self, year, sheet_name) -> None:
        self.year = year
        self.sheet_name = sheet_name

def create_metadata(sheet_name):
    metadata = Metadata(year=2023, sheet_name=sheet_name)
    return metadata

# main.py
import pandas as pd

sheet_names_by_dfs = pd.read_excel('a workbook.xlsx', sheet_name=None)
for sheet_name, df in sheet_names_by_dfs.items():
    transformations(sheet_name, df)

# transformations.py
from utils import some_fn

def transformations(sheet_name, df):
    metadata = create_metadata(sheet_name)
    ...
    some_fn()

# utils.py
def some_fn():
    # How can I go about accessing metadata?
    pass
</code></pre>
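<p>One common answer (a sketch of a single design option, not the only one) is to pass the object explicitly down the call chain instead of sharing module-level state; names below mirror the question's files:</p>

```python
class Metadata:
    def __init__(self, year, sheet_name):
        self.year = year
        self.sheet_name = sheet_name

# utils.py equivalent: the utility receives the state it needs as a parameter
def some_fn(metadata):
    return f"{metadata.year}/{metadata.sheet_name}"

# transformations.py equivalent: create metadata once, hand it onward
def transformations(sheet_name):
    metadata = Metadata(year=2023, sheet_name=sheet_name)
    return some_fn(metadata)

label = transformations("Sheet1")
```

<p>Explicit parameters keep the utilities testable in isolation; a module-level singleton would also work but couples every utility to import-time state.</p>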
|
<python><pandas><software-design>
|
2023-03-15 04:50:35
| 0
| 320
|
meg hidey
|
75,740,652
| 1,519,468
|
FastAPI StreamingResponse not streaming with generator function
|
<p>I have a relatively simple FastAPI app that accepts a query and streams back the response from ChatGPT's API. ChatGPT is streaming back the result and I can see this being printed to console as it comes in.</p>
<p>What's not working is the <code>StreamingResponse</code> back from FastAPI. The response gets sent all together instead. I'm really at a loss as to why this isn't working.</p>
<p>Here is the FastAPI app code:</p>
<pre class="lang-py prettyprint-override"><code>import os
import time

import openai
import fastapi
from fastapi import Depends, HTTPException, status, Request
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from fastapi.responses import StreamingResponse

auth_scheme = HTTPBearer()
app = fastapi.FastAPI()
openai.api_key = os.environ["OPENAI_API_KEY"]


def ask_statesman(query: str):
    # prompt = router(query)
    completion_reason = None
    response = ""
    while not completion_reason or completion_reason == "length":
        openai_stream = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": query}],
            temperature=0.0,
            stream=True,
        )
        for line in openai_stream:
            completion_reason = line["choices"][0]["finish_reason"]
            if "content" in line["choices"][0].delta:
                current_response = line["choices"][0].delta.content
                print(current_response)
                yield current_response
                time.sleep(0.25)


@app.post("/")
async def request_handler(auth_key: str, query: str):
    if auth_key != "123":
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid authentication credentials",
            headers={"WWW-Authenticate": auth_scheme.scheme_name},
        )
    else:
        stream_response = ask_statesman(query)
        return StreamingResponse(stream_response, media_type="text/plain")


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000, debug=True, log_level="debug")
</code></pre>
<p>And here is the very simple <code>test.py</code> file to test the above:</p>
<pre class="lang-py prettyprint-override"><code>import requests

query = "How tall is the Eiffel tower?"
url = "http://localhost:8000"
params = {"auth_key": "123", "query": query}

response = requests.post(url, params=params, stream=True)
for chunk in response.iter_lines():
    if chunk:
        print(chunk.decode("utf-8"))
</code></pre>
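<p>One thing worth ruling out (an assumption, not a verified diagnosis): a sync generator that calls <code>time.sleep</code> can starve the event loop, and some clients buffer <code>text/plain</code>. Rewriting <code>ask_statesman</code> as an async generator with <code>asyncio.sleep</code>, and optionally serving <code>media_type="text/event-stream"</code>, often restores chunk-by-chunk delivery. A minimal stdlib sketch of the async-generator shape (OpenAI call replaced by a fixed list so it is self-contained):</p>

```python
import asyncio

# ask_statesman as an async generator: each yield cedes control to the
# event loop, so StreamingResponse can flush chunks as they arrive
async def ask_statesman(query: str):
    for token in ["The", "Eiffel", "Tower", "is", "330m"]:
        yield token
        await asyncio.sleep(0)

async def collect():
    return [chunk async for chunk in ask_statesman("demo")]

chunks = asyncio.run(collect())
# FastAPI side would then be:
# return StreamingResponse(ask_statesman(query), media_type="text/event-stream")
```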
|
<python><python-requests><streaming><fastapi><openai-api>
|
2023-03-15 04:48:33
| 5
| 533
|
Robert Ritz
|
75,740,543
| 24,039
|
How do I convert a timezone aware string literal into a datetime object in Python
|
<p>I have a string from a 3rd party log file that I need to convert into a <code>datetime</code> object. The log entry is in the form:</p>
<pre><code>... timestamp=datetime.datetime(2023, 2, 25, 15, 59, 21, 410787, tzinfo=tzlocal()), ...
</code></pre>
<p>I have tried extracting into a tuple/list and using the datetime constructor like this:</p>
<pre><code> timestamp_str = chop(line, 'timestamp=datetime.datetime(', '),')
timestamp_chunks = timestamp_str.split(', ')
dt_list = [int(x) for x in timestamp_chunks[0:7]]
</code></pre>
<p>where <code>chop</code> is a utility function I wrote that returns a substring.</p>
<p>I can make a <code>datetime</code> by unpacking the tuple/list into the constructor</p>
<pre><code>dt = datetime.datetime(*dt_list)
</code></pre>
<p>This works, but ignores the timezone information.</p>
<p>I tried adding it onto the end:</p>
<pre><code>dt_list.append(timestamp_chunks[7])
dt = datetime.datetime(*dt_list)
</code></pre>
<p>but then I get the error <code>tzinfo argument must be None or of a tzinfo subclass, not type 'str'</code></p>
<p>I have tried different approaches, like using <code>dateutil.parser</code>, but it doesn't help because this isn't in any accepted <code>strftime</code> format.</p>
<p>I could do some funky math to figure out the unix epoch, but that still leaves me with the timezone issue.</p>
<p>I was hoping there would be some date utility that would rehydrate a datetime from a string tuple like the one I have in the log, which looks like it is the <code>repr</code> or <code>str</code> of a <code>datetime</code> object.</p>
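<p>One way to finish the approach already in the question (this assumes <code>python-dateutil</code> is installed, whose <code>tzlocal</code> matches the <code>tzinfo=tzlocal()</code> text that appears in the log):</p>

```python
import datetime
from dateutil.tz import tzlocal  # assumption: python-dateutil is available

# the substring the question's chop() would return
timestamp_str = "2023, 2, 25, 15, 59, 21, 410787, tzinfo=tzlocal()"
chunks = timestamp_str.split(", ")
dt_list = [int(x) for x in chunks[0:7]]

# map the textual tzinfo back to a real tzinfo object instead of a str
tz = tzlocal() if chunks[7] == "tzinfo=tzlocal()" else None
dt = datetime.datetime(*dt_list, tzinfo=tz)
```

<p>Anything beyond <code>tzlocal()</code> (e.g. <code>tzfile(...)</code> entries) would need its own mapping; there is no stdlib parser for <code>datetime</code> reprs.</p>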
|
<python><python-3.x><datetime><timezone><python-datetime>
|
2023-03-15 04:25:48
| 1
| 81,221
|
Simon
|
75,740,280
| 2,717,373
|
Importing python module from another directory, which then imports a second module from that directory
|
<p>I have a bunch of data processing scripts in the root directory of my Python application that I want to clean up by putting into their own directory, but I can't get it to work properly.</p>
<p>This is my file structure:</p>
<pre><code>.
+-- Root
|
|-- main_script.py
|
+-- utils
| |
| |-- utilsA.py
| `- utilsB.py
|
+-- processing_scripts
|
`- process.py
</code></pre>
<p>Within <code>process.py</code> there is the line <code>from utils import utilsA as u</code>, then within <code>utilsA.py</code> there is the line <code>from utils.utilsB import util_function</code>.</p>
<p>When <code>process.py</code> is located in the root directory, this works completely fine, however when I move <code>process.py</code> to the <code>processing_scripts</code> directory and try to run it from there, it no longer works. It just says <code>ModuleNotFoundError: No module named 'utils'</code>.</p>
<p>I did a search and found a number of similar questions on here, the answers to which mostly suggest to use some variant on <code>sys.path.append('../')</code>. This solution works if <code>utilsA.py</code> has no reference to <code>utilsB.py</code>, and I change the line in <code>process.py</code> to <code>from Root.utils import utilsA as u</code>, but as soon as <code>utilsA.py</code> tries to reference <code>utilsB.py</code>, it gives the same error: <code>ModuleNotFoundError: No module named 'utils'</code>.</p>
<p>Is there a way of tricking <code>process.py</code> into thinking it is running from the root directory, so that it can then reference things, which reference other things, without breaking? I can get it to work if I go into <code>utilsA.py</code> and add the <code>sys.path.append('../')</code> line, as well as changing the reference to <code>utils.utilsB</code> to <code>Root.utils.utilsB</code>, but I would rather not have to go through and change every single file like that just to run some processing scripts - and that solution feels incredibly hacky.</p>
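<p>A common workaround (a sketch of one option, not the only one): compute the project root from <code>__file__</code> and put it on <code>sys.path</code> once, at the top of <code>process.py</code> only, so that <code>utils</code> and everything it imports in turn resolve without touching the other files:</p>

```python
import sys
from pathlib import Path

# process.py lives in Root/processing_scripts, so the project root is two
# levels up from this file; inserting it first means `from utils import ...`
# works both in process.py and inside utilsA.py's own
# `from utils.utilsB import util_function`
project_root = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(project_root))

# from utils import utilsA as u  # would now resolve
```

<p>The less hacky alternative, assuming package-style imports, is to run the file as a module from the root: <code>python -m processing_scripts.process</code>.</p>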
|
<python><module>
|
2023-03-15 03:29:51
| 0
| 1,373
|
guskenny83
|
75,740,156
| 13,849,446
|
Ensure browser is closed even if script is forced exited
|
<p>I am trying to close the browser even if the script does not exit normally. I have tried a few methods that I found on the internet, but none of them worked. The first was to use try/except with finally; I implemented it, but it does not work if the script execution is forcefully stopped.
I have reproduced my code below. It does not close the browser on force exit when using Chrome; Firefox closes without any problem. The code is divided into two files (scraper.py, initialize.py).</p>
<p><em>scraper.py</em></p>
<pre><code>try:
    from .initialize import Init
except:
    from initialize import Init

import time

URL = "https://web.whatsapp.com/"

def main(driver):
    # do parsing and other stuff
    driver.get(URL)
    time.sleep(15)

def get_driver(browser, headless):
    # calling start_driver method of Init class from initialize.py
    return Init(method='profile', write_directory='output/', selenium_wire=True,
                undetected_chromedriver=True, old_profile_using=False
                ).start_driver(browser=browser, url=URL, headless=headless, refresh=True)

try:
    driver = get_driver("firefox", False)
    main(driver)
    # get_driver returns driver.
except Exception as e:
    print(f"Error: {e}")
finally:
    try:
        driver.quit()
    except NameError:
        pass
</code></pre>
<p><em>initialize.py</em></p>
<pre><code>from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.service import Service
import shutil, os, sys, getpass, browser_cookie3, json, traceback
from time import sleep
from webdriver_manager.firefox import GeckoDriverManager
from webdriver_manager import utils

"""def drivers():
    driver_installation = ChromeDriverManager().install()
    service = Service(driver_installation)
    driver = webdriver.Chrome(service=service)
    return driver

class Init:
    def __init__(self, browser):
        self.browser = browser

    def get_driver(self):  # for sake of simplicity this is the get_driver function
        driver = drivers()
        return driver"""

class Init:
    basename = os.path.basename(sys.argv[0]).split(".")[0]
    homedir = os.path.join("C:\\ProgramData", basename, getpass.getuser())
    # WRITE_DIRECTORY = 'output_files'
    # WRITE_DIRECTORY = f'{homedir}\output_files/'
    WRITE_DIRECTORY = os.path.join(homedir, 'output_files')

    def __init__(self, method, write_directory=WRITE_DIRECTORY, selenium_wire=False,
                 undetected_chromedriver=False, old_profile_using=False):
        self.method = method
        self.write_directory = write_directory
        self.selenium_wire = selenium_wire
        self.undetected_chromedriver = undetected_chromedriver
        self.old_profile_using = old_profile_using
        if not any((self.selenium_wire, self.undetected_chromedriver)):
            raise Exception("Specify at least one driver")
        if method not in ['profile', 'cookies']:
            raise Exception(f"Unexpected method {method=}")

    @staticmethod
    def find_path_to_browser_profile(browser):
        """Getting path to firefox or chrome default profile"""
        if sys.platform == 'win32':
            browser_path = {
                'chrome': os.path.join(
                    os.environ.get('LOCALAPPDATA'), 'Google', 'Chrome', 'User Data'),
                "firefox": os.path.join(
                    os.environ.get('APPDATA'), 'Mozilla', 'Firefox')
            }
        else:
            browser_path = {
                'chrome': os.path.expanduser('~/.config/google-chrome'),
                "firefox": os.path.expanduser('~/.mozilla/firefox')
            }
        path_to_profiles = browser_path[browser]
        print(f'{path_to_profiles=}')
        if os.path.exists(path_to_profiles):
            if browser == 'firefox':
                return browser_cookie3.Firefox.get_default_profile(path_to_profiles)
            if browser == 'chrome':
                return path_to_profiles
        else:
            print(f"{browser} profile - not found")

    def _remove_old_db(self, path_to_browser_profile, browser):
        if browser == 'firefox':
            path_to_browser_profile = os.path.join(path_to_browser_profile, 'storage', 'default')
            folder_list = [i for i in
                           os.listdir(path_to_browser_profile) if 'to-do.live' in i]
            if folder_list:
                path_to_folder = os.path.join(path_to_browser_profile, folder_list[0], 'idb')
                try:
                    shutil.rmtree(path_to_folder)
                    print('removed')
                    return
                except:
                    sleep(2)
                    pass
        elif browser == 'chrome':
            path_to_browser_profile = os.path.join(path_to_browser_profile, 'Default', 'IndexedDB')
        else:
            raise Exception('browser not supported')
        print(path_to_browser_profile)
        folder_list = [i for i in
                       os.listdir(path_to_browser_profile) if 'to-do.live' in i]
        if folder_list:
            print(folder_list[0])
            path_to_folder = os.path.join(path_to_browser_profile, folder_list[0])
            shutil.rmtree(path_to_folder)
            print('removed')

    def init_driver(self, browser, path_to_profile=None, headless=False, remove_old_session=False):
        """Initialize webdriver"""
        print(f"Initialize webdriver for {browser}")
        if self.selenium_wire and self.undetected_chromedriver:
            from seleniumwire.undetected_chromedriver.v2 import Chrome, ChromeOptions  # working
        elif self.selenium_wire:
            from seleniumwire.webdriver import Chrome, ChromeOptions  # not working
        else:
            from undetected_chromedriver import Chrome, ChromeOptions
        if self.method == 'profile':
            path_to_profile = self.copy_profile(browser)
            if remove_old_session:
                self._remove_old_db(path_to_profile, browser)
        if 'firefox' in browser:
            wdm_path = os.path.join(os.path.expanduser("~"), ".wdm", "drivers.json")
            print(wdm_path)
            with open(wdm_path, "r") as f:
                driver_logs = json.load(f)
            gecko_drivers = {}
            g_count = 0
            for drivers, val in driver_logs.items():
                if "geckodriver" in drivers:
                    g_count += 1
                    gecko_drivers[drivers] = val
            firefox_options = webdriver.FirefoxOptions()
            # firefox_options.add_argument('--no-sandbox')
            if self.method == 'profile':
                path_to_firefox_profile = path_to_profile
                profile = webdriver.FirefoxProfile(path_to_firefox_profile)
                profile.accept_untrusted_certs = True
                profile.set_preference("dom.webdriver.enabled", False)
                profile.set_preference('useAutomationExtension', False)
                profile.set_preference("security.mixed_content.block_active_content", False)
                profile.update_preferences()
                firefox_options.set_preference("general.useragent.override", 'user-agent=Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0')
                # firefox_options.add_argument(
                #     'user-agent=Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0')
                if headless:
                    firefox_options.add_argument("--headless")
                firefox_options.add_argument('--ignore-certificate-errors')
                firefox_options.add_argument("--width=1400")
                firefox_options.add_argument("--height=1000")
                try:
                    driver_installation = GeckoDriverManager().install()
                except Exception as e:
                    c = 1
                    if "rate limit" in str(e):
                        for i in gecko_drivers.values():
                            if c == g_count:
                                driver_installation = i["binary_path"]
                                break
                            c += 1
                service = Service(driver_installation)
                if sys.platform == 'win32':
                    from subprocess import CREATE_NO_WINDOW
                    service.creationflags = CREATE_NO_WINDOW
                driver = webdriver.Firefox(options=firefox_options, firefox_profile=profile,
                                           service=service)
                driver.implicitly_wait(10)
            else:
                firefox_options = webdriver.FirefoxOptions()
                firefox_options.add_argument('--incognito')
                firefox_options.add_argument("--disable-blink-features=AutomationControlled")
                firefox_options.add_argument(
                    'user-agent=Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0')
                firefox_options.add_argument("--mute-audio")
                if headless:
                    firefox_options.add_argument("--headless")
                try:
                    driver_installation = GeckoDriverManager().install()
                except Exception as e:
                    if "rate limit" in str(e):
                        c = 1
                        for i in gecko_drivers.values():
                            if c == g_count:
                                driver_installation = i["binary_path"]
                            c += 1
                service = Service(driver_installation)
                if sys.platform == 'win32':
                    from subprocess import CREATE_NO_WINDOW
                    service.creationflags = CREATE_NO_WINDOW
                driver = webdriver.Firefox(options=firefox_options, executable_path=driver_installation,
                                           service=service)
            return driver
        elif 'chrome' in browser:
            chrome_options = ChromeOptions()
            if self.method == 'profile':
                chrome_options.add_argument('--no-first-run --no-service-autorun --password-store=basic')
                chrome_options.add_argument("--disable-extensions")
                # chrome_options.user_data_dir = path_to_profile
                chrome_options.add_argument("--window-size=1400,1000")
                chrome_options.add_argument(
                    'user-agent=Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36')
                if sys.platform == 'linux':
                    chrome_options.binary_location = '/bin/google-chrome'
                if headless:
                    chrome_options.add_argument("--headless")
                # You cannot use default chromedriver for google services, because it was made by google.
                browser_version = int(utils.get_browser_version_from_os('google-chrome').split('.')[0])
                print(f"{browser_version=}")
                driver = Chrome(options=chrome_options, version_main=browser_version, patcher_force_close=True,
                                user_data_dir=path_to_profile)
                driver.implicitly_wait(10)
                return driver
            else:
                chrome_options.add_argument('--incognito')
                chrome_options.add_argument("--disable-blink-features=AutomationControlled")
                chrome_options.add_argument('--no-sandbox')
                chrome_options.add_argument("--mute-audio")
                chrome_options.add_argument("--window-size=1400,1000")
                if headless:
                    chrome_options.add_argument("--headless")
                chrome_options.add_argument(
                    'user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36')
                driver_installation = ChromeDriverManager().install()
                service = Service(driver_installation)
                if sys.platform == 'win32':
                    from subprocess import CREATE_NO_WINDOW
                    service.creationflags = CREATE_NO_WINDOW
                driver = webdriver.Chrome(driver_installation, options=chrome_options, service=service)
                return driver

    def copy_profile(self, browser):
        """Copy browsers profile to /home/{username}/browser_profiles/{browser} directory and returns path to it"""
        path_to_profile = self.find_path_to_browser_profile(browser)
        sys_path_to_copy = os.path.join(os.getcwd(), self.write_directory, browser)
        print(f'copy_profile() initializer: sys_path_to_copy: {sys_path_to_copy}')
        if browser == 'chrome':
            if self.old_profile_using:
                print('Using old profile')
                return sys_path_to_copy
            if not os.path.exists(sys_path_to_copy):
                try:
                    shutil.copytree(path_to_profile, sys_path_to_copy,
                                    symlinks=True, ignore_dangling_symlinks=True, dirs_exist_ok=True)
                except:
                    pass
                print(f'{browser} profile copied from {path_to_profile} to {sys_path_to_copy}')
            if sys.platform == 'win32':
                try:
                    shutil.copytree(os.path.join(sys_path_to_copy, 'Default', 'Network'),
                                    os.path.join(sys_path_to_copy, 'Default'), dirs_exist_ok=True)
                except:
                    pass
            return sys_path_to_copy
        elif browser == 'firefox':
            firefox_dir = path_to_profile.split('/')[-1]
            if self.old_profile_using:
                print('Using old profile')
                return os.path.join(sys_path_to_copy, firefox_dir)
            if not os.path.exists(sys_path_to_copy):
                try:
                    shutil.copytree(path_to_profile, os.path.join(sys_path_to_copy, firefox_dir),
                                    ignore_dangling_symlinks=True)
                except:
                    pass
                print(
                    f'{browser} profile copied from {path_to_profile} to {sys_path_to_copy}/{firefox_dir}')
            return os.path.join(sys_path_to_copy, firefox_dir)

    def start_driver(self, url, browser, headless, refresh=True, remove_old_session=False):
        """Preparing folders and init webdriver"""
        self.create_folders(self.write_directory)
        if self.method == "profile":
            path_to_profile = f'{self.write_directory}{browser}'
            if not self.old_profile_using:
                self.remove_profile_folder(path_to_profile)
        driver = self.init_driver(browser, headless=headless, remove_old_session=remove_old_session)
        try:
            sleep(3)
            driver.get(url)
            sleep(5)
            if refresh and "telegram" not in url:
                driver.refresh()
            print(driver.current_url)
            if driver.current_url == "about:blank":
                driver.get(url)
                sleep(5)
            return driver
        except:
            # raise Exception
            print(traceback.format_exc())
            driver.close()
            driver.quit()

    def remove_profile_folder(self, path_to_profile_folder):
        """Removes browser profile folder"""
        print("Removing old profile")
        while os.path.exists(path_to_profile_folder):
            try:
                shutil.rmtree(path_to_profile_folder)
            except Exception as ex:
                print(ex)
                sleep(2)
                pass

    def create_folders(self, folder_path):
        if not os.path.exists(folder_path):
            print(f'initializer() Creating folder for {folder_path}')
            os.makedirs(folder_path)  # Creating folder
        return folder_path
</code></pre>
<p><strong>Things I tried</strong></p>
<pre><code>try:
    driver = get_driver(browser)
    # get_driver returns driver.
    main(driver)
except:
    print(traceback.format_exc())
finally:
    try:
        driver.quit()
    except NameError:
        pass
</code></pre>
<p>The other thing I tried was to use atexit</p>
<pre><code>import atexit

def get_driver(browser):  # for sake of simplicity this is the get_driver function
    driver_installation = ChromeDriverManager().install()
    service = Service(driver_installation)
    driver = webdriver.Chrome(service=service)
    return driver

@atexit.register
def cleanup():
    driver.close()
    driver.quit()

try:
    driver = get_driver(browser)
    # get_driver returns driver.
    main(driver)
except:
    print(traceback.format_exc())
finally:
    try:
        driver.quit()
    except NameError:
        pass
</code></pre>
<p>Note: I tried the above in my main code, not exactly as shown here. These approaches also work fine in a simple script, but not in the code I have provided above; I do not know if I am using them correctly.
The browser is closed on force exit if I am executing the script with the VS Code debugger, but it fails in a normal run.
Thanks in advance for any help.</p>
|
<python><selenium-webdriver><webdriver>
|
2023-03-15 03:03:56
| 2
| 1,146
|
farhan jatt
|
75,740,088
| 3,573,626
|
Pandas dataframe - transform selected cell values based on their suffix
|
<p>I have a dataframe as below:</p>
<pre><code>data_dict = {'id': {0: 'G1', 1: 'G2', 2: 'G3'},
'S': {0: 35.74, 1: 36.84, 2: 38.37},
'A': {0: 8.34, 1: '2.83%', 2: 10.55},
'C': {0: '6.63%', 1: '5.29%', 2: 3.6}}
df = pd.DataFrame(data_dict)
</code></pre>
<p>I want to multiply all the values in the data frame with 10000 (except under the column 'id' - 1st column) if they endswith %:</p>
<pre><code>cols = df.columns[1:]
for index, row in df.loc[:, df.columns != 'id'].iterrows():
    for c in cols:
        if str(row[c]).endswith('%'):
            data_value = str(row[c])
            data_value = data_value.replace('%', "")
            df.at[index, c] = float(data_value) * 10000
</code></pre>
<p>Finally, this sets all the column values (except the first column) to numeric:</p>
<pre><code>df[cols[1:]] = df[cols[1:]].apply(pd.to_numeric, errors='coerce')
</code></pre>
<p>Is there a simple way to convert the values instead of iterating the rows?</p>
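<p>A vectorized sketch (same data as above; it assumes every value ending in <code>%</code> parses as a number once the sign is stripped):</p>

```python
import pandas as pd

data_dict = {'id': {0: 'G1', 1: 'G2', 2: 'G3'},
             'S': {0: 35.74, 1: 36.84, 2: 38.37},
             'A': {0: 8.34, 1: '2.83%', 2: 10.55},
             'C': {0: '6.63%', 1: '5.29%', 2: 3.6}}
df = pd.DataFrame(data_dict)

cols = df.columns[1:]
as_str = df[cols].astype(str)
is_pct = as_str.apply(lambda s: s.str.endswith('%'))    # boolean mask per cell
nums = as_str.apply(lambda s: s.str.rstrip('%')).apply(pd.to_numeric)
df[cols] = nums.where(~is_pct, nums * 10000)            # scale only the % cells
```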
|
<python><pandas><dataframe><type-conversion>
|
2023-03-15 02:42:07
| 3
| 1,043
|
kitchenprinzessin
|
75,739,929
| 726,802
|
Issue while trying to submit ajax form in django
|
<p><strong>Server Side Code</strong></p>
<pre><code>@csrf_protect
def Authenticate(request):
    if request.method == "POST":
        return JsonResponse({"success": False})
</code></pre>
<p><strong>Form</strong></p>
<pre><code><form class="row" id="frmLogin">
    {% csrf_token %}
    <input type="text" name="username" />
    <input type="password" name="password" />
    <button class="btn btn-primary">Submit</button>
</form>
</code></pre>
<p><strong>JQuery</strong></p>
<pre><code>$(document).ready(function() {
    $("form#frmLogin").validate({
        rules: {
            "username": {
                "required": true
            },
            "password": {
                "required": true
            }
        },
        submitHandler: function(form) {
            $.post({
                url: "/authenticate/",
                contentType: "application/json;charset=utf-8;",
                dataType: "json",
                data: JSON.stringify({
                    "username": $(form).find("[name='username']").val(),
                    "password": $(form).find("[name='password']").val(),
                    "csrfmiddlewaretoken": $(form).find("[name='csrfmiddlewaretoken']").val()
                }),
                async: true,
                success: function(response) {
                }
            });
        }
    });
});
</code></pre>
<p><strong>Payload Info</strong></p>
<p><a href="https://i.sstatic.net/XiMEL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XiMEL.png" alt="enter image description here" /></a></p>
<p><strong>Same Origin Message</strong></p>
<p><a href="https://i.sstatic.net/vtI6h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vtI6h.png" alt="enter image description here" /></a></p>
<p><strong>Error Message</strong></p>
<p><a href="https://i.sstatic.net/1TVs4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1TVs4.png" alt="enter image description here" /></a></p>
|
<python><jquery><django>
|
2023-03-15 02:07:48
| 1
| 10,163
|
Pankaj
|
75,739,919
| 15,975,987
|
Count number of occurrences in Pandas column with some multiples
|
<p>I have a Pandas dataframe with a column called <code>specialty</code> that looks like this:</p>
<pre><code>0     1,5
1     1
2     1
3     1
4     1
5     1,5
6     3
7     3
8     1
9     1,3
10    1
11    1,2,4,6
</code></pre>
<p>I want to count the number of occurrences of each number. If there is more than one value for a certain cell, I would want it to be added to the count of <code>7</code>, but I'd also like each number in that cell to be added to its respective total. For example, for this data my desired output would look something like this:</p>
<pre><code>1: 10
2: 1
3: 3
4: 1
5: 2
6: 1
7: 4
</code></pre>
<p>The <code>7</code> here is referring to the fact that there are 4 rows with multiple values. However, each of those numbers should be added to the overall count for each number.</p>
<p>Here's what I have so far, but it only counts multiples instead of also each number in the multiple. I'm also computing the percentage of each value.</p>
<pre><code># Convert specialties with more than 1 to item 7 (multiple)
part_chars['specialty'] = np.where(part_chars['specialty'].str.len() > 1, 7, part_chars['specialty'])
counts = part_chars.specialty.value_counts()
percs = part_chars.specialty.value_counts(normalize=True)*100
pd.concat([counts,percs], axis=1, keys=['count', 'percentage'])
</code></pre>
<p>And the output:</p>
<pre><code> count percentage
7 213 40.804598
1 211 40.421456
3 39 7.471264
5 23 4.406130
6 13 2.490421
4 12 2.298851
2 11 2.107280
</code></pre>
<p>I'm not sure how to make the counts work the way I want them. Would appreciate any help with this.</p>
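<p>A sketch that avoids the destructive <code>np.where</code> overwrite: split and explode for the per-number totals, then count multi-value rows separately for the <code>7</code> bucket (the toy series below reproduces the sample column):</p>

```python
import pandas as pd

specialty = pd.Series(['1,5', '1', '1', '1', '1', '1,5',
                       '3', '3', '1', '1,3', '1', '1,2,4,6'])

split = specialty.str.split(',')
counts = split.explode().value_counts().sort_index()  # every number counted once per row
counts.loc['7'] = (split.str.len() > 1).sum()         # rows holding multiple values
```

<p>Percentages then come from <code>counts / len(specialty) * 100</code> on the untouched column, since nothing was overwritten.</p>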
|
<python><pandas><numpy><where-clause><series>
|
2023-03-15 02:04:39
| 2
| 429
|
Hefe
|
75,739,877
| 2,783,767
|
How to make sure timerseriesAI/tsai uses GPU
|
<p>I am using tsai 0.3.5 for time-series classification, but training an epoch is taking an unusually long time. Can somebody please let me know how to make sure that tsai uses the GPU and not the CPU?</p>
<p>Please find below my code.</p>
<pre><code>import os
os.chdir(os.path.dirname(os.path.abspath(__file__)))
from pickle import load
from multiprocessing import Process
import numpy as np
from tsai.all import *
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve

num_datesets = 1
for dataset_idx in range(num_datesets):
    X_train = load(open(r"X_train_" + str(dataset_idx) + ".pkl", 'rb'))
    y_train = load(open(r"y_train_" + str(dataset_idx) + ".pkl", 'rb'))
    X_test = load(open(r"X_test_" + str(dataset_idx) + ".pkl", 'rb'))
    y_test = load(open(r"y_test_" + str(dataset_idx) + ".pkl", 'rb'))
    print("dataset loaded")
    learn = TSClassifier(X_train, y_train, arch=InceptionTimePlus, arch_config=dict(fc_dropout=0.5))
    print("training started")
    learn.fit_one_cycle(5, 0.0005)
    learn.export("tsai_" + str(dataset_idx) + ".pkl")
    probas, target, preds = learn.get_X_preds(X_test, y_test)
    precision, recall, thresholds = precision_recall_curve(target, probas)
    plt.clf()
    plt.fill_between(recall, precision)
    plt.ylabel("Precision")
    plt.xlabel("Recall")
    plt.title("tsai_" + str(dataset_idx) + "_precision_recall_curve")
    plt.savefig("tsai_" + str(dataset_idx) + ".png")
    plt.show()
</code></pre>
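<p>A first check worth running (tsai sits on fastai/PyTorch, so GPU use ultimately depends on what <code>torch</code> sees; this is not a tsai-specific API):</p>

```python
import torch

# If this prints False, the fix is at the torch-build / CUDA-driver level,
# and no tsai setting will move training onto the GPU.
cuda_ok = torch.cuda.is_available()
device = torch.device("cuda" if cuda_ok else "cpu")
print(cuda_ok, device)
```

<p>Once <code>torch.cuda.is_available()</code> is True, the fastai Learner that tsai wraps should pick up the default CUDA device; watching <code>nvidia-smi</code> during an epoch confirms it.</p>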
|
<python><pytorch><time-series>
|
2023-03-15 01:54:39
| 1
| 394
|
Granth
|
75,739,619
| 11,163,122
|
Library housing CNN shape calculation in a function?
|
<p>I find myself continually re-implementing the same free function for a convolutional neural network's output shape, given hyperparameters. I am growing tired of re-implementing this function and occasionally also unit tests.</p>
<blockquote>
<p><a href="https://i.sstatic.net/DDQh6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DDQh6.png" alt="pytorch nn.Conv3d shape formulae" /></a></p>
</blockquote>
<p><a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html" rel="nofollow noreferrer">source</a></p>
<p>Is there a library (preference to <code>pytorch</code>, <code>tensorflow</code>, or <code>numpy</code>) that houses a function that implements this formula?</p>
<hr />
<p>Here is what I just implemented for a PyTorch-based project using Python 3.10+, but I would rather just import this.</p>
<pre class="lang-py prettyprint-override"><code>def conv_conversion(
    in_shape: tuple[int, ...],
    kernel_size: int | tuple[int, ...],
    padding: int | tuple[int, ...] = 0,
    dilation: int | tuple[int, ...] = 1,
    stride: int | tuple[int, ...] = 1,
) -> tuple[int, ...]:
    """Perform a Conv layer calculation matching nn.Conv's defaults."""

    def to_tuple(value: int | tuple[int, ...]) -> tuple[int, ...]:
        return (value,) * len(in_shape) if isinstance(value, int) else value

    k, p = to_tuple(kernel_size), to_tuple(padding)
    dil, s = to_tuple(dilation), to_tuple(stride)
    return tuple(
        int((in_shape[i] + 2 * p[i] - dil[i] * (k[i] - 1) - 1) / s[i] + 1)
        for i in range(len(in_shape))
    )
</code></pre>
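<p>For what it's worth, a quick sanity check of that formula (restated compactly with scalar hyperparameters so the snippet is self-contained; same arithmetic as the code above):</p>

```python
def conv_out(in_shape, kernel_size, padding=0, dilation=1, stride=1):
    # per-dimension PyTorch Conv output-shape formula, scalar hyperparameters
    return tuple(
        (n + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1
        for n in in_shape
    )

same = conv_out((32, 32), kernel_size=3, padding=1)  # "same"-style conv
shrunk = conv_out((32, 32), kernel_size=3)           # unpadded conv shrinks by 2
```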
|
<python><numpy><tensorflow><pytorch><conv-neural-network>
|
2023-03-15 00:50:04
| 1
| 2,961
|
Intrastellar Explorer
|
75,739,529
| 6,778,374
|
Can a Python program query LC_CTYPE locale settings?
|
<p>In C++ (and C) you can use the functions in <a href="https://en.cppreference.com/w/cpp/header/cwctype" rel="nofollow noreferrer"><cwctype></a> to query LC_CTYPE character classes for a given character in the active locale.</p>
<p>Is there a way to access these options in Python? <strong>Specifically, can I test whether a given character is blank in the active locale?</strong></p>
<p>Basically, I want a Python equivalent of this C++ example:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <iostream>
#include <cwctype>
#include <clocale>
// From: https://en.cppreference.com/w/cpp/string/wide/iswblank
int main()
{
wchar_t c = L'\u3000'; // Ideographic space (' ')
std::cout << std::hex << std::showbase << std::boolalpha;
std::cout << "in the default locale, iswblank(" << (std::wint_t)c << ") = "
<< (bool)std::iswblank(c) << '\n';
// output: in the default locale, iswblank(0x3000) = false
std::setlocale(LC_ALL, "en_US.utf8");
std::cout << "in Unicode locale, iswblank(" << (std::wint_t)c << ") = "
<< (bool)std::iswblank(c) << '\n';
// output: in Unicode locale, iswblank(0x3000) = true
}
</code></pre>
<p>I have not been able to find a way to do this in Python. The <a href="https://docs.python.org/3/library/locale.html" rel="nofollow noreferrer">locale module</a> provides many functions, but seems to completely miss LC_CTYPE queries.</p>
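For what it's worth, here is a sketch of the closest built-in behavior I have found: Python's <code>str</code> classification methods are Unicode-based and locale-independent, so while they do not query LC_CTYPE, they do classify U+3000 as whitespace regardless of the process locale (note <code>isspace()</code> is the "space" class, which is broader than "blank"):

```python
# str.isspace() uses Unicode properties and ignores the process locale,
# so U+3000 (ideographic space) is always classified as whitespace:
assert "\u3000".isspace()

# The locale module only exposes LC_CTYPE for getting/setting the locale
# name, not for per-character class queries:
import locale
print(locale.setlocale(locale.LC_CTYPE))  # current LC_CTYPE locale name
```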
|
<python><locale>
|
2023-03-15 00:26:42
| 1
| 675
|
NeatNit
|
75,739,475
| 1,114,872
|
reveal_type gives me a type, but it does not exist
|
<p>So, I have code that type-checks, but when I try to run it, it complains that the type does not exist:</p>
<pre><code>import pyrsistent
a = pyrsistent.pset([1,2,3])
#reveal_type(a) #mypy file.py, gives type pyrsistent.typing.PSet[builtins.int]
#b : pyrsistent.typing.PSet[int] = a
#when running: AttributeError: module 'pyrsistent' has no attribute 'typing'
#mypy does not complain, though
print ( a )
</code></pre>
<p>The line that fails is the assignment to the variable b (commented above)</p>
<p>How can I import the type, or anotate the code differently, to fix this problem?</p>
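One pattern I know works (a sketch, assuming the annotation is only needed for static checking) is guarding the import with <code>typing.TYPE_CHECKING</code> and quoting the annotation, so nothing from <code>pyrsistent.typing</code> is resolved at runtime:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # only evaluated by mypy/type checkers, never at runtime
    from pyrsistent.typing import PSet

def smallest(s: "PSet[int]") -> int:
    # the quoted annotation keeps runtime free of pyrsistent.typing
    return min(s)

print(smallest({3, 1, 2}))
```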
|
<python><python-import><mypy><python-typing>
|
2023-03-15 00:14:28
| 1
| 1,512
|
josinalvo
|
75,739,438
| 4,507,231
|
AttributeError: module 'pandas' has no attribute 'core' in Statsmodels - broken Conda environment inside Pycharm
|
<p>I've been trying to upgrade Pandas from 1.4.1 to 1.4.4 inside PyCharm, with the project pointing to a Conda environment. I decided to create a new Conda environment and do it that way. That was a big mess, and "something" has affected the original Conda environment, which I had left well alone - and rightly so. I'm trying to get the original environment in PyCharm back to executing my code. Unfortunately, I'm seeing the following. When googled, this kind of error turns out to be very specific to the package in question, so it is hard to abstract anything useful from existing answers.</p>
<p>I'm not sure what information is going to help. But I'll provide what I can when asked. Firstly, this is the error.</p>
<pre><code>File "C:\Users\yewro\PycharmProjects\HCDD\GUI\MainGUI.py", line 3, in <module>
from GUI.AnalysisWindow import AnalysisWindow
File "C:\Users\yewro\PycharmProjects\HCDD\GUI\AnalysisWindow.py", line 5, in <module>
from GUI.Controllers.AnalysisController import AnalysisController
File "C:\Users\yewro\PycharmProjects\HCDD\GUI\Controllers\AnalysisController.py", line 13, in <module>
from Engines.DFBuilders.CohortDFBuilder import *
File "C:\Users\yewro\PycharmProjects\HCDD\Engines\DFBuilders\CohortDFBuilder.py", line 4, in <module>
import statsmodels.formula.api as smf
File "C:\Users\yewro\anaconda3\envs\HCDD\lib\site-packages\statsmodels\formula\__init__.py", line 2, in <module>
from .formulatools import handle_formula_data
File "C:\Users\yewro\anaconda3\envs\HCDD\lib\site-packages\statsmodels\formula\formulatools.py", line 2, in <module>
from patsy import dmatrices, NAAction
File "C:\Users\yewro\anaconda3\envs\HCDD\lib\site-packages\patsy\__init__.py", line 77, in <module>
import patsy.highlevel
File "C:\Users\yewro\anaconda3\envs\HCDD\lib\site-packages\patsy\highlevel.py", line 19, in <module>
from patsy.design_info import DesignMatrix, DesignInfo
File "C:\Users\yewro\anaconda3\envs\HCDD\lib\site-packages\patsy\design_info.py", line 31, in <module>
from patsy.util import atleast_2d_column_default
File "C:\Users\yewro\anaconda3\envs\HCDD\lib\site-packages\patsy\util.py", line 53, in <module>
_pandas_is_categorical_dtype = getattr(pandas.core.common,
AttributeError: module 'pandas' has no attribute 'core'
</code></pre>
<p>Python is 3.9, and pandas is stuck at 1.4.1. Statsmodels is 0.13.3.</p>
<p>I would be so grateful for any help.</p>
|
<python><pandas><pycharm><conda><statsmodels>
|
2023-03-15 00:06:59
| 0
| 1,177
|
Anthony Nash
|
75,739,383
| 318,938
|
python access /Library on macos
|
<p>I am trying to write an installation file to the /Library folder if the user doesn't already have it. Running the code with <code>sudo python ...</code> works.</p>
<p>However using pyinstaller and creating a packaged app gives me an Error 13 permission denied.</p>
<blockquote>
<p>[Errno 13] Permission denied: '/Library/...')</p>
</blockquote>
<p>Is there a key value in Info.plist to grant full access to files? I have manually added full disk access to the app without luck. What are the necessary security measures needed to be able to get this to run?</p>
<p>Here is the Info.plist</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundleDisplayName</key>
<string>correction</string>
<key>CFBundleExecutable</key>
<string>correction</string>
<key>CFBundleIconFile</key>
<string>icon-windowed.icns</string>
<key>CFBundleIdentifier</key>
<string>com.me.correction</string>
<key>CFBundleInfoDictionaryVersion</key>
<string>6.0</string>
<key>CFBundleName</key>
<string>correction</string>
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleShortVersionString</key>
<string>0.0.0</string>
<key>NSCameraUsageDescription</key>
<string>Access Camera</string>
<key>NSHighResolutionCapable</key>
<true/>
<key>NSSystemAdministrationUsageDescription</key>
<string>Install Files For Driver</string>
</dict>
</plist>
</code></pre>
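As a side note (my own sketch, with a hypothetical app path): packaged apps typically cannot write to /Library without elevated rights, so one pragmatic fallback I am considering is probing for write access and using the per-user ~/Library instead:

```python
import os

SYSTEM_DIR = "/Library/Application Support/MyApp"  # hypothetical install target
USER_DIR = os.path.expanduser("~/Library/Application Support/MyApp")

# Fall back to the per-user Library when the system one is not writable.
target = SYSTEM_DIR if os.access("/Library", os.W_OK) else USER_DIR
print("installing to:", target)
```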
|
<python><macos><info.plist>
|
2023-03-14 23:54:39
| 1
| 2,842
|
msj121
|
75,739,362
| 2,913,864
|
number of groups in a polars groupby
|
<p>For a polars groupby object, what is the equivalent of the pandas <code>ngroups</code> attribute on a pandas groupby object? Or, in any case, what is the idiomatic way to get the number of groups in a polars groupby object?</p>
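For context, here is a plain-Python stand-in with toy data and a hypothetical <code>key</code> column (my sketch, since the point is the semantics, not the API): the group count of a groupby is just the number of distinct key values, which in polars I would expect to reach via something like <code>df["key"].n_unique()</code>:

```python
# The number of groups equals the number of distinct grouping-key values.
rows = [("a", 1), ("b", 2), ("a", 3), ("c", 4)]
n_groups = len({key for key, _ in rows})
print(n_groups)  # 3 distinct keys: a, b, c
```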
|
<python><dataframe><group-by><python-polars>
|
2023-03-14 23:47:18
| 1
| 9,650
|
Alan
|
75,739,323
| 9,749,972
|
Why doesn't numpy ndarray have a __dict__ attribute?
|
<p>I created a numpy array and tried to get its attributes with <code>__dict__</code>, but failed.</p>
<pre><code>>>> import numpy as np
>>> a = np.arange(12)
>>> a.reshape(3,4)
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> type(a)
<class 'numpy.ndarray'>
>>> a.__dict__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'numpy.ndarray' object has no attribute __dict__. Did you mean: '__dir__'?
>>> '__dict__' in dir(a)
False
</code></pre>
<p>The <a href="https://stackoverflow.com/questions/19907442/explain-dict-attribute">document</a> says that <code>obj.__dict__</code> is used to store an object's (writable) attributes. I know I can use <code>dir(a)</code> and <code>getattr()</code> to fetch all the <a href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html" rel="nofollow noreferrer">attributes</a> {k,v}. Also, all the attributes such as shape and itemsize can be derived from the array constructor, and besides, <a href="https://www.nature.com/articles/s41586-020-2649-2.pdf" rel="nofollow noreferrer">strides</a> can be calculated like below:</p>
<pre><code>def calStrides(self, shape, itemSize):
loops = len(shape)
print(f"calculating strides... loop times = {loops}")
lst = []
for i in range(loops):
y = 1 # first multiplicand
for j in range(i + 1, loops):
y *= shape[j]
lst.append(y * itemSize)
self.cal_strides = tuple(lst)
</code></pre>
<p>Does this make them 'read-only' (as opposed to assignment or 'writable')? Is that the reason why there is no <code>__dict__</code> provided for data inspection?</p>
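For illustration (my own sketch): pure-Python classes get a per-instance <code>__dict__</code> by default, but defining <code>__slots__</code> suppresses it — the same situation <code>ndarray</code> is in as a C-implemented type whose attributes live in a fixed C struct rather than a Python dict:

```python
class Plain:
    pass

class Slotted:
    __slots__ = ("x",)  # fixed attribute layout, no per-instance dict

assert hasattr(Plain(), "__dict__")        # default: writable attribute dict
assert not hasattr(Slotted(), "__dict__")  # __slots__ removes it, like ndarray
```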
|
<python><numpy>
|
2023-03-14 23:38:48
| 1
| 691
|
Leon Chang
|
75,739,308
| 2,373,145
|
AEAD authentication with huge input that doesn't fit into RAM
|
<p>Say I've got a file on disk that doesn't fit in the computer's main memory. The file consists of two sections, a small section of less than 100 bytes at the beginning of the file, and a large section, consisting of the rest of the file.</p>
<p>I need to use AEAD (either ChaCha20-Poly1305 or AES-GCM) to encrypt the small section and authenticate the large section.</p>
<p>As far as I understand AEAD, in principle it should be possible to do this by loading the file into RAM piece-by-piece in small parts.</p>
<p>My problem is that the Python Cryptography AEAD API doesn't seem to be designed for this usecase, or at least it's missing similar examples.</p>
<p>This is the API documentation for ChaCha20-Poly1305: <a href="https://cryptography.io/en/latest/hazmat/primitives/aead/#cryptography.hazmat.primitives.ciphers.aead.ChaCha20Poly1305" rel="nofollow noreferrer">https://cryptography.io/en/latest/hazmat/primitives/aead/#cryptography.hazmat.primitives.ciphers.aead.ChaCha20Poly1305</a></p>
<p>The docs indicate that the <code>associated_data</code> (input that needs to be authenticated, but not encrypted) parameter must be a bytes-like object, defined here: <a href="https://cryptography.io/en/latest/glossary/#term-bytes-like" rel="nofollow noreferrer">https://cryptography.io/en/latest/glossary/#term-bytes-like</a></p>
<p>They further point to the Python Buffer Protocol: <a href="https://docs.python.org/3/c-api/buffer.html" rel="nofollow noreferrer">https://docs.python.org/3/c-api/buffer.html</a></p>
<p>Ideas for a solution:</p>
<ol>
<li><p>Hash the large section and only authenticate the resulting digest. I'm wary of doing this as I'm not entirely sure about the security implications, although it seems like it would be OK.</p>
</li>
<li><p>Construct a bytes-like type that would represent the large section without trying to load it into memory all at once. I have little experience with Python so I'm not sure how to proceed in this direction.</p>
</li>
</ol>
<p>EDIT: the first idea is fine: <a href="https://security.stackexchange.com/questions/269129/aead-authenticating-a-digest-of-my-data-instead-the-data-itself">https://security.stackexchange.com/questions/269129/aead-authenticating-a-digest-of-my-data-instead-the-data-itself</a></p>
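A minimal sketch of idea 1 (my own code, assuming SHA-256 as the digest): stream the large section through a hash in chunks, then pass only the 32-byte digest as <code>associated_data</code>:

```python
import hashlib

def digest_section(path: str, offset: int, chunk_size: int = 1 << 20) -> bytes:
    """Stream-hash the file from `offset` to EOF without loading it into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        f.seek(offset)
        while True:
            buf = f.read(chunk_size)
            if not buf:
                break
            h.update(buf)
    return h.digest()

# The digest then becomes the `associated_data` argument of
# ChaCha20Poly1305.encrypt(nonce, small_section, digest) instead of the
# multi-gigabyte section itself.
```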
|
<python><python-3.x><cryptography><pycrypto><aes-gcm>
|
2023-03-14 23:35:09
| 0
| 363
|
user2373145
|
75,739,294
| 5,054,505
|
Can I use a Google credential json file for authentication in a Dataflow job?
|
<p>I want to use a credential json file (or string) to authenticate a Beam job to read from a GCS bucket.
Notably, the credentials are provided by a user (in an existing process so I'm stuck using the json file rather than a service account in my own GCP project).</p>
<h2>What I've tried</h2>
<ul>
<li>Using <code>fsspec</code>/<code>gcsfs</code>: this seems to work, but I'm worried about scalability and I assume Beam's file io is more optimized to scale and plays nicer with Beam objects</li>
<li>Creating my own "storage_client" to pass to <code>apache_beam.io.gcp.gcsio.GcsIO</code>, trying to go off <a href="https://beam.apache.org/releases/pydoc/current/apache_beam.io.gcp.gcsio.html" rel="nofollow noreferrer">these docs</a>. The storage client seems to be Beam's own internal client:
<ul>
<li>I've tried passing a <code>google.cloud.storage.Client</code> for this, and the client listed in the docs is deprecated with lots of things written in red on the repo, so I'd like to avoid using it.</li>
<li>I've tried passing <code>google.auth</code> credentials to it in a few different forms: the raw dict (read by parsing the json), <code>google.auth.load_credentials_from_file</code>-based creds, and <a href="https://google-auth.readthedocs.io/en/latest/reference/google.auth.impersonated_credentials.html" rel="nofollow noreferrer">impersonated credentials</a></li>
</ul>
</li>
<li>Creating <code>from apache_beam.options.pipeline_options.GoogleCloudOptions</code> to pass to a <code>apache_beam.io.gcp.gcsfilesystem.GCSFileSystem</code>. I found <a href="https://beam.apache.org/releases/pydoc/current/apache_beam.io.gcp.gcsfilesystem.html" rel="nofollow noreferrer">these docs from beam</a> and <a href="https://cloud.google.com/dataflow/docs/reference/pipeline-options" rel="nofollow noreferrer">these docs from google</a>, but I haven't been able to piece together how to pass <em>json credentials</em> for authentication.</li>
<li>I've dabbled in the idea of using <a href="https://cloud.google.com/security-key-management" rel="nofollow noreferrer">Google KMS</a> for passing the creds securely, but (as far as I can tell) it doesn't solve my problem for getting Beam to use the credentials.</li>
</ul>
<h2>Small-ish Example</h2>
<p>It'd be nice to get something like this working:</p>
<pre><code>from apache_beam.options.pipeline_options import GoogleCloudOptions, PipelineOptions
import apache_beam as beam
from apache_beam.io.gcp.gcsio import GcsIO
from apache_beam.io.gcp.gcsfilesystem import GCSFileSystem
# need to support "type": "service_account", but would be nice to be able to support "type": "authorized_user" too, if possible
cred_file = "/path/to/file.json"
# not sure which of these is preferable for this...
options = PipelineOptions()
google_cloud_options = options.view_as(GoogleCloudOptions)
# voodoo
with beam.Pipeline(options=options) as p:
gcs = GcsIO()
gcs.open("gs://path/to/file.txt")
# or
gcs = GCSFileSystem(options)
gcs.open("gs://path/to/file.txt")
</code></pre>
<p>Happy to provide any other info that could help, as you can see I've been at this for a while now.</p>
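For completeness, the one mechanism I know works without touching Beam internals (a sketch; the path is the placeholder from my example above): Application Default Credentials. Google client libraries, including the GCS client, honor the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable when no explicit credentials are supplied:

```python
import os

# Point ADC at the user-supplied service-account key *before* constructing
# the pipeline; Google client libraries pick it up automatically.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/file.json"
print(os.environ["GOOGLE_APPLICATION_CREDENTIALS"])
```

The caveat is that this is process-wide, which may not fit a multi-tenant service where each user brings their own key.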
|
<python><google-cloud-storage><google-cloud-dataflow><apache-beam><apache-beam-io>
|
2023-03-14 23:31:56
| 1
| 610
|
Patrick
|
75,739,264
| 7,966,156
|
How to interact with PyBullet GUI in jupyter notebook?
|
<p>I used the following code in a jupyter notebook</p>
<pre class="lang-py prettyprint-override"><code>p.connect(p.GUI)
</code></pre>
<p>which creates a popup window that looks like this:</p>
<p><a href="https://i.sstatic.net/XiUez.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XiUez.png" alt="enter image description here" /></a></p>
<p>Unfortunately, I can't interact with the window or the contents at all. Every time I scroll over it I get the Mac OS spinning rainbow wheel. I can't seem to even close it as I can't click on the red eXit button.</p>
<p>My questions:</p>
<p>1.) How do I just close the window? Do I have to do it using something like <code>p.connect(p.quit)</code>?</p>
<p>2.) Is there a way I can make it interactable? For example being able to click on the "Test" tab?</p>
|
<python><macos><jupyter-notebook><popup><pybullet>
|
2023-03-14 23:27:11
| 0
| 628
|
Nova
|
75,739,211
| 952,870
|
function to validate dates from a list and return a tuple with findings
|
<p>I need help to create a function that receives the 2 values below and returns a tuple with 2 lists inside as detailed below.</p>
<pre><code>initial_date = date(2021, 11, 30)
today = date.today()
balance_dates = {
1: date(2020, 5, 31), 2: date(2020, 6, 20), 3: date(2020, 6, 20),
4: date(2020, 8, 30), 5: date(2020, 5, 31), 6: date(2020, 12, 31),
7: date(2020, 5, 31), 8: date(2020, 11, 30), 9: date(2023, 2, 28),
10: date(2024, 5, 31), 11: date(2023, 11, 30), 12: date(2023, 2, 28),
}
</code></pre>
<p><strong>Function:</strong> check_missing_or_wrong_balances(initial_date, balance_date) (<strong>Returns a tuple</strong> with 2 lists)</p>
<p><strong>Description of the tuple</strong>:</p>
<ol>
<li><p>(list 1) Check whether <code>balance_date</code> has at least one date representing the very last day of each month from <code>initial_date</code> to the current date; if not, create/append the missing month (the full date with the last day, YYYY-mm-dd) to a list and return it as the first value of the tuple.</p>
</li>
<li><p>(list 2) If the date tested above isn't the last day of the given month, create/append the id of that date to another list, returned as the second value of the tuple. Additionally, add the ids of future dates (after the current date) and the ids of duplicated dates, keeping only the first occurrence found (e.g. if 3 identical dates were found, add the ids of 2 occurrences).</p>
</li>
</ol>
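A building block for list 1 (my own sketch, stdlib only): <code>calendar.monthrange</code> gives the length of any month, which makes both the "is this the month's last day?" test and the month-by-month walk straightforward:

```python
import calendar
from datetime import date

def last_day_of_month(d: date) -> date:
    # monthrange returns (weekday_of_first_day, number_of_days_in_month)
    return d.replace(day=calendar.monthrange(d.year, d.month)[1])

assert last_day_of_month(date(2020, 2, 10)) == date(2020, 2, 29)  # leap year
assert last_day_of_month(date(2021, 11, 30)) == date(2021, 11, 30)
```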
|
<python>
|
2023-03-14 23:15:00
| 1
| 2,815
|
Pabluez
|
75,739,170
| 8,030,874
|
Can't find Python on MACBOOK M1 even python being installed
|
<p>I'm getting the error
<strong>"Error: Can't find Python executable "python", you can set the PYTHON env variable"</strong></p>
<p>when running the <strong>npm install</strong> command on a React project in Visual Studio Code on <strong>Macbook M1</strong>.</p>
<p>Python is installed with brew but it still gives the error:</p>
<blockquote>
<p>brew install python</p>
</blockquote>
<blockquote>
<p>Warning: python@3.11 3.11.2_1 is already installed and up-to-date.
To reinstall 3.11.2_1, run:
brew reinstall python@3.11</p>
</blockquote>
<p>But I can't run the <code>python --version</code> command:</p>
<blockquote>
<p>▶ python --version</p>
<p>zsh: command not found: python</p>
</blockquote>
<p>Errors</p>
<blockquote>
<p>gyp verb check python checking for Python executable "python2" in the PATH
gyp verb <code>which</code> failed Error: not found: python2</p>
<p>...</p>
<p>gyp verb check python checking for Python executable "python" in the PATH
gyp verb <code>which</code> failed Error: not found: python</p>
</blockquote>
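For reference, here is a sketch of the usual workaround (adjust to your setup): Homebrew installs the interpreter as <code>python3</code> and does not create a <code>python</code> alias, while node-gyp here is probing for <code>python</code>/<code>python2</code>, so pointing node-gyp at <code>python3</code> explicitly via the <code>PYTHON</code> environment variable it mentions is the common fix:

```shell
# Homebrew provides python3, not python
python3 --version

# Tell node-gyp which interpreter to use for this shell session
export PYTHON="$(command -v python3)"
echo "$PYTHON"
```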
<p>Package.json</p>
<pre><code>{ "name": "my-project",
"version": "1.0.0",
"private": true,
"dependencies": {
"@ckeditor/ckeditor5-build-classic": "^18.0.0",
"@ckeditor/ckeditor5-react": "^2.1.0",
"@devexpress/dx-react-core": "^2.7.5",
"@devexpress/dx-react-grid": "^2.7.5",
"@devexpress/dx-react-grid-bootstrap4": "^2.7.5",
"axios": "^0.21.1",
"bootstrap": "^4.6.0",
"cpf-cnpj-validator": "^1.0.3",
"crypto-js": "^4.0.0",
"custom-error": "^0.2.1",
"draft-js": "^0.11.7",
"draft-js-export-html": "^1.4.1",
"draft-js-import-html": "^1.4.1",
"firebase": "^8.4.1",
"firebase-tools": "^7.16.2",
"fs": "0.0.1-security",
"google-maps-react": "^2.0.6",
"install": "^0.13.0",
"jquery": "^3.6.0",
"js-base64": "^2.6.4",
"js-md5": "^0.7.3",
"moment": "^2.29.1",
"node-sass": "^4.14.1",
"qrcode.react": "^0.9.3",
"react": "^16.14.0",
"react-autosuggest": "^9.4.3",
"react-confirm-alert": "^2.7.0",
"react-currency-input": "^1.3.6",
"react-datepicker": "^2.16.0",
"react-dom": "^16.14.0",
"react-file-viewer": "^1.2.1",
"react-filter-search": "^1.0.11",
"react-html-parser": "^2.0.2",
"react-input-mask": "^2.0.4",
"react-intl-currency-input": "^0.2.6",
"react-lazy-load-image-component": "^1.5.1",
"react-loading-skeleton": "^2.2.0",
"react-lottie": "^1.2.3",
"react-notification-alert": "0.0.12",
"react-places-autocomplete": "^7.3.0",
"react-redux": "^7.2.3",
"react-responsive-carousel": "^3.2.18",
"react-router-dom": "^5.2.0",
"react-scripts": "^3.4.4",
"react-select": "^3.2.0",
"reactstrap": "^8.9.0",
"redux": "^4.0.5",
"valid-url": "^1.0.9"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject",
"deploy:dev": "firebase deploy --only hosting:ingresso-gospel-dev",
"deploy": "firebase deploy",
"build:deploy": "npm run build && node node_modules/react-build-cache && firebase deploy"
},
"eslintConfig": {
"extends": "react-app"
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
},
"devDependencies": {
"eslint-plugin-react-hooks": "^2.5.1",
"react-build-cache": "^1.1.4"
}
}
</code></pre>
|
<python><reactjs><npm><apple-m1>
|
2023-03-14 23:08:20
| 1
| 386
|
Sabrina B.
|
75,738,915
| 258,483
|
How to join arbitrary parts of url in Python?
|
<p><code>urljoin</code> corrupts the data</p>
<pre><code>from urllib.parse import urljoin
base = "https://dummy.restapiexample.com/api/v1"
tail = "/employees"
urljoin(base, tail)
</code></pre>
<p>returns</p>
<pre><code>'https://dummy.restapiexample.com/employees'
</code></pre>
<p>eating "/api/v1".</p>
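For context, this is <code>urljoin</code>'s documented RFC 3986 resolution rather than corruption: a reference starting with <code>/</code> replaces the whole path, and without a trailing <code>/</code> the base's last path segment is dropped. Keeping the trailing slash on the base and dropping the leading one from the tail gives the expected result:

```python
from urllib.parse import urljoin

# trailing slash on base + no leading slash on tail => segment is appended
base = "https://dummy.restapiexample.com/api/v1/"
print(urljoin(base, "employees"))
# https://dummy.restapiexample.com/api/v1/employees
```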
|
<python><url>
|
2023-03-14 22:25:11
| 1
| 51,780
|
Dims
|
75,738,739
| 17,158,703
|
Repeat-fill 3D numpy array with matching planes
|
<p>I have two 3D numpy arrays of large size <code>(~1.2b elements, e.g. 80k x 500 x 30)</code>.
These two arrays have the same size in two dimensions but differ in the third, which relates to timestamps. I also have two arrays containing the timestamp values corresponding to the planes along the size-differing axis. I would like to stretch/broadcast the shorter array - filling each gap by repeating the last plane that contained values before it - so that it reaches the same size as the larger array and the two can then be summed. The "gaps" and the "last plane" are determined by how the timestamp arrays, which relate to the data arrays, match up.</p>
<p>Here is the logic illustrated (has black font, for dark mode version see further below):</p>
<p><a href="https://i.sstatic.net/gb0Hw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gb0Hw.png" alt="enter image description here" /></a></p>
<hr />
<h2>Timestamp Relationship between Arrays</h2>
<p>Along the axis in which the arrays differ in size, the corresponding values in the timestamp arrays are dates. However, the granularity of these timestamps can vary. For example, the timestamps relating to the larger array may be a series of consecutive days, and the timestamps relating to the shorter array may be every Friday within the same period, but ending short or beginning late, i.e. missing the first 5 Fridays or the last 10 Fridays when comparing to the start and end of the larger array.</p>
<p>When there is such a "gap", I would like the (temporally speaking) last available plane of the 3D array to be stretched over that gap until the next available plane has values.</p>
<p>Note, the Fridays example was just one case. It could also be that the larger array corresponds to every Monday in a certain period and the shorter array is the first day of every month in that period, but ending short or starting late.</p>
<p>Another important note might be that the axis of the 3D arrays relating to the timestamps has the smallest size of all the axes. Meaning, for example, array 1 might be (80k, 500, 180) and array 2 might be (80k, 500, 50). So looping over that axis and then repeating planes of size (80k, 500, 1) might work. But I don't know how.</p>
<p>Also, the shorter array is fully contained in the larger.</p>
<h2>Code Example</h2>
<p>Let's say there are two arrays A and B</p>
<pre><code>m, k, n, v = 3, 4, 10, 4
A = np.random.randint(10, size=(m, n, k))
B = np.random.randint(10, size=(m, v, k))
A_timestamps = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
B_timestamps = [2, 4, 7, 8]
</code></pre>
<p>However, although B_timestamps is fully contained, its lowest timestamp is greater than the lowest timestamp in A. In other words, B starts late.</p>
<p>Then the reindexing with this method</p>
<pre><code>equals = np.equal.outer(A_timestamps, B_timestamps)
filled = np.maximum.accumulate(equals, axis=0)
reindex = len(B_timestamps) - np.argmax(filled[:, ::-1], axis=1) - 1
</code></pre>
<p>leads to</p>
<pre><code># array([4, 1, 1, 2, 2, 2, 3, 4, 4, 4], dtype=int64)
</code></pre>
<p>starting with a 4.</p>
<p>Ideally, when reindexing the array B, I would want the gap in the front, due to B starting late, to be filled with zeros.</p>
<p>I guess I could check for the first matching timestamp first, only reindex B from there on, and then maybe use np.pad to add zeros at B's front to make up for the excluded timestamps in A.</p>
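A sketch of that plan using the example timestamps above (my own code, assuming both timestamp arrays are sorted): <code>np.searchsorted</code> directly yields, for every A timestamp, the index of the most recent B plane, with -1 marking the leading gap, and prepending one zero plane turns the whole thing into a single fancy-indexing step:

```python
import numpy as np

A_ts = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
B_ts = np.array([2, 4, 7, 8])

# Index of the latest B timestamp <= each A timestamp; -1 means "no B yet".
idx = np.searchsorted(B_ts, A_ts, side="right") - 1
print(idx)  # [-1  0  0  1  1  1  2  3  3  3]

# Prepend one zero plane to B along the time axis, then index with idx + 1:
# the -1 entries pick the zero plane, the rest repeat the real planes.
m, k, v = 3, 4, len(B_ts)
B = np.random.randint(10, size=(m, k, v))
B_padded = np.concatenate([np.zeros((m, k, 1), B.dtype), B], axis=2)
B_stretched = B_padded[:, :, idx + 1]  # shape (m, k, len(A_ts))
```

For the real 1.2b-element arrays this materializes the stretched copy, so applying the same indexing chunk-by-chunk along the first axis would keep memory bounded.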
<hr />
<p>Repeating image with white font for dark mode users:</p>
<p><a href="https://i.sstatic.net/d7Feg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d7Feg.png" alt="enter image description here" /></a></p>
|
<python><numpy><numpy-ndarray><array-broadcasting>
|
2023-03-14 21:55:05
| 2
| 823
|
Dattel Klauber
|
75,738,705
| 8,505,509
|
Google Vision Rest API - KeyError: 'responses'
|
<p>I am using the Google Vision Rest API to extract text from an image</p>
<pre class="lang-py prettyprint-override"><code>VISION_API_ENDPOINT = 'https://vision.googleapis.com/v1/images:annotate'
def get_response(self, img_bytes):
# Payload for text detection request
data = {
"requests": [{
"image": {
"content": base64.b64encode(img_bytes).decode("utf-8")
},
"features": [{
"type": "TEXT_DETECTION"
}],
"imageContext": {
"languageHints": ["en"]
}
}]
}
response = requests.post(VISION_API_ENDPOINT, headers=self.request_headers, json=data)
return response.json()
</code></pre>
<p>I was making many requests to the API to extract text from small images (128x32 size). After around 7000 requests, the following error occurred -</p>
<pre><code>Traceback (most recent call last):
File "/path/to/file", line 97, in get_labels
texts = response["responses"][0]
KeyError: 'responses'
</code></pre>
<p>The error kept occurring for all the subsequent images. Unfortunately, I did not log the response for this exception. I do not have access to the images for which this exception occurred since they were generated on-the-fly (I am working on reproducing the error). The GCP console shows that there was no error for any of the requests, so I am not sure why the "responses" key is not present. Any help would be appreciated.</p>
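Regardless of the root cause, I am now using a defensive wrapper (my own sketch) that surfaces the actual payload instead of a bare KeyError, on the assumption that failed requests come back with a top-level "error" object rather than "responses":

```python
def first_response(resp_json: dict) -> dict:
    """Return the first annotation result, or raise with the API's own payload."""
    if "responses" not in resp_json:
        # quota/auth/transport failures presumably lack a "responses" key
        raise RuntimeError(f"Vision API error payload: {resp_json.get('error', resp_json)}")
    return resp_json["responses"][0]
```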
|
<python><google-vision><vision-api>
|
2023-03-14 21:48:17
| 0
| 1,205
|
Ganesh Tata
|
75,738,643
| 4,183,877
|
Sharing models between Django apps yields: django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet
|
<p>I have a Django project with multiple apps set up like:</p>
<pre><code>myproject/
- apps/
- app1/
- app2/
</code></pre>
<p>Everything works when running <code>python manage.py runserver</code> until I need to <a href="https://stackoverflow.com/questions/4137287/sharing-models-between-django-apps?noredirect=1&lq=1">share models</a> in <code>app1</code> with <code>app2</code>. It seems this should be as simple as adding a standard import statement in the appropriate file in <code>app2</code>, e.g:</p>
<pre><code># myproject/apps/app2/callback_funcs/sub_functions.py
from apps.app1.models import SharedDataModel
</code></pre>
<p>Unfortunately, as soon as I add the import statement, I get the error <code>django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.</code></p>
<p>I tried solutions given <a href="https://stackoverflow.com/questions/36909440/exception-django-core-exceptions-appregistrynotready-apps-arent-loaded-yet">here</a>, <a href="https://stackoverflow.com/questions/67443303/python3-bot-py-django-core-exceptions-appregistrynotready-apps-arent-loaded-ye">here</a>, and <a href="https://stackoverflow.com/questions/58305225/django-core-exceptions-appregistrynotready-apps-arent-loaded-yet-when-trying?rq=1">here</a>, yet the problem persists.</p>
<p>All apps are in my <code>INSTALLED_APPS</code> list and being read correctly (again, as long as I exclude the problematic import statement above). I thought the order of the apps in the <code>INSTALLED_APPS</code> list might be an issue (I've seen instructions elsewhere that state certain apps need to be first) but this doesn't seem to affect the results.</p>
<p>I have:</p>
<pre><code>INSTALLED_APPS = [
...
'apps.app1',
'apps.app2.app',
...
]
</code></pre>
<p><code>app1</code> is a standard Django app, <code>app2</code> is a <a href="https://django-plotly-dash.readthedocs.io/en/latest/" rel="nofollow noreferrer">Django Plotly Dash</a> app and has an app file which contains:</p>
<pre><code># myproject/apps/app2/app.py
from django_plotly_dash import DjangoDash
from .layout import layout
from .callbacks import register_callbacks
app = DjangoDash("app2")
app.layout = layout
register_callbacks(app)
</code></pre>
<p>Interestingly, I imported the model into another, non-Django Plotly Dash app, and it didn't give me this error. Perhaps the issue is stemming from this library?</p>
<p>I'm running Python 3.9.16, Django 3.2.16, and Django Plotly Dash 2.1.3.</p>
<p>Full traceback is here:</p>
<pre><code>Exception in thread django-main-thread:
Traceback (most recent call last):
File "/home/ubuntu/miniconda3/envs/myproject/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/home/ubuntu/miniconda3/envs/myproject/lib/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "/home/ubuntu/miniconda3/envs/myproject/lib/python3.9/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/myproject/lib/python3.9/site-packages/django/core/management/commands/runserver.py", line 110, in inner_run
autoreload.raise_last_exception()
File "/home/ubuntu/miniconda3/envs/myproject/lib/python3.9/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception
raise _exception[1]
File "/home/ubuntu/miniconda3/envs/myproject/lib/python3.9/site-packages/django/core/management/__init__.py", line 375, in execute
autoreload.check_errors(django.setup)()
File "/home/ubuntu/miniconda3/envs/myproject/lib/python3.9/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/myproject/lib/python3.9/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/ubuntu/miniconda3/envs/myproject/lib/python3.9/site-packages/django/apps/registry.py", line 91, in populate
app_config = AppConfig.create(entry)
File "/home/ubuntu/miniconda3/envs/myproject/lib/python3.9/site-packages/django/apps/config.py", line 224, in create
import_module(entry)
File "/home/ubuntu/miniconda3/envs/myproject/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/home/ubuntu/GitHub/myproject/apps/app2/sub_module.py", line 3, in <module>
from .callbacks import register_callbacks
File "/home/ubuntu/GitHub/myproject/apps/app2/callbacks.py", line 11, in <module>
from .callback_funcs import sub_funcs
File "/home/ubuntu/GitHub/myproject/apps/app2/callback_funcs/sub_funcs.py", line 5, in <module>
from apps.app1.models import SharedDataModel
File "/home/ubuntu/GitHub/myproject/apps/app1/models.py", line 8, in <module>
class SharedDataModel(models.Model):
File "/home/ubuntu/miniconda3/envs/myproject/lib/python3.9/site-packages/django/db/models/base.py", line 108, in __new__
app_config = apps.get_containing_app_config(module)
File "/home/ubuntu/miniconda3/envs/myproject/lib/python3.9/site-packages/django/apps/registry.py", line 253, in get_containing_app_config
self.check_apps_ready()
File "/home/ubuntu/miniconda3/envs/myproject/lib/python3.9/site-packages/django/apps/registry.py", line 136, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
</code></pre>
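The traceback shows the model import running while <code>apps.populate()</code> is still executing, so the usual workaround is to defer the import until after the registry is ready — move it inside the function that needs it, or look the model up lazily with <code>django.apps.apps.get_model("app1", "SharedDataModel")</code>. The deferral pattern itself, sketched with a stdlib stand-in since the app modules here are specific to my project:

```python
def register_callbacks(app):
    # Importing inside the function body means nothing touches the model at
    # module load time; by the time the callback runs, Django's app registry
    # is populated. JSONDecoder stands in for SharedDataModel.
    from json import JSONDecoder  # e.g. from apps.app1.models import SharedDataModel
    return JSONDecoder

print(register_callbacks(None).__name__)
```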
|
<python><python-3.x><django><django-models><plotly-dash>
|
2023-03-14 21:39:20
| 0
| 1,305
|
hubbs5
|
75,738,404
| 8,769,787
|
Why do matplotlib transforms not work as expected?
|
<p>I am trying to move from data coordinates to axes coordinates in matplotlib.</p>
<p>Here is my code:</p>
<pre><code> a, b = (.5,2)
x = np.linspace(-1,4,100)
y = [psi(xv,a,b) for xv in x]
plt.figure(figsize=(10,5))
ax = plt.gca()
point = (a,1)
display = (369, 316.8)
trans1 = ax.transData.transform([0,0])
print (trans1)
trans = ax.transData.inverted().transform(trans1)
print (trans)
trans = ax.transAxes.inverted().transform(trans1)
print (trans)
plt.plot(trans[0], trans[1], 'ro')
plt.plot(0,0, 'ro', transform=ax.transData)
plt.plot(x,y, lw=3)
plt.axhline(y = 1, color = 'r', linestyle = '--', lw=3, xmax=.3)
plt.title('Psi Function', fontsize=30)
plt.yticks([0, .2, .4, .6, .8, 1],['0','.2','','','','A'], fontsize = 'x-large')
plt.xticks([-1, 0, 0.5, 2, 3, 4], ['','0', 'a', 'b', '', ''])
plt.ylabel('$\Psi$', fontsize=25)
plt.xlabel('$x$', fontsize=25)
plt.grid()
</code></pre>
<p>The crucial bit of code I am concerned about is :</p>
<pre><code> trans1 = ax.transData.transform([0,0])
print (trans1)
trans = ax.transData.inverted().transform(trans1)
print (trans)
trans = ax.transAxes.inverted().transform(trans1)
print (trans)
</code></pre>
<p>I am first transforming the data point (0,0) into screen coordinates and then trying to transform it back into data coordinates, printing the result, and also transforming it into axes coordinates, expecting to get two different sets of coordinates. I know (0,0) should be correct for the data coordinates, but the axes don't begin at (0,0), so the axes coordinates should be different, and I would expect values greater than zero.</p>
<p>I am using matplotlib 3.5.1 so I am not sure if there is a problem with my version or not.</p>
<p><a href="https://i.sstatic.net/XVI08.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XVI08.png" alt="enter image description here" /></a></p>
<p>I edited my code to show:</p>
<pre><code>ax.set_xlim(0,2)
ax.set_ylim(0,1.1)
trans1 = ax.transData.transform([0,0])
print (trans1)
trans = ax.transData.inverted().transform(trans1)
print (trans)
trans = ax.transAxes.inverted().transform(trans1)
</code></pre>
<p>and the results I got were:</p>
<pre><code>[90. 45.]
[ 0.00000000e+00 -2.77555756e-17]
[0. 0.]
</code></pre>
<p>so really effectively still no change.</p>
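For what it's worth, the edited output is actually self-consistent: with <code>xlim</code> and <code>ylim</code> both starting at 0, the data point (0,0) sits exactly at the axes origin, so its axes-fraction coordinates are also (0,0). A minimal sketch, using the Agg backend and hypothetical limits of -1..4 so that the two coordinate systems visibly differ:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlim(-1, 4)
ax.set_ylim(0, 1)

# data (0, 0) -> display pixels -> axes fraction
disp = ax.transData.transform((0, 0))
frac = ax.transAxes.inverted().transform(disp)
print(frac)  # x = (0 - (-1)) / (4 - (-1)) = 0.2, y = 0.0
```

With these limits, data x=0 lands one fifth of the way across the axes, which is the kind of nonzero value the question expects.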
|
<python><matplotlib>
|
2023-03-14 21:07:59
| 0
| 408
|
Kiwiheretic
|
75,738,335
| 5,495,860
|
Converting h3 hex id to polygon in python
|
<p>I'm trying to take an H3 hex id and convert it to a polygon to use in a GeoDataFrame (and eventually export as a shapefile). None of the h3 methods documented online seem to work for me, and I cannot figure out which method to use.</p>
<p>I'm using h3 3.7.6</p>
<pre><code>import h3
h3.cells_to_multi_polygon('86264c28fffffff', geo_json=False)
</code></pre>
<p>gives me <code>AttributeError: module 'h3' has no attribute 'cells_to_multi_polygon'</code> (reference: <a href="https://h3geo.org/docs/api/regions/" rel="nofollow noreferrer">https://h3geo.org/docs/api/regions/</a>)</p>
<pre><code>import h3
h3.h3_to_geo_boundary('86264c28fffffff', geo_json=False)
</code></pre>
<p>gives me <code>AttributeError: module 'h3' has no attribute 'h3_to_geo_boundary'</code> (reference: <a href="https://stackoverflow.com/questions/51159241/how-to-generate-shapefiles-for-h3-hexagons-in-a-particular-area">How to generate shapefiles for H3 hexagons in a particular area</a>)</p>
<pre><code>import h3
h3.cells_to_polygons('86264c28fffffff')
</code></pre>
<p>gives me <code>AttributeError: module 'h3' has no attribute 'cells_to_polygons'</code> (reference: <a href="https://uber.github.io/h3-py/api_reference.html#h3.cells_to_polygons" rel="nofollow noreferrer">https://uber.github.io/h3-py/api_reference.html#h3.cells_to_polygons</a>)</p>
<p>Those are the three methods I've seen used in other posts (see references); none of them works in my h3 installation, and I cannot figure out the correct one to use.</p>
|
<python><geospatial><h3>
|
2023-03-14 20:58:29
| 2
| 472
|
Geoff
|
75,738,231
| 1,441,592
|
Beautify pandas.DataFrame.apply calls if they applied to same column
|
<p>I have such data preprocessing code:</p>
<pre class="lang-py prettyprint-override"><code>df['age'] = df['resume'].apply(lambda x: x['age'] if 'age' in x else None)
df['gender'] = df['resume'].apply(lambda x: x['gender']['id'] if 'gender' in x else None)
...
</code></pre>
<p>All the <code>.apply</code> calls are applied to the <code>df['resume']</code> column.</p>
<p>Is there a more elegant way to write this?</p>
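One hedged option is to flatten the dicts once with <code>pd.json_normalize</code>, whose dotted column names (<code>gender.id</code>) come from its nested-key flattening; the toy dicts below are stand-ins for the real <code>resume</code> data:

```python
import pandas as pd

# toy stand-in for the real data: a column of nested resume dicts
df = pd.DataFrame({'resume': [
    {'age': 25, 'gender': {'id': 'male'}},
    {'age': 30},  # missing keys become NaN, no 'if k in x' checks needed
]})

flat = pd.json_normalize(df['resume'].tolist())  # nested keys -> dotted columns
df['age'] = flat['age']
df['gender'] = flat.get('gender.id')
print(df[['age', 'gender']])
```

This replaces one `.apply` per field with a single flattening pass.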
|
<python><pandas>
|
2023-03-14 20:46:37
| 1
| 3,341
|
Paul Serikov
|
75,738,160
| 9,576,988
|
Flask WTForms don't always submit (gunicorn + nginx)
|
<p>I recently migrated my site to another environment (PythonAnywhere to Vultr) and need to set servers up myself now. I've gotten everything running, but I've noticed my form POSTs don't always submit. It seems like 50% of POSTs work, and the other 50% don't. It doesn't matter if the page is refreshed or not, each submission's success seems completely random. I didn't have this problem with PythonAnywhere, or while developing the site locally on Windows, so I suspect it's an issue with NGINX and/or gunicorn configs.</p>
<p><code>main.py - the route that submits 50/50. The other form routes have the same strange behaviour - they don't always submit, and the form remains filled out.</code>:</p>
<pre class="lang-py prettyprint-override"><code>@app.route("/", methods=["GET", "POST"])
def home():
form = CommentForm()
if form.validate_on_submit():
create_entry(form.comment.data)
flash('<img src="/static/media/thank_you.jpg"/><h1>Thank you!</h1>')
return redirect(url_for("home"))
return render_template(
"home.html", form=form, comments=get_comments(),
)
</code></pre>
<p><code>home.html - the flash snippet which sometimes displays the <img> and <h1> flash message</code>:</p>
<pre><code>{% with messages = get_flashed_messages() %}
{% if messages %}
{% for message in messages %}
{{ message | safe }} <!-- safe used to render <img> and <h1> -->
{% endfor %}
{% endif %}
{% endwith %}
{% block body %}{% endblock %}
</code></pre>
<p><code>/etc/systemd/system/gunicorn.service</code>:</p>
<pre><code>[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/ubuntu/NAME.com
Environment="PATH=/home/ubuntu/venv_flask/bin"
ExecStart=/home/ubuntu/venv_flask/bin/gunicorn -w 2 -b 127.0.0.1:9001 'main:app'
[Install]
WantedBy=multi-user.target
</code></pre>
<p><code>/etc/nginx/sites-enabled</code>:</p>
<pre><code>server {
server_name NAME.com www.NAME.com;
root /home/ubuntu/NAME.com;
location / {
proxy_pass http://127.0.0.1:9001/;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Prefix /;
# tried adding `proxy_set_header Cookie $http_cookie;` with no luck
}
}
</code></pre>
<h2>Edits</h2>
<p>I've added an extra worker (3 in total now) and uninstalled Docker completely. I feel like maybe it is a hardware limitation... Will keep investigating.</p>
<p>Nope, resources seem OK...</p>
<p><a href="https://i.sstatic.net/6O5de.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6O5de.png" alt="enter image description here" /></a></p>
<p>! The code gets to <code>form.validate_on_submit()</code> and it evaluates to <code>False</code> for some reason...</p>
<p>Aha!</p>
<p><code>{'csrf_token': ['The CSRF token is invalid.']} </code></p>
<p>This question should be left open to help others who make similar searches. It provides guidance on debugging, and the way this question is asked is unique - it's arguably NOT a duplicate. The answer to my question is related to the posted duplicate question, but THE QUESTION IS NOT A DUPLICATE. Mind your own business and leave my genuine and appropriate questions open!</p>
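In case it helps others who hit the "CSRF token is invalid" finding above: one common cause behind gunicorn is a Flask <code>SECRET_KEY</code> generated at import time (e.g. via <code>os.urandom</code>), so each of the workers signs sessions with a different key and only the submissions that land on the issuing worker validate, roughly half with 2 workers. A hedged configuration fragment of the usual fix; the environment-variable name is only an example:

```python
import os

# every gunicorn worker must share the same secret, so load it from the
# environment (or a config file) instead of generating it per process
app.config['SECRET_KEY'] = os.environ['FLASK_SECRET_KEY']
```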
|
<python><nginx><flask><gunicorn><flask-wtforms>
|
2023-03-14 20:38:23
| 1
| 594
|
scrollout
|
75,737,860
| 10,317,162
|
Selenium: how to click on a div inside an iframe
|
<p>I am building an application that should help smaller local businesses get Google reviews from their customers more easily. For that I am using Selenium in Python.</p>
<p>This is the procedure explained:</p>
<ol>
<li>In Chrome I open the direct review link.</li>
<li>The review link first loads the Google Maps page, and then the pop-up
window opens with the star rating selection at the top and the
review input box below it.</li>
<li>I want Selenium to select the 5th star.</li>
</ol>
<p>This is my current code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
review_link = "https://g.page/r/CQcv68N5l734EB0/review"
# Set up the Chrome webdriver
driver = webdriver.Chrome()
driver.maximize_window()
# Load the review page and wait for it to load
driver.get(review_link)
time.sleep(10)
# Switch to the iframe containing the review stars
driver.switch_to.frame(driver.find_element(By.NAME, "goog-reviews-write-widget"))
time.sleep(5)
# Click the 5th star using its CSS selector
five_star = driver.find_element(By.CSS_SELECTOR, "#kCvOeb > div.O51MUd > div.Ugvple > div > div.WGh5lc.u4Qsab > div > div:nth-child(5)")
five_star.click()
# Switch back to the main page
driver.switch_to.default_content()
print("selected five stars")
# Close the webdriver
driver.quit()
</code></pre>
<p>Just as a note, I know that <code>WebDriverWait()</code> is better than <code>time.sleep()</code>. The reason I chose <code>time.sleep()</code> is simplicity and easier testing. In production I would change it to <code>WebDriverWait()</code>. Same story with the CSS selector; I will change it in the future to a simpler XPath.</p>
<p>With the code above i am getting this error:</p>
<pre><code>selenium.common.exceptions.ElementClickInterceptedException:
Message: element click intercepted: Element <iframe name="goog-reviews-write-widget" role="presentation" class="goog-reviews-write-widget" src="https://www.google.com/maps/api/js/ReviewsService.LoadWriteWidget2?key=AIzaSyAQiTKe3tivKXammrJ6ov6u8E7KwZPNFss&amp;authuser=0&amp;hl
=en&amp;origin=https%3A%2F%2Fwww.google.com&amp;pb=!2m1!1sChIJselsQL2hmkcRBy_rw3mXvfg!7b1&amp;cb=26368821"
cd_frame_id_="c0287439a9a949d63db71626600d7c1c"></iframe> is not clickable at point (780, 184).
Other element would receive the click: <iframe name="goog-reviews-write-widget" role="presentation" class="goog-reviews-write-widget" src="https://www.google.com/maps/api/js/ReviewsService.LoadWriteWidget2?key=AIzaSyAQiTKe3tivKXammrJ6ov6u8E7KwZPNFss&amp;authuser=0&amp;hl=en&amp;origin=https%3A%2F%2Fwww.google.com&amp;pb=!2m1!1sChIJselsQL2hmkcRBy_rw3mXvfg!7b1&amp;cb=14671787"></iframe>
</code></pre>
<p>As I understand it, the iframe intercepts the click. This is the selector of the 5th star that I am using:
<a href="https://i.sstatic.net/NIVKf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NIVKf.png" alt="enter image description here" /></a></p>
<p>Why do I have to switch to the iframe at all, you might ask? Because if I don't switch to the iframe, the CSS selector for the 5th star can't be found. Here is the iframe I am referring to:
<a href="https://i.sstatic.net/PYJQb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PYJQb.png" alt="enter image description here" /></a></p>
<p>My attempts of solving this were:</p>
<ol>
<li><p>Using the CSS selector of the 5th star's svg inside the 5th star's div. Here I also got an <code>ElementClickInterceptedException</code>, but this time the click was intercepted by the div the svg sits inside, i.e. the one I am clicking in my code above. The click didn't go through either, as the 5th star wasn't selected.</p>
</li>
<li><p>using javascript click method:</p>
<p><code>driver.execute_script("arguments[0].click();", five_star)</code></p>
</li>
</ol>
<p>With this line instead of the <code>.click()</code> method, the code went through without any exceptions, but the click didn't select the 5th star.</p>
<p><strong>I am now running out of ideas for how to solve this problem. How can I click on the 5th star inside the iframe?</strong></p>
|
<python><selenium-webdriver><iframe><automation>
|
2023-03-14 20:01:36
| 1
| 464
|
Nightscape
|
75,737,733
| 272,023
|
Why is BytesIO not splitting binary file correctly by newlines?
|
<p>I have a BytesIO buffer into which the contents of a file in S3 are being written. That file has <a href="https://github.com/py-bson/bson" rel="nofollow noreferrer">pybson</a> BSON objects written to it, separated by <code>\n</code> characters, i.e. binary records separated by newlines.</p>
<p>I want to parse the event objects in the file. I am iterating through each line like this:</p>
<pre><code>def iter_event(data: BytesIO):
for line in data:
yield bson.loads(line)
</code></pre>
<p>I am finding that there seem to be some rogue characters being injected or corrupted at the end of the <code>line</code> variable in some cases, and my code is failing with the same exception as the one mentioned briefly in one of the comments on <a href="https://stackoverflow.com/a/27528088/272023">this SO question</a>. When I look at the file using a binary editor I cannot see the rogue character; it seems to only occur in the <code>line</code> variable. (For what it's worth, the end of the BSON object looks like <code>\x00\x00\n</code> in a binary editor and my <code>line</code> variable ends in <code>\x00\x10e\n</code>.)</p>
<p>Is there an issue with iterating through each line like this? If not, what's a better approach please?</p>
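Since BSON is a binary format, a document payload can legitimately contain <code>0x0A</code> bytes, so iterating a <code>BytesIO</code> by lines will split some documents mid-record, which matches the symptom of the corruption showing up only in the <code>line</code> variable. A hedged sketch of a safer reader that walks the 4-byte little-endian length prefix every BSON document starts with; it assumes the writer appended a bare <code>\n</code> after each document, and <code>iter_docs</code> plus the fake payloads are illustrative names/data, not the real format of the file:

```python
import struct
from io import BytesIO

def iter_docs(data):
    """Yield one length-prefixed BSON document (raw bytes) at a time."""
    while True:
        header = data.read(4)
        if len(header) < 4:
            break  # end of buffer
        (length,) = struct.unpack('<i', header)  # total doc size, prefix included
        doc = header + data.read(length - 4)
        yield doc  # the real code would do: yield bson.loads(doc)
        sep = data.read(1)
        if sep and sep != b'\n':
            data.seek(-1, 1)  # no separator here; put the byte back

# demo with two fake "documents" whose payloads contain \n and \x00 bytes
d1 = struct.pack('<i', 9) + b'ab\ncd'
d2 = struct.pack('<i', 7) + b'xy\x00'
docs = list(iter_docs(BytesIO(d1 + b'\n' + d2 + b'\n')))
```

The length prefix makes the newline separator purely cosmetic, so embedded 0x0A bytes no longer matter.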
|
<python><bson>
|
2023-03-14 19:46:45
| 0
| 12,131
|
John
|
75,737,649
| 4,173,059
|
Unable to find unique elements in 2 dataframes
|
<p>I have two dataframes -<br></p>
<blockquote>
<p>df1 = dataset file which contains info about
<code>img</code> files<br> df2 (<code>binary_list_df</code>) = a simple pandas dataframe of the
files in a folder, which I created using the code below -</p>
</blockquote>
<pre><code>binary_folder_files = os.listdir(binary_folder_path)
binary_list_df = pd.DataFrame(binary_folder_files, columns=['img'])
</code></pre>
<p>The total numbers of files in df1 & binary_list_df are 8239 & 8241 respectively.
<br> I am unable to find the elements that are not common to both dataframes.<br> I tried the method below, but I am getting empty dataframes when printing both <code>unique_to_df1</code> & <code>unique_to_df2</code> -</p>
<pre><code>merged = pd.merge(df1, binary_list_df, on='img', how='outer', indicator=True)
unique_to_df1 = merged[merged['_merge'] == 'left_only']
unique_to_df2 = merged[merged['_merge'] == 'right_only']
</code></pre>
<p>I am unable to figure out where I am going wrong or how to troubleshoot this further -</p>
<p><br>Below is the sample data frame samples and their expected result-</p>
<pre><code>df1 = pd.DataFrame({
'img': ['Alice', 'Bob', 'Charlie'],
'Age': [25, 30, 35],
'City': ['New York', 'San Francisco', 'Chicago']
})
binary_list_df = pd.DataFrame({
'img': ['Alice', 'Bob', 'Charlie', 'Frank'],
'Salary': [50000, 60000, 70000, 80000],
'Department': ['Sales', 'Marketing', 'Engineering', 'Finance']
})
</code></pre>
<p>Expected output:
I only want the row for <code>Frank</code>, i.e. I just want to know that <code>Frank</code> is the element that is not present in both dataframes.</p>
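For what it's worth, the merge/indicator code does produce <code>Frank</code> on the sample frames, so a hedged guess is that in the real data the <code>img</code> values differ in some invisible way (trailing whitespace, case, or file extensions), which is worth checking with <code>str.strip()</code>/<code>str.lower()</code> before merging. A minimal sketch of both the indicator route and a plain set difference:

```python
import pandas as pd

df1 = pd.DataFrame({'img': ['Alice', 'Bob', 'Charlie']})
binary_list_df = pd.DataFrame({'img': ['Alice', 'Bob', 'Charlie', 'Frank']})

merged = df1.merge(binary_list_df, on='img', how='outer', indicator=True)
only_in_folder = merged.loc[merged['_merge'] == 'right_only', 'img'].tolist()
print(only_in_folder)  # ['Frank']

# equivalent with plain sets, handy for eyeballing both directions at once
diff = set(df1['img']).symmetric_difference(binary_list_df['img'])
print(diff)  # {'Frank'}
```

If this works on the toy data but not the real files, normalizing the keys first (e.g. `df['img'].str.strip().str.lower()`) is the next thing to try.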
|
<python><pandas>
|
2023-03-14 19:37:13
| 2
| 809
|
Beginner
|
75,737,611
| 1,982,032
|
How can I convert '\\u5de5' into '\u5de5'?
|
<p>They are different:</p>
<pre><code>len('\\u5de5')
6
len('\u5de5')
1
</code></pre>
<p>How can I write a function to convert <code>\\u5de5</code> into <code>\u5de5</code>?</p>
<pre><code>def con_str(arg):
some_code_here
return result
con_str('\\u5de5')
\u5de5
</code></pre>
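For what it's worth, the standard trick here is the <code>unicode_escape</code> codec; note it only behaves predictably when the input is pure ASCII escape text, since non-ASCII bytes get decoded as Latin-1:

```python
def con_str(arg):
    # interpret backslash escapes such as \u5de5 in an already-loaded str
    return arg.encode('ascii').decode('unicode_escape')

result = con_str('\\u5de5')
print(result, len(result))  # 工 1
```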
|
<python><character-encoding><python-unicode>
|
2023-03-14 19:32:41
| 1
| 355
|
showkey
|
75,737,455
| 11,050,535
|
Get Body Items for Unread Emails from a specific Mail ID in an Outlook Account
|
<p>I would appreciate an answer to the question below.</p>
<p>I am trying to read the latest 5 unread emails from a specific mail ID and fetch the mail body data into a variable.</p>
<p>I have tried using Below Code :</p>
<pre><code>import win32com.client
import os
from datetime import datetime, timedelta
outlook = win32com.client.Dispatch('outlook.application')
mapi = outlook.GetNamespace("MAPI")
inbox = mapi.GetDefaultFolder(6)
messages = inbox.Items
received_dt = datetime.now() - timedelta(days=1)
received_dt = received_dt.strftime('%m/%d/%Y %H:%M %p')
messages = messages.Restrict("[ReceivedTime] >= '" + received_dt + "'")
messages = messages.Restrict("[SenderEmailAddress] = 'Info@gmail.com'")
messages = messages.Restrict("[Subject] = 'Sample Report'")
messages.Sort("[ReceivedTime]", True)  # newest first, so [:5] picks the latest
try:
    for message in list(messages)[:5]:
        try:
            s = message.Sender
            body = message.Body  # Body is one string; no need to iterate over it
            print(s, body)
        except Exception as e:
            print(e)
except Exception as e:
    print(e)
</code></pre>
|
<python><email><outlook><win32com><office-automation>
|
2023-03-14 19:14:52
| 2
| 605
|
Manz
|
75,737,438
| 1,686,628
|
Is there a way to pre-define the python egg name programmatically?
|
<pre><code>from setuptools import setup, Extension
kwargs = {"name": "foo",
"author": "",
"version": "1.0",
          "ext_modules": [Extension(name='util/helper/foo', sources=[])],}  # ext_modules expects a list
setup(**kwargs)
</code></pre>
<p>I can only specify the package name and version, <code>foo-1.0.0</code>, from the code, and <code>setup()</code> adds stuff on top of it, like the Python version, Cython version, platform, and arch.</p>
<p>for example, the egg i get is <code>foo-1.0.0-py3.9-linux-x86_64.egg</code></p>
<p>How do I get an egg name of <code>foo-1.0.0-py3.egg</code>?</p>
|
<python><setuptools><setup.py><distutils><egg>
|
2023-03-14 19:13:20
| 0
| 12,532
|
ealeon
|
75,737,437
| 3,513,267
|
Python function to convert datetime object to BCD
|
<p>This is the best that I have come up with so far. I am not entirely happy with it because I have to use a % 100 to prevent an overflow of the year byte. As a result I have to add 2000 back to the year in the reverse function. Can anyone improve these functions?</p>
<pre><code>def datetime_to_bcd(dt):
"""Converts a datetime object to BCD (Binary-Coded Decimal) format."""
year_bcd = ((dt.year % 100) // 10) << 4 | (dt.year % 10)
month_bcd = (dt.month // 10) << 4 | (dt.month % 10)
day_bcd = (dt.day // 10) << 4 | (dt.day % 10)
hour_bcd = (dt.hour // 10) << 4 | (dt.hour % 10)
minute_bcd = (dt.minute // 10) << 4 | (dt.minute % 10)
second_bcd = (dt.second // 10) << 4 | (dt.second % 10)
return bytes([year_bcd, month_bcd, day_bcd, hour_bcd, minute_bcd, second_bcd])
def bcd_to_datetime(bcd_bytes):
"""Converts a BCD (Binary-Coded Decimal) format in bytes to a datetime object."""
year_bcd, month_bcd, day_bcd, hour_bcd, minute_bcd, second_bcd = bcd_bytes
year = 2000 + (year_bcd >> 4) * 10 + (year_bcd & 0x0F)
month = (month_bcd >> 4) * 10 + (month_bcd & 0x0F)
day = (day_bcd >> 4) * 10 + (day_bcd & 0x0F)
hour = (hour_bcd >> 4) * 10 + (hour_bcd & 0x0F)
minute = (minute_bcd >> 4) * 10 + (minute_bcd & 0x0F)
second = (second_bcd >> 4) * 10 + (second_bcd & 0x0F)
return datetime.datetime(year, month, day, hour, minute, second)
</code></pre>
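One way to avoid hand-rolling the nibble arithmetic is to notice that <code>bytes.fromhex()</code> applied to a string of decimal digits produces exactly BCD bytes; a hedged sketch built on <code>strftime</code>/<code>strptime</code> (still two-digit years, so the same 2000-2068 window limitation as the <code>% 100</code> version applies):

```python
import datetime

def datetime_to_bcd(dt):
    # '%y%m%d%H%M%S' yields 12 decimal digits; fromhex packs each digit pair
    # into one byte, which for decimal digits is precisely BCD
    return bytes.fromhex(dt.strftime('%y%m%d%H%M%S'))

def bcd_to_datetime(bcd_bytes):
    # hex() turns the BCD bytes back into the 12-digit string
    return datetime.datetime.strptime(bcd_bytes.hex(), '%y%m%d%H%M%S')

dt = datetime.datetime(2023, 3, 14, 19, 13, 12)
print(datetime_to_bcd(dt).hex())  # 230314191312
```

The round trip is shorter, but it trades the explicit bit twiddling for string formatting, so which is "better" depends on taste and performance needs.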
|
<python><datetime><bcd>
|
2023-03-14 19:13:12
| 1
| 512
|
Ryan Hope
|
75,737,232
| 8,417,363
|
Loss is Nan for SegFormer vision transformer trained on BDD10k
|
<p>I'm trying to implement a <a href="https://huggingface.co/docs/transformers/model_doc/segformer" rel="nofollow noreferrer">SegFormer</a> pretrained with a <a href="https://huggingface.co/nvidia/mit-b0" rel="nofollow noreferrer">mit-b0</a> model to perform semantic segmentation on images obtained from the <a href="https://doc.bdd100k.com/download.html" rel="nofollow noreferrer">bdd100k</a> dataset. Specifically, semantic segmentation has masks for only a subset of the 100k images, being 10k with appropriate masks for segmentation where the <a href="https://doc.bdd100k.com/format.html#semantic-segmentation" rel="nofollow noreferrer">pixel value</a> of the mask is the label between 0 - 18, or 255 for unknown labels. I'm also following this example from <a href="https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/segformer.ipynb" rel="nofollow noreferrer">collab</a> on a simple segmentation of three labels.</p>
<p>The problem I have is that any further training I do on the training data ends up with NaN as the loss. Inspecting any predicted masks shows values of NaN, which is not right. I've tried to ensure that the input images for training are normalized, reduced the learning rate, increased the number of epochs, and changed the pretrained model, but I still end up with NaN as the loss right away.</p>
<p>I have my datasets as:</p>
<pre><code>dataset = tf.data.Dataset.from_tensor_slices((image_train_paths, mask_train_paths))
val_dataset = tf.data.Dataset.from_tensor_slices((image_val_paths, mask_val_paths))
</code></pre>
<p>with this method to preprocess and normalize the data</p>
<pre><code>height = 512
width = 512
mean = tf.constant([0.485, 0.456, 0.406])
std = tf.constant([0.229, 0.224, 0.225])
def normalize(input_image):
input_image = tf.image.convert_image_dtype(input_image, tf.float32)
input_image = (input_image - mean) / tf.maximum(std, backend.epsilon())
return input_image
# Define a function to load and preprocess each example
def load_and_preprocess(image_path, mask_path):
# Load the image and mask
image = tf.image.decode_jpeg(tf.io.read_file(image_path), channels=3)
mask = tf.image.decode_jpeg(tf.io.read_file(mask_path), channels=1)
# Preprocess the image and mask
image = tf.image.resize(image, (height, width))
mask = tf.image.resize(mask, (height, width), method='nearest')
image = normalize(image)
mask = tf.squeeze(mask, axis=-1)
image = tf.transpose(image, perm=(2, 0, 1))
return {'pixel_values': image, 'labels': mask}
</code></pre>
<p>Actually created the datasets:</p>
<pre><code>batch_size = 4
train_dataset = (
dataset
.cache()
.shuffle(batch_size * 10)
.map(load_and_preprocess, num_parallel_calls=tf.data.AUTOTUNE)
.batch(batch_size)
.prefetch(tf.data.AUTOTUNE)
)
validation_dataset = (
val_dataset
.map(load_and_preprocess, num_parallel_calls=tf.data.AUTOTUNE)
.batch(batch_size)
.prefetch(tf.data.AUTOTUNE)
)
</code></pre>
<p>Setting up the labels and pre-trained model:</p>
<pre><code>id2label = {
0: 'road',
1: 'sidewalk',
2: 'building',
3: 'wall',
4: 'fence',
5: 'pole',
6: 'traffic light',
7: 'traffic sign',
8: 'vegetation',
9: 'terrain',
10: 'sky',
11: 'person',
12: 'rider',
13: 'car',
14: 'truck',
15: 'bus',
16: 'train',
17: 'motorcycle',
18: 'bicycle',
}
label2id = { label: id for id, label in id2label.items() }
num_labels = len(id2label)
model = TFSegformerForSemanticSegmentation.from_pretrained('nvidia/mit-b0', num_labels=num_labels, id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.00001))
</code></pre>
<p>Finally, fitting the data to the model, using only 1 epoch just to see if I can figure out why the loss is NaN:</p>
<pre><code>epochs = 1
history = model.fit(train_dataset, validation_data=validation_dataset, epochs=epochs)
</code></pre>
<p>SegFormer implements its own loss function, so I don't need to supply one. I see the Colab example I was following has some sort of loss, but I can't figure out why mine is NaN.</p>
<p>Did I approach this correctly, or am I missing something along the way? What else can I try to figure out why the loss is NaN? I also made sure the labels I used match between the validation and training datasets. The pixel values range from 0 - 18, with 255 as unknown, as supplied by the docs.</p>
<p><strong>Edit: 3/16</strong></p>
<p>I did find <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb" rel="nofollow noreferrer">this example</a> which pointed out some flaws I had in my approach, but even after following this example in everything besides how the dataset is gathered, I was still unable to produce any loss other than NaN.</p>
<p>My new code is mostly the same, other then how I am pre-processing the data with numpy before converting them to tensors.</p>
<p>Dataset dict definition for training and validation data:</p>
<pre><code>dataset = DatasetDict({
'train': Dataset.from_dict({'pixel_values': image_train_paths, 'label': mask_train_paths}).cast_column('pixel_values', Image()).cast_column('label', Image()),
'val': Dataset.from_dict({'pixel_values': image_val_paths, 'label': mask_val_paths}).cast_column('pixel_values', Image()).cast_column('label', Image())
})
train_dataset = dataset['train']
val_dataset = dataset['val']
train_dataset.set_transform(preprocess)
val_dataset.set_transform(preprocess)
</code></pre>
<p>where preprocess is where I am processing the images using the <a href="https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoImageProcessor" rel="nofollow noreferrer">AutoImageProcessor</a> to get the inputs.</p>
<pre><code>image_processor = AutoImageProcessor.from_pretrained('nvidia/mit-b0', semantic_loss_ignore_index=255) # This is a SegformerImageProcessor
def transforms(image):
image = tf.keras.utils.img_to_array(image)
image = image.transpose((2, 0, 1)) # Since vision models in transformers are channels-first layout
return image
def preprocess(example_batch):
images = [transforms(x.convert('RGB')) for x in example_batch['pixel_values']]
labels = [x for x in example_batch['label']]
inputs = image_processor(images, labels)
# print(type(inputs))
return inputs
</code></pre>
<p>transforming the sets to a tensorflow dataset:</p>
<pre><code>batch_size = 4
data_collator = DefaultDataCollator(return_tensors="tf")
train_set = dataset['train'].to_tf_dataset(
columns=['pixel_values', 'label'],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator,
)
val_set = dataset['val'].to_tf_dataset(
columns=['pixel_values', 'label'],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator,
)
</code></pre>
<p>fitting the model</p>
<pre><code>history = model.fit(
train_set,
validation_data=val_set,
epochs=10,
)
</code></pre>
<p><code>1750/1750 [==============================] - ETA: 0s - loss: nan</code></p>
|
<python><tensorflow><machine-learning><deep-learning><segformer>
|
2023-03-14 18:48:22
| 4
| 3,186
|
Jimenemex
|
75,737,187
| 10,292,638
|
How to create a new column containing a list of similar strings?
|
<p>I've been trying to add a new column to a pandas dataframe which encapsulates, as a list, all the strings similar to each row's original string.</p>
<p>This is the original pandas dataframe:</p>
<pre><code>import pandas as pd
d = {'product_name': ['2 pack liner socks', '2 pack logo liner socks', 'b.bare Hipster', 'Lady BARE Hipster Panty'], 'id': [13, 12, 11, 10]}
df = pd.DataFrame(data=d)
</code></pre>
<p>I would like to get a dataframe that looks like this:</p>
<pre><code># product_name # id # group
2 pack liner socks 13 ['2 pack liner socks', '2 pack logo liner socks']
2 pack logo liner socks 12 ['2 pack liner socks', '2 pack logo liner socks']
b.bare Hipster 11 ['b.bare Hipster', 'Lady BARE Hipster Panty']
Lady BARE Hipster Panty 10 ['b.bare Hipster', 'Lady BARE Hipster Panty']
</code></pre>
<p>I tried the following:</p>
<pre><code>import thefuzz
from thefuzz import process
df["group"] = df["product_name"].apply(lambda x: process.extractOne(x, df["product_name"], scorer=fuzz.partial_ratio)[0])
</code></pre>
<p>And it throws the next error:</p>
<blockquote>
<p>NameError: name 'fuzz' is not defined</p>
</blockquote>
<p>How can I fix this code, or alternatively, are there other approaches to solve this?</p>
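The immediate <code>NameError</code> is just the missing import (<code>from thefuzz import fuzz, process</code>), and <code>process.extractOne</code> also returns only the single best match rather than a group. As a hedged, dependency-free sketch of the grouping itself, <code>difflib</code> from the standard library can stand in for <code>thefuzz</code>; lower-casing first, since <code>SequenceMatcher</code> is case-sensitive, and the 0.5 cutoff is a guess to tune:

```python
import difflib
import pandas as pd

d = {'product_name': ['2 pack liner socks', '2 pack logo liner socks',
                      'b.bare Hipster', 'Lady BARE Hipster Panty'],
     'id': [13, 12, 11, 10]}
df = pd.DataFrame(data=d)

# compare case-insensitively; each row's group is every name similar enough to it
names = [s.lower() for s in df['product_name']]
df['group'] = [difflib.get_close_matches(n, names, n=len(names), cutoff=0.5)
               for n in names]
print(df[['product_name', 'group']])
```

The same shape of list comprehension works with `thefuzz.process.extract` once `fuzz` is imported, if its scorers fit the data better.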
|
<python><pandas><string><dataframe><nlp>
|
2023-03-14 18:43:48
| 1
| 1,055
|
AlSub
|
75,736,985
| 1,226,649
|
Matplotlib: resize a plot consisting of two side by side image subplots in Jupyter cell
|
<p>The existing solution <a href="https://stackoverflow.com/questions/14770735/how-do-i-change-the-figure-size-with-subplots">How do I change the figure size with subplots?</a> does not answer my question.</p>
<pre><code>fig, axs = plt.subplots(2, 2, figsize=(15, 15)) # Does not work in my case!
</code></pre>
<p>Other answers in my case of plotting two images with this code do not work also.</p>
<p>How can I resize a plot consisting of two side by side image subplots? I plot subplots in Jupyter notebook:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.image as img
ref_img = img.imread(ref_img_path)
test_img = img.imread(test_img_path)
plt.subplot(1, 2, 1) # row 1, col 2 index 1
plt.title("Reference Form")
plt.imshow(ref_img)
plt.subplot(1, 2, 2) # index 2
plt.title("Test Form ")
plt.imshow(test_img)
plt.show()
</code></pre>
<p>I am not trying to create individual subplots of different sizes, but need to resize the plot as a whole to use the maximum space in the Jupyter cell.</p>
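One hedged explanation: the state-machine <code>plt.subplot(...)</code> calls draw on whatever the current figure is, and if that figure was created implicitly it has the default size. Creating the figure explicitly with the desired <code>figsize</code> immediately before the <code>plt.subplot</code> calls should make the size stick; the Agg backend and dummy arrays below only keep the sketch self-contained, and in a notebook the <code>plt.figure(figsize=...)</code> line is the key bit:

```python
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np

ref_img = np.zeros((10, 10))   # stand-ins for the real images
test_img = np.ones((10, 10))

fig = plt.figure(figsize=(16, 8))  # size of the whole figure, both panels
plt.subplot(1, 2, 1)
plt.title("Reference Form")
plt.imshow(ref_img)
plt.subplot(1, 2, 2)
plt.title("Test Form")
plt.imshow(test_img)
print(fig.get_size_inches())  # [16.  8.]
```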
|
<python><image><matplotlib><jupyter-notebook><subplot>
|
2023-03-14 18:20:50
| 0
| 3,549
|
dokondr
|
75,736,866
| 7,077,532
|
Python: Solving "Generate Parentheses" with Backtracking --> Confused About stack.pop()
|
<p>I am trying to understand the backtracking code below:</p>
<p><a href="https://i.sstatic.net/q0Z8N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q0Z8N.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>class Solution:
def generateParenthesis(self, n: int) -> List[str]:
stack = []
res = []
def backtrack(openN, closedN):
if openN == closedN == n:
res.append("".join(stack))
return
if openN < n:
stack.append("(")
backtrack(openN + 1, closedN)
print ("stack openN < n", stack)
stack.pop()
if closedN < openN:
stack.append(")")
backtrack(openN, closedN + 1)
print ("stack closedN < openN", stack)
stack.pop()
backtrack(0, 0)
return res
</code></pre>
<p>I am having a hard time conceptually grasping where the stack.pop() calls take effect. For example, in the screenshot below, how does the code get from the top yellow highlight to the bottom yellow highlight?</p>
<p>How is the code able to pop off three ")" without needing to append them back?</p>
<p><a href="https://i.sstatic.net/NZH6X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NZH6X.png" alt="enter image description here" /></a></p>
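To the pop() question: each <code>stack.pop()</code> undoes only the single character that the same stack frame appended, and it runs when the recursive call above it returns. Getting from "((()))" to "(()..." is therefore not one frame popping three ")" in a row; it is three nested frames each returning and each popping its own ")" (then a "(" frame pops too) as the call stack unwinds. A sketch of the same algorithm, renamed to a plain function with the pops annotated so it runs standalone:

```python
def generate_parenthesis(n):
    stack, res = [], []

    def backtrack(open_n, closed_n):
        if open_n == closed_n == n:
            res.append("".join(stack))
            return
        if open_n < n:
            stack.append("(")
            backtrack(open_n + 1, closed_n)
            stack.pop()   # undo THIS frame's "(" once its subtree is explored
        if closed_n < open_n:
            stack.append(")")
            backtrack(open_n, closed_n + 1)
            stack.pop()   # undo THIS frame's ")": one pop per returning frame

    backtrack(0, 0)
    return res

print(generate_parenthesis(3))
```

No frame ever appends a character back for another frame; the append/pop pairing is strictly local to each call.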
|
<python><recursion><enumeration><backtracking><tree-traversal>
|
2023-03-14 18:07:13
| 2
| 5,244
|
PineNuts0
|
75,736,857
| 1,418,326
|
python list of lists to np.array() return ndarray of list
|
<p>I have a list of lists of int: list1. When I convert it with np.array(), it seems like sometimes it returns a 2-D numpy.ndarray and sometimes it returns a 1-D numpy.ndarray of lists.
What am I missing here?</p>
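A hedged guess at what is happening: <code>np.array</code> only builds a 2-D array when all inner lists have equal length; ragged rows fall back to a 1-D object array of lists (and newer NumPy versions require <code>dtype=object</code> to be spelled out for that case rather than guessing):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])              # equal-length rows -> 2-D int array
b = np.array([[1, 2], [3]], dtype=object)   # ragged rows -> 1-D array of lists

print(a.shape, a.ndim)  # (2, 2) 2
print(b.shape, b.ndim)  # (2,) 1
```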
|
<python>
|
2023-03-14 18:06:36
| 1
| 1,707
|
topcan5
|
75,736,742
| 11,452,928
|
Installing a new version of python3 on windows
|
<p>I had Python 3.8.10 installed on my PC (with Windows 11 Home). I wanted Python 3.10, so I downloaded it from the official site and installed it, but running <code>python3 --version</code> in the prompt I continued to get <code>python 3.8.10</code>. So I uninstalled Python 3.8 using the Control Panel's Programs and Features, but I still get <code>python 3.8.10</code> when I run <code>python3 --version</code> in the prompt. How can I fix this?
I found online that I need to change the "Path" environment variable using the Windows advanced system settings, but there is nothing related to Python there.</p>
|
<python><python-3.x><windows><operating-system>
|
2023-03-14 17:53:46
| 2
| 753
|
fabianod
|
75,736,551
| 6,284,287
|
Can I assign multiple values to a proto message from a tuple?
|
<p>I need to create a list of proto rows from a list of tuples.</p>
<p>I can only see that I can create a message by explicitly referencing the fields like</p>
<pre><code>customer = customer_pb2.customer()
customer.user_id = 1
customer.name = 'Mary'
</code></pre>
<p>But let's say I have a list of tuples like [(1, 'Mark'), (2, 'Mary')] - how can I quickly and succinctly create a message from my pb2 file, populated with the data from the tuple? Is there a way without explicitly referencing the field names like in the example above? It would be nice to be able to do something out of the box like</p>
<pre><code>customer = customer_pb2.customer()
customer.append((my_tuple))
</code></pre>
|
<python><protocol-buffers>
|
2023-03-14 17:33:42
| 0
| 626
|
CClarke
|
75,736,476
| 4,348,400
|
How to make custom Hypothesis strategy to supply custom objects?
|
<p>Suppose I have a class <code>Thing</code></p>
<pre class="lang-py prettyprint-override"><code>class Thing:
def __init__(self, x, y):
...
</code></pre>
<p>And suppose I have a function which acts on a list of things.</p>
<pre class="lang-py prettyprint-override"><code>def do_stuff(list_of_things):
...
</code></pre>
<p>I would like to write unit tests for <code>do_stuff</code> involving different instances of lists of <code>Thing</code>.</p>
<p>Is there a way to define a custom Hypothesis strategy which provides examples of <code>list_of_things</code> for my unit tests on <code>do_stuff</code>?</p>
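One common approach (a sketch, not necessarily the only way) is <code>st.builds</code>, which draws values for <code>Thing</code>'s constructor arguments and calls the constructor for each example, wrapped in <code>st.lists</code>. The placeholder <code>do_stuff</code> body below is an assumption for illustration:

```python
from hypothesis import given, strategies as st

class Thing:
    def __init__(self, x, y):
        self.x = x
        self.y = y

def do_stuff(list_of_things):
    # Placeholder implementation, just for the sketch
    return sum(t.x + t.y for t in list_of_things)

# st.builds draws x and y, then calls Thing(x, y) for each generated example
things = st.builds(Thing, x=st.integers(), y=st.integers())

@given(st.lists(things))
def test_do_stuff(list_of_things):
    assert isinstance(do_stuff(list_of_things), int)
```

Calling `test_do_stuff()` with no arguments runs the property test over many generated lists.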
|
<python><unit-testing><python-hypothesis>
|
2023-03-14 17:25:52
| 2
| 1,394
|
Galen
|
75,736,428
| 14,076,103
|
upload geojson file to Mapbox API
|
<p>I have a GeoPandas DataFrame, which I am converting into a GeoJSON string and uploading to the Mapbox API.</p>
<pre><code>gdf = gpd.GeoDataFrame(df, geometry="GEOMETRY")
geojson_str = gdf.to_json()
url = f"https://api.mapbox.com/uploads/v1/{username}/credentials?access_token={my_access_token}"
payload = {
"name": "mydata",
"file": geojson_str,
"format": "geojson",
}
response = requests.post(url, data=payload)
job_id = response.json()["job"]
status_url = f"https://api.mapbox.com/uploads/v1/{username}/{job_id}/status?access_token={my_access_token}"
status_response = requests.get(status_url)
</code></pre>
<p>The response contains an S3 bucket, access key, and secret access key, since the Mapbox API uses an S3 bucket for staging, but it does not contain a <code>job</code> field. Is there any way to upload GeoJSON to the Mapbox API?</p>
|
<python><python-requests><mapbox-gl-js>
|
2023-03-14 17:21:04
| 0
| 415
|
code_bug
|
75,736,181
| 2,065,083
|
What is a quick way to count the number of pairs in a list where a XOR b is greater than a AND b?
|
<p>I have an array of numbers, and I want to count all possible pairs for which the XOR of the pair is greater than the AND of the pair.</p>
<p><strong>Example:</strong></p>
<pre><code>4,3,5,2
</code></pre>
<p><strong>possible pairs are:</strong></p>
<pre><code>(4,3) -> xor=7, and = 0
(4,5) -> xor=1, and = 4
(4,2) -> xor=6, and = 0
(3,5) -> xor=6, and = 1
(3,2) -> xor=1, and = 2
(5,2) -> xor=7, and = 0
Valid pairs for which xor > and are (4,3), (4,2), (3,5), (5,2) so result is 4.
</code></pre>
<p>This is my program:</p>
<pre><code>def solve(array):
n = len(array)
ans = 0
for i in range(0, n):
p1 = array[i]
for j in range(i, n):
p2 = array[j]
if p1 ^ p2 > p1 & p2:
ans +=1
return ans
</code></pre>
<p>The time complexity is O(n^2), but my array size can be up to 10^5 and each element can be up to 2^30. How can I reduce the time complexity of this program?</p>
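One O(n) observation (a sketch; the test below cross-checks it against the brute force): <code>a ^ b > a & b</code> holds exactly when <code>a</code> and <code>b</code> have different highest set bits, i.e. different <code>bit_length()</code>. If the highest bits match at position k, that bit survives in the AND but cancels in the XOR, so XOR < 2^k <= AND; if they differ, the higher bit survives in the XOR but not the AND. So it suffices to count pairs whose bit lengths differ:

```python
from collections import Counter

def solve_fast(array):
    # Group numbers by the position of their highest set bit
    counts = Counter(x.bit_length() for x in array)
    n = len(array)
    total_pairs = n * (n - 1) // 2
    # Pairs sharing a highest bit never satisfy xor > and
    same_bit_pairs = sum(c * (c - 1) // 2 for c in counts.values())
    return total_pairs - same_bit_pairs

print(solve_fast([4, 3, 5, 2]))  # 4
```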
|
<python><algorithm><time-complexity>
|
2023-03-14 16:56:00
| 4
| 21,515
|
Learner
|
75,736,140
| 3,826,115
|
Use hvplot interactive to render loop in browser, and save as html
|
<p>I am trying to follow this <a href="https://hvplot.holoviz.org/user_guide/Interactive.html" rel="nofollow noreferrer">example notebook</a>, specifically the plotting section where it plots the x,y grid and loops/scrolls through the time dimension.</p>
<pre><code>ds.air.interactive.sel(time=pnw.DiscreteSlider).plot()
</code></pre>
<p>I'm running into two issues. The first is, I'd like to run this code in the Spyder IDE, which does not support the inline widgets. Based on a few other questions I've found here, the solution seems to be to have the interactive widget render in a browser window. I've looked around a few spots, like <a href="https://stackoverflow.com/questions/55208035/why-doesnt-holoviews-show-histogram-in-spyder">this question</a>, but I can't seem to find anything that works with the <code>hvplot.xarray.XArrayInteractive</code> objects that are created here. So my first question is how to get these to render in a browser window.</p>
<p>My second question is how to save these loops as an html file that can be opened later. I've tried using the tutorial <a href="https://hvplot.holoviz.org/user_guide/Viewing.html#saving-plots" rel="nofollow noreferrer">here</a>, but when I try to save it, I get the following error.</p>
<pre><code>ValueError: HoloViews pane does not support objects of type 'XArrayInteractive'.
</code></pre>
<p>It seems like in both cases, using this "XArrayInteractive" type might be causing the issue. I am not locked into using this method, if there is another way to make the loops I want, I am open to that.</p>
<p>Thanks</p>
|
<python><python-xarray><interactive><hvplot>
|
2023-03-14 16:52:22
| 1
| 1,533
|
hm8
|
75,736,111
| 5,852,692
|
ctypes character pointers with same byte string points same location
|
<p>I am creating two ctypes character pointers with the same byte string via:</p>
<pre><code>import ctypes as ct
var1 = ct.c_char_p(b'.'*80)
var2 = ct.c_char_p(b'.'*80)
# how I check the values:
api = ct.CDLL('api.dll')
api.function1(var1, var2)
# values of the vars are changed after running the above function
print(var1.value.decode(), var2.value.decode())
</code></pre>
<p>I am running a <code>CDLL</code> function which changes the values at the locations that <code>var1</code> and <code>var2</code> point to. I was expecting that if I created 2 different pointers as above, they would point to different locations in memory. However, they point to the same location: when I check the values of these variables, they are the same.</p>
<p>When I create <code>var2</code> with a different character (e.g. <code>b'*' * 80</code>) instead of <code>b'.' * 80</code>, then it works. Is there a better way to create <code>var1</code> and <code>var2</code>, so that these pointers point to different locations?</p>
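A sketch of one likely explanation and fix: CPython may reuse the same immutable <code>bytes</code> object for equal byte strings, and <code>c_char_p</code> merely borrows that object's buffer (letting C code write into an immutable <code>bytes</code> is undefined behaviour in any case). <code>ctypes.create_string_buffer</code> allocates a fresh, writable C buffer on every call:

```python
import ctypes as ct

# Each call allocates its own mutable C buffer (81 bytes: 80 dots + NUL)
var1 = ct.create_string_buffer(b'.' * 80, 81)
var2 = ct.create_string_buffer(b'.' * 80, 81)

print(ct.addressof(var1) != ct.addressof(var2))  # True: distinct buffers
print(var1.value == b'.' * 80)                   # True
```

These buffers can be passed to the DLL function the same way as the `c_char_p` values were.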
|
<python><string><pointers><output><ctypes>
|
2023-03-14 16:48:46
| 1
| 1,588
|
oakca
|
75,735,911
| 1,806,566
|
How do I load a python module before pip?
|
<p>I have a module that adds a metapath finder to the metapath, so it affects how modules are imported and how distributions are found. Normally, I use <code>usercustomize.py</code> to load this finder, so it happens before any other module is loaded.</p>
<p>If I invoke pip with:</p>
<pre><code>python3 -m pip ...
</code></pre>
<p>it works fine and it includes my metapath finder. The problem is that I would like to test a new version of the metapath finder without installing it in <code>usercustomize.py</code> first.</p>
<p>I can use the <code>-s</code> switch to suppress <code>usercustomize.py</code>, but how else can I get my module to load before calling <code>pip</code>? I know that I could write a script that loads whatever I want and then calls pip as a module, but that's not supported by pip, and I don't want to deal with it breaking down the road.</p>
<p>Is there any way that I can suppress <code>usercustomize.py</code>, load my own module, and then run <code>pip</code>?</p>
|
<python><pip>
|
2023-03-14 16:27:55
| 0
| 1,241
|
user1806566
|
75,735,840
| 3,626,104
|
Python - Jira markdown preview image
|
<p>I'm making a PySide GUI that submits to Jira. Prior to submission, it would be ideal if I could show the user a "ticket preview" so they know roughly what the ticket will look like or if they made any syntax errors that would cause issues in Jira. To do that, I imagine that I would have to convert their raw text to a png file. I haven't found a way to make that conversion and would like any suggestions people could give.</p>
<h2>Example</h2>
<p>Imagine a submission text box that looks like this:</p>
<pre><code>{panel:title=Test|borderStyle=none|titleBGColor=#f7d6c1|bgColor=#fdf4ee}
Hello, World!
{panel}
</code></pre>
<p>I should be able to take that text into Python and somehow render an image so that looks as it should in Jira. For reference, this is how Jira renders it:</p>
<p><a href="https://i.sstatic.net/wVypT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wVypT.png" alt="A panel, rendered by Jira" /></a></p>
<p>This is the .png that I'm trying to generate, basically.</p>
<h2>What I Tried</h2>
<p>It's not a great substitute but I was thinking maybe I could use <a href="https://pypi.org/project/jira2markdown/" rel="nofollow noreferrer">jira2markdown</a> to convert <code>Jira markdown -> common markdown -> pygments -> png</code>. Unfortunately, information is lost when you go from <code>Jira markdown -> common markdown</code> using <a href="https://pypi.org/project/jira2markdown/" rel="nofollow noreferrer">jira2markdown</a>. The example text below</p>
<pre><code>{panel:title=Test|borderStyle=none|titleBGColor=#f7d6c1|bgColor=#fdf4ee}
Hello, World!
{panel}
</code></pre>
<p>converts to</p>
<pre><code>> **Test**
> Hello, World!
</code></pre>
<p>It removes all background, foreground, and text colors in that case.</p>
<p>So anyway, I'm mostly just looking for an approximation, though an ideal "as it looks in Jira" render would be awesome as well. Does anyone know of a way that I could achieve this effect?</p>
|
<python><jira>
|
2023-03-14 16:21:47
| 0
| 1,026
|
ColinKennedy
|
75,735,744
| 15,959,591
|
List to dataframe without any indexes or NaNs
|
<p>I have a list of lists that I would like to convert to a pandas DataFrame.
My list looks like this:</p>
<pre><code>[1 25
2 35
3 45
4 55
5 65
Name: a, Length: 5, dtype: int
6 75
7 85
8 95
9 105
10 115
Name: b, Length: 5, dtype: int
11 125
12 135
13 145
14 155
15 165
Name: c, Length: 5, dtype: int]
</code></pre>
<p>My code to change it to a data frame is:</p>
<pre><code>df = pd.DataFrame(list, index=None, columns=None)
</code></pre>
<p>the result looks like this:</p>
<p><a href="https://i.sstatic.net/r9bDf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r9bDf.png" alt="enter image description here" /></a></p>
<p>But I want it to be like this:</p>
<p><a href="https://i.sstatic.net/TDOGM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TDOGM.png" alt="enter image description here" /></a></p>
<p>Any help please?</p>
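Assuming the list actually holds three pandas Series named <code>a</code>, <code>b</code> and <code>c</code> (which is what the repr above suggests), one sketch is to reset each Series' index and concatenate them side by side, so they align on position rather than on their original, disjoint indexes:

```python
import pandas as pd

series_list = [
    pd.Series([25, 35, 45, 55, 65], index=range(1, 6), name='a'),
    pd.Series([75, 85, 95, 105, 115], index=range(6, 11), name='b'),
    pd.Series([125, 135, 145, 155, 165], index=range(11, 16), name='c'),
]

# reset_index(drop=True) gives every Series the index 0..4, so concat
# lines the values up row by row instead of producing NaNs
df = pd.concat([s.reset_index(drop=True) for s in series_list], axis=1)
print(df)
```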
|
<python><pandas><dataframe><list>
|
2023-03-14 16:14:19
| 2
| 554
|
Totoro
|
75,735,739
| 5,320,122
|
Docker image with base image python:3.10-slim-buster apt-get update fails to fetch http://deb.debian.org/debian/dists/buster/InRelease 403 Forbidden
|
<p>I am trying to build the docker image from this Dockerfile.</p>
<pre><code>FROM python:3.10-slim-buster
# Set uid and gid for sally
ARG sally_uid=2002
ARG sally_gid=2002
# Add user and group entries
RUN addgroup -gid $sally_gid sally && adduser --uid $sally_uid --ingroup sally --system sally
# install any updates
RUN apt-get update && apt-get upgrade -y && apt-get -y dist-upgrade
......
......
......
</code></pre>
<p>Console output:</p>
<pre><code>[root@lndevk8worker1]# docker build -t supervised:1.0 .
[+] Building 14.4s (8/21)
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.21kB 0.0s
=> [internal] load metadata for docker.io/library/python:3.10-slim-buster 2.8s
=> [ 1/17] FROM docker.io/library/python:3.10-slim-buster@sha256:d2cdc150b518f41cbd252979831fb6f28515aa01eb1b726efd6b16ca70e5656b 6.5s
=> => resolve docker.io/library/python:3.10-slim-buster@sha256:d2cdc150b518f41cbd252979831fb6f28515aa01eb1b726efd6b16ca70e5656b 0.0s
=> => sha256:8fd419aca81cfd3987d61550e700546537628562693bc01acc9f85468f483706 27.14MB / 27.14MB 4.4s
=> => sha256:e53683fb464bd118081a539c199f8918d31828d851085907fa5a1c5ee324efb3 2.78MB / 2.78MB 1.8s
=> => sha256:ee74efc279b75121e9acdef09d44c4d2771bfadbd1c97035ed3fa5ed8498f650 11.50MB / 11.50MB 3.3s
=> => sha256:d2cdc150b518f41cbd252979831fb6f28515aa01eb1b726efd6b16ca70e5656b 988B / 988B 0.0s
=> => sha256:370bc98a076db61e9c90996676801ebe962588a8607259fd1b9569d6b707c95d 1.37kB / 1.37kB 0.0s
=> => sha256:8c32b5ef693d5d8cdff269f421bf761c1b3e6a30f6ff6caa707d6f105d33176d 7.91kB / 7.91kB 0.0s
=> => sha256:fa7d106afb002534a9671178956cc59c16b96be796b5ae2a2f6afbcb3dedd068 233B / 233B 2.2s
=> => sha256:60b2070aeaf108c1398ea8d65ec30fbc2ad2c207ad3ec7e89becc2bd5945652c 3.35MB / 3.35MB 3.7s
=> => extracting sha256:8fd419aca81cfd3987d61550e700546537628562693bc01acc9f85468f483706 0.8s
=> => extracting sha256:e53683fb464bd118081a539c199f8918d31828d851085907fa5a1c5ee324efb3 0.2s
=> => extracting sha256:ee74efc279b75121e9acdef09d44c4d2771bfadbd1c97035ed3fa5ed8498f650 0.4s
=> => extracting sha256:fa7d106afb002534a9671178956cc59c16b96be796b5ae2a2f6afbcb3dedd068 0.0s
=> => extracting sha256:60b2070aeaf108c1398ea8d65ec30fbc2ad2c207ad3ec7e89becc2bd5945652c 0.2s
=> [internal] load build context 0.0s
=> => transferring context: 109.48kB 0.0s
=> [ 2/17] WORKDIR /sally-supervised-fe 3.4s
=> [ 3/17] RUN addgroup -gid 2002 sally && adduser --uid 2002 --ingroup sally --system sally 0.6s
=> ERROR [ 4/17] RUN apt-get update && apt-get upgrade -y && apt-get -y dist-upgrade 1.0s
------
> [ 4/17] RUN apt-get update && apt-get upgrade -y && apt-get -y dist-upgrade:
#0 0.836 Err:1 http://deb.debian.org/debian buster InRelease
#0 0.836 403 Forbidden [IP: 199.232.106.132 80]
#0 0.876 Err:2 http://deb.debian.org/debian-security buster/updates InRelease
#0 0.876 403 Forbidden [IP: 199.232.106.132 80]
#0 0.925 Err:3 http://deb.debian.org/debian buster-updates InRelease
#0 0.925 403 Forbidden [IP: 199.232.106.132 80]
#0 0.929 Reading package lists...
#0 0.944 E: The repository 'http://deb.debian.org/debian buster InRelease' is not signed.
#0 0.944 E: Failed to fetch http://deb.debian.org/debian/dists/buster/InRelease 403 Forbidden [IP: 199.232.106.132 80]
#0 0.944 E: Failed to fetch http://deb.debian.org/debian-security/dists/buster/updates/InRelease 403 Forbidden [IP: 199.232.106.132 80]
#0 0.944 E: The repository 'http://deb.debian.org/debian-security buster/updates InRelease' is not signed.
#0 0.944 E: Failed to fetch http://deb.debian.org/debian/dists/buster-updates/InRelease 403 Forbidden [IP: 199.232.106.132 80]
#0 0.944 E: The repository 'http://deb.debian.org/debian buster-updates InRelease' is not signed.
------
Dockerfile:10
--------------------
8 | RUN addgroup -gid $sally_gid sally && adduser --uid $sally_uid --ingroup sally --system sally
9 | # install any updates
10 | >>> RUN apt-get update && apt-get upgrade -y && apt-get -y dist-upgrade
11 |
--------------------
ERROR: failed to solve: process "/bin/sh -c apt-get update && apt-get upgrade -y && apt-get -y dist-upgrade" did not complete successfully: exit code: 100
</code></pre>
<p>I was able to build the image successfully last week; this problem started occurring today. I also ran <code>docker system prune</code> before building the image, but without any luck.</p>
<p>Any help to overcome this problem is appreciated.</p>
<p>Thank you.</p>
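Two possibilities worth checking, sketched below: 403s from the Debian CDN mirror are sometimes transient and resolve on their own, but buster is past end of life, and once its packages move to <code>archive.debian.org</code> the regular mirror entries stop working even though the Dockerfile is unchanged. The usual workarounds are bumping the base image (e.g. <code>python:3.10-slim-bullseye</code>) or pointing APT at the archive. The exact <code>sed</code> lines below are an assumption about the image's <code>sources.list</code> layout:

```dockerfile
FROM python:3.10-slim-buster

# buster is archived: switch APT to archive.debian.org and drop the
# no-longer-served buster/updates security entry before updating
RUN sed -i 's|deb.debian.org/debian |archive.debian.org/debian |g' /etc/apt/sources.list && \
    sed -i '/debian-security/d' /etc/apt/sources.list && \
    apt-get -o Acquire::Check-Valid-Until=false update && apt-get upgrade -y
```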
|
<python><linux><docker><debian>
|
2023-03-14 16:14:06
| 1
| 496
|
Amit
|
75,735,672
| 4,502,950
|
Get data from specific sheets in google sheet and extract specific columns from it using gspread
|
<p>I am trying to get specific column data out of specific sheets in one big google sheet.
For example, I have a list of sheets</p>
<pre><code>Sheets = ['Sheet 1', 'Sheet 2', 'Sheet 3']
</code></pre>
<p>and from these sheets, I want to retrieve specific columns like</p>
<pre><code>Column_headers = ['A', 'B']
</code></pre>
<p>What I am doing right now to get the data is:</p>
<pre><code>import gspread
from gspread_dataframe import set_with_dataframe
import pandas as pd
pd.set_option("display.max_columns", None)
pd.set_option('display.max_rows', None)
sa = gspread.service_account(filename='file.json')
book = sa.open("book")
Sheets = ['Sheet 1', 'Sheet 2', 'Sheet 3']
Column_headers = ['A', 'B']
for i in Sheets:
    sheet_2022 = book.worksheet(i)
    records = sheet_2022.get_all_records()
    data_2022 = zip(*(e for e in zip(*records) if e[0] in Column_headers))
    getdata_2022 = pd.DataFrame(data_2022, columns=Column_headers)
    print(getdata_2022)
</code></pre>
<p>I am getting the following error</p>
<pre><code>GSpreadException: the given 'expected_headers' are not uniques
</code></pre>
<p>That is because the headers are not unique (obviously), which is why I am retrieving specific columns. I also can't get the part working where I loop through <code>Sheets</code> to get the data only from specific sheets. Eventually, the end result should be two columns, 'A' and 'B', with all the data from the 3 specific sheets.</p>
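Since <code>get_all_records()</code> insists on unique headers, one sketch is to drop down to <code>get_all_values()</code> and slice the wanted columns by header position yourself (the helper below is my own, not part of gspread; on duplicate headers it keeps the first match):

```python
import pandas as pd

def columns_from_values(values, wanted):
    """values: list of rows, as returned by worksheet.get_all_values();
    wanted: header names to keep (first match wins on duplicates)."""
    header, rows = values[0], values[1:]
    positions = [header.index(name) for name in wanted]
    return pd.DataFrame([[row[p] for p in positions] for row in rows],
                        columns=wanted)

# Example with duplicate headers, as a stand-in for a real sheet
values = [['A', 'B', 'A'], ['1', 'x', 'dup'], ['2', 'y', 'dup']]
df = columns_from_values(values, ['A', 'B'])
print(df)
```

Inside the loop, `columns_from_values(sheet.get_all_values(), Column_headers)` would give one frame per sheet; `pd.concat` can then stack the three frames.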
|
<python><google-sheets><gspread>
|
2023-03-14 16:08:16
| 1
| 693
|
hyeri
|
75,735,664
| 20,612,566
|
Getting dates for past month period Python
|
<p>I have to aggregate some financial data for the past month. For example, if today is March 14, I need to get financial data from February 01 to February 28;
if today is April 20, I need to get financial data from March 01 to March 31.
I have a function that takes two parameters, date_from and date_to (in %Y-%m-%d format).
I can get the data from the last 30 days, but I need the data for the past month period.</p>
<pre><code># getting last 30 days
def get_analytics_time_interval() -> tuple:
tz = pytz.timezone(settings.TIME_ZONE)
now = datetime.now(tz)
from_date = now - timedelta(days=31)
to_date = now + timedelta(days=1)
return from_date, to_date
intervals = get_analytics_time_interval()
# getting financial data
def get_sold_products(date_from=intervals[0].strftime("%Y-%m-%d"),
                      date_to=intervals[1].strftime("%Y-%m-%d")):
    ...
</code></pre>
<p>So, I don't know whether Python has built-in functionality that allows getting the first and last days of the past month. If there isn't, how can I solve this task?</p>
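There is no single built-in for "previous month", but the standard library gets there in two steps: the first of the current month minus one day is the last day of the previous month. A sketch:

```python
from datetime import date, timedelta

def previous_month_bounds(today):
    """Return (first_day, last_day) of the month before `today`."""
    first_of_this_month = today.replace(day=1)
    last_of_prev_month = first_of_this_month - timedelta(days=1)
    return last_of_prev_month.replace(day=1), last_of_prev_month

# First and last day of February 2023
print(previous_month_bounds(date(2023, 3, 14)))
```

The two dates can then be formatted with `strftime("%Y-%m-%d")` and passed as `date_from`/`date_to`.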
|
<python><date><datetime>
|
2023-03-14 16:07:19
| 1
| 391
|
Iren E
|
75,735,631
| 1,421,907
|
How to use a parent classmethod that returns an instance in an overriding child method?
|
<p>I have a parent <code>classmethod</code> that returns an instance of the class. Later I have a child class in which I would like to use the <code>classmethod</code> of the parent and return an instance of the child class. I am using python 3.9</p>
<p>An example could be the following:</p>
<pre><code>class A:
def __init__(self, s):
print("init in class A")
self.s = s
@classmethod
def f(cls, s):
print("f in class A")
return cls(s)
class B(A):
def __init__(self, s):
print("init in class B")
super().__init__(s)
@classmethod
def f(cls, s):
print("f in class B")
a = A.f(s)
# a = super(B, cls).f(s)
# a = super().f(s)
s = "class B " + a.s
return cls(s)
</code></pre>
<p>Looking at the output, written like this it seems to work.</p>
<pre><code>>>> B.f("toto").s
f in class B
f in class A
init in class A
init in class B
init in class A
'class B toto'
</code></pre>
<p>But if I uncomment one of the two commented lines in the <code>classmethod</code> f in class B (for example <code>super(B, cls).f(s)</code>), I got this:</p>
<pre><code>>>> B.f("toto").s
f in class B
f in class A
init in class B
init in class A
init in class B
init in class A
'class B toto'
</code></pre>
<p>The result is the same, but I do not understand why, starting from the <code>classmethod</code> of class A, it goes first through the <code>__init__</code> method of class B.</p>
<p>I get the feeling that in the <code>classmethod</code> of A, <code>cls</code> is still class B. But I don't know how to change this.</p>
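That feeling is right: inside <code>B.f</code>, <code>super().f(s)</code> still passes <code>cls=B</code>, so the parent's <code>cls(s)</code> constructs a <code>B</code> (hence the extra <code>__init__</code> calls), while calling <code>A.f(s)</code> explicitly binds <code>cls=A</code> and constructs an <code>A</code>. A minimal sketch of the difference:

```python
class A:
    def __init__(self, s):
        self.s = s

    @classmethod
    def f(cls, s):
        return cls(s)          # builds whatever class cls is bound to

class B(A):
    @classmethod
    def via_explicit_parent(cls, s):
        return A.f(s)          # cls is A inside f -> returns an A instance

    @classmethod
    def via_super(cls, s):
        return super().f(s)    # cls is still B inside f -> returns a B instance

print(type(B.via_explicit_parent("x")) is A)  # True
print(type(B.via_super("x")) is B)            # True
```

So to get a genuine A instance from within B's classmethod, calling `A.f(s)` (or `A(s)` directly) is the way; `super()` deliberately keeps `cls` as the subclass.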
|
<python><inheritance>
|
2023-03-14 16:04:53
| 0
| 9,870
|
Ger
|
75,735,566
| 15,673,412
|
python - remove array column if it contains at least one 0
|
<p>Let's suppose I have a <code>np.array</code> like:</p>
<pre><code>array([[1., 1., 0., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 0.],
[1., 1., 1., 1., 0.]])
</code></pre>
<p>I would like to know if there is a pythonic way to find all the columns that contain at least one occurrence of 0. In the example I would like to retrieve the indexes 2 and 4.</p>
<p>I need to remove those columns, but I also need to know how many columns I have removed (the indexes are not strictly necessary).
So in the end I simply need the result</p>
<pre><code>array([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]])
</code></pre>
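A boolean column mask does both jobs at once; sketch:

```python
import numpy as np

a = np.array([[1., 1., 0., 1., 1.],
              [1., 1., 1., 1., 1.],
              [1., 1., 1., 1., 0.],
              [1., 1., 1., 1., 0.]])

has_zero = (a == 0).any(axis=0)          # True for columns containing a 0
removed_indexes = np.where(has_zero)[0]  # array([2, 4])
trimmed = a[:, ~has_zero]                # keep only the all-nonzero columns

print(removed_indexes, trimmed.shape)
```

`has_zero.sum()` gives the count of removed columns directly, without needing the indexes.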
|
<python><arrays><numpy>
|
2023-03-14 16:00:03
| 1
| 480
|
Sala
|
75,735,553
| 3,482,266
|
Running subprocess.call with pytest fails in GitHub Actions
|
<p>I have a CI pipeline in GitHub.</p>
<p>In a <code>test.py</code> file, I have the following:</p>
<pre class="lang-py prettyprint-override"><code>subprocess.call(
args=[
"python",
"./folder/file.py",
"--file_path",
"tests/local_tests/test_data/data.txt", # pylint: disable=line-too-long
"--silent",
]
)
</code></pre>
<p>Running it with <code>pytest</code> gives:</p>
<pre><code>----------------------------- Captured stderr call -----------------------------
Traceback (most recent call last):
File "./folder/file.py", line 10, in <module>
from folder.aux import function
ModuleNotFoundError: No module named 'folder'
</code></pre>
<p>This in turn makes the test fail with another exception.
When I run the test locally, everything goes smoothly.</p>
<p>Edit:</p>
<pre><code>name: integration_tests_container
run-name: ${{ github.actor }}'s integration tests on ES container
on:
pull_request:
branches: [main, dev]
types: [opened, reopened]
push:
paths-ignore:
- '**.md'
- './docs'
- 'Dockerfile'
- './.github/workflows/ci-production.yml'
- './.github/workflows/evaluate_clustering.yml'
- './.github/workflows/build_docs.yml'
jobs:
run_integration_tests:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.8]
steps:
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Checkout branch
uses: actions/checkout@v3
- name: Install requirements
run: |
pip install -r requirements.txt
- name: Launching ES db container
run: |
cd ./folder_local_db
docker compose up -d
- name: Execute tests
run: |
sleep 20 # to avoid connection error
python -m pytest ./tests/integration_tests
</code></pre>
|
<python><continuous-integration><pytest><github-actions>
|
2023-03-14 15:59:14
| 1
| 1,608
|
An old man in the sea.
|
75,735,519
| 14,649,310
|
How are Celery queues managed?
|
<p>This is a bit of a theoretical question and I can't find a clear-cut answer in any <a href="https://docs.celeryq.dev/en/stable/getting-started/first-steps-with-celery.html#first-steps-with-celery" rel="nofollow noreferrer">Celery documentation</a> I've read. So assume we have a Flask app and also Celery with either <code>redis</code> or <code>rabbitmq</code>. The broker here might be important, so please let me know how it affects this if you can.</p>
<p>Our Flask app sends some Celery tasks with <code>mytask.apply_async((args, args), time_limit=5)</code>. On the other side I have some workers subscribing to a queue via <code>task_routes = {'tasks.mytask': {'queue': 'transformations'}}</code>. I've read in the Celery docs that the queue is created when a Celery worker starts and subscribes to it. So there are a few things here to which I cannot find answers in the documentation:</p>
<ul>
<li>Is it correct that the queues disappear if the workers shut down? What if I want the queue to exist even if no workers are alive, so that when the workers start up again they can continue processing messages that were received while they were inactive? Can this be achieved?</li>
<li>Who manages the routing to the Celery queues? The broker receives my task, but my sender does not know anything about the queues. The queues, based on the documentation, are defined on the worker side. How is the task routed to the correct queue? Does the queue have to be defined on both the sender and the worker side?</li>
</ul>
|
<python><celery>
|
2023-03-14 15:56:03
| 1
| 4,999
|
KZiovas
|
75,735,206
| 12,760,550
|
Confirm if LOV columns in a pandas dataframe complies with another mapping dataframe
|
<p>I have 2 dataframes: one with employee information by country, and another with a mapping of possible values for LOV (list-of-values) columns per country (depending on the country, a column may or may not be an LOV, may accept different values, and is case sensitive). For example, I have this data belonging to CZ:</p>
<pre><code>Country Employee Name Contract Type Grade Education
CZ Jonathan permanent 1 Male
CZ Peter np.nan 6 male
CZ Maro Fixed . N/A
CZ Marisa Contractor 01-01-2020 Female
CZ Petra Permanent 5 Female
</code></pre>
<p>And this would be the mapping dataframe:</p>
<pre><code>Country LOV Column Values
CZ Contract Type Permanent
CZ Contract Type Fixed
US Contract Type Permanent
US Contract Type Fixed
US Contract Type Contractor
AE Contract Type Permanent
CZ Grade 1
CZ Grade 2
CZ Grade 3
CZ Grade 4
CZ Grade 5
US Contract Type Manager
US Contract Type Non Manager
CZ Education 1st Degree
CZ Education 2nd Degree
SK Education A
SK Education B
SK Education C
AE Gender Male
AE Gender Female
</code></pre>
<p>I need a way to filter the LOV mapping dataframe for the country I am analyzing, with its LOV columns and possible lists of values, and then check the data for values in those columns that are non-compliant with the mapping table (as a list; an empty list would mean all values complied with the mapping), and identify the rows where the errors occurred (also as a list; again, an empty list means there is no error). If a cell is blank, it should not be considered an error.</p>
<p>Example:</p>
<pre><code>Contract Type
Values non compliant in the column: ['permanent','Contractor']
Rows error occured: [0,3]
Education
Values non compliant in the column: ['Male', 'male', 'Female', 'Female']
Rows error occured: [0, 1, 3, 4]
Grade
Values non compliant in the column: ['6', '.', '01-01-2020']
Rows error occured: [1, 2, 3]
</code></pre>
<p>Thank you for the support!</p>
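One sketch of the filtering logic (the function name and the exact blank-handling are my own choices, shown on a cut-down version of the data above):

```python
import numpy as np
import pandas as pd

def non_compliant(data, mapping, country, column):
    """Return (bad_values, bad_row_indexes) for one country/LOV column."""
    allowed = set(
        mapping.loc[
            (mapping["Country"] == country) & (mapping["LOV Column"] == column),
            "Values",
        ].astype(str)
    )
    values = data.loc[data["Country"] == country, column].dropna()
    values = values[values.astype(str).str.strip() != ""]  # blanks are not errors
    bad = values[~values.astype(str).isin(allowed)]        # case-sensitive check
    return bad.astype(str).tolist(), bad.index.tolist()

data = pd.DataFrame({
    "Country": ["CZ"] * 5,
    "Contract Type": ["permanent", np.nan, "Fixed", "Contractor", "Permanent"],
})
mapping = pd.DataFrame({
    "Country": ["CZ", "CZ", "US"],
    "LOV Column": ["Contract Type"] * 3,
    "Values": ["Permanent", "Fixed", "Contractor"],
})
print(non_compliant(data, mapping, "CZ", "Contract Type"))
# (['permanent', 'Contractor'], [0, 3])
```

Looping this over `mapping.loc[mapping.Country == country, "LOV Column"].unique()` covers every LOV column for the country in one pass.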
|
<python><pandas><list><filter><lines-of-code>
|
2023-03-14 15:30:26
| 1
| 619
|
Paulo Cortez
|
75,735,188
| 2,789,863
|
Django GraphQL won't create new data with foreign key
|
<p>I'm new to Graphene, Django, and GraphQL and am trying to wrap my head around how it would work when updating tables that use a foreign key. I am able to successfully create a new author, which does <strong>not</strong> require populating a foreign key field, but I cannot create a new book, which <strong>does</strong> require populating a foreign key field.</p>
<p><strong>My Question:</strong> What is wrong with my implementation that is preventing me from creating a new book?</p>
<h3>What is Happening</h3>
<p>When I run the <code>CreateNewBook</code> mutation (shown below), GraphQL returns <code>null</code> as the title and no data is added to <code>dim_all_books</code>.</p>
<h3>What I Expect</h3>
<p>I expect that when I run the <code>CreateNewBook</code> mutation that the return results show the title of the book and a new record is appended to the table <code>dim_all_books</code>.</p>
<h3>What I've Tried</h3>
<p>When I run the following query, it works fine.</p>
<pre><code>mutation CreateNewAuthor{
createAuthor(authorData: {
authorName: "William Shakespeare",
yearBorn: "1722",
yearDied: "1844",
country: "Austria",
booksWritten: 11
}) {
author {
authorName
}
}
}
</code></pre>
<p>However, when I run the following query immediately after that query, I get this result:</p>
<pre><code>{
"data": {
"createBook": {
"book": null
}
}
}
</code></pre>
<pre><code>mutation CreateNewBook {
createBook(bookData: {
title: "Romeo & Juliet",
subtitle: "",
authorName: "William Shakespeare",
yearPublished: "1736",
review: 3.2
}) {
book {
title
}
}
}
</code></pre>
<p>This outcome occurs regardless of what value I put in for <code>authorName</code>.</p>
<h3>Implementation</h3>
<h6>Project Creation</h6>
<pre class="lang-bash prettyprint-override"><code>django-admin startproject core
python3 manage.py startapp books
python3 manage.py startapp authors
</code></pre>
<h6>Books</h6>
<pre class="lang-py prettyprint-override"><code># books/models.py
class Book(models.Model):
title = models.CharField(max_length=100)
subtitle = models.CharField(max_length=100)
author_name = models.ForeignKey(
'authors.author',
related_name='authors',
verbose_name='author_name',
db_column='author_name',
blank=True,
null=True,
on_delete=models.SET_NULL
)
year_published = models.CharField(max_length=10)
review = models.FloatField()
class Meta:
db_table = "dim_all_books"
def __str__(self):
return self.title
</code></pre>
<pre class="lang-py prettyprint-override"><code># books/schema.py
class BookType(DjangoObjectType):
class Meta:
model = Book
fields = '__all__'
class BookInputType(graphene.InputObjectType):
title = graphene.String()
subtitle = graphene.String()
author_name = graphene.String()
year_published = graphene.String()
review = graphene.Float()
...
class CreateBook(graphene.Mutation):
class Arguments:
book_data = BookInputType(required=True)
book = graphene.Field(BookType)
@staticmethod
def mutate(root, info, book_data=None):
book_instance = Book(
title=book_data.title,
subtitle=book_data.subtitle,
author_name=book_data.author_name,
year_published=book_data.year_published,
review=book_data.review
)
book_instance.save()
return CreateBook(book=book_instance)
...
class Mutation(graphene.ObjectType):
create_book = CreateBook.Field()
update_book = UpdateBook.Field()
delete_book = DeleteBook.Field()
</code></pre>
<h6>Authors</h6>
<pre class="lang-py prettyprint-override"><code># authors/models.py
class Author(models.Model):
author_name = models.CharField(
max_length=100,
primary_key=True,
db_column='author_name',
default='Unknown'
)
year_born = models.CharField(max_length=10)
year_died = models.CharField(max_length=10, blank=True, null=True)
country = models.CharField(max_length=60)
books_written = models.PositiveSmallIntegerField()
class Meta:
db_table = 'dim_all_authors'
def __str__(self):
return self.name
</code></pre>
<pre class="lang-py prettyprint-override"><code># authors/schema.py
class AuthorType(DjangoObjectType):
class Meta:
model = Author
fields = '__all__'
class AuthorInputType(graphene.InputObjectType):
author_name = graphene.String()
year_born = graphene.String()
year_died = graphene.String()
country = graphene.String()
books_written = graphene.Int()
...
class CreateAuthor(graphene.Mutation):
class Arguments:
author_data = AuthorInputType(required=True)
author = graphene.Field(AuthorType)
@staticmethod
def mutate(root, info, author_data=None):
author_instance = Author(
author_name = author_data.author_name,
year_born = author_data.year_born,
year_died = author_data.year_died,
country = author_data.country,
books_written = author_data.books_written
)
author_instance.save()
return CreateAuthor(author=author_instance)
...
class Mutation(graphene.ObjectType):
create_author = CreateAuthor.Field()
update_author = UpdateAuthor.Field()
delete_author = DeleteAuthor.Field()
</code></pre>
<p>And then these schemas are set up in <code>core/schema.py</code> and <code>core.settings.py</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code># core/schema.py
import graphene
from authors.schema import Query as AuthorQuery
from authors.schema import Mutation as AuthorMutation
from books.schema import Query as BookQuery
from books.schema import Mutation as BookMutation
class Query(BookQuery, AuthorQuery):
pass
class Mutation(BookMutation, AuthorMutation):
pass
schema = graphene.Schema(query=Query, mutation=Mutation)
</code></pre>
<pre class="lang-py prettyprint-override"><code># core/settings.py
...
GRAPHENE = {
"SCHEMA": "core.schema.schema"
}
</code></pre>
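A likely cause (a sketch, not a verified fix): <code>Book.author_name</code> is a <code>ForeignKey</code>, so the model expects an <code>Author</code> instance (or the raw key via the <code>author_name_id</code> attribute), not the plain string from the input type. Graphene also tends to surface the resulting exception only in the response's <code>errors</code> list while the field comes back <code>null</code>. Something along these lines in the mutation (assumes the models and imports from the question):

```python
# Sketch only, in the context of books/schema.py from the question
@staticmethod
def mutate(root, info, book_data=None):
    # Resolve the FK target first; raises a clear error if the author is missing
    author = Author.objects.get(pk=book_data.author_name)
    book_instance = Book(
        title=book_data.title,
        subtitle=book_data.subtitle,
        author_name=author,              # an Author instance, not a str
        year_published=book_data.year_published,
        review=book_data.review,
    )
    book_instance.save()
    return CreateBook(book=book_instance)
```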
|
<python><django><graphql><graphene-django>
|
2023-03-14 15:27:27
| 1
| 6,776
|
tblznbits
|
75,735,028
| 9,640,238
|
Using os.path.relpath on Windows network drive
|
<p>I need to get the relative path on a network drive. So, from <code>s:\path\to\file</code>, I want to get <code>path\to\file</code>.</p>
<p>So, this works:</p>
<pre class="lang-py prettyprint-override"><code>In [8]: path = r's:\path\to\file'
In [9]: path[3:]
Out[9]: 'path\\to\\file'
</code></pre>
<p>But I thought that using <code>os.path.relpath()</code> was a smarter, more pythonic way. This doesn't work though:</p>
<pre class="lang-py prettyprint-override"><code>In [10]: os.path.relpath(r's:\path\to\file')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[10], line 1
----> 1 os.path.relpath(r's:\path\to\file')
File <frozen ntpath>:758, in relpath(path, start)
ValueError: path is on mount 's:', start on mount 'C:'
</code></pre>
<p>Is there any workaround to allow the use of <code>relpath</code> on network drives?</p>
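One workaround is to pass <code>start</code> explicitly as the drive root, so both paths are on the same mount. A minimal sketch using <code>ntpath</code> (the Windows flavour of <code>os.path</code>, importable on any OS; on Windows itself <code>os.path</code> is <code>ntpath</code>, so plain <code>os.path</code> works the same way):

```python
import ntpath  # Windows path semantics, importable on any OS

path = r's:\path\to\file'
drive, _ = ntpath.splitdrive(path)                    # 's:'
rel = ntpath.relpath(path, start=drive + ntpath.sep)  # relative to 's:\'
print(rel)  # path\to\file
```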
|
<python><windows><path><relative-path><network-drive>
|
2023-03-14 15:14:12
| 0
| 2,690
|
mrgou
|
75,734,987
| 9,729,023
|
Append S3 File in Lambda, Python
|
<p>I understand it's not a recommended pattern, but we need to update the same S3 file from Lambda.
We'd like to write and append the count result to the same file in S3.</p>
<p>Usually, counting and updating happens one record at a time, in sequential order, within the same job session. But we can't rule out the possibility that scheduled jobs using the same Lambda function run simultaneously, especially at busy times. In that case, the file can be updated in an unintended way or, in the worst scenario, corrupted.</p>
<p>I think we can set the reserved concurrency of that Lambda function to 1 and make it always run in sequential order, but I'm afraid it might cause a performance issue instead.</p>
<p>If you have any better idea for concurrency prevention, or any error countermeasure for the Python code below, please advise me.</p>
<p>Thank you for your help in advance.</p>
<pre><code>-- lambda_handler.py
import json
from s3 import *
def lambda_handler(event, context):
s3_append('yyyymmdd/result.txt', 'yyyymmdd/result.txt', 'append str')
-- function S3.py
import boto3
from datetime import datetime
bucket_name = 's3-sachiko-aws3-new2'
s3_resource = boto3.resource("s3")
# S3 File Append
def s3_append(target_path, append_path, append_str):
target_context = s3_resource.Object(bucket_name, target_path).get()["Body"].read()
s3_resource.Object(bucket_name, append_path).put(Body=target_context + b"\n" + bytes(append_str, 'utf-8'))
</code></pre>
|
<python><amazon-web-services><amazon-s3><aws-lambda>
|
2023-03-14 15:10:49
| 1
| 964
|
Sachiko
|
75,734,979
| 1,403,546
|
pandas-profiling / ydata-profiling : not able to disable some basic alerts like "Zeros"
|
<p>I'm using ydata-profiling (pandas profiling) and I'm not able to disable some alerts (e.g. Zeros).</p>
<p>In the <a href="https://ydata-profiling.ydata.ai/docs/master/pages/advanced_usage/available_settings.html" rel="nofollow noreferrer">available settings documentation</a> I can find only some alerts that can be disabled, for example correlations, but not others such as Zeros.</p>
<p>thanks</p>
|
<python><validation><pandas-profiling>
|
2023-03-14 15:10:25
| 1
| 1,759
|
user1403546
|
75,734,763
| 8,790,507
|
Ordering multi-indexed pandas dataframe on two levels, with different criteria for each level
|
<p>Consider the dataframe <code>df_counts</code>, constructed as follows:</p>
<pre><code>df2 = pd.DataFrame({
"word" : ["AA", "AC", "AC", "BA", "BB", "BB", "BB"],
"letter1": ["A", "A", "A", "B", "B", "B", "B"],
"letter2": ["A", "C", "C", "A", "B", "B", "B"]
})
df_counts = df2[["word", "letter1", "letter2"]].groupby(["letter1", "letter2"]).count()
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/b8ItM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b8ItM.png" alt="enter image description here" /></a></p>
<p>What I would like to do from here, is to order first by <code>letter1</code> totals, so the rows for <code>letter1 == "B"</code> appear first (there are four words starting with <code>B</code>, vs only three with <code>A</code>), and then ordered within each grouping of <code>letter1</code> by the values in the <code>word</code> column.</p>
<p>So the final output should be:</p>
<pre><code>                 word
letter1 letter2
B       B           3
        A           1
A       C           2
        A           1
</code></pre>
<p>Is this possible to do?</p>
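One way to get exactly this ordering is to attach each <code>letter1</code> group's total as a temporary sort key, sort on (group total, word count) descending, and drop the helper column. A sketch:

```python
import pandas as pd

df2 = pd.DataFrame({
    "word":    ["AA", "AC", "AC", "BA", "BB", "BB", "BB"],
    "letter1": ["A", "A", "A", "B", "B", "B", "B"],
    "letter2": ["A", "C", "C", "A", "B", "B", "B"],
})
df_counts = df2.groupby(["letter1", "letter2"]).count()

# Attach each letter1 group's total as a helper column, sort by
# (group total, word count) descending, then drop the helper.
df_counts["group_total"] = df_counts.groupby(level="letter1")["word"].transform("sum")
result = (df_counts
          .sort_values(["group_total", "word"], ascending=False)
          .drop(columns="group_total"))
print(result)
```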
|
<python><pandas><dataframe>
|
2023-03-14 14:52:25
| 2
| 1,594
|
butterflyknife
|
75,734,602
| 2,790,047
|
Differences between forced early binding methods (In CPython) and how to access values stored in function?
|
<p>Using the CPython implementation of Python 3.9.16, consider the following code:</p>
<pre class="lang-py prettyprint-override"><code>def foo_a():
x = 1
fun = (lambda _x: lambda: print(_x))(x)
x = 2
return fun
def foo_b():
x = 1
fun = lambda _x=x: print(_x)
x = 2
return fun
fa = foo_a()
fb = foo_b()
</code></pre>
<p>where I've used two different methods for forcing early binding of <code>x</code>. I believe these are semantically equivalent; however, I notice that the resulting functions have different closures.</p>
<pre class="lang-py prettyprint-override"><code>print(fa.__closure__) # (<cell at 0x0000025CED84BFD0: int object at 0x0000025CE9E36930>,)
print(fb.__closure__) # None
</code></pre>
<p>In the case of <code>fa</code> I can access the local <code>_x</code> variable using <code>fa.__closure__[0].cell_contents</code>, but where is the local <code>_x</code> stored in <code>fb</code> if not inside its closure?</p>
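For <code>fb</code>, the early-bound value lives in the function's default-argument tuple rather than in a closure cell, so it can be read back via <code>fb.__defaults__</code>:

```python
def foo_b():
    x = 1
    fun = lambda _x=x: print(_x)
    x = 2
    return fun

fb = foo_b()
# Default values are evaluated once, at definition time, and stored on the
# function object itself, so no closure cell is needed:
print(fb.__closure__)   # None
print(fb.__defaults__)  # (1,)
```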
|
<python><cpython>
|
2023-03-14 14:39:08
| 1
| 22,544
|
jodag
|
75,734,529
| 7,553,746
|
How can I correctly parse this date string in Python using strptime?
|
<p>I've read previous Stack Overflow questions but I am coming unstuck with a datetime. My code looks like this, and each variant raises an error on just some of the transactions.</p>
<pre><code> # r.createdAt = datetime.strptime(unparsed_review['createdAt'], '%Y-%m-%dT%H:%M:%S.%fZ') # ValueError: time data '2020-11-25T14:54:00Z' does not match format '%Y-%m-%dT%H:%M:%S.%fZ'
# r.createdAt = datetime.strptime(unparsed_review['createdAt'], '%Y-%m-%dT%H:%M:%SZ') # ValueError: time data '2023-03-06T11:28:22.746919Z' does not match format '%Y-%m-%dT%H:%M:%SZ'
# r.createdAt = datetime.strptime(unparsed_review['createdAt'], '%Y-%m-%dT%H:%M:%S%z') # ValueError: time data '2023-03-06T11:28:22.746919Z' does not match format '%Y-%m-%dT%H:%M:%S%z'
</code></pre>
<p>Currently I am overcoming it like this, which is brutal but does work. I'm not sure what the final piece of the string is, or whether it changes.</p>
<pre><code> if not processed:
try:
r.createdAt = datetime.strptime(unparsed_review['createdAt'], '%Y-%m-%dT%H:%M:%S.%fZ')
processed = True
except:
processed = False
if not processed:
try:
r.createdAt = datetime.strptime(unparsed_review['createdAt'], '%Y-%m-%dT%H:%M:%SZ')
processed = True
except:
processed = False
if not processed:
try:
r.createdAt = datetime.strptime(unparsed_review['createdAt'], '%Y-%m-%dT%H:%M:%S%z')
processed = True
except:
processed = False
</code></pre>
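As an alternative to trying formats one by one: both timestamp shapes are valid ISO 8601, so <code>datetime.fromisoformat</code> can parse either once the trailing <code>Z</code> is normalised (<code>fromisoformat</code> only accepts <code>Z</code> directly from Python 3.11 on). A sketch:

```python
from datetime import datetime

def parse_created_at(ts: str) -> datetime:
    # fromisoformat() before Python 3.11 rejects a trailing 'Z', so
    # normalise it to an explicit UTC offset first.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# Handles both shapes, with and without fractional seconds:
print(parse_created_at("2020-11-25T14:54:00Z"))
print(parse_created_at("2023-03-06T11:28:22.746919Z"))
```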
|
<python><datetime>
|
2023-03-14 14:31:58
| 2
| 3,326
|
Johnny John Boy
|
75,734,478
| 7,702,354
|
Python count same keys in ordered dict
|
<p>I have multiple ordered dicts (in this example 3, but there can be more) as a result from an API, all with the same keys. I test for a particular key's value, then print out another key's value:</p>
<pre><code>from collections import OrderedDict
d = [OrderedDict([('type', 'a1'), ('rel', 'asd1')]),
OrderedDict([('type', 'a1'), ('rel', 'fdfd')]),
OrderedDict([('type', 'b1'), ('rel', 'dfdff')])]
for item in d:
if item["type"] == "a1":
print(item["rel"])
</code></pre>
<p>This works fine, but the return from the API can be different when there is only 1 result: without the [ at the beginning and ] at the end, like this:</p>
<pre><code>d = OrderedDict([('type', 'b1'), ('rel', 'dfdff')])
</code></pre>
<p>In this case, I will receive this error:</p>
<pre><code> if item["type"] == "a1":
TypeError: string indices must be integers
</code></pre>
<p>So I would also like to test (in the same if, if that is possible) whether the key "type" has at least one value of "a1" and then print the items, or to test whether the result has the square brackets at the beginning and end (i.e. whether it is a list of OrderedDicts).</p>
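One way to handle both shapes is to normalise the API result to a list before looping, e.g.:

```python
from collections import OrderedDict

def as_list(result):
    # If the API returned a single mapping instead of a list, wrap it.
    return result if isinstance(result, list) else [result]

d_single = OrderedDict([('type', 'b1'), ('rel', 'dfdff')])
for item in as_list(d_single):
    if item["type"] == "a1":
        print(item["rel"])
print(as_list(d_single))
```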
|
<python>
|
2023-03-14 14:27:37
| 3
| 359
|
Darwick
|
75,734,368
| 1,497,199
|
What is the grid_size parameter in shapely operations do?
|
<p>In a practical sense, what does the <code>grid_size</code> parameter do for you? When/why would you change it from the default?</p>
<p>I understand from testing that it imposes a discretization on the coordinates of the resulting geometries, e.g. with <code>grid_size=0.01</code> the fractional part of the coordinates will be multiples of <code>0.01</code>. Does this play into the logic of the algorithms or is just a convenience for applications where, in the end, the user is going to discretize the coordinates anyway?</p>
|
<python><shapely>
|
2023-03-14 14:17:43
| 1
| 8,229
|
Dave
|
75,734,320
| 19,336,534
|
Waiting for os command to end in python
|
<p>In my Python program I am running:</p>
<pre><code>os.popen('sh download.sh')
</code></pre>
<p>where <code>download.sh</code> downloads a couple of csv files using curl.<br />
Is there a way to wait until the files are downloaded before the python programm continues running?</p>
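One way to block until the script finishes is <code>subprocess.run</code>, which waits for the child process to exit before returning (<code>os.popen</code> only waits implicitly when its pipe is read or closed). A sketch, with a harmless <code>sh -c 'sleep 0'</code> standing in for <code>sh download.sh</code>:

```python
import subprocess

# subprocess.run() blocks until the command exits; os.popen() only waits
# implicitly when its pipe is read or closed. "sh -c 'sleep 0'" stands in
# for "sh download.sh" here.
completed = subprocess.run(["sh", "-c", "sleep 0"])
print("downloads finished, exit code:", completed.returncode)
```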
|
<python><python-3.x><operating-system><popen>
|
2023-03-14 14:13:28
| 2
| 551
|
Los
|
75,734,273
| 1,857,373
|
Error Unexpected formatting Networks edges exception. ValueError: not enough values to unpack. add_nodes_from() problem
|
<p><strong>Problem</strong></p>
<p>Batch processing with CSV files of nodes and edges. I need to create graph data and load it into networkx from the CSV files.</p>
<p>I started adding nodes and edges to a networkx graph using graph.add_nodes_from(_nodes) and graph.add_edges_from(_edges). The nodes and edges are visible in the sparse arrays, but the code does not load them properly.</p>
<p>When I execute <em>_nodes = [n for n in nodes]</em>, I get the same errors.</p>
<p><strong>Code</strong></p>
<pre><code>import pandas as pd
import numpy as np
from scipy import sparse
import pandas as pd
import networkx as nx
from sknetwork.data import from_edge_list, from_adjacency_list, from_graphml, from_csv
from sknetwork.visualization import svg_graph, svg_bigraph
from sknetwork.utils import bipartite2undirected
edges = from_csv("edges.csv")
nodes = from_csv("nodes.csv")
_nodes = [n for n in nodes]
_edges = [n for n in edges]
G = nx.Graph()
G.add_nodes_from(_nodes) # see below error
G.add_edges_from(_edges) # see below error
print(nx.info(G))
</code></pre>
<p><strong>Multiple Paths Execution</strong></p>
<p>When I run <em>G.add_edges_from(_edges)</em>, this error is raised:</p>
<pre><code>Unexpected exception formatting exception. Falling back to standard exception
</code></pre>
<p>When I run <em>G.add_nodes_from(_nodes)</em>, this error is raised:</p>
<pre><code>TypeError Traceback (most recent call last)
File ~/opt/anaconda3/lib/python3.9/site-packages/networkx/classes/graph.py:562, in Graph.add_nodes_from(self, nodes_for_adding, **attr)
561 try:
--> 562 if n not in self._node:
563 self._adj[n] = self.adjlist_inner_dict_factory()
TypeError: unhashable type: 'csr_matrix'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[99], line 10
7 _labels = [n for n in edges][1:]
9 G = nx.Graph()
---> 10 G.add_nodes_from(_nodes)
11 G.add_edges_from(_edges)
12 print(nx.info(G))
File ~/opt/anaconda3/lib/python3.9/site-packages/networkx/classes/graph.py:569, in Graph.add_nodes_from(self, nodes_for_adding, **attr)
567 self._node[n].update(attr)
568 except TypeError:
--> 569 nn, ndict = n
570 if nn not in self._node:
571 self._adj[nn] = self.adjlist_inner_dict_factory()
ValueError: not enough values to unpack (expected 2, got 1)
</code></pre>
<p><strong>Data</strong></p>
<pre><code>print(nx.info(G))
_nodes
[<1x5795 sparse matrix of type '<class 'numpy.int64'>'
with 385 stored elements in Compressed Sparse Row format>,
<1x5795 sparse matrix of type '<class 'numpy.int64'>'
with 5 stored elements in Compressed Sparse Row format>,
<1x5795 sparse matrix of type '<class 'numpy.int64'>'
with 5077 stored elements in Compressed Sparse Row format>,
<1x5795 sparse matrix of type '<class 'numpy.int64'>'
with 4 stored elements in Compressed Sparse Row format>,
<1x5795 sparse matrix of type '<class 'numpy.int64'>'
with 2833 stored elements in Compressed Sparse Row format>,
<1x5795 sparse matrix of type '<class 'numpy.int64'>'
with 4 stored elements in Compressed Sparse Row format>,
<1x5795 sparse matrix of type '<class 'numpy.int64'>'
with 4 stored elements in Compressed Sparse Row format>,
<1x5795 sparse matrix of type '<class 'numpy.int64'>'
with 4 stored elements in Compressed Sparse Row format>,
<1x5795 sparse matrix of type '<class 'numpy.int64'>'
with 4 stored elements in Compressed Sparse Row format>,
<1x5795 sparse matrix of type '<class 'numpy.int64'>'
with 4 stored elements in Compressed Sparse Row format>,
<1x5795 sparse matrix of type '<class 'numpy.int64'>'
</code></pre>
<pre><code>_edges
[<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 248 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 241 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 241 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 246 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 246 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 254 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 254 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 240 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 240 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 246 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 246 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 210 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 203 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 207 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 187 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 243 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 243 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 208 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
with 208 stored elements in Compressed Sparse Row format>,
<1x5794 sparse matrix of type '<class 'numpy.int64'>'
</code></pre>
<p><strong>Sample Data nodes</strong></p>
<pre><code>(0, 1265) 1
(0, 1338) 1
(0, 1413) 1
(0, 1643) 1
(0, 1719) 1
(0, 1806) 1
(0, 2052) 1
(0, 2128) 1
(0, 2641) 1
(0, 2872) 1
(0, 3100) 1
(0, 3244) 1
: :
(5778, 52) 1
(5779, 3) 4
(5780, 3) 4
(5781, 1) 1
(5781, 3) 2
(5781, 29) 1
(5782, 3) 4
(5783, 3) 4
(5784, 3) 4
(5785, 3) 4
(5786, 3) 4
(5787, 3) 4
(5788, 3) 2
(5788, 5) 1
(5788, 47) 1
(5789, 3) 4
(5790, 3) 2
(5790, 5) 1
(5790, 47) 1
(5791, 3) 4
(5792, 3) 2
(5792, 5) 1
(5792, 29) 1
</code></pre>
<p><strong>Sample Data edges</strong></p>
<pre><code>(0, 682) 1
(0, 683) 1
(0, 755) 1
(0, 756) 1
(0, 794) 1
(0, 807) 1
(0, 808) 1
(0, 883) 1
(0, 884) 1
(0, 965) 1
: :
(5793, 1492) 1
(5793, 1968) 1
(5793, 2050) 1
(5793, 2347) 1
(5793, 2491) 1
(5793, 3171) 1
(5793, 3549) 1
(5793, 3772) 1
(5793, 5469) 1
(5793, 5504) 1
(5793, 5727) 1
(5793, 5766) 1
(5793, 5767) 1
(5793, 5768) 1
(5793, 5769) 1
</code></pre>
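The root cause appears to be that iterating over sknetwork's <code>from_csv</code> result yields sparse (<code>csr_matrix</code>) rows, which are unhashable and not the (u, v) pairs networkx expects. <code>add_edges_from</code> wants an iterable of plain 2-tuples; a minimal sketch of building those from a CSV, with an in-memory stand-in for edges.csv:

```python
import csv
import io

# Stand-in for open("edges.csv"); each row is "source,target".
edges_file = io.StringIO("0,682\n0,683\n0,755\n")

# Plain (u, v) tuples are hashable, which is what add_edges_from needs.
edge_list = [tuple(int(x) for x in row) for row in csv.reader(edges_file)]
print(edge_list)
```

These tuples can then go straight into <code>nx.Graph().add_edges_from(edge_list)</code>; the same idea applies to node ids for <code>add_nodes_from</code>.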
|
<python><numpy><graph><networkx>
|
2023-03-14 14:09:50
| 0
| 449
|
Data Science Analytics Manager
|
75,734,219
| 10,844,937
|
Why do multiple threads produce different output without sleep?
|
<p>I use <code>threading</code> to create 10 threads.</p>
<pre><code>import threading
import time
def work():
time.sleep(1)
print(f"{threading.current_thread().name} - work......")
if __name__ == '__main__':
for i in range(10):
t = threading.Thread(target=work)
t.start()
</code></pre>
<p>Here, if I <code>sleep</code> for 1 second, I get a different ordering of thread names each time.</p>
<p>The first time:</p>
<pre><code>Thread-1 - work......
Thread-2 - work......
Thread-3 - work......
Thread-4 - work......
Thread-5 - work......
Thread-6 - work......
Thread-9 - work......
Thread-10 - work......
Thread-7 - work......
Thread-8 - work......
Process finished with exit code 0
</code></pre>
<p>The second time:</p>
<pre><code>Thread-1 - work......
Thread-2 - work......
Thread-3 - work......
Thread-5 - work......
Thread-7 - work......
Thread-9 - work......
Thread-6 - work......
Thread-8 - work......
Thread-10 - work......
Thread-4 - work......
Process finished with exit code 0
</code></pre>
<p>If I remove <code>time.sleep(1)</code>, I get the same output every time.</p>
<pre><code>Thread-1 - work......
Thread-2 - work......
Thread-3 - work......
Thread-4 - work......
Thread-5 - work......
Thread-6 - work......
Thread-7 - work......
Thread-8 - work......
Thread-9 - work......
Thread-10 - work......
Process finished with exit code 0
</code></pre>
<p>Does anyone know why?</p>
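The likely explanation: without the sleep, each thread finishes its tiny job before the main loop even starts the next one, so the threads never actually overlap. With the sleep, all ten threads are alive and wake at roughly the same moment, and the OS scheduler picks an arbitrary wake-up order. The overlap can be reproduced deterministically with an <code>Event</code> instead of a sleep:

```python
import threading

gate = threading.Event()
order = []
lock = threading.Lock()

def work():
    gate.wait()                        # every thread blocks here, like sleep(1)
    with lock:
        order.append(threading.current_thread().name)

threads = [threading.Thread(target=work, name=f"T{i}") for i in range(10)]
for t in threads:
    t.start()
gate.set()                             # release all ten threads at once
for t in threads:
    t.join()
print(order)                           # arbitrary order: the scheduler decides
```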
|
<python><multithreading>
|
2023-03-14 14:04:18
| 1
| 783
|
haojie
|
75,734,133
| 8,548,374
|
Need advice on a use case that involves maintaining several python and non-python scripts
|
<p>I am just learning containers and Kubernetes and everything around them. There has been a use case to build a reliable setup where we can store all our Python scripts (small, use-case-defined scripts that each do only one job). There are some scripts in other languages like Perl too.</p>
<p>Not sure if this is the correct place to ask, but I will ask anyway.</p>
<p>The requirement is to build a solution that will have less to no dependency on the underlying operating system so even if we were to switch operating systems/servers in the future, the scripts can remain/run as it is.</p>
<p>I was thinking of building a 2-node Kubernetes cluster, running each script in a container, and triggering them using a cron job. I'm not sure if this is an optimal and efficient approach. Python virtual environments are not our way to go, given that the Python version is symlinked back to the Python version on the server, causing a server/OS dependency.</p>
<p>I'd appreciate any ideas and advice if someone else has done something similar. I've googled enough for such use cases and didn't find solutions that specifically match my need. But please feel free to share ideas, thoughts, and any good reads too. Thanks!</p>
<p>Note: The server operating system is RHEL 8 and above</p>
|
<python><kubernetes><scripting><containers><podman>
|
2023-03-14 13:57:36
| 1
| 421
|
Sudhi
|
75,733,969
| 11,154,036
|
Splitting multiple lists in columns in multiple rows in pandas
|
<p>I'm looking for the best way to transform this matrix</p>
<p><a href="https://i.sstatic.net/RaXM2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RaXM2.png" alt="enter image description here" /></a></p>
<pre><code>pd.DataFrame(data=[[1, [2, 3], [4, 5], [6, 7]], ['a', ['b', 'c'], ['d', 'e'], ['f', 'g']]])
</code></pre>
<p>Into this</p>
<p><a href="https://i.sstatic.net/ZmNyD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZmNyD.png" alt="enter image description here" /></a></p>
<pre><code>pd.DataFrame(data=[[ 1, 2, 4, 6], [1, 3, 5, 7], ['a', 'b', 'd', 'f'], ['a', 'c', 'e', 'g']])
</code></pre>
<p>Columns 1, 2, and 3 can have longer lists, but they are by definition of equal length.</p>
<p>What would be the best way to do this?</p>
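One way to do this is <code>DataFrame.explode</code>, which since pandas 1.3 accepts several columns at once when the lists are of equal length (scalars in other columns are broadcast down the new rows):

```python
import pandas as pd

df = pd.DataFrame(data=[[1, [2, 3], [4, 5], [6, 7]],
                        ['a', ['b', 'c'], ['d', 'e'], ['f', 'g']]])

# Explode all three list columns in one call (pandas >= 1.3); the scalar
# column 0 is repeated for each element of the exploded lists.
out = df.explode([1, 2, 3], ignore_index=True)
print(out)
```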
|
<python><pandas><dataframe>
|
2023-03-14 13:44:22
| 1
| 302
|
Hestaron
|
75,733,920
| 8,176,763
|
jinja templates inside @task in taskflow
|
<p>I have a dag that does not render my jinja template:</p>
<pre><code>@dag(default_args={"owner": "airflow"}, schedule_interval=None, start_date=days_ago(1))
def my_dag():
fs_hook = LocalFilesystemHook()
@task
def write_output(output_name):
output = "This is some output."
unique_id = "{{ ds }}_{{ ti.task_id }}_{{ ti.try_number }}"
file_path = f"/path/to/output/{output_name}_{unique_id}.txt"
fs_hook.write(output, file_path)
return file_path
</code></pre>
<p>Do Jinja templates work in TaskFlow?</p>
|
<python><airflow>
|
2023-03-14 13:39:46
| 0
| 2,459
|
moth
|
75,733,844
| 18,618,577
|
Plot windroses subplots on the same figure
|
<p>There is a <a href="https://windrose.readthedocs.io/en/latest/usage.html#subplots" rel="nofollow noreferrer">beautiful example</a> of windrose subplots, but I can't find any code for it, and for me the WindroseAxes class is a black box.</p>
<p>Here is my code; it generates 2 separate windrose plots, and I am trying to get one figure with two subplots:</p>
<pre><code>from windrose import WindroseAxes
from matplotlib import pyplot as plt
import matplotlib.cm as cm
import numpy as np
ws1 = np.random.random(500) * 6
wd1 = np.random.random(500) * 360
ws2 = np.random.random(500) * 6
wd2 = np.random.random(500) * 360
fig, ax = plt.subplots(2, figsize=(15,15), dpi=150)
ax[0] = WindroseAxes.from_ax()
ax[0].contourf(wd1, ws1, bins=np.arange(0, 8, 1), cmap=cm.hot)
ax[0].contour(wd1, ws1, bins=np.arange(0, 8, 1), colors='black')
ax[1] = WindroseAxes.from_ax()
ax[1].contourf(wd2, ws2, bins=np.arange(0, 8, 1), cmap=cm.hot)
ax[1].contour(wd2, ws2, bins=np.arange(0, 8, 1), colors='black')
# ax.set_legend()
</code></pre>
|
<python><subplot><windrose>
|
2023-03-14 13:33:09
| 1
| 305
|
BenjiBoy
|
75,733,789
| 9,390,633
|
is there a way to subset a dataset when read in using pyspark
|
<p>I see you can use sample to return a random sample of rows, but is there any way, when reading a CSV file in as a dataframe, to read in only a random selection of rows of a specified number?</p>
<p>Is there any way to read in the CSV but pick just 100 random rows from it?</p>
<p>Do I need to read in the full file, or is there a different approach?</p>
<pre><code># Import the SparkSession library
from pyspark.sql import SparkSession
# Create a spark session using getOrCreate() function
spark_session = SparkSession.builder.getOrCreate()
# Read the CSV file
data_frame = spark_session.read.csv('/content/student_data.csv',
                                    sep=',', inferSchema=True, header=True)
# Extract random sample through sample function using
# withReplacement (value=True) and fraction as arguments
data_frame.sample(True,0.8).collect()
</code></pre>
|
<python><pandas><dataframe><apache-spark><pyspark>
|
2023-03-14 13:29:05
| 1
| 363
|
lunbox
|
75,733,783
| 9,879,869
|
Django list display of foreign key attribute that is boolean field using check or cross icons
|
<p>My problem is similar to this thread: <a href="https://stackoverflow.com/questions/163823/can-list-display-in-a-django-modeladmin-display-attributes-of-foreignkey-field">Can "list_display" in a Django ModelAdmin display attributes of ForeignKey fields?</a>. I want to display a foreign key attribute from the list display. This attribute is a BooleanField. Normally, it would display a check or cross if the admin page is on the model itself containing the field. Such as this:</p>
<p><a href="https://i.sstatic.net/adA6a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/adA6a.png" alt="enter image description here" /></a></p>
<p>However, here I reference this field from another model's admin page:</p>
<pre><code>class ProofOfPaymentAdmin(admin.ModelAdmin):
list_display = ('id', 'reference_number', 'user', 'is_subscribed',)
def is_subscribed(self, obj):
return obj.user.is_subscribed
</code></pre>
<p>The return from the list display is the boolean values <code>True</code> or <code>False</code>.</p>
<p><a href="https://i.sstatic.net/tQh4Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tQh4Z.png" alt="enter image description here" /></a></p>
<p>How can I change this to display the icons check or cross similar from the above?</p>
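The admin renders the icons for any <code>list_display</code> callable whose <code>boolean</code> attribute is <code>True</code> (equivalently, the <code>@admin.display(boolean=True)</code> decorator in Django 3.2+). The mechanism is just a function attribute, sketched here without the Django imports:

```python
# The changelist checks for a `boolean` attribute on the callable and,
# when it is True, renders the check/cross icons instead of plain text.
def is_subscribed(self, obj):
    return obj.user.is_subscribed

is_subscribed.boolean = True
is_subscribed.short_description = 'Subscribed'
print(is_subscribed.boolean)  # True
```

Inside the ModelAdmin this is just <code>is_subscribed.boolean = True</code> after the method definition, or the decorator form on the method.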
<p><strong>Update 1:</strong>
Result from first answer:</p>
<p><a href="https://i.sstatic.net/QNzs4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QNzs4.png" alt="enter image description here" /></a></p>
|
<python><django><django-models><django-admin><django-modeladmin>
|
2023-03-14 13:28:40
| 1
| 1,572
|
Nikko
|
75,733,676
| 3,862,607
|
npm install --legacy-peer-deps - command failed
|
<p>In order to try to resolve an issue with node-sass peer dependencies, I tried running the following</p>
<p><code>npm install --legacy-peer-deps</code></p>
<p>However when I run this, I get the following stacktrace and I'm not sure what it means</p>
<pre><code>npm ERR! code 1
npm ERR! path /Users/apple/Documents/projects/nova-web/node_modules/node-sass
npm ERR! command failed
npm ERR! command sh -c node scripts/build.js
npm ERR! Building: /usr/local/bin/node /Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/node-gyp/bin/node-gyp.js rebuild --verbose --libsass_ext= --libsass_cflags= --libsass_ldflags= --libsass_library=
npm ERR! gyp info it worked if it ends with ok
npm ERR! gyp verb cli [
npm ERR! gyp verb cli '/usr/local/bin/node',
npm ERR! gyp verb cli '/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/node-gyp/bin/node-gyp.js',
npm ERR! gyp verb cli 'rebuild',
npm ERR! gyp verb cli '--verbose',
npm ERR! gyp verb cli '--libsass_ext=',
npm ERR! gyp verb cli '--libsass_cflags=',
npm ERR! gyp verb cli '--libsass_ldflags=',
npm ERR! gyp verb cli '--libsass_library='
npm ERR! gyp verb cli ]
npm ERR! gyp info using node-gyp@3.8.0
npm ERR! gyp info using node@18.15.0 | darwin | x64
npm ERR! gyp verb command rebuild []
npm ERR! gyp verb command clean []
npm ERR! gyp verb clean removing "build" directory
npm ERR! gyp verb command configure []
npm ERR! gyp verb check python checking for Python executable "python2" in the PATH
npm ERR! gyp verb `which` failed Error: not found: python2
npm ERR! gyp verb `which` failed at getNotFoundError (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:13:12)
npm ERR! gyp verb `which` failed at F (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:68:19)
npm ERR! gyp verb `which` failed at E (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:80:29)
npm ERR! gyp verb `which` failed at /Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:89:16
npm ERR! gyp verb `which` failed at /Users/apple/Documents/projects/nova-web/node_modules/isexe/index.js:42:5
npm ERR! gyp verb `which` failed at /Users/apple/Documents/projects/nova-web/node_modules/isexe/mode.js:8:5
npm ERR! gyp verb `which` failed at FSReqCallback.oncomplete (node:fs:208:21)
npm ERR! gyp verb `which` failed python2 Error: not found: python2
npm ERR! gyp verb `which` failed at getNotFoundError (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:13:12)
npm ERR! gyp verb `which` failed at F (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:68:19)
npm ERR! gyp verb `which` failed at E (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:80:29)
npm ERR! gyp verb `which` failed at /Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:89:16
npm ERR! gyp verb `which` failed at /Users/apple/Documents/projects/nova-web/node_modules/isexe/index.js:42:5
npm ERR! gyp verb `which` failed at /Users/apple/Documents/projects/nova-web/node_modules/isexe/mode.js:8:5
npm ERR! gyp verb `which` failed at FSReqCallback.oncomplete (node:fs:208:21) {
npm ERR! gyp verb `which` failed code: 'ENOENT'
npm ERR! gyp verb `which` failed }
npm ERR! gyp verb check python checking for Python executable "python" in the PATH
npm ERR! gyp verb `which` failed Error: not found: python
npm ERR! gyp verb `which` failed at getNotFoundError (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:13:12)
npm ERR! gyp verb `which` failed at F (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:68:19)
npm ERR! gyp verb `which` failed at E (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:80:29)
npm ERR! gyp verb `which` failed at /Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:89:16
npm ERR! gyp verb `which` failed at /Users/apple/Documents/projects/nova-web/node_modules/isexe/index.js:42:5
npm ERR! gyp verb `which` failed at /Users/apple/Documents/projects/nova-web/node_modules/isexe/mode.js:8:5
npm ERR! gyp verb `which` failed at FSReqCallback.oncomplete (node:fs:208:21)
npm ERR! gyp verb `which` failed python Error: not found: python
npm ERR! gyp verb `which` failed at getNotFoundError (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:13:12)
npm ERR! gyp verb `which` failed at F (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:68:19)
npm ERR! gyp verb `which` failed at E (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:80:29)
npm ERR! gyp verb `which` failed at /Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:89:16
npm ERR! gyp verb `which` failed at /Users/apple/Documents/projects/nova-web/node_modules/isexe/index.js:42:5
npm ERR! gyp verb `which` failed at /Users/apple/Documents/projects/nova-web/node_modules/isexe/mode.js:8:5
npm ERR! gyp verb `which` failed at FSReqCallback.oncomplete (node:fs:208:21) {
npm ERR! gyp verb `which` failed code: 'ENOENT'
npm ERR! gyp verb `which` failed }
npm ERR! gyp ERR! configure error
npm ERR! gyp ERR! stack Error: Can't find Python executable "python", you can set the PYTHON env variable.
npm ERR! gyp ERR! stack at PythonFinder.failNoPython (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/node-gyp/lib/configure.js:484:19)
npm ERR! gyp ERR! stack at PythonFinder.<anonymous> (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/node-gyp/lib/configure.js:406:16)
npm ERR! gyp ERR! stack at F (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:68:16)
npm ERR! gyp ERR! stack at E (/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:80:29)
npm ERR! gyp ERR! stack at /Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/which/which.js:89:16
npm ERR! gyp ERR! stack at /Users/apple/Documents/projects/nova-web/node_modules/isexe/index.js:42:5
npm ERR! gyp ERR! stack at /Users/apple/Documents/projects/nova-web/node_modules/isexe/mode.js:8:5
npm ERR! gyp ERR! stack at FSReqCallback.oncomplete (node:fs:208:21)
npm ERR! gyp ERR! System Darwin 21.6.0
npm ERR! gyp ERR! command "/usr/local/bin/node" "/Users/apple/Documents/projects/nova-web/node_modules/node-sass/node_modules/node-gyp/bin/node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library="
npm ERR! gyp ERR! cwd /Users/apple/Documents/projects/nova-web/node_modules/node-sass
npm ERR! gyp ERR! node -v v18.15.0
npm ERR! gyp ERR! node-gyp -v v3.8.0
npm ERR! gyp ERR! not ok
npm ERR! Build failed with error code: 1
npm ERR! A complete log of this run can be found in:
npm ERR! /Users/apple/.npm/_logs/2023-03-14T13_09_24_326Z-debug-0.log
</code></pre>
<p>I have python3 installed, but python2 and python are not on my PATH. Do I need to add python to my PATH, pointing at python3?</p>
|
<python><node.js><npm>
|
2023-03-14 13:19:36
| 1
| 1,899
|
Drew Gallagher
|
75,733,675
| 15,613,309
|
Is there an alternative to using Python eval() in this situation?
|
<p>I have a program that is a group of individual "applets". Each of the applets has tk.Entry widgets that only require numeric entries and the decimal key (0-9 & .). This program runs on a Raspberry Pi with a 10" touch screen. The stock on-screen keyboards (Onboard, etc.) take up way too much screen real estate, plus I only need numeric keys. So I wrote an on-screen numeric keypad, but unlike the numerous YouTube calculator examples, this keypad needs to know which of the Entry widgets has focus. I'm using a list of dictionaries to keep track of the Entry widgets and the "fields" I'll need to apply the keypad entries to. I cannot come up with a way to avoid using eval() when applying the keypad entries. I suspect the answer (if there is one) lies in "nametowidget", but I'm not sure how I'd use it here. The eval() statements are near the end of the code, in the kb_entry function.</p>
<pre><code>import tkinter as tk
from tkinter import messagebox
class Eval_GUI:
def __init__(self):
self.root = tk.Tk()
self.root.geometry('650x225')
self.root.title('Eval Demo')
self.allEntryFields = [] # A list of directories we'll use to apply the numeric keypad entries
# Frame 1 - A couple of Labels and Entry widgets
f1 = tk.LabelFrame(self.root,padx=5,pady=5)
f1.grid(column=0,row=0)
self.f1e1_text = tk.Label(f1,text='Frame 1 Entry 1',font=('Arial',12))
self.f1e1_text.grid(row=0,column=0)
self.f1e2_text = tk.Label(f1,text='Frame 1 Entry 2',font=('Arial',12))
self.f1e2_text.grid(row=1,column=0)
self.f1e1 = tk.StringVar() # Using StringVar because IntVar will not accept a decimal (.) and DoubleVar has a default of 0.0 which is not desired
self.f1e2 = tk.StringVar()
self.f1e1_entry=tk.Entry(f1,textvariable=self.f1e1,width='7',font=('Arial',12))
self.f1e2_entry=tk.Entry(f1,textvariable=self.f1e2,width='7',font=('Arial',12))
self.f1e1_entry.grid(row=0,column=1)
self.f1e2_entry.grid(row=1,column=1)
temp = {} # Create a temporary directory for each Entry widget
temp['focusedentry'] = '.!labelframe.!entry' # Used the print() function to find these names
temp['textvar'] = 'self.f1e1' # Will need this to get() & set() the Entry widget textvariable
temp['entrywidget'] = 'self.f1e1_entry' # Will need this to advance the cursor in the Entry widget
self.allEntryFields.append(temp) # Append the directory to the allEntryFields list
temp = {}
temp['focusedentry'] = '.!labelframe.!entry2'
temp['textvar'] = 'self.f1e2'
temp['entrywidget'] = 'self.f1e2_entry'
self.allEntryFields.append(temp)
# Frame 2 - A couple more Labels and Entry widgets
f2 = tk.LabelFrame(self.root,padx=5,pady=5)
f2.grid(column=1,row=0)
self.f2e1_text = tk.Label(f2,text='Frame 2 Entry 1',font=('Arial',12))
self.f2e1_text.grid(row=0,column=0)
self.f2e2_text = tk.Label(f2,text='Frame 2 Entry 2',font=('Arial',12))
self.f2e2_text.grid(row=1,column=0)
self.f2e1 = tk.StringVar()
self.f2e2 = tk.StringVar()
self.f2e1_entry=tk.Entry(f2,textvariable=self.f2e1,width='7',font=('Arial',12))
self.f2e2_entry=tk.Entry(f2,textvariable=self.f2e2,width='7',font=('Arial',12))
self.f2e1_entry.grid(row=0,column=1)
self.f2e2_entry.grid(row=1,column=1)
temp = {}
temp['focusedentry'] = '.!labelframe2.!entry'
temp['textvar'] = 'self.f2e1'
temp['entrywidget'] = 'self.f2e1_entry'
self.allEntryFields.append(temp)
temp = {}
temp['focusedentry'] = '.!labelframe2.!entry2'
temp['textvar'] = 'self.f2e2'
temp['entrywidget'] = 'self.f2e2_entry'
self.allEntryFields.append(temp)
# Frame 3
f3 = tk.LabelFrame(self.root,padx=5,pady=5)
f3.grid(column=2,row=0)
# Placing .grid on same line just to shorten this demo code
k7 = tk.Button(f3,text='7',bg='white',command=lambda: self.kb_entry(7) ,width=4,font=('Arial',16)).grid(row=0,column=1,padx=3,pady=3)
k8 = tk.Button(f3,text='8',bg='white',command=lambda: self.kb_entry(8) ,width=4,font=('Arial',16)).grid(row=0,column=2,padx=3,pady=3)
k9 = tk.Button(f3,text='9',bg='white',command=lambda: self.kb_entry(9) ,width=4,font=('Arial',16)).grid(row=0,column=3,padx=3,pady=3)
k4 = tk.Button(f3,text='4',bg='white',command=lambda: self.kb_entry(4) ,width=4,font=('Arial',16)).grid(row=1,column=1,padx=3,pady=3)
k5 = tk.Button(f3,text='5',bg='white',command=lambda: self.kb_entry(5) ,width=4,font=('Arial',16)).grid(row=1,column=2,padx=3,pady=3)
k6 = tk.Button(f3,text='6',bg='white',command=lambda: self.kb_entry(6) ,width=4,font=('Arial',16)).grid(row=1,column=3,padx=3,pady=3)
k1 = tk.Button(f3,text='1',bg='white',command=lambda: self.kb_entry(1) ,width=4,font=('Arial',16)).grid(row=2,column=1,padx=3,pady=3)
k2 = tk.Button(f3,text='2',bg='white',command=lambda: self.kb_entry(2) ,width=4,font=('Arial',16)).grid(row=2,column=2,padx=3,pady=3)
k3 = tk.Button(f3,text='3',bg='white',command=lambda: self.kb_entry(3) ,width=4,font=('Arial',16)).grid(row=2,column=3,padx=3,pady=3)
k0 = tk.Button(f3,text='0',bg='white',command=lambda: self.kb_entry(0) ,width=4,font=('Arial',16)).grid(row=3,column=1,padx=3,pady=3)
kd = tk.Button(f3,text='.',bg='white',command=lambda: self.kb_entry('.'),width=4,font=('Arial',16)).grid(row=3,column=2,padx=3,pady=3)
self.root.mainloop()
def kb_entry(self,the_key):
focused = str(self.root.focus_get()) # Get the Entry widget that has focus
if focused == '.':
tk.messagebox.showerror('Error','Select an Entry Field First')
else:
found = False
for i in self.allEntryFields: # Find a match in our list of directories
if i['focusedentry'] == focused:
the_tv = i['textvar']
the_en = i['entrywidget']
found = True
break
if found:
c_val = eval(the_tv).get() # Get the current value of the Focused Entry widget
n_val = c_val + str(the_key) # Concatenate the numeric keypad entry to the current value
eval(the_tv).set(n_val) # Set the value of the Focused Entry widget to the new value
eval(the_en).icursor(8) # Advance the cursor to the end of the Focused Widget (use a number greater than what you expect the Entry widget to hold
else:
tk.messagebox.showerror('Error','Unknown Entry Widget Focus ' + str(focused))
if __name__ == '__main__':
Eval_GUI()
</code></pre>
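<p>For illustration, a minimal eval-free sketch (with a hypothetical <code>Counter</code> class standing in for <code>tk.StringVar</code>, since tkinter needs a display): store the objects themselves in the dictionaries instead of their attribute-name strings, so the lookup returns something you can call <code>get()</code>/<code>set()</code> on directly.</p>

```python
class Counter:
    """Hypothetical stand-in for tk.StringVar: it just has get()/set()."""
    def __init__(self):
        self.value = 0

    def get(self):
        return self.value

    def set(self, v):
        self.value = v

c = Counter()
# Store the object itself, not the string 'self.f1e1' -- no eval() needed later
registry = [{'focusedentry': '.!labelframe.!entry', 'textvar': c}]

entry = registry[0]
entry['textvar'].set(entry['textvar'].get() + 1)  # works without eval()
```

<p>Applied to the code above, that would mean <code>temp['textvar'] = self.f1e1</code> (the StringVar object itself) and <code>temp['entrywidget'] = self.f1e1_entry</code> (the widget itself), i.e. dropping the quotes.</p>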
|
<python><tkinter><eval>
|
2023-03-14 13:19:29
| 1
| 501
|
Pragmatic_Lee
|
75,733,367
| 188,331
|
Convert Chinese numeric characters to numbers in Python
|
<p>I am writing a Python function to convert numbers represented in a mix of Chinese and Arabic numbers to a numerical value.</p>
<pre><code>import re
units = ['', '十', '百', '千', '萬', '十萬', '百萬', '千萬', '億', '十億', '百億', '千億']
# the Chinese units mean '', 10, 100, 1000, 10000, etc. (powers of ten by index)
def chinese_numeral_replace(input_text):
if input_text.group(0) in units:
idx = units.index(input_text.group(0))
return '*' + str(pow(10, idx))
else:
return input_text
test2 = '5萬' # means 50 thousands or 50000
result = re.sub('|'.join(units).lstrip('|'), chinese_numeral_replace, test2)
print(eval(result))
</code></pre>
<p>which results <code>50000</code>. The value is <code>50*1000</code> which will then <code>eval()</code> to 50000.</p>
<p>The above function and codes work when the sentence is simple. However, if the input string is more complex, like '5千萬5千' (which translates to 50000000 + 5000, that's 50005000), the above function will produce incorrect values.</p>
<p>I know there are 2 bugs:</p>
<ol>
<li><p>The <code>re.sub()</code> returns incorrect results. For 千萬 in the complex case, the <code>re.sub()</code> will return 2 match groups ('千' and '萬') instead of '千萬' which is also in the <code>units</code> list.</p>
</li>
<li><p>The <code>chinese_numeral_replace</code> function hard-codes <code>.group(0)</code> into the function. How can I ask the replace function to replace:</p>
</li>
</ol>
<ul>
<li>千萬 -> 10000000 (which means 10 million in Chinese, which 千 = 1000, 萬 = 10000)</li>
<li>千 -> 1000 (which means 1000 in Chinese)</li>
</ul>
<p>Thanks in advance.</p>
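<p>For the first bug, one possible sketch (using a simplified, illustrative unit table rather than the full list above): build the alternation with the longest unit names first, since Python's <code>re</code> tries alternatives left to right, so <code>千萬</code> is matched before <code>千</code>.</p>

```python
import re

# Assumed simplified unit table; the values are the illustrative powers of ten
units = {'十': 10, '百': 100, '千': 1000, '萬': 10000,
         '十萬': 10**5, '百萬': 10**6, '千萬': 10**7, '億': 10**8}

# Longest alternatives first, so '千萬' wins over matching '千' then '萬'
pattern = '|'.join(sorted(units, key=len, reverse=True))

result = re.sub(pattern, lambda m: '*' + str(units[m.group(0)]), '5千萬')
print(result)  # 5*10000000
```

<p>Note this only addresses the longest-match problem; the additive case ('5千萬5千') would still need a '+' inserted between the groups.</p>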
|
<python><regex>
|
2023-03-14 12:50:31
| 3
| 54,395
|
Raptor
|
75,733,294
| 11,932,905
|
Pandas: calculate time difference between different milestones in column
|
<p>I have a table like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">id</th>
<th style="text-align: left;">tm</th>
<th style="text-align: left;">milestone</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">00335c06f96a21e4089c49a5da</td>
<td style="text-align: left;">2023-02-01 18:13:42.307543</td>
<td style="text-align: left;">A</td>
</tr>
<tr>
<td style="text-align: left;">00335c06f96a21e4089c49a5da</td>
<td style="text-align: left;">2023-02-01 18:14:42.307543</td>
<td style="text-align: left;">A</td>
</tr>
<tr>
<td style="text-align: left;">00335c06f96a21e4089c49a5da</td>
<td style="text-align: left;">2023-02-01 18:15:42.307543</td>
<td style="text-align: left;">A</td>
</tr>
<tr>
<td style="text-align: left;">00335c06f96a21e4089c49a5da</td>
<td style="text-align: left;">2023-02-01 18:19:10.307543</td>
<td style="text-align: left;">B</td>
</tr>
<tr>
<td style="text-align: left;">00335c06f96a21e4089c49a5da</td>
<td style="text-align: left;">2023-02-01 18:21:05.307543</td>
<td style="text-align: left;">C</td>
</tr>
<tr>
<td style="text-align: left;">0043545f6b9112c7e471d5cc81</td>
<td style="text-align: left;">2023-02-02 08:06:42.307543</td>
<td style="text-align: left;">A</td>
</tr>
<tr>
<td style="text-align: left;">0043545f6b9112c7e471d5cc81</td>
<td style="text-align: left;">2023-02-02 08:07:42.307543</td>
<td style="text-align: left;">A</td>
</tr>
<tr>
<td style="text-align: left;">0043545f6b9112c7e471d5cc81</td>
<td style="text-align: left;">2023-02-02 09:05:42.307543</td>
<td style="text-align: left;">B</td>
</tr>
<tr>
<td style="text-align: left;">0043545f6b9112c7e471d5cc81</td>
<td style="text-align: left;">2023-02-02 09:05:42.307543</td>
<td style="text-align: left;">B</td>
</tr>
<tr>
<td style="text-align: left;">ffe92ae6b0962e800dbdbdf00a</td>
<td style="text-align: left;">2023-02-12 19:05:42.307543</td>
<td style="text-align: left;">A</td>
</tr>
<tr>
<td style="text-align: left;">ffe92ae6b0962e800dbdbdf00a</td>
<td style="text-align: left;">2023-02-12 19:05:42.307543</td>
<td style="text-align: left;">B</td>
</tr>
<tr>
<td style="text-align: left;">ffe92ae6b0962e800dbdbdf00a</td>
<td style="text-align: left;">2023-02-12 19:07:42.307543</td>
<td style="text-align: left;">B</td>
</tr>
<tr>
<td style="text-align: left;">ffe92ae6b0962e800dbdbdf00a</td>
<td style="text-align: left;">2023-02-13 21:03:42.307543</td>
<td style="text-align: left;">C</td>
</tr>
</tbody>
</table>
</div>
<p>What I want to achieve - is convert it to transactional view / pivot so that for each <code>id</code> to have columns with max time difference between milestones (A, B, C, D..). For starting - columns <code>A-B</code>, <code>A-C</code>.<br />
Treaky part that there are multiple records with same milestone - I am taking min time for each category. But also not all id have full sequence, in this case I need to mark this time difference as None.</p>
<p>For Example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">id</th>
<th style="text-align: left;">A-B, min</th>
<th style="text-align: left;">A-C, min</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">00335c06f96a21e4089c49a5da</td>
<td style="text-align: left;">5.47</td>
<td style="text-align: left;">7.38</td>
</tr>
<tr>
<td style="text-align: left;">0043545f6b9112c7e471d5cc81</td>
<td style="text-align: left;">59.0</td>
<td style="text-align: left;">None</td>
</tr>
<tr>
<td style="text-align: left;">ffe92ae6b0962e800dbdbdf00a</td>
<td style="text-align: left;">0</td>
<td style="text-align: left;">1558.0</td>
</tr>
</tbody>
</table>
</div>
<p>What I'm trying to do:<br />
Take min values for each milestone:
<code>df.groupby(['id','milestone'])['tm'].min().to_frame('tm').reset_index()</code>,</p>
<p>And apply <code>.diff()</code> like <code>.sort_values(['id','tm']).groupby('id')['tm'].diff()</code>. But I can't figure out how to properly do this to create the time-difference columns.<br />
Another idea is to use <code>.pivot</code>, but I can't come up with a way to aggregate the milestones inside.</p>
<p>Appreciate any help.</p>
<p>Input DataFrame for example:</p>
<pre><code>pd.DataFrame({'id':['00335c06f96a21e4089c49a5da','00335c06f96a21e4089c49a5da','00335c06f96a21e4089c49a5da','00335c06f96a21e4089c49a5da','00335c06f96a21e4089c49a5da','0043545f6b9112c7e471d5cc81','0043545f6b9112c7e471d5cc81','0043545f6b9112c7e471d5cc81','0043545f6b9112c7e471d5cc81','ffe92ae6b0962e800dbdbdf00a','ffe92ae6b0962e800dbdbdf00a','ffe92ae6b0962e800dbdbdf00a','ffe92ae6b0962e800dbdbdf00a'],
'tm':['2023-02-01 18:13:42.307543','2023-02-01 18:14:42.307543','2023-02-01 18:15:42.307543','2023-02-01 18:19:10.307543','2023-02-01 18:21:05.307543',
'2023-02-02 08:06:42.307543','2023-02-02 08:07:42.307543','2023-02-02 09:05:42.307543','2023-02-02 09:05:42.307543',
'2023-02-12 19:05:42.307543','2023-02-12 19:05:42.307543','2023-02-12 19:07:42.307543','2023-02-13 21:03:42.307543'],
'milestone':['A','A','A','B','C','A','A','B','B','A','B','B','C']})
</code></pre>
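<p>For what it's worth, here is a rough sketch of one possible direction (min per (id, milestone), <code>unstack()</code> the milestones into columns, then subtract; ids missing a milestone come out as NaN automatically), shown on a small subset of the data:</p>

```python
import pandas as pd

# Small illustrative subset of the data above
df = pd.DataFrame({
    'id': ['x', 'x', 'x', 'y', 'y'],
    'tm': ['2023-02-01 18:13:42', '2023-02-01 18:19:10', '2023-02-01 18:21:05',
           '2023-02-02 08:06:42', '2023-02-02 09:05:42'],
    'milestone': ['A', 'B', 'C', 'A', 'B'],
})
df['tm'] = pd.to_datetime(df['tm'])

# Earliest time per milestone; milestones become columns
wide = df.groupby(['id', 'milestone'])['tm'].min().unstack()

out = pd.DataFrame({
    'A-B, min': (wide['B'] - wide['A']).dt.total_seconds() / 60,
    'A-C, min': (wide['C'] - wide['A']).dt.total_seconds() / 60,
})
# id 'y' has no C milestone, so its 'A-C, min' is NaN
```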
|
<python><pandas><group-by><pivot>
|
2023-03-14 12:43:40
| 1
| 608
|
Alex_Y
|
75,733,292
| 5,969,463
|
Formatting and Substituting Into Queries Fails in SQLAlchemy
|
<p>I am doing some wrapping of queries in <code>TextClause</code> objects by using SQLAlchemy's <code>text()</code> function. It appears that when I try to pass values to these parameters, they do not get substituted. I am pretty sure I am passing the parameters correctly (via dictionary).</p>
<p>Could there be an issue with having single quotes near a parameter to be substituted? For instance, many of my queries have something like this:</p>
<pre><code>':YYYY-01-01' as requested_date,
</code></pre>
<p>and</p>
<pre><code>select concat('BB-:YY. ', as_name)
</code></pre>
<p>and</p>
<pre><code> ':YYYY-01-01'
</code></pre>
<p>Is it expected that these should work, assuming I pass a dictionary with <code>YY</code> and <code>YYYY</code> as keys that have appropriate values or are there caveats which would prevent this from succeeding?</p>
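<p>My current understanding (which may be wrong): bound parameters carry values, not SQL text, so a <code>:param</code> inside single quotes is just part of a string literal and is never substituted; the string-building has to happen in SQL instead. The same principle illustrated with the stdlib <code>sqlite3</code> driver (using <code>?</code> placeholders instead of <code>:named</code> ones):</p>

```python
import sqlite3

conn = sqlite3.connect(':memory:')

# "':yyyy-01-01'" would stay a literal string; bind the value and
# concatenate in SQL instead (|| is sqlite's string-concat operator)
row = conn.execute("select ? || '-01-01' as requested_date", ('2023',)).fetchone()
print(row[0])  # 2023-01-01
```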
|
<python><sqlalchemy><flask-sqlalchemy>
|
2023-03-14 12:43:33
| 1
| 5,891
|
MadPhysicist
|
75,732,892
| 1,937,003
|
Issue accessing a dictionary key in python
|
<p>When I execute this code:</p>
<pre><code>df_settings = {
"headers" : [
"Header 1",
"Header 2",
"Header 3"
],
"headers_type" : {
"Header 1" : "int64",
"Header 2" : "object",
"Header 3" : "float"
},
}
class DataframeValidator:
def __init__(self, df_data, df_settings):
self.df_data = df_data
self.df_settings = df_settings
def _check_missing_headers(self):
""" controlla le che ci siano le intestazioni di colonna necessarie"""
output_msg = ""
missing_headers = []
standards_headers = self.df_settings['headers']
print(standards_headers)
for header in standards_headers:
if header in self.df_data.columns.tolist():
continue
else:
missing_headers.append(header)
if missing_headers:
output_msg = f"Missing headers check failed: missing headers have been detected:\n {missing_headers}"
return (False, output_msg)
else:
output_msg = "Missing headers check passed: no missing headers have been detected."
return(True, output_msg)
def _check_wrong_types(self, **kwargs):
""" controlla le che ci siano le intestazioni di colonna necessarie"""
print('_check_wrong_types')
rules = {
'headers' : _check_missing_headers,
'headers_type' : _check_wrong_types
}
def validate_data_frame(self):
output_msg = ""
for setting in self.df_settings:
if setting in self.rules:
output = self.rules[setting](self.df_settings[setting])
output_msg += str(output)
print(output_msg)
dataframe_path = Path('test.xlsx')
dati = pd.read_excel(dataframe_path)
validator = DataframeValidator(dati, df_settings)
validator.validate_data_frame()
</code></pre>
<p>I obtain this error:</p>
<pre><code> File "/Users/manuelzompetta/Library/CloudStorage/OneDrive-Lactalis/_SPOOL OneDrive/Test/datavalidator3.py", line 30, in _check_missing_headers
standards_headers = self.df_settings['headers']
^^^^^^^^^^^^^^^^
AttributeError: 'list' object has no attribute 'df_settings'
</code></pre>
<p>I do not understand where the mistake is. At first I thought it was caused by an issue related to the dictionary loaded from a json file. The purpose of the code is to validate a dataframe by applying some functions with parameters contained in a json file.</p>
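<p>For comparison, a minimal sketch (simplified, hypothetical names - not my exact code) of keeping a <code>rules</code> dict of bound methods on the instance, so each handler carries <code>self</code>:</p>

```python
class Validator:
    def __init__(self, settings):
        self.settings = settings
        # self._check_headers is already bound to this instance
        self.rules = {'headers': self._check_headers}

    def _check_headers(self, value):
        return f"checked {value}"

    def validate(self):
        return [self.rules[key](self.settings[key])
                for key in self.settings if key in self.rules]

v = Validator({'headers': ['a', 'b'], 'other': 1})
print(v.validate())  # ["checked ['a', 'b']"]
```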
|
<python><list><dictionary><data-structures>
|
2023-03-14 12:02:11
| 2
| 424
|
Manuel Zompetta
|
75,732,797
| 5,623,899
|
Construct a cone from a set of k points given apex point i in python?
|
<p>Suppose you have k n-dimensional points <code>{p1, p2, ..., pk}</code> and we pick a point <code>i</code> (<code>1 <= i <= k</code>). How would one compute the tightest fitting cone with tip <code>pi</code>? As well as the <a href="https://link.springer.com/article/10.1007/s10107-006-0069-1" rel="nofollow noreferrer">pointedness</a> of said cone?</p>
<p>For a similar question about tightest fitting cones, user <a href="https://stackoverflow.com/users/387778/joseph-orourke">@Joseph O'Rourke</a> <a href="https://stackoverflow.com/a/14140689/5623899">suggested</a></p>
<blockquote>
<p>If your set of directions all fall within one halfspace through the origin, then you could compute the convex hull of the vector tips on the unit-radius sphere, which yields a convex polygon on that sphere, and then find the smallest circle circumscribing that polygon. You could avoid the spherical calculations by projection to a suitable plane.
I realize this is an abstract view and you likely need more concrete advice, but it still might help: Convex hull + minimum circumscribing circle.</p>
</blockquote>
<p>So how would one compute this in python?
There is the <a href="https://cvxopt.org/userguide/coneprog.html" rel="nofollow noreferrer">cone programming</a> library and SciPy's <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.ConvexHull.html" rel="nofollow noreferrer">convex hull</a>.</p>
<p>From a naive perspective, I feel like the angle / pointedness of the cone can be computed as follows (at least in 2D, where the angle of a cone and an arc are equivalent?)</p>
<pre><code>given: point i
pick another point j, 1<=j<=k, j!=i
min_theta = 0
for each point p, p!=i, p!=j
cur_theta = arc angle from j to p
if cur_theta > min_theta: # point is outside our minimal 2d arc
min_theta = cur_theta
</code></pre>
|
<python><scipy><computational-geometry>
|
2023-03-14 11:52:53
| 0
| 5,218
|
SumNeuron
|
75,732,660
| 3,118,956
|
Adding missing label line
|
<p>I have a csv file with data lines, but it is missing the label line.
This label line is a comma-separated string, ready to be prepended on top.</p>
<pre><code>import pandas as pd
data = pd.read_csv("C:\\Users\\Desktop\\Trades-List.txt", sep=',')
labelLine = "label1,label2,label3"
</code></pre>
<p>How to add the 'labelLine' on top of data and make sure it is properly considered as the label line of the file?</p>
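<p>A minimal sketch of one approach (assuming the file really has no header row; an in-memory buffer stands in for the real path): pass the labels as <code>names</code> while reading, with <code>header=None</code> so pandas treats the first line as data.</p>

```python
import io

import pandas as pd

labelLine = "label1,label2,label3"
csv_text = io.StringIO("1,2,3\n4,5,6")  # in-memory stand-in for Trades-List.txt

# header=None tells pandas the first line is data, not column labels
data = pd.read_csv(csv_text, sep=',', header=None, names=labelLine.split(','))
print(list(data.columns))  # ['label1', 'label2', 'label3']
```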
|
<python><pandas>
|
2023-03-14 11:39:36
| 2
| 7,488
|
Robert Brax
|
75,732,546
| 12,106,577
|
LaTeX logo (\LaTeX)
|
<p>I'm trying to get the \LaTeX symbol to work in matplotlib.pyplot's mathtext or usetex (matplotlib==3.5.2).</p>
<p>Other symbols display correctly with <code>text_usetex</code> both <code>True</code> and <code>False</code>:</p>
<pre><code>import matplotlib.pyplot as plt
plt.rcParams['text.usetex'] = False
plt.plot([],[])
plt.title(r"$\alpha$")
plt.show()
</code></pre>
<p>however <code>\LaTeX</code> doesn't work with either:</p>
<pre><code>plt.title(r"$\LaTeX$")
</code></pre>
<p>instead throwing, with <code>plt.rcParams['text.usetex']=True</code>:</p>
<pre><code>Unknown symbol: \LaTeX, found '\' (at char 0), (line:1, col:1)
</code></pre>
<p>and with <code>plt.rcParams['text.usetex']=False</code>:</p>
<pre><code>RuntimeError: latex was not able to process the following string:
b'$\\\\LaTeX$'
</code></pre>
|
<python><matplotlib><latex>
|
2023-03-14 11:30:22
| 1
| 399
|
John Karkas
|
75,732,523
| 1,468,797
|
Optimizing Django db queries for Viewset
|
<p>I recently was tasked to optimize db queries and performance of some of our django rest apis written in drf and was able to successfully use <code>prefetch_related()</code> to implement them.</p>
<p>But there is one usecase I have been unable to resolve and looking for support on the same.</p>
<p>Here goes the structure:</p>
<p><strong>Models.py</strong></p>
<pre><code>class Section(models.Model):
section_tags = models.ManyToManyField(AssetTag)
section_name = models.CharField(max_length=200)
section_createdate = models.DateTimeField(auto_now=True)
class Collection(models.Model):
section = models.ForeignKey(Section, on_delete=models.CASCADE, related_name="collection_set")
system_tags = models.ManyToManyField(AssetTag, blank=False, related_name='system_tags_poster_collections')
card = models.ManyToManyField(Card)
class Card(models.Model):
tag = models.ManyToManyField(CardTag) # should we deprecate?
system_tags = models.ManyToManyField(AssetTag, blank=False, related_name='system_tags_poster_cards')
card_name = models.CharField(max_length=200)
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>class SectionViewset(viewsets.ModelViewSet):
serializer_class = serializers.SectionSerializer
http_method_names = ['get']
def get_queryset(self):
queryset = Section.objects.filter(section_status=True, section_expiredate__gte=datetime.now())
return queryset
</code></pre>
<p><strong>serializer.py</strong></p>
<pre><code>class SectionSerializer(serializers.ModelSerializer):
collection_set = CollectionSerializer(many=True, read_only=True)
class FilteredCollectionSerializer(serializers.ListSerializer):
def to_representation(self, data):
data = data.filter(collection_status=True, collection_expiredate__gte=datetime.now())
return super(FilteredCollectionSerializer, self).to_representation(data)
class CollectionSerializer(serializers.ModelSerializer):
card = CardSerializer(many=True, read_only=True)
system_tags = AssetTagBadgeSerializer(many=True, read_only=True)
class Meta:
list_serializer_class = FilteredCollectionSerializer
model = Collection
fields = ('id', 'card', 'system_tags')
class CardSerializer(serializers.ModelSerializer):
tag = CardTagSerializer(many=True, read_only=True)
system_tags = AssetTagBadgeSerializer(many=True, read_only=True)
# method_name = 'get_template_id'
class Meta:
model = Card
fields = ('id', 'card_name', 'card_image', 'card_createdate', 'tag', 'system_tags')
class AssetTagBadgeSerializer(serializers.ModelSerializer):
class Meta:
model = AssetTag
fields = ('id', 'tag_name')
</code></pre>
<p>I am unable to apply prefetch_related() to get any optimization of the queries on the SectionViewSet since it is sort of a reverse relationship in the Collection Model. Is there any way to optimize calls to db since I see many calls for card__tag and card__system_tags in are application performance monitoring system.</p>
|
<python><django><django-rest-framework><django-queryset>
|
2023-03-14 11:26:45
| 1
| 2,175
|
ichthyocentaurs
|
75,732,371
| 3,063,273
|
How do I copy the binaries from the `python` NuGet package into my project's build output?
|
<p>Nothing happens when I reference <a href="https://www.nuget.org/packages/python" rel="nofollow noreferrer">the <code>python</code> NuGet package</a> in my .csproj. I would like its binaries to be copied into my build directory. What do I need to change about my .csproj for that to happen?</p>
<p>My .csproj:</p>
<pre class="lang-xml prettyprint-override"><code><Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net7.0</TargetFramework>
<Nullable>enable</Nullable>
<Platforms>x64</Platforms>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="python" Version="3.11.2" />
</ItemGroup>
</Project>
</code></pre>
<p>That package has this <code>python.props</code> file located in /build/native:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup Condition="$(Platform) == 'X64'">
<PythonHome Condition="$(PythonHome) == ''">$([System.IO.Path]::GetFullPath("$(MSBuildThisFileDirectory)\..\..\tools"))</PythonHome>
<PythonInclude>$(PythonHome)\include</PythonInclude>
<PythonLibs>$(PythonHome)\libs</PythonLibs>
<PythonTag>3.11</PythonTag>
<PythonVersion>3.11.2</PythonVersion>
<IncludePythonExe Condition="$(IncludePythonExe) == ''">true</IncludePythonExe>
<IncludeDistutils Condition="$(IncludeDistutils) == ''">false</IncludeDistutils>
<IncludeLib2To3 Condition="$(IncludeLib2To3) == ''">false</IncludeLib2To3>
<IncludeVEnv Condition="$(IncludeVEnv) == ''">false</IncludeVEnv>
<GetPythonRuntimeFilesDependsOn>_GetPythonRuntimeFilesDependsOn311_None;$(GetPythonRuntimeFilesDependsOn)</GetPythonRuntimeFilesDependsOn>
</PropertyGroup>
<ItemDefinitionGroup Condition="$(Platform) == 'X64'">
<ClCompile>
<AdditionalIncludeDirectories>$(PythonInclude);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary>
</ClCompile>
<Link>
<AdditionalLibraryDirectories>$(PythonLibs);%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
</Link>
</ItemDefinitionGroup>
<Target Name="GetPythonRuntimeFiles" Returns="@(PythonRuntime)" DependsOnTargets="$(GetPythonRuntimeFilesDependsOn)" />
<Target Name="_GetPythonRuntimeFilesDependsOn311_None" Returns="@(PythonRuntime)">
<ItemGroup>
<_PythonRuntimeExe Include="$(PythonHome)\python*.dll" />
<_PythonRuntimeExe Include="$(PythonHome)\python*.exe" Condition="$(IncludePythonExe) == 'true'" />
<_PythonRuntimeExe>
<Link>%(Filename)%(Extension)</Link>
</_PythonRuntimeExe>
<_PythonRuntimeDlls Include="$(PythonHome)\DLLs\*.pyd" />
<_PythonRuntimeDlls Include="$(PythonHome)\DLLs\*.dll" />
<_PythonRuntimeDlls>
<Link>DLLs\%(Filename)%(Extension)</Link>
</_PythonRuntimeDlls>
<_PythonRuntimeLib Include="$(PythonHome)\Lib\**\*" Exclude="$(PythonHome)\Lib\**\*.pyc;$(PythonHome)\Lib\site-packages\**\*" />
<_PythonRuntimeLib Remove="$(PythonHome)\Lib\distutils\**\*" Condition="$(IncludeDistutils) != 'true'" />
<_PythonRuntimeLib Remove="$(PythonHome)\Lib\lib2to3\**\*" Condition="$(IncludeLib2To3) != 'true'" />
<_PythonRuntimeLib Remove="$(PythonHome)\Lib\ensurepip\**\*" Condition="$(IncludeVEnv) != 'true'" />
<_PythonRuntimeLib Remove="$(PythonHome)\Lib\venv\**\*" Condition="$(IncludeVEnv) != 'true'" />
<_PythonRuntimeLib>
<Link>Lib\%(RecursiveDir)%(Filename)%(Extension)</Link>
</_PythonRuntimeLib>
<PythonRuntime Include="@(_PythonRuntimeExe);@(_PythonRuntimeDlls);@(_PythonRuntimeLib)" />
</ItemGroup>
<Message Importance="low" Text="Collected Python runtime from $(PythonHome):%0D%0A@(PythonRuntime->' %(Link)','%0D%0A')" />
</Target>
</Project>
</code></pre>
<p>Here is Python's documentation on this NuGet package:<br/>
<a href="https://docs.python.org/3/using/windows.html#the-nuget-org-packages" rel="nofollow noreferrer">https://docs.python.org/3/using/windows.html#the-nuget-org-packages</a></p>
<p>They recommend calling <code>nuget.exe</code> directly like this:</p>
<pre><code>nuget.exe install python -ExcludeVersion -OutputDirectory .
</code></pre>
<p>I would like the equivalent of that to happen when my project is built, except for the output to be my project's output directory.</p>
|
<python><c#><msbuild><nuget>
|
2023-03-14 11:10:50
| 1
| 5,844
|
Matt Thomas
|
75,732,303
| 5,724,244
|
Pyspark: Compare Column Values across different dataframes
|
<p>We are planning to do the following:
compare two dataframes, add values into the first dataframe based on the comparison, and then group by to get the combined data.</p>
<p>We are using pyspark dataframe and the following are our dataframes.</p>
<p>Dataframe1:</p>
<pre><code>| Manager | Department | isHospRelated
| -------- | -------------- | --------------
| Abhinav | Medical | t
| Abhinav | Surgery | t
| Rajesh | Payments | t
| Rajesh | HR | t
| Sonu | Onboarding | t
| Sonu | Surgery | t
| Sonu | HR | t
</code></pre>
<p>Dataframe2:</p>
<pre><code>| OrgSupported| OrgNonSupported |
| -------- | -------------- |
| Medical | Payment |
| Surgery | Onboarding |
</code></pre>
<p>We plan to compare Dataframe1 with Dataframe2 and obtain the following results:</p>
<pre><code>| Manager | Department | Org Supported | Org NotSupported
| -------- | -------------- | ------------- | ----------------
| Abhinav | Medical | Medical |
| Abhinav | Surgery | Surgery |
| Rajesh | Payments | | Payments
| Rajesh | HR | | HR
| Sonu | Onboarding | | Onboarding
| Sonu | Surgery | Surgery |
| Sonu | HR | | HR
</code></pre>
<p>Finally we would like to group them as follows:</p>
<pre><code>| Manager | Department | isHospRelated | Org Supported | Org NotSupported
| -------- | -------------- | ------------ | ------------- | ----------------
| Abhinav | Medical,Surgery | t | Medical,Surgery|
| Rajesh | Payments, HR | t | | Payments, HR
| Sonu | Onboarding,Surgery,HR| t | Surgery | Onboarding, HR
</code></pre>
<p>We are using pyspark in our code; any suggestions on how to do this kind of comparison in pyspark?</p>
|
<python><apache-spark><pyspark><pyspark-pandas><pyspark-schema>
|
2023-03-14 11:04:16
| 1
| 449
|
frp farhan
|
75,732,196
| 6,734,243
|
How to add a line number to a Sphinx warning?
|
<p>I'm writing an extension to display inline icons (<a href="https://github.com/sphinx-contrib/icon" rel="nofollow noreferrer">https://github.com/sphinx-contrib/icon</a>) for Sphinx, and I realized the warnings I raise don't display the line number in the doc, making them more difficult to debug:</p>
<pre><code>WARNING: icon "sync" is not part of fontawesome
</code></pre>
<p>What I would like to write is:</p>
<pre><code>path/to/file.rst:10: WARNING: icon "sync" is not part of fontawesome
</code></pre>
<p>As I think it's relevant, this is how I raise warnings:</p>
<pre class="lang-py prettyprint-override"><code>def visit_icon_node_html(translator: SphinxTranslator, node: icon_node) -> None:
"""Visit the html output."""
font, glyph = get_glyph(node["icon"])
translator.body.append(f'<i class="{Fontawesome.html_font[font]} fa-{glyph}">')
return
def get_glyph(text) -> Tuple[str, str]:
"""Get the glyph from text.
Args:
text: The text to transform (e.g. "fas fa-folder")
Returns:
(glyph, font): from the provided text. skip the node if one of them does not exist
"""
# split the icon name to find the name inside
m = re.match(Fontawesome.regex, text)
if not m:
logger.warning(f'invalid icon name: "{text}"')
raise nodes.SkipNode
if m.group("font") not in Fontawesome.html_font:
logger.warning(f'font "{m.group("font")}" is not part of fontawesome')
raise nodes.SkipNode
if m.group("glyph") not in Fontawesome.metadata:
logger.warning(f'icon "{m.group("glyph")}" is not part of fontawesome')
raise nodes.SkipNode
</code></pre>
<p>I assume I need to add a parameter to <code>get_glyph</code> and pass it to the logger, but which one?</p>
|
<python><python-sphinx>
|
2023-03-14 10:51:54
| 1
| 2,670
|
Pierrick Rambaud
|
75,732,052
| 1,764,089
|
How do I force the usual "default" formatting that pandas dataframes output to?
|
<p>I'm using a Jupyter-like platform that has truly terrible visual capabilities, and one of the annoying things is that when I simply view a dataframe by having it be the last thing in a cell, all the usual pandas formatting is gone if I turn it into a styler.</p>
<p>What I mean by that is:</p>
<ul>
<li>alternating rows white / grey</li>
<li>hover over row gets me nothing instead of that blue highlighting</li>
<li>numbers are aligned in weird ways</li>
</ul>
<p>I want to use styler because it allows me to format numbers easily (i.e. things like <code>'{:,.0f}'</code> or <code>'{:.1%}'</code>).</p>
<p>Is there a way to force the formatting to match the "default" in some easy way? E.g. where is the default stored?</p>
<pre><code>df # this works fine as an output
df.style # this loses all default formatting
</code></pre>
|
<python><pandas><jupyter-notebook><pandas-styles>
|
2023-03-14 10:37:29
| 1
| 3,753
|
evan54
|
75,731,994
| 4,490,454
|
How to transform this for loop into a list comprehension?
|
<p>I am trying to transform a for loop into a list comprehension, but I keep getting a syntax error. What am I doing wrong?</p>
<p>The for loop:</p>
<pre><code>for item in items:
if item in default_items.keys():
total += default_items[item]
</code></pre>
<p>The list comprehension:</p>
<pre><code>[total := total + default_items[item] if item in default_items.keys() for item in items]
</code></pre>
<p>Here some example of the code in context:</p>
<pre><code>items = []
total = 0
default_items = {"tophat": 2, "bowtie": 4, "monocle": 5}
items.append("bowtie")
items.append("jacket")
items.append("monocle")
items.append("chocolate")
for item in items:
if item in default_items.keys():
total += default_items[item]
print(total) # 9
items = []
total = 0
[total := total + default_items[item] if item in default_items.keys() for item in items] # raises a SyntaxError
print(total) # 9
</code></pre>
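<p>For context, a runnable comparison (note the <code>if</code> filter must come after the <code>for</code> clause in a comprehension; <code>sum()</code> over a generator expression is the usual idiom):</p>

```python
items = ['bowtie', 'jacket', 'monocle', 'chocolate']
default_items = {"tophat": 2, "bowtie": 4, "monocle": 5}

# Walrus version: the `if` filter goes *after* the `for` clause
total = 0
[total := total + default_items[item] for item in items if item in default_items]
print(total)  # 9

# Usual idiom: sum() over a generator expression, no mutation needed
total2 = sum(default_items[item] for item in items if item in default_items)
print(total2)  # 9
```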
|
<python><python-3.x>
|
2023-03-14 10:33:04
| 1
| 445
|
EGS
|
75,731,959
| 12,760,550
|
Confirm if column in a pandas dataframe is in a sequential order starting by one
|
<p>Imagine I have the following column on a dataframe:</p>
<pre><code>df['Col1'] = [1, 2, 3, 4, 6, 5]
</code></pre>
<p>What would be the best way to confirm whether this column is in a sequential order whose lowest value is 1? For the example above, I would expect it to return "True".</p>
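<p>A sketch of one possible check, assuming "sequential" means the values are exactly 1 through n in any order (which matches the expected "True" above):</p>

```python
import pandas as pd

df = pd.DataFrame({"Col1": [1, 2, 3, 4, 6, 5]})

# True when the sorted values are exactly 1, 2, ..., n
is_sequential = df["Col1"].sort_values().tolist() == list(range(1, len(df) + 1))
print(is_sequential)  # True
```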
|
<python><pandas><dataframe><sorting>
|
2023-03-14 10:30:33
| 2
| 619
|
Paulo Cortez
|
75,731,873
| 9,328,993
|
ONNX export failed on an operator with unrecognized namespace 'torch_scatter::scatter_max'
|
<p>I have a pytorch network like this</p>
<pre class="lang-py prettyprint-override"><code>import torch.nn as nn
import torch_scatter.scatter_max
class DGCN(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
...
torch_scatter.scatter_max(x, index, dim=0)
...
</code></pre>
<p>But when i want export my model to onnx, i face this error:</p>
<pre class="lang-bash prettyprint-override"><code> ...
File "/usr/local/lib/python3.9/dist-packages/torch/onnx/utils.py", line 1115, in _model_to_graph
graph = _optimize_graph(
File "/usr/local/lib/python3.9/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "/usr/local/lib/python3.9/dist-packages/torch/onnx/utils.py", line 1909, in _run_symbolic_function
raise errors.UnsupportedOperatorError(
torch.onnx.errors.UnsupportedOperatorError: ONNX export failed on an operator with unrecognized namespace 'torch_scatter::scatter_max'.
If you are trying to export a custom operator, make sure you registered it with the right domain and version.
</code></pre>
<p>So, how can I do this exactly?</p>
|
<python><onnx><torch-scatter>
|
2023-03-14 10:24:09
| 1
| 2,630
|
Sajjad Aemmi
|
75,731,770
| 2,754,029
|
run perforce sync though script after changing directory
|
<p>I am trying to sync Perforce code using an automation script. I have a directory <code>/opengrok/src/&lt;project_name&gt;</code> that contains a <code>.p4config</code> file.</p>
<p>But when I run the code below, it fails because it takes the script's directory as the current working directory:</p>
<pre><code>def sync_perforce_code(project_name, p4port, p4user, p4client, p4view):
project_name = project_name
project_dir = os.path.join("/opengrok/src", project_name)
os.environ["P4CONFIG"] = ".p4config"
# Run the p4 sync command
try:
output = os.chdir(project_dir)
print(os.getcwd())
subprocess.run(["p4", "info"])
print("Successfully synced Perforce code")
except subprocess.CalledProcessError as e:
print(e)
</code></pre>
<p><code>print(os.getcwd())</code> prints the correct directory, but <code>p4 info</code> reports the wrong current directory:</p>
<pre><code>User name: my_username
Client name: hostname (illegal)
Client host: hostname.example.com
Client unknown.
Current directory: /script's directory/sync_code
...
</code></pre>
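<p>One thing I also tried (a sketch, demonstrated here with a placeholder directory and a Python child process instead of <code>p4</code>): passing <code>cwd=</code> to <code>subprocess.run</code> instead of calling <code>os.chdir</code>, so the child process gets its working directory explicitly:</p>

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical stand-in for /opengrok/src/<project_name>
with tempfile.TemporaryDirectory() as project_dir:
    # cwd= sets the child's working directory without touching the parent's;
    # with p4 this would be: subprocess.run(["p4", "info"], cwd=project_dir, check=True)
    result = subprocess.run(
        [sys.executable, "-c", "import os; print(os.getcwd())"],
        cwd=project_dir,
        capture_output=True,
        text=True,
        check=True,
    )
    print(result.stdout.strip())
```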
<p>Any help is highly appreciated.</p>
|
<python><subprocess><perforce><perforce-client-spec>
|
2023-03-14 10:16:38
| 1
| 3,642
|
undefined
|
75,731,693
| 13,320,357
|
How to add a key in a mongo document on the basis of an existing key?
|
<p>I have a document as follows:</p>
<pre class="lang-json prettyprint-override"><code>{
"_id": "1234",
"created_at": 1678787680
}
</code></pre>
<p>I want to modify the document and add a new key <code>updated_at</code> which will be a datetime equivalent of the <code>created_at</code> UNIX timestamp.</p>
<pre class="lang-json prettyprint-override"><code>{
"_id": "1234",
"created_at": 1678787680,
"updated_at": "2023-03-14 15:39:18.767232"
}
</code></pre>
<p>Is there a way to perform this operation using <code>updateMany</code>?</p>
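<p>For context, MongoDB 4.2+ accepts an aggregation pipeline as the second argument of <code>update_many</code>. A sketch of the pipeline I have in mind (field names taken from the document above), plus the equivalent client-side conversion for a single timestamp:</p>

```python
from datetime import datetime, timezone

# Pipeline-style update (MongoDB 4.2+). $toDate expects milliseconds,
# so the seconds-based UNIX timestamp is multiplied by 1000.
# With pymongo this would be: collection.update_many({}, pipeline)
pipeline = [
    {"$set": {"updated_at": {"$toDate": {"$multiply": ["$created_at", 1000]}}}}
]

# The same conversion done client-side for one timestamp:
created_at = 1678787680
print(datetime.fromtimestamp(created_at, tz=timezone.utc).isoformat())
# 2023-03-14T09:54:40+00:00 (UTC)
```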
|
<python><mongodb><mongodb-query><aggregation-framework><pymongo>
|
2023-03-14 10:11:07
| 2
| 415
|
Anuj Panchal
|
75,731,406
| 8,176,763
|
how to use Dataset in taskflow in airflow
|
<p>I have a DAG written in the TaskFlow format, and I am experimenting with how to pass files between tasks. I read some info about <code>Dataset</code> (<a href="https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/datasets.html" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/datasets.html</a>) in Airflow, but did not understand what the advantage is over passing the file path as a string, as in the following DAG:</p>
<pre><code>from airflow.decorators import dag, task
from airflow.utils.dates import days_ago
from airflow.operators.python_operator import PythonOperator
@dag(default_args={"owner": "airflow"}, schedule_interval=None, start_date=days_ago(1))
def my_dag():
@task
def write_output():
with open("/path/to/output/file.txt", "w") as f:
f.write("This is some output.")
return "/path/to/output/file.txt"
@task
def read_output(file_path):
with open(file_path, "r") as f:
output = f.read()
print(output)
return output
write_task = write_output()
read_task = read_output(write_task.output)
# Set the dependencies between tasks
read_task.set_upstream(write_task)
dag = my_dag()
</code></pre>
|
<python><airflow>
|
2023-03-14 09:43:23
| 1
| 2,459
|
moth
|
75,731,396
| 865,695
|
OpenCV stitcher ignores additional image
|
<p>As a follow-up to the closed question ' <a href="https://stackoverflow.com/questions/75727432/how-to-update-a-part-of-a-panorama">How to update a part of a panorama</a> ' I have come up with an approach but OpenCV doesn't behave the way I want it to. To be more exact:</p>
<ul>
<li>I have a panorama of which I do not have the original images</li>
<li>I have a new additional, smaller image that fits well into the panorama (except for warping, size adjustments etc) but has some more items in it that I want to be present in the adjusted panorama</li>
</ul>
<p>My current approach is to cut the panorama into several overlapping images and use those images together with the new image to stitch a new panorama. However, the new image is never part of the new panorama. Here is the code:</p>
<pre><code>import cv2
from empatches import EMPatches
pano_img = cv2.imread('pano.jpg')
pano_height = pano_img.shape[0]
new_img = cv2.imread('new.jpg')
# Use the EMPatches lib to extract overlapping images. This works perfectly fine.
# img_patches is a list of np.array, usable in OpenCV
emp = EMPatches()
img_patches, _ = emp.extract_patches(pano_img, pano_height, overlap=0.4)
img_patches.append(new_img)
stitcher = cv2.Stitcher_create()
status, output = stitcher.stitch(img_patches)
# Returns status 0 and output is the original panorama perfectly stitched.
# The new image was ignored.
</code></pre>
<p>I do not know where in the original panorama the new image would go, so I can't set up an order in the <code>img_patches</code> list if that's the issue. How can I tweak OpenCV to create the panorama containing the new image?</p>
<p>OpenCV is version 4.7.0.72, installed with <code>pip install opencv-python</code>.</p>
<p>Panorama:
<a href="https://ii.stack.imgur.com/JufS9.jpg" rel="nofollow noreferrer"><img src="https://ii.stack.imgur.com/JufS9.jpg" alt="The panorama" /></a></p>
<p>New image:
<a href="https://ii.stack.imgur.com/BwEa3.jpg" rel="nofollow noreferrer"><img src="https://ii.stack.imgur.com/BwEa3.jpg" alt="The new image to be integrated into the panorama" /></a></p>
|
<python><opencv><image-manipulation><panoramas><opencv-stitching>
|
2023-03-14 09:42:32
| 0
| 5,290
|
8192K
|