| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,527,881
| 793,154
|
How to make superclass in Python work as a factory?
|
<p>I have similar classes <code>Class[ABC]</code> that may have different <code>__init__</code> parameters.<br />
They all inherit from a <code>SuperClass</code>, which I would also like to work as a factory:</p>
<pre class="lang-py prettyprint-override"><code>class SuperClass():
    valid_subtypes = {}
    # 'a': ClassA, 'b': ClassB, 'c': ClassC
    # needs to start empty, as the subclasses haven't been defined yet.

    def __new__(cls, subtype=None, *args, **kwargs):  # The factory.
        if cls is not SuperClass:
            args = (subtype, *args)
            return object.__new__(cls)
        if subtype in cls.valid_subtypes:
            subclass = cls.valid_subtypes[subtype]
            return object.__new__(subclass)

class ClassA(SuperClass):
    pass

class ClassB(SuperClass):
    def __init__(self, pb):
        self.pb = pb

class ClassC(SuperClass):
    def __init__(self, pc1, pc2):
        self.pc1 = pc1
        self.pc2 = pc2

the_classes = {'a': ClassA, 'b': ClassB, 'c': ClassC}
SuperClass.valid_subtypes.update(the_classes)
</code></pre>
<p>The previous code allows one to create subclasses that inherit from <code>SuperClass</code> and instantiate them directly:</p>
<pre class="lang-py prettyprint-override"><code>a1_obj = ClassA()
b1_obj = ClassB('hello b')
c1_obj = ClassC('hello c', 1)
</code></pre>
<p>But it fails when using the <code>__new__</code> factory:</p>
<pre class="lang-py prettyprint-override"><code>a2_obj = SuperClass('a')
b2_obj = SuperClass('b', pb ='hello b')
c2_obj = SuperClass('c', pc1='hello c', pc2=1)
</code></pre>
<p>The <code>__new__</code> method seems to return the right object.
I understand that <code>__init__()</code> is then called with the initializing parameters, which don't match the class of the returned instance:</p>
<pre class="lang-py prettyprint-override"><code>a2_obj.__init__('a') # corresponding to ClassA.__init__(a2_obj, 'a')
b2_obj.__init__('b', 'hello b')
c2_obj.__init__('c', 'hello c', 1)
</code></pre>
<p>I have tried modifying <code>SuperClass.__init__(self, subtype, *args, **kwargs)</code> to account for the <code>subtype</code>, that is:</p>
<pre class="lang-py prettyprint-override"><code>def __init__(self, subtype, *args, **kwargs):
    if subtype in self.valid_subtypes:
        self.valid_subtypes[subtype].__init__(self, *args, **kwargs)
</code></pre>
<p>This doesn't seem to get called, as the objects created by <code>__new__</code> are of class <code>Class[ABC]</code>.</p>
<ul>
<li>How do the <code>__init__</code> and <code>__new__</code> work with this strange inheritance?</li>
<li>Could these two initialization calls be made to work at the same time?</li>
<li>... without modifying the proper subclasses, but only the factory <code>SuperClass</code>?</li>
<li>Or would I necessarily need an extra method, e.g. a <code>@classmethod</code> such as <code>SuperClass.create()</code>, or a factory class separate from the superclass they inherit from?</li>
</ul>
<p>Thank you for your help.
Regards from Mexico.</p>
<p>PS. One solution that seems okay, albeit not super ideal, is to name all the parameters upon creation:</p>
<pre class="lang-py prettyprint-override"><code>a2_obj = SuperClass(subtype='a')
b2_obj = SuperClass(subtype='b', pb ='hello b')
c2_obj = SuperClass(subtype='c', pc1='hello c', pc2='1')
</code></pre>
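<p>For comparison, a sketch of the <code>@classmethod</code> route floated above (my own code, not part of the original question): registration via <code>__init_subclass__</code> plus a <code>create()</code> classmethod sidesteps the <code>__new__</code>/<code>__init__</code> argument mismatch, because each subclass's own <code>__init__</code> receives only its own parameters.</p>

```python
# Sketch of an alternative, not the original code: a classmethod factory
# plus __init_subclass__ registration, so no __new__ override is needed.
class SuperClass:
    valid_subtypes = {}

    def __init_subclass__(cls, key=None, **kwargs):
        super().__init_subclass__(**kwargs)
        if key is not None:
            SuperClass.valid_subtypes[key] = cls  # register at class-creation time

    @classmethod
    def create(cls, subtype, *args, **kwargs):
        # The subclass's own __init__ is called with only its own parameters.
        return cls.valid_subtypes[subtype](*args, **kwargs)

class ClassA(SuperClass, key='a'):
    pass

class ClassB(SuperClass, key='b'):
    def __init__(self, pb):
        self.pb = pb

b_obj = SuperClass.create('b', pb='hello b')
```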
|
<python><class><inheritance><initialization>
|
2023-06-22 00:13:35
| 1
| 2,349
|
Diego-MX
|
76,527,748
| 10,335
|
Using pyodata, how do I list all available properties of a queried Entity?
|
<p>I want to dynamically list all available properties from an odata Entity using <a href="https://pyodata.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">PyOData package</a>.</p>
<p>Here is what I'd like to do:</p>
<pre class="lang-py prettyprint-override"><code>service = pyodata.Client(SERVICE_URL, session)
positions = service.entity_sets.TRLPositionSet.get_entities().select('Prop1,Prop2').execute()

for p in positions:
    # how to implement get_properties below?
    properties = get_properties(p)
    for prop in properties:
        print(prop.name, getattr(p, prop.name))
</code></pre>
<p>How would I implement the <code>get_properties(p)</code> function to make it return a list <code>['Prop1', 'Prop2']</code>?</p>
|
<python><odata>
|
2023-06-21 23:31:21
| 1
| 40,291
|
neves
|
76,527,641
| 3,255,936
|
What happens when you put multiple for clauses in one generator expression in Python?
|
<p>I had a pattern of numbers that repeats mod 72, so on a hunch I got the idea to use the following Python syntax:</p>
<pre><code>for i in (x+t for x in range(k-k%72,80000000001,72) for t in (1,3,7,9,15,19,25,27,33,39,43,51,55,57,63)):
</code></pre>
<p>I think the first <code>for</code> clause in the generator expression executes for one cycle, then the entire second one executes, then the next one in the first, then the entire second, etc., because the numbers come out in the correct order. It seems to produce the result I was looking for, but is this documented? I tried searching on Google with no conclusive results. Is it a feature, or am I just exploiting a quirk in Python? I know that <code>if</code> clauses are allowed, but I didn't know that multiple <code>for</code> clauses were too.</p>
<p>[Edit] Upon reading the link provided, I see I'm not the first to ask such a question. I also forgot that a generator expression and a list comprehension are essentially the same syntax, and so if I had tried "list comprehension" instead, maybe I'd have found it too.</p>
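<p>For reference, the nesting order can be verified directly: the clauses behave exactly like nested <code>for</code> loops, with the leftmost clause outermost.</p>

```python
# Multiple for clauses in a generator expression nest left to right,
# exactly like nested for loops (smaller ranges than the question, same shape).
gen = (x + t for x in range(0, 144, 72) for t in (1, 3, 7))

nested = []
for x in range(0, 144, 72):   # outer clause varies slowest
    for t in (1, 3, 7):       # inner clause cycles fully for each x
        nested.append(x + t)

assert list(gen) == nested == [1, 3, 7, 73, 75, 79]
```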
|
<python><python-3.x><for-loop><syntax><generator>
|
2023-06-21 22:59:17
| 0
| 685
|
Brian J. Fink
|
76,527,638
| 4,431,535
|
Is there a conda-develop directive for environment.yml?
|
<p>I'm trying to configure <code>environment.yml</code> for setting up project environments to be as automated as possible. Right now, it seems like the best way to deploy a development environment is by using a yaml file and then calling <code>conda develop</code>:</p>
<pre><code>conda env create --file environment.yml
conda develop .
</code></pre>
<p>Is there a proper way to embed the develop step within the initial yaml? (E.g., I can add a <code>- pip:</code> block with <code>- --editable .</code> under <code>dependencies:</code> to install in development mode using <code>pip</code>.)</p>
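<p>For reference, this is roughly what the <code>pip</code>-editable workaround mentioned above looks like in the yaml (a sketch; the project name and channel are illustrative):</p>

```yaml
# environment.yml sketch - the pip block installs the project in editable mode.
name: myproject
channels:
  - conda-forge
dependencies:
  - python=3.11
  - pip
  - pip:
      - --editable .
```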
|
<python><conda>
|
2023-06-21 22:58:07
| 0
| 514
|
pml
|
76,527,591
| 7,120,031
|
Python - Highlight differences between two strings in Jupyter notebook
|
<p>I have a list of two strings, and I want to <em><strong>highlight</strong></em> and print differences between two strings (specifically in Jupyter notebook). By differences, I specifically mean the insertions, deletions and replacements needed to change one of the strings to the other.</p>
<p>I found <a href="https://stackoverflow.com/q/17904097/7120031">this question</a> which is similar but doesn't mention a way to present the changes.</p>
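<p>A minimal sketch of one approach, using only the standard library's <code>difflib</code> to wrap the insertions, deletions and replacements in styled HTML, which Jupyter can render via <code>IPython.display.HTML</code> (function name and styling are my own):</p>

```python
import difflib
from html import escape

def highlight_diff(a: str, b: str) -> str:
    # Build an HTML string marking how to turn `a` into `b`:
    # deletions in red <del>, insertions in green <ins>.
    out = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, b).get_opcodes():
        if op == "equal":
            out.append(escape(a[i1:i2]))
        if op in ("delete", "replace"):
            out.append(f'<del style="background:#fdd">{escape(a[i1:i2])}</del>')
        if op in ("insert", "replace"):
            out.append(f'<ins style="background:#dfd">{escape(b[j1:j2])}</ins>')
    return "".join(out)

html = highlight_diff("kitten", "sitting")
# In a notebook: from IPython.display import HTML, display; display(HTML(html))
```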
|
<python><string><jupyter-notebook><diff><highlight>
|
2023-06-21 22:45:35
| 1
| 808
|
Burhan
|
76,527,475
| 7,348,110
|
Python can't find files that are shown in os.walk()
|
<p>My problem in short is when I run <code>os.walk()</code> I get an accurate list of files, but when I try getting information about those files like their last modified date, file size or even just try <code>open()</code> with them, I get an error that the file cannot be found for only some files. Roughly 0.2% for reasons that are unclear.</p>
<h4>Background</h4>
<p>At work we have a server running Windows Server 2012 R2 (I know, I know..). We are wanting to automate moving targeted shared folders to specific shared drives in Google Drive.</p>
<p>The first thing I wanted to do was get a list of files and their last modified dates and file sizes to be used later. The code I wrote worked fine on my laptop which was running Windows 11, but when I tried pointing it at a few different share folders on the server it ran into the same issue repeatedly.</p>
<h4>Troubleshooting</h4>
<p>I don't think it's a code issue and have revised my code several times to be simpler while ending up with the same end result - where it works locally but fails to fully go through the share folders.</p>
<p>My first thought was maybe it was due to long path names (the 255 character limit on older systems) but it was successfully finding files whose paths were > 300 characters.</p>
<p>My next thought was maybe there's a clear pattern to what kind of files can't be found, but in a given folder it'd find most of the PDFs successfully but then fail to locate one or a few of the others. This is just an observed example, it is not specific to PDFs.</p>
<p>I've dedicated probably 6-8 hours total into trying to troubleshoot and investigate this but I'm pretty stumped at this point.</p>
<h4>Code</h4>
<p>do_test.py - uses the hurry.filesize package for rough file sizes</p>
<pre class="lang-python prettyprint-override"><code>import os
import datetime
from hurry.filesize import size
from pprint import pprint

# Test directory
src = "//[DC]/PATH/TO/FOLDER"

def simple_file_check(src_dir):
    total_bytes = 0
    total_files = 0
    total_folders = 0
    total_not_found = 0
    files_not_found = []
    for (root, dirs, files) in os.walk(src_dir):
        # just count files and folders for now
        total_files += len(files)
        total_folders += len(dirs)
        # Get full-path file names
        fnames = [os.path.join(root, f).replace("\\", "/") for f in files]
        # Get their sizes and sum it up
        fsizes = []
        for f in fnames:
            try:
                fsizes.append(os.stat(f).st_size)
            except Exception as e:
                files_not_found.append(f)
        total_bytes += sum(fsizes)
    total_size = size(total_bytes)
    total_not_found += len(files_not_found)
    pct_missing = total_not_found / (total_not_found + total_files) * 100
    data = {
        "ttl-size": total_size,
        "ttl-files": total_files,
        "ttl-folders": total_folders,
        "ttl-not-found": total_not_found,
        "pct-missing": "{}%".format(pct_missing)
    }
    pprint(data)

def time_it_pls(func, *arg):
    begin_dt = datetime.datetime.now()
    begin = str(begin_dt)[:19]
    print("beginning execution at: {}".format(begin))
    func(*arg)
    end_dt = datetime.datetime.now()
    end = str(end_dt)[:19]
    print("ending execution at: {}".format(end))
    print("time taken: {}".format(end_dt - begin_dt))

time_it_pls(simple_file_check, src)
</code></pre>
<p>result</p>
<pre class="lang-python prettyprint-override"><code>beginning execution at: 2023-06-21 14:50:06
{'pct-missing': '0.19806269922322284%',
'ttl-files': 193878,
'ttl-folders': 18150,
'ttl-not-found': 384,
'ttl-size': '210G'}
ending execution at: 2023-06-21 14:51:11
time taken: 0:01:05.302772
</code></pre>
<p>specific error message without the Exception block</p>
<pre><code>Traceback (most recent call last):
File "C:\it_scripts\do_test.py", line 53, in <module>
time_it_pls(simple_file_check, src)
File "C:\it_scripts\do_test.py", line 47, in time_it_pls
func(*arg)
File "C:\it_scripts\do_test.py", line 25, in simple_file_check
fsizes.append(os.stat(f).st_size)
^^^^^^^^^^
FileNotFoundError: [WinError 3] The system cannot find the path specified: '//DC/PATH/TO/FILE'
</code></pre>
<p>--edit--</p>
<p>I get a similar error when trying to use <code>open()</code> against just an individual file like so in the interpreter.</p>
<pre><code>>>> f = "//DC/PATH/TO/FILE" # actual path length is 267 characters long and copied from the exception in the previous example.
>>> d = open(f)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
FileNotFoundError: [Errno 2] No such file or directory: '//DC/PATH/TO/FILE'
</code></pre>
<p>--edit 2--</p>
<p>We are getting closer! Just trying to list the folder in PowerShell, I can see that the file exists but if I try running ls against the individual file I get an error. So it's <em>not</em> python specific and is hinting at something weird on the Windows side.</p>
<p>Here's a much less redacted version of the output and error on the PS-side. Please understand that this does need to be redacted to an extent due to the sensitive nature of these files.</p>
<pre class="lang-powershell prettyprint-override"><code>PS C:\Users\sani> ls "\\DC\#CONSOLIDATION of Checklists, Declarations, Examples, and Documents for [REDACTED], U, and I751 Applications\Application- U Visa\Closing Letters\
Rejections\"
Directory: \\DC\#CONSOLIDATION of Checklists, Declarations, Examples, and Documents for [REDACTED], U, and I751 Applications\Application- U Visa\Closing
Letters\Rejections
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 5/31/2019 3:06 PM 13025 Samole closing Letter - No DV Simple assault.docx
-a---- 11/21/2018 3:10 PM 16232 Sample Closing Letter-Not a qualifying crime (Sp).dotx
-a---- 7/26/2018 11:32 AM 13581 Sample Closing Letter-RE PC does not qualify a indirect victim.dotx
-a---- 11/21/2018 3:14 PM 12908 Sample Closing Letter-RE U Cert Request Denied.dotx
-a---- 7/9/2018 7:25 PM 13500 Sample Closing Letter-Unqualifying crime.dotx
-a---- 7/26/2018 6:19 PM 12769 Sample Closing Ltr w Copy of File (Sp), Over Income.dotx
-a---- 7/26/2018 1:24 PM 16432 Sample Rejection Letter, unqualifying crime.dotx
PS C:\Users\sani> ls "\\DC\#CONSOLIDATION of Checklists, Declarations, Examples, and Documents for [REDACTED], U, and I751 Applications\Application- U Visa\Closing Letters\
Rejections\Sample Closing Letter-RE PC does not qualify a indirect victim.dotx"
ls : Cannot find path '\\DC\#CONSOLIDATION of Checklists, Declarations, Examples, and Documents for [REDACTED], U, and I751 Applications\Application- U Visa\Closing
Letters\Rejections\Sample Closing Letter-RE PC does not qualify a indirect victim.dotx' because it does not exist.
At line:1 char:1
+ ls "\\DC\#CONSOLIDATION ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (\\DC\...ect victim.dotx:String) [Get-ChildItem], ItemNotFoundException
+ FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.GetChildItemCommand
</code></pre>
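<p>One hypothesis worth ruling out, given the PowerShell result above: directory enumeration can list names that per-file calls such as <code>os.stat()</code> and <code>open()</code> then fail to resolve once the full path exceeds the legacy 260-character limit; Windows accepts longer paths only in extended-length form (<code>\\?\</code> locally, <code>\\?\UNC\</code> for shares). A sketch of a helper (my own, untested against this server):</p>

```python
def to_extended(path: str) -> str:
    r"""Convert a path to Windows extended-length form to bypass MAX_PATH.

    //server/share/file  ->  \\?\UNC\server\share\file
    C:/dir/file          ->  \\?\C:\dir\file
    """
    path = path.replace("/", "\\")
    if path.startswith("\\\\?\\"):
        return path                      # already in extended-length form
    if path.startswith("\\\\"):          # UNC share path
        return "\\\\?\\UNC\\" + path[2:]
    return "\\\\?\\" + path

# usage sketch inside the walk loop:
#     fsizes.append(os.stat(to_extended(f)).st_size)
```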
|
<python><windows-server-2012-r2>
|
2023-06-21 22:13:50
| 0
| 601
|
sanigirl
|
76,527,386
| 4,439,019
|
OAuth - Tuple object is not callable
|
<p>The below works fine and can be used across tests</p>
<pre><code>import json
import requests

f = open('creds.json')
creds = json.load(f)

def get_access_token(url, client_id, client_secret):
    response = requests.post(
        url,
        data={"grant_type": "client_credentials", "scope": "api://xyz.default"},
        auth=(client_id, client_secret),
    )
    return response.json()["access_token"]

new_token = get_access_token(creds["auth_url"], client_id=creds["client_id"], client_secret=creds["client_secret"])
headers = {"loggedInUsername": "Test, User", "Content-Type": "application/json", 'Authorization': f"Bearer {new_token}"}
</code></pre>
<p>However, we just changed authentication method parameters to use <code>username</code> and <code>password</code> instead of <code>client_secret</code>. The parameters provided work fine in Postman, but the following is throwing <code>'Tuple object is not callable'</code> across the <code>pytest</code> test files.</p>
<p>Is it something about having 3 <code>auth</code> parameters instead of 2?</p>
<pre><code>def get_access_token(url, client_id, username, password):
    response = requests.post(
        url,
        data={"grant_type": "password", "scope": "api://xyz.default"},
        auth=(client_id, username, password),
    )
    return response.json()["access_token"]

new_token = get_access_token(creds["auth_url"], client_id=creds["client_id"], username=creds["username"], password=creds["password"])
headers = {"loggedInUsername": "Test, User", "Content-Type": "application/json", 'Authorization': f"Bearer {new_token}"}
</code></pre>
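<p>For what it's worth, <code>requests</code> treats only a 2-tuple <code>auth</code> as HTTP Basic credentials; any other value it attempts to call as a custom auth object, which is where the <code>'Tuple object is not callable'</code> message comes from. In the password grant, user credentials normally travel in the form body instead. A sketch (field placement is an assumption; whether a client secret is required depends on the server):</p>

```python
def build_token_request(url, client_id, username, password):
    # User credentials go in the form body for grant_type=password;
    # `auth` stays a 2-tuple, which requests treats as HTTP Basic.
    return {
        "url": url,
        "data": {
            "grant_type": "password",
            "scope": "api://xyz.default",
            "username": username,
            "password": password,
            "client_id": client_id,  # some servers want it in the body instead
        },
        "auth": (client_id, ""),     # (client_id, client_secret-or-empty)
    }

kwargs = build_token_request("https://example.test/token", "cid", "user", "pw")
# response = requests.post(kwargs["url"], data=kwargs["data"], auth=kwargs["auth"])
```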
|
<python><oauth><tuples>
|
2023-06-21 21:52:45
| 0
| 3,831
|
JD2775
|
76,527,355
| 2,313,004
|
FastAPI - catch-all route put after root route mount doesn't get hit
|
<p>I'm trying to serve a React SPA <em>and</em> a few API endpoints from FastAPI. React has its own routing, so in order to split responsibilities I use 2 FastAPI apps - one comes with all the authorization bells and whistles and is mounted on the API route, and the second has one job - all requests that don't begin with <code>/api</code> should return the SPA's <code>index.html</code>. Sounds simple, right? Well, either I'm missing something really basic, or it's not that simple after all.</p>
<p>So this is the API app mounting (no questions here, works fine):</p>
<pre class="lang-py prettyprint-override"><code>api_app = secure_authenticated_app() # returns a tweaked FastAPI app
# main app
app = FastAPI()
app.mount("/api", api_app, name="api")
</code></pre>
<p>But then, the party starts.</p>
<pre class="lang-py prettyprint-override"><code>client_folder_path = pathlib.Path(__file__).parent.absolute().joinpath("build")
app.mount("", StaticFiles(directory=client_folder_path, html=True), name="client")

@app.get("/{full_path:path}")
async def serve_client():
    with open(client_folder_path.joinpath("index.html")) as fh:
        data = fh.read()
    return Response(content=data, media_type="text/html")
</code></pre>
<p>My reasoning here is: mount the client build folder at the root path so the SPA can easily find all its assets, serve <code>index.html</code> from the root path, and for any route the server doesn't know, just serve <code>index.html</code> too.</p>
<p>In reality it serves the html from the root path, and other frontend assets are served too, but the <code>/{full_path:path}</code> route, which I keep seeing as the Starlette solution for a 'wildcard route', doesn't get hit - routes like <code>/whatever</code> return 404.</p>
<p>I tried moving them around - with no luck; each time, one of the features (serving html from the root, serving it from any other path, or both) won't work. For someone coming from Node, this behavior is really surprising; is there a simple, beautiful solution without writing full-blown helper classes?</p>
|
<python><fastapi><url-routing><starlette>
|
2023-06-21 21:44:25
| 1
| 2,603
|
grreeenn
|
76,527,192
| 6,447,399
|
Using Python's Selenium to scrape a website gives errors
|
<p>I have a slightly strange setup. I have two scripts in Python and R.</p>
<p>I was originally using R's <code>RSelenium</code>, which worked but then stopped working, so my original code was all in R. Now I have switched to Python and use <code>undetected_chromedriver</code> and a few other options that Selenium has available in Python. So I have 2 scripts:</p>
<ul>
<li>The R script uses <code>rvest</code> to process the scraped data and sends the info via the command line to the Python script, which runs the Selenium part.</li>
<li>The Python Selenium script does the web scraping. I want to run the scripts via the terminal.</li>
</ul>
<p>Problem:</p>
<p>When I set <code>headless = True</code> in Python I get all the errors. Here is my code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
import random
from bs4 import BeautifulSoup
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import ElementClickInterceptedException
import undetected_chromedriver as uc
import sys
import random
import pandas as pd
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service as ChromeService
from seleniumwire import webdriver
import re
import uuid
import datetime
from selenium.common.exceptions import TimeoutException
RUN_HEADLESS = True
PATH = '/home/matt/bin/chromedriver/chromedriver'
options = uc.ChromeOptions()
options.add_argument('--user-data-directory=/home/matt/.config/google-chrome-beta/Default')
if RUN_HEADLESS:
    options.add_argument("--headless=new")  # ('--headless')
    options.add_argument("--no-sandbox")
    options.add_argument("--disable-dev-shm-usage")
    # options.add_argument("--disable-gpu")  # If running on Linux/Unix system
    # options.add_argument("--disable-extensions")
    # options.add_argument("--start-maximized")
    # options.add_argument('--disable-javascript')
else:
    pass
</code></pre>
<p>How can I make the headless version work? When I set <code>headless = False</code> everything works, but of course the browser is open while scraping.</p>
<p>Error:</p>
<pre><code>[7238:7238:0621/230217.106355:ERROR:process_singleton_posix.cc(334)] Failed to create /home/matt/.config/google-chrome/SingletonLock: El fichero ya existe (17)
MESA-INTEL: warning: Performance support disabled, consider sysctl dev.i915.perf_stream_paranoid=0
[0621/230217.125157:ERROR:nacl_helper_linux.cc(355)] NaCl helper process running without a sandbox!
Most likely you need to configure your SUID sandbox correctly
Traceback (most recent call last):
File "/run/media/matt/A34E-C6B8/inmobiliarioProject2/rscripts/production/collectingTheData/v2/collectIndividualPages_Comprar.py", line 151, in <module>
driver = uc.Chrome(
^^^^^^^^^^
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/undetected_chromedriver/__init__.py", line 466, in __init__
super(Chrome, self).__init__(
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/selenium/webdriver/chrome/webdriver.py", line 84, in __init__
super().__init__(
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/selenium/webdriver/chromium/webdriver.py", line 104, in __init__
super().__init__(
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 286, in __init__
self.start_session(capabilities, browser_profile)
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/undetected_chromedriver/__init__.py", line 729, in start_session
super(selenium.webdriver.chrome.webdriver.WebDriver, self).start_session(
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 378, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 440, in execute
self.error_handler.check_response(response)
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/selenium/webdriver/remote/errorhandler.py", line 245, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: cannot connect to chrome at 127.0.0.1:35111
from chrome not reachable
Stacktrace:
#0 0x561e4749c4e3 <unknown>
#1 0x561e471cbb00 <unknown>
#2 0x561e471b9436 <unknown>
#3 0x561e471f89be <unknown>
#4 0x561e471f0884 <unknown>
#5 0x561e4722fccc <unknown>
#6 0x561e4722f47f <unknown>
#7 0x561e47226de3 <unknown>
#8 0x561e471fc2dd <unknown>
#9 0x561e471fd34e <unknown>
#10 0x561e4745c3e4 <unknown>
#11 0x561e474603d7 <unknown>
#12 0x561e4746ab20 <unknown>
#13 0x561e47461023 <unknown>
#14 0x561e4742f1aa <unknown>
#15 0x561e474856b8 <unknown>
#16 0x561e47485847 <unknown>
#17 0x561e47495243 <unknown>
#18 0x7fa7187a16ea start_thread
</code></pre>
<p>EDIT:</p>
<p>Using <code>--headless</code> instead of <code>--headless=new</code>, I get the following:</p>
<pre><code>[1] "NA link. Skipping...."
[6782:6782:0622/000945.392409:ERROR:process_singleton_posix.cc(334)] Failed to create /home/matt/.config/google-chrome/SingletonLock: El fichero ya existe (17)
[0622/000945.407709:ERROR:nacl_helper_linux.cc(355)] NaCl helper process running without a sandbox!
Most likely you need to configure your SUID sandbox correctly
MESA-INTEL: warning: Performance support disabled, consider sysctl dev.i915.perf_stream_paranoid=0
Traceback (most recent call last):
File "/run/media/matt/A34E-C6B8/inmobiliarioProject2/rscripts/production/collectingTheData/v2/collectIndividualPages_Comprar.py", line 151, in <module>
driver = uc.Chrome(
^^^^^^^^^^
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/undetected_chromedriver/__init__.py", line 466, in __init__
super(Chrome, self).__init__(
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/selenium/webdriver/chrome/webdriver.py", line 84, in __init__
super().__init__(
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/selenium/webdriver/chromium/webdriver.py", line 104, in __init__
super().__init__(
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 286, in __init__
self.start_session(capabilities, browser_profile)
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/undetected_chromedriver/__init__.py", line 729, in start_session
super(selenium.webdriver.chrome.webdriver.WebDriver, self).start_session(
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 378, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 440, in execute
self.error_handler.check_response(response)
File "/home/matt/.asdf/installs/python/3.11.3/lib/python3.11/site-packages/selenium/webdriver/remote/errorhandler.py", line 245, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: cannot connect to chrome at 127.0.0.1:33765
from chrome not reachable
Stacktrace:
#0 0x56307a71f4e3 <unknown>
#1 0x56307a44eb00 <unknown>
#2 0x56307a43c436 <unknown>
#3 0x56307a47b9be <unknown>
#4 0x56307a473884 <unknown>
#5 0x56307a4b2ccc <unknown>
#6 0x56307a4b247f <unknown>
#7 0x56307a4a9de3 <unknown>
#8 0x56307a47f2dd <unknown>
#9 0x56307a48034e <unknown>
#10 0x56307a6df3e4 <unknown>
#11 0x56307a6e33d7 <unknown>
#12 0x56307a6edb20 <unknown>
#13 0x56307a6e4023 <unknown>
#14 0x56307a6b21aa <unknown>
#15 0x56307a7086b8 <unknown>
#16 0x56307a708847 <unknown>
#17 0x56307a718243 <unknown>
#18 0x7ff1875e96ea start_thread
</code></pre>
<p>EDIT: Code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
import random
from bs4 import BeautifulSoup
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import ElementClickInterceptedException
import undetected_chromedriver as uc
import sys
import random
import pandas as pd
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service as ChromeService
from seleniumwire import webdriver
import re
import uuid
import datetime
from selenium.common.exceptions import TimeoutException
import time
from seleniumbase import Driver
RUN_HEADLESS = False
PATH = '/home/matt/bin/chromedriver/chromedriver'
options = uc.ChromeOptions()
options.add_argument('--user-data-directory=/home/matt/.config/google-chrome-beta/Default')
################ New code added to fix chromedriver and chrome version mismatch ################
#options = Options()
if RUN_HEADLESS:
    options.add_argument("--headless")
    options.add_argument("--no-sandbox")
    options.add_argument("--disable-dev-shm-usage")
    # options.add_argument("--disable-gpu")  # If running on Linux/Unix system
    # options.add_argument("--disable-extensions")
    # options.add_argument("--start-maximized")
    # options.add_argument('--disable-javascript')
else:
    pass
driver = uc.Chrome(options=options)
#seleniumwire_options = seleniumwire_options # here is where I pass random proxies to the options
link = "https://www.fotocasa.es/es/comprar/vivienda/madrid-capital/calefaccion-terraza-trastero-ascensor-no-amueblado/176698848/d?from=list"
driver.get(link)
</code></pre>
<p>This gives me:</p>
<pre><code>TypeError: WebDriver.__init__() got an unexpected keyword argument 'executable_path'
</code></pre>
|
<python><selenium-webdriver><seleniumbase>
|
2023-06-21 21:16:18
| 2
| 7,189
|
user113156
|
76,527,169
| 7,233,155
|
Rust IndexSet performance vs Python Tuple comparison?
|
<p>I wrote the following code to implement automatic differentiation in Python and it worked well (slightly simplified here):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

class Dual:
    def __init__(
        self,
        real: float = 0.0,
        vars: tuple[str, ...] = (),
        dual: np.ndarray = np.ones(0),
    ):
        self.real, self.vars, self.dual = real, vars, dual

    def __add__(self, argument):
        if self.vars == argument.vars:
            return Dual(self.real + argument.real, self.vars, self.dual + argument.dual)
        else:
            x, y = self._to_combined_vars(argument)
            return x + y

    def _to_combined_vars(self, other):
        combined_vars = tuple(sorted(set(self.vars).union(set(other.vars))))
        x = self if combined_vars == self.vars else self._to_new_vars(combined_vars)
        y = other if combined_vars == other.vars else other._to_new_vars(combined_vars)
        return x, y

    def _to_new_vars(self, new_vars):
        dual = np.zeros(len(new_vars))
        ix_ = list(map(lambda x: new_vars.index(x), self.vars))
        dual[ix_] = self.dual
        return Dual(self.real, new_vars, dual)
</code></pre>
<p>I tried to replicate this in Rust as follows (again minimised):</p>
<pre class="lang-rust prettyprint-override"><code>use indexmap::set::IndexSet;
use ndarray::Array1;
use auto_ops::impl_op_ex;

#[derive(Clone, Debug)]
pub struct Dual {
    pub real: f64,
    pub vars: IndexSet<String>,
    pub dual: Array1<f64>,
}

impl Dual {
    fn to_combined_vars(&self, other: &Dual) -> (Dual, Dual) {
        let comb_vars = IndexSet::from_iter(self.vars.union(&other.vars).map(|x| x.clone()));
        (self.to_new_vars(&comb_vars), other.to_new_vars(&comb_vars))
    }

    fn to_new_vars(&self, new_vars: &IndexSet<String>) -> Dual {
        let mut dual = Array1::zeros(new_vars.len());
        for (i, index) in new_vars.iter().map(|x| self.vars.get_index_of(x)).enumerate() {
            match index {
                Some(value) => { dual[[i]] = self.dual[[value]] }
                None => {}
            }
        }
        Dual { vars: new_vars.clone(), real: self.real, dual }
    }
}

impl_op_ex!(+ |a: &Dual, b: &Dual| -> Dual {
    if a.vars.len() == b.vars.len() && a.vars.iter().zip(b.vars.iter()).all(|(a, b)| a == b) {
        Dual { real: a.real + b.real, dual: a.dual.clone() + b.dual.clone(), vars: a.vars.clone() }
    } else {
        let (x, y) = a.to_combined_vars(b);
        Dual { real: x.real + y.real, dual: x.dual + y.dual, vars: x.vars }
    }
});
</code></pre>
<p>The benchmarks, for the operation <code>dual1 + dual2</code> where each <code>dual</code> was a constructed datatype, were as follows:</p>
<p>In Rust:</p>
<ul>
<li>Float addition took 265 ps.</li>
<li>Dual addition (with 100 <strong>different</strong> variables each) 97 us.</li>
<li>Dual addition (with 1000 <strong>different</strong> variables each) 953 us.</li>
<li>Dual addition (with 100 <strong>similar</strong> variables each) 13 us. <- 4x worse than Python</li>
<li>Dual addition (with 1000 <strong>similar</strong> variables each) 129 us. <- 7x worse than Python</li>
</ul>
<p>In Python:</p>
<ul>
<li>Float addition took 77 ns.</li>
<li>Dual addition (with 100 <strong>different</strong> variables each) 353 us.</li>
<li>Dual addition (with 1000 <strong>different</strong> variables each) 26.4 ms.</li>
<li>Dual addition (with 100 <strong>similar</strong> variables each) 3.2 us.</li>
<li>Dual addition (with 1000 <strong>similar</strong> variables each) 18.6 us.</li>
</ul>
<p>I am puzzled why the Python implementation with similar variables is so much faster than the Rust implementation. In my program I typically use fewer than 100 variables, and after a few iterations the variables are all aligned, so these cases are the determining factors for speed.</p>
<p>I assume it is all in the lines</p>
<pre class="lang-py prettyprint-override"><code>if self.vars == argument.vars:
</code></pre>
<p>vs</p>
<pre class="lang-rust prettyprint-override"><code>if a.vars.len() == b.vars.len() && a.vars.iter().zip(b.vars.iter()).all(|(a,b)| a==b) {
</code></pre>
<p>But I don't know why this is materially slower, or what I can do to improve it in Rust?</p>
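<p>One likely culprit: Python's <code>set</code> equality is hash-based and can short-circuit on object identity and cached hashes, while the <code>zip(...).all(...)</code> check above re-compares every <code>String</code> byte-by-byte on each addition. Below is a hedged sketch of one way to get an O(1) fast path in Rust: share the variable list behind an <code>Arc</code> once Duals are aligned and compare pointers first. The <code>Arc&lt;Vec&lt;String&gt;&gt;</code> layout here is an illustration using std types only, not the original <code>IndexSet</code>:</p>

```rust
use std::sync::Arc;

#[derive(Clone, Debug)]
struct Dual {
    real: f64,
    // Aligned Duals share one allocation, so comparison can be a pointer check.
    vars: Arc<Vec<String>>,
    dual: Vec<f64>,
}

fn same_vars(a: &Dual, b: &Dual) -> bool {
    // O(1) fast path: after alignment both Duals hold the same Arc,
    // so a pointer comparison replaces the per-string deep comparison.
    // Fall back to deep equality for Duals built independently.
    Arc::ptr_eq(&a.vars, &b.vars) || a.vars == b.vars
}

fn main() {
    let vars = Arc::new(vec!["x".to_string(), "y".to_string()]);
    let a = Dual { real: 1.0, vars: vars.clone(), dual: vec![1.0, 0.0] };
    let b = Dual { real: 2.0, vars, dual: vec![0.0, 1.0] };
    assert!(same_vars(&a, &b));
    println!("fast path taken: {}", Arc::ptr_eq(&a.vars, &b.vars));
}
```

<p>After <code>to_combined_vars</code> returns, both results could hold clones of one <code>Arc</code>, so subsequent additions skip the string comparisons entirely.</p>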
|
<python><performance><rust><set>
|
2023-06-21 21:12:57
| 1
| 4,801
|
Attack68
|
76,527,148
| 5,302,323
|
Use python to identify table in HTML and then take a screenshot of it
|
<p>I am trying to take screenshots of tables in HTML pages so that I can then publish them on Jupyter using Markdown.</p>
<p>The tables come from different websites and all have different formats, so it is easier for users to view the snapshot instead of looking at the print of the df.</p>
<p>I can find the tables using BeautifulSoup, and I can display the image once it's saved on the PC, but I don't know how to tell the PC to take a snapshot of that specific table (table[0] on each site).</p>
<p>Any help would be really appreciated!</p>
<p>This is the code I am using to identify the table:</p>
<pre><code>import requests
import pandas as pd
from bs4 import BeautifulSoup

for url in urls:
    # Send a GET request to the URL
    response = requests.get(url)
    # Extract table using pandas
    tables = pd.read_html(response.content)
    # Assuming the desired table is the first one on the page
    table = tables[0]
    # Convert table to Markdown format
    markdown_table = table.to_markdown(index=False)
    # Print the Markdown table
    print(url)
    print(markdown_table)
</code></pre>
<p>This is the code I will use to print out the table snapshot using markdown language:</p>
<pre><code># Generate the Markdown syntax
markdown_syntax = f""
# Display the Markdown syntax as formatted output
display(Markdown(markdown_syntax))
</code></pre>
<p>What I need is to find out exactly how to take a snapshot in PNG format of only the table. I can easily find the table in the code, but it doesn't have a URL that shows only the table.</p>
<p>Any help would really be appreciated!</p>
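<p>One browser-free approach, sketched here rather than prescribed: since <code>pd.read_html</code> already gives you the table as a DataFrame, you can render that DataFrame straight to a PNG with matplotlib's table artist instead of screenshotting the page. The function name <code>table_to_png</code> and the sample DataFrame are mine:</p>

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display or browser needed
import matplotlib.pyplot as plt
import pandas as pd

def table_to_png(df: pd.DataFrame, path: str) -> None:
    """Render a DataFrame as a PNG image containing only the table."""
    fig, ax = plt.subplots(figsize=(max(4, len(df.columns) * 2),
                                    max(1, len(df) * 0.4 + 1)))
    ax.axis("off")  # hide the axes; we only want the table cells
    tbl = ax.table(cellText=df.values, colLabels=df.columns, loc="center")
    tbl.auto_set_font_size(False)
    tbl.set_fontsize(10)
    fig.savefig(path, bbox_inches="tight", dpi=150)
    plt.close(fig)

# Stand-in for tables[0] from pd.read_html
df = pd.DataFrame({"city": ["Oslo", "Lima"], "pop_m": [0.7, 9.7]})
table_to_png(df, "table.png")
```

<p>The resulting <code>table.png</code> can then be embedded with your existing Markdown display code. The trade-off is that it loses the site's CSS styling; if the original look matters, browser automation (e.g. Selenium screenshotting the table element) would be needed instead.</p>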
|
<python><web-scraping>
|
2023-06-21 21:09:07
| 0
| 365
|
Cla Rosie
|
76,527,107
| 14,222,845
|
How to prevent bar plots from superimposing on each other in pandas?
|
<p>I am working with some data in pandas and I want to generate 2 separate bar plots based on the data. However, my two bar plots get superimposed on each other (they end up in the same graph). Here is my code:</p>
<pre><code>import math
import pandas as pd
pd.options.mode.chained_assignment = None  # default='warn'
import numpy as np
import matplotlib.pyplot as plt
from openpyxl import load_workbook

def gen(fileIn):
    DF = pd.read_excel(fileIn)
    # Get the relative frequency of each unique value in the column Awesome
    overall = DF['Awesome'].value_counts(normalize=True)
    print(overall.plot(kind='bar', figsize=(10, 5)))
    spec = DF['Not Awesome'].value_counts(normalize=True)
    print(spec.plot(kind='bar', color='red', figsize=(10, 5)))

gen("my file path")
</code></pre>
<p>This is the output:
<a href="https://i.sstatic.net/x9hyR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x9hyR.png" alt="enter image description here" /></a></p>
<p>As you can see, the red bars from the 'Not Awesome' column get superimposed on the 'Awesome' column's relative frequency values. I just want the two bar plots to be separate. I looked at the documentation for the plot function at <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html</a>; however, there don't seem to be any parameters I can use to turn off this superimposition, which appears to be the default.</p>
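<p>Both <code>Series.plot</code> calls draw on the current axes by default, which is why the bars land on one figure. A sketch of one way to separate them (column names reused from the snippet above, with made-up sample data) is to give each plot its own <code>ax</code>:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({"Awesome": ["yes", "yes", "no"],
                   "Not Awesome": ["a", "b", "b"]})

# One figure, two side-by-side axes; each plot targets its own axes.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
df["Awesome"].value_counts(normalize=True).plot(kind="bar", ax=ax1)
df["Not Awesome"].value_counts(normalize=True).plot(kind="bar", color="red", ax=ax2)
fig.savefig("bars.png")
```

<p>Alternatively, calling <code>plt.figure()</code> before each <code>.plot(...)</code> gives two fully independent figures instead of two panels in one.</p>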
|
<python><pandas><graph>
|
2023-06-21 21:01:35
| 1
| 330
|
Diamoniner12345
|
76,527,096
| 11,429,035
|
Change road network to bidirectional in osmnx
|
<p>I wonder if I can set all road networks to bidirectional using osmnx? I hope it is bidirectional rather than undirected because I still hope to use some functions like <code>osmnx.truncate</code>. So <code>osmnx.get_undirected(G)</code> cannot help me.</p>
<p>I saw @gboeing mentioned this can be done by changing the <a href="https://stackoverflow.com/questions/74881392/is-there-a-way-to-simplify-an-undirected-graph-with-osmnx">global settings</a>. But I am not sure how to do that. Two questions</p>
<ol>
<li><p>The global settings are stored in the <code>settings.py</code> file. How can I create a new file and force <code>osmnx</code> to use the new <code>settings.py</code> file?</p>
</li>
<li><p>If I want to make all edges bidirectional, can I simply set <code>bidirectional_network_types=['all']</code>?</p>
</li>
</ol>
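<p>For what it's worth, recent osmnx versions expose the global settings as plain module attributes, so a separate <code>settings.py</code> file shouldn't be needed — you override them at runtime before building a graph. A sketch (attribute name as I recall it from the 1.x <code>settings</code> module; worth verifying against your installed version):</p>

```python
import osmnx as ox

# Network types listed here get two directed edges per street segment,
# i.e. they are treated as bidirectional. The default is ["walk"].
ox.settings.bidirectional_network_types = ["all", "all_private", "bike",
                                           "drive", "drive_service", "walk"]

# Build the graph AFTER changing the setting. The result is still a
# MultiDiGraph, so functions like ox.truncate.* keep working.
G = ox.graph_from_place("Piedmont, California, USA", network_type="drive")
```
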
|
<python><osmnx>
|
2023-06-21 20:59:37
| 1
| 533
|
Xudong
|
76,527,086
| 10,998,081
|
Find an empty space in a binary image that can fit a shape
|
<p>I have this image
<br>
<a href="https://i.sstatic.net/fyI4I.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fyI4I.jpg" alt="enter image description here" /></a>
<br>
I need to find an empty area that can fit this shape
<br>
<a href="https://i.sstatic.net/gO3AR.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gO3AR.jpg" alt="enter image description here" /></a>
<br>
so that the end result is something like this
<br>
<a href="https://i.sstatic.net/Tue7J.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tue7J.jpg" alt="enter image description here" /></a></p>
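<p>This kind of search is a morphological erosion: eroding the free-space mask with the shape as structuring element leaves True exactly at the positions where the whole shape fits. A small sketch with <code>scipy.ndimage</code> on a toy mask (your images would first be loaded and thresholded to booleans, and the shape mask taken from the second image):</p>

```python
import numpy as np
from scipy.ndimage import binary_erosion

# Toy free-space mask: True = empty pixel; here a 4x4 empty region.
free = np.zeros((6, 6), dtype=bool)
free[1:5, 1:5] = True

# Toy shape mask (3x3 block). Any boolean footprint works.
shape = np.ones((3, 3), dtype=bool)

# True wherever the shape, centered at that pixel, fits entirely in free space.
fits = binary_erosion(free, structure=shape)
candidates = np.argwhere(fits)  # (row, col) centers where the shape fits
print(candidates)
```

<p>Any row of <code>candidates</code> is a valid placement center, so drawing the shape there reproduces the third image. For non-rectangular shapes the same call applies unchanged, since the structuring element can be an arbitrary mask.</p>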
|
<python>
|
2023-06-21 20:57:50
| 1
| 1,139
|
yazan sayed
|
76,526,939
| 5,008,504
|
Compile legacy fortran77 code as python module using f2py
|
<p>I am able to successfully compile FORTRAN77 code, but only when specifying <code>-std=legacy</code> in</p>
<pre><code>gfortran -o -std=legacy -o model.exe model.FOR
</code></pre>
<p>Without <code>-std=legacy</code>, I get some fatal errors. I wish to similarly specify a legacy compiler when attempting to build a Python module. Presently, the following call generates the same fatal errors as when omitting <code>-std=legacy</code> with gfortran.</p>
<pre><code>f2py -c -m pymodel model.FOR --fcompiler=gnu95
</code></pre>
<p>Any suggestions on how to ensure f2py uses a legacy compiler compatible with antiquated FORTRAN77 code?</p>
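<p>If it helps, f2py accepts extra Fortran compiler flags on the command line via <code>--f77flags</code> (and <code>--f90flags</code>), so the legacy flag can be forwarded to gfortran. A sketch — flag spelling as I recall it, worth double-checking with <code>f2py -c --help-fcompiler</code>:</p>

```shell
# Forward -std=legacy to the F77 compiler that f2py invokes
f2py -c -m pymodel model.FOR --fcompiler=gnu95 --f77flags="-std=legacy"
```
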
|
<python><fortran><fortran77><f2py>
|
2023-06-21 20:33:45
| 1
| 317
|
Guy
|
76,526,791
| 3,821,009
|
Why argsort instead of argmin here?
|
<p>Looking at this exercise from <a href="http://scipy-lectures.org/intro/numpy/exercises.html#array-manipulations" rel="nofollow noreferrer">http://scipy-lectures.org/intro/numpy/exercises.html#array-manipulations</a>:</p>
<blockquote>
<p>Harder one: Generate a 10 x 3 array of random numbers (in range [0,1]). For each row, pick the number closest to 0.5.</p>
<ul>
<li>Use abs and argsort to find the column j closest for each row.</li>
<li>Use fancy indexing to extract the numbers. (Hint: a[i,j] – the array i must contain the row numbers corresponding to stuff in j.)</li>
</ul>
</blockquote>
<p>I've solved it like this:</p>
<pre><code>a = numpy.random.random((10, 3))
b = numpy.abs(a - 0.5)
b = b.argmin(axis=1)
b = a[numpy.arange(len(b)), b]
</code></pre>
<p>Apart from doing:</p>
<pre><code>b = numpy.argsort(b)[:, 0]
</code></pre>
<p>is there a better way to use <code>argsort</code> here than <code>argmin</code>?</p>
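<p>A sketch of the difference on a tiny made-up array: <code>argsort</code> returns the whole per-row ordering, so its first column reproduces <code>argmin</code>, and the same call generalizes to the <em>k</em> closest values, which <code>argmin</code> cannot do:</p>

```python
import numpy as np

a = np.array([[0.1, 0.9, 0.52],
              [0.45, 0.2, 0.8]])
b = np.abs(a - 0.5)

# argmin: the single closest column per row
j_min = b.argmin(axis=1)
closest = a[np.arange(len(a)), j_min]

# argsort: the full per-row ordering; its first column equals argmin
order = b.argsort(axis=1)
assert np.array_equal(order[:, 0], j_min)

# ...and it generalizes to the k closest per row (here k=2)
two_closest = a[np.arange(len(a))[:, None], order[:, :2]]
```

<p>So for this exercise the two are interchangeable, and <code>argmin</code> is the cheaper choice (O(n) vs O(n log n) per row) when only the single closest value is needed.</p>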
|
<python><numpy>
|
2023-06-21 20:08:05
| 0
| 4,641
|
levant pied
|
76,526,639
| 3,423,825
|
Unexplained high CPU usage of Celery worker and beat when started with Nginx and Gunicorn
|
<p>I have two applications in Digital Ocean with the following commands at startup:</p>
<p><strong>app1</strong></p>
<pre><code>sh run.sh # It starts celery worker, beat, nginx, gunicorn
</code></pre>
<p><strong>app2</strong></p>
<pre><code>sh run-worker.sh # It starts celery worker, beat
</code></pre>
<p>My problem is that the Celery worker and Celery beat processes of app1 consume almost 100% of the CPU right after the image has been deployed on 1 vCPU, 4 cores. The consumption then slowly decreases, and after 3 to 5 minutes — presumably once it has reached a reasonable level — the worker finally comes online.</p>
<p>Logs of both workers with 5 minutes delay:</p>
<p><a href="https://i.sstatic.net/2kcDv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2kcDv.png" alt="enter image description here" /></a></p>
<p>High CPU usage right after image deployment:</p>
<p><a href="https://i.sstatic.net/4WW7B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4WW7B.png" alt="enter image description here" /></a></p>
<p>My question is: do I have to run the worker in a third app, or is it a configuration problem that could be fixed?</p>
<p>This is how I start all the processes one after another:</p>
<p><strong>app1
sh run.sh</strong></p>
<pre><code>#!/usr/bin/env bash
set -o errexit
set -o nounset
python3 manage.py migrate --settings project.env.prod
export DJANGO_SETTINGS_MODULE=project.env.prod
celery -A project purge -f
echo "Start celery worker 1"
celery -A project worker --loglevel=INFO -P gevent --concurrency 24 -Q project_queue_1 -n worker1@project --detach
echo "Start celery beat"
celery -A project beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler --detach
echo "Start nginx"
service nginx start
echo "Start gunicorn"
gunicorn -b 0.0.0.0:8000 --timeout 0 -w 2 project.wsgi
</code></pre>
<p><strong>app2
sh run-worker.sh</strong></p>
<pre><code>#!/usr/bin/env bash
set -o errexit
set -o nounset
export DJANGO_SETTINGS_MODULE=project.env.prod
echo 'Start celery worker 2'
celery -A project worker --loglevel=INFO -P gevent --concurrency 24 -Q project_queue_2 -n worker2@project
</code></pre>
<p><strong>celery.py</strong></p>
<pre><code>@celeryd_after_setup.connect
def setup_direct_queue(sender, instance, **kwargs):
logger.info("setup_direct_queue", worker=sender)
@celeryd_init.connect(sender=None)
def configure_worker(conf=None, **kwargs):
logger.info("celeryd_init")
@worker_process_init.connect(sender=None)
def worker_process_init(sender=None, **kwargs):
logger.info("worker_process_init", worker=sender)
@worker_init.connect(sender=None)
def worker_init(sender=None, **kwargs):
logger.info("worker_init", worker=sender)
@worker_ready.connect(sender=None)
def worker_ready(sender=None, **kwargs):
logger.info("worker_ready", worker=sender)
</code></pre>
|
<python><nginx><gunicorn><digital-ocean><gevent>
|
2023-06-21 19:44:18
| 0
| 1,948
|
Florent
|
76,526,479
| 20,307,768
|
How not to mess terminal up after sys.stdin.read() and subprocess invoking vim?
|
<p>I want to create interactive-mode code like <code>git rebase -i HEAD~6</code>, but for pipe editing / stdin redirection. Another example is <a href="http://joeyh.name/code/moreutils/" rel="nofollow noreferrer">vipe</a> from <code>moreutils</code>.</p>
<p>To do that, I'm studying the code below.</p>
<pre class="lang-py prettyprint-override"><code># Source: https://stackoverflow.com/a/39989442/20307768
import sys, tempfile, os
from subprocess import call
EDITOR = os.environ.get('EDITOR', 'vim') # that easy!
initial_message = b'something' # if you want to set up the file somehow
with tempfile.NamedTemporaryFile(suffix=".tmp") as tf:
tf.write(initial_message)
tf.flush()
call([EDITOR, tf.name])
</code></pre>
<p><strong>To get PIPE and edit it, I added two lines.</strong></p>
<pre class="lang-py prettyprint-override"><code>text = sys.stdin.read()
initial_message = text.encode()
</code></pre>
<p><strong>The problematic full code</strong> is below.</p>
<pre class="lang-py prettyprint-override"><code>import sys, tempfile, os
from subprocess import call
EDITOR = os.environ.get('EDITOR', 'vim')
text = sys.stdin.read()
initial_message = text.encode()
with tempfile.NamedTemporaryFile(suffix=".tmp") as tf:
tf.write(initial_message)
tf.flush()
call([EDITOR, tf.name])
</code></pre>
<p>After running the second code with <code>echo "some words" | python the_code.py</code> in the shell and exiting vim with <code>:q!</code>, the terminal is messed up. (Running <code>reset</code> in the shell will fix it.)</p>
<p>Without <code>reset</code>, I can still type in a macOS shell, but the prompt ends up in a weird place;
in a Linux shell I can't type at all.</p>
<p><em>I typed <code>set -x</code>, already.</em></p>
<pre class="lang-bash prettyprint-override"><code>[rockyos@localhost python-vipe]$ echo "asdfasd" | python vipe.py
+ python vipe.py
+ echo asdfasd
Vim: Warning: Input is not from a terminal
++ printf '\033]0;%s@%s:%s\007' rockyos localhost '~/TODO/python-vipe'
++ history -a
++ history -c
++ history -r
[rockyos@localhost python-vipe]$
</code></pre>
<p>I just want to return normal terminal after running the second full code.
Also, why is this happening?</p>
<p>I tried <code>os.system('stty sane; clear;')</code> and <code>os.system('reset')</code> at the end of the code (<a href="https://stackoverflow.com/a/17452756/20307768">https://stackoverflow.com/a/17452756/20307768</a>). <code>os.system('reset')</code> gave me what I wanted, but the message below is annoying. I could run <code>os.system('clear')</code> afterwards, but that is not what other normal programs do.</p>
<pre><code>Erase set to delete.
Kill set to control-U (^U).
Interrupt set to control-C (^C).
</code></pre>
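<p>For what it's worth, the breakage is usually because vim's stdin is the (already-consumed) pipe rather than a terminal — hence the "Input is not from a terminal" warning — so vim puts the tty into raw mode under wrong assumptions and can't restore it cleanly. A possible fix, sketched on the code above and POSIX-only, is to hand the editor the controlling terminal explicitly via <code>/dev/tty</code>, which is roughly what <code>vipe</code> does:</p>

```python
import sys, tempfile, os
from subprocess import call

EDITOR = os.environ.get('EDITOR', 'vim')

text = sys.stdin.read()          # drain the pipe first
initial_message = text.encode()

with tempfile.NamedTemporaryFile(suffix=".tmp") as tf:
    tf.write(initial_message)
    tf.flush()
    # Reattach the editor to the real terminal instead of the exhausted pipe.
    with open('/dev/tty') as tty:
        call([EDITOR, tf.name], stdin=tty)
    # Re-open by name: some editors replace the file rather than rewrite it.
    with open(tf.name) as edited_file:
        edited = edited_file.read()

sys.stdout.write(edited)
```
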
|
<python><pipe><command-line-interface><stdin><io-redirection>
|
2023-06-21 19:18:04
| 1
| 889
|
Constantin Hong
|
76,526,473
| 19,157,137
|
Designing Docker Environment with Test Files on Host Machine Python
|
<p>I am working on a project within a Docker environment, and I want to separate my PyTest test files from the Docker image. Specifically, I would like to have the test files stored in a separate directory (<code>tests/</code>) on the host machine. Is there a way to run the PyTest tests for the Python files within the Docker container while keeping the test files on the host machine?</p>
<p>Details:</p>
<pre><code>Host Machine:
├── project/
│ ├── code/
│ │ ├── file1.py
│ │ ├── file2.py
│ │ └── ...
│ └── tests/
│ ├── test_file1.py
│ ├── test_file2.py
│ └── ...
└── Dockerfile
Docker Machine:
└── code/
├── file1.py
├── file2.py
└── ...
</code></pre>
<p>Objective:</p>
<ol>
<li>I want to execute PyTest tests for the Python files within the Docker container.</li>
<li>The test files are located in the tests/ directory on the host machine.</li>
<li>I want to avoid including the test files directly in the Docker image.</li>
</ol>
<p>Questions:</p>
<ol>
<li>How can I configure my Docker setup to run PyTest tests for the Python files within the Docker container while keeping the test files on the host machine?</li>
<li>Are there any specific volume mappings or configurations needed to make the test files accessible within the Docker container?</li>
<li>What would be the recommended approach or best practices for achieving this separation between the test files on the host machine and the Docker environment?</li>
</ol>
<p>Any insights or guidance on how to accomplish this setup and run PyTest tests within the Docker container while keeping the test files on the host machine would be greatly appreciated.</p>
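<p>As a sketch of the usual pattern (image name and container paths here are placeholders): bind-mount the host's <code>tests/</code> directory into the container at run time, so the tests never have to be baked into the image:</p>

```shell
# Build the image with only the application code copied in.
docker build -t myproject .

# Bind-mount the host's tests/ into the container (read-only) and run pytest.
docker run --rm \
  -v "$(pwd)/tests:/app/tests:ro" \
  myproject \
  pytest /app/tests
```

<p>The same mapping can live in a <code>docker-compose.yml</code> <code>volumes:</code> entry if you prefer compose. The only requirement inside the container is that pytest is installed and the application code is importable from where the tests expect it.</p>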
|
<python><docker><unit-testing><dockerfile><pytest>
|
2023-06-21 19:17:14
| 0
| 363
|
Bosser445
|
76,526,415
| 14,244,437
|
Cannot resolve expression type, unknown output_field - Return model instances in Django's Coalesce?
|
<p>I have an endpoint that displays results from a model. There's going to be an image illustrating each result, and if none of the posts in this category has an associated media, I'll fall back to the writer's avatar.</p>
<p>I can get the media url value using the <code>.values('media_source')</code> method in the subqueries, but the problem is that the serializer expects a complex model with other fields and properties.</p>
<p>I've tried to use the code below to in order to get the instances, but I receive the following error:</p>
<p><code>Cannot resolve expression type, unknown output_field</code></p>
<p>I understand that I'd need to specify a field type (such as CharField, etc) but I can't figure out if there's a type that is actually interpreted as a Django model instance.</p>
<pre><code> def get_queryset(self):
annotation = Coalesce(
Subquery(PostMedia.objects.filter(categories=OuterRef('pk'))), Subquery(UserMedia.objects.filter(posts__categories=OuterRef('pk')))
)
return Categories.objects.filter(is_public=True).annotate(media_source=annotation)
</code></pre>
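<p>For reference, an annotation always resolves to a single database column, never a model instance, so the subqueries have to return exactly one column and one row. A hedged sketch of that shape (model and field names taken from the snippet above; the <code>CharField</code> choice is an assumption about <code>media_source</code>):</p>

```python
from django.db.models import CharField, OuterRef, Subquery
from django.db.models.functions import Coalesce

def get_queryset(self):
    post_media = PostMedia.objects.filter(
        categories=OuterRef('pk')
    ).values('media_source')[:1]          # exactly one column, one row
    user_media = UserMedia.objects.filter(
        posts__categories=OuterRef('pk')
    ).values('media_source')[:1]
    annotation = Coalesce(
        Subquery(post_media), Subquery(user_media),
        output_field=CharField(),         # tell the ORM the column's type
    )
    return Categories.objects.filter(is_public=True).annotate(
        media_source=annotation
    )
```

<p>If the serializer genuinely needs full media instances rather than a URL, that has to come from Python-side resolution (e.g. a <code>SerializerMethodField</code> or prefetching), not from the annotation.</p>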
<p>Here are the serializers:</p>
<pre><code>class MediaSerializer(serializers.Serializer):
media_detail = MediaField(source="cloudinary_resource")
url = serializers.CharField(source="media_source")
type = serializers.CharField()
height = serializers.SerializerMethodField()
width = serializers.SerializerMethodField()
@staticmethod
def get_height(obj):
try:
return obj.metadata.get("height", None)
except AttributeError:
if isinstance(obj, dict):
try:
return obj['metadata'].get("height", None)
except KeyError:
pass
return
@staticmethod
def get_width(obj):
try:
return obj.metadata.get("width", None)
except AttributeError:
if isinstance(obj, dict):
try:
return obj['metadata'].get("width", None)
except KeyError:
pass
return
class CategorySerializer(serializers.ModelSerializer):
media = MediaSerializer(source='media_source', many=True)
tags = SimpleTagSerializer(many=True)
class Meta:
model = Category
fields = ("id", "name", "text", "media", "segment", "tags", "is_public", "is_validated", "metadata")
</code></pre>
<p>And the whole traceback:</p>
<pre><code>api_1 | Internal Server Error: /api/v1/categories/
api_1 | Traceback (most recent call last):
api_1 | File "/usr/local/lib/python3.10/site-packages/django/core/handlers/exception.py", line 56, in inner
api_1 | response = get_response(request)
api_1 | File "/usr/local/lib/python3.10/site-packages/django/core/handlers/base.py", line 197, in _get_response
api_1 | response = wrapped_callback(request, *callback_args, **callback_kwargs)
api_1 | File "/usr/local/lib/python3.10/site-packages/django/views/decorators/csrf.py", line 55, in wrapped_view
api_1 | return view_func(*args, **kwargs)
api_1 | File "/usr/local/lib/python3.10/site-packages/rest_framework/viewsets.py", line 125, in view
api_1 | return self.dispatch(request, *args, **kwargs)
api_1 | File "/usr/local/lib/python3.10/site-packages/rest_framework/views.py", line 509, in dispatch
api_1 | response = self.handle_exception(exc)
api_1 | File "/usr/local/lib/python3.10/site-packages/rest_framework/views.py", line 469, in handle_exception
api_1 | self.raise_uncaught_exception(exc)
api_1 | File "/usr/local/lib/python3.10/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
api_1 | raise exc
api_1 | File "/usr/local/lib/python3.10/site-packages/rest_framework/views.py", line 506, in dispatch
api_1 | response = handler(request, *args, **kwargs)
api_1 | File "/usr/local/lib/python3.10/site-packages/rest_framework/mixins.py", line 40, in list
api_1 | page = self.paginate_queryset(queryset)
api_1 | File "/usr/local/lib/python3.10/site-packages/rest_framework/generics.py", line 171, in paginate_queryset
api_1 | return self.paginator.paginate_queryset(queryset, self.request, view=self)
api_1 | File "/usr/local/lib/python3.10/site-packages/rest_framework/pagination.py", line 387, in paginate_queryset
api_1 | self.count = self.get_count(queryset)
api_1 | File "/usr/local/lib/python3.10/site-packages/rest_framework/pagination.py", line 525, in get_count
api_1 | return queryset.count()
api_1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/query.py", line 470, in count
api_1 | return self.query.get_count(using=self.db)
api_1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/sql/query.py", line 552, in get_count
api_1 | number = obj.get_aggregation(using, ["__count"])["__count"]
api_1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/sql/query.py", line 537, in get_aggregation
api_1 | result = compiler.execute_sql(SINGLE)
api_1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/sql/compiler.py", line 1348, in execute_sql
api_1 | sql, params = self.as_sql()
api_1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/sql/compiler.py", line 1858, in as_sql
api_1 | inner_query_sql, inner_query_params = self.query.inner_query.get_compiler(
api_1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/sql/compiler.py", line 573, in as_sql
api_1 | extra_select, order_by, group_by = self.pre_sql_setup()
api_1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/sql/compiler.py", line 64, in pre_sql_setup
api_1 | self.setup_query()
api_1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/sql/compiler.py", line 55, in setup_query
api_1 | self.select, self.klass_info, self.annotation_col_map = self.get_select()
api_1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/sql/compiler.py", line 295, in get_select
api_1 | sql, params = col.select_format(self, sql, params)
api_1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/expressions.py", line 419, in select_format
api_1 | if hasattr(self.output_field, "select_format"):
api_1 | File "/usr/local/lib/python3.10/site-packages/django/utils/functional.py", line 49, in __get__
api_1 | res = instance.__dict__[self.name] = self.func(instance)
api_1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/expressions.py", line 283, in output_field
api_1 | raise FieldError("Cannot resolve expression type, unknown output_field")
api_1 | django.core.exceptions.FieldError: Cannot resolve expression type, unknown output_field
api_1 | [21/Jun/2023 21:33:59] "GET /api/v1/categories/ HTTP/1.1" 500 165520
</code></pre>
|
<python><django><postgresql><django-orm>
|
2023-06-21 19:09:07
| 1
| 481
|
andrepz
|
76,526,358
| 20,920,790
|
How to add custom logic for relationship in SDV HMASynthesizer model?
|
<p>I'm trying to make synthetic data with the SDV HMASynthesizer,
but it's failing because I need custom logic for the relationships
mentor_id - user_id and mentee_id - user_id.</p>
<p>Here is what I need:</p>
<p>If a "user_id" in the "users" table has the role "mentor", it should only appear in the "mentor_id" column of "sessions". Same thing for "mentee".</p>
<p>This is full code of SDV model:</p>
<pre><code>database_data = {
'domain': domain,
'region': region,
'sessions': sessions,
'users': users
}
database_metadata = MultiTableMetadata()
for i in database_data:
database_metadata.detect_table_from_dataframe(
table_name=i,
data=database_data[i]
)
# sessions________________________
database_metadata.update_column(
table_name='sessions',
column_name='session_id',
sdtype='id',
regex_format='[0-9]{5}'
)
database_metadata.update_column(
table_name='sessions',
column_name='mentor_id',
sdtype='id',
regex_format='[a-zA-Z]{4}'
)
database_metadata.update_column(
table_name='sessions',
column_name='mentee_id',
sdtype='id',
regex_format='[a-zA-Z]{4}'
)
database_metadata.update_column(
table_name='sessions',
column_name='mentor_domain_id',
sdtype='id',
regex_format='[a-zA-Z]{2}'
)
database_metadata.set_primary_key(
table_name='sessions',
column_name='session_id'
)
# users________________________
database_metadata.update_column(
table_name='users',
column_name='user_id',
sdtype='id',
regex_format='[0-9]{4}'
)
database_metadata.update_column(
table_name='users',
column_name='region_id',
sdtype='id',
regex_format='[a-zA-Z]{2}'
)
database_metadata.set_primary_key(
table_name='users',
column_name='user_id'
)
# domain ________________________
database_metadata.update_column(
table_name='domain',
column_name='id',
sdtype='id',
regex_format='[0-9]{2}'
)
database_metadata.set_primary_key(
table_name='domain',
column_name='id'
)
# region _______________________
database_metadata.update_column(
table_name='region',
column_name='id',
sdtype='id',
regex_format='[0-9]{2}'
)
database_metadata.set_primary_key(
table_name='region',
column_name='id'
)
# add relationship
database_metadata.add_relationship(
parent_table_name='domain',
child_table_name='sessions',
parent_primary_key='id',
child_foreign_key='mentor_domain_id'
)
database_metadata.add_relationship(
parent_table_name='region',
child_table_name='users',
parent_primary_key='id',
child_foreign_key='region_id'
)
database_metadata.add_relationship(
parent_table_name='users',
child_table_name='sessions',
parent_primary_key='user_id',
child_foreign_key='mentor_id'
)
database_metadata.add_relationship(
parent_table_name='users',
child_table_name='sessions',
parent_primary_key='user_id',
child_foreign_key='mentee_id'
)
database_metadata.visualize(
show_table_details=True,
show_relationship_labels=True,
output_filepath='my_metadata.png'
)
</code></pre>
<p>Plot with relationships:
<a href="https://i.sstatic.net/6eDKw.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6eDKw.jpg" alt="enter image description here" /></a></p>
<pre><code># Synthesizer
synthesizer = HMASynthesizer(database_metadata, locales=['ru_RU'])
# transformers
synthesizer.auto_assign_transformers(database_data)
from rdt.transformers.categorical import LabelEncoder
synthesizer.update_transformers(
table_name='domain',
column_name_to_transformer={
'name': LabelEncoder(add_noise=False)
}
)
synthesizer.update_transformers(
table_name='region',
column_name_to_transformer={
'name': LabelEncoder(add_noise=False)
}
)
synthesizer.update_transformers(
table_name='users',
column_name_to_transformer={
'role': LabelEncoder(add_noise=False)
}
)
synthesizer.update_transformers(
table_name='sessions',
column_name_to_transformer={
'session_status': LabelEncoder(add_noise=False)
}
)
# preprocess data
processed_data = synthesizer.preprocess(database_data)
# model fit
synthesizer.fit_processed_data(processed_data)
synthesizer.reset_sampling()
database_syntetic_data = synthesizer.sample(scale=1.01)
</code></pre>
<p>Check the "users" and "sessions" tables: the user with id 2566 is a mentee, but in "sessions" this id appears in the mentor_id column.</p>
<p>How to avoid this error?</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">user_id</th>
<th style="text-align: left;">reg_date</th>
<th style="text-align: left;">role</th>
<th style="text-align: right;">region_id</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">2556</td>
<td style="text-align: right;">2556</td>
<td style="text-align: left;">2022-08-13</td>
<td style="text-align: left;">mentee</td>
<td style="text-align: right;">17</td>
</tr>
</tbody>
</table>
</div><div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">session_id</th>
<th style="text-align: left;">session_date_time</th>
<th style="text-align: right;">mentor_id</th>
<th style="text-align: right;">mentee_id</th>
<th style="text-align: left;">session_status</th>
<th style="text-align: right;">mentor_domain_id</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">170</td>
<td style="text-align: right;">170</td>
<td style="text-align: left;">2021-08-04</td>
<td style="text-align: right;">2566</td>
<td style="text-align: right;">1720</td>
<td style="text-align: left;">finished</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">1296</td>
<td style="text-align: right;">1296</td>
<td style="text-align: left;">2021-12-17</td>
<td style="text-align: right;">2566</td>
<td style="text-align: right;">431</td>
<td style="text-align: left;">canceled</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">4497</td>
<td style="text-align: right;">4497</td>
<td style="text-align: left;">2022-05-16</td>
<td style="text-align: right;">2566</td>
<td style="text-align: right;">1327</td>
<td style="text-align: left;">canceled</td>
<td style="text-align: right;">4</td>
</tr>
<tr>
<td style="text-align: right;">5429</td>
<td style="text-align: right;">5429</td>
<td style="text-align: left;">2022-03-11</td>
<td style="text-align: right;">2566</td>
<td style="text-align: right;">1543</td>
<td style="text-align: left;">canceled</td>
<td style="text-align: right;">5</td>
</tr>
</tbody>
</table>
</div>
<p>How can I improve the quality of the categorical data?
I'm using LabelEncoder(add_noise=False) to transform the categorical data.</p>
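A referential-integrity check like the one below can flag such rows before and after synthesis. This is only a sketch with made-up data; the table and column names follow the question, and the real check would run over the full DataFrames:

```python
# Sketch: find session mentor_ids that do not belong to a user with the
# "mentor" role. Data here is illustrative, not from the real database.
users = [
    {"user_id": 2556, "role": "mentee"},
    {"user_id": 2566, "role": "mentee"},
    {"user_id": 100, "role": "mentor"},
]
sessions = [
    {"session_id": 170, "mentor_id": 2566, "mentee_id": 1720},
    {"session_id": 171, "mentor_id": 100, "mentee_id": 2556},
]

def invalid_mentor_refs(users, sessions):
    """Return mentor_ids referenced by sessions that are not mentor users."""
    mentor_ids = {u["user_id"] for u in users if u["role"] == "mentor"}
    return sorted({s["mentor_id"] for s in sessions} - mentor_ids)

print(invalid_mentor_refs(users, sessions))  # [2566] — a mentee used as mentor
```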
|
<python><sdv>
|
2023-06-21 19:00:06
| 1
| 402
|
John Doe
|
76,526,319
| 3,517,600
|
How to run a command in each sub directory within a given directory in python?
|
<p>I'm trying to run a command in each sub-directory within a directory.
For example, I could have the following structure:</p>
<pre><code>Dir1
--Dir2.imageAssets
----file1.pdf
----file1.metadata
--Dir3
----Dir4.imageAssets
------file2.pdf
------file2.metadata
</code></pre>
<p>I would run a script that would check in each directory if we have a pdf and if so convert it to another image type. Then I will modify the <code>.metadata</code> file to change the file extension.
In my case I want to walk through every sub directory and run the conversion script within a directory with suffix <code>.imageAssets</code>.</p>
<p>I've seen <code>os.walk</code>, which yields all the paths, directories and files within the tree, but would I then have to recursively call the function in each dir again, or is there a simpler way?</p>
<p>Open to any ideas if there's a simpler solution</p>
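For what it's worth, `os.walk` already descends the whole tree, so no manual recursion is needed; filtering the yielded directory paths is enough. A minimal self-contained sketch (the conversion call is a hypothetical placeholder):

```python
import os
import tempfile

# Build a small tree matching the question's layout.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "Dir2.imageAssets"))
os.makedirs(os.path.join(root, "Dir3", "Dir4.imageAssets"))

asset_dirs = []
for dirpath, dirnames, filenames in os.walk(root):  # walks every level for us
    if dirpath.endswith(".imageAssets"):
        asset_dirs.append(dirpath)
        # convert_pdfs(dirpath)  # hypothetical conversion script call

print(len(asset_dirs))  # both .imageAssets directories are found
```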
|
<python>
|
2023-06-21 18:53:57
| 1
| 1,420
|
MichaelGofron
|
76,526,174
| 11,956,484
|
Bokeh CustomJS callback to change data table column visibility to True
|
<p>I have a multi-choice widget, and when the user selects an option in the widget I want the DataTable to update and make the selected column visible. So essentially, when <code>TableColumn.field == multi_choice.value</code> I want to set <code>TableColumn.visible = True</code>.</p>
<pre><code>filtered_table_data=df[["Li","Be"]]
filtered_table_source= ColumnDataSource(data=filtered_table_data)
filtered_table_cols=[]
filtered_table_cols.append(TableColumn(field='Li', title='Li', width=750, visible=False))
filtered_table_cols.append(TableColumn(field='Be', title='Be', width=750,visible=False))
filtered_table=DataTable(source=filtered_table_source, columns=filtered_table_cols)
multi_choice = MultiChoice(value=[], options=df.columns[2:-1].tolist(), title='Select elements:')
callback2 = CustomJS(args=dict(multi_choice=multi_choice, filtered_table=filtered_table), code="""
for (var i=0; i<filtered_table.columns.length; i++)
{
for (var j=0; j<multi_choice.value.length;j++)
{
if (filtered_table.columns[i].field==multi_choice.value[j])
{
filtered_table.columns[i].visible=True;
}
}
}
filtered_table.change.emit()
""")
multi_choice.js_on_change("value",callback2)
</code></pre>
<p>When I try to run the code above, nothing happens when I select options; the data table remains empty.</p>
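One detail worth checking when porting Python logic into a CustomJS body: JavaScript booleans are lowercase (`true`, not `True`), and an uppercase `True` raises a ReferenceError inside the callback. A tiny model of the visibility loop, with a mock `columns` array standing in for the real TableColumn objects:

```javascript
// Mock stand-ins for DataTable columns; in real CustomJS code these come
// from the args dict. This only models the visibility toggle.
const columns = [
  { field: "Li", visible: false },
  { field: "Be", visible: false },
];
const selected = ["Be"]; // stands in for multi_choice.value

for (const col of columns) {
  // lowercase `true`/`false` — Python's `True` would throw here
  col.visible = selected.includes(col.field);
}

console.log(columns.map(c => c.visible)); // [ false, true ]
```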
|
<javascript><python><bokeh><bokehjs>
|
2023-06-21 18:30:06
| 1
| 716
|
Gingerhaze
|
76,526,098
| 6,018,285
|
Python Selenium - TESTNG equivalent setup to run execute tests on different browsers in Python
|
<p>I am looking for a Python equivalent of the below TestNG-specific logic to run tests on different browsers in a single run with parameterisation:</p>
<pre><code><!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Test Suite">
<test name = " chrome">
<parameter name = "browserType" value="chrome" />
<classes>
<class name = "runners.RunnerTest"/>
</classes>
</test>
<test name = " edge">
<parameter name = "browserType" value="edge" />
<classes>
<class name = "runners.RunnerTest"/>
</classes>
</test>
<test name = " firefox">
<parameter name = "browserType" value="firefox" />
<classes>
<class name = "runners.RunnerTest"/>
</classes>
</test>
</code></pre>
<pre><code>public class RunnerTest extends AbstractTestNGCucumberTests {
public final static ThreadLocal<String> BROWSER = new ThreadLocal<>();
@Parameters("browserType")
@BeforeClass
public void beforeClass(@Optional String browser) {
ConfigLoader.initializePropertyfile();
RunnerTest.BROWSER.set(browser);
</code></pre>
<p>Any ideas appreciated.</p>
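In pytest the usual analogue of a parameterised TestNG suite is a parametrised fixture (e.g. `@pytest.fixture(params=["chrome", "edge", "firefox"])`). The sketch below models the idea without pytest or selenium so it runs standalone; the browser registry and thread-local slot mirror `RunnerTest.BROWSER`, and the webdriver factory call is a hypothetical placeholder:

```python
import threading

BROWSERS = ["chrome", "edge", "firefox"]
_local = threading.local()  # analogue of RunnerTest's ThreadLocal<String>

def set_browser(name):
    _local.browser = name

def get_browser():
    return _local.browser

def run_suite():
    """Run the 'suite' once per browser, as the TestNG XML does."""
    visited = []
    for name in BROWSERS:  # pytest would do this via a params fixture
        set_browser(name)
        # driver = make_webdriver(get_browser())  # hypothetical factory
        visited.append(get_browser())
    return visited

print(run_suite())  # ['chrome', 'edge', 'firefox']
```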
|
<python><selenium-webdriver><testng><browser-automation>
|
2023-06-21 18:16:25
| 1
| 454
|
Automation Engr
|
76,526,082
| 926,918
|
NVidia Rapids: Non-Euclidean metric in cuml UMAP
|
<p>I am trying to use a GPU (A100) to perform UMAP for speedup. I am facing a problem: the Euclidean metric does not seem to work for me at all, but correlation/cosine are promising. However, the code I am using below seems to produce only Euclidean-metric-based computation on the GPU, while working well on the CPU.</p>
<p>Tools:</p>
<pre><code>cuml 23.04.01 cuda11_py310_230421_g958186d07_0 rapidsai
libcuml 23.04.01 cuda11_230421_g958186d07_0 rapidsai
libcumlprims 23.04.00 cuda11_230412_g7502d8e_0 nvidia
python 3.10.11 he550d4f_0_cpython conda-forge
</code></pre>
<p>Relevant code:</p>
<pre><code>def umap_cpu(ip_mat, n_components, n_neighbors, metric):
import umap
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
ip_std = scaler.fit_transform(ip_mat)
reducer = umap.UMAP(n_components=n_components, n_neighbors=n_neighbors, metric=metric)
umap_embed = reducer.fit_transform(ip_std)
return umap_embed
def umap_gpu(ip_mat, n_components, n_neighbors, metric):
import cuml
from cuml.manifold import UMAP
from sklearn.preprocessing import StandardScaler
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
scaler = StandardScaler()
ip_std = scaler.fit_transform(ip_mat)
reducer = UMAP(n_components=n_components, n_neighbors=n_neighbors, metric=metric)
umap_embed = reducer.fit_transform(ip_std)
return umap_embed
</code></pre>
<p>Using <code>help</code>, I noticed that other metrics are supported. However, I found an <a href="https://github.com/rapidsai/cuml/issues/4816" rel="nofollow noreferrer">old post</a> whose discussion said otherwise.</p>
<blockquote>
<p>PR will allow the metric for the input KNN graph to be changed but the
only supported target metrics currently remain to be categorical and
Euclidean. We can support different target metrics (and we have issue
open to support them) but they will require a slightly different
objective function in the SGD. I do believe there's an error in the
throwing of the Python exception (pointed out in this issue)</p>
</blockquote>
<p>I would like to know if the implementation has been done for other metrics or the help tool shows wrong info.</p>
<blockquote>
<p>metric : string (default='euclidean').
Distance metric to use. Supported distances are ['l1, 'cityblock', 'taxicab',
'manhattan', 'euclidean', 'l2', 'sqeuclidean', 'canberra', 'minkowski', 'chebyshev', 'linf', 'cosine', 'correlation', 'hellinger', 'hamming', 'jaccard'] Metrics that take arguments (such as minkowski) can have arguments passed via the metric_kwds dictionary.</p>
</blockquote>
<p>TIA</p>
|
<python><nvidia><rapids><cuml>
|
2023-06-21 18:13:29
| 1
| 1,196
|
Quiescent
|
76,525,891
| 17,487,457
|
pandas: add column whose value is available in previous row but not in current, of another column
|
<p>Suppose this is my <code>df</code>:</p>
<pre class="lang-py prettyprint-override"><code>{'accuracy': [0.773, 0.841, 0.862, 0.874, 0.883, 0.913],
'code': [('D',),('D', 'F'),('B', 'D', 'F'),
('B', 'F', 'K'), ('B', 'F', 'I', 'K'),
('F', 'I', 'K')]}
df
accuracy code
0 0.773 (D,)
1 0.841 (D, F)
2 0.862 (B, D, F)
3 0.874 (B, F, K)
4 0.883 (B, F, I, K)
5 0.913 (F, I, K)
</code></pre>
<p>I would like to add a column <code>dropped</code> whose value is the item in <code>code</code> that is present in the previous row but not in the current row.</p>
<p>Expected:</p>
<pre class="lang-py prettyprint-override"><code> accuracy code dropped
0 0.773 (D,) -
1 0.841 (D, F) -
2 0.862 (B, D, F) -
3 0.874 (B, F, K) D
4 0.883 (B, F, I, K) -
5 0.913 (F, I, K) B
</code></pre>
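The core logic is a set difference between each row's tuple and the previous row's tuple. Sketched here in plain Python for clarity; in pandas the same comparison can be applied between `df['code']` and `df['code'].shift()`:

```python
codes = [('D',), ('D', 'F'), ('B', 'D', 'F'),
         ('B', 'F', 'K'), ('B', 'F', 'I', 'K'), ('F', 'I', 'K')]

# First row has no predecessor, so it gets '-' by definition.
dropped = ['-']
for prev, cur in zip(codes, codes[1:]):
    diff = set(prev) - set(cur)  # items that vanished between rows
    dropped.append(', '.join(sorted(diff)) if diff else '-')

print(dropped)  # ['-', '-', '-', 'D', '-', 'B']
```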
|
<python><pandas><dataframe>
|
2023-06-21 17:40:29
| 1
| 305
|
Amina Umar
|
76,525,529
| 19,157,137
|
Separating PyTest Tests and Sphinx Documentation in Docker with Poetry for Development
|
<p>I want to create a design for utilizing Poetry within a Docker environment. During development, I want to have separate directories for PyTest test files (<code>tests/</code>) and Sphinx documentation (<code>docs/</code>) on the host machine, rather than including them directly in the Docker image. Is there a way I could run tests for the Python files within the Docker image and create auto-updating documentation from Sphinx, which is only present on the host machine?</p>
<p><strong>Context:</strong></p>
<p>I am utilizing Docker for my development environment and Poetry for dependency management. My goal is to establish a workflow where the PyTest test files and Sphinx documentation directories remain on the host machine, while the actual code and dependencies are contained within the Docker image. This approach allows for easier test execution within the Docker environment and ensures that Sphinx generates up-to-date documentation without the need to rebuild the entire Docker image.</p>
<p>Directory Structure:</p>
<pre><code>Host Machine:
├── project/
│ ├── code/
│ │ ├── file1.py
│ │ ├── file2.py
│ │ └── ...
│ ├── tests/
│ │ ├── test_file1.py
│ │ ├── test_file2.py
│ │ └── ...
│ └── docs/
│ ├── conf.py
│ ├── index.rst
│ ├── ...
│ └── ...
└── Dockerfile
Docker Machine:
├── code/
│ ├── file1.py
│ ├── file2.py
│ └── ...
└── Dockerfile
</code></pre>
<p>Specifically, I would like to know:</p>
<ol>
<li>How can I run PyTest tests against the Python code within the Docker
image, while keeping the test files (<code>tests/</code>) on the host machine?</li>
<li>Is there a way to configure Sphinx to generate the documentation from
the Sphinx source files (<code>docs/</code>) on the host machine, without
including them in the Docker image?</li>
<li>What would be the recommended
setup and workflow for achieving this combination of Docker, Poetry,
PyTest, and Sphinx in terms of file organization and command
configurations?</li>
</ol>
<p>I appreciate any guidance or suggestions on how to design this development environment to meet these requirements.</p>
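For questions 1 and 2, one common pattern is to keep the image code-only and bind-mount `tests/` and `docs/` from the host at run time. A docker-compose sketch (the service and image names are placeholders, not from the question):

```yaml
# docker-compose.yml sketch — names are assumed, adjust to your project
services:
  app:
    image: project-image          # built from the Dockerfile, code only
    volumes:
      - ./tests:/app/tests:ro     # host tests mounted read-only at run time
      - ./docs:/app/docs          # host Sphinx sources, writable for builds
    command: poetry run pytest /app/tests
```

With this, `docker compose run app` executes pytest inside the container against the host's test files, and a similar one-off run with `poetry run sphinx-build` over the mounted `docs/` directory regenerates documentation without rebuilding the image.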
|
<python><docker><design-patterns><python-sphinx><python-poetry>
|
2023-06-21 16:51:08
| 0
| 363
|
Bosser445
|
76,525,472
| 3,838,750
|
Python Apache Beam DoFn For Polling
|
<p>I’m having trouble coming up with the best pattern to eagerly poll. By eagerly, I mean that elements should be consumed and yielded as soon as possible. There are a handful of experiments that I’ve tried and my latest attempt using the timer API seems quite promising, but is operating in a way that I find rather unintuitive. My solution was to create a sort of recursive timer callback - which I found one <a href="https://github.com/apache/beam/blob/v2.48.0/sdks/python/apache_beam/transforms/userstate_test.py#L797" rel="nofollow noreferrer">example</a> of within the beam test code.</p>
<p>I have a few questions:</p>
<ol>
<li><p>The below code runs fine with a single worker but with multiple workers there are duplicate values. It seems that the callback and snapshot of the state is provided to multiple workers and the number of duplications increases with the number of workers. Is this due to the values being provided to <code>timer.set</code>?</p>
</li>
<li><p>I’m using <code>TimeDomain.WATERMARK</code> here due to it simply not working when using <code>TimeDomain.REAL_TIME</code>. The <a href="https://beam.apache.org/documentation/programming-guide/#state-and-timers" rel="nofollow noreferrer">docs</a> seem to suggest <code>REAL_TIME</code> would be the way to do this, however there seems to be no guarantee that a <code>REAL_TIME</code> callback will run. In this sample setting the timer to <code>REAL_TIME</code> will simply not ever fire the callback. Interestingly, if you call <code>timer.set</code> with any value less than the current <code>time.time()</code>, then the callback will run, however it seems to fire immediately regardless of the value (and in this sample will actually raise an <a href="https://github.com/apache/beam/blob/v2.48.0/sdks/python/apache_beam/runners/direct/transform_evaluator.py#L943" rel="nofollow noreferrer">AssertionError</a>).</p>
</li>
</ol>
<p>My concern with using something like a <code>time.sleep</code> within the <code>DoFn.process</code> method is that I'd like the runner to be doing something else if possible, and don't want a bunch of workers just sleeping repeatedly.</p>
<p>I'm curious if anyone has a better pattern for doing this and could clarify anything that I may be misunderstanding.</p>
<pre class="lang-py prettyprint-override"><code>import random
import threading
import apache_beam as beam
import apache_beam.coders as coders
import apache_beam.transforms.combiners as combiners
import apache_beam.transforms.userstate as userstate
import apache_beam.utils.timestamp as timestamp
from apache_beam.options.pipeline_options import PipelineOptions
class Log(beam.PTransform):
"""
A pass-through transform that prints the element.
"""
lock = threading.Lock()
@classmethod
def _log(cls, element, label):
with cls.lock:
# This just colors the print in terminal
print('\033[1m\033[92m{}\033[0m : {!r}'.format(label, element))
return element
def expand(self, pcoll):
return pcoll | beam.Map(self._log, self.label)
class EagerProcess(beam.DoFn):
BUFFER_STATE = userstate.BagStateSpec('buffer', coders.PickleCoder())
POLL_TIMER = userstate.TimerSpec('timer', beam.TimeDomain.WATERMARK)
def process(
self,
element,
buffer=beam.DoFn.StateParam(BUFFER_STATE),
timer=beam.DoFn.TimerParam(POLL_TIMER),
):
_, item = element
# Represents splitting out the element into individual pieces that
# may finish in any order.
for i in range(item):
buffer.add(i)
timer.set(timestamp.Timestamp.now() + timestamp.Duration(seconds=10))
@userstate.on_timer(POLL_TIMER)
def flush(
self,
buffer=beam.DoFn.StateParam(BUFFER_STATE),
timer=beam.DoFn.TimerParam(POLL_TIMER),
):
cache = buffer.read()
buffer.clear()
requeue = False
for item in cache:
# Represents some sort of check to see if the element piece
# is "complete".
if random.random() < 0.1:
yield item
else:
buffer.add(item)
requeue = True
if requeue:
timer.set(timestamp.Timestamp.now() + timestamp.Duration(seconds=10))
def main():
# NOTE: When using a single worker, this pipeline will behave "as expected".
# Introducing multiple workers results in duplicate values in addition to the
# problems where the timer callback are fired immediately.
options = PipelineOptions.from_dictionary({
'direct_num_workers': 3,
'direct_running_mode': 'multi_threading',
})
pipe = beam.Pipeline(options=options)
(
pipe
| beam.Create([10])
| 'Init' >> Log()
| beam.Reify.Timestamp()
| 'PairWithKey' >> beam.Map(lambda x: (hash(x), x))
| beam.ParDo(EagerProcess())
| 'Complete' >> Log()
| beam.transforms.combiners.Count.Globally()
| 'Count' >> Log()
)
result = pipe.run()
result.wait_until_finish()
if __name__ == '__main__':
main()
</code></pre>
|
<python><apache-beam>
|
2023-06-21 16:40:43
| 0
| 650
|
Sam Bourne
|
76,525,469
| 7,422,352
|
How to create a dataset containing python files in the "Machine Readable" format?
|
<p>I created a <a href="https://www.kaggle.com/datasets/adeepak7/tensorflow-global-and-operation-level-seeds" rel="nofollow noreferrer">dataset</a> that contains the python files containing snippets required for the Kaggle kernel - <a href="https://www.kaggle.com/code/adeepak7/tensorflow-s-global-and-operation-level-seeds/" rel="nofollow noreferrer">Tensorflow's Global and Operation level seeds</a></p>
<p>Since the kernel is around setting/re-setting global and local level seeds, the nullification of the effect of these seeds in the subsequent cells wasn't possible.</p>
<p>Hence, the snippets have been provided as separate python files and these python files are executed independently in the separate cells.</p>
<p><strong>My question -</strong> Kaggle is asking me to provide these files in a "Machine Readable" format. How can I provide or convert these python files into the "Machine Readable" format?</p>
|
<python><dataset><kaggle>
|
2023-06-21 16:40:34
| 0
| 5,381
|
Deepak Tatyaji Ahire
|
76,525,367
| 843,075
|
Trying to retrieve filename from pytest.ini
|
<p>I'm trying to retrieve a filename from a pytest.ini file. It was initially hardcoded but now I have placed this in a pytest.ini file. Firstly, is this the correct approach for things like filenames? I am trying to read it but I am getting errors.</p>
<p>Here is my pytest.ini:</p>
<pre><code> [pytest]
cards_file = data/cards.csv
</code></pre>
<p>Here is my function:</p>
<pre><code>@staticmethod
def get_card_details():
filename = pytest.config.getini("cards_file")
with open(filename, 'r') as csvfile:
reader = csv.reader(csvfile)
rows = list(reader)[1:]
return random.choice(rows)
</code></pre>
<p>This is the error:</p>
<pre><code>E AttributeError: module pytest has no attribute config. Did you mean: 'Config'?
</code></pre>
<p>When I change this to <code>Config</code> I get this error:</p>
<pre><code>E TypeError: Config.getini() missing 1 required positional argument: 'name'
</code></pre>
<p>What is the argument 'name' that is required? I could not find this in the docs.</p>
<p>What is the difference between <code>Config.getini()</code> and <code>config.getini()</code>?</p>
<p>Thanks.</p>
|
<python><pytest>
|
2023-06-21 16:24:44
| 1
| 304
|
fdama
|
76,525,137
| 1,862,861
|
Can you make assignments between PyTorch tensors using ragged indices without a for loop?
|
<p>Suppose I have two PyTorch <code>Tensor</code> objects of equal shape:</p>
<pre class="lang-py prettyprint-override"><code>import torch
x = torch.randn(2, 10)
y = torch.randn(2, 10)
</code></pre>
<p>Now, I have a list of indices (of the same length as the first <code>Tensor</code> axis) which give different starting positions in the second <code>Tensor</code> axis from which I want to assign values from <code>y</code> into <code>x</code>, i.e.,</p>
<pre class="lang-py prettyprint-override"><code>idxs = [2, 6]
for i, idx in enumerate(idxs):
x[i, idx:] = y[i, idx:]
</code></pre>
<p>As above, I can do this with a for loop, but my question is whether there is a more efficient way of doing this without an explicit for loop?</p>
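The usual vectorized trick for ragged start positions is a boolean mask built from a column-index grid. Shown here with NumPy so it runs standalone, but the same expression works in PyTorch with `torch.arange` and boolean indexing:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 10))
y = rng.standard_normal((2, 10))
x_ref = x.copy()

idxs = np.array([2, 6])
# mask[i, j] is True where j >= idxs[i] — the region to copy from y.
mask = np.arange(x.shape[1]) >= idxs[:, None]
x[mask] = y[mask]

# Same result as the explicit loop:
for i, idx in enumerate(idxs):
    x_ref[i, idx:] = y[i, idx:]
print(np.array_equal(x, x_ref))  # True
```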
|
<python><pytorch><tensor>
|
2023-06-21 15:55:56
| 1
| 7,300
|
Matt Pitkin
|
76,525,079
| 5,437,090
|
pandas goupby method slow and inefficient for sum()
|
<p><strong>Given:</strong></p>
<p>A synthetic dataset in pandas <code>DataFrame</code> format <code>users vs. tokens</code>, generated via the following helper function:</p>
<pre><code>import numpy as np
import pandas as pd
import string
import random
def get_df(nROWs:int=10, nCOLs:int=100, MIN=0.0, MAX=199.0):
my_strings = string.printable
df = pd.DataFrame(np.random.uniform(low=MIN, high=MAX, size=(nROWs, nCOLs)).astype("float16"), columns=list(map(lambda orig_string: "tk_"+orig_string, random.sample(my_strings, nCOLs))) )
df["user_ip"] = [f"u{random.randint(0, nROWs)}" for r in range(nROWs)]
return df
</code></pre>
<p><strong>Goal:</strong></p>
<p>I would like to sum values of each column up for the grouped users,</p>
<p><strong>My inefficient solution:</strong></p>
<p>Considering small dataframes as follows:</p>
<pre><code>df1 = get_df(nROWs=3, nCOLs=5, MIN=0, MAX=10.0) # here `nCOLs` can't exceptionally go above 100, due to `len(string.printable)=100`
df2 = get_df(nROWs=5, nCOLs=4, MIN=0, MAX=5.0)
df3 = get_df(nROWs=7, nCOLs=9, MIN=0, MAX=1.0)
</code></pre>
<p><a href="https://i.sstatic.net/1XPjs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1XPjs.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/2crLc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2crLc.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/DyVX7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DyVX7.png" alt="enter image description here" /></a></p>
<p>and concatenate them along <code>axis=0</code> first:</p>
<pre><code>df_c = pd.concat([df1, df2, df3,], axis=0)
</code></pre>
<p><a href="https://i.sstatic.net/Azwq1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Azwq1.png" alt="enter image description here" /></a>
then the <code>.groupby()</code> method works fine for this small size:</p>
<pre><code>d = dict()
for n, g in df_c.groupby("user_ip"):
d[n] = g.loc[:, g.columns!="user_ip"].sum()
df_res = pd.DataFrame.from_dict(d, orient="index").astype("float16")
</code></pre>
<p><a href="https://i.sstatic.net/0ELfp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0ELfp.png" alt="enter image description here" /></a>
<strong>Problem:</strong></p>
<p>Assuming I have enough memory resources on a supercomputer, and considering real dataframes with sizes on the scale of <code>15e+5 x 45e+3</code> or higher, it is super slow, as each iteration inside the for loop takes roughly <code>10 ~ 15 sec</code>:</p>
<pre><code>df1 = get_df(nROWs=int(15e+5), nCOLs=100, MIN=0, MAX=200.0) # here `nCOLs` can't exceptionally go above 100, due to `len(string.printable)=100`
df2 = get_df(nROWs=int(3e+6), nCOLs=76, MIN=0, MAX=100.0)
df3 = get_df(nROWs=int(1e+3), nCOLs=88, MIN=0, MAX=0.9)
</code></pre>
<p>I was wondering if there might be a better and more efficient solution to deal with big sized data for this purpose.</p>
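For reference, the per-group Python loop can usually be replaced by a single vectorised call, which pushes the summation into compiled code. A minimal sketch with a tiny frame standing in for the real data:

```python
import pandas as pd

df_c = pd.DataFrame({
    "user_ip": ["u1", "u2", "u1"],
    "tk_a": [1.0, 2.0, 3.0],
    "tk_b": [0.5, 0.5, 0.5],
})

# One call instead of a Python-level loop over groups.
df_res = df_c.groupby("user_ip").sum(numeric_only=True)
print(df_res)
```

Using float32 instead of float16 may also help: float16 arithmetic is typically emulated in software on CPUs, which can dominate the runtime at this scale.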
<p>Cheers,</p>
|
<python><pandas><dataframe><performance><coding-efficiency>
|
2023-06-21 15:48:25
| 1
| 1,621
|
farid
|
76,524,933
| 112,976
|
How does Uvicorn / Fastapi handle concurrency with 1 worker and synchronous endpoint?
|
<p>Understanding Uvicorn asynchronous behavior</p>
<p>I am trying to understand the behavior of Uvicorn. I have created a sample FastAPI app which mainly sleeps for 5 seconds.</p>
<pre><code>import time
from datetime import datetime
from fastapi import FastAPI
app = FastAPI()
counter = 0
@app.get("/")
def root():
global counter
counter = counter + 1
my_id = counter
print(f'I ({my_id}) am feeling sleepy')
time.sleep(5)
print(f'I ({my_id}) am done sleeping')
return {}
</code></pre>
<p>I called my app using the following command of Apache Bench:</p>
<pre><code>ab -n 5 -c 5 http://127.0.0.1:8000/
</code></pre>
<p>Output:</p>
<pre><code>I (1) am feeling sleepy -- 0s
I (1) am done sleeping -- 5s
I (2) am feeling sleepy -- 5s
I (3) am feeling sleepy -- 5s
I (4) am feeling sleepy -- 5s
I (5) am feeling sleepy -- 5s
I (2) am done sleeping -- 10s
I (4) am done sleeping -- 10s
I (3) am done sleeping -- 10s
I (5) am done sleeping -- 10s
</code></pre>
<p>Why are requests running concurrently? I ran the app as:</p>
<pre><code>uvicorn main:app --workers 1
</code></pre>
<p>Please note that I did not use the async keyword so for me everything should be completely synchronous.</p>
<p>From the FastAPI docs:</p>
<blockquote>
<p>When you declare a path operation function with normal def instead of
async def, it is run in an external threadpool that is then awaited,
instead of being called directly (as it would block the server).</p>
</blockquote>
<p>Where is this threadpool? As I am using sleep, I thought the only worker available would be completely blocked.</p>
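The behavior can be modeled with a stdlib thread pool: the event loop stays free while each sync endpoint sleeps in a worker thread (Starlette dispatches `def` endpoints to AnyIO's thread pool, whose default limit is around 40 threads, so 5 concurrent requests all sleep at once). A self-contained sketch of that effect:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def endpoint(i):
    time.sleep(0.2)  # stands in for the 5-second sleep
    return i

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=40) as pool:  # rough model of the pool
    results = list(pool.map(endpoint, range(5)))
elapsed = time.perf_counter() - start

print(results, round(elapsed, 2))  # all five overlap: ~0.2s total, not ~1s
```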
|
<python><python-asyncio><fastapi><uvicorn>
|
2023-06-21 15:30:03
| 1
| 22,768
|
poiuytrez
|
76,524,838
| 8,532,866
|
Resampling Xarray Dataset to higher spatial resolution?
|
<p>This feels like it should be an easy problem with a known solution but struggling to figure it out.</p>
<p>I have a 2D xarray dataset (with latitude/longitude) on a regular grid. I want to increase the resolution of this data by sampling it onto a higher-resolution grid, e.g. going from 1 degree to 0.1 degree.</p>
<p>I thought this should have been as easy as using the .interp method and providing it with the new grid but this causes issues at the boundaries where we lose data points. What I think is happening is that the "half" of the grid box at the edge of the domain ends up as a NaN as that's the nearest neighbour to that data point.</p>
<p>From what I can tell, none of the interpolation methods give the desired effect, and it feels like some data manipulation might be required to "hack" this into working correctly.</p>
<p>Thanks</p>
|
<python><python-xarray><cartopy>
|
2023-06-21 15:16:32
| 1
| 1,486
|
Rob
|
76,524,750
| 25,999
|
How can I add a value to a Pydantic model (not instance)
|
<p>Is there a way to store a set value with a Pydantic model (not each instance of the model)? In the code below, <code>slack_members</code> is a global variable. I'd like to have it self-contained within the model. This would allow for easier testing and better portability. Or is there a way to pass the <code>slack_members</code> value to the <code>email()</code> function? I could store it with each instance, but that would be very inefficient since <code>slack_members</code> can contain hundreds of items. Any other alternatives would also be appreciated.</p>
<pre><code>class Allocation(BaseModel):
person: str
hours: int
@property
def email(self):
member = slack_members.match_user(self.person)
return member.email
</code></pre>
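One option worth considering: Pydantic treats fields annotated with `typing.ClassVar` as class-level attributes rather than per-instance fields. The same idea is sketched below with a plain dataclass so it runs without pydantic installed; `SlackDirectory` is a made-up stand-in for the real `slack_members` object:

```python
from dataclasses import dataclass
from typing import ClassVar

class SlackDirectory:  # hypothetical stand-in for the real slack_members
    def __init__(self, emails):
        self._emails = emails
    def match_user(self, person):
        return self._emails[person]  # simplified: returns the email directly

@dataclass
class Allocation:
    person: str
    hours: int
    # ClassVar fields are shared by the class and excluded from __init__;
    # a pydantic BaseModel treats a ClassVar annotation the same way.
    slack_members: ClassVar[SlackDirectory] = None

    @property
    def email(self):
        return self.slack_members.match_user(self.person)

Allocation.slack_members = SlackDirectory({"alice": "alice@example.com"})
a = Allocation(person="alice", hours=3)
print(a.email)  # alice@example.com
```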
|
<python><pydantic>
|
2023-06-21 15:07:15
| 0
| 2,434
|
kidbrax
|
76,524,663
| 1,302,465
|
Python Scipy Stats Rayleigh distribution parameter estimate function gives negative value for scale parameter and ppf function provides nan output
|
<p>I'm using the Rayleigh distribution to estimate the scale parameter and getting a negative value, although the dataset doesn't contain any negative values. What does that negative value mean? If I have a negative value, then the ppf gives nan output.</p>
<pre><code>from scipy.stats import rayleigh
def baseline_rayleigh(data):
pd = rayleigh.fit(data)
B = pd[0]
return B
p = 0.1
B = baseline_rayleigh(data)
ThreshL = rayleigh.ppf(1 - p, scale=B)
</code></pre>
<p>For the same data, Matlab's Rayleigh fit gives a non-negative value and <code>raylinv</code> returns valid output, whereas the Python equivalent <code>rayleigh.ppf</code> gives nan.</p>
<pre><code>pd = fitdist(data,'Rayleigh')
B = pd.B;
ThreshL = raylinv(1-p,B);
</code></pre>
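A likely explanation: `rayleigh.fit` returns `(loc, scale)`, so `pd[0]` is the *location* estimate, which can legitimately be negative — the scale is `pd[1]`. Matlab's `fitdist` fixes the location at 0, so the scipy equivalent would be `rayleigh.fit(data, floc=0)[1]`. For reference, the closed forms involved, in plain stdlib:

```python
import math

# Rayleigh MLE for the scale (location fixed at 0) and the quantile
# function — the same closed forms scipy/Matlab use under the hood.
def rayleigh_scale_mle(data):
    return math.sqrt(sum(x * x for x in data) / (2 * len(data)))

def rayleigh_ppf(q, scale):
    return scale * math.sqrt(-2.0 * math.log(1.0 - q))

data = [0.5, 1.0, 1.5, 2.0]
B = rayleigh_scale_mle(data)
print(B > 0)  # the scale MLE is always positive for nonzero data
```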
|
<python><matlab><scipy><scipy.stats>
|
2023-06-21 14:58:30
| 1
| 477
|
Keerthi Jayarajan
|
76,524,456
| 561,243
|
INSERT Statement in sqlite python with custom type
|
<p>It is true that I'm new to sqlite and maybe I am just asking a very trivial question.</p>
<p>I would like to store some analysis results in a sqlite table directly from my python analysis tool. The results are contained in a custom object (a class) and I thought that the <a href="https://docs.python.org/3/library/sqlite3.html#how-to-write-adaptable-objects" rel="nofollow noreferrer">section</a> of sqlite python documentation 'How to adapt custom Python types to SQLite values' would be a starting point for my problem.</p>
<p>Elaborating from that example, I came to this piece of code that is <strong>not working</strong>:</p>
<pre class="lang-py prettyprint-override"><code>import sqlite3
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
# define a __conform__ method to adapt the type
def __conform__(self, protocol):
if protocol == sqlite3.PrepareProtocol:
return self.x, self.y
connection = sqlite3.connect('mydb.db')
cursor = connection.cursor()
cursor.execute("""
CREATE TABLE IF NOT EXISTS Points (x REAL, y REAL)
""")
connection.commit()
# the insert query takes two qmark style placeholder and the __conform__ method of Point is
# actually returning a tuple with two values.
# attempt passing a tuple with 1 point to the execute method.
cursor.execute("""
INSERT INTO Points VALUES (?, ?)
""", (Point(24.2, 22.1),))
# fails with a ProgrammingError: Incorrect number of bindings supplied. The current statement uses 2, and there are 1 supplied.
# attempt to pass a tuple with 1 point to the execute many method.
cursor.executemany("""
INSERT INTO Points VALUES (?, ?)
""", (Point(24.2, 22.1),))
# fails with a ProgrammingError: parameters are of unsupported type
# attempt of passing a Point directly
cursor.execute("""
INSERT INTO Points VALUES (?, ?)
""", Point(24.2, 22.1))
# fails with a ProgrammingError: parameters are of unsupported type
# attempt to manually adapt the object
cursor.execute("""
INSERT INTO Points VALUES (?, ?)
""", Point(24.2, 22.1).__conform__(sqlite3.PrepareProtocol))
# works, but I can't believe that this is the expected behavior!
</code></pre>
<p>The only difference between my code and the one provided in the documentation (that works) is that my object is providing two values instead of one.</p>
<p>I have also tried the second approach with the adapter function to be registered, but with the same result. Moreover, I have also tried replacing the qmark placeholders with named placeholders and swapping from a tuple to a dictionary in the adapter, but with no luck.</p>
<p>What am I doing wrong?</p>
<p>Thanks for helping me out with this problem!</p>
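For context: sqlite3 adapts each parameter to exactly one SQLite value, so `__conform__` cannot expand one object into two placeholders — which is why only the manual unpacking worked. Two workarounds, sketched self-contained (unpack the attributes yourself, or serialize into a single column as the docs example does):

```python
import sqlite3

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __conform__(self, protocol):
        # One object -> ONE SQLite value (here a string), as in the docs.
        if protocol is sqlite3.PrepareProtocol:
            return f"{self.x};{self.y}"

con = sqlite3.connect(":memory:")
cur = con.cursor()
p = Point(24.2, 22.1)

# Workaround 1: two columns, unpack the attributes explicitly.
cur.execute("CREATE TABLE Points (x REAL, y REAL)")
cur.execute("INSERT INTO Points VALUES (?, ?)", (p.x, p.y))

# Workaround 2: one TEXT column, let __conform__ serialize the object.
cur.execute("CREATE TABLE Points2 (pt TEXT)")
cur.execute("INSERT INTO Points2 VALUES (?)", (p,))

print(cur.execute("SELECT * FROM Points").fetchone())   # (24.2, 22.1)
print(cur.execute("SELECT * FROM Points2").fetchone())  # ('24.2;22.1',)
```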
|
<python><sqlite>
|
2023-06-21 14:34:24
| 1
| 367
|
toto
|
76,524,326
| 12,415,855
|
Use gevent with parameter in working routine and return result-value?
|
<p>I have the following working code:</p>
<pre><code>from gevent import monkey; monkey.patch_all()
from gevent.pool import Pool
import time
def work(x):
time.sleep(1) # Or use requests.get() or something else IO-related here.
newVal = x ** 2
print(x, newVal)
start = time.perf_counter()
p = Pool()
l = [x for x in range(10)]
for i,x in enumerate(l):
p.spawn(work, x)
p.join()
print(round(time.perf_counter() - start, 3)) # Prints 1.001 or 1.002 for me.
</code></pre>
<p>This works fine and is finished within a little bit more than 1 second:</p>
<pre><code>(test) C:\DEV\Python-Diverses\gevent>python try.py
0 0
1 1
2 4
3 9
4 16
5 25
6 36
7 49
8 64
9 81
1.036
</code></pre>
<p>Now I want to return the result from the work function and output all values at the end, so I changed the code to:</p>
<pre><code>from gevent import monkey; monkey.patch_all()
from gevent.pool import Pool
import time
def work(x):
time.sleep(1) # Or use requests.get() or something else IO-related here.
newVal = x ** 2
return (newVal)
start = time.perf_counter()
p = Pool()
l = [x for x in range(10)]
ergList = []
for i,x in enumerate(l):
erg = p.spawn(work, x)
ergList.append(erg)
p.join()
print(ergList)
print(round(time.perf_counter() - start, 3)) # Prints 1.001 or 1.002 for me.
</code></pre>
<p>The whole process still took only 1 second, but the output is not the calculated value: it's a Greenlet instance:</p>
<pre><code>(test) C:\DEV\Python-Diverses\gevent>python try.py
[<Greenlet at 0x180704a55e0: _run>, <Greenlet at 0x18070533f40: _run>, <Greenlet at 0x18070563180: _run>, <Greenlet at 0x180705632c0: _run>, <Greenlet at 0x18070563400: _run>, <Greenlet at 0x18070563540: _run>, <Greenlet at 0x18070563680: _run>, <Greenlet at 0x180705637c0: _run>, <Greenlet at 0x18070563900: _run>, <Greenlet at 0x18070563a40: _run>]
1.07
</code></pre>
<p>How can I parallelize the process and get the result value back instead of the Greenlet instance?</p>
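For reference, `spawn` returns a handle, not the value — with gevent, the result is retrieved from the handle via `g.get()` (or `g.value`) after `p.join()`. The same handle/value split is sketched below with stdlib futures so it runs without gevent installed:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def work(x):
    time.sleep(0.1)  # stands in for the 1-second IO wait
    return x ** 2

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    handles = [pool.submit(work, x) for x in range(10)]  # like p.spawn(work, x)
    results = [h.result() for h in handles]              # like g.get() on a Greenlet
elapsed = time.perf_counter() - start

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```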
|
<python><gevent>
|
2023-06-21 14:20:30
| 1
| 1,515
|
Rapid1898
|
76,524,288
| 6,775,670
|
How to show both stdout and stderr captures in pytest, but not logs capture?
|
<p>Originally, I need both blocks for pytest result</p>
<blockquote>
<p>Captured stdout call</p>
</blockquote>
<p>and</p>
<blockquote>
<p>Captured stderr call</p>
</blockquote>
<p>but not</p>
<blockquote>
<p>Captured log call</p>
</blockquote>
<p>There is a <code>--show-capture</code> option, but no <code>both</code> argument, unfortunately. It looks like I must choose either one of the group, or all at once. Is there some way to achieve that?</p>
<pre><code>pytest --show-capture=both
pytest: error: argument --show-capture: invalid choice:
'both' (choose from 'no', 'stdout', 'stderr', 'log', 'all')
</code></pre>
|
<python><testing><pytest>
|
2023-06-21 14:17:01
| 1
| 1,312
|
Nikolay Prokopyev
|
76,524,220
| 20,612,566
|
Extract values from list with dicts
|
<p>I have a list of orders. Each order contains products and their prices.
What I want is to write the shortest script possible to count the total number of products sold and the total amount of money earned. I'm looking at the fields "count" and "total".</p>
<pre><code>[
{
"id":1,
"items":[
{
"offerName":"name_1",
"count":6,
"prices":[
{
"costPerItem":129.0,
"total":774.0
}
]
},
{
"offerName":"name_2",
"count":1,
"prices":[
{
"costPerItem":120.0,
"total":120.0
}
]
}
]
},
{
"id":2,
"items":[
{
"offerName":"name_3",
"count":10,
"prices":[
{
"costPerItem":30.0,
"total":300.0
}
]
},
{
"offerName":"name_4",
"count":1,
"prices":[
{
"costPerItem":10.0,
"total":10.0
}
]
}
]
}
]
</code></pre>
<p>I got the right values, but I think there is a way to make the calculation more beautiful, without a lot of looping.</p>
<pre><code>counts = []
prices = []
for order in sales_data:
products_list = order.get("items", [])
for offer in products_list:
counts.append(offer.get("count", 0))
prices.append(offer.get("prices", []))
total_earned=[]
for price in prices:
total = price[0].get("total", 0)
total_earned.append(total)
print(sum(counts))
print(sum(total_earned))
</code></pre>
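<p>For example, would nested generator expressions like these be considered the idiomatic way (assuming each item has exactly one entry in <code>prices</code>, as in my data)?</p>

```python
sales_data = [
    {"id": 1, "items": [
        {"offerName": "name_1", "count": 6, "prices": [{"costPerItem": 129.0, "total": 774.0}]},
        {"offerName": "name_2", "count": 1, "prices": [{"costPerItem": 120.0, "total": 120.0}]},
    ]},
    {"id": 2, "items": [
        {"offerName": "name_3", "count": 10, "prices": [{"costPerItem": 30.0, "total": 300.0}]},
        {"offerName": "name_4", "count": 1, "prices": [{"costPerItem": 10.0, "total": 10.0}]},
    ]},
]

# total number of products sold
total_count = sum(item["count"] for order in sales_data for item in order["items"])
# total amount of money earned
total_earned = sum(p["total"] for order in sales_data
                   for item in order["items"] for p in item["prices"])
print(total_count, total_earned)  # 18 1204.0
```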
|
<python><list><for-loop><nested-loops>
|
2023-06-21 14:09:31
| 3
| 391
|
Iren E
|
76,524,202
| 11,028,689
|
how to install py-fhe library from the github repo?
|
<p>Can anybody give me some clear instructions on how to install this library from the Anaconda PowerShell Prompt on Windows 10, for my Jupyter Notebook environment, from here?</p>
<p><a href="https://github.com/sarojaerabelli/py-fhe" rel="nofollow noreferrer">https://github.com/sarojaerabelli/py-fhe</a></p>
<p>e.g. do I need to run the setup.py file?</p>
|
<python><encryption><installation><anaconda>
|
2023-06-21 14:08:15
| 1
| 1,299
|
Bluetail
|
76,523,937
| 5,663,844
|
Upgrade sklearn model from 0.23.1 to >1.0
|
<p>So I have a random forest model saved as a <code>.pkl</code> file. This model was trained using an older version of <code>sklearn</code> (0.21.3); I would like to use it within an environment that has to use a <code>sklearn</code> version greater than 1.0.0.</p>
<p>From 0.23.1 to 1.0.0, <code>sklearn</code> changed the naming of modules, classes and functions, from <code>forest</code> to <code>_forest</code> as an example.
For that reason I currently cannot import the old model, as it will not be recognized as a <code>sklearn</code> model. I would be interested to know if there is a way to convert such a model file into one that is importable under 1.0.0 or greater.</p>
<p>Thanks</p>
|
<python><scikit-learn><pickle>
|
2023-06-21 13:40:00
| 1
| 480
|
Janosch
|
76,523,916
| 8,065,970
|
Error in r.listen(source) while using the speech recognition module
|
<p>I am trying to run a basic code for speech recognition. But I am getting an error in the following line:</p>
<pre class="lang-py prettyprint-override"><code>audio = r.listen(source)
</code></pre>
<p>The small piece of code that I am trying is:</p>
<pre class="lang-py prettyprint-override"><code>import speech_recognition as s_r
print(s_r.__version__) # just to print the version not required
r = s_r.Recognizer()
my_mic = s_r.Microphone() #my device index is 1, you have to put your device index
with my_mic as source:
print("Say now!!!!")
audio = r.listen(source) #take voice input from the microphone
print(r.recognize_google(audio)) #to print voice into text
</code></pre>
<p>I have developed multiple projects using this code but this time I am getting the following error:</p>
<pre><code>3.10.0
Say now!!!!
Traceback (most recent call last):
File "D:\professional\JARVIS\assistant\test.py", line 7, in <module>
audio = r.listen(source) #take voice input from the microphone
File "C:\Users\theas\AppData\Roaming\Python\Python38\site-packages\speech_recognition\__init__.py", line 465, in listen
assert source.stream is not None, "Audio source must be entered before listening, see documentation for ``AudioSource``; are you using ``source`` outside of a ``with`` statement?"
AssertionError: Audio source must be entered before listening, see documentation for ``AudioSource``; are you using ``source`` outside of a ``with`` statement?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\professional\JARVIS\assistant\test.py", line 7, in <module>
audio = r.listen(source) #take voice input from the microphone
File "C:\Users\theas\AppData\Roaming\Python\Python38\site-packages\speech_recognition\__init__.py", line 189, in __exit__
self.stream.close()
AttributeError: 'NoneType' object has no attribute 'close'
>>>
</code></pre>
<p>I am using this code on a new system with Windows 11. I have installed a fresh Python with default settings with version <code>3.11.4</code>.
The following library versions have been installed for speech recognition:</p>
<pre><code>SpeechRecognition: 3.10.0
Pyaudio: 0.2.13
</code></pre>
<p>The same code is working in my other system. Please help me find this silly mistake I made during the setup.</p>
<p>UPDATE:
When I print the attributes of <code>my_mic</code>, I get the following output:</p>
<pre><code>['CHUNK', 'MicrophoneStream', 'SAMPLE_RATE', 'SAMPLE_WIDTH', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__enter__', '__eq__', '__exit__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'audio', 'device_index', 'format', 'get_pyaudio', 'list_microphone_names', 'list_working_microphones', 'pyaudio_module', 'stream']
</code></pre>
<p>When I try to get a list of all the microphones present, I get a large list. But when I try to get the list of working microphones, that is empty.</p>
|
<python><python-3.x><windows><speech-recognition><windows-11>
|
2023-06-21 13:37:40
| 2
| 487
|
theashwanisingla
|
76,523,866
| 7,920,004
|
delta.tables module not found
|
<p>I'm trying to import <code>delta.tables</code> in my AWS Glue local script but I get an error when running:</p>
<pre><code>bash gluesparksubmit /home/my_user_name/aws-glue-libs/code/script.py
</code></pre>
<p>My code for <code>data_lake_client</code> that is used in my <code>script.py</code> by calling
<code>from clients import DataLakeClient</code>:</p>
<pre><code>from pyspark.sql.session import SparkSession
from delta.tables import *
class DataLakeClient:
def __init__(self, s3_alias, prefix):
self.spark = (
SparkSession.builder
.config(
"spark.jars",
"/home/my_user_name/aws-glue-libs/code/libs/delta-core_2.12-1.0.0.jar",
)
.config(
"spark.sql.extensions",
"io.delta.sql.DeltaSparkSessionExtension",
)
.config(
"spark.sql.catalog.spark_catalog",
"org.apache.spark.sql.delta.catalog.DeltaCatalog",
)
.getOrCreate()
)
self.path = f"{s3_alias}/{prefix}/"
def read_dl(self, table):
return DeltaTable.forPath(self.spark, f"{self.path}/{table}").toDF()
</code></pre>
|
<python><amazon-web-services><aws-glue>
|
2023-06-21 13:33:20
| 1
| 1,509
|
marcin2x4
|
76,523,837
| 14,114,654
|
Group rows where str is contained in another row
|
<p>How could I create 2 columns that flag when the rows in the group are contained in each other? Everything is case-insensitive.</p>
<pre><code> Type value
0 Fruit apple
1 Fruit App le
2 Fruit Apple yes
3 Fruit Apple
4 Cutlery Spoon
</code></pre>
<p>Expected Output</p>
<pre><code>dup_index flags, per group, what rows the value is contained within
dup is "yes" if that row is mentioned in dup_index
type value dup dup_index
0 Fruit apple yes 2,3 ("apple" is contained in row 2 and 3)
1 Fruit App le
2 Fruit Apple yes yes ("apple yes" is not contained in another row)
3 Fruit Apple yes 0,2 ("apple" is contained in row 0 and 2 )
4 Cutlery Spoon
...
</code></pre>
<p>I then want the flexibility to drop rows where dup is "yes".</p>
|
<python><pandas><group-by>
|
2023-06-21 13:29:44
| 1
| 1,309
|
asd
|
76,523,820
| 2,015,542
|
How can I plot a large array of Scatter3d subplots using plotly?
|
<p>I want to create a large number of Scatter3d plots in a plotly subplots array. When I plot 16 or fewer plots, everything renders correctly, but when I try to plot e.g. 25 plots in a 5x5 array, only the last 16 plots are rendered.</p>
<p>How can I get more than 16 plots to render correctly?</p>
<p><a href="https://i.sstatic.net/ZqmfE.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZqmfE.jpg" alt="16 plots out of 25 render correctly" /></a></p>
<p>Running the code below produces the problem demonstrated in the screenshot above. However, if you edit line 7 to read <code>nrows,ncols = 4,4</code> then 16 plots render correctly. This problem does not seem to occur with other plot types (e.g. Scatter).</p>
<pre><code>from plotly.subplots import make_subplots
import plotly.graph_objects as go
import numpy as np
Npts = 30
nrows,ncols = 5,5
speclist = [[{"type": "Scatter3d"} for i in range(0,ncols)] for j in range(0,nrows)]
fig = make_subplots(rows=nrows,cols=ncols,specs=speclist)
for i in range(0,nrows*ncols):
row,col = (i//ncols)+1,(i%ncols)+1
print("Index: {i:02d} Row: {row:02d} Column: {col:02d}".format(i=i,row=row,col=col))
xvals=np.random.uniform(0.0,1.0,Npts)
yvals=np.random.uniform(0.0,1.0,Npts)
zvals=np.random.uniform(0.0,1.0,Npts)
fig.add_trace(go.Scatter3d(x=xvals, y=yvals, z=zvals,mode='markers',showlegend=False),row=row,col=col)
fig.show()
</code></pre>
|
<python><plotly><visualization><subplot><scatter3d>
|
2023-06-21 13:28:39
| 0
| 2,598
|
CnrL
|
76,523,780
| 2,382,141
|
Reload a django view
|
<p>Hi, I have a Django app where the usual user can see a list of services on the home page, click on one to read more details, and return to the home page.
Users can register (using the Django admin) and log in.
A new user only sees the list of services. When he reads a service, the app tracks the visited URL (I'm using django-tracking2 <a href="https://github.com/bruth/django-tracking2" rel="nofollow noreferrer">https://github.com/bruth/django-tracking2</a>) and shows some recommendations (based on text analysis and similarity). However, the logged-in user needs to log out and log back in to see updated recommendations. It seems Django doesn't reload the view.
These are my files:</p>
<p>urls.py</p>
<pre><code>from django.contrib import admin
from django.urls import path, include
from rs import views
from django.conf import settings
from django.conf.urls.static import static
from django.urls import re_path
urlpatterns = [
path('admin/', admin.site.urls),
path('', views.index, name="index"),
path('static', views.static, name='static'),
path('users/index', views.users_servizi_list, name='user_index'),
path('users/detail/<int:pk>/', views.user_dettaglio, name='user_dettaglio'),
path('accounts/', include('django.contrib.auth.urls')),
path('sign_out', views.sign_out, name='logout'),
path('index', views.servizi_list, name='servizi'),
path('detail/<int:pk>/', views.dettaglio, name='dettaglio'),
re_path(r'^tracking/', include('tracking.urls')),
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
</code></pre>
<p>views.py</p>
<pre><code>from django.shortcuts import render
from django.http import HttpResponse
from django.conf import settings
from django.conf.urls.static import static
from django.shortcuts import render, redirect
from django.contrib import messages
from django.contrib.auth import login, authenticate, logout
from django.contrib.auth.decorators import login_required
from .models import Services
from django.http import HttpResponse, HttpResponseRedirect
from django.template import loader
from tracking.models import Visitor, Pageview
from django_pandas.io import read_frame
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.decomposition import PCA
import pandas as pd
import numpy as np
import pickle
def login(request):
return render(request, 'users/login.html')
@login_required
def sign_out(request):
logout(request)
messages.success(request,f'You have been logged out.')
return redirect('index')
@login_required
def user_index(request):
return render(request, 'users/index.html')
def servizi_list(request):
service_list = list(Services.objects.all())
template = loader.get_template('index.html')
context = {
'servizi': service_list,
}
return HttpResponse(template.render(context, request))
def dettaglio(request, pk: int):
query_service = Services.objects.filter(id=pk)
template = loader.get_template('detail.html')
context = {
'servizio': query_service,
}
return HttpResponse(template.render(context, request))
@login_required
def user_dettaglio(request, pk: int):
query_service = Services.objects.filter(id=pk)
template = loader.get_template('users/detail.html')
context = {
'servizio': query_service,
}
return HttpResponse(template.render(context, request))
############# recommendation engine ###############
def give_recommendations(index, data, cos_sim_data, print_recommendation = False,print_recommendation_plots= False,print_genres=False):
index_recomm = cos_sim_data.loc[index].sort_values(ascending=False).index.tolist()[1:6]
index_recomm = [int(x) for x in index_recomm]
service_recomm = data['servizio'].loc[index_recomm].values
id_recomm = data['id'].loc[index_recomm].values
result = {'Service': service_recomm, 'Index':index_recomm}
return id_recomm
def rs_engine(user_id):
session_key = Visitor.objects.filter(user_id=user_id).filter(end_time__isnull=False).order_by('-end_time')
if not session_key:
return [-1]
else:
for sss_kv in session_key:
url_visited = Pageview.objects.filter(visitor_id=sss_kv).order_by('-view_time')
print('url vis: ', url_visited)
for u in url_visited:
print('url visitato: ', u.referer)
target = u.referer
target_id = target.split('/')
service_target = target_id[-2]
check_service = service_target.isnumeric()
if check_service == True:
serviceTarget = int(service_target)
#print('service tag Identificato: ', serviceTarget)
break
else:
pass
if check_service == True:
serviceTarget = int(service_target)
break
else:
return [-1]
services = Services.objects.all()
data = read_frame(services)
X = []
for el in data.descrizione:
X.append(str(el))
model = SentenceTransformer('all-MiniLM-L6-v2')
#embeddings = model.encode(X, show_progress_bar=True)
#print('emb batches: ', type(embeddings), embeddings[0:10])
#Store sentences & embeddings on disc
#with open('../rsRegionePuglia/static/embeddings.pkl', "wb") as fOut:
# pickle.dump({'sentences': X, 'embeddings': embeddings}, fOut, protocol=pickle.HIGHEST_PROTOCOL)
#Load sentences & embeddings from disc
with open('../rsRegionePuglia/static/embeddings.pkl', "rb") as fIn:
stored_data = pickle.load(fIn)
stored_sentences = stored_data['sentences']
stored_embeddings = stored_data['embeddings']
#print('emb loaded: ', type(stored_embeddings), stored_embeddings[0:10])
x = np.array(X)
#cos_sim_data = pd.DataFrame(cosine_similarity(embeddings))
# salva la cos sim in un file csv
#cos_sim_data.to_csv('../rsRegionePuglia/static/cos_sim.csv', index=False)
cos_sim_data = pd.read_csv('../rsRegionePuglia/static/cos_sim.csv')
#print('target id: ', service_target)
mapping = {}
for i in range(0, len(services)):
mapping[data.id[i]] = i
service_start = mapping[serviceTarget]
sugg = give_recommendations(service_start, data, cos_sim_data, True) #service_target
return sugg
@login_required
def users_servizi_list(request):
service_list = list(Services.objects.all())
template = loader.get_template('users/index.html')
current_user = request.user
user_id = current_user.id
rs = rs_engine(user_id)
rs = rs[1:]
if rs==[] or rs[0] == -1:
context = {
'servizi': service_list,
'new_user': 1,
}
return HttpResponse(template.render(context, request))
else:
descr = []
for id in rs:
service = Services.objects.get(pk=id)
descr.append(service.servizio)
servizi = {}
for i in range(1, len(rs)):
servizi[rs[i]] = descr[i]
context = {
'servizi': service_list,
'recom': servizi,
'new_user': 0,
}
current_user = request.user
return HttpResponse(template.render(context, request))
</code></pre>
<p>index.html</p>
<pre><code>{% extends "base.html" %}
{%load static%}
{%block blk%}
{% if new_user == 0 %}
Ti potrebbe interessare:
{% for key, values in recom.items %}
<a class="read-more" href="{% url 'user_dettaglio' pk=key %}">{{values}}</a>
{% endfor %}
{% for x in servizi %}
{% include "users/service_card.html" %}
{% endfor %}
{% else %}
{% for x in servizi %}
{% include "users/service_card.html" %}
{% endfor %}
{%endif%}
{%endblock%}
</code></pre>
<p>service_card.html</p>
<pre><code>{%load static%}
<div class="container my-4">
<div class="my-5">
<div class="container">
<div class="row">
<div class="col-12 col-md-6 col-lg-4">
<!--start card-->
<div class="card-wrapper card-space">
<div class="card card-bg card-big">
<div class="card-body">
<div class="top-icon">
<svg class="icon">
<use href="{% static 'js/svg/sprites.svg#it-card' %}"></use>
</svg>
</div>
<h3 class="card-title h5 ">{{ x.categoria }}</h3>
<p class="card-text">{{ x.servizio }}</p>
<p class="card-text">{{ x.descrizione }}</p>
<p class="card-text">{{ x.tag }}</p>
<a class="read-more" href="{% url 'user_dettaglio' pk=x.id %}">
<span class="text">Leggi di più</span>
<span class="visually-hidden">su Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do
eiusmod tempor…</span>
<svg class="icon">
<use href="{% static '/svg/sprites.svg#it-arrow-right' %}"></use>
</svg>
</a>
</div>
</div>
</div>
<!--end card-->
</div>
</div>
</div>
</div>
</div>
</div>
</code></pre>
<p>Is there a way to update the <code>users_servizi_list</code> view without the user logging out and back in?
Any hint is appreciated.</p>
|
<python><django><reload>
|
2023-06-21 13:24:01
| 1
| 348
|
Giuseppe Ricci
|
76,523,716
| 8,046,546
|
Sagemaker environment variable with .env
|
<p><strong>[Background]</strong></p>
<p>I am used to working with <code>xx.py</code> files and defining a <code>.env</code> file in a Python project in PyCharm, so that by doing</p>
<pre><code>import os
os.environ['abc']
</code></pre>
<p>I will get the 'abc' variable defined in the <code>.env</code> file.</p>
<p><strong>[Question]</strong></p>
<p>Now I need to work with SageMaker Studio. What is the <strong>best practice</strong> to manage environment variables with SageMaker?</p>
<p>I tried to create a <code>.env</code> file, but got a rename error. Does that mean using <code>.env</code> is not a good approach in SageMaker?</p>
<pre><code>Rename Error
Cannot rename file or directory '/xx/untitled.txt'
</code></pre>
|
<python><amazon-web-services><environment-variables><amazon-sagemaker><amazon-sagemaker-studio>
|
2023-06-21 13:16:29
| 1
| 320
|
Mapotofu
|
76,523,525
| 8,814,131
|
Add rows together based on other columns
|
<p>I want to add rows together for specific values in column <code>Col2</code>.</p>
<p>Let's use the following dataframe:</p>
<pre><code> Col1 Col2 Value
0 A 1 1
1 A 2 1
2 B 1 2
3 B 2 2
4 B 3 3
</code></pre>
<p>Now I want to sum the rows with <code>Col2</code> equal to <code>1</code> and <code>2</code> and call it <code>1</code> while maintaining the groups in <code>Col1</code>. So I want to get this:</p>
<pre><code> Col1 Col2 Value
0 A 1 2
2 B 1 4
4 B 3 3
</code></pre>
<p>Is this possible with a general approach? I tried using groupby, but it does not seem to work. Note that I have multiple columns such as <code>Col1</code> and multiple <code>value</code> columns, so I hope to use a general solution.</p>
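<p>For reference, this is the kind of <code>groupby</code> attempt I made; it collapses <em>every</em> <code>Col2</code> value per group instead of only summing <code>1</code> and <code>2</code> together:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Col1": ["A", "A", "B", "B", "B"],
    "Col2": [1, 2, 1, 2, 3],
    "Value": [1, 1, 2, 2, 3],
})

# naive attempt: sums everything per Col1, losing the Col2 == 3 row as a separate entry
out = df.groupby("Col1", as_index=False)["Value"].sum()
print(out)
#   Col1  Value
# 0    A      2
# 1    B      7
```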
|
<python><pandas>
|
2023-06-21 12:55:40
| 2
| 3,280
|
T C Molenaar
|
76,523,509
| 2,366,887
|
How to use the new gpt-3.5-16k model with langchain?
|
<p>I have written an application in langchain that passes a number of chains to a Sequential Chain to run. The problem I'm having is that the prompts are so large that they are exceeding the 4K token limit size. I saw that OpenAI has released a new 16K token window sized model for ChatGPT, but I can't seem to access it from the API. When I try, I get the following error:</p>
<p><code>openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?</code></p>
<p>Here is how I'm attempting to instantiate the model:</p>
<pre><code> self.llm = OpenAI(model='gpt-3.5-turbo-16k',temperature = self.config.llm.temperature,
openai_api_key = self.config.llm.openai_api_key,
max_tokens=self.config.llm.max_tokens
)
</code></pre>
<p>Anybody know how I can fix this?</p>
|
<python><langchain><py-langchain>
|
2023-06-21 12:52:17
| 2
| 523
|
redmage123
|
76,523,226
| 21,404,794
|
Botorch equality_constraints argument example in the optimize_acqf function
|
<p>I'm writing Bayesian optimization code with <a href="https://botorch.org/v/0.2.3/" rel="nofollow noreferrer">BoTorch</a> and I want to apply a constraint like x2+x4+x6=1 to the candidates, so I use the <code>optimize_acqf</code> function with this signature:</p>
<pre class="lang-py prettyprint-override"><code> new_x, value = optimize_acqf(
acq_function=acq_func,
bounds=to.tensor([[0.0] * 20, [1.0] * 20]),
q=BATCH_SIZE,
num_restarts=10,
raw_samples=100, # used for intialization heuristic
options={"batch_limit": 5, "maxiter": 200, "nonnegative": True},
equality_constraints=[(to.tensor([2,4,6]),to.tensor([1,1,1]),1)],
sequential=True,
)
</code></pre>
<p>But when I try to run the code, I get a <code>RunTimeError (dot : expected both vectors to have same dtype, but found Long and Float)</code> in the equality_constraints argument (if I comment that part out, the error disappears). I've tried setting the dtype of the tensors to float (as I reckon that the float tensor is the one from the data, and then it gives <code>ValueError: No feasible point found. Constraint polytope appears empty. Check your constraints.</code>)</p>
<p>I've searched the docs, and <a href="https://botorch.org/v/0.2.3/api/optim.html" rel="nofollow noreferrer">here</a> it says that the signature of the argument I want is</p>
<blockquote>
<p>constraints (equality) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs</p>
</blockquote>
<p>In another <a href="https://botorch.org/api/optim.html#botorch.optim.optimize.optimize_acqf" rel="nofollow noreferrer">page referring to the same function</a> (afaik) It says that the list of tuples should contain tensors and floats</p>
<blockquote>
<p>equality_constraints (List[Tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) = rhs</p>
</blockquote>
<p>From what I understand, the argument performs something like this in the case of the example given, where <code>equality_constraints=[(to.tensor([2,4,6]),to.tensor([1,1,1]),1)]</code></p>
<pre class="lang-py prettyprint-override"><code>for i in range(num_rows):
X[i,2]+X[i,4]+X[i,6] = 1
</code></pre>
<p>But I haven't been able to figure out how I should define those tensors to run correctly.</p>
<p><strong>Can someone give me a simple example of how to use this argument?</strong></p>
|
<python><pytorch>
|
2023-06-21 12:20:18
| 1
| 530
|
David Siret Marqués
|
76,523,189
| 6,901,146
|
Is there a way to include and execute a Python script and render the output in MkDocs using markdown-exec?
|
<p>As I am building <a href="https://www.mkdocs.org/" rel="nofollow noreferrer">MkDocs</a> documentation pages I have integrated many python scripts within. This is possible by using <a href="https://pypi.org/project/markdown-exec/" rel="nofollow noreferrer">markdown-exec</a> syntax:</p>
<pre><code>```python exec="on"
print("Hello Markdown!")
```
</code></pre>
<p>However it is very cumbersome and annoying to work with Python code inside markdown (<code>.md</code>) pages as I am unable to test or auto-format code blocks on the fly. It makes documentation pretty clunky to construct. Extracting code from markdown into their own Python (<code>.py</code>) files would make everything easier to maintain, documentation more consistent, help with dynamics, and reduce git diffs.</p>
<p>I already tried playing around with markdown-exec syntax to enable loading external Python files, but I haven't been successful in completing that task so far. markdown-exec doesn't mention anything inside their documentation pages about loading external sources and their examples only show inline code. I even tried using a combination of markdown-exec and <a href="https://facelessuser.github.io/pymdown-extensions/extensions/snippets/" rel="nofollow noreferrer">snippets</a>, however snippets don't resolve before markdown-exec compiles the code.</p>
<p>Ultimately it would be easiest if there was a way to load python code snippets into mkdocs before markdown-exec compiles and runs the code.</p>
|
<python><documentation><mkdocs>
|
2023-06-21 12:15:52
| 1
| 1,634
|
TheRealVira
|
76,523,136
| 7,040,647
|
How to authenticate with email on ldap3
|
<p>Is there a way to use the email for LDAP authentication instead of the <code>cn</code>?</p>
<p>I have tried with "cn=test,ou=users,dc=example,dc=org" and it works, but I want to use the email test@example.org.</p>
<p>Thank you.</p>
|
<python><authentication><ldap3>
|
2023-06-21 12:09:35
| 0
| 330
|
nipuro
|
76,523,100
| 8,458,083
|
Why can't I reach the remaining patterns after the second pattern in the pattern matching block of a union type of 4 elements
|
<p>First I define some types</p>
<pre><code>from dataclasses import dataclass
from typing import Tuple,Union
@dataclass
class Point:
x: float
y: float
myPoint= Tuple[int,int]
otherPoint = Union[Tuple[int,int,int] , Tuple[int,int,str]]
Shape = Union[Point , myPoint ,otherPoint]
</code></pre>
<p>As you can see with <code>print(Shape)</code>, <code>Shape</code> is a union type of 4 elements:</p>
<blockquote>
<p>typing.Union[<strong>main</strong>.Point, typing.Tuple[int, int], typing.Tuple[int, int, int], typing.Tuple[int, int, str]]</p>
</blockquote>
<p>But the following code doesn't work.</p>
<pre><code>def print_shape(shape: Shape):
match shape:
case Point(x, y):
print(f"Point {x} {y}")
case myPoint as mp:
print(f"{mp[0]}{mp[1]}")
case _ :
print("point 3d")
print(Shape)
print_shape(Point(1, 2))
print_shape((5, 7))
</code></pre>
<blockquote>
<p>case myPoint as mp:
^^^^^^^
SyntaxError: name capture 'myPoint' makes remaining patterns unreachable</p>
</blockquote>
<p>There are 2 other patterns that should normally be reachable after matching <code>myPoint</code>.</p>
<p>N.B. If you comment out <code>case _:</code>, the code runs fine and as expected.</p>
<p>What is printed:</p>
<blockquote>
<p>Point 1 2</p>
</blockquote>
<blockquote>
<p>57</p>
</blockquote>
|
<python>
|
2023-06-21 12:05:21
| 0
| 2,017
|
Pierre-olivier Gendraud
|
76,523,075
| 5,616,309
|
Retrieve IBM i column headings from SQL using pyodbc
|
<p>I want to launch SQL queries from a PC to a DB2 database on IBM i.
The files were created with the old method (source file, 10-character file and field names, and detailed column headings).
In SQL from the emulator, I have an option to retrieve the column headings with the long name (Preferences / Results / column headings), so I think the JDBC driver can export them (I think it is enabled by checking Edition / JDBC Configuration / Other / Extended metadata).</p>
<p>I can't retrieve this long-name column heading when using pyodbc from Python, with the driver "iSeries Access ODBC Driver". I don't see where to get it; I searched <a href="https://www.ibm.com/docs/en/i/7.1?topic=apis-connection-string-keywords" rel="nofollow noreferrer">https://www.ibm.com/docs/en/i/7.1?topic=apis-connection-string-keywords</a> for the right option for the connection string, but didn't find anything.
The 'description' cursor attribute in pyodbc does retrieve the column name and length, but no extended attributes.</p>
<p>Is it not possible at all to retrieve the long name? Is this option accessible in the JDBC driver?</p>
|
<python><db2><pyodbc><db2-400>
|
2023-06-21 12:02:40
| 2
| 1,129
|
FredericP
|
76,523,065
| 10,530,984
|
How to update SQLAlchemy Postgres array at a single element?
|
<p>There are no docs for updating an ARRAY field in SQLAlchemy, and I don't know how to update a position by index in the SQLAlchemy ORM. It only seems possible to update the whole array, but I need to update a single element of an entry.</p>
<p>I found example in postgres docs:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT * FROM sal_emp;
name | pay_by_quarter | schedule
-------+---------------------------+-------------------------------------------
Bill | {10000,10000,10000,10000} | {{meeting,lunch},{training,presentation}}
Carol | {20000,25000,25000,25000} | {{breakfast,consulting},{meeting,lunch}}
</code></pre>
<pre class="lang-sql prettyprint-override"><code>UPDATE sal_emp SET pay_by_quarter[4] = 15000
WHERE name = 'Bill';
</code></pre>
<p>But I didn't find how to slice or index in the <code>.values</code> method; only keyword arguments seem to be supported.</p>
<pre class="lang-py prettyprint-override"><code># sqlalchemy
update(SalEmp).values(pay_by_quarter=???).where(SalEmp.name=='Bill')
# Can't do like this:
update(SalEmp).values(SalEmp.pay_by_quarter[4]=15000).where(SalEmp.name=='Bill')
</code></pre>
|
<python><postgresql><sqlalchemy>
|
2023-06-21 12:01:53
| 1
| 655
|
Mastermind
|
76,523,064
| 19,390,849
|
How can I execute a function for each node in Python for this JSON?
|
<p>I want something exactly like <code>estraverse</code>. I want to run a function for each node of this <a href="https://www.npoint.io/docs/44d6550e8dd7a5e5c09b" rel="nofollow noreferrer">JSON object</a>. I asked another question <a href="https://stackoverflow.com/questions/76520381/how-can-i-traverse-all-nodes-of-a-given-ast-in-python">here</a> but that did not solve my problem.</p>
<p>I want to be able to get each node, as is, and check it as a JSON object.</p>
<p>As an example, I want to see if a node's type is <code>JSXText</code> and it has a non-empty value, then print that node's value:</p>
<pre><code>if node.type == 'JSXText' and node.value.strip() != '':
print (node.value)
</code></pre>
<p>How can I get to that point? I can do this very easily in C#. Yet I can't transfer my knowledge to Python in this case.</p>
|
<python>
|
2023-06-21 12:01:46
| 2
| 1,889
|
Big boy
|
76,522,712
| 18,895,773
|
Pandas operation between two dataframes with slicing
|
<p>I have to compute a column based on an operation between two pandas dataframes. Let me give an example:</p>
<pre><code>df1 = pd.DataFrame({'data': [0.4, 0.112, 0.7]})
df2 = pd.DataFrame({'data': [321, 3, 1, 1, 2, 4, 5, 6, 7, 8, 9, 12, 0.4, 0.112]})
</code></pre>
<p>For the last value in the first dataframe, I would want to do the following computation:</p>
<pre><code>(0.7 - df2.mean()) / df2.std()
</code></pre>
<p>For the value before the last value in the first dataframe, I would like to do the same operation, but ignoring the last value in the second dataframe:</p>
<pre><code>(0.4 - df2[:-1].mean()) / df2[:-1].std()
</code></pre>
<p>same logic for the first value:</p>
<pre><code>(0.112 - df2[:-2].mean()) / df2[:-2].std()
</code></pre>
<p>So, in general, the i-th value in my final result should be calculated as follows:</p>
<pre><code>result[i] = (df1[i] - df2[:(df2.shape[0] - i].mean()) / df2[:(df2.shape[0] - i].std()
</code></pre>
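<p>Spelled out as a plain-Python loop (following the general formula above literally), this is what I want to vectorize:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'data': [0.4, 0.112, 0.7]})
df2 = pd.DataFrame({'data': [321, 3, 1, 1, 2, 4, 5, 6, 7, 8, 9, 12, 0.4, 0.112]})

n = df2.shape[0]
# result[i] = (df1[i] - df2[:n - i].mean()) / df2[:n - i].std()
result = [
    (df1['data'][i] - df2['data'][:n - i].mean()) / df2['data'][:n - i].std()
    for i in range(len(df1))
]
print(result)  # one standardized value per row of df1
```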
<p>How to do this using Pandas without python for loops?</p>
|
<python><pandas>
|
2023-06-21 11:19:01
| 1
| 362
|
Petar Ulev
|
76,522,693
| 2,641,825
|
How to check the validity of the OpenAI key from python?
|
<ul>
<li><p><a href="https://pypi.org/project/openai/" rel="noreferrer">https://pypi.org/project/openai/</a></p>
<blockquote>
<p>"The library needs to be configured with your account's secret key which
is available on the
<a href="https://platform.openai.com/account/api-keys" rel="noreferrer">website</a>. [...] Set it as
the OPENAI_API_KEY environment variable"</p>
</blockquote>
</li>
</ul>
<p>When I ask Chat GPT to complete a message</p>
<pre><code>import openai
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[{"role": "user", "content": "What are the trade-offs around deadwood in forests?"}]
)
print(response)
</code></pre>
<p>I get a <code>RateLimitError: You exceeded your current quota, please check your plan and billing details.</code></p>
<p>Is there a python method to check that the key is valid?</p>
<pre><code>In [35]: openai.api_key
Out[35]: 'sk-...'
</code></pre>
|
<python><openai-api><chatgpt-api>
|
2023-06-21 11:16:50
| 6
| 11,539
|
Paul Rougieux
|
76,522,690
| 13,998,438
|
Merging two dataframes with additional business rules
|
<p>I have two Pandas dataframes, and I would like to match them up by ID and find the differences in their dates if a match can be found. An example of my first dataframe is:</p>
<pre><code>ID Date_A
=========
1 01012023
1 02012023
1 03012023
2 02152023
3 02012023
3 04012023
</code></pre>
<p>My second dataframe looks like:</p>
<pre><code>ID Date_B
=========
1 01112023
1 03112023
3 05012023
</code></pre>
<p>I am trying to perform <code>pd.merge()</code> on the two dataframes to receive a result like the following. Notice that <code>Date_B</code> must be merged so it comes chronologically after the next earliest occurrence of <code>Date_A</code>.</p>
<pre><code>ID Date_A Date_B
==================
1 01012023 01112023
1 02012023 03112023
1 03012023
2 02152023
3 02012023 05012023
3 04012023
</code></pre>
<p>I am trying to perform <code>pd.merge()</code> operations on these two data frames, but instead of including just 3 entries of <code>Date_B</code>, it copies the <code>Date_B</code> occurrences for all records. Each <code>Date_A</code> and <code>Date_B</code> record can only be used once.</p>
<p>Is there a good way to achieve this desired result? I am struggling to find a method to achieve this. Thank you!</p>
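One sketch that reproduces the sample output: number the occurrences of each ID on both sides with `cumcount()` and merge on `(ID, occurrence)`. This pairs dates purely by order within each ID, so each `Date_B` is used once; it matches the example, but it does not itself enforce the "Date_B after Date_A" rule, so treat it as a starting point.

```python
import pandas as pd

df_a = pd.DataFrame({'ID': [1, 1, 1, 2, 3, 3],
                     'Date_A': ['01012023', '02012023', '03012023',
                                '02152023', '02012023', '04012023']})
df_b = pd.DataFrame({'ID': [1, 1, 3],
                     'Date_B': ['01112023', '03112023', '05012023']})

# k-th Date_A of an ID pairs with the k-th Date_B of the same ID
df_a['occ'] = df_a.groupby('ID').cumcount()
df_b['occ'] = df_b.groupby('ID').cumcount()
merged = df_a.merge(df_b, on=['ID', 'occ'], how='left').drop(columns='occ')
```

Unmatched `Date_A` rows get `NaN` in `Date_B`, matching the blanks in the desired table.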
|
<python><python-3.x><pandas><dataframe><merge>
|
2023-06-21 11:16:24
| 1
| 606
|
325
|
76,522,681
| 15,130,043
|
"Operation now in progress" exception when connecting bluetooth client to server with pybluez
|
<p>I took the <a href="https://github.com/pybluez/pybluez/blob/master/examples/simple/rfcomm-server.py" rel="nofollow noreferrer">rfcomm-server.py</a> and <a href="https://github.com/pybluez/pybluez/blob/master/examples/simple/rfcomm-client.py" rel="nofollow noreferrer">rfcomm-client.py</a> from the PyBluez repository on github. The contents of these scripts is the following:</p>
<p>rfcomm-server.py:</p>
<pre><code>import bluetooth
server_sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
server_sock.bind(("", bluetooth.PORT_ANY))
server_sock.listen(1)
port = server_sock.getsockname()[1]
uuid = "94f39d29-7d6d-437d-973b-fba39e49d4ee"
bluetooth.advertise_service(server_sock, "SampleServer", service_id=uuid,
service_classes=[uuid, bluetooth.SERIAL_PORT_CLASS],
profiles=[bluetooth.SERIAL_PORT_PROFILE],
# protocols=[bluetooth.OBEX_UUID]
)
print("Waiting for connection on RFCOMM channel", port)
client_sock, client_info = server_sock.accept()
print("Accepted connection from", client_info)
try:
while True:
data = client_sock.recv(1024)
if not data:
break
print("Received", data)
except OSError:
pass
print("Disconnected.")
client_sock.close()
server_sock.close()
print("All done.")
</code></pre>
<p>rfcomm-client.py:</p>
<pre><code>import sys
import bluetooth
addr = None
if len(sys.argv) < 2:
print("No device specified. Searching all nearby bluetooth devices for "
"the SampleServer service...")
else:
addr = sys.argv[1]
print("Searching for SampleServer on {}...".format(addr))
# search for the SampleServer service
uuid = "94f39d29-7d6d-437d-973b-fba39e49d4ee"
service_matches = bluetooth.find_service(uuid=uuid, address=addr)
if len(service_matches) == 0:
print("Couldn't find the SampleServer service.")
sys.exit(0)
first_match = service_matches[0]
port = first_match["port"]
name = first_match["name"]
host = first_match["host"]
print("Connecting to \"{}\" on {}".format(name, host))
# Create the client socket
sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
sock.connect((host, port))
print("Connected. Type something...")
while True:
data = input()
if not data:
break
sock.send(data)
sock.close()
</code></pre>
<p>I first startup the server script which outputs "Waiting for connection on RFCOMM channel 1" and then block waiting for incoming connections. Then I run the client script with the argument 'localhost'. When I do this the SampleServer is found, but when the script tries to connect it blocks for a few second and then raises "bluetooth.btcommon.BluetoothError: [Errno 115] Operation now in progress". If I then try again I get the exception: "bluetooth.btcommon.BluetoothError: [Errno 16] Device or resource busy". The full output is:</p>
<pre><code>$> ./client.c localhost
Searching for SampleServer on localhost...
Connecting to "SampleServer" on localhost
Traceback (most recent call last):
File "<string>", line 3, in connect
_bluetooth.error: (115, 'Operation now in progress')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/test/./client.py", line 42, in <module>
sock.connect((host, port))
File "<string>", line 5, in connect
bluetooth.btcommon.BluetoothError: [Errno 115] Operation now in progress
$> ./client.c localhost
Searching for SampleServer on localhost...
Connecting to "SampleServer" on localhost
Traceback (most recent call last):
File "<string>", line 3, in connect
_bluetooth.error: (16, 'Device or resource busy')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/test/./client.py", line 42, in <module>
sock.connect((host, port))
File "<string>", line 5, in connect
bluetooth.btcommon.BluetoothError: [Errno 16] Device or resource busy
</code></pre>
<p>One point which may be notable. When I leave out the localhost argument from the client. It won't find the SampleServer, which is odd because in that case it should be scanning all devices including localhost.</p>
<p>I have tried setting the host and port number manually in the connect call of client. I have tried changing the uuid of the server. Both of which didn't make a difference.</p>
|
<python><bluetooth><rfcomm><pybluez>
|
2023-06-21 11:15:01
| 0
| 393
|
S3gfault
|
76,522,675
| 17,487,457
|
How do I customize the xticklabels of my figure?
|
<p>I give the below <code>MWE</code> to describe the kind of customization I would like to apply to my figure.</p>
<p>My current <code>df</code> looks like this:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(
{'feature': ['abc','bcd','abd','dax','cax','bax','def','deg','abe','cde'],
'score': [0.7732,0.8412,0.8626,0.8705,0.8811,0.8851,0.8884,0.8922,0.8934,0.8949]}
)
df
feature score
0 abc 0.7732
1 bcd 0.8412
2 abd 0.8626
3 dax 0.8705
4 cax 0.8811
5 bax 0.8851
6 def 0.8884
7 deg 0.8922
8 abe 0.8934
9 cde 0.8949
</code></pre>
<p>To plot the <code>score</code> (y-axis) against <code>feature</code> I proceed like so:</p>
<pre class="lang-py prettyprint-override"><code>m = df.T
m
0 1 2 3 4 5 6 7 8 9
feature abc bcd abd dax cax bax def deg abe cde
score 0.7732 0.8412 0.8626 0.8705 0.8811 0.8851 0.8884 0.8922 0.8934 0.8949
k_feat = sorted(m.keys())
avg = [m[k]["score"] for k in k_feat]
plt.plot(k_feat, avg, color="blue", marker="o")
</code></pre>
<p>To obtain:</p>
<p><a href="https://i.sstatic.net/C2ZV0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C2ZV0.png" alt="enter image description here" /></a></p>
<p>As it now, the x-axis is labelled with indices of the <code>feature</code>. How do I modify this to get x-axis labelled with the corresponding feature name instead (i.e. <code>0 -> abc, 1 -> bcd, 2 -> abd ....</code>)?</p>
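A sketch of one way, with no transpose needed: pass the feature strings directly as the x-values — matplotlib treats strings as a categorical axis and labels one tick per feature.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame(
    {'feature': ['abc', 'bcd', 'abd', 'dax', 'cax', 'bax', 'def', 'deg', 'abe', 'cde'],
     'score': [0.7732, 0.8412, 0.8626, 0.8705, 0.8811, 0.8851, 0.8884, 0.8922, 0.8934, 0.8949]}
)

fig, ax = plt.subplots()
# strings on the x-axis => one tick per feature, labelled with the feature name
ax.plot(df['feature'], df['score'], color="blue", marker="o")
fig.canvas.draw()  # populate the tick label text
labels = [t.get_text() for t in ax.get_xticklabels()]
```

Keeping the existing integer x-values also works if you relabel the ticks afterwards with `plt.xticks(range(len(df)), df['feature'])`.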
|
<python><pandas><matplotlib><xticks>
|
2023-06-21 11:14:14
| 2
| 305
|
Amina Umar
|
76,522,615
| 3,807,750
|
What is the convention for type annotating trivial lines of code that instantiate a class in Python?
|
<p>As I'm unable to find a better way to phrase the question, I'm not able to find any reasonable discussions on the topic.</p>
<p>Consider the following:</p>
<pre><code>import queue
my_queue: queue.Queue = queue.Queue()
</code></pre>
<p>I have been annotating code as shown above for a long time. However, each time I do so, I feel strange. Consider the alternative:</p>
<pre><code>import queue
my_queue = queue.Queue()
</code></pre>
<p>The latter seems to be superior in terms of readability (at least for me). However, afraid that some tool or the other may be unable to discern the appropriate type for <code>my_queue</code>, I have always written the former.</p>
<p>This can especially be a problem when the type in question is very long (and the code starts to look more akin to Java rather than Python). I am aware of the fact that aliasing a type is possible. Regardless, I feel that the latter is more readable for humans.</p>
<p>Has this topic perhaps been addressed in a PEP? Do any tools (for documentation, linting, analysis, etc.) behave differently in either case?</p>
|
<python><python-typing>
|
2023-06-21 11:07:07
| 2
| 708
|
Shreyas
|
76,522,582
| 177,640
|
How to pass parameters to an endpoint using `add_route()` in FastAPI?
|
<p>I'm developing a simple application with FastAPI.</p>
<p>I need a function to be called as endpoint for a certain route. Everything works just fine with the function's default parameters, but wheels come off the bus as soon as I try to override one of them.</p>
<p>Example. This works just fine:</p>
<pre class="lang-py prettyprint-override"><code>async def my_function(request=Request, clientname='my_client'):
print(request.method)
print(clientname)
## DO OTHER STUFF...
return SOMETHING
private_router.add_route('/api/my/test/route', my_function, ['GET'])
</code></pre>
<p>This returns an error instead:</p>
<pre class="lang-py prettyprint-override"><code>async def my_function(request=Request, clientname='my_client'):
print(request.method)
print(clientname)
## DO OTHER STUFF...
return SOMETHING
private_router.add_route('/api/my/test/route', my_function(clientname='my_other_client'), ['GET'])
</code></pre>
<p>The Error:</p>
<pre class="lang-bash prettyprint-override"><code>INFO: 127.0.0.1:60005 - "GET /api/my/test/route HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
...
...
TypeError: 'coroutine' object is not callable
</code></pre>
<p>The only difference is I'm trying to override the <code>clientname</code> value in <code>my_function</code>.</p>
<p>It is apparent that this isn't the right syntax but I looked everywhere and I'm just appalled that the documentation about the <code>add_route</code> method is nowhere to be found.</p>
<p>Is anyone able to point me to the right way to do this supposedly simple thing?</p>
<p>Thanks!</p>
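The traceback points at the core problem: `my_function(clientname=...)` *calls* the coroutine function and hands `add_route` a coroutine object, whereas `add_route` wants a callable it can invoke per request. A hedged sketch of one fix is to bind the argument with `functools.partial`, which is still a callable (the handler body below is hypothetical):

```python
import asyncio
from functools import partial

async def my_function(request, clientname='my_client'):
    # hypothetical body standing in for the real handler
    return clientname

# partial(...) stays callable, so it can be registered like:
# private_router.add_route('/api/my/test/route',
#                          partial(my_function, clientname='my_other_client'),
#                          ['GET'])
handler = partial(my_function, clientname='my_other_client')
```

Calling `handler(request)` awaits to `'my_other_client'`, i.e. the override is applied without invoking the coroutine at registration time.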
|
<python><fastapi><starlette>
|
2023-06-21 11:03:15
| 1
| 440
|
MariusPontmercy
|
76,522,414
| 5,596,559
|
How to count unique rows in sqlalchemy
|
<p>I already know how to count unique rows using deprecated Query API like that:</p>
<pre><code> self.session.query(table).distinct().count()
</code></pre>
<p>But how to do this using currently supported ORM api? (<a href="https://docs.sqlalchemy.org/en/20/orm" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/orm</a>)</p>
<p><code>select(func.count()).select_from(select(table).distinct())</code> does not work :(</p>
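A sketch of the 2.0-style form, shown against a hypothetical `Item` table: the inner DISTINCT select has to be wrapped in `.subquery()` before `select_from()` will accept it.

```python
from sqlalchemy import Column, Integer, create_engine, func, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    value = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Item(value=1), Item(value=1), Item(value=2)])
    session.commit()
    # SELECT count(*) FROM (SELECT DISTINCT items.value FROM items)
    stmt = select(func.count()).select_from(select(Item.value).distinct().subquery())
    n_unique = session.execute(stmt).scalar_one()
```

With the three rows above, `n_unique` comes back as 2, mirroring `query(...).distinct().count()`.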
|
<python><sqlalchemy><orm>
|
2023-06-21 10:45:09
| 1
| 405
|
Ginko
|
76,522,348
| 8,869,003
|
Read Excel file from URL in python 3.6
|
<p>I'm trying to read an excel-file in python 3.6. Using the code below I managed to get HTTP 200 as status code for the request, could somebody help me to read the contents, too.</p>
<pre><code>import requests
url="https://<myOrg>.sharepoint.com/:x:/s/x-taulukot/Ec0R1y3l7sdGsP92csSO-mgBI8WCN153LfEMvzKMSg1Zzg?e=6NS5Qh"
session_obj = requests.Session()
response = session_obj.get(url, headers={"User-Agent": "Mozilla/5.0"})
print(response.status_code)
</code></pre>
<p>When I go to the url in browser I get en excel-file, thus it should be en excel-file (although I don't get it by curl or wget...)</p>
<p>There's also some instructions in this page:</p>
<p><a href="https://stackoverflow.com/questions/62278538/pd-read-csv-produces-httperror-http-error-403-forbidden">pd.read_csv produces HTTPError: HTTP Error 403: Forbidden</a></p>
<p>Edit:</p>
<p>using the test.py:</p>
<pre><code>import pandas as pd
from urllib.request import Request, urlopen
url = "https://<myOrg>.sharepoint.com/:x:/s/x-taulukot/Ec0R1y3l7sdGsP92csSO-mgBI8WCN153LfEMvzKMSg1Zzg?e=6NS5Qh"
req = Request(url)
req.add_header('User-Agent', 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:77.0) Gecko/20100101 Firefox/77.0')
content = urlopen(req)
df = pd.read_csv(content)
print(df)
</code></pre>
<p>I get:</p>
<pre><code>(venv) > python test.py
Traceback (most recent call last):
File "test.py", line 8, in <module>
df = pd.read_csv(content)
File "/srv/work/miettinj/beta/python/venv/lib/python3.6/site-packages/pandas/io/parsers.py", line 688, in read_csv
return _read(filepath_or_buffer, kwds)
File "/srv/work/miettinj/beta/python/venv/lib/python3.6/site-packages/pandas/io/parsers.py", line 460, in _read
data = parser.read(nrows)
File "/srv/work/miettinj/beta/python/venv/lib/python3.6/site-packages/pandas/io/parsers.py", line 1198, in read
ret = self._engine.read(nrows)
File "/srv/work/miettinj/beta/python/venv/lib/python3.6/site-packages/pandas/io/parsers.py", line 2157, in read
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 847, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 862, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 918, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 905, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 2042, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 10, saw 4
</code></pre>
|
<python><pandas>
|
2023-06-21 10:35:35
| 2
| 310
|
Jaana
|
76,522,268
| 487,873
|
Properly decouple processes
|
<p>I am communicating with a server using asyncio and have timeouts if the server does not respond (in the order of 2secs). Sometimes I want to log this data to file. I find the file IO (say, 6-7MB/s) can cause timeouts in my comms with the server. In general: if I run it on a Windows laptop, I don't have much of an issue. But on my RPi4 (which has slow file IO), it is a constant issue. I tried various things and eventually thought to move the file IO to a completely different process so the 'main' part is unaffected by the file IO work. However, whilst better, it still causes timeouts occasionally.</p>
<p>Below are snippets of my code...what is strange is if I comment out <code>log_file.write(output)</code> (meaning, everything other than the file write is still processed), everything works fine (no timeouts seen). So, somehow, my file IO is still coupled to my 'main' process.</p>
<p>What am I misunderstanding or doing wrong here? Why is the file write still affecting my 'main' process?</p>
<pre class="lang-py prettyprint-override"><code># Setup a pipe and the listener (multiprocessing.Process)
def file_logger_mp(....): # My setup function
conn_rec, conn_send = multiprocessing.Pipe(duplex=False) # I believe duplex=False reduces overhead
listener = LogListenerProcess(conn_rec, log_file_pth, append)
log_hdl = LogHandler(conn_send, listener)
listener.start()
return log_hdl.send # This is called as the `callback_fn()` below
</code></pre>
<pre><code># Within LogHandler() class
class LogHandler():
def __init__(self, conn_send: Connection, listener: LogListenerProcess) -> None:
self.conn_send = conn_send
self.listener = listener
    def send(self, msg):
        self.conn_send.send(msg) # Simply put the data onto the pipe
</code></pre>
<pre class="lang-py prettyprint-override"><code># This writes to file
class LogListenerProcess(multiprocessing.Process):
    def __init__(self, pipe_conn, log_file_path: str, append: bool = True, app_only_to_console: bool = False):
multiprocessing.Process.__init__(self)
self.exit = multiprocessing.Event()
self.pipe_conn = pipe_conn
self.log_file_path = log_file_path
self.buffered_console_stdout = None
self.pipe_data = None
self.append = append
self.app_only_to_console = app_only_to_console
def run(self):
if self.append:
log_file = open(self.log_file_path, "a")
else:
log_file = open(self.log_file_path, "w+")
# Continue until told to exit and while there is data present in the pipe
while not self.exit.is_set() or self.pipe_conn.poll():
try:
self.pipe_data = self.pipe_conn.recv()
if self.pipe_data is None: # Send None to force a break. Currently unused but here in case needed.
print("Closing")
break
for record in self.pipe_data:
if isinstance(record, list):
output = "\n".join(map(str, record))+"\n"
else:
output = str(record)+'\n'
log_file.write(output)
# Without this, seems system will hold the data until program closes
# https://stackoverflow.com/a/9824894
log_file.flush()
self.pipe_data.clear()
except Exception:
# Snip
</code></pre>
<pre><code># Within an asyncio function, call the hdl to push data into the pipe
for callback_fn in self.cbs:
callback_fn(msg) # This calls log_hdl.send()
</code></pre>
|
<python><python-3.x>
|
2023-06-21 10:26:35
| 0
| 1,096
|
SimpleOne
|
76,522,197
| 8,547,163
|
DataFrame to JSON format without column names
|
<p>I have a dataset in the following format below</p>
<pre><code>Parameter Value Col3
A 1.0 2.0
B 1.0 2.5
dx 0.2 1.0
</code></pre>
<p>and I use a python script to save the output in json format given below</p>
<pre><code>#foo.py
import pandas
import json
#..
df.to_json('foo.json', orient='records', lines=True, indent=1)
</code></pre>
<p>Output:</p>
<pre><code>{
"Parameter":"A",
"Value":1.0,
"Col3":2.0,
}
{
"Parameter":"B",
"Value":1.0,
"Col3" : 2.5,
}
{
"Parameter":"dx",
"Value":0.2,
"Col3" : 2.5,
}
</code></pre>
<p>I would prefer the json format to be like</p>
<pre><code>{
"A" : [1.0, 2.0]
"B" : [1.0, 2.5]
"dx" :[0.2, 1.0]
}
</code></pre>
<p>Can someone suggest how to get the desired output format, considering the column names and number of columns vary across my datasets?</p>
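A sketch, assuming the first column always holds the keys and the remaining columns (however many) become the value list: set the first column as the index, transpose, and let `to_dict('list')` collect each column.

```python
import json
import pandas as pd

df = pd.DataFrame({'Parameter': ['A', 'B', 'dx'],
                   'Value': [1.0, 1.0, 0.2],
                   'Col3': [2.0, 2.5, 1.0]})

# first column -> keys; remaining columns -> list of row values per key
mapping = df.set_index(df.columns[0]).T.to_dict('list')

with open('foo.json', 'w') as fh:
    json.dump(mapping, fh, indent=1)
```

Because the index is taken from `df.columns[0]`, the same two lines work for any dataset shaped like this, regardless of how many value columns follow.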
|
<python><json><pandas>
|
2023-06-21 10:16:29
| 1
| 559
|
newstudent
|
76,522,178
| 1,422,096
|
Waiting for device to be ready with timeout=120 gives up after 20 seconds
|
<p>In Python, I need to wait until a device (which takes ~ 90 seconds to boot) is connected.</p>
<p>I tried a timeout of 120 seconds:</p>
<pre><code>import socket
sock = socket.create_connection(("192.168.254.254", 1234), timeout=120)
# port 1234 for the direct API of the device
</code></pre>
<p>but after ~ 20 seconds it fails with:</p>
<blockquote>
<p>Traceback (most recent call last):<br />
File "C:\Python38\lib\urllib\request.py", line 1354, in do_open<br />
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond</p>
</blockquote>
<p>Idem, I tried:</p>
<pre><code>from urllib.request import urlopen
req = urlopen("http://192.168.254.254", timeout=120) # the device also has a web interface
</code></pre>
<p>but after 21 seconds it fails with the same <code>WinError 10060</code>.</p>
<p>The device usually has finished boot after 90 seconds, so a timeout of 120 should be enough.</p>
<p>How to do this?</p>
<p><strong>Is 20 seconds a maximum possible timeout on Windows?</strong></p>
<p>Linked question (but not duplicate, it doesn't give the answer): here 20 seconds seems to be mentioned: <a href="https://stackoverflow.com/questions/12065700/tcp-connection-timeout-is-20-or-21-seconds-on-some-pcs-when-set-to-500ms">TCP connection timeout is 20 or 21 seconds on *some* PCs when set to 500ms</a></p>
<blockquote>
<p>This means that you wind up waiting for the underlying TCP operation to fail. This typically takes 20 seconds.</p>
</blockquote>
<p>Is there a way to remove this TCP timeout limit of 20 seconds?</p>
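The `timeout=` argument only caps a single connection attempt, and on Windows the OS aborts each attempt after its own internal TCP timeout (~21 s), so one attempt can never wait 120 s. A sketch that sidesteps this is to retry attempts in a loop until an overall deadline passes:

```python
import socket
import time

def wait_for_device(host, port, deadline=120.0, retry_delay=2.0, attempt_timeout=5.0):
    """Retry connecting until the device boots or `deadline` seconds elapse."""
    give_up_at = time.monotonic() + deadline
    while True:
        try:
            return socket.create_connection((host, port), timeout=attempt_timeout)
        except OSError:
            if time.monotonic() >= give_up_at:
                raise
            time.sleep(retry_delay)

# sock = wait_for_device("192.168.254.254", 1234)
```

Each failed attempt (refused or timed out) is swallowed and retried, so the effective wait is bounded by `deadline` rather than by the OS-level connect timeout.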
|
<python><windows><sockets><tcp><timeout>
|
2023-06-21 10:13:57
| 1
| 47,388
|
Basj
|
76,522,123
| 480,982
|
Is there a cv.ROTATE_NONE?
|
<p>I know that OpenCV has three rotations:</p>
<pre class="lang-none prettyprint-override"><code>ROTATE_90_CLOCKWISE
ROTATE_180
ROTATE_90_COUNTERCLOCKWISE
</code></pre>
<p>Is there also a no-op rotation like <code>ROTATE_NONE</code>?</p>
<p>I'd prefer using that over having an <code>if</code>-statement.</p>
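OpenCV exposes only the three codes above; there is no built-in no-op code. A hedged sketch of a thin wrapper that treats `None` as "no rotation", so the `if` lives in one place (`np.rot90` stands in for `cv.rotate` here so the snippet is self-contained — with OpenCV you would dispatch on the `cv.ROTATE_*` code instead):

```python
import numpy as np

def rotate_or_none(img, quarter_turns):
    """Apply a rotation, treating None as the identity.

    quarter_turns: None, or a count of 90-degree counterclockwise turns
    (a stand-in for cv.ROTATE_* codes in this sketch).
    """
    if quarter_turns is None:
        return img
    return np.rot90(img, k=quarter_turns)

img = np.arange(6).reshape(2, 3)
same = rotate_or_none(img, None)
turned = rotate_or_none(img, 1)
```

Call sites then stay branch-free: `rotate_or_none(frame, rotation)` works whether or not a rotation was configured.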
|
<python><opencv>
|
2023-06-21 10:06:18
| 1
| 61,121
|
Thomas Weller
|
76,522,082
| 14,282,714
|
Apostrophe in matcher spacy
|
<p>I am trying to detect some words using the <code>matcher</code> of <code>spacy</code>. The problem is that I can't detect words having an apostrophe <code>'</code> using <code>IS_PUNCT: True</code> in the matcher. I would like to detect the word <code>doctor's</code> in the sentence below. Here is some reproducible code:</p>
<pre><code>import spacy
from spacy.matcher import Matcher
nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
pattern = [
[{"LOWER": "doctor"}, {"IS_PUNCT": True}, {"LOWER": "s"}]
]
matcher.add("doctor", pattern)
doc = nlp("She's on her way to the doctor's")
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id]
span = doc[start:end]
print(match_id, string_id, start, end, span.text)
</code></pre>
<p>This won't return any output. I also tried it without <code>"LOWER": "s"</code>, but this also doesn't work. So I was wondering if anyone knows how to deal with apostrophes in matching with spacy?</p>
|
<python><text><spacy><string-matching>
|
2023-06-21 10:02:03
| 1
| 42,724
|
Quinten
|
76,522,049
| 11,357,695
|
Finding non-zero indexes in a multidimensional numpy array
|
<p>I have an numpy array as follows:</p>
<pre><code>np.array([
[1, 0, 0.5, 1, 0],
[0, 0, 0, 0, 0],
[1, 0, 0, 1, 0]
])
</code></pre>
<p>Can anyone suggest the fastest way to obtain a 1-d array of indexes, where for each index there is at least one row of the array with an object at that index with a value > 0? So for the above array, the desired output is (it doesn't matter what the data type is i.e. list vs array vs series etc):</p>
<pre><code>[0, 2, 3]
</code></pre>
<p>Please let me know if the above is unclear. Currently, in my real use case (below) I am starting with a dataframe, converting to series, turning to a numpy array, getting non-zero indexes, getting columns at those indexes. However, I suspect a single operation on the dataframe (following conversion to a numpy array) would be faster than the <code>iterrows()</code> approach:</p>
<pre><code>df1 = pd.DataFrame([
[1, 0, 0.5, 1, 0],
[0, 0, 0, 0, 0],
[1, 0, 0, 1, 0]
],
    columns = ['c1', 'c2', 'c3', 'c4', 'c5'],
index = ['r1', 'r2', 'r3'])
set1 = set()
#This is what I would like to replace by going np.some_np_function(df1.to_numpy())
for protein, row in df1.iterrows():
non_zero_indexes = list(row.to_numpy().nonzero()[0])
columns_with_positive = [p for i,p in enumerate(row.index) if i in non_zero_indexes]
set1.update(columns_with_positive)
</code></pre>
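A single-pass sketch of the replacement: reduce over rows with `any(axis=0)` to get a boolean mask per column, then keep the column indices (via `np.flatnonzero`) or the column names (via boolean indexing) where the mask is True.

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame([[1, 0, 0.5, 1, 0],
                    [0, 0, 0, 0, 0],
                    [1, 0, 0, 1, 0]],
                   columns=['c1', 'c2', 'c3', 'c4', 'c5'],
                   index=['r1', 'r2', 'r3'])

mask = (df1.to_numpy() > 0).any(axis=0)   # True where any row is positive
indices = np.flatnonzero(mask)            # -> [0, 2, 3]
set1 = set(df1.columns[mask])             # -> {'c1', 'c3', 'c4'}
```

This replaces the whole `iterrows()` loop with two vectorized operations, and the same `mask` serves for both the index and column-name forms of the answer.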
<p>Thanks,
Tim</p>
|
<python><pandas><dataframe><numpy><numpy-ndarray>
|
2023-06-21 09:57:32
| 1
| 756
|
Tim Kirkwood
|
76,521,989
| 11,079,448
|
Why itertools combinations returns empty list?
|
<p>From Python terminal I am trying to get all the tuples for my string.</p>
<pre><code>>>> from itertools import combinations
>>> TUPLES = list(combinations(['picaacd'], 2))
>>> TUPLES
[]
</code></pre>
<p>Why is <code>TUPLES</code> an empty list?</p>
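Because `['picaacd']` is a *one-element list*, and a 1-element sequence has no 2-element combinations. To pair up the characters, iterate over the string itself:

```python
from itertools import combinations

# one-element list: nothing to pair up, so the result is empty
assert list(combinations(['picaacd'], 2)) == []

# the string itself is a sequence of 7 characters -> C(7, 2) = 21 pairs
pairs = list(combinations('picaacd', 2))
```

`combinations` treats its first argument as an iterable of items; a list wraps the string into a single item, while the bare string yields its characters.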
|
<python>
|
2023-06-21 09:49:31
| 1
| 727
|
mishav
|
76,521,902
| 1,171,783
|
Loading LD_PRELOAD symbols with CDLL(None) in Python 3.6
|
<p>I use <code>ctypes.CDLL(None)</code> to get access to the symbols of the current executable, which works well for Python 2.7.5 and 3.11.2 but not with Python 3.6.8. I can't figure out why there is a difference in behavior. Example:</p>
<p>The shared object:</p>
<pre class="lang-c prettyprint-override"><code>#include <stdio.h>
void foo()
{
printf("foo called\n");
}
</code></pre>
<p>Compiled with <code>gcc -fPIC -shared foo.c -o libfoo.so</code>.</p>
<p>The Python program:</p>
<pre class="lang-py prettyprint-override"><code>import ctypes
lib = ctypes.CDLL(None)
foo = lib["foo"]
foo()
</code></pre>
<p>Then I run the test program:</p>
<pre><code>$ LD_PRELOAD=./libfoo.so python2 test.py
foo called
$ LD_PRELOAD=./libfoo.so python3.11 test.py
foo called
$ LD_PRELOAD=./libfoo.so python3.6 test.py
Traceback (most recent call last):
File "test.py", line 4, in <module>
foo = lib["foo"]
File "/usr/lib64/python3.6/ctypes/__init__.py", line 361, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: python3: undefined symbol: foo
</code></pre>
<p>How can I have the behavior of Python 2.7 and 3.11 with 3.6?</p>
|
<python><python-3.x>
|
2023-06-21 09:39:06
| 0
| 2,258
|
Julien
|
76,521,868
| 2,876,079
|
How to "step into" function while debugging with multi line function call in PyCharm, without stepping through the individual argument lines?
|
<p>I have a function call where the arguments are spread over several lines:</p>
<p><a href="https://i.sstatic.net/a2Cff.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a2Cff.png" alt="enter image description here" /></a></p>
<p>When I am on line 133 and select "step into" during debugging, I would expect that the debugger steps into the function <code>_include_default_entries</code> and I get to the corresponding source code file. However, it does not. It steps to the next line in the current file. I need to click "step into" several times until I actually step into the function body of the called function and navigate to the expected source code file.</p>
<p>=> Is this a PyCharm bug? Or is there some (hidden) configuration I could adapt to skip the arguments and immediately step into the function?</p>
<p>The settings I found so far do not seem to be related to this use case:</p>
<p><a href="https://i.sstatic.net/8aBQX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8aBQX.png" alt="enter image description here" /></a></p>
<p><a href="https://www.jetbrains.com/help/pycharm/settings-debugger-stepping.html#f0b05e75" rel="nofollow noreferrer">https://www.jetbrains.com/help/pycharm/settings-debugger-stepping.html#f0b05e75</a></p>
|
<python><debugging><pycharm>
|
2023-06-21 09:35:43
| 1
| 12,756
|
Stefan
|
76,521,815
| 1,503,683
|
Does Flask + Gunicorn still use Werkzeug?
|
<p>When using Flask + Gunicorn, is Werkzeug still used?</p>
<p>Let's say I have a simple Flask app:</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask
from flask.cli import FlaskGroup
def create_app():
app = Flask(__name__)
# Other inits here: DB, config etc.
return app
cli = FlaskGroup(
add_default_commands=True,
create_app=create_app
)
if __name__ == '__main__':
cli()
</code></pre>
<p>Then I can run my app via:</p>
<pre><code>$ python3 app.py run
* Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://127.0.0.1:5000
Press CTRL+C to quit
</code></pre>
<p>From what I understand, here Werkzeug is started as a WSGI server in front of my Flask app, as the <a href="https://github.com/pallets/flask/blob/main/src/flask/cli.py#L824" rel="nofollow noreferrer"><code>run</code></a> command call Flask's <a href="https://github.com/pallets/flask/blob/main/src/flask/cli.py#L877" rel="nofollow noreferrer"><code>run_command</code></a> which call Werkzeug <a href="https://github.com/pallets/flask/blob/main/src/flask/cli.py#L923-L933" rel="nofollow noreferrer"><code>run_simple</code></a>.</p>
<p>Now, as stated in both <a href="https://flask.palletsprojects.com/en/2.3.x/cli/#run-the-development-server" rel="nofollow noreferrer">Flask documentation</a> & <a href="https://werkzeug.palletsprojects.com/en/2.3.x/deployment/" rel="nofollow noreferrer">Werkzeug documention</a>:</p>
<blockquote>
<p>Do not use this command to run your application in production. Only use the development server during development. The development server is provided for convenience, but is not designed to be particularly secure, stable, or efficient. See Deploying to Production for how to run in production.</p>
</blockquote>
<p>I guess we are talking about Werkzeug here. So next I want to setup Gunicorn in front of my app (to keep this question as simple as possible I omit all possible gunicorn options here):</p>
<pre><code>$ pip install gunicorn
$ gunicorn --bind 0.0.0.0:5000 app:cli
[2023-06-21 11:25:05 +0200] [79911] [INFO] Starting gunicorn 20.1.0
[2023-06-21 11:25:05 +0200] [79911] [INFO] Listening at: http://0.0.0.0:5000 (79911)
[2023-06-21 11:25:05 +0200] [79911] [INFO] Using worker: sync
[2023-06-21 11:25:05 +0200] [79959] [INFO] Booting worker with pid: 79959
</code></pre>
<p>Good, my app start. But Gunicorn still uses <code>app:cli</code> as an entrypoint, which call the <code>run</code>/<code>run_command</code>/<code>run_simple</code> described above.</p>
<p><strong>So here is my question</strong>: is Werkzeug still used between Gunicorn and my Flask app? If so it means 2 WSGI servers are "chaining" requests to my Flask app. Is there a way to get rid of it? Or is it the way it is intended to be used?</p>
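A hedged sketch of the usual layout: point Gunicorn at the Flask application object itself (a WSGI callable) rather than at the Click `cli` group, so the `run`/`run_simple` path — and with it Werkzeug's development server — is never entered:

```python
# wsgi.py -- run with:  gunicorn --bind 0.0.0.0:5000 "wsgi:app"
from flask import Flask

def create_app():
    app = Flask(__name__)
    # Other inits here: DB, config etc.
    return app

# Gunicorn imports this module and serves `app` directly; no run_simple(),
# so Werkzeug's dev server is not in the request path. (Werkzeug is still
# used as a library for request/response objects, routing, etc.)
app = create_app()
```

This mirrors the "Deploying to Production" pattern in the Flask docs: exactly one WSGI server (Gunicorn) in front of the app.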
|
<python><flask><gunicorn><werkzeug>
|
2023-06-21 09:29:22
| 1
| 2,802
|
Pierre
|
76,521,744
| 8,964,393
|
How to read in xml file in pandas - Error: https://www.un.org/securitycouncil/sites/www.un.org.securitycouncil/files/consolidated.xml
|
<p>I need to read in the following xml file:</p>
<p><a href="https://www.un.org/securitycouncil/sites/www.un.org.securitycouncil/files/consolidated.xml" rel="nofollow noreferrer">https://www.un.org/securitycouncil/sites/www.un.org.securitycouncil/files/consolidated.xml</a></p>
<p>I have tried this code:</p>
<pre><code>import requests
from lxml import objectify
url = requests.get("https://www.un.org/securitycouncil/sites/www.un.org.securitycouncil/files/consolidated.xml")
parsed = objectify.parse((url))
</code></pre>
<p>When I run it, I get this error:</p>
<p>TypeError: cannot parse from 'Response'</p>
<p>I don't understand why.</p>
<p>Can someone help me please?</p>
|
<python><xml><lxml>
|
2023-06-21 09:19:31
| 2
| 1,762
|
Giampaolo Levorato
|
76,521,517
| 1,581,090
|
How to fix "Error Accessing String" error with libusb on windows?
|
<p>On Windows 10 I am using python 3.10.11 and libusb-package 1.0.26.2 to list all USB devices connected to the computer.</p>
<pre><code>import libusb_package
for dev in libusb_package.find(find_all=True):
print(dev)
</code></pre>
<p>But for many devices I get an output like this</p>
<pre><code>DEVICE ID 0403:6001 on Bus 002 Address 029 =================
bLength : 0x12 (18 bytes)
bDescriptorType : 0x1 Device
bcdUSB : 0x200 USB 2.0
bDeviceClass : 0x0 Specified at interface
bDeviceSubClass : 0x0
bDeviceProtocol : 0x0
bMaxPacketSize0 : 0x8 (8 bytes)
idVendor : 0x0403
idProduct : 0x6001
bcdDevice : 0x600 Device 6.0
iManufacturer : 0x1 Error Accessing String
iProduct : 0x2 Error Accessing String
iSerialNumber : 0x3 Error Accessing String
</code></pre>
<p>where it says "<strong>Error Accessing String</strong>". Is there a way to solve this problem so I can get more information?</p>
<p>P.S. Running the code in an admin terminal does not fix this problem...</p>
|
<python><windows><usb>
|
2023-06-21 08:51:14
| 0
| 45,023
|
Alex
|
76,521,485
| 6,468,053
|
ContractsFinder API
|
<p>I'm trying to get downloads from the UK government Contracts Finder API.</p>
<p>This is the python code I have:</p>
<pre><code>import requests

base_url = "https://www.contractsfinder.service.gov.uk/api/rest/2/search_notices"
keyword = "analysis"
params = {
    "searchCriteria": {
        "types": [
            "Contract"
        ],
        "keyword": "Analysis",
        "queryString": None,
        "regions": "Wales,South East",
        "publishedFrom": "01/06/2023",
        "publishedTo": None,
        #"deadlineFrom": null,
        #"deadlineTo": null,
        #"approachMarketFrom": null,
        #"approachMarketTo": null,
        #"awardedFrom": null,
        #"awardedTo": null,
        #"isSubcontract": null,
        "suitableForSme": True,
        #"suitableForVco": false,
        #"awardedToSme": true,
        #"awardedToVcse": false,
        #"cpvCodes": null
    },
    "size": 1000
}

response = requests.get(base_url, params=params)
if response.status_code == 200:
    data = response.json()
    # Process the retrieved data as needed
    print(data)
else:
    print("Request failed with status code:", response.status_code)
</code></pre>
<p>But it returns a 404 error. Has anyone managed to get contracts data using the API? Or is there something wrong with my API call? Not my area of expertise, so I might have done something very dumb!</p>
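<p>One likely cause of the 404: <code>requests.get(..., params=...)</code> flattens the nested <code>searchCriteria</code> dict into an unusable query string, while the search endpoint appears to expect the criteria as a JSON body in a POST request. A sketch that builds (without sending) such a request, so the payload can be inspected:</p>

```python
import json
import requests

base_url = "https://www.contractsfinder.service.gov.uk/api/rest/2/search_notices"
params = {
    "searchCriteria": {"types": ["Contract"], "keyword": "Analysis"},
    "size": 100,
}

# params= would encode searchCriteria as str(dict) in the URL;
# json= sends the criteria as a JSON request body instead
prepared = requests.Request("POST", base_url, json=params).prepare()
print(prepared.headers["Content-Type"])  # application/json
# Send it for real with: requests.Session().send(prepared)
```

<p>Whether POST with a JSON body is what this particular endpoint wants is an assumption worth checking against the ContractsFinder API documentation.</p>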
|
<python><rest>
|
2023-06-21 08:46:41
| 1
| 1,528
|
A Rob4
|
76,521,465
| 567,059
|
Coverage report for 'pytest' different when run in code vs CLI
|
<p>When I use the <code>pytest</code> CLI to run some tests against my small module, the coverage report that is output is complete and shows 100% coverage.</p>
<p>However, when I run the tests in code using the <code>pytest</code> module, the coverage report is incomplete (missing the <strong>__init__.py</strong> file) and shows some lines as missing. But this cannot be the case, as all of the tests pass.</p>
<p>Looking at the file <strong>colours.py</strong>, the lines listed as missing are all of the <code>import</code> statements, <code>class</code> definitions, class parameters, method definitions and decorators.</p>
<p>Note that I have compared the verbose output for both runs, and other than the difference in coverage and the odd millisecond of execution time, they are identical.</p>
<p>What could be causing this inconsistent behaviour?</p>
<h3>Tests run via the CLI</h3>
<pre><code>pytest ./az/tests/unit --cov src --cov-report term-missing
</code></pre>
<pre><code>================================ test session starts ================================
platform linux -- Python 3.10.6, pytest-7.3.1, pluggy-1.0.0
rootdir: /home/dgard/repos/management-groups/az/tests
configfile: pytest.ini
plugins: cov-4.1.0, dependency-0.5.1
collected 31 items
az/tests/unit/colours_test.py ............................... [100%]
31 assertions pased.
---------- coverage: platform linux, python 3.10.6-final-0 -----------
Name Stmts Miss Cover Missing
--------------------------------------------------
az/src/__init__.py 1 0 100%
az/src/colours.py 35 0 100%
--------------------------------------------------
TOTAL 36 0 100%
================================ 31 passed in 0.10s =================================
</code></pre>
<h3>Tests run via code</h3>
<pre class="lang-py prettyprint-override"><code>import pytest
pytest_args = ['./az/tests/unit', '--cov', 'src', '--cov-report', 'term-missing']
pytest.main(pytest_args)
</code></pre>
<pre><code>================================ test session starts ================================
platform linux -- Python 3.10.6, pytest-7.3.1, pluggy-1.0.0
rootdir: /home/dgard/repos/management-groups/az/tests
configfile: pytest.ini
plugins: cov-4.1.0, dependency-0.5.1
collected 31 items
az/tests/unit/colours_test.py ............................... [100%]
31 assertions pased.
---------- coverage: platform linux, python 3.10.6-final-0 -----------
Name Stmts Miss Cover Missing
-------------------------------------------------
az/src/colours.py 35 15 57% 1-18, 42-43
-------------------------------------------------
TOTAL 35 15 57%
================================ 31 passed in 0.09s =================================
</code></pre>
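<p>A common cause of this difference (assuming nothing else varies between the runs) is that by the time <code>pytest.main()</code> starts, the current interpreter has already imported parts of <code>src</code>, so coverage never sees the import-time statements execute; the CLI run starts coverage in a fresh interpreter before anything is imported. The sketch below demonstrates the module-caching effect with a hypothetical throwaway module; in practice, the usual fix is to launch pytest in a subprocess, e.g. <code>subprocess.run([sys.executable, "-m", "pytest", *pytest_args])</code>.</p>

```python
import importlib.util
import os
import sys
import tempfile
import textwrap

# Hypothetical stand-in for az/src/colours.py
src = textwrap.dedent("""
    GREEN = "green"  # import-time line: only traced if coverage is active at import
""")

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "colours_demo.py")
    with open(path, "w") as fh:
        fh.write(src)

    spec = importlib.util.spec_from_file_location("colours_demo", path)
    module = importlib.util.module_from_spec(spec)
    sys.modules["colours_demo"] = module
    spec.loader.exec_module(module)  # top-level code runs exactly once

    # Any later import returns the cached module: if coverage starts after this
    # point, lines like `GREEN = ...` are reported as missed even though tests pass.
    assert sys.modules["colours_demo"] is module
```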
|
<python><pytest>
|
2023-06-21 08:42:37
| 1
| 12,277
|
David Gard
|
76,521,370
| 1,014,217
|
Weaviate combine with Azure Cognitive Search
|
<p>This is my scenario:</p>
<ol>
<li>The client has an Azure SQL database with a profiles table with demographic information.</li>
<li>We created an Azure Cognitive Search instance and indexed that database, concatenating all fields into a single field called <code>content</code>, because according to the documentation everything needs to be in one field:
<a href="https://python.langchain.com/docs/modules/data_connection/retrievers/integrations/azure_cognitive_search" rel="nofollow noreferrer">https://python.langchain.com/docs/modules/data_connection/retrievers/integrations/azure_cognitive_search</a></li>
</ol>
<p>Now we are creating a chatbot with LangChain where we can ask questions like:
"Who is John Smith?", "How old is Jane Smith?", "Who likes gardening?"</p>
<p>The way I found is here:
<a href="https://shweta-lodha.medium.com/integrating-azure-cognitive-search-with-azure-openai-and-langchain-51280d1026f2" rel="nofollow noreferrer">https://shweta-lodha.medium.com/integrating-azure-cognitive-search-with-azure-openai-and-langchain-51280d1026f2</a></p>
<p>Basically, Cognitive Search is queried first and some documents are returned; those documents are then saved as vectors in ChromaDB, and ChromaDB is queried, with the results returned in plain English via LangChain and OpenAI.</p>
<p>However, ChromaDB is very slow: this step takes about 50 seconds.</p>
<p>So I wanted to try Weaviate instead, but then I get very strange errors like:</p>
<pre><code>[ERROR] Batch ConnectionError Exception occurred! Retrying in 2s. [1/3]
{'error': [{'message': "'@search.score' is not a valid property name. Property names in Weaviate are restricted to valid GraphQL names, which must be “/[_A-Za-z][_0-9A-Za-z]*/”., no such prop with name '@search.score' found in class 'LangChain_df32d6b6d10c4bb895db75f88aaabd75' in the schema. Check your schema files for which properties in this class are available"}]}
</code></pre>
<p>My code is as this:</p>
<pre><code>@timer
def from_documentsWeaviate(docs, embeddings):
    return Weaviate.from_documents(docs, embeddings, weaviate_url=WEAVIATE_URL, by_text=False)


memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
embeddings = OpenAIEmbeddings(deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME, model=OPENAI_EMBEDDING_MODEL_NAME, chunk_size=1)
user_input = get_text()
retriever = AzureCognitiveSearchRetriever(content_key="content")
llm = AzureChatOpenAI(
    openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
    openai_api_version=OPENAI_API_VERSION,
    deployment_name=OPENAI_DEPLOYMENT_NAME,
    openai_api_key=OPENAI_API_KEY,
    openai_api_type=OPENAI_API_TYPE,
    model_name=OPENAI_MODEL_NAME,
    temperature=0)
docs = get_relevant_documents(retriever, user_input)
#vectorstore = from_documentsChromaDb(docs=docs, embedding=embeddings)
vectorstore = from_documentsWeaviate(docs, embeddings)
</code></pre>
<p>I wonder if I should instead index all rows from the table directly and skip the Cognitive Search part?</p>
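<p>Regarding the Weaviate error: the traceback suggests the Azure retriever attaches metadata keys such as <code>@search.score</code> to each document, and Weaviate rejects any property name that is not a valid GraphQL identifier. A sketch (the helper name is mine, not part of LangChain) that drops offending keys before calling <code>Weaviate.from_documents</code>:</p>

```python
import re

# Weaviate property names must match the GraphQL name grammar /[_A-Za-z][_0-9A-Za-z]*/
GRAPHQL_NAME = re.compile(r"^[_A-Za-z][_0-9A-Za-z]*$")

def sanitize_metadata(metadata):
    """Drop metadata keys (e.g. '@search.score') that Weaviate cannot store."""
    return {k: v for k, v in metadata.items() if GRAPHQL_NAME.match(k)}

# Before from_documentsWeaviate(docs, embeddings), do something like:
# for doc in docs:
#     doc.metadata = sanitize_metadata(doc.metadata)
print(sanitize_metadata({"@search.score": 0.87, "title": "profile"}))  # {'title': 'profile'}
```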
|
<python><azure><azure-cognitive-search><langchain><weaviate>
|
2023-06-21 08:27:42
| 1
| 34,314
|
Luis Valencia
|
76,521,350
| 4,187,360
|
SSLV3_ALERT_HANDSHAKE_FAILURE happens randomly, when trying to pull data from different APIs (Shopify, Klaviyo)
|
<p>Over the last few days we have noticed API errors from 2 different APIs, Shopify and Klaviyo,</p>
<p>For Shopify, the weird thing is that it happens only when we pull <code>Product</code> and <code>Product</code> related information, <strong>SOMETIMES</strong>, and some other times it just works.</p>
<p>For Klaviyo, it can happen on any entity, but similar issue is here: It may happen <strong>SOMETIMES</strong>, and usually it works.</p>
<p>We have noticed that it fails at some point when we pull data for some time (e.g. when we pull a list of products from Shopify or a list of members from Klaviyo).</p>
<p>We are using python 3.10 for both cases. For Shopify, we use the <a href="https://github.com/Shopify/shopify_python_api" rel="nofollow noreferrer">official python client</a> to access the API, and for Klaviyo we are using custom implementation with tornado async HTTP agent.</p>
<p>Example error from Shopify:</p>
<p>This one occurs when we pull <code>CustomCollection</code> entity by id</p>
<pre><code>...
File "/home/airflow/.local/.virtualenvs/agents310/lib/python3.10/site-packages/shopify/base.py", line 196, in find
collection = super(ShopifyResource, cls).find(id_=id_, from_=from_, **kwargs)
File "/home/airflow/.local/.virtualenvs/agents310/lib/python3.10/site-packages/pyactiveresource/activeresource.py", line 386, in find
return cls._find_every(from_=from_, **kwargs)
File "/home/airflow/.local/.virtualenvs/agents310/lib/python3.10/site-packages/pyactiveresource/activeresource.py", line 525, in _find_every
response = cls.connection.get(path, cls.headers)
File "/home/airflow/.local/.virtualenvs/agents310/lib/python3.10/site-packages/pyactiveresource/connection.py", line 329, in get
return self._open('GET', path, headers=headers)
File "/home/airflow/.local/.virtualenvs/agents310/lib/python3.10/site-packages/shopify/base.py", line 23, in _open
self.response = super(ShopifyConnection, self)._open(*args, **kwargs)
File "/home/airflow/.local/.virtualenvs/agents310/lib/python3.10/site-packages/pyactiveresource/connection.py", line 290, in _open
raise Error(err, url)
pyactiveresource.connection.Error: <urlopen error [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:997)>
</code></pre>
<p>We get similar errors when pulling <code>Product</code> or <code>InventoryItem</code>, for example, but usually not when pulling <code>Customer</code> or <code>Order</code> data.</p>
<p>Example error from Klaviyo:</p>
<pre><code> File "/home/airflow/.local/.virtualenvs/agents310/lib/python3.10/site-packages/tornado/gen.py", line 767, in run
value = future.result()
File "/home/airflow/.local/.virtualenvs/agents310/lib/python3.10/site-packages/tornado/simple_httpclient.py", line 340, in run
stream = await self.tcp_client.connect(
File "/home/airflow/.local/.virtualenvs/agents310/lib/python3.10/site-packages/tornado/tcpclient.py", line 292, in connect
stream = await stream.start_tls(
File "/home/airflow/.local/.virtualenvs/agents310/lib/python3.10/site-packages/tornado/iostream.py", line 1367, in _do_ssl_handshake
self.socket.do_handshake()
File "/usr/local/lib/python3.10/ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:997)
</code></pre>
<p>These errors started to occur very recently. We have not changed anything in the procedure that pulls the data, nor in its software.</p>
<p>Some googling indicates that there is some mismatch during the TLS handshake because of unsupported ciphers in either the server or the client.</p>
<p>But I am not sure that this is the issue here, because:</p>
<ul>
<li>This seems to happen randomly, and mostly when pulling Product and Product-related data (for Shopify)</li>
<li>We can pull Customer and Order data in most cases, if not always, using similar procedure and software (for Shopify)</li>
<li>Same for Klaviyo: we start pulling some data, then at some point this SSL error occurs</li>
<li>I have tried some of the failed requests using cURL, and they succeed, so it doesn't seem to be a problem with a particular endpoint, but something different</li>
<li>I have checked the TLS versions that are supported by our OpenSSL and they seem legitimate (<code>TLSv1.3</code> is supported)</li>
</ul>
<p>So if this was indeed a cipher issue, I'd expect that it would not work at all. But here it works most of the time.</p>
<p>From within the server that we run that:</p>
<pre><code>$ openssl ciphers -v | awk '{print $2}' | sort | uniq
SSLv3
TLSv1
TLSv1.2
TLSv1.3
</code></pre>
<p>and from the python instance:</p>
<pre><code>>>> import ssl
>>> print(ssl.OPENSSL_VERSION)
OpenSSL 1.1.1n 15 Mar 2022
</code></pre>
<p>Note that the same thing occurs for different Shopify stores (from different clients of ours). Same for Klaviyo, occurs for a different number of clients (hence different accounts).</p>
<p>Do you have any ideas / suggestion why this may happen?</p>
<p>Could it be something else, irrelevant from the TLS handshake? E.g. some server issue or API limits (I know they should return a 429 status, but you never know with custom implementations today)?</p>
<p>Any tips on how to troubleshoot this?</p>
<p>Many Thanks!</p>
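<p>One stopgap while diagnosing: since the failures are transient, a small retry wrapper often keeps the pipeline running. This is a sketch only; note that the Shopify client may wrap the underlying <code>SSLError</code> in <code>pyactiveresource.connection.Error</code>, so the <code>except</code> clause may need widening for that case.</p>

```python
import ssl
import time

def with_retries(fn, attempts=3, delay=2.0):
    """Call fn(), retrying on transient SSL errors with a fixed delay."""
    for attempt in range(attempts):
        try:
            return fn()
        except ssl.SSLError:
            if attempt == attempts - 1:
                raise  # retries exhausted: re-raise the last error
            time.sleep(delay)

# usage, e.g.: products = with_retries(lambda: shopify.Product.find())
```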
|
<python><ssl><shopify><shopify-api><klaviyo>
|
2023-06-21 08:25:31
| 0
| 1,960
|
babis21
|
76,521,336
| 14,282,714
|
Matching Spacy with double punctuation
|
<p>I am using <code>spacy</code> with <code>Matcher</code> to detect some words. When I want to find a word with a single punctuation mark like <code>-</code>, it works:</p>
<pre><code>import spacy
from spacy.matcher import Matcher
nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
#
pattern = [{"LOWER": "nice"}, {"IS_PUNCT": True}, {"LOWER": "word"}]
matcher.add("nice-word", [pattern])
doc = nlp("This is a nice-word also? Why is this a nice word")
matches = matcher(doc)
for match_id, start, end in matches:
    string_id = nlp.vocab.strings[match_id]
    span = doc[start:end]
    print(match_id, string_id, start, end, span.text)
</code></pre>
<p>Output:</p>
<pre><code>1899655961849619838 nice-word 3 6 nice-word
</code></pre>
<p>This works great! But imagine we have a word with a double <code>-</code>; I can't get it to work. I would like to find a word such as <code>nice-word-also</code>. Here is some reproducible code:</p>
<pre><code>import spacy
from spacy.matcher import Matcher
nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
#
pattern = [{"LOWER": "nice"}, {"IS_PUNCT": True}, {"LOWER": "word"}, {"LOWER": "also"}]
matcher.add("nice-word-also", [pattern])
doc = nlp("This is a nice-word-also? Why is this a nice word")
matches = matcher(doc)
for match_id, start, end in matches:
    string_id = nlp.vocab.strings[match_id]
    span = doc[start:end]
    print(match_id, string_id, start, end, span.text)
</code></pre>
<p>This doesn't return anything. So I was wondering if anyone knows how to use spaCy's <code>Matcher</code> to detect words with double punctuation like the example above?</p>
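<p>The likely cause is that spaCy tokenizes <code>nice-word-also</code> into five tokens (<code>nice</code>, <code>-</code>, <code>word</code>, <code>-</code>, <code>also</code>), so the pattern needs a second <code>{"IS_PUNCT": True}</code> entry between <code>word</code> and <code>also</code>. A small helper (my own, not part of spaCy) that builds such a pattern for any number of hyphens:</p>

```python
def hyphen_pattern(word):
    """Build a spaCy Matcher pattern for a hyphenated word,
    inserting an IS_PUNCT token between every pair of parts."""
    parts = word.lower().split("-")
    pattern = []
    for i, part in enumerate(parts):
        pattern.append({"LOWER": part})
        if i < len(parts) - 1:
            pattern.append({"IS_PUNCT": True})
    return pattern

print(hyphen_pattern("nice-word-also"))
# [{'LOWER': 'nice'}, {'IS_PUNCT': True}, {'LOWER': 'word'},
#  {'IS_PUNCT': True}, {'LOWER': 'also'}]
```

<p>Then <code>matcher.add("nice-word-also", [hyphen_pattern("nice-word-also")])</code> should match the example sentence.</p>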
|
<python><python-3.x><text><spacy><string-matching>
|
2023-06-21 08:23:45
| 1
| 42,724
|
Quinten
|
76,521,326
| 12,172,389
|
Hugging Face Transformers - trust_remote_code not working
|
<p>I am currently working on a notebook to run falcon-7b-instruct myself. I am using a notebook in Azure Machine Learning Studio for that. The code I use for running Falcon is from Hugging Face.</p>
<pre class="lang-py prettyprint-override"><code>from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
</code></pre>
<p>However, when I run the code, even though <code>trust_remote_code</code> is set to True, I get the following error.</p>
<blockquote>
<p>ValueError: Loading tiiuae/falcon-7b-instruct requires you to execute
the configuration file in that repo on your local machine. Make sure
you have read the code there to avoid malicious use, then set the
option `trust_remote_code=True` to remove this error.</p>
</blockquote>
<p>Is there perhaps an environment variable I have to set in order for remote code execution to work? I wasn't able to find anything about this error in the transformers documentation.</p>
|
<python><huggingface-transformers><azure-machine-learning-service><huggingface><falcon>
|
2023-06-21 08:22:40
| 0
| 303
|
J Heschl
|
76,521,287
| 2,932,907
|
How to filter out weird metadata-like characters from Excel cell
|
<p>EDIT: please see my own answer for the solution I have currently implemented. I'm still open to 'better' or best-practice solutions, if any.</p>
<p>I wrote an application that takes an .xlsx file, reads its content with pandas and uses that data to create objects and push them to the RIPE NCC DB for our administration. These .xlsx files are filled out by our customers. It occasionally happens that the file comes with weird metadata-like characters that are not seen by my application. It seems like data that was copied from another data source, like a PDF file.</p>
<p>For example, a customer has to fill out his phone number, e.g. +31 6 12345678. The application does some validation on it, and if it passes, the value is used further in the script. In this case the application errors when it tries to push the value to the RIPE NCC database. I get back the following error: <code>Severity: Error --> Syntax error in +31 6 12345678?</code> Notice that a question mark '?' has been appended to the phone number.</p>
<p>When copied from the Excel cell to Notepad++, the string 'PDF' is added to the phone number:
<a href="https://i.sstatic.net/XyVbV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XyVbV.png" alt="enter image description here" /></a></p>
<p>I tried the following code, but nothing's changed:</p>
<pre><code>import operator as op

allowed_chars = '+0123456789 '
s = ''
phone_number = str(self.pd_keymapping[key][0])
print('PRE validation phone:', phone_number)
for i in phone_number:
    if op.countOf(allowed_chars, i) > 0:
        s += i
phone_number = s
print('POST validation phone:', phone_number)
</code></pre>
<p>Output:</p>
<pre><code>PRE validation phone: +31 6 12345678
POST validation phone: +31 6 12345678
</code></pre>
<p>I can simply fix this by filling out the phone number by hand and rerun the script. But I can imagine that this problem will definitely raise questions when this script is used by my colleagues. I'm therefore looking for a solution that can be handled by the script itself.</p>
<p>What exactly is happening here and how do I filter out these weird characters when reading the Excel document?</p>
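<p>A possible explanation: text pasted from PDFs often carries invisible Unicode <em>format</em> characters (category <code>Cf</code>, e.g. directional marks such as U+202C) that look identical to the clean string on screen. One robust approach, sketched here, is to strip all control/format characters with <code>unicodedata</code> before validating:</p>

```python
import unicodedata

def strip_invisible(text):
    """Remove Unicode control (Cc) and format (Cf) characters,
    e.g. directional marks that ride along when copying from a PDF."""
    return "".join(ch for ch in text if unicodedata.category(ch) not in ("Cc", "Cf"))

dirty = "+31 6 12345678\u202c"  # trailing U+202C POP DIRECTIONAL FORMATTING, invisible
print(repr(strip_invisible(dirty)))  # '+31 6 12345678'
```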
|
<python>
|
2023-06-21 08:17:25
| 1
| 503
|
Beeelze
|
76,521,187
| 13,916,049
|
Unsupervised feature selection using numpy.corrcoef
|
<p>I want to perform a pairwise comparison to select the rows of <code>subset</code> with a correlation > 0.05.</p>
<pre><code>import pandas as pd
import numpy as np
# pairwise correlation
c = np.corrcoef(subset.T)
c = pd.DataFrame(c)
s = c.unstack()
so = s.sort_values(kind="quicksort", ascending=False)
so = np.abs(so)
#so = int(np.argmax(so))
# Retrieve only if value in 3rd column > 0.05
thresh2 = 0.05
so_sub = so.loc[so.iloc[:,-1:] > 0.05]
subset = subset.iloc[so_sub]
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
IndexingError Traceback (most recent call last)
Input In [23], in <cell line: 23>()
21 # Retrieve only if value in 3rd column > 0.05
22 thresh2 = 0.05
---> 23 so_sub = so.loc[so.iloc[:,-1:] > 0.05]
24 mrna_subset = mrna_subset.iloc[so_sub]
File ~/.local/lib/python3.9/site-packages/pandas/core/indexing.py:1097, in _LocationIndexer.__getitem__(self, key)
1095 if self._is_scalar_access(key):
1096 return self.obj._get_value(*key, takeable=self._takeable)
-> 1097 return self._getitem_tuple(key)
1098 else:
1099 # we by definition only have the 0th axis
1100 axis = self.axis or 0
File ~/.local/lib/python3.9/site-packages/pandas/core/indexing.py:1594, in _iLocIndexer._getitem_tuple(self, tup)
1593 def _getitem_tuple(self, tup: tuple):
-> 1594 tup = self._validate_tuple_indexer(tup)
1595 with suppress(IndexingError):
1596 return self._getitem_lowerdim(tup)
File ~/.local/lib/python3.9/site-packages/pandas/core/indexing.py:900, in _LocationIndexer._validate_tuple_indexer(self, key)
895 @final
896 def _validate_tuple_indexer(self, key: tuple) -> tuple:
897 """
898 Check the key for valid keys across my indexer.
899 """
--> 900 key = self._validate_key_length(key)
901 key = self._expand_ellipsis(key)
902 for i, k in enumerate(key):
File ~/.local/lib/python3.9/site-packages/pandas/core/indexing.py:939, in _LocationIndexer._validate_key_length(self, key)
937 raise IndexingError(_one_ellipsis_message)
938 return self._validate_key_length(key)
--> 939 raise IndexingError("Too many indexers")
940 return key
IndexingError: Too many indexers
</code></pre>
<p>Input:</p>
<p><code>subset.iloc[0:4,0:4]</code></p>
<pre><code>pd.DataFrame({'A2M': {'TCGA.2K.A9WE.01': 40686.22,
'TCGA.2Z.A9J1.01': 11009.03,
'TCGA.2Z.A9J3.01': 3180.79,
'TCGA.2Z.A9J5.01': 16771.52},
'A4GALT': {'TCGA.2K.A9WE.01': 2583.0,
'TCGA.2Z.A9J1.01': 4720.0,
'TCGA.2Z.A9J3.01': 2768.0,
'TCGA.2Z.A9J5.01': 1689.0},
'AAK1': {'TCGA.2K.A9WE.01': 2590.0,
'TCGA.2Z.A9J1.01': 2562.0,
'TCGA.2Z.A9J3.01': 2715.0,
'TCGA.2Z.A9J5.01': 3010.0},
'AAMP': {'TCGA.2K.A9WE.01': 4478.0,
'TCGA.2Z.A9J1.01': 4518.0,
'TCGA.2Z.A9J3.01': 5936.0,
'TCGA.2Z.A9J5.01': 4139.0}})
</code></pre>
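<p>A note on the immediate error: after <code>unstack()</code>, <code>so</code> is a <code>Series</code>, so the two-dimensional <code>so.iloc[:, -1:]</code> indexing raises "Too many indexers"; a Series is filtered with a plain boolean mask. A sketch on toy data (the threshold logic is illustrative; mapping the surviving index pairs back to rows of <code>subset</code> is still a separate step):</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
subset = pd.DataFrame(rng.normal(size=(4, 3)), columns=["A2M", "A4GALT", "AAK1"])

c = pd.DataFrame(np.corrcoef(subset.T))          # pairwise correlation matrix
so = c.unstack().abs().sort_values(ascending=False)

thresh2 = 0.05
so_sub = so[so > thresh2]                        # boolean mask, not so.iloc[:, -1:]
print(type(so_sub))                              # a pandas Series
```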
|
<python><pandas><numpy>
|
2023-06-21 08:01:41
| 1
| 1,545
|
Anon
|
76,521,161
| 7,585,973
|
Why does my command prompt and PowerShell only open the file, not executing it?
|
<p>My command prompt can't find the executable that I ask it to launch.</p>
<pre><code>C:\Users\NabihBawazir\...\aws-token\aws-token>./token.sh
</code></pre>
<p>This is the script:</p>
<pre><code>C:\Users\NabihBawazir\...\aws-token\aws-token>python python_script.py
</code></pre>
<p>The problem is that the command only opens the file in Visual Studio instead of executing the script.</p>
|
<python><cmd><token>
|
2023-06-21 07:58:19
| 1
| 7,445
|
Nabih Bawazir
|
76,521,104
| 7,658,313
|
Exception happens ONLY in PyCharm's debug mode
|
<p>Below is the code snippet</p>
<pre class="lang-py prettyprint-override"><code>import datetime
from datetime import date, timedelta

import pandas_market_calendars as mcal


def is_US_market_open(dt: date) -> bool:
    nyse = mcal.get_calendar("NYSE")
    return not nyse.schedule(start_date=dt, end_date=dt).empty


def gen_last_market_open_day_of_month() -> date:
    today = date.today()
    curr_year = today.year
    curr_month = today.month
    while curr_year > 0:
        last_day_of_month = datetime.date(curr_year, curr_month, 1) - timedelta(days=1)
        while not is_US_market_open(last_day_of_month):
            last_day_of_month -= timedelta(days=1)
        yield last_day_of_month
        curr_month -= 1
        if curr_month == 0:
            curr_month = 12
            curr_year -= 1
</code></pre>
<p>The above method <code>gen_last_market_open_day_of_month</code> works fine EXCEPT in PyCharm's debug mode: it raises the exception below whenever I try to debug the application...</p>
<p>I tried setting <code>PYDEVD_USE_CYTHON=NO</code>, but the exception still pops up in debug mode.</p>
<pre class="lang-bash prettyprint-override"><code>***/.venv/bin/python /Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py --multiprocess --qt-support=auto --client 127.0.0.1 --port 60697 --file ***/src/utils/time.py
warning: PYDEVD_USE_CYTHON environment variable is set to 'NO'. Frame evaluator will be also disabled because it requires Cython extensions to be enabled in order to operate correctly.
Connected to pydev debugger (build 231.9011.38)
Traceback (most recent call last):
File "***/.venv/lib/python3.10/site-packages/pandas_market_calendars/market_calendar.py", line 461, in holidays
try: return self._holidays
AttributeError: 'NYSEExchangeCalendar' object has no attribute '_holidays'. Did you mean: 'holidays'?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "***/.venv/lib/python3.10/site-packages/numpy/core/getlimits.py", line 670, in __init__
self.dtype = numeric.dtype(int_type)
TypeError: 'NoneType' object is not callable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py", line 1496, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "***/src/utils/time.py", line 60, in <module>
print(is_US_market_open(datetime.datetime.strptime("1999-12-31", FORMAT_YMD).date()))
File "***/src/utils/time.py", line 41, in is_US_market_open
return not nyse.schedule(start_date=dt, end_date=dt).empty
File "***/.venv/lib/python3.10/site-packages/pandas_market_calendars/market_calendar.py", line 599, in schedule
_all_days = self.valid_days(start_date, end_date)
File "***/.venv/lib/python3.10/site-packages/pandas_market_calendars/exchange_calendar_nyse.py", line 1092, in valid_days
trading_days = super().valid_days(start_date, end_date, tz= tz)
File "***/.venv/lib/python3.10/site-packages/pandas_market_calendars/market_calendar.py", line 479, in valid_days
return pd.date_range(start_date, end_date, freq=self.holidays(), normalize=True, tz=tz)
File "***/.venv/lib/python3.10/site-packages/pandas_market_calendars/market_calendar.py", line 463, in holidays
self._holidays = CustomBusinessDay(
File "pandas/_libs/tslibs/offsets.pyx", line 3272, in pandas._libs.tslibs.offsets.CustomBusinessDay.__init__
File "pandas/_libs/tslibs/offsets.pyx", line 1302, in pandas._libs.tslibs.offsets.BusinessMixin._init_custom
File "pandas/_libs/tslibs/offsets.pyx", line 252, in pandas._libs.tslibs.offsets._get_calendar
File "***/.venv/lib/python3.10/site-packages/pandas/tseries/holiday.py", line 454, in holidays
pre_holidays = [
File "***/.venv/lib/python3.10/site-packages/pandas/tseries/holiday.py", line 455, in <listcomp>
rule.dates(start, end, return_name=True) for rule in self.rules
File "***/.venv/lib/python3.10/site-packages/pandas/tseries/holiday.py", line 272, in dates
np.in1d(holiday_dates.dayofweek, self.days_of_week)
File "<__array_function__ internals>", line 200, in in1d
File "***/.venv/lib/python3.10/site-packages/numpy/lib/arraysetops.py", line 655, in in1d
range_safe_from_overflow = ar2_range <= np.iinfo(ar2.dtype).max
File "***/.venv/lib/python3.10/site-packages/numpy/core/getlimits.py", line 672, in __init__
self.dtype = numeric.dtype(type(int_type))
TypeError: 'NoneType' object is not callable
</code></pre>
<ul>
<li>Python 3.10.11</li>
<li>pandas = "1.4.2"</li>
<li>pandas-market-calendars = "~4.1.4"</li>
</ul>
|
<python><python-3.x><pycharm>
|
2023-06-21 07:51:04
| 0
| 2,948
|
lnshi
|
76,521,061
| 991,710
|
Finding the number of continuous days per group with breaks in between, resetting cumulative sum on condition
|
<p>I have the following dataframe:</p>
<pre class="lang-py prettyprint-override"><code>data = [['X','A','2022-05-01',True,True],
['X','A','2022-05-02',True,True],
['X','A','2022-05-03',True,False],
['X','A','2022-05-04',False,True],
['X','A','2022-05-05',False,True],
['X','A','2022-05-06',True,True],
['X','A','2022-05-07',True,False],
['X','B','2022-05-10',True,True],
['X','B','2022-05-11',True,True],
['X','B','2022-05-13',True,True],
['X','B','2022-05-14',True,True],
['X','B','2022-05-27',True,True],
['X','B','2022-05-28',False,True],
['X','B','2022-05-29',True,True],
['Y','C','2022-05-01',True,True],
['Y','C','2022-05-02',True,True],
['Y','C','2022-05-03',True,True],
['Y','C','2022-05-04',False,True],
['Y','C','2022-05-05',False,True],
['Y','C','2022-05-06',False,True]]
columns = ['op_id','si_id','date','activity_level_1','activity_level_2']
df = pd.DataFrame(data, columns=columns).astype({'date':'datetime64[ns]'})
> df
op_id si_id date activity_level_1 activity_level_2
0 X A 2022-05-01 True True
1 X A 2022-05-02 True True
2 X A 2022-05-03 True False
3 X A 2022-05-04 False True
4 X A 2022-05-05 False True
5 X A 2022-05-06 True True
6 X A 2022-05-07 True False
7 X B 2022-05-10 True True
8 X B 2022-05-11 True True
9 X B 2022-05-13 True True
10 X B 2022-05-14 True True
11 X B 2022-05-27 True True
12 X B 2022-05-28 False True
13 X B 2022-05-29 True True
14 Y C 2022-05-01 True True
15 Y C 2022-05-02 True True
16 Y C 2022-05-03 True True
17 Y C 2022-05-04 False True
18 Y C 2022-05-05 False True
19 Y C 2022-05-06 False True
</code></pre>
<p>Where my data is sorted according to my required groupings already: <code>['op_id', 'si_id', 'date']</code>.</p>
<p>What I would like to do is the following: <em>per group</em> of <code>['op_id', 'si_id']</code>, find continuous (consecutive) date ranges where the condition of <code>activity_level_n</code> is also <em>true</em>, resetting the cumulative sum when it is false.</p>
<p>Example output:</p>
<pre class="lang-py prettyprint-override"><code> op_id si_id date activity_level_1 activity_level_2 cum_activity_level_1 cum_activity_level_2
0 X A 2022-05-01 True True 1 1
1 X A 2022-05-02 True True 2 2
2 X A 2022-05-03 True False 3 1
3 X A 2022-05-04 False True 1 2
4 X A 2022-05-05 False True 1 3
5 X A 2022-05-06 True True 1 4
6 X A 2022-05-07 True False 3 1
7 X B 2022-05-10 True True 1 1
8 X B 2022-05-11 True True 2 2
9 X B 2022-05-13 True True 1 1
10 X B 2022-05-14 True True 2 2
11 X B 2022-05-27 True True 1 1
12 X B 2022-05-28 False True 1 2
13 X B 2022-05-29 True True 1 3
14 Y C 2022-05-01 True True 1 1
15 Y C 2022-05-02 True True 2 2
16 Y C 2022-05-03 True True 3 3
17 Y C 2022-05-04 False True 1 4
18 Y C 2022-05-05 False True 1 5
19 Y C 2022-05-06 False True 1 6
</code></pre>
<p>The above implies that a <code>reset_index()</code> after whatever grouping operation (on <code>['op_id', 'si_id']</code>) would get the job done.</p>
<p>The <code>activity_level_n</code> columns are not related to each other, I merely wanted to illustrate that I wish to do this for multiple boolean columns.</p>
<p>I tried the following so far:</p>
<pre class="lang-py prettyprint-override"><code>day = pd.Timedelta('1d')
breaks = (df['date'].diff() != day) | (~df['activity_level_1'])
df['breaks'] = breaks
df['cum_activity_level_1'] = df.activity_level_1.mask(breaks, False).groupby([df.activity_level_1, df.breaks]).cumsum()
</code></pre>
<p>but while this is close, it is not quite what I want:</p>
<pre class="lang-py prettyprint-override"><code> op_id si_id date activity_level_1 activity_level_2 breaks cum_activity_level_1
0 X A 2022-05-01 True True True 1
1 X A 2022-05-02 True True False 2
2 X A 2022-05-03 True False False 3
3 X A 2022-05-04 False True True 1
4 X A 2022-05-05 False True True 1
5 X A 2022-05-06 True True False 4
6 X A 2022-05-07 True False False 5
7 X B 2022-05-10 True True True 1
8 X B 2022-05-11 True True False 6
9 X B 2022-05-13 True True True 1
10 X B 2022-05-14 True True False 7
11 X B 2022-05-27 True True True 1
12 X B 2022-05-28 False True True 1
13 X B 2022-05-29 True True False 8
14 Y C 2022-05-01 True True True 1
15 Y C 2022-05-02 True True False 9
16 Y C 2022-05-03 True True False 10
17 Y C 2022-05-04 False True True 1
18 Y C 2022-05-05 False True True 1
19 Y C 2022-05-06 False True True 1
</code></pre>
<p>as it's not resetting the cumulative sum. How can I achieve what I want?</p>
<p>Ideally, I would prefer to avoid <code>apply</code> as my dataframe has about ~7M rows, but if there's no other way I'm fine with it.</p>
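<p>One <code>apply</code>-free approach, sketched on a tiny frame (<code>flag</code> stands in for any <code>activity_level_n</code> column): mark a new run whenever the date gap, the flag value, or the group changes, number rows with <code>cumcount</code> within <code>breaks.cumsum()</code>, then force <code>False</code> rows back to 1. To also reset on <code>op_id</code>, extend the group-change term with that column.</p>

```python
import pandas as pd

df = pd.DataFrame({
    "si_id": ["A", "A", "A", "A", "A"],
    "date": pd.to_datetime(["2022-05-01", "2022-05-02", "2022-05-03",
                            "2022-05-04", "2022-05-05"]),
    "flag": [True, True, False, False, True],
})

day = pd.Timedelta("1d")
# A new run starts when dates are not consecutive, the flag flips, or the group changes
breaks = (
    (df["date"].diff() != day)
    | (df["flag"] != df["flag"].shift())
    | (df["si_id"] != df["si_id"].shift())
)
df["cum_flag"] = (df.groupby(breaks.cumsum()).cumcount() + 1).where(df["flag"], 1)
print(df["cum_flag"].tolist())  # [1, 2, 1, 1, 1]
```

<p>This matches the general pattern described above (counts reset to 1 at each break and stay at 1 while the flag is false); it is worth spot-checking against the full expected output, which I could not reproduce line for line.</p>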
|
<python><pandas>
|
2023-06-21 07:45:26
| 4
| 3,744
|
filpa
|
76,521,051
| 1,878,846
|
Pytest or Unittest: How to run tests independent from each other?
|
<p>I want to test a simple class <code>A</code>:</p>
<pre><code>class A:
    _cache = None

    def func(self):
        if not A._cache:
            A._cache = 'Get value from some service'


class TestA:
    def test_cache_after_func(self):
        a = A()
        a.func()
        assert A._cache is not None

    def test_cache_empty(self):
        a = A()
        assert A._cache is None
</code></pre>
<p>These two tests pass when run separately from <code>VSCode</code>. But when they are run together, the second test fails because the first one has already modified the <code>_cache</code> field.</p>
<p>How to run these tests isolated without affecting one another? (I would appreciate examples both for <code>unittest</code> and <code>pytest</code> if they differ)</p>
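<p>The usual fix is to reset the shared class attribute before every test so that execution order no longer matters. With <code>unittest</code> that is <code>setUp</code>; with <code>pytest</code>, an <code>autouse</code> fixture that sets <code>A._cache = None</code> and yields achieves the same. A runnable <code>unittest</code> sketch (the cached value is a placeholder):</p>

```python
import unittest

class A:
    _cache = None

    def func(self):
        if not A._cache:
            A._cache = "Get value from some service"

class TestA(unittest.TestCase):
    def setUp(self):
        # Runs before every test method: each test starts from a clean cache
        A._cache = None

    def test_cache_after_func(self):
        A().func()
        self.assertIsNotNone(A._cache)

    def test_cache_empty(self):
        self.assertIsNone(A._cache)
```

<p>The pytest equivalent would be <code>@pytest.fixture(autouse=True)</code> wrapping the same reset around each test.</p>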
|
<python><pytest><python-unittest>
|
2023-06-21 07:44:57
| 1
| 1,663
|
elshev
|
76,520,740
| 1,473,517
|
Why does this shgo minimization fail?
|
<p>I am trying to use linear constraints with <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.shgo.html" rel="nofollow noreferrer">shgo</a>. Here is a simple MWE:</p>
<pre><code>from scipy.optimize import shgo, rosen
# Set up the constraints list
constraints = [{'type': 'ineq', 'fun': lambda x, i=i: x[i+1] - x[i] - 0.1} for i in range(2)]
# Define the variable bounds
bounds = [(0, 20)]*3
# Call the shgo function with constraints
result = shgo(rosen, bounds, constraints=constraints)
# Print the optimal solution
print("Optimal solution:", result.x)
print("Optimal value:", result.fun)
</code></pre>
<p>An example solution satisfying these constraints is:</p>
<pre><code>rosen((0.1, 0.21, 0.32))
13.046181
</code></pre>
<p>But if you run the code you get:</p>
<pre><code>Optimal solution: None
Optimal value: None
</code></pre>
<p>It doesn't find a feasible solution at all! Is this a bug?</p>
<h1>Update</h1>
<p>@Reinderien showed that the problem is the default sampling method 'simplicial'. Either of the other two options makes the optimization work. Concretely, replace the <code>result =</code> line with</p>
<pre><code>result = shgo(rosen, bounds, sampling_method='halton',constraints=constraints)
</code></pre>
<p>and you get</p>
<pre><code>Optimal solution: [1.08960341 1.18960341 1.41515624]
Optimal value: 0.04453888080946618
</code></pre>
<p>If you use <code>simplicial</code> you get</p>
<pre><code>Failed to find a feasible minimizer point. Lowest sampling point = None
</code></pre>
<p>although I don't know why.</p>
<p>(Deleted incorrect part of question)</p>
|
<python><python-3.x><scipy><scipy-optimize><operations-research>
|
2023-06-21 07:01:10
| 1
| 21,513
|
Simd
|
76,520,575
| 383,834
|
Scons: overriding SConscript() function to get a list all loaded SConscripts
|
<p>I am trying to get a list of the Python SCons files included (via the <code>env.SConscript</code> function) for some dependency analysis. The SCons version is 4.4.0.</p>
<p>So after a building of a script d.py containing</p>
<pre><code>env.SConscript('#a.py', exports='env')
env.SConscript('#b.py', exports='env')
</code></pre>
<p>I would like to have a list ['a.py', 'b.py']</p>
<ol>
<li>I was not able to get the list of included ".py" files from the <code>--tree</code> dependency view of SCons</li>
<li>Another idea was to override the SConscript function (SCons.Environment) to make an own list:</li>
</ol>
<pre><code>def deco(func):
def wrapped(*args, **kw):
print("name goes here")
func(*args, **kw)
return wrapped
Environment.SConscript = deco(Environment.SConscript)
</code></pre>
<p>This approach fails with</p>
<pre><code>scons: *** Export of non-existent variable ''env''
</code></pre>
<p>The same happens when making a fake Environment class.</p>
<p>Any ideas how to solve this?</p>
<p>Update 2023-06-22:</p>
<p>running with scons --debug=stacktrace gives</p>
<pre><code>Running with -j 15
name goes here
Traceback (most recent call last):
File <path>\SCons\Script\SConscript.py", line 97, in compute_exports
retval[export] = loc[export]
~~~^^^^^^^^
KeyError: 'env'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<path>\SCons\Script\SConscript.py", line 99, in compute_exports
retval[export] = glob[export]
~~~~^^^^^^^^
KeyError: 'env'
</code></pre>
|
<python><scons>
|
2023-06-21 06:36:07
| 2
| 2,612
|
Stasik
|
76,520,454
| 5,305,512
|
Python package installed in Google Cloud Platform, but getting import error when running Docker image
|
<p>I have installed this library called Ta-Lib in my GCP in the following way:</p>
<pre><code>wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz
tar -xzf ta-lib-0.4.0-src.tar.gz
cd ta-lib/
./configure --prefix=/usr
make
sudo make install
pip install ta-lib
</code></pre>
<p>I can then import this library if I run <code>python</code> in cloud shell and then <code>>>> import talib</code>.</p>
<p>I then proceed to build the docker image of my project with the following <code>Dockerfile</code>:</p>
<pre><code>FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
# Give user access to write into the results folder
RUN useradd -m -u 1000 user
USER user
ENV HOME=/home/user PATH=/home/user/.local/bin:$PATH
WORKDIR $HOME/app
COPY --chown=user . $HOME/app
# Run the application:
CMD ["python", "-u", "app.py"]
</code></pre>
<p>However, after building the Docker image of my project and trying to run it, it gives the following error:</p>
<pre><code>username@cloudshell:~/RoboAdvisor (asom-barta-qna-bot)$ python
Python 3.9.2 (default, Feb 28 2021, 17:03:44)
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import talib
>>> quit()
username@cloudshell:~/RoboAdvisor (asom-barta-qna-bot)$ docker run robo_advisor
Traceback (most recent call last):
File "/home/user/app/app.py", line 2, in <module>
from finnlp.data_processors.yahoofinance import Yahoofinance
File "/home/user/app/finnlp/data_processors/yahoofinance.py", line 29, in <module>
from finnlp.data_processors._base import _Base, calc_time_zone
File "/home/user/app/finnlp/data_processors/_base.py", line 12, in <module>
import talib
ModuleNotFoundError: No module named 'talib'
</code></pre>
<p>I don't understand what the issue is here, and how to fix it. Why is it saying <code>talib</code> is not installed, when it clearly is?</p>
<hr />
<p>I created a shell script called <code>install_talib.sh</code> and put it in the directory of my project to install the library like so:</p>
<pre><code>wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz \
&& sudo tar -xzf ta-lib-0.4.0-src.tar.gz \
&& sudo rm ta-lib-0.4.0-src.tar.gz \
&& cd ta-lib/ \
&& sudo ./configure --prefix=/usr \
&& sudo make \
&& sudo make install \
&& cd ~ \
&& sudo rm -rf ta-lib/ \
&& pip install ta-lib
</code></pre>
<p>And then I modified my Dockerfile like so:</p>
<pre><code>FROM python:3.9
RUN chmod +x install_talib.sh
RUN ./install_talib.sh
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
# Give user access to write into the results folder
RUN useradd -m -u 1000 user
USER user
ENV HOME=/home/user PATH=/home/user/.local/bin:$PATH
WORKDIR $HOME/app
COPY --chown=user . $HOME/app
# Run the application:
CMD ["python", "-u", "app.py"]
</code></pre>
<p>This time when I try to run it, it gives the following error:</p>
<pre><code>username@cloudshell:~/RoboAdvisor (asom-barta-qna-bot)$ docker build -t robo_advisor .
[+] Building 0.6s (6/13)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 490B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/python:3.9 0.2s
=> CACHED [1/9] FROM docker.io/library/python:3.9@sha256:98f018a1afd67f2e17a4abd5bfe09b998734ba7c1ee54780e7ed216f8b8095c3 0.0s
=> CANCELED [internal] load build context 0.3s
=> => transferring context: 21.95MB 0.3s
=> ERROR [2/9] RUN chmod +x install_talib.sh 0.3s
------
> [2/9] RUN chmod +x install_talib.sh:
#0 0.237 chmod: cannot access 'install_talib.sh': No such file or directory
------
Dockerfile:3
--------------------
1 | FROM python:3.9
2 |
3 | >>> RUN chmod +x install_talib.sh
4 | RUN ./install_talib.sh
5 |
--------------------
ERROR: failed to solve: process "/bin/sh -c chmod +x install_talib.sh" did not complete successfully: exit code: 1
</code></pre>
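<p>For what it's worth: the host install in Cloud Shell never reaches the container, and in the second attempt <code>install_talib.sh</code> is run before any <code>COPY</code> has put it into the image. A hedged sketch of a Dockerfile that instead builds the C library entirely inside the image (URL and paths taken from the question, untested here):</p>

```dockerfile
FROM python:3.9

# Build the ta-lib C library inside the image itself, before pip needs it
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz \
    && tar -xzf ta-lib-0.4.0-src.tar.gz \
    && cd ta-lib/ \
    && ./configure --prefix=/usr \
    && make \
    && make install \
    && cd .. && rm -rf ta-lib ta-lib-0.4.0-src.tar.gz

WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt ta-lib
```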
|
<python><docker><google-cloud-platform><importerror><ta-lib>
|
2023-06-21 06:15:54
| 1
| 3,764
|
Kristada673
|
76,520,415
| 139,150
|
How to select last group by text sub-strings
|
<p>I have a list of characters:</p>
<pre><code>groups = ['a', 'e', 'i', 'o', 'u']
</code></pre>
<p>And I have a list of words:</p>
<pre><code>strings = ['testing', 'tested', 'fisherman', 'sandals', 'name']
</code></pre>
<ol>
<li><p>I need to split the words based on the characters mentioned in the groups list.
The word "tested" can be grouped as "t + est + ed"</p>
</li>
<li><p>Select the last group. So 'ed' in this case should be selected and only 'd' returned, because 'e' is already known.</p>
</li>
</ol>
<pre><code>expected={'testing':'ng', 'tested':'d', 'fisherman':'n', 'sandals':'ls', 'name':''}
</code></pre>
<p>If there is only 'e' then I can write:</p>
<pre><code>'tested'.split('e')[-1]
</code></pre>
<p>But I'm not sure how to handle the other characters.</p>
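<p>One sketch of the generalisation (treating every character in <code>groups</code> as a delimiter, the way <code>split('e')</code> treats 'e') is to build a character class and use <code>re.split</code>, then take the last segment:</p>

```python
import re

groups = ['a', 'e', 'i', 'o', 'u']
strings = ['testing', 'tested', 'fisherman', 'sandals', 'name']

# One or more group characters act as a single delimiter.
pattern = '[' + ''.join(groups) + ']+'
result = {word: re.split(pattern, word)[-1] for word in strings}
print(result)
# {'testing': 'ng', 'tested': 'd', 'fisherman': 'n', 'sandals': 'ls', 'name': ''}
```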
|
<python><split>
|
2023-06-21 06:10:15
| 2
| 32,554
|
shantanuo
|
76,520,381
| 19,390,849
|
How can I traverse all nodes of a given AST in Python?
|
<p>I use <code>Esprima</code> to generate the AST of a modern JSX component. It works great. I serialize that AST to JSON in a file, load it in a Python script, and want to traverse each node in it.</p>
<p>I tried the <a href="https://github.com/austinbyers/esprima-ast-visitor" rel="nofollow noreferrer">esprima-ast-visitor</a> but I get this error:</p>
<pre><code>visitor.UnknownNodeTypeError: ImportDeclaration
</code></pre>
<p>I guess it's because that's an old library. I have also seen some examples on how to visit nodes in a JSON, yet they are all about <a href="https://stackoverflow.com/questions/2733813/iterating-through-a-json-object">specific examples on how to extract such value at such depth</a>, or they are about <a href="https://stackoverflow.com/questions/41777880/functions-that-help-to-understand-jsondict-structure/41778581#41778581"><em>searching</em></a>.</p>
<p>But I want to be able to visit every node of a given JSON, and at the same time have access to its direct parent and children. How can I do that?</p>
<p>Basically I need a method with this signature:</p>
<pre><code>import json
jsonString=open('json-path').read()
jsonRoot=json.loads(jsonString)
# this is a pseudo code, invalid in Python syntax, written in JS style
jsonRoot.visit(({ node, parent, children }) => {
# do something with the node here
})
</code></pre>
<p>P.S. I'm new to Python. That's why I can't write it.</p>
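<p>A minimal sketch of such a visitor (plain recursion over the parsed JSON; <code>visit</code>, <code>callback</code>, and the argument names are my own, not from any library):</p>

```python
import json

def visit(node, callback, parent=None):
    """Call callback(node, parent, children) for every dict/list in the tree."""
    if isinstance(node, dict):
        children = list(node.values())
    elif isinstance(node, list):
        children = list(node)
    else:
        return  # leaf values (str, int, ...) are reachable via their parent's children
    callback(node, parent, children)
    for child in children:
        visit(child, callback, parent=node)

json_root = json.loads('{"type": "Program", "body": [{"type": "ImportDeclaration"}]}')
types = []
visit(json_root, lambda node, parent, children:
      types.append(node['type']) if isinstance(node, dict) and 'type' in node else None)
print(types)  # ['Program', 'ImportDeclaration']
```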
|
<python><json>
|
2023-06-21 06:04:54
| 1
| 1,889
|
Big boy
|
76,520,332
| 728,438
|
How to draw a bottoming out curved trendline
|
<p>I want to draw a trendline that follows a few points nicely. It is supposed to be a kind of 'average', so it doesn't have to hit the points exactly, but the shape has to be a smooth arc.
I am having trouble getting Python or Excel to produce exactly what I want. Here is the example code I'm using:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

x = np.array([6,8,10,12,16,24,32])
y = np.array([934,792,744,710,699,686,681])
#create scatterplot
plt.scatter(x, y)
#calculate equation for quadratic trendline
z = np.polyfit(x, y, 2)
p = np.poly1d(z)
#add trendline to plot
plt.plot(x, p(x))
</code></pre>
<p>which gives a output like <a href="https://i.sstatic.net/psRES.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/psRES.png" alt="this" /></a></p>
<p>whereas what I'm looking for is something like <a href="https://i.sstatic.net/pnZF7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pnZF7.png" alt="this" /></a>
Here's another example, where the line is sharper and doesn't bottom out like the previous one:
<a href="https://i.sstatic.net/X7oSF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X7oSF.png" alt="enter image description here" /></a></p>
<p>Does anyone know how this can be done? When I try it on Excel it seems like an 'equation' is needed but I'm not sure how to get to this equation.</p>
<p>Edit: the curve I am looking for has to be monotonic decreasing and concave</p>
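<p>One family of curves that is always decreasing and flattens out is <code>y = a + b/x</code> (with <code>b &gt; 0</code>); it is linear in <code>1/x</code>, so a plain least-squares fit suffices — a sketch with the question's data (the model choice is my assumption, not the only option):</p>

```python
import numpy as np

x = np.array([6, 8, 10, 12, 16, 24, 32], dtype=float)
y = np.array([934, 792, 744, 710, 699, 686, 681], dtype=float)

# y = a + b/x is linear in the transformed variable 1/x, so polyfit on 1/x works.
b, a = np.polyfit(1.0 / x, y, 1)
fitted = a + b / x

# With b > 0 the fitted curve is strictly decreasing and flattens out as x grows.
```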
|
<python><matplotlib><trendline>
|
2023-06-21 05:55:05
| 1
| 399
|
Ian Low
|
76,520,229
| 499,721
|
Parsing non-standard date element using xmlschema
|
<p>I have an xsd schema file, which include the following definition:</p>
<pre class="lang-xml prettyprint-override"><code><xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
...
<xs:element name="CreateDate" minOccurs="0" maxOccurs="1">
<xs:simpleType>
<xs:restriction base="xs:string">
<xs:minLength value="8"/>
<xs:maxLength value="10"/>
</xs:restriction>
</xs:simpleType>
</xs:element>
...
</code></pre>
<p>And an xml file which includes the following element:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="utf-8"?>
...
<!-- format is: YYYY/dd/mm -->
<CreateDate>2020/10/22</CreateDate>
...
</code></pre>
<p>I'm using <a href="https://github.com/sissaschool/xmlschema" rel="nofollow noreferrer">xmlschema</a> to parse the xml file like so:</p>
<pre class="lang-python prettyprint-override"><code>import xmlschema

schema = xmlschema.XMLSchema(schema_file)
element = schema.to_dict(xml_file, datetime_types=True)
</code></pre>
<p>Obviously, <em>CreateDate</em> is parsed to a string instead of a <code>Date</code> object. Questions:</p>
<ol>
<li>Is it possible to change the xsd definition, so that <code>xmlschema</code> automatically parses <em>CreateDate</em> to <code>Date</code> using format <em>"YYYY/mm/dd"</em>?</li>
<li>If not, I guess I need intercept the parsing using <a href="https://xmlschema.readthedocs.io/en/latest/usage.html#control-the-decoding-of-elements" rel="nofollow noreferrer"><code>value_hook</code></a> or <a href="https://xmlschema.readthedocs.io/en/latest/usage.html#control-the-decoding-of-elements" rel="nofollow noreferrer"><code>element_hook</code></a> arguments to <code>to_dict()</code>, but I'm not sure how to go about it. Any suggestions?</li>
</ol>
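<p>While I'm not sure of the exact <code>value_hook</code> signature, one safe fallback (a post-processing sketch, independent of xmlschema, assuming the <em>"YYYY/mm/dd"</em> format from question 1) is to walk the decoded dict and convert every matching string:</p>

```python
import re
from datetime import date

DATE_RE = re.compile(r'^(\d{4})/(\d{2})/(\d{2})$')  # assuming "YYYY/mm/dd"

def convert_dates(obj):
    """Recursively replace 'YYYY/mm/dd' strings in a decoded tree with date objects."""
    if isinstance(obj, dict):
        return {k: convert_dates(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [convert_dates(v) for v in obj]
    if isinstance(obj, str):
        m = DATE_RE.match(obj)
        if m:
            y, mo, d = (int(g) for g in m.groups())
            return date(y, mo, d)
    return obj

decoded = convert_dates({'CreateDate': '2020/10/22'})
print(decoded)  # {'CreateDate': datetime.date(2020, 10, 22)}
```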
|
<python><xsd>
|
2023-06-21 05:31:24
| 1
| 11,117
|
bavaza
|
76,520,176
| 3,260,750
|
"poetry update" takes 3 hours on a tiny project - what am I doing wrong?
|
<p>I have a tiny Python (3.10) project (named crawler) that imports SDKs from Azure (azure-identity, azure-core) and AWS (boto3).</p>
<p>Then I have a different one-file project (named crawler-tester) that imports the first one as a module.</p>
<p>I'm using poetry and the pyproject.toml file looks like this:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
name = "crawler-tester"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]
packages = [
{ include = "../crawler/" },
]
[tool.poetry.dependencies]
python = "^3.10"
finops-crawler = {path = "../crawler"}
[tool.poetry.dev-dependencies]
python-dotenv = "^1.0.0"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>From what I understand, this is "development mode", where edits to my module should be picked up immediately.</p>
<p>Except the result is as follows:</p>
<pre class="lang-none prettyprint-override"><code>(.venv) stuff:/mnt/c/Users/myname/github/crawler-tester$ poetry update
Updating dependencies
Resolving dependencies... (11184.5s)
Writing lock file
Package operations: 5 installs, 2 updates, 0 removals
• Updating urllib3 (2.0.3 -> 1.26.16)
• Installing jmespath (1.0.1)
• Installing python-dateutil (2.8.2)
• Installing botocore (1.29.156)
• Installing s3transfer (0.6.1)
• Installing boto3 (1.26.156)
• Updating finops-crawler (0.1.2 -> 0.1.3 /mnt/c/Users/myname/github/finops-crawler)
</code></pre>
<p>Notice the almost 3 hours it took to resolve dependencies.</p>
<p>I'm obviously doing something wrong here, but I can't figure out what.
The changes are also not picked up automatically - I have to increase the version number and do <code>poetry update</code> for it to happen.</p>
|
<python><python-poetry>
|
2023-06-21 05:21:00
| 1
| 1,939
|
LauriK
|
76,519,818
| 2,739,700
|
Scan files recursive and non recursive using Pathlib python
|
<p>I need to scan files in a given folder/directory: if <code>recursive</code> is True, scan the directory recursively; otherwise yield only the top-level files. I implemented the code below, but nothing is yielded.</p>
<pre><code>import re
import uuid
from pathlib import Path
from typing import Generator, Tuple, Union, Optional
import time
def scan_files(dir_path: Union[str, Path], filter_regex: Optional[str] = None, recursive: bool = True) -> Generator[Tuple[str, Path], None, None]:
"""
Get list of files in the specified directory.
"""
path = Path(dir_path)
for item in path.iterdir():
if not item.is_symlink():
if recursive and item.is_dir():
yield from scan_files(item, filter_regex=filter_regex)
elif filter_regex is None or re.match(filter_regex, item.name, re.IGNORECASE):
yield str(uuid.uuid5(uuid.NAMESPACE_URL, str(item))), item
while True:
print("=" * 100)
for x, y in scan_files(dir_path="/tmp/in/106/", recursive=True):
print(x, y)
print("=" * 100)
time.sleep(4)
</code></pre>
<p>Actual output: no files are yielded from <code>scan_files</code>.</p>
<p>test input files</p>
<p>create directory</p>
<pre><code> /tmp/in/106/nested
add below files
/tmp/in/106/1.mov
/tmp/in/106/2.mov
/tmp/in/106/nested/1.mov
/tmp/in/106/nested/2.mov
</code></pre>
<p>expected output:</p>
<pre><code> if recursive: all files with full path
if non recursive: /tmp/in/106/1.mov and /tmp/in/106/2.mov
</code></pre>
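<p>For reference, a self-contained sketch of the expected behaviour (one difference from the code above: <code>recursive</code> is passed down in the recursive call, otherwise nested directories are always descended into because the parameter defaults to True):</p>

```python
import re
import uuid
from pathlib import Path

def scan_files(dir_path, filter_regex=None, recursive=True):
    """Yield (uuid5, Path) pairs for files under dir_path, optionally recursing."""
    for item in Path(dir_path).iterdir():
        if item.is_symlink():
            continue
        if item.is_dir():
            if recursive:
                yield from scan_files(item, filter_regex=filter_regex, recursive=recursive)
        elif filter_regex is None or re.match(filter_regex, item.name, re.IGNORECASE):
            yield str(uuid.uuid5(uuid.NAMESPACE_URL, str(item))), item
```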
|
<python><python-3.x>
|
2023-06-21 03:47:31
| 1
| 404
|
GoneCase123
|
76,519,727
| 10,570,372
|
Is my time complexity decorator concise and does what it does?
|
<p>I wrote a decorator to check whether a certain operation or function is really linear, quadratic, etc. as <code>n</code> increases.</p>
<pre class="lang-py prettyprint-override"><code>import time
from typing import Any, Callable, Dict, List, Tuple, Union

import matplotlib.pyplot as plt
import numpy as np

DataTypes = Union[List[int], Dict[int, int], None]
# callable that takes any number of arguments and returns any value.
# pylint: disable=invalid-name
def data_factory(data_type: str, n: int) -> DataTypes:
if data_type == "array":
return list(range(n))
if data_type == "dict":
return {i: i for i in range(n)}
if data_type is None:
return None
raise ValueError(f"Invalid data_type: {data_type}")
def time_complexity(
data_type: str, repeat: int = 1, plot: bool = False
) -> Callable[[Callable[..., Any]], Callable[..., Tuple]]:
def decorator(func: Callable[..., Any]) -> Callable[..., Tuple]:
def wrapper(n_sizes: List[int], *args: Any, **kwargs: Dict[str, Any]) -> Tuple:
avg_times = []
median_times = []
best_times = []
worst_times = []
for n in n_sizes:
# create a list of n elements
data_structure = data_factory(data_type, n)
# note array is created outside the loop
runtimes = []
for _ in range(repeat):
start_time = time.perf_counter()
# pylint: disable=expression-not-assigned,line-too-long
func(n, data_structure, *args, **kwargs) if data_type else func(
n, *args, **kwargs
) # <--- this is where it calls the function with n or data_structure as argument
end_time = time.perf_counter()
runtimes.append(end_time - start_time)
avg_times.append(np.mean(runtimes))
median_times.append(np.median(runtimes))
best_times.append(np.min(runtimes))
worst_times.append(np.max(runtimes))
if plot:
plt.figure(figsize=(10, 6))
plt.plot(n_sizes, avg_times, "o-", label="Average")
plt.plot(n_sizes, median_times, "o-", label="Median")
plt.plot(n_sizes, best_times, "o-", label="Best")
plt.plot(n_sizes, worst_times, "o-", label="Worst")
plt.xlabel("Size of Input (n)")
plt.ylabel("Execution Time (s)")
plt.legend()
plt.grid(True)
plt.title(f"Time Complexity of {func.__name__}")
plt.show()
return n_sizes, avg_times, median_times, best_times, worst_times
return wrapper
return decorator
</code></pre>
<p>Using it will be like the following:</p>
<pre class="lang-py prettyprint-override"><code>@time_complexity(data_type="array", repeat=10, plot=True)
def list_append(n: int, array) -> None:
array.append(n)
@time_complexity(data_type="array", repeat=10, plot=True)
def list_insert(n: int, array) -> None:
array.insert(0, n)
# list_append(range(1000000, 10000001, 1000000))
# list_insert(range(1000000, 10000001, 1000000))
</code></pre>
<p>I wonder whether this implementation is good enough to give a visual idea of the growth rate, excluding external uncontrollable factors such as CPU load.</p>
|
<python><data-structures>
|
2023-06-21 03:21:07
| 0
| 1,043
|
ilovewt
|