Dataset columns (name: dtype, min – max):
QuestionId: int64, 74.8M – 79.8M
UserId: int64, 56 – 29.4M
QuestionTitle: string, 15 – 150 characters
QuestionBody: string, 40 – 40.3k characters
Tags: string, 8 – 101 characters
CreationDate: string (datetime), 2022-12-10 09:42:47 – 2025-11-01 19:08:18
AnswerCount: int64, 0 – 44
UserExpertiseLevel: int64, 301 – 888k
UserDisplayName: string, 3 – 30 characters
76,104,462
1,481,986
Typer - group/inheritance arguments
<p>I have several commands in the same typer app that get the same params or at least some of them (e.g. database connection arguments), but they are not sub-commands of each other.</p> <p>I want to avoid copy-pasting those arguments for each command but rather define some sort of inheritance or grouping of arguments.</p> <p>Is there something similar to argparse <a href="https://docs.python.org/3/library/argparse.html#mutual-exclusion" rel="nofollow noreferrer">mutually exclusive</a> groups in typer?</p>
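In Typer, shared options are usually declared once in an `@app.callback` and stashed on `ctx.obj` for all commands to read. For comparison, the stdlib `argparse` referenced in the question handles this with parent parsers; a minimal sketch (command and option names are hypothetical, not from the post):

```python
import argparse

# Shared database options defined once, reused by several sub-commands
db_parent = argparse.ArgumentParser(add_help=False)
db_parent.add_argument("--db-host", default="localhost")
db_parent.add_argument("--db-port", type=int, default=5432)

parser = argparse.ArgumentParser()
sub = parser.add_subparsers(dest="command")
sub.add_parser("ingest", parents=[db_parent])
sub.add_parser("export", parents=[db_parent])

args = parser.parse_args(["ingest", "--db-host", "db.example.com"])
print(args.db_host)  # db.example.com
```

Note this shares arguments but does not enforce mutual exclusion; argparse's `add_mutually_exclusive_group` covers that separately, and Typer has no direct built-in equivalent.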
<python><argparse><typer>
2023-04-25 18:45:33
2
6,241
Tom Ron
76,104,459
6,679,011
Run multiple programs one after another (not in parallel), python
<p>Using Python, I need to call a program three times with different parameters. The programs must not run in parallel: each one should start only after the previous one has finished. How should I modify the following script?</p> <pre><code>os.system(&quot;start MSSQL2PostgreSQLPro_Cons.exe /Session:{}&quot;.format(parameter1))
os.system(&quot;start MSSQL2PostgreSQLPro_Cons.exe /Session:{}&quot;.format(parameter2))
os.system(&quot;start MSSQL2PostgreSQLPro_Cons.exe /Session:{}&quot;.format(parameter3))
</code></pre>
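On Windows, `os.system("start …")` returns as soon as the program has been launched, so the three copies run concurrently. `subprocess.run` blocks until the child exits, which gives the sequential behaviour asked for. A sketch (the real executable is swapped for a `python -c` stub so it runs anywhere; in the actual script the command list would be `["MSSQL2PostgreSQLPro_Cons.exe", f"/Session:{parameter}"]`):

```python
import subprocess
import sys

for parameter in ("one", "two", "three"):  # hypothetical session names
    # subprocess.run waits for this process to exit before the loop continues
    subprocess.run(
        [sys.executable, "-c", f"print('finished {parameter}')"],
        check=True,  # raise if the program exits with a non-zero status
    )
```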
<python><python-3.x>
2023-04-25 18:45:16
1
469
Yang L
76,104,417
935,376
dictionary of dictionaries is not ordering properly
<p>I have a list of logs that show 3 items: userid, time, and sign-in or sign-out.</p> <pre><code>logs = [&quot;12 150 sign-out&quot;,&quot;30 90 sign-in&quot;,&quot;12 10 sign-in&quot;,&quot;30 120 sign-out&quot;] </code></pre> <p>I am trying to create a dictionary of dictionaries:</p> <pre><code>outlist = {}
for log in logs:
    buff = log.split()
    if buff[0] not in outlist.keys():
        outlist[buff[0]] = {}
    if buff[2] == 'sign-in':
        outlist[buff[0]].update({'sign-in':buff[1]})
    elif buff[2] == 'sign-out':
        outlist[buff[0]]['sign-out']=buff[1]
</code></pre> <p>The outlist looks like this:</p> <pre><code>{'12': {'sign-out': '150', 'sign-in': '10'}, '30': {'sign-in': '90', 'sign-out': '120'}} </code></pre> <p>I want it to look like below, where sign-in is at the front. I used the update method to insert at the beginning, but it is not working. Where am I going wrong?</p> <pre><code>{'12': {'sign-in': '10','sign-out': '150'}, '30': {'sign-in': '90', 'sign-out': '120'}} </code></pre>
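`dict.update` never repositions an existing key: dicts keep insertion order, so the key order is fixed by whichever action appears first in the logs. One possible fix (a sketch, not from the original post) is to collect the events first and then rebuild each inner dict in the desired key order:

```python
logs = ["12 150 sign-out", "30 90 sign-in", "12 10 sign-in", "30 120 sign-out"]

events = {}
for log in logs:
    user, time, action = log.split()
    events.setdefault(user, {})[action] = time

# dicts preserve insertion order, so rebuild each entry inserting
# 'sign-in' before 'sign-out'
outlist = {user: {k: v[k] for k in ("sign-in", "sign-out") if k in v}
           for user, v in events.items()}
print(outlist)
# {'12': {'sign-in': '10', 'sign-out': '150'}, '30': {'sign-in': '90', 'sign-out': '120'}}
```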
<python><json><dictionary>
2023-04-25 18:39:30
1
2,064
Zenvega
76,104,294
15,112,182
Selenium / python - Execute a call-back function inside an iframe
<p>I am using Selenium web driver (python) to scrape/interact with websites.</p> <p>I have the target code below, (please note it has been simplified to highlight the important target code only).</p> <pre><code>&lt;html&gt; &lt;head&gt;&lt;/head&gt; &lt;body&gt; &lt;iframe id=&quot;main-iframe&quot;&gt; &lt;html&gt; &lt;head&gt; &lt;script type=&quot;text/javascript&quot;&gt; var onCaptchaFinished = function(response) { var xhr; var msg = &quot;g-recaptcha-response=&quot; + response; xhr.open(&quot;POST&quot;, &quot;/_Incapsula_Resource?lorem&quot;, true); xhr.setRequestHeader(&quot;Content-Type&quot;, &quot;application/x-www-form-urlencoded&quot;); xhr.onreadystatechange = function(){ if (xhr.readyState == 4) { if (xhr.status == 200) { window.parent.location.reload(true); } else { window.parent.location.reload(true); } } } xhr.send(msg); }; &lt;/script&gt; &lt;/head&gt; &lt;body&gt; &lt;div class=&quot;g-recaptcha&quot; data-sitekey=&quot;lorem&quot; data-callback=&quot;onCaptchaFinished&quot;&gt;&lt;/div&gt; &lt;/body&gt; &lt;/html&gt; &lt;/iframe&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>I need to invoke the callback function &quot;onCaptchaFinished()&quot;.</p> <p>I have tried several methods (executing js code ):</p> <p>Method 1 - calling the function directly.<br /> It fails since the onCaptchaFinished() is inside iframe (inaccessible)</p> <pre><code> scriptTxt = &quot;&quot;&quot; return onCaptchaFinished() &quot;&quot;&quot; driver.execute_script(scriptTxt) </code></pre> <p>Method 2 - creating a fake button and then clicking on it to invoke the function: It fails with the error: Blocked a frame with origin &quot;<em>website link here</em>&quot; from accessing a cross-origin frame.</p> <pre><code> scriptTxt = &quot;&quot;&quot; var iframe = document.getElementsByTagName('iframe')[0]; var innerDoc = iframe.contentDocument || iframe.contentWindow.document; var body = innerDoc.getElementsByTagName('body')[0]; var button = 
innerDoc.createElement('button'); button.setAttribute('onClick', 'onCaptchaFinished()'); button.innerHTML = 'Bypass button' body.appendChild(button); &quot;&quot;&quot; driver.execute_script(scriptTxt) </code></pre> <p>Is there any way to invoke the callback function? I am open to using another library, as long as it is compatible with the Selenium webdriver and is in python.</p>
<javascript><python><selenium-webdriver><iframe>
2023-04-25 18:20:35
1
398
constantlyFlagged
76,104,090
9,809,135
Using the Azure DevOps Python API to create a new work-item
<p>Similar to the question posted <a href="https://stackoverflow.com/questions/60441473/creating-a-workitem-in-azure-devops-via-python/76102850#76102850">here</a>, I am trying to manage Azure DevOps work items via python.</p> <p>The accepted answer refers to the official <a href="https://github.com/microsoft/azure-devops-python-api" rel="nofollow noreferrer">Azure DevOps Python API docs</a> and some <a href="https://github.com/Azure/azure-devops-cli-extension/blob/768aff8971aad2e194fa26d796d47b3a67292b4e/azure-devops/azext_devops/dev/boards/work_item.py#L24" rel="nofollow noreferrer">sample code</a> which seems like it might be out of date.</p> <p>Here is a simplified version of the script I am trying to run:</p> <pre><code>from azure.devops.connection import Connection from azure.devops.v7_1.work_item_tracking import JsonPatchOperation from msrest.authentication import BasicAuthentication personal_access_token = &quot;MY_PAT&quot; organization_url = f'https://dev.azure.com/{MY_ORG}' # Create a connection to the org credentials = BasicAuthentication('', personal_access_token) connection = Connection(base_url=organization_url, creds=credentials) core_client = connection.clients.get_core_client() # core client --&gt; used to get project project = core_client.get_project(MY_PROJECT_ID) wit_client = connection.clients.get_work_item_tracking_client() # work item tracking client # this code is recycled from the azure example code posted above def _create_work_item_field_patch_operation(op, field, value): path = '/fields/{field}'.format(field=field) return _create_patch_operation(op=op, path=path, value=value) def _create_patch_operation(op, path, value): patch_operation = JsonPatchOperation() patch_operation.op = op patch_operation.path = path patch_operation.value = value patch_operation._from = None return patch_operation patch_document = [] patch_document.append(_create_work_item_field_patch_operation('add', 'System.Title', &quot;Testing&quot;)) 
patch_document.append(_create_work_item_field_patch_operation('add', 'System.Description', &quot;Hoping to create a new work item.&quot;)) wit_client.create_work_item(document=patch_document, project=project, type=&quot;Bug&quot;) </code></pre> <p>When I execute this code, I get the following error:</p> <pre><code>azure.devops.exceptions.AzureDevOpsClientRequestError: Operation returned a 400 status code. </code></pre> <p>I am wondering if anybody has insight into why I am receiving a 400 status code.</p>
<python><azure-devops>
2023-04-25 17:53:21
1
355
davjfish
76,104,022
1,601,443
Unable to run python project. `__main__` module not found
<p>I cloned a git repository that I would like to run; the project folder structure is shown below.</p> <p><a href="https://i.sstatic.net/KQAat.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KQAat.png" alt="enter image description here" /></a></p> <p>I installed the dependencies using</p> <p><code>python3 -m pip install -e .</code></p> <p>and when I tried to run the package using Python,</p> <p><code>python pyfhirsdc</code></p> <p>I get the following error:</p> <p><code>D:\pyfhirsdc\venv\Scripts\python.exe: can't find '__main__' module in 'D:\\pyfhirsdc\\pyfhirsdc'</code></p> <p>I'm new to using <code>pyproject.toml</code> files. How can I fix this issue?</p>
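`python pkg_dir` (and `python -m pkg`) only work when the package directory contains a `__main__.py`; `pip install -e .` does not create one. A minimal sketch of the idea, using a hypothetical one-line entry point (the real project would instead call its actual main function there, or expose a `[project.scripts]` entry in `pyproject.toml`):

```shell
cd "$(mktemp -d)"
mkdir pyfhirsdc
# hypothetical entry point; the real file would import and run the package
printf 'print("pyfhirsdc entry point")\n' > pyfhirsdc/__main__.py
python3 pyfhirsdc   # now resolves pyfhirsdc/__main__.py
```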
<python><python-3.x>
2023-04-25 17:42:05
1
4,050
rksh
76,104,000
676,192
How do I create a checkerboard of two specific colors with python
<p>I have managed to create a black and white checkerboard like this:</p> <pre><code>import numpy as np
import cv2

checkers = np.tile( np.array([[0, 1], [1, 0]]), (16 // 2, 16 // 2))
</code></pre> <p>but now I am struggling to transform that into, say, a blue and yellow checkerboard.</p> <p>I tried a lambda transform like this:</p> <pre><code>f = lambda x: [0, 255, 0] if x else [255, 255, 255]
checkers = checkers.reshape((256))
checkers = f(checkers)
</code></pre> <p>but I get an error that says:</p> <pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre> <p>which makes me think that I haven't fully understood how lambdas apply to numpy arrays.</p> <p>I also tried</p> <pre><code>np.where(checkers == 0, [0, 255, 0], [0, 0, 255])
</code></pre> <p>but I get</p> <pre><code>ValueError: operands could not be broadcast together with shapes (16,16) (3,) (3,)
</code></pre> <p>So, how do I get a two-color checkerboard instead of a black and white one?</p>
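One possible approach (a sketch, not from the original post): since the board already contains 0/1, use it to index a `(2, 3)` colour palette, which broadcasts to a `(16, 16, 3)` image in one step. The colours below are RGB; note that cv2 functions such as `imwrite` interpret channels as BGR, so swap the channel order if saving with cv2.

```python
import numpy as np

checkers = np.tile(np.array([[0, 1], [1, 0]]), (8, 8))  # 16x16 board of 0/1

palette = np.array([[255, 255, 0],   # colour for 0 (yellow, RGB)
                    [0, 0, 255]],    # colour for 1 (blue, RGB)
                   dtype=np.uint8)

# fancy indexing: each 0/1 cell picks a palette row -> shape (16, 16, 3)
board = palette[checkers]
print(board.shape)  # (16, 16, 3)
```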
<python><numpy>
2023-04-25 17:38:57
1
5,252
simone
76,103,950
436,826
Cannot run celery worker while importing playwright library
<p>I'm trying to run a simple piece of Playwright code with Celery, as you can see below:</p> <pre class="lang-py prettyprint-override"><code>@shared_task
def get_title():
    with sync_playwright() as playwright:
        browser = playwright.chromium.launch()
        page = browser.new_page()
        page.goto('https://github.com')
        print(page.title())
        browser.close()
</code></pre> <p>When I start the celery worker I get this error immediately from the command line. I've tried every possible way with playwright, using both sync and async methods, but none of them worked. When I remove the playwright imports and the code belonging to them, the celery worker starts without any issues.</p> <p>I think the main issue is the import order. When I import playwright or the select library before celery gets imported, I get a similar error from the command line, like <code>'select' has no attribute 'epoll'</code>, without the full traceback.</p> <pre class="lang-py prettyprint-override"><code># celery.py
import os

from celery import Celery

os.environ.setdefault(&quot;DJANGO_SETTINGS_MODULE&quot;, &quot;config.settings.local&quot;)

app = Celery(&quot;celery_app&quot;)
app.config_from_object(&quot;django.conf:settings&quot;, namespace=&quot;CELERY&quot;)
app.autodiscover_tasks()
</code></pre> <p>The code is running on <code>python:3.9.6-slim-buster</code>.</p> <p>Full traceback:</p> <pre><code>Traceback (most recent call last): File &quot;/usr/local/bin/celery&quot;, line 8, in &lt;module&gt; sys.exit(main()) File &quot;/usr/local/lib/python3.9/site-packages/celery/__main__.py&quot;, line 15, in main sys.exit(_main()) File &quot;/usr/local/lib/python3.9/site-packages/celery/bin/celery.py&quot;, line 217, in main return celery(auto_envvar_prefix=&quot;CELERY&quot;) File &quot;/usr/local/lib/python3.9/site-packages/click/core.py&quot;, line 1130, in __call__ return self.main(*args, **kwargs) File &quot;/usr/local/lib/python3.9/site-packages/click/core.py&quot;, line 1055, in main rv = self.invoke(ctx) File
&quot;/usr/local/lib/python3.9/site-packages/click/core.py&quot;, line 1657, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File &quot;/usr/local/lib/python3.9/site-packages/click/core.py&quot;, line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File &quot;/usr/local/lib/python3.9/site-packages/click/core.py&quot;, line 760, in invoke return __callback(*args, **kwargs) File &quot;/usr/local/lib/python3.9/site-packages/click/decorators.py&quot;, line 26, in new_func return f(get_current_context(), *args, **kwargs) File &quot;/usr/local/lib/python3.9/site-packages/celery/bin/base.py&quot;, line 134, in caller return f(ctx, *args, **kwargs) File &quot;/usr/local/lib/python3.9/site-packages/celery/bin/worker.py&quot;, line 343, in worker worker = app.Worker( File &quot;/usr/local/lib/python3.9/site-packages/celery/worker/worker.py&quot;, line 94, in __init__ self.app.loader.init_worker() File &quot;/usr/local/lib/python3.9/site-packages/celery/loaders/base.py&quot;, line 111, in init_worker self.import_default_modules() File &quot;/usr/local/lib/python3.9/site-packages/celery/loaders/base.py&quot;, line 105, in import_default_modules raise response File &quot;/usr/local/lib/python3.9/site-packages/celery/utils/dispatch/signal.py&quot;, line 276, in send response = receiver(signal=self, sender=sender, **named) File &quot;/usr/local/lib/python3.9/site-packages/celery/fixups/django.py&quot;, line 82, in on_import_modules self.worker_fixup.validate_models() File &quot;/usr/local/lib/python3.9/site-packages/celery/fixups/django.py&quot;, line 120, in validate_models self.django_setup() File &quot;/usr/local/lib/python3.9/site-packages/celery/fixups/django.py&quot;, line 116, in django_setup django.setup() File &quot;/usr/local/lib/python3.9/site-packages/django/__init__.py&quot;, line 24, in setup apps.populate(settings.INSTALLED_APPS) File &quot;/usr/local/lib/python3.9/site-packages/django/apps/registry.py&quot;, line 124, in populate 
app_config.ready() File &quot;/usr/local/lib/python3.9/site-packages/django/contrib/admin/apps.py&quot;, line 27, in ready self.module.autodiscover() File &quot;/usr/local/lib/python3.9/site-packages/django/contrib/admin/__init__.py&quot;, line 50, in autodiscover autodiscover_modules(&quot;admin&quot;, register_to=site) File &quot;/usr/local/lib/python3.9/site-packages/django/utils/module_loading.py&quot;, line 58, in autodiscover_modules import_module(&quot;%s.%s&quot; % (app_config.name, module_to_search)) File &quot;/usr/local/lib/python3.9/importlib/__init__.py&quot;, line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1030, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 680, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 850, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 228, in _call_with_frames_removed File &quot;/app/backend/apps/series/admin.py&quot;, line 22, in &lt;module&gt; from apps.modules.advert.tasks import ( File &quot;/app/backend/apps/modules/advert/tasks.py&quot;, line 7, in &lt;module&gt; from playwright.sync_api import sync_playwright File &quot;/usr/local/lib/python3.9/site-packages/playwright/sync_api/__init__.py&quot;, line 25, in &lt;module&gt; import playwright.sync_api._generated File &quot;/usr/local/lib/python3.9/site-packages/playwright/sync_api/_generated.py&quot;, line 25, in &lt;module&gt; from playwright._impl._accessibility import Accessibility as AccessibilityImpl File &quot;/usr/local/lib/python3.9/site-packages/playwright/_impl/_accessibility.py&quot;, line 17, in &lt;module&gt; from playwright._impl._connection import Channel File 
&quot;/usr/local/lib/python3.9/site-packages/playwright/_impl/_connection.py&quot;, line 35, in &lt;module&gt; from pyee import EventEmitter File &quot;/usr/local/lib/python3.9/site-packages/pyee/__init__.py&quot;, line 120, in &lt;module&gt; from pyee.trio import TrioEventEmitter as _TrioEventEmitter # noqa File &quot;/usr/local/lib/python3.9/site-packages/pyee/trio.py&quot;, line 7, in &lt;module&gt; import trio File &quot;/usr/local/lib/python3.9/site-packages/trio/__init__.py&quot;, line 18, in &lt;module&gt; from ._core import ( File &quot;/usr/local/lib/python3.9/site-packages/trio/_core/__init__.py&quot;, line 27, in &lt;module&gt; from ._run import ( File &quot;/usr/local/lib/python3.9/site-packages/trio/_core/_run.py&quot;, line 2452, in &lt;module&gt; from ._io_epoll import EpollIOManager as TheIOManager File &quot;/usr/local/lib/python3.9/site-packages/trio/_core/_io_epoll.py&quot;, line 188, in &lt;module&gt; class EpollIOManager: File &quot;/usr/local/lib/python3.9/site-packages/trio/_core/_io_epoll.py&quot;, line 189, in EpollIOManager _epoll = attr.ib(factory=select.epoll) AttributeError: module 'select' has no attribute 'epoll' </code></pre> <p>Any thoughts ?</p>
<python><django><celery><playwright>
2023-04-25 17:33:14
0
334
Ekin Ertaç
76,103,908
19,980,284
Wrapping Y-tick labels in Bold Removes whitespace
<p>I have this code:</p> <pre class="lang-py prettyprint-override"><code>s = &quot;&quot;&quot;level,cvc Vasopressor dose,0 Low, 0.263 Med, .563 High, 0.777 Vasopressor trend,0 Decreasing, 0.367 Stable, 0.521 Increasing, 0.714 Duration,0 8 hours, 0.478 24 hours, 0.591 PIV Location,0 Upper Arm, 0.528 Forearm, 0.541 Case,0 7, 0.541 8, 0.526 9, 0.507 10, 0.564&quot;&quot;&quot; data = np.array([a.split(',') for a in s.split(&quot;\n&quot;)]) #print(data) cvc = pd.DataFrame(data[1:], columns=data[0]) cvc['cvc'] = cvc['cvc'].apply(float) #print(cvc) variableNames = {'Vasopressor dose', 'Vasopressor trend', 'Duration', 'PIV Location', 'Case'} font_color = '#525252' hfont = {'fontname':'DejaVu Sans'} facecolor = '#eaeaf2' index = cvc.index#['level'] column0 = cvc['cvc']*100 title0 = 'Central Line Placed' fig, axes = plt.subplots(figsize=(8, 12), facecolor=facecolor) axes.barh(index, column0, align='center', color='darkslategray', zorder=10) axes.set_title(title0, fontsize=18, pad=15, color='black', **hfont) grid_params = dict( zorder=0, axis='x' ) # If you have positive numbers and want to invert the x-axis of the left plot axes.invert_xaxis() # To show data from highest to lowest plt.gca().invert_yaxis() axes.set(xlim = [0,100]) axes.yaxis.tick_left() axes.set_yticks(range(len(cvc))) formattedyticklabels = [r'$\bf{'+f&quot;{t}&quot;+r'}$' if t in variableNames else t for t in cvc['level']] #print(formattedyticklabels) axes.set_yticklabels(formattedyticklabels) axes.grid(**grid_params) fig.supxlabel(&quot;adjusted proportion of respondents&quot;, x = 0.62, size = 15) axes.tick_params(axis='both', labelsize=15) axes.tick_params(left = False) fig.patch.set_facecolor('white') plt.savefig(&quot;cvc.jpg&quot;) fig.tight_layout() plt.show() </code></pre> <p><a href="https://i.sstatic.net/2mnxA.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2mnxA.jpg" alt="enter image description here" /></a></p> <p>For the y-tick labels with spaces, referred to in the set 
<code>variableNames</code>, those spaces are removed when I add the bold formatting. Any ideas how to get around this? I have referred to this <a href="https://stackoverflow.com/q/69003230/15975987">question</a> but not sure how to implement it in my list comprehension:</p> <pre><code>formattedyticklabels = [r'$\bf{'+f&quot;{t}&quot;+r'}$' if t in variableNames else t for t in cvc['level']] </code></pre> <p>Any tips?</p>
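Mathtext collapses literal spaces inside `$...$`, which is why the bolded labels lose their whitespace. One workaround (a sketch, not part of the original post) is to escape each space as `\ ` before wrapping the label:

```python
variableNames = {'Vasopressor dose', 'Vasopressor trend', 'Duration',
                 'PIV Location', 'Case'}
levels = ['Vasopressor dose', 'Low', 'PIV Location']  # sample of the real column

# mathtext ignores plain spaces, so turn each one into an escaped space '\ '
formatted = [r'$\bf{' + t.replace(' ', r'\ ') + r'}$' if t in variableNames else t
             for t in levels]
print(formatted[0])  # $\bf{Vasopressor\ dose}$
```

An alternative that avoids mathtext entirely: leave the labels plain and set `lbl.set_fontweight('bold')` on the matching `Text` objects from `axes.get_yticklabels()`.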
<python><pandas><matplotlib><list-comprehension>
2023-04-25 17:28:12
1
671
hulio_entredas
76,103,900
12,226,377
Pandas merge two data frames on a specific condition
<p>I have two different dataframes, df1 and df2, which I would like to merge based on the following condition:</p> <p>df1:</p> <pre><code>id  category
1   cat1
2   cat2
</code></pre> <p>df2:</p> <pre><code>id  category_name
1   shoes
1   car
</code></pre> <p>resultant df:</p> <pre><code>id  category  category_name
1   cat1      shoes
1             car
2   cat2
</code></pre> <p>I don't want cat1 to repeat when the merge happens; on the extra rows I only want to keep the id and the category_name value that I am trying to store. How can I achieve this?</p>
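One possible sketch (not from the original post): do a plain left merge, then blank out the repeated `category` value on all but the first row of each id:

```python
import pandas as pd

df1 = pd.DataFrame({"id": [1, 2], "category": ["cat1", "cat2"]})
df2 = pd.DataFrame({"id": [1, 1], "category_name": ["shoes", "car"]})

out = df1.merge(df2, on="id", how="left")
# keep category only on the first row per id, blank it on the duplicates
out["category"] = out["category"].mask(out.duplicated("id"), "")
print(out)
```

This keeps id 2 with an empty `category_name` (NaN), matching the desired result; use `fillna("")` afterwards if a blank string is preferred there too.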
<python><pandas><merge>
2023-04-25 17:27:16
1
807
Django0602
76,103,876
10,966,786
Why does Shamir secret sharing appear to need ordered shares?
<p>The implementation of Shamir secret sharing in this code, only generates the original image if the shares are provided in consecutive order (ex: [2,3,4]) and won't work in any other share order (ex: [2,4,6] or [4,1,3]). However, Shamir secret reconstruction does not require the shares to be in any order, then why does this fail?</p> <pre><code>import numpy as np from scipy.interpolate import lagrange as lag import os def polynomial(img, n, k): ''' Generate lagrange polynomial of degree k-1 f(x) = c1(x^k) + c2(x^k-1) .... + secret (mod prime) prime &gt; secret ''' coef = np.random.randint(low = 0, high = 251, size = (img.shape[0], k - 1)) #Coefficients should not exceed value of prime number chosen gen_imgs = [] for i in range(1, n + 1): base = np.array([i ** j for j in range(1, k)]) base = np.matmul(coef, base) imgValue_ = (img + base) % 251 gen_imgs.append(imgValue_) return np.array(gen_imgs) def reconstruct(imgs, index, k): ''' Reconstruct image using share index values for k shares ''' print(&quot;Shares: &quot;, index) assert imgs.shape[0] &gt;= k x = np.array(index) dim = imgs.shape[1] img = [] for i in range(dim): if i % 10000 == 0: print(&quot;Reconstructing pixel &quot;, i, &quot; of &quot;, dim, &quot; pixels&quot;) y = imgs[:, i] poly = lag(x, y) pixel = poly(0) % 251 img.append(pixel) return np.array(img) if __name__ == &quot;__main__&quot;: pathPrefix = path.split('.')[0] os.makedirs(pathPrefix, exist_ok=True) img_flattened, shape = util.read_image(path) gen_imgs = polynomial(img_flattened, n = n, k = k) to_save = gen_imgs.reshape(n, *shape) for i, img in enumerate(to_save): Image.fromarray(img.astype(np.uint8)).save(pathPrefix + &quot;/share&quot; + &quot;_{}.jpeg&quot;.format(i + 1)) #Secret reconstruction shareIndex = list(map(int, input(&quot;\nEnter the index of shares to reconstruct: &quot;).split())) origin_img = reconstruct(gen_imgs, shareIndex, k = k) origin_img = origin_img.reshape(*shape) 
Image.fromarray(origin_img.astype(np.uint8)).save(pathPrefix + &quot;/reconstructed_image.jpeg&quot;) </code></pre> <p>The read_image function has the following definition</p> <pre><code>def read_image(path): ''' Reads image from file and converts it into numpy array in greyscale ''' img = Image.open(path).convert('L') img_array = np.asarray(img) return img_array.flatten(), img_array.shape </code></pre>
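Share order indeed should not matter for Shamir reconstruction. The likely bug is in `reconstruct`: `y = imgs[:, i]` takes the share values in storage order (shares 1..n) while `x` carries the user-chosen share numbers, so the x/y pairs only line up when the indices are the first shares in order; indexing the rows to match, e.g. `y = imgs[np.array(index) - 1, i]`, keeps each y paired with its x. A self-contained pure-Python demo (parameters hypothetical, modulus 251 as in the question) showing that correctly paired shares reconstruct in any order:

```python
import random

P = 251  # prime modulus, as in the question; secret must be < P

def make_shares(secret, n, k):
    # random degree-(k-1) polynomial with the secret as constant term
    coef = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return {x: sum(c * pow(x, j, P) for j, c in enumerate(coef)) % P
            for x in range(1, n + 1)}

def reconstruct(shares):
    # Lagrange interpolation at x = 0; shares is {x: y}, any order, any k-subset
    total = 0
    xs = list(shares)
    for xi in xs:
        num = den = 1
        for xj in xs:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + shares[xi] * num * pow(den, -1, P)) % P
    return total
```

Because each y value travels with its own x, subsets like `{4, 1, 3}` or `{2, 4, 6}` recover the secret just as well as `{1, 2, 3}`.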
<python><numpy><cryptography>
2023-04-25 17:24:42
1
496
Raisa A
76,103,811
12,282,349
FastAPI_Login RedirectResponse after authorization fails
<p>I have a simple login form that is opened using a <code>GET</code> request, and the user submits the form with a <code>POST</code> request. After successful authentication, I redirect the user to the main page of my web application using a <code>RedirectResponse</code> object.</p> <p>However, despite the successful redirection, the user is not authorized on the main page, as if the authentication cookie was <strong>not</strong> set. There are no error messages displayed.</p> <p>Here's the code for my login route:</p> <pre class="lang-py prettyprint-override"><code>@router.post('/login')
def login(response: Response, data: OAuth2PasswordRequestForm = Depends(), db: Session = Depends(get_db)):
    email = data.username
    password = data.password
    user = db.query(DbUser).filter(DbUser.email == email).first()
    if not user:
        # you can return any response or error of your choice
        raise InvalidCredentialsException
    if not Hash.verify(user.password, password):
        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail='Incorrect password')
    token = manager.create_access_token(
        data={'sub': email}
    )
    resp = RedirectResponse(url='/post/stat/all', status_code=status.HTTP_302_FOUND)
    manager.set_cookie(response, token)
    return resp
</code></pre> <p>However, if I change <code>resp = RedirectResponse(...</code> to <code>resp = 'ok'</code>, then everything works fine, just without a URL redirection.</p> <p>Tried also with no luck:</p> <pre class="lang-py prettyprint-override"><code>return templates.TemplateResponse(&quot;/login/success.html&quot;,{&quot;request&quot;: request})
</code></pre> <p>What could the issue be, and how can I fix it?</p>
<python><http-redirect><fastapi><httpresponse><setcookie>
2023-04-25 17:14:44
1
513
Tomas Am
76,103,804
1,914,034
numpy - dynamic slicing using mix of all() and any()
<p>I am trying to slice a point cloud numpy array dynamically. The slicing should keep points whose x/y coordinates fall inside a list of boundaries defined by <code>minx</code>, <code>miny</code>, <code>maxx</code>, <code>maxy</code>.</p> <p>I thought I had a working algorithm, but I'm facing the <code>The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</code> error.</p> <p>For instance (this is only demo data):</p> <pre><code>import numpy as np

bounds_list = [
    (0, 0, 5, 5),
    (8, 8, 10, 10)
]

#col0 --&gt; x , col1 --&gt; y , col2 --&gt; z
random_arr = np.array([10, 10, 100]) * np.random.rand(100, 3)

def filter_by_bounds(arr):
    slices = np.array([np.array([bounds[0] &lt;= arr[0] &lt;= bounds[2],
                                 bounds[1] &lt;= arr[1] &lt;= bounds[3]]).all()
                       for bounds in bounds_list]).any()
    return arr[slices]

filtered_arr = filter_by_bounds(random_arr)
print(filtered_arr)
</code></pre> <p>In this demo, I would like to retrieve all points whose x/y coordinates are within at least one of the bounds.</p>
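The error comes from chained comparisons like `bounds[0] <= arr[0] <= bounds[2]` being applied to whole rows rather than scalar coordinates. A vectorized sketch (not from the original post) that builds one boolean mask per bounds rectangle and ORs them together:

```python
import numpy as np

bounds_list = [(0, 0, 5, 5), (8, 8, 10, 10)]
random_arr = np.array([10, 10, 100]) * np.random.rand(100, 3)

def filter_by_bounds(arr, bounds_list):
    # one boolean per point; True if the point falls in at least one rectangle
    mask = np.zeros(len(arr), dtype=bool)
    for minx, miny, maxx, maxy in bounds_list:
        mask |= ((arr[:, 0] >= minx) & (arr[:, 0] <= maxx) &
                 (arr[:, 1] >= miny) & (arr[:, 1] <= maxy))
    return arr[mask]

filtered = filter_by_bounds(random_arr, bounds_list)
```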
<python><numpy>
2023-04-25 17:13:19
1
7,655
Below the Radar
76,103,582
6,496,267
How to upload JSON string to GCP bucket using Python without having to write it to a file?
<p>I have been uploading a JSON file to a GCP bucket like this. <code>data_array</code> is an array containing a JSON object for each row; it looks like <code>[{key1: value1, key2: value2},{key1: value1, key2: value2},{key1: value1, key2: value2}...]</code>.</p> <pre><code>path_to_file = # i dump array to file
blob = bucket.blob(blob_name)
#blob.upload_from_filename(path_to_file)
</code></pre> <p>I want to skip this &quot;dump to file&quot; step and upload my array to the GCP bucket directly:</p> <pre><code>blob.upload_from_string(path_to_file)
</code></pre> <p>This throws an error that I don't understand. Is there any way to push the array directly to the bucket without having to write it to a file?</p>
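google-cloud-storage blobs can be written straight from an in-memory string with `upload_from_string`; the snippet above passes the file path instead of the JSON payload itself. A sketch (the `bucket`/`blob_name` wiring is assumed to exist, as in the question) that serializes with `json.dumps` and uploads with no temporary file:

```python
import json

def upload_json(bucket, blob_name, data_array):
    # serialize the array in memory; no dump-to-file step
    payload = json.dumps(data_array)
    blob = bucket.blob(blob_name)
    # upload_from_string accepts str or bytes plus an optional content type
    blob.upload_from_string(payload, content_type="application/json")
```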
<python><google-cloud-platform>
2023-04-25 16:41:33
0
717
john
76,103,488
3,719,167
Django using database without migration
<p>I have a Django application running using MySQL database, and everything works fine.</p> <p>For another part of the same application, I want to create a new Django application, implement database sharding, and have a separate database to store the tracking data and use the existing database (used by previous Django application) for authentication and other stuff.</p> <p>But since I need to get data from the existing database using Django ORM, I created models with the required-only fields in the new application as well.</p> <pre class="lang-py prettyprint-override"><code># authentication.models.User class User(AbstractUser): email = models.EmailField('email address', blank=True, unique=True) is_shared = models.BooleanField(verbose_name='Is Shared User', default=False) class Meta: app_label = 'authentication' managed = False </code></pre> <p>I want to use this model for authentication in the admin panel as well (the same is used in the existing application). But I do not want this new application to migrate changes to the master database, so added <code>managed=False</code> in the <code>Meta</code> class.</p> <p>The <code>settings.py</code> file is</p> <pre class="lang-py prettyprint-override"><code>INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', # Postgres partitioning &quot;django.contrib.postgres&quot;, &quot;psqlextra&quot;, #################### # Created apps #################### 'tracking', 'authentication', ] AUTH_USER_MODEL = 'authentication.User' DATABASE_ROUTERS = [ 'app.database_routers.tracking.TrackingRouter' ] DATABASES = { 'default': eval(os.environ.get('DATABASE_MASTER')), 'database_tracking': eval(os.environ.get('DATABASE_TRACKING')) } </code></pre> <p>where the <code>default</code> database has to be used for authentication and access to the admin panel. 
Migration should not write to this database as the migration should only be managed from the previous application to this database.</p> <p>whereas <code>database_tracking</code> migration should run from this application. Therefore, I have written the database router <code>TrackingRouter</code> as</p> <pre class="lang-py prettyprint-override"><code>class TrackingRouter: route_app_labels = { 'tracking' } def db_for_read(self, model, **hints): if model._meta.app_label in self.route_app_labels: return 'database_tracking' return None def db_for_write(self, model, **hints): if model._meta.app_label in self.route_app_labels: return 'database_tracking' return None def allow_relation(self, obj1, obj2, **hints): if obj1._meta.app_label in self.route_app_labels or obj2._meta.app_label in self.route_app_labels: return True return None def allow_migrate(self, db, app_label, model_name=None, **hints): if app_label in self.route_app_labels: return db == 'database_tracking' return None </code></pre> <p>using this config, I want the migration to run for the models defined in the <code>tracking</code> application which should go to the <code>database_tracking</code> database.</p> <p>When running <code>python manage.py migrate</code>, it gives <code>No migrations to apply.</code> but no table is created in the <code>database_tracking</code>.</p> <p>While running <code>python manage.py migrate --database database_tracking</code> gives the following error</p> <pre><code>django.db.utils.ProgrammingError: relation &quot;django_content_type&quot; does not exist LINE 1: ...&quot;.&quot;app_label&quot;, &quot;django_content_type&quot;.&quot;model&quot; FROM &quot;django_co... 
^ </code></pre> <p>Please note: with this setup, I want the <code>default</code> database to be effectively read-only from this application, used only for authentication and logging in to the admin panel, while the other database, <code>database_tracking</code>, should be used for reads and writes when the application label is <code>tracking</code>.</p>
<python><django>
2023-04-25 16:29:11
0
9,922
Anuj TBE
76,103,440
7,879,074
Build a dictionary from large lists in python
<p>I have two lists, <code>A_flat</code>, which contains 1,000,000 sentences, and <code>B_flat</code> which for each sentence contains a list of four labels. This is my code:</p> <pre><code># Find indices of sentences with length &lt;= max_length short_sent_indices = [i for i, sent in enumerate(A_flat) if len(sent.split()) &lt;= max_length] # Build a dictionary mapping each label to the indices of sentences that have that label label_to_indices = {label: [i for i in short_sent_indices if label in B_flat[i]] for label in set(label for i in short_sent_indices for label in B_flat[i] if label != 'None')} label_sets = [{label for label in B_flat[i] if label != 'None'} for i in short_sent_indices] # Find positive pairs using set intersection positive_pairs = {(i, j) for i in range(len(short_sent_indices)) for j in range(i + 1, len(short_sent_indices)) if bool(label_sets[i] &amp; label_sets[j])} </code></pre> <p>This code is very slow and only works for a small subset of <code>A_flat</code>. How can I enhance this code to make it work for 1 million sentences?</p>
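The quadratic double loop over all index pairs is the bottleneck. A sketch (with tiny stand-in data, not the real million-sentence lists) of replacing it with an inverted index from label to positions, so only positions that actually share a label are paired; note the worst case is still quadratic when one label covers most sentences:

```python
from collections import defaultdict
from itertools import combinations

# hypothetical small stand-ins for A_flat / B_flat
A_flat = ["short sentence", "another one", "way too long " * 50, "last"]
B_flat = [["x", "None"], ["x", "y"], ["y"], ["z"]]
max_length = 10

short = [i for i, s in enumerate(A_flat) if len(s.split()) <= max_length]

# inverted index: label -> positions (within `short`) carrying that label
label_to_pos = defaultdict(list)
for pos, i in enumerate(short):
    for label in set(B_flat[i]) - {"None"}:
        label_to_pos[label].append(pos)

# pairs sharing at least one label, without scanning every pair
positive_pairs = set()
for positions in label_to_pos.values():
    positive_pairs.update(combinations(positions, 2))
print(positive_pairs)  # {(0, 1)}
```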
<python><dictionary>
2023-04-25 16:24:56
0
631
Abrar
76,103,342
74,562
Pygame slowdown on Raspberry Pi with multiple surfaces?
<p>I'm trying to do a small visualizer project on a Raspberry Pi (was aiming for a 3B but went to a 4 to have the better processing capabilities), and I noticed an odd behavior that I can't account for.</p> <p>Initially, I was just drawing to a main <code>display</code> surface (at 1920x1080 resolution) and the application was running faster than I'd expected a Pi to do.</p> <p>However, I wanted to add a UI and be able to save the output free of the UI, so I broke my program up into multiple surfaces.</p> <p>Before, I had code along the lines of:</p> <pre><code># drawing multiple rectangles on the screen for r in rects: pygame.draw.rect(display, color, r) # blit the UI txt = &quot;This is my shortened UI&quot; label = font.render(txt, 1, (220, 220, 220)) display.blit(label, (20, 20)) pygame.display.update() </code></pre> <p>I updated it to using multiple surfaces so that I could directly save the <code>mainSurf</code> instead of the <code>display</code> surface. <code>mainSurf</code> is the same size as <code>display</code>, and <code>uiSurf</code> is set to 200x200.</p> <pre><code>def draw_rect_alpha(surf, col, rect): shape_surf = pygame.Surface(pygame.Rect(rect).size, pygame.SRCALPHA) pygame.draw.rect(shape_surf, col, shape_surf.get_rect()) surf.blit(shape_surf, rect) # .. in main loop # drawing multiple rectangles on the screen for r in rects: draw_rect_alpha(mainSurf, color, r) # blit the UI txt = &quot;This is my shortened UI&quot; label = font.render(txt, 1, (220, 220, 220)) uiSurf.blit(label, (20, 20)) display.blit(mainSurf, (0, 0)) display.blit(uiSurf, (0, 0)) pygame.display.update() </code></pre> <p>Now, I may have missed a few lines of code as I'm transposing it manually, however the only real changes I made were to having multiple surfaces. I'm wondering if the issue how I'm rendering each rect, or having multiple surfaces in memory. 
The FPS dips significantly with the multi-surface version compared to drawing straight to the display.</p> <p>Other notes: I'm using SDL 1.2 at present, since I've been struggling to get SDL2 up and running on my Pi. I'm also running from a pure terminal (no X windows) to minimize what else is running.</p>
<python><pygame><raspberry-pi><raspberry-pi4>
2023-04-25 16:13:09
1
3,862
erik
76,103,340
8,280,171
Consuming Secret Values when building React using CDK
<p>I'm trying to consume secret values that already exist in Secrets Manager when building my frontend React app using CDK and Python.</p> <pre><code>secret_arn = f&quot;arn:aws:secretsmanager:us-west-2:1234:secret:my_variables-123456&quot;

# Look up the existing secret by its full ARN
my_variables = secretmanager.Secret.from_secret_complete_arn(
    self, 'my_variables', secret_arn)

value_i_want = cdk.Token.as_string(my_variables.secret_value_from_json('value_1'))

bundle = s3_deploy.Source.asset(
    path=&quot;file-explorer&quot;,
    bundling=BundlingOptions(
        image=DockerImage.from_registry(
            image=&quot;public.ecr.aws/docker/library/node:19&quot;
        ),
        environment={
            &quot;env_variable_1&quot;: value_i_want,
        },
        command=[
            &quot;sh&quot;,
            &quot;-c&quot;,
            &quot; &amp;&amp; &quot;.join(
                [
                    &quot;printenv&quot;,
                    &quot;npm install --cache .npm&quot;,
                    &quot;npm run prebuild&quot;,
                    &quot;npm run build&quot;,
                    &quot;cp -r /asset-input/build/* /asset-output/&quot;,
                ]
            ),
        ],
    ),
)
</code></pre> <p>When running <code>cdk ls</code>, and in the actual build log, I only see <code>${Token[TOKEN.1534]}</code>, and after the stack deployment is done I can see the value is not consumed.</p>
<python><aws-cdk><aws-secrets-manager>
2023-04-25 16:12:59
1
705
Jack Rogers
76,103,279
4,502,950
Apending values in google sheets add ' with the values using gspread
<p>I am following <a href="https://medium.com/@jb.ranchana/write-and-append-dataframes-to-google-sheets-in-python-f62479460cf0#:%7E:text=Once%20you%20write%20the%20data,tolist()%20before%20appending." rel="nofollow noreferrer">this</a> tutorial to append new rows to my existing dataset which is working fine except one thing that is that the code is adding ' at the start of values like this</p> <p><a href="https://i.sstatic.net/zkHFM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zkHFM.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/Z5G8M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z5G8M.png" alt="enter image description here" /></a></p> <p>Last two rows are the problem. I have tried converting them to string but it still shows the same output</p> <pre><code>try: merge_values = merge.values.tolist() gs.values_append('Merged', {'valueInputOption': 'RAW'}, {'values': merge_values}) except: print('No new data') </code></pre>
<python><pandas><google-sheets><gspread>
2023-04-25 16:06:33
1
693
hyeri
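The leading apostrophe is how Sheets marks a cell stored as text: with `valueInputOption: 'RAW'`, any value sent as a Python string stays a string. One approach is to convert numeric-looking strings into real numbers before building the row lists; a plain-Python sketch (alternatively, `'USER_ENTERED'` lets Sheets parse the values itself):

```python
# Rows as values_append would receive them: numeric values trapped as strings
rows = [["a", "1.5", "3"], ["b", "2.0", "4"]]

def to_native(v):
    """Convert numeric-looking strings to int/float; leave everything else alone."""
    if isinstance(v, str):
        try:
            return int(v)
        except ValueError:
            try:
                return float(v)
            except ValueError:
                return v
    return v

clean = [[to_native(v) for v in row] for row in rows]
print(clean)  # [['a', 1.5, 3], ['b', 2.0, 4]]
```

With a DataFrame, applying `pd.to_numeric` to the numeric columns before `merge.values.tolist()` achieves the same thing column-wise.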
76,103,278
6,626,531
Mypy error when an attribute is initialized to None and then used later
<p>Using <code>mypy==0.991</code> and Python <code>3.8</code>.</p> <p>I have a class that initializes its attribute <code>kfp</code> to <code>None</code>. The user should run the <code>open</code> method first and then <code>produce</code>, and so at that point in time, <code>self.kfp</code> will not be <code>None</code>. Mypy however thinks that it is bad.</p> <p>Thoughts on resolving this or adding an argument to <code>pyproject.toml</code> to ignore it?</p> <p>Error:</p> <pre><code>error: &quot;None&quot; has no attribute &quot;produce&quot; [attr-defined] </code></pre> <p>Code</p> <pre class="lang-py prettyprint-override"><code>class Produce: &quot;&quot;&quot;Class.&quot;&quot;&quot; def __init__(self) -&gt; None: &quot;&quot;&quot;Init Doct String.&quot;&quot;&quot; self.kfp = None def open(self) -&gt; None: &quot;&quot;&quot;Open Connection to Producer. Use with produce individually.&quot;&quot;&quot; self.kfp = build() def produce(self) -&gt; None: &quot;&quot;&quot;Produce output.&quot;&quot;&quot; self.kfp.produce() </code></pre>
<python><mypy><python-typing>
2023-04-25 16:06:32
1
1,975
Micah Pearce
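mypy infers the attribute's type from its initializer, so `self.kfp = None` pins it to `None`. Annotating it as `Optional` and guarding before use satisfies the checker without any `pyproject.toml` changes. A runnable sketch, where `_Producer` and `build()` are stand-ins for the real producer (assumptions, not from the original code):

```python
from typing import Optional

class _Producer:
    """Stand-in for whatever build() returns in the real code."""
    def produce(self) -> str:
        return "produced"

def build() -> _Producer:
    return _Producer()

class Produce:
    def __init__(self) -> None:
        # Annotate as Optional so mypy knows the attribute may later hold a producer
        self.kfp: Optional[_Producer] = None

    def open(self) -> None:
        self.kfp = build()

    def produce(self) -> str:
        # Runtime guard that also narrows Optional[_Producer] to _Producer for mypy
        if self.kfp is None:
            raise RuntimeError("call open() before produce()")
        return self.kfp.produce()

p = Produce()
p.open()
print(p.produce())  # produced
```

Alternatives with the same effect include an `assert self.kfp is not None` just before the call, or `typing.cast`; the `Optional` annotation plus guard has the advantage of also failing loudly at runtime when `open()` was skipped.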
76,103,041
1,405,543
How to change background color of the title of legend
<p>In the example below, how can I change the <strong>background</strong> color of &quot;Title&quot; to be red (with alpha=.5), not the font color of &quot;Title&quot;?</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot([1,2,3,], [1,2,3], label='Series') legend = ax.legend(title='Title') # Access to the legend title box adapted from # https://stackoverflow.com/a/63570572 legend._legend_title_box._text.set_color('red') </code></pre> <p><a href="https://i.sstatic.net/WJQui.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WJQui.png" alt="enter image description here" /></a></p>
<python><matplotlib><colors><legend>
2023-04-25 15:42:48
4
1,165
jII
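The legend title is an ordinary `Text` artist, and `Text.set_bbox` can give it a background patch with its own face color and alpha. A sketch, run headless on the Agg backend:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 2, 3], label="Series")
legend = ax.legend(title="Title")

# The legend title is a Text artist; give it a background box with 50% alpha
title = legend.get_title()
title.set_bbox({"facecolor": "red", "alpha": 0.5, "edgecolor": "none"})

fig.canvas.draw()  # force a render so the bbox patch is laid out
print(title.get_text())  # Title
```

`legend.get_title().set_backgroundcolor('red')` also works, but `set_bbox` exposes the alpha and edge styling that plain `set_backgroundcolor` does not.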
76,103,001
2,482,149
Python Logging is Blocking: Datadog
<p>There's a custom log handler in Python called <code>DataDogCustomLogHandler</code>.</p> <p>This is the function to initialise the logger:</p> <pre><code>os.environ[&quot;DD_API_KEY&quot;] = 'XXXXX' os.environ[&quot;DD_SITE&quot;] = 'XXXXX' host_name = socket.gethostname() def init_datadog_logging(service_name: str = None, env_name: str = None, min_log_level: int = logging.INFO): tags = f'service: {service_name}, host: {host_name}, environment: {env_name}' logging.basicConfig() logging.getLogger().setLevel(min_log_level) logger = logging.getLogger() if env_name == 'UAT' or env_name == 'Production': datadog_custom_handler = DatadogCustomLogHandler(tags=tags, service=service_name, level=min_log_level) logger.addHandler(datadog_custom_handler) </code></pre> <p>I have an async function that grabs data using <code>aiohttp</code>:</p> <pre><code>logger = logging.getLogger(__name__) async def http_get(url: str) -&gt; JSONObject: logger.info(f'Getting data from {url}') async with aiohttp.ClientSession() as session: async with session.get(url) as response: return await response.json() </code></pre> <p>In my <code>main</code> function I run this:</p> <pre><code>init_datadog_logging('log.test', 'UAT', min_log_level=logging.INFO) logger = logging.getLogger(__name__) async def main_async() -&gt; None: time_before = time.perf_counter() async with aiohttp.ClientSession() as session: tasks = [] for date in dates: url = BASE_URL + date tasks.append(asyncio.ensure_future(http_get(url))) responses = await asyncio.gather(*tasks) logger.info(f'Total time taken: {time.perf_counter() - time_before}') asyncio.run(main_async()) </code></pre> <p>It takes 0.97 seconds without the <code>logger</code>, but as soon as I compute <code>logger.info</code> it becomes about 8 seconds. How do I unblock the <code>logging</code> library? Or is there a better way to send datadog logs via http without blocking in python?</p>
<python><logging><python-asyncio>
2023-04-25 15:38:15
1
1,226
clattenburg cake
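A handler whose `emit()` does network I/O runs inside every `logger.info()` call, which is what blocks the event loop here. The standard-library pattern is to enqueue records with `QueueHandler` and ship them from a background thread via `QueueListener`, so the Datadog handler never runs on the caller's thread. A sketch with the Datadog handler replaced by a stand-in that just records messages:

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

class SlowNetworkHandler(logging.Handler):
    """Stand-in for a handler that does blocking I/O per record."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(self.format(record))

log_queue = queue.Queue()
slow_handler = SlowNetworkHandler()

logger = logging.getLogger("datadog_demo")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(QueueHandler(log_queue))  # caller only enqueues: non-blocking

# The listener drains the queue on a background thread and feeds the slow handler
listener = QueueListener(log_queue, slow_handler)
listener.start()
logger.info("hello from the event loop")
listener.stop()  # flushes remaining records before returning

print(slow_handler.records)  # ['hello from the event loop']
```

In the real setup, the `DatadogCustomLogHandler` instance would be passed to `QueueListener` in place of the stand-in, and only the `QueueHandler` would be attached to the logger.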
76,102,935
777,593
pandas group by with reset index
<p>Given I have following pandas dataframe</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>UserID</th> <th>LoginDate</th> <th>PlacedOrderItems</th> <th>Other column</th> </tr> </thead> <tbody> <tr> <td>A123456</td> <td>2023-04-01</td> <td>1</td> <td>a</td> </tr> <tr> <td>A123456</td> <td>2023-04-10</td> <td>0</td> <td>b</td> </tr> <tr> <td>B123456</td> <td>2023-04-05</td> <td>0</td> <td>c</td> </tr> <tr> <td>B123456</td> <td>2023-04-06</td> <td>2</td> <td>d</td> </tr> <tr> <td>B123456</td> <td>2023-04-07</td> <td>0</td> <td>e</td> </tr> </tbody> </table> </div> <p>I want to use group by <code>UserID</code> and get <code>min</code> of <code>LoginDate</code> and <code>max</code> of <code>PlacedOrderItems</code> added to each row of my datafarme. I tried Group by but turns the column names into a weird multi level indexing.</p> <p>But What I want is something like below where I preserve the original dataframe and just get two new columns added.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>UserID</th> <th>LoginDate</th> <th>PlacedOrderItems</th> <th>Other column</th> <th>min_login_date</th> <th>max_order_items</th> </tr> </thead> <tbody> <tr> <td>A123456</td> <td>2023-04-01</td> <td>1</td> <td>a</td> <td>2023-04-01</td> <td>1</td> </tr> <tr> <td>A123456</td> <td>2023-04-10</td> <td>0</td> <td>b</td> <td>2023-04-01</td> <td>1</td> </tr> <tr> <td>B123456</td> <td>2023-04-05</td> <td>0</td> <td>c</td> <td>2023-04-05</td> <td>2</td> </tr> <tr> <td>B123456</td> <td>2023-04-06</td> <td>2</td> <td>d</td> <td>2023-04-05</td> <td>2</td> </tr> <tr> <td>B123456</td> <td>2023-04-07</td> <td>1</td> <td>e</td> <td>2023-04-05</td> <td>2</td> </tr> </tbody> </table> </div>
<python><pandas><group-by><dataframe>
2023-04-25 15:28:21
2
2,411
Khurram Majeed
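`groupby(...).transform(...)` returns one value per original row, so the aggregates broadcast straight back onto the frame with no multi-level columns and no merge or `reset_index` needed. A sketch using the columns from the question:

```python
import pandas as pd

df = pd.DataFrame({
    "UserID": ["A123456", "A123456", "B123456", "B123456", "B123456"],
    "LoginDate": ["2023-04-01", "2023-04-10", "2023-04-05",
                  "2023-04-06", "2023-04-07"],
    "PlacedOrderItems": [1, 0, 0, 2, 0],
    "Other column": list("abcde"),
})

# transform keeps the original row index, so the result assigns directly
df["min_login_date"] = df.groupby("UserID")["LoginDate"].transform("min")
df["max_order_items"] = df.groupby("UserID")["PlacedOrderItems"].transform("max")
print(df)
```

The `min` on string dates works here only because ISO `YYYY-MM-DD` sorts chronologically; with real timestamps, convert via `pd.to_datetime` first.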
76,102,887
4,819,376
Plotly range of subplot doesn't change
<p>I'm trying to set the range of subplots using <code>xaxis_range</code> but I found it's working on the left subplot only</p> <h1>Data</h1> <pre class="lang-py prettyprint-override"><code>import pandas as pd import plotly.graph_objects as go from plotly.subplots import make_subplots data = {'variable': {0: 'Case', 1: 'Exam', 2: 'History', 3: 'MAP', 4: 'Volume'}, 'margins_fluid': {0: 0.497, 1: 0.668, 2: 0.506, 3: 0.489, 4: 0.718}, 'margins_vp': {0: 0.809, 1: 0.893, 2: 0.832, 3: 0.904, 4: 0.92}} df = pd.DataFrame(data) </code></pre> <h1>Plot</h1> <pre class="lang-py prettyprint-override"><code># create subplots fig = make_subplots( rows=1, cols=2, shared_xaxes=False, shared_yaxes=True, horizontal_spacing=0, subplot_titles=['&lt;b&gt;Fluid&lt;/b&gt;', '&lt;b&gt;Vasopressor&lt;/b&gt;']) fig.append_trace( go.Bar( x=df['margins_fluid'], y=df['variable'], text=df[&quot;margins_fluid&quot;], textposition='inside', texttemplate=&quot;%{x:.4p}&quot;, orientation='h', width=0.7, # space between bars showlegend=False, ), 1, 1) # 1,1 represents row 1 column 1 in the plot grid fig.append_trace( go.Bar( x=df['margins_vp'], y=df['variable'], text=df[&quot;margins_vp&quot;], textposition='inside', texttemplate=&quot;%{x:.4p}&quot;, orientation='h', width=0.7, showlegend=False), 1, 2) # 1,2 represents row 1 column 2 in the plot grid fig.update_xaxes( tickformat=',.0%', row=1, col=1, autorange='reversed',) fig.update_xaxes( tickformat=',.0%', row=1, col=2) fig.update_layout( title_text=&quot;Title&quot;, barmode=&quot;group&quot;, width=800, height=700, title_x=0.5, xaxis_range=[0, 1], # This doesn't work xaxis2_range=[0, 1], # This works ) fig.show() </code></pre> <p><a href="https://i.sstatic.net/14NT2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/14NT2.png" alt="enter image description here" /></a></p>
<python><pandas><plotly>
2023-04-25 15:23:47
1
13,495
rpanai
76,102,677
14,485,257
Expanding a pandas dataframe based on new values of a column
<p>I have two pandas dataframes called &quot;data&quot; and &quot;temp_data&quot;:</p> <p><a href="https://i.sstatic.net/EGd3K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EGd3K.png" alt="enter image description here" /></a></p> <p>&quot;data&quot; has columns &quot;date_time&quot; and &quot;Voltage&quot;;</p> <p>&quot;temp_data&quot; has columns &quot;date_time&quot; and &quot;Temperature&quot;.</p> <p>I find the absolute minimum and maximum values of the date values present in the &quot;date_time&quot; column between the two dataframes using:</p> <pre><code>min_date = pd.concat([data[&quot;date_time&quot;], temp_data[&quot;date_time&quot;]]).min() max_date = pd.concat([data[&quot;date_time&quot;], temp_data[&quot;date_time&quot;]]).max() </code></pre> <p>I need to now add new rows into each of the two dataframes such that both of them start and end with the min_date and max_date in their &quot;date_time&quot; columns. And against each of these newly added &quot;date_time&quot; values, the corresponding &quot;Temperature&quot; &amp; &quot;Voltage&quot; values in each of the dataframes can be given the value of blank/Nan.</p> <p>So eventually I need the date range of the &quot;date_time&quot; columns of the two dataframes to be the same.</p> <p>Note that all the original values present in the columns &quot;Temperature&quot; and &quot;Voltage&quot; are to be retained. Only in places where the original values are absent, they can be kept as blanks/Nan.</p> <p>Can you please help me modify the two dataframes so that I can get something like this?</p> <p><a href="https://i.sstatic.net/0BURu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0BURu.png" alt="enter image description here" /></a></p>
<python><pandas><dataframe>
2023-04-25 15:03:12
1
315
EnigmAI
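Once `min_date` and `max_date` are known, `set_index` plus `reindex` over a shared `date_range` inserts the missing timestamps with NaN in the value columns. A sketch with tiny made-up daily data (the real data's frequency may differ):

```python
import pandas as pd

data = pd.DataFrame({
    "date_time": pd.to_datetime(["2023-01-02", "2023-01-03"]),
    "Voltage": [3.3, 3.4],
})
temp_data = pd.DataFrame({
    "date_time": pd.to_datetime(["2023-01-01", "2023-01-02"]),
    "Temperature": [20.0, 21.0],
})

min_date = pd.concat([data["date_time"], temp_data["date_time"]]).min()
max_date = pd.concat([data["date_time"], temp_data["date_time"]]).max()
full_range = pd.date_range(min_date, max_date, freq="D")  # assumed daily spacing

# Reindex each frame onto the shared range; rows missing from a frame become NaN
data = (data.set_index("date_time").reindex(full_range)
            .rename_axis("date_time").reset_index())
temp_data = (temp_data.set_index("date_time").reindex(full_range)
                 .rename_axis("date_time").reset_index())
print(data)
print(temp_data)
```

Note that `reindex` only aligns rows whose timestamps fall exactly on the generated grid; for irregular sampling, reindexing both frames onto the union of their two original indexes is an alternative.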
76,102,484
21,404,794
Sum of columns over threshold in pandas
<p>I'm trying to sum a bunch of columns in pandas and check whether that sum is over 100 or not.</p> <p>I already have the sum part sorted, what I'm trying to find is a way to compare each value of the sum to a scalar.</p> <p>Here's my first attempt:</p> <p><code>df[[col1,col2]].sum(axis=1) &gt; 100.0</code></p> <p>this gave back ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()</p> <p>I've tried with a.any and a.all, but they both return only true or false, so that doesn't work. I've also tried creating a pandas series with the value, but it gives error too.</p>
<python><pandas><dataframe><comparison>
2023-04-25 14:45:09
1
530
David Siret Marqués
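The comparison already produces a boolean Series with one flag per row; the `ValueError` only appears when that Series is squeezed into a single truth value, for example in a bare `if`. Keeping it as a mask and using it element-wise avoids the ambiguity:

```python
import pandas as pd

df = pd.DataFrame({"col1": [60.0, 10.0, 90.0], "col2": [50.0, 20.0, 5.0]})

# One boolean per row: fine as a column or a filter, but not inside a bare `if`
over_100 = df[["col1", "col2"]].sum(axis=1) > 100.0

df["over_100"] = over_100   # per-row flag
big_rows = df[over_100]     # keep only rows whose sum exceeds the threshold
print(over_100.tolist())    # [True, False, False]
```

When a single yes/no over the whole frame is wanted, reduce explicitly, e.g. `(df[["col1", "col2"]].sum(axis=1) > 100.0).any()` or `.all()`.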
76,102,444
13,219,123
Filter pyspark dataframe
<p>I have the following pyspark dataframe.</p> <pre><code># Define the schema for the dataframe schema = StructType([ StructField(&quot;dt&quot;, StringType(), True), StructField(&quot;id&quot;, StringType(), True), StructField(&quot;value&quot;, FloatType(), True) ]) # Create the PySpark dataframe data = [ ('2023-04-24 00:00:00', 'A', None), ('2023-04-25 00:00:00', 'A', 100.5), ('2023-04-24 00:00:00', 'B', None), ('2023-04-25 00:00:00', 'B', None), ('2023-04-26 00:00:00', 'A', 110.0), ('2023-04-26 00:00:00', 'B', None), ('2023-04-27 00:00:00', 'A', None), ('2023-04-27 00:00:00', 'B', 50.5), ('2023-04-28 00:00:00', 'A', 105.0), ('2023-04-28 00:00:00', 'B', None), ('2023-04-29 00:00:00', 'B', 55.5), ('2023-04-29 00:00:00', 'A', 107.0) ] df = spark.createDataFrame(data, schema) df = df.withColumn(&quot;dt&quot;, col(&quot;dt&quot;).cast(TimestampType())) </code></pre> <p>For each ID I want to remove all rows before the first row where the <code>value</code> column is not null. Below is the expected dataframe:</p> <pre><code>+-------------------+---+-----+ | dt| id|value| +-------------------+---+-----+ |2023-04-25 00:00:00| A|100.5| |2023-04-26 00:00:00| A|110.0| |2023-04-27 00:00:00| A| null| |2023-04-28 00:00:00| A|105.0| |2023-04-29 00:00:00| A|107.0| |2023-04-27 00:00:00| B| 50.5| |2023-04-28 00:00:00| B| null| |2023-04-29 00:00:00| B| 55.5| +-------------------+---+-----+ </code></pre> <p>For simplicity I want to keep the transformations in pyspark, so I cannot convert to other formats (Pandas for instance).</p>
<python><pyspark>
2023-04-25 14:41:06
1
353
andKaae
76,102,415
7,599,062
Implementing a transformer-based span retrieval model for a document-grounded dialogue system
<p>I am working on a project that aims to develop a document-grounded dialogue system with a transformer-based span retrieval model. The system will be trained on a dataset of multiturn conversations where users ask questions that require information from multiple documents.</p> <p>I am currently struggling with the implementation of the transformer-based span retrieval model. Specifically, I am having difficulty with encoding the query and relevant documents and computing the attention scores between them.</p> <p>However, I am not sure if this is the correct way to compute the attention scores between the query and the document. Can anyone provide guidance on how to properly implement a transformer-based span retrieval model for a document-grounded dialogue system using the Hugging Face Transformers library?</p> <p>I am using the Hugging Face Transformers library to implement the model. Here is the code that I have so far:</p> <pre><code>import transformers query = &quot;What is the capital city of Turkey?&quot; document = &quot;Ankara is the capital city of Turkey.&quot; tokenizer = transformers.AutoTokenizer.from_pretrained(&quot;bert-base-cased&quot;) model = transformers.AutoModel.from_pretrained(&quot;bert-base-cased&quot;) query_encoded = tokenizer.encode(query, return_tensors=&quot;pt&quot;) document_encoded = tokenizer.encode(document, return_tensors=&quot;pt&quot;) query_outputs = model(query_encoded) document_outputs = model(document_encoded) attention_scores = torch.matmul(query_outputs, document_outputs.transpose(0, 1)) </code></pre>
<python><machine-learning><deep-learning><nlp>
2023-04-25 14:37:59
1
543
SyntaxNavigator
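Two details worth noting in the snippet above: the model call returns a `ModelOutput` object (the token embeddings live in `outputs.last_hidden_state`, not in the output object itself), and the usual score is the scaled dot product between embedding matrices, optionally softmaxed over document tokens. The sketch below shows that computation in NumPy, with random stand-ins for the BERT outputs (shapes and names are assumptions, not real model values):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                # embedding width (BERT-base would be 768)
query_emb = rng.normal(size=(5, d))  # stand-in for query last_hidden_state[0]
doc_emb = rng.normal(size=(7, d))    # stand-in for document last_hidden_state[0]

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Scaled dot-product: one attention distribution over document tokens per query token
scores = query_emb @ doc_emb.T / np.sqrt(d)
attn = softmax(scores, axis=-1)
print(attn.shape)         # (5, 7)
print(attn.sum(axis=-1))  # each row sums to 1
```

With the Transformers library, the stand-ins would be replaced by something like `model(**tokenizer(text, return_tensors="pt")).last_hidden_state[0].detach().numpy()`; for span retrieval specifically, a span-prediction head (as in question-answering models) is usually trained on top rather than using raw attention scores directly.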
76,102,235
4,819,376
y-ticks between sublopts in Plotly
<p>I had a look at <a href="https://stackoverflow.com/questions/76073586/generate-bidirectional-bar-chart-in-matplotlib-with-variables-in-center-of-chart/76076468?noredirect=1#comment134210758_76076468">this</a> question and I was wondering if there is a way to put the y-ticks and names between 2 subplots.</p> <h1>Data</h1> <pre class="lang-py prettyprint-override"><code>import pandas as pd import plotly.graph_objects as go from plotly.subplots import make_subplots data = {'variable': {0: 'Case', 1: 'Exam', 2: 'History', 3: 'MAP', 4: 'Volume'}, 'margins_fluid': {0: 0.497, 1: 0.668, 2: 0.506, 3: 0.489, 4: 0.718}, 'margins_vp': {0: 0.809, 1: 0.893, 2: 0.832, 3: 0.904, 4: 0.92}} df = pd.DataFrame(data) </code></pre> <h1>Plot</h1> <pre class="lang-py prettyprint-override"><code>fig = make_subplots( rows=1, cols=2, shared_xaxes=False, shared_yaxes=True, horizontal_spacing=0, subplot_titles=['&lt;b&gt;Fluid&lt;/b&gt;', '&lt;b&gt;Vasopressor&lt;/b&gt;']) fig.append_trace( go.Bar( x=df['margins_fluid'], y=df['variable'], text=df[&quot;margins_fluid&quot;], textposition='inside', texttemplate=&quot;%{x:.4p}&quot;, orientation='h', width=0.7, # space between bars showlegend=False, ), 1, 1) # 1,1 represents row 1 column 1 in the plot grid fig.append_trace( go.Bar( x=df['margins_vp'], y=df['variable'], text=df[&quot;margins_vp&quot;], textposition='inside', texttemplate=&quot;%{x:.4p}&quot;, orientation='h', width=0.7, showlegend=False), 1, 2) # 1,2 represents row 1 column 2 in the plot grid fig.update_xaxes( tickformat=',.0%', row=1, col=1, autorange='reversed',) fig.update_xaxes( tickformat=',.0%', row=1, col=2) fig.update_layout( title_text=&quot;Title&quot;, barmode=&quot;group&quot;, width=800, height=700, title_x=0.5, ) fig.show() </code></pre> <p><a href="https://i.sstatic.net/M6x0f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6x0f.png" alt="enter image description here" /></a></p>
<python><pandas><plotly>
2023-04-25 14:20:42
1
13,495
rpanai
76,102,050
950,854
"No data found for this date range, symbol may be delisted"
<p>I'm trying to deploy a working Python 3.10 function to AWS Lambda, but I'm stuck with these yfinance (0.2.18) errors:</p> <p>This call <code>yf.Ticker('QQQ').history(period='6mo')</code> generates <code>QQQ: No data found for this date range, symbol may be delisted</code></p> <p>While this call <code>yf.download('QQQ', start='2023-01-01')</code> generates <code>QQQ: No timezone found, symbol may be delisted</code></p> <p>Both work fine on my local windows 10.</p> <p>I'm not sure if it matters, but earlier on I added <code>yf.set_tz_cache_location(&quot;/tmp/yfinance&quot;)</code> to prevent a read-only error.</p> <p>I've read everything I could, but I can't find a solution. Any ideas? Thanks</p>
<python><aws-lambda><yfinance>
2023-04-25 14:02:44
0
649
chris
76,101,799
7,194,375
Pycaret, predict soil moisture to minimize water consumption
<p>I want to predict soil moisture for a field to minimize water consumption. For that I recorded only soil moisture and soil temperture over the course of two months. There are four devices that give me the values in 10min interval for one field. So I did combine those four soil moisture values in a moving average. Here is an illustration of the training data: <a href="https://i.sstatic.net/KYHqe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KYHqe.png" alt="The diagram shows soil tension over time. Low values mean the soil is moistured, high means it is dry. Deep negative slopes or gradients means it is being watered atm." /></a> The diagram shows soil tension over time. Low values mean the soil is moistured, high means it is dry. Deep negative slopes or gradients means it is being watered atm. Due to the fact that I want to predict how the soil is drying over time, I have to get rid of the watering periods. For every specific plant and soil type there is another threshold. Later I want to show the farmer when this threshold is expected to be met. Visualized below: <a href="https://i.sstatic.net/y83Em.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y83Em.png" alt="this is how it should look like, created in paint ;)" /></a> This is how it should look like, created in paint ;) I identified them with the slope or gradient between the soil moisture values. <a href="https://i.sstatic.net/8NZYD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8NZYD.png" alt="Irrigation times highlighted" /></a> Irrigation times highlighted I seperated those into different training examples: <a href="https://i.sstatic.net/jo5Ii.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jo5Ii.png" alt="separated training examples" /></a></p> <p>Separated training examples Due to the fact that I want prototype this first, I used pycaret to benefit from high level functions that can compare models on the fly. 
So I got 4 dataframes now, with the following head:</p> <pre><code>index = new index 0-end orig_index = original index in df (before split) Time = datetime64 hour = feature created from Time, hour of the day minute = feature created from Time, minute of hour date = feature created from Time, day of month month = feature created from Time, month of year grouped_soil = grouped soil humidity from B2, B3, B4 grouped_resistance = grouped soil resistancefrom B2, B3, B4 grouped_soil_temp = grouped soil tempfrom B2, B3, B4 rolling_mean_grouped_soil = rolling mean of frouped soil humidity rolling_mean_grouped_soil_temp = rolling mean of frouped soil temp B2_solar_x2_03, Soil_tension = raw soild tension value from sensor B2_solar_x2_03, Resistance = raw soil resistance value from sensor, will be calculated with temp to soil tension B2_solar_x2_03, Soil_temperature = raw soil temp value from sensor B3_solar_x1_02, Resistance = raw soil resistance value from sensor, will be calculated with temp to soil tension B4_solar_x1_03, Soil_tension = raw soil tension value from sensor B4_solar_x1_03, Resistance = raw soil resistance value from sensor, will be calculated with temp to soil tension B4_solar_x1_03, Soil_temperature = raw soil temp value from sensor gradient = slope, to indicate rapid changes (watering periods) </code></pre> <p>Now let's get to my question:</p> <ol> <li><p>Do I train now four independent models? How can I combine those models afterwards? I tried it with blend_models() function in pycaret, but since I got different training data, it was not possible to combine them.</p> </li> <li><p>I cannot combine them to one model, because the irrigation times were different, so the time would be wrong(hour, day, month), right? 
For reference, this is my setup function:</p> <pre><code>from pycaret.time_series import *

exp = []
for i in range(len(cut_sub_dfs)):
    exp.append(TSForecastingExperiment())

    # check the type of exp
    type(exp[i])

    # init setup on exp
    exp[i].setup(
        cut_sub_dfs[i],
        target = 'rolling_mean_grouped_soil',
        enforce_exogenous = False,
        fh = 50,
        session_id = 123,
        fold = 3,
        ignore_features = ['Time', 'orig_index', 'gradient']
    )
</code></pre> </li> </ol> <p>Here is the returned setup summary:</p> <pre><code>Description                                       Value
session_id                                        123
Target                                            rolling_mean_grouped_soil
Approach                                          Univariate
Exogenous Variables                               Present
Original data shape                               (1911, 16)
Transformed data shape                            (1911, 16)
Transformed train set shape                       (1861, 16)
Transformed test set shape                        (50, 16)
Rows with missing values                          0.0%
Fold Generator                                    ExpandingWindowSplitter
Fold Number                                       3
Enforce Prediction Interval                       False
Splits used for hyperparameters                   all
User Defined Seasonal Period(s)                   None
Ignore Seasonality Test                           False
Seasonality Detection Algo                        auto
Max Period to Consider                            60
Seasonal Period(s) Tested                         [2, 3, 4, 25, 5, 26, 27, 6, 13, 7, 14, 18, 19, 22, 20, 21]
Significant Seasonal Period(s)                    [2, 3, 4, 25, 5, 26, 27, 6, 13, 7, 14, 18, 19, 22, 20, 21]
Significant Seasonal Period(s) without Harmonics  [26, 27, 20, 25, 18, 21, 14, 19, 22]
Remove Harmonics                                  False
Harmonics Order Method                            harmonic_max
Num Seasonalities to Use                          1
All Seasonalities to Use                          [2]
Primary Seasonality                               2
Seasonality Present                               True
Target Strictly Positive                          True
Target White Noise                                No
Recommended d                                     1
Recommended Seasonal D                            0
Preprocess                                        False
CPU Jobs                                          -1
Use GPU                                           False
Log Experiment                                    False
Experiment Name                                   ts-default-name35USIcbbb
</code></pre> <p>These are the results: <a href="https://i.sstatic.net/KbHjl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KbHjl.png" alt="Many models perform bad, the best ones seem to be overfitted." /></a> Many models perform badly; the best ones seem to be overfitted. 3) Why do some models perform well and others badly on this data?
And finally, I used the plot_model function to forecast: <a href="https://i.sstatic.net/cTRkG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cTRkG.png" alt="This is the forecast within the forecast horizon fh=50" /></a> This is the forecast within the forecast horizon (fh=50). <a href="https://i.sstatic.net/HQggE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HQggE.png" alt="This is the forecast after the forecast horizon fh=500, 450 values after data ends." /></a> This is the forecast beyond the forecast horizon (fh=500, i.e. 450 values after the data ends).</p> <p>I did not use the predict_model function with new data, because that data would only contain the timestamps; I do not know any of the other values in advance.</p> <ol start="4"> <li>Why is my prediction that bad after the forecast horizon?</li> </ol> <p>Further information: I did not split off and evaluate a test set of my own in this example; I could see from the start that it would not be worth it with these curves. If you want me to upload a notebook with the code, I can do that, no problem.</p> <p>Greetings and thank you for your time and effort!</p>
<python><time-series><regression><forecasting><pycaret>
2023-04-25 13:38:13
1
408
AldegarRızvan
76,101,635
12,902,027
Kaggle Code doesn't download "gpt2" language model
<p>I am using kaggle code to download gpt2 language model.</p> <pre class="lang-py prettyprint-override"><code>from transformers import AutoTokenizer, AutoModelForCausalLM device = &quot;cuda&quot; if torch.cuda.is_available() else &quot;cpu&quot; model_name = &quot;gpt2-xl&quot; tokenizer = AutoTokenizer.from_pretrained(model_name) </code></pre> <p>Intend to download the gpt2-xl model from the huggingface hub. But the last line raised LocalEntryNotFoundError. The detais are below.</p> <blockquote> <p>LocalEntryNotFoundError Traceback (most recent call last)</p> </blockquote> <blockquote> <p>/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash) 419 use_auth_token=use_auth_token, --&gt; 420 local_files_only=local_files_only, 421 )</p> </blockquote> <blockquote> <p>OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like gpt2-xl is not the path to a directory containing a file named config.json.</p> </blockquote> <blockquote> <p>Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.</p> </blockquote> <p>Doesn't seem that kaggle code connects to the huggingface hub. Why does this happen and how can I fix this error?</p>
<python><huggingface-transformers><kaggle><gpt-2>
2023-04-25 13:20:15
1
301
agongji
76,101,543
1,026,990
How to access Firebird system tables through SQLalchemy?
<p>I want to use automap to get database schema and start working with system tables (like MON$ RDB$) as objects.</p> <p>like</p> <pre class="lang-py prettyprint-override"><code> Base = automap_base() Base.prepare( autoload_with=engine ) Session().query(Base.classes.mon.Statements).all() </code></pre> <p><a href="https://i.sstatic.net/1Ef68.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1Ef68.png" alt="enter image description here" /></a></p>
<python><sqlalchemy><firebird>
2023-04-25 13:11:04
0
3,768
uralbash
76,101,516
5,547,553
How to create a rank based on multiple columns in Polars?
<p>Suppose we have this dataframe in polars:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame( { &quot;era&quot;: [&quot;01&quot;, &quot;01&quot;, &quot;02&quot;, &quot;02&quot;, &quot;03&quot;, &quot;03&quot;], &quot;pred&quot;: [3,5,6,8,9,1] } ).with_row_index() </code></pre> <p>I can create a rank/row_number based on one column, like:</p> <pre class="lang-py prettyprint-override"><code>df.with_columns(rn = pl.col(&quot;era&quot;).rank(&quot;ordinal&quot;)) </code></pre> <pre><code>shape: (6, 4) ┌───────┬─────┬──────┬─────┐ │ index ┆ era ┆ pred ┆ rn │ │ --- ┆ --- ┆ --- ┆ --- │ │ u32 ┆ str ┆ i64 ┆ u32 │ ╞═══════╪═════╪══════╪═════╡ │ 0 ┆ 01 ┆ 3 ┆ 1 │ │ 1 ┆ 01 ┆ 5 ┆ 2 │ │ 2 ┆ 02 ┆ 6 ┆ 3 │ │ 3 ┆ 02 ┆ 8 ┆ 4 │ │ 4 ┆ 03 ┆ 9 ┆ 5 │ │ 5 ┆ 03 ┆ 1 ┆ 6 │ └───────┴─────┴──────┴─────┘ </code></pre> <p>But if I want to do it based on two columns, it is not working:</p> <pre class="lang-py prettyprint-override"><code>df.with_columns(rn = pl.col(&quot;era&quot;,&quot;pred&quot;).rank(&quot;ordinal&quot;)) </code></pre> <p>I get this error message:</p> <pre><code>ComputeError: The name: 'rn' passed to `LazyFrame.with_columns` is duplicate Error originated just after this operation: DF [&quot;era&quot;, &quot;pred&quot;]; PROJECT */2 COLUMNS; SELECTION: &quot;None&quot; </code></pre> <p>I expect the following output:</p> <pre><code>shape: (6, 4) ┌───────┬─────┬──────┬─────┐ │ index ┆ era ┆ pred ┆ rn │ │ --- ┆ --- ┆ --- ┆ --- │ │ u32 ┆ str ┆ i64 ┆ u32 │ ╞═══════╪═════╪══════╪═════╡ │ 0 ┆ 01 ┆ 3 ┆ 1 │ │ 1 ┆ 01 ┆ 5 ┆ 2 │ │ 2 ┆ 02 ┆ 6 ┆ 3 │ │ 3 ┆ 02 ┆ 8 ┆ 4 │ │ 4 ┆ 03 ┆ 9 ┆ 6 │ │ 5 ┆ 03 ┆ 1 ┆ 5 │ └───────┴─────┴──────┴─────┘ </code></pre> <p>Any suggestions on how to do this?</p>
<python><dataframe><python-polars><ranking>
2023-04-25 13:08:20
1
1,174
lmocsi
76,101,455
19,980,284
Reformat bidirectional bar chart to match example
<p>I have generated this bar-chart</p> <p><a href="https://i.sstatic.net/QS9v7.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QS9v7.jpg" alt="enter image description here" /></a></p> <p>Using this code:</p> <pre class="lang-py prettyprint-override"><code>s = &quot;&quot;&quot;level,margins_fluid,margins_vp Volume,0,0 1L*,0.718,0.690 2L,0.501,0.808 5L,0.181,0.920 MAP,0,0 64*,0.434,0.647 58,0.477,0.854 52,0.489,0.904 Exam,0,0 dry,0.668,0.713 euvolemic*,0.475,0.798 wet,0.262,0.893 History,0,0 COPD*,0.506,0.804 Kidney,0.441,0.778 HF,0.450,0.832 Case,0,0 1 (PIV),0.435,0.802 2 (CVC)*,0.497,0.809&quot;&quot;&quot; data = np.array([a.split(',') for a in s.split(&quot;\n&quot;)]) fluid_vp_1_2 = pd.DataFrame(data[1:], columns=data[0]) fluid_vp_1_2['margins_fluid'] = fluid_vp_1_2['margins_fluid'].apply(float) fluid_vp_1_2['margins_vp'] = fluid_vp_1_2['margins_vp'].apply(float) fluid_vp_1_2 variableNames = {'Volume', 'MAP', 'Exam', 'History', 'Case'} font_color = '#525252' hfont = {'fontname':'DejaVu Sans'} facecolor = '#eaeaf2' index = fluid_vp_1_2.index#['level'] column0 = fluid_vp_1_2['margins_fluid']*100 column1 = fluid_vp_1_2['margins_vp']*100 title0 = 'Fluids' title1 = 'Vasopressors' fig, axes = plt.subplots(figsize=(10,5), facecolor=facecolor, ncols=2, sharey=True) axes[0].barh(index, column0, align='center', color='dimgray', zorder=10) axes[0].set_title(title0, fontsize=18, pad=15, color='black', **hfont) axes[1].barh(index, column1, align='center', color='silver', zorder=10) axes[1].set_title(title1, fontsize=18, pad=15, color='black', **hfont) # If you have positive numbers and want to invert the x-axis of the left plot axes[0].invert_xaxis() # To show data from highest to lowest plt.gca().invert_yaxis() axes[0].set(xlim = [100,0]) axes[1].set(xlim = [0,100]) axes[0].yaxis.tick_right() axes[0].set_yticks(range(len(fluid_vp_1_2))) maxWordLength = fluid_vp_1_2['level'].apply(lambda x: len(x)).max() formattedyticklabels = [r'$\bf{'+f&quot;{t}&quot;+r'}$' 
if t in variableNames else t for t in fluid_vp_1_2['level']] axes[0].set_yticklabels(formattedyticklabels, ha='center', position=(1.12, 0)) axes[0].tick_params(right = False) axes[1].tick_params(left = False) fig.tight_layout() plt.savefig(&quot;fluid_vp_1_2.jpg&quot;) plt.show() </code></pre> <p>However, I would like to modify this chart to more closely resemble the below example, where the y-axis labels are on the left-hand side, bi-directional bars are making contact in the center, white background, more vertical in shape (shrunken x-axis), add x-axis label (“adjusted proportion of respondents”), but I would still like to maintain the order of variables and the gaps in bars caused by the bolded header labels like <code>Volume</code>, <code>MAP</code>, etc.</p> <p><a href="https://i.sstatic.net/yk2kt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yk2kt.png" alt="enter image description here" /></a></p> <p>Any tips?</p>
<python><pandas><matplotlib><plotly><bar-chart>
2023-04-25 13:01:59
2
671
hulio_entredas
76,101,366
10,232,932
Upgrade Python version in a virtual environment in Visual Studio Code
<p>I have a combination of these two questions on Stack Overflow: <a href="https://stackoverflow.com/questions/58562928/how-do-i-update-a-python-virtual-environment-with-venv-in-python-3-3-to-use">How do I update a Python virtual environment with `venv` (in Python 3.3+) to use a newer version of Python?</a></p> <p><a href="https://stackoverflow.com/questions/54023954/python3-the-term-python3-is-not-recognized-as-the-name-of-a-cmdlet-function">python3 : The term &#39;python3&#39; is not recognized as the name of a cmdlet, function, script file, or operable program</a></p> <p>Currently, I have a virtual environment with the interpreter <code>python 3.7.0</code> in <code>visual studio code</code> and I want to upgrade it to <code>python 3.9.0</code>. So for that I run the following line in the terminal (as mentioned in the first question):</p> <pre><code>python3.9 -m venv --upgrade </code></pre> <p>Then I run into the error from the second question:</p> <blockquote> <p>PS C:\Users\admin\Documents\Visual Studio 2017\Forecasts&gt; python3.9 -m venv --upgrade python3.9 : The term 'python3.9' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1</p> <ul> <li>python3.9 -m venv --upgrade</li> <li>CategoryInfo : ObjectNotFound: (python3.9:String) [], CommandNotFoundException</li> <li>FullyQualifiedErrorId : CommandNotFoundException</li> </ul> </blockquote> <p>What am I doing wrong here?</p> <p>My Python executables are stored in: <code>C:\Users\admin\AppData\Local\Programs\Python\Python39</code></p>
<python><visual-studio-code><virtualenv>
2023-04-25 12:51:25
1
6,338
PV8
76,101,074
10,544,599
How to find multiple tags at once along with attributes using BeautifulSoup in python3?
<p>I am trying to find different tags at once using <code>find_all()</code> method of BeautifulSoup. I found a way to include all tags in the list to get the respective tags. But I am trying to get tags along with their attributes. I am not sure how to get it.</p> <p>This is the reference HTML structure.</p> <pre><code>&lt;html&gt; &lt;body&gt; &lt;div&gt; &lt;h4&gt;Registered Customer Details&lt;/h4&gt; &lt;div&gt; &lt;div class='row'&gt; &lt;div class='col-3'&gt;Name :&lt;/div&gt; &lt;div class='col-6'&gt;ABC&lt;/div&gt; &lt;/div&gt; &lt;div&gt; &lt;div class='row'&gt; &lt;div class='col-3'&gt;Address :&lt;/div&gt; &lt;div class='col-6'&gt;India&lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;col-3 col-6 col-12 lo&quot;&gt; &lt;a class=&quot;navbar-brand&quot; href=&quot;#&quot;&gt; &lt;img alt=&quot;image&quot; class=&quot;img-responsive&quot; src=&quot;/uploads/NEWLOGO.png&quot;/&gt; &lt;/a&gt; &lt;/div&gt; &lt;h4&gt;Partner Details&lt;/h4&gt; &lt;div&gt; &lt;div class='row'&gt; &lt;div class='col-4'&gt;Partners :&lt;/div&gt; &lt;div class='col-8'&gt;&lt;table&gt;&lt;/table&gt;&lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;span3&quot;&gt;&lt;/div&gt; &lt;div class=&quot;span9&quot;&gt;&lt;/div&gt; &lt;/div&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>I'm trying to find tags at once.</p> <pre><code>from bs4 import BeautifulSoup soup = BeautifulSoup(open('test.html','r').read(),'lxml') soup.find_all(['h4','div']) </code></pre> <p>The above script will return all h4 tags and all div tags, but I'm looking for all h4 tags and all div tags having class value as col-3, col-6, col-4 &amp; col-8.</p> <p>Which might look something like this,</p> <pre><code># for single value soup.find_all(['h4', ['div',{'class':'col-3'}] ]) # for multiple value soup.find_all(['h4', ['div',{'class':['col-3','col-6','col-4','col-8']}] ]) </code></pre> <blockquote> <p>Output:</p> </blockquote> <pre><code>[&lt;h4&gt;Registered Customer Details&lt;/h4&gt;, &lt;h4&gt;Partner 
Details&lt;/h4&gt;] </code></pre> <blockquote> <p>Expected output:</p> </blockquote> <pre><code>[&lt;h4&gt;Registered Customer Details&lt;/h4&gt;, &lt;div class='col-3'&gt;Name :&lt;/div&gt;, &lt;div class='col-6'&gt;ABC&lt;/div&gt;, &lt;div class='col-3'&gt;Address :&lt;/div&gt;, &lt;div class='col-6'&gt;India&lt;/div&gt;, &lt;h4&gt;Partner Details&lt;/h4&gt;, &lt;div class='col-4'&gt;Partners :&lt;/div&gt;, &lt;div class='col-8'&gt;&lt;table&gt;&lt;/table&gt;] </code></pre>
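One way to get this mixed selection in document order (a sketch, not confirmed as the intended answer) is a CSS selector group via <code>soup.select</code>, which accepts comma-separated patterns; the HTML below is a trimmed version of the question's sample:

```python
from bs4 import BeautifulSoup

html = """
<h4>Registered Customer Details</h4>
<div class='row'>
  <div class='col-3'>Name :</div>
  <div class='col-6'>ABC</div>
</div>
<h4>Partner Details</h4>
<div class='row'>
  <div class='col-4'>Partners :</div>
  <div class='col-8'><table></table></div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# A selector group matches elements satisfying any listed pattern,
# returned in document order.
tags = soup.select("h4, div.col-3, div.col-6, div.col-4, div.col-8")
texts = [t.get_text() for t in tags]
```

An equivalent `find_all` form is a predicate function, e.g. `soup.find_all(lambda t: t.name == "h4" or (t.name == "div" and set(t.get("class", [])) & {"col-3", "col-6", "col-4", "col-8"}))`.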
<python><html><python-3.x><beautifulsoup><html-parsing>
2023-04-25 12:16:05
2
379
David
76,101,024
14,220,087
How to keep dtype in 0-dim array arithmetics in Numpy?
<pre class="lang-py prettyprint-override"><code>import numpy as np x=np.array(2, dtype=np.float32) y=x+1 print(y.dtype) # float64 </code></pre> <p>As shown above, <code>x</code> is a 0D array and its dtype is float32. But if you add 1 to it, the output <code>y</code> will be float64.</p> <p>This only happens when <code>x</code> is a 0D array. I'm aware I can use functions such as <code>y=np.add(x,1, dtype=np.float32)</code>, but this will be too tedious for something like <code>y=(x+1)*3-2</code>. Is there a way to disable this feature in numpy?</p>
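The upcast comes from NumPy's value-based promotion for Python scalars, which treats 0-D arrays like scalars (NumPy 2.0's NEP 50 changed this behavior). One workaround that keeps the dtype on both old and new NumPy, sketched below, is to use typed NumPy scalars instead of plain Python ints:

```python
import numpy as np

x = np.array(2, dtype=np.float32)

# Typed scalars participate in promotion with their own dtype, so the
# whole chain stays float32 on both NumPy 1.x and 2.x.
y = (x + np.float32(1)) * np.float32(3) - np.float32(2)
```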
<python><arrays><numpy>
2023-04-25 12:11:15
2
829
Sam-gege
76,100,975
13,636,586
YOLOv8 custom save directory path
<p>I'm currently working on a project in which I'm using Flask and Yolov8 together.</p> <p>When I run this code</p> <pre class="lang-py prettyprint-override"><code>from ultralytics import YOLO model = YOLO(&quot;./yolov8n.pt&quot;) results = model.predict(source=&quot;../TEST/doggy.jpg&quot;, save=True, save_txt=True) </code></pre> <p>the output will be saved in the default directory <code>runs/detect/</code></p> <p>like</p> <pre><code>Ultralytics YOLOv8.0.9 Python-3.10.8 torch-2.0.0+cpu CPU Fusing layers... YOLOv8n summary: 168 layers, 3151904 parameters, 0 gradients, 8.7 GFLOPs Results saved to d:\runs\detect\predict4 1 labels saved to d:\runs\detect\predict4\labels </code></pre> <p>and what I want is the <code>predict</code> directory number or the entire directory path in a variable.</p> <p>I tried capturing the path using sys.stdout methods but I want a direct solution.</p>
<python><flask><yolo><yolov8>
2023-04-25 12:06:36
1
399
Hariharan
76,100,972
11,725,056
How to solve a Custom Optimisation problem which uses Matrix Factorisation and custom loss in Pytorch?
<p>I am trying to replicate the paper <a href="https://dl.acm.org/doi/10.1145/3448139.3448177" rel="nofollow noreferrer">Recommendation for Effective Standardized Exam Preparation</a> where they have used a special term called <strong>correctness probability function</strong> and is described as:</p> <blockquote> <p>correctness probability function <code>p</code>, we use the Matrix Factorization model introduced in <a href="http://ml4ed.cc/attachments/LeeLCCS.pdf" rel="nofollow noreferrer">this paper</a>. With the question- response sequences of the users, we find the decomposition <code>X = L*R_Transpose</code> that minimizes the Binary Cross Entropy (BCE) loss with a Frobenius norm regularization, where <code>L = (L_uj )</code> represents the understanding of a student <code>u</code> on a hidden concept <code>j</code> and <code>R = (R_qj )</code> represents the contribution of a hidden concept <code>j</code> to the question <code>q</code>. The entry <code>X = (X_uq)</code> represents the understanding of a student <code>u</code> on a question <code>q</code>, and the response correctness probability <code>p(q|I_u ) = P_uq</code> is computed using <code>X_uq</code> based on the variation of the M2PR latent trait model in IRT give nin <a href="https://link.springer.com/content/pdf/10.3758/BF03203665.pdf" rel="nofollow noreferrer">this paper</a>.</p> </blockquote> <p>Then they go on to define the exact process like the below image: <a href="https://i.sstatic.net/5yP3I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5yP3I.png" alt="enter image description here" /></a></p> <p>Now they have a modified version of Sigmoid called <code>phi</code> which uses 3 additional parameters as <code>phi_a = 0.25, phi_b = 0.5, phi_c = 10</code>. 
My implementation for <code>phi</code> is as follows:</p> <pre><code># Modified sigmoid function def phi(x, phi_a = phi_a, phi_b = phi_b, phi_c = phi_c): ''' Putting value of X in phi will give the probability score of that question According to the paper it is given that Custom sigmoid = phi: φ(x) = φ_a + ((1 - φ_a) / (1 + e^ -φ_c(x - φ_b ))) and values of (φ_a , φ_b , φ_c ) are (0.25, 0.5, 10) respectively ''' return phi_a + (1 - phi_a) / (1 + torch.exp(-phi_c * (x - phi_b))) </code></pre> <p>Then they have a custom <code>Optimization Problem</code>, which we call the <code>custom loss</code>, and I have defined it as follows:</p> <pre><code># Custom loss function def custom_loss(Y_true, Y_pred, L, R, mu): ''' In the paper, it is given as: BCE Loss + ((mu/2) * (Frobenius norm of L squared + Frobenius norm of R squared)) and Frobenius_norm(L) = √(Σ (L_ij)^2) so the authors might have used square just to eliminate the square root value so it becomes: Σ(L_ij)^2 ''' bce_loss = torch.nn.BCELoss()(Y_pred, Y_true) frobenius_norm_L = torch.sum(L ** 2) frobenius_norm_R = torch.sum(R ** 2) reg_loss = (mu / 2) * (frobenius_norm_L + frobenius_norm_R) return bce_loss + reg_loss </code></pre> <p>There are some conditions given in the image below as:</p> <ol> <li><code>0 &lt;= L_uj &lt;= 1</code> : All the entries in the <code>Student - Latent Topic</code> matrix <code>L</code> will be between [0,1]</li> <li><code>0 &lt;= R_qj &lt;= 1</code> : All the entries in the <code>Latent Topic - Question</code> matrix <code>R</code> will be between [0,1]</li> <li>Sum of all the elements of <code>R_qj</code> in each row must be equal to 1.
It means if each question is represented by <code>n_topics</code> then the sum of <code>n_topics</code> for each question will be exactly equal to 1 (like Softmax)</li> <li>Paper uses Stochastic Gradient Descent (SGD) method to solve the matrix factorization problem</li> </ol> <p>then somehow (by hook or crook with lots of help), I reached a point where I arrived at <strong>some</strong> solution as:</p> <pre><code># Dummy numbers n_students = 50 n_questions = 700 n_concepts = 15 # Hyperparameters n_epochs = 10 learning_rate = 0.01 mu = 0.1 # regularization # Given constants for the phi() function phi_a, phi_b, phi_c = 0.25, 0.5, 10 # --------------------------------------------------- # Generate random student responses (1 if correct, 0 otherwise) as a float tensor so BCELoss accepts it Y = torch.randint(0, 2, (n_students, n_questions), dtype=torch.float32) L = torch.rand(n_students, n_concepts, requires_grad=True) # Initialize L and R matrices randomly as leaf tensors R = torch.rand(n_questions, n_concepts, requires_grad=True) optimizer = torch.optim.SGD([L, R], lr=learning_rate) # SGD as given # ------------------------------------------------------------- # Training loop for epoch in range(n_epochs): optimizer.zero_grad() X = torch.matmul(L, torch.transpose(R, 0, 1)) Y_pred = phi(X) # predicted score loss = custom_loss(Y, Y_pred, L, R, mu) loss.backward() optimizer.step() # Enforce 3 conditions R.data = R.data / R.data.sum(dim=1, keepdim=True) # Normalize rows of R to sum up to 1 L.data = torch.clamp(L.data, min=0, max=1) # 0 ≤ L[u, j] ≤ 1 R.data = torch.clamp(R.data, min=0, max=1) # 0 ≤ R[q, j] ≤ 1 # Calculate the optimized understanding matrix X_opt (n_students x n_questions) X_opt = torch.matmul(L, torch.transpose(R, 0, 1)) # --- Evaluate -------------------------------- # Calculate the probability score for a student u and question q using the modified sigmoid function u, q = 0, 4 probability_score = phi(X_opt[u, q]) # Given in the formula print(f&quot;Probability score for student {u} and question {q}: {probability_score.item()}&quot;) </code></pre>
<p><strong>Now the problem is that I don't know whether this is the right solution and whether it gives the results it should</strong>.</p> <p>PS: If this can be solved by any other method, please do let me know. I just want to replicate the paper, using any library.</p>
<python><numpy><machine-learning><pytorch><torch>
2023-04-25 12:06:04
1
4,292
Deshwal
76,100,754
12,858,691
Pandas find values that could be a date
<p>I have a large and messy dataframe and want to identify date-like values, if there are any. These date-like values could have any kind of date format and data type.</p> <p>Is there a generic (built-in pandas) way to achieve this?</p> <p>My code:</p> <pre><code>&gt;&gt; data.info() &lt;class 'pandas.core.frame.DataFrame'&gt; RangeIndex: 267091 entries, 0 to 267090 Data columns (total 93 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 verbund 267091 non-null object 1 verbund_id 267091 non-null object 2 global_party 267091 non-null object ... 91 kdberat_name_1 241196 non-null object 92 is_divergent_f 267091 non-null int8 dtypes: int8(1), object(92) memory usage: 187.7+ MB &gt;&gt; pd.to_datetime(csamapping_test.stack(), errors='coerce').unstack().info() &lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 267091 entries, 0 to 267090 Data columns (total 93 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 verbund 0 non-null datetime64[ns] 1 verbund_id 0 non-null datetime64[ns] 2 global_party 0 non-null datetime64[ns] ... 91 kdberat_name_1 0 non-null datetime64[ns] 92 is_divergent_f 267091 non-null datetime64[ns] dtypes: datetime64[ns](93) memory usage: 191.5 MB </code></pre>
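Pandas has no direct "is this column date-like" test, but one common heuristic (the 0.8 threshold and the sample frame below are illustrative choices, not from the question) is to coerce each object column with `pd.to_datetime(..., errors="coerce")` and keep columns where most values survive parsing:

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["alice", "bob", "carol"],
    "joined": ["2021-01-05", "2021-02-17", "2021-03-09"],
    "code": ["A1", "B2", "C3"],
})

def date_like_columns(frame, threshold=0.8):
    """Return object columns where at least `threshold` of values parse as dates."""
    hits = []
    for col in frame.select_dtypes(include="object"):
        # Unparseable values become NaT rather than raising.
        parsed = pd.to_datetime(frame[col], errors="coerce")
        if parsed.notna().mean() >= threshold:
            hits.append(col)
    return hits
```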
<python><pandas>
2023-04-25 11:41:18
0
611
Viktor
76,100,752
11,295,602
How do I pass custom metadata from setup.py to my application code
<p>I'm writing a setup.py file for my python package and have information in a dictionary that I want to pass to the setup command that will then be accessible to the application code when the package is installed.</p> <p>Is there a standard way to pass information like this in the setup function without writing and then including files (data / package) or writing a manifest file?</p>
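There is no supported side channel through `setup()` itself, but the standard package metadata (version, author, and other core-metadata fields declared in setup) is queryable at runtime via `importlib.metadata` in the standard library. A minimal sketch (the distribution name passed in is whatever the package is published as):

```python
from importlib.metadata import metadata, PackageNotFoundError

def dist_field(dist_name, field, default=None):
    """Read a core-metadata field (e.g. 'Version', 'Author') of an
    installed distribution, falling back when it is not installed."""
    try:
        return metadata(dist_name)[field]
    except PackageNotFoundError:
        return default
```

For arbitrary custom data beyond the standard fields, the usual alternatives remain a generated module (a `_version.py`-style file written at build time) or `package_data` files.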
<python><setuptools><setup.py><python-packaging>
2023-04-25 11:41:01
0
303
squashed
76,100,746
610,569
Simplifying ngram loops to compress the string given a fixed set of ngrams
<p>Given a list of characters, <code>list('Hello▁world▁')</code>, and a list of character tuples, i.e.</p> <blockquote> <p>[('l', 'l'), ('ell', 'o▁'), ('Hell', 'o▁'), ('w', 'or'), ('o', 'r'), ('e', 'l'), ('el', 'l'), ('H', 'ell'), ('H', 'e'), ('He', 'll'), ('worl', 'd▁'), ('wor', 'l'), ('l', 'd▁'), ('d', '▁'), ('wor', 'ld▁'), ('H', 'el'), ('o', '▁'), ('w', 'o'), ('l', 'o▁'), ('l', 'o')]</p> </blockquote> <p>The objective is to iterate through the tuples and collapse the list of characters if they match. I've tried this and it works:</p> <pre><code>import copy def matcher(s, ngram): while s: window = tuple(s[:2]) # Since the string tuples are in pairs. if window == ngram: yield &quot;&quot;.join(window) s = s[2:] else: yield s[0] s = s[1:] def combine_ngrams(s, vocab): prev = copy.copy(s) while True: for v in vocab: s = list(matcher(s, v)) if s == prev: break else: prev = s return s vocab = [('l', 'l'), ('ell', 'o▁'), ('Hell', 'o▁'), ('w', 'or'), ('o', 'r'), ('e', 'l'), ('el', 'l'), ('H', 'ell'), ('H', 'e'), ('He', 'll'), ('worl', 'd▁'), ('wor', 'l'), ('l', 'd▁'), ('d', '▁'), ('wor', 'ld▁'), ('H', 'el'), ('o', '▁'), ('w', 'o'), ('l', 'o▁'), ('l', 'o')] s = list('Hello▁world▁') combine_ngrams(s, vocab) </code></pre> <p>[out]:</p> <pre><code>['Hello▁', 'world▁'] </code></pre> <p>But the multiple while loops in both the outer function <code>combine_ngrams()</code> and the inner <code>matcher()</code> look like something that can be easily simplified.</p> <p>Or maybe the operation doesn't need to loop through the tuples at all, and some regex method to iteratively apply the vocab substitutions would work.
Is there a way to simplify the nested while loops in the <code>combine_ngrams</code> function?</p> <hr /> <p>Here are more input/output examples:</p> <p>[in]:</p> <pre><code>s = list('abcde'); vocab = [('a', 'b'), ('b', 'c'), ('a', 'bc'), ('abc', 'd'), ('abcd', 'e')] s = list('abcde'); vocab = [('a', 'b'), ('ab', 'c'), ('b', 'c'), ('a', 'bc'), ('abc', 'd'), ('abcd', 'e')] s = list('aaab'); vocab = [('a', 'a'), ('a', 'aa'), ('aaa', 'b')] s = list('Hello▁ポケモンセンター▁world▁'); vocab = [('l', 'l'), ('ell', 'o▁'), ('Hell', 'o▁'), ('w', 'or'), ('o', 'r'), ('e', 'l'), ('el', 'l'), ('H', 'ell'), ('H', 'e'), ('He', 'll'), ('worl', 'd▁'), ('wor', 'l'), ('l', 'd▁'), ('d', '▁'), ('wor', 'ld▁'), ('H', 'el'), ('o', '▁'), ('w', 'o'), ('l', 'o▁'), ('l', 'o')] </code></pre> <p>[out]:</p> <pre><code>['ab', 'c', 'd', 'e'] ['abcde'] ['aa', 'a', 'b'] ['Hello▁', 'ポ', 'ケ', 'モ', 'ン', 'セ', 'ン', 'タ', 'ー', '▁', 'world▁'] </code></pre> <p>P/S: For anyone interested, this is related to the <a href="https://en.wikipedia.org/wiki/Byte_pair_encoding" rel="nofollow noreferrer">byte-pair encoding</a> algorithm, and if there's a more algorithmic rather than pythonic loop way to solve this problem, please do suggest.</p>
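One simpler structure is the rank-greedy merge used by standard byte-pair encoding: treat the vocab index as merge priority and repeatedly merge the present adjacent pair with the lowest index. This reproduces all of the listed examples, though exact equivalence with the pass-based version for every possible vocab is an assumption:

```python
def combine_ngrams(tokens, vocab):
    # Lower index in vocab = higher merge priority, as in byte-pair encoding.
    ranks = {pair: i for i, pair in enumerate(vocab)}
    while True:
        best = None  # (rank, position) of the highest-priority adjacent pair
        for i in range(len(tokens) - 1):
            r = ranks.get((tokens[i], tokens[i + 1]))
            if r is not None and (best is None or r < best[0]):
                best = (r, i)
        if best is None:
            return tokens  # no mergeable pair left
        i = best[1]
        tokens = tokens[:i] + [tokens[i] + tokens[i + 1]] + tokens[i + 2:]
```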
<python><string><while-loop><byte-pair-encoding>
2023-04-25 11:40:07
2
123,325
alvas
76,100,520
11,198,558
How can I customize hover for plotly.graph_objects.Scattergeo?
<p>I have a problem customizing the hover box when plotting with Scattergeo from plotly.graph_objects. Specifically, I have a dataframe like this</p> <pre><code>data = pd.DataFrame({'country': &lt;list of country&gt;, 'importValue': &lt;list of data&gt;, 'lat': &lt;list of latitude for each country&gt;, 'lon': &lt;list of longitude for each country&gt;, </code></pre> <p>Then, I plot using plotly as below:</p> <pre><code> testFig = go.Figure() testFig.add_trace(go.Scattergeo( lon = data['lon'], lat = data['lat'], text = data['country'], marker = dict( size = data['value']/1000000, sizemode = 'area' ), hovertemplate = '&lt;b&gt;%{text}&lt;/b&gt;' )) testFig.update_layout(autosize=False, width=700, height=700) testFig.show() </code></pre> <p>The hover box currently just contains the name of the country (as in the figure below), and I don't know how to add the value of the &quot;importValue&quot; column to this hover.</p> <p>Besides, I would also like to know how to change the background of the map for better visualization; these lines of code just render the map in white.</p> <p>Please help! Thanks!</p> <p><a href="https://i.sstatic.net/8Co8T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Co8T.png" alt="plot rendered by my code" /></a></p>
<python><plotly>
2023-04-25 11:11:02
1
981
ShanN
76,100,518
21,787,377
'TemporaryUploadedFile' object has no attribute 'get'
<p>I'm getting this error every time I try to upload a video file, and I think the problem comes from the <code>moviepy</code> library that I'm using to cut off every video that is longer than 10 minutes.</p> <p>The error: <code>AttributeError at /CreateVideo 'TemporaryUploadedFile' object has no attribute 'get'</code></p> <p>views:</p> <pre><code>from moviepy.video.io.VideoFileClip import VideoFileClip from django.core.exceptions import ValidationError def create_video(request): if request.method == 'POST': title = request.POST['title'] video = request.FILES['video'] banner = request.FILES['banner'] video_file = video video_clip = VideoFileClip(video_file.temporary_file_path()) duration = video_clip.duration video_clip.close() if duration &gt; 600: # 10 minute in seconds raise ValidationError('the video cannot be longer than 10 minute') return video_file new_video = Video.objects.create( user=request.user, title=title, video=video, banner=banner ) new_video.save() return redirect('Video') return render(request, 'video/create_video.html') </code></pre> <p>models:</p> <pre><code>class Video(models.Model): user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) title = models.CharField(max_length=70) video = models.FileField(upload_to='videos') created_on = models.DateTimeField(auto_now_add=True) banner = models.ImageField(upload_to='banner') slug = models.SlugField(max_length=100, unique=True) </code></pre>
<python><django><moviepy>
2023-04-25 11:10:39
1
305
Adamu Abdulkarim Dee
76,100,345
1,341,942
How does Python decide whether to call __setitem__ or __getitem__ methods?
<p>As a C++ programmer, I look at things from the perspective of the C++ language. If I want to define an indexing operator, I do it like this:</p> <pre><code>struct S { auto operator [](int) -&gt; string { throw 0; return &quot;hello&quot;; } }; </code></pre> <p>Notice that my method always throws, no matter whether it is used for assigning or reading:</p> <pre><code>S s{}; auto v = s[0]; // throws s[1] = 127; // throws </code></pre> <p>So I was surprised that similar Python code works differently depending on context:</p> <pre><code>dict = {} a = dict[0] # throws dict[1] = 127 # works fine </code></pre> <p>I have found that the indexing operator in Python is syntactic sugar for the methods <code>__getitem__</code> and <code>__setitem__</code>. But how does the interpreter decide which one to call?</p>
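The decision is purely syntactic: a subscript that is the target of an assignment compiles to a store instruction that dispatches to `__setitem__`, while a subscript read anywhere else compiles to a load instruction that dispatches to `__getitem__` (augmented assignment like `t[k] += v` does both). A minimal sketch that records which method each use hits:

```python
class Tracker:
    """Records which special method each subscript use dispatches to."""
    def __init__(self):
        self._data = {}
        self.calls = []

    def __getitem__(self, key):          # compiled from a read:  x = t[k]
        self.calls.append(("get", key))
        return self._data[key]

    def __setitem__(self, key, value):   # compiled from a write: t[k] = v
        self.calls.append(("set", key))
        self._data[key] = value

t = Tracker()
t[1] = 127   # assignment target -> __setitem__ only
v = t[1]     # expression context -> __getitem__ only
t[1] += 1    # augmented assignment -> __getitem__ then __setitem__
```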
<python>
2023-04-25 10:49:25
2
4,713
nikitablack
76,100,078
9,749,124
Python pip error - AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'
<p>I was installing Python libraries with pip on my Mac Catalina for the last 3 years. Command: <code>sudo pip3 install &lt;lib_name&gt;</code> Suddenly, that stopped working. Whenever I want to install something I get this error:</p> <pre><code>Traceback (most recent call last): File &quot;/Library/Frameworks/Python.framework/Versions/3.8/bin/pip3&quot;, line 10, in &lt;module&gt; sys.exit(main()) File &quot;/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pip/_internal/cli/main.py&quot;, line 73, in main command = create_command(cmd_name, isolated=(&quot;--isolated&quot; in cmd_args)) File &quot;/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pip/_internal/commands/__init__.py&quot;, line 96, in create_command module = importlib.import_module(module_path) File &quot;/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/importlib/__init__.py&quot;, line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1014, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 975, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 671, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 783, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 219, in _call_with_frames_removed File &quot;/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pip/_internal/commands/install.py&quot;, line 24, in &lt;module&gt; from pip._internal.cli.req_command import RequirementCommand File &quot;/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pip/_internal/cli/req_command.py&quot;, line 15, in &lt;module&gt; from pip._internal.index.package_finder import PackageFinder File
&quot;/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pip/_internal/index/package_finder.py&quot;, line 21, in &lt;module&gt; from pip._internal.index.collector import parse_links File &quot;/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pip/_internal/index/collector.py&quot;, line 12, in &lt;module&gt; from pip._vendor import html5lib, requests File &quot;/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pip/_vendor/requests/__init__.py&quot;, line 97, in &lt;module&gt; from pip._vendor.urllib3.contrib import pyopenssl File &quot;/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pip/_vendor/urllib3/contrib/pyopenssl.py&quot;, line 46, in &lt;module&gt; import OpenSSL.SSL File &quot;/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/OpenSSL/__init__.py&quot;, line 8, in &lt;module&gt; from OpenSSL import crypto, SSL File &quot;/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/OpenSSL/crypto.py&quot;, line 1556, in &lt;module&gt; class X509StoreFlags(object): File &quot;/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/OpenSSL/crypto.py&quot;, line 1577, in X509StoreFlags CB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK' </code></pre> <p>I have tried this:</p> <pre><code>sudo pip3 install pyOpenSSL --upgrade sudo pip3 install cryptography --upgrade </code></pre> <p>But I get the same error.</p> <p>How to fix this issue?</p>
<python><pip>
2023-04-25 10:19:25
1
3,923
taga
76,099,798
11,901,834
Print all python logs within GitHub Action
<p>I have a GitHub Action that calls a python script:</p> <pre><code>- name: Test Python Script run: | python python_script.py </code></pre> <p>This runs fine and prints me the logs from within here:</p> <pre><code>if __name__ == &quot;__main__&quot;: print('Calling function') func_call() print('End of script') </code></pre> <p>However, I don't get the print statements from <code>func_call</code> itself:</p> <pre><code>def func_call(): print('inside func_call') </code></pre> <p>When I run from my own terminal, the output is:</p> <pre><code>Calling function inside func_call End of script </code></pre> <p>Yet when I run from a GitHub Action I only get:</p> <pre><code>Calling function End of script </code></pre> <p>Is there a way I can propagate all print statements to the GitHub Action logs?</p>
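A likely cause (an assumption, since the script isn't fully shown) is output buffering: when stdout is a pipe, as in CI, CPython block-buffers it, so prints from imported code can appear late or be lost if the process exits abnormally. Passing `flush=True` (or running `python -u`, or setting `PYTHONUNBUFFERED=1` in the workflow env) forces each line out immediately. A sketch, with the output captured just to make the behavior checkable:

```python
import io
from contextlib import redirect_stdout

def func_call():
    # flush=True pushes the line out immediately instead of waiting for
    # the block-buffered stream to fill.
    print("inside func_call", flush=True)

buffer = io.StringIO()
with redirect_stdout(buffer):
    print("Calling function", flush=True)
    func_call()
    print("End of script", flush=True)
```

In the workflow itself the simplest form is `run: python -u python_script.py`.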
<python><github-actions>
2023-04-25 09:48:40
2
1,579
nimgwfc
76,099,788
8,040,369
Multiprocessing: Python code to call my method with pool.map_async().get() is not working
<p>I am trying to run the below basic example code for multiprocessing in my Jupyter Notebook.</p> <pre><code>import multiprocessing import time def square(x): print('=====&gt; ',x) return x * x if __name__ == &quot;__main__&quot;: pool = multiprocessing.Pool() inputs = [0,1,2,3,4] outputs_async = pool.map_async(square, inputs) print(outputs_async) outputs = outputs_async.get(timeout=10) print(&quot;Output: {}&quot;.format(outputs)) </code></pre> <p>And I am getting the output below:</p> <pre><code>&lt;multiprocessing.pool.MapResult object at 0x00000249E3A17188&gt; --------------------------------------------------------------------------- TimeoutError Traceback (most recent call last) &lt;ipython-input-3-f7d558819a08&gt; in &lt;module&gt; 12 outputs_async = pool.map_async(square, inputs) 13 print(outputs_async) ---&gt; 14 outputs = outputs_async.get(timeout=10) 15 print(&quot;Output: {}&quot;.format(outputs)) ~\Anaconda3\lib\multiprocessing\pool.py in get(self, timeout) 651 self.wait(timeout) 652 if not self.ready(): --&gt; 653 raise TimeoutError 654 if self._success: 655 return self._value TimeoutError: </code></pre> <p>It looks like the code is not calling my <strong>square</strong> method at all.</p> <p>Please help on how to fix this.</p> <p>Thanks,</p>
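A likely explanation (an assumption based on the Anaconda/Windows paths in the traceback) is that on Windows, worker processes are spawned and must re-import the target function from a real module, which a Jupyter cell is not, so the workers never run `square` and `.get()` times out. One notebook-safe sketch keeps the identical API but uses threads, so nothing has to be pickled or re-imported:

```python
from multiprocessing.pool import ThreadPool

def square(x):
    return x * x

# ThreadPool mirrors multiprocessing.Pool's map_async/get interface but runs
# the work in threads of the current process, so it works inside a notebook.
with ThreadPool() as pool:
    outputs = pool.map_async(square, [0, 1, 2, 3, 4]).get(timeout=10)
```

For true multi-process execution, the usual fix is to move `square` into a separate `.py` module and import it in the notebook.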
<python><python-3.x><multithreading><jupyter-notebook><multiprocessing>
2023-04-25 09:47:31
1
787
SM079
76,099,547
11,321,089
How to extract into lists all the values for each key in a nested dictionary
<p>I have some code which loops through a nested dictionary and extracts the nested values into lists but thought there has to be a more pythonic way of doing it.</p> <p><strong>Code</strong></p> <pre><code>if __name__ == &quot;__main__&quot;: master_dict = {&quot;1&quot;: {&quot;1&quot;: 11, &quot;2&quot;:22, &quot;3&quot;: 33}, &quot;2&quot;: {&quot;1&quot;: 15, &quot;2&quot;:23, &quot;3&quot;: 31}} #print(master_dict, '\n') ones = [] twenties = [] thirties = [] for k, v in master_dict.items(): #print(k, &quot; : &quot;, v) ones.append(v[&quot;1&quot;]) twenties.append(v[&quot;2&quot;]) thirties.append(v[&quot;3&quot;]) print(&quot;ones : &quot;, ones) print(&quot;twenties : &quot;, twenties) print(&quot;thirties : &quot;, thirties) </code></pre> <p><strong>Returns</strong></p> <pre><code>ones : [11, 15] twenties : [22, 23] thirties : [33, 31] </code></pre>
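A generic sketch of the same idea that avoids hard-coding the inner keys (so it still works if a sample gains a new factor) uses `collections.defaultdict`:

```python
from collections import defaultdict

master_dict = {"1": {"1": 11, "2": 22, "3": 33},
               "2": {"1": 15, "2": 23, "3": 31}}

# Collect every inner key's values without naming the keys up front.
collected = defaultdict(list)
for inner in master_dict.values():
    for key, value in inner.items():
        collected[key].append(value)
```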
<python><dictionary><nested>
2023-04-25 09:21:18
3
909
Windy71
76,099,406
603,774
'Set as interpreter' for a .py file in VS Code
<p>While running one of the examples, the video tutorial shows a <code>Set as interpreter</code> message in VS Code, which allows you to explicitly specify which interpreter should be used for the .py file.</p> <img src="https://i.sstatic.net/OwFO9.png" width="400" /> <p>My VS Code doesn't show this message, and it doesn't appear even after I type &quot;#!&quot;.</p> <img src="https://i.sstatic.net/UIqcP.png" width="400" /> <p>The environment in which I work is fully consistent with the one in the video tutorial (VirtualBox, Ubuntu 20.04, VS Code, extensions).</p> <p>Are there any other settings that need to be added to VS Code?</p>
<python><python-3.x><visual-studio-code><ros2>
2023-04-25 09:06:35
0
5,241
Ali Mamedov
76,099,336
4,623,971
Maximum recursion depth exceeded in databricks when converting pandas dataframe to spark dataframe
<p>I'm currently writing some code in Databricks, with the default language in my notebook being Python. I have a function that takes a pandas dataframe as input, does some calculations, and then outputs the dataframe with a new column that contains the output from the function. The function works as expected, and it outputs a pandas dataframe. However, when I then try to convert the outputted pandas dataframe to a Spark dataframe, using <code>result = spark.createDataFrame(pandas_df)</code>, I get the following error:</p> <p>maximum recursion depth exceeded in <code>__instancecheck__</code></p> <p>I can convert the pandas dataframe to a Spark dataframe prior to the function call, and I know that the function works as expected as I can see the output; however, every time I try to convert back to a Spark dataframe, I get the error.</p>
<python><dataframe><pyspark><azure-databricks>
2023-04-25 08:59:43
1
334
JGW
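One possible direction for the question above (an editorial sketch; the question does not show the frame's dtypes, so this is an assumption): a common trigger for this error is a pandas extension dtype or object column that Spark's type inference recurses over. Downcasting to plain NumPy dtypes before the conversion sometimes resolves it; `spark` and the final call are assumed to exist in the Databricks session.

```python
import pandas as pd

# a frame with a nullable extension dtype, standing in for the real output
pandas_df = pd.DataFrame({"a": pd.array([1, 2, None], dtype="Int64"),
                          "b": ["x", "y", "z"]})

# fill the nulls, then cast the extension dtype down to a plain NumPy int64
# that Spark's schema inference handles without recursing
clean_df = pandas_df.fillna({"a": 0}).astype({"a": "int64"})
print(clean_df.dtypes)

# then, back in Databricks: result = spark.createDataFrame(clean_df)
```

If no extension dtypes are present, inspecting `pandas_df.dtypes` for `object` columns holding non-scalar values is the next thing to check.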
76,099,325
3,427,777
pandas: summarize df with conditional based on another column
<p>I have a df with many factors as columns, and many samples as rows; many of the entries are <code>NaN</code>. There is also a single <code>value</code> column that is always non-NaN. I would like to summarize the factors across all samples. The summarized <code>value</code> for each factor should be the total of all <code>value</code>s <em>where the factor is non-NaN</em>. Here is an example shrunken source with desired output:</p> <pre><code># source df p0 p1 value 0 0.588817 0.681094 80.087629 1 NaN 0.984875 52.507150 2 0.722749 NaN 84.599839 3 0.786565 0.917544 2.230228 4 NaN 0.506236 15.199917 # output df2 mean cnt value p0 0.699377 3 166.917697 p1 0.772437 4 150.024925 </code></pre> <p>Code to generate the random sample df:</p> <pre><code>numFactors = 2 samples = 5 factors = [f'p{x}' for x in range(numFactors)] df = pd.DataFrame(np.random.rand(samples, numFactors), columns=factors) df = df.where(df &gt; 0.5) # set ~half to NaN df['value'] = np.random.rand(samples, 1) * 100 </code></pre> <p>Code for the summarized df2:</p> <pre><code>df2 = pd.DataFrame(index=factors) df2['mean'] = df.mean() df2['cnt'] = df.count() valuelist = [np.where(pd.isnull(df[f]), 0, df.value).sum() for f in factors] # slow df2['value'] = valuelist </code></pre> <p>My issue is that creating <code>valuelist</code> is slow: with 25000 factors and 750 samples it takes &gt;2s, and I need to do this on thousands of similar df's. I feel like <code>groupby</code> or similar should be able to do this, but I can't get anything working. Any suggestions?</p>
<python><pandas><dataframe>
2023-04-25 08:58:32
2
22,862
fantabolous
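A vectorized answer sketch for the question above (an editorial addition), using the question's own sample numbers: multiplying the `notna` mask by the `value` column makes every non-NaN cell contribute its row's value and every NaN cell contribute 0, replacing the Python loop with one broadcast operation.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"p0": [0.588817, np.nan, 0.722749, 0.786565, np.nan],
                   "p1": [0.681094, 0.984875, np.nan, 0.917544, 0.506236],
                   "value": [80.087629, 52.507150, 84.599839,
                             2.230228, 15.199917]})
factors = ["p0", "p1"]

# broadcast value across the notna mask and sum each factor column
df2 = pd.DataFrame({"mean": df[factors].mean(),
                    "cnt": df[factors].count(),
                    "value": df[factors].notna().mul(df["value"], axis=0).sum()})
print(df2)
```

The `value` column comes out as 166.917696 for `p0` and 150.024924 for `p1`, matching the desired output up to the rounding of the shortened sample values.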
76,099,315
3,329,877
Adding custom entries to legend gradually
<p>I want to build up a plot gradually in multiple functions like this</p> <h2>Example</h2> <pre class="lang-py prettyprint-override"><code>import matplotlib.patches as mpatches import matplotlib.pyplot as plt import numpy as np def plot_on_axis(ax: plt.Axes, x: np.ndarray, y: np.ndarray, color, name) -&gt; plt.Axes: ax.plot(x, y, color=color, label=&quot;orig&quot;) ax.plot(x, y + 0.2, &quot;--&quot;, color=color, label=&quot;shifted&quot;) patch = mpatches.Patch(color=color, label=name) handles, labels = ax.get_legend_handles_labels() ax.legend(handles + [patch], labels + [name]) return ax def get_fig() -&gt; plt.Figure: x1 = np.linspace(0, 3) y1 = np.sin(x1) x2 = np.linspace(0, 3) y2 = np.cos(x2) fig = plt.figure() ax = fig.subplots() plot_on_axis(ax, x1, y1, &quot;tab:blue&quot;, &quot;sin&quot;) plot_on_axis(ax, x2, y2, &quot;tab:orange&quot;, &quot;cos&quot;) return fig get_fig().show() </code></pre> <h2>Problem</h2> <p>However, this overwrites the <code>sin</code> entry in the legend so only <code>cos</code> shows up</p> <p><a href="https://i.sstatic.net/f01wm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f01wm.png" alt="plot" /></a></p> <p>since the second call to <code>get_legend_handles_labels</code> only returns the four elements and not the added one (if it would return all, there would be duplicate entries for <code>sin</code>).</p> <p>Is there a way to build up the legend in <code>plot_on_axis</code> or should the legend be handled in <code>get_fig</code>? Handling it in <code>plot_on_axis</code> seems much more elegant to me apart from this problem.</p> <p>Alternatively, is there a better way to communicate grouping of the entries to the viewer of the plot?</p>
<python><matplotlib>
2023-04-25 08:57:00
1
397
VaNa
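One possible way to build the legend gradually for the question above (an editorial sketch; the `_extra_handles` attribute is an invented stash on the axes, and the headless backend line is only so the sketch runs outside a GUI): keep each call's patch on the axes, and de-duplicate the `orig`/`shifted` labels that repeat on every call.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for running this anywhere
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt
import numpy as np

def plot_on_axis(ax, x, y, color, name):
    ax.plot(x, y, color=color, label="orig")
    ax.plot(x, y + 0.2, "--", color=color, label="shifted")
    # stash this call's patch on the axes so later calls still see it
    extra = getattr(ax, "_extra_handles", [])
    extra.append(mpatches.Patch(color=color, label=name))
    ax._extra_handles = extra
    handles, labels = ax.get_legend_handles_labels()
    # "orig"/"shifted" repeat once per call; keep the first of each
    seen, uniq_h, uniq_l = set(), [], []
    for h, l in zip(handles, labels):
        if l not in seen:
            seen.add(l)
            uniq_h.append(h)
            uniq_l.append(l)
    ax.legend(uniq_h + extra, uniq_l + [p.get_label() for p in extra])
    return ax

fig, ax = plt.subplots()
x = np.linspace(0, 3)
plot_on_axis(ax, x, np.sin(x), "tab:blue", "sin")
plot_on_axis(ax, x, np.cos(x), "tab:orange", "cos")
print([t.get_text() for t in ax.get_legend().get_texts()])
# ['orig', 'shifted', 'sin', 'cos']
```

The alternative, which needs no stashing, is to have `plot_on_axis` only create artists and let `get_fig` assemble the legend once at the end.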
76,099,307
10,889,650
Python list from lists, somewhat like flattening a zip
<p>EDIT: Unlike <a href="https://stackoverflow.com/questions/61943924/python-flat-zip">python: flat zip</a>, I'm asking about a list of lists, not two lists.</p> <p>I have a list of lists, for example:</p> <pre><code>[[a,b,c], [d,e,f], [g,h,i,j,k]] </code></pre> <p>And I wish to interleave them such that we take the first of each list, then the second, then the third, etc. In this example case we would get:</p> <pre><code>[a,d,g,b,e,h,c,f,i,j,k] </code></pre> <p>My pretty naive implementation:</p> <pre><code> output_list = [] while len(input_lists) &gt; 0: for i in range(len(input_lists)): element = input_lists[i][0] input_lists[i] = input_lists[i][1:] output_list.append(element) input_lists = [l for l in input_lists if len(l) &gt; 0] </code></pre> <p>What's a more elegant solution?</p>
<python><python-itertools>
2023-04-25 08:56:26
3
1,176
Omroth
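A standard-library answer sketch for the question above (an editorial addition): `itertools.zip_longest` pads the shorter lists with a private sentinel, and flattening column by column gives exactly the desired interleaving.

```python
from itertools import chain, zip_longest

def interleave(lists):
    sentinel = object()  # unique marker, so None stays a legal list element
    return [x
            for x in chain.from_iterable(
                zip_longest(*lists, fillvalue=sentinel))
            if x is not sentinel]

print(interleave([["a", "b", "c"], ["d", "e", "f"],
                  ["g", "h", "i", "j", "k"]]))
# ['a', 'd', 'g', 'b', 'e', 'h', 'c', 'f', 'i', 'j', 'k']
```

Unlike the slicing approach, this never copies the input lists and runs in a single pass.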
76,099,217
10,035,190
How can I delete rows containing nonsense characters in pandas?
<p>I need to be able to remove all rows containing nonsense characters such as the ones shown below in a pandas dataframe.</p> <p><a href="https://i.sstatic.net/DCIeg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DCIeg.png" alt="enter image description here" /></a></p> <p>I have tried this code to delete rows containing gibberish data like this:</p> <pre class="lang-py prettyprint-override"><code>def dropgibbris(df): df = df.astype(str) dropList=[] for i in range(len(df)): col1=df.iloc[i, 0] col2=df.iloc[i, 1] col3=df.iloc[i, 2] if col1.isalnum() and col2.isalnum() and col3.isalnum(): pass else: dropList.append(df.iloc[i].name) for i in dropList: df.drop(index=i,inplace=True) return df </code></pre> <p>It's not working because the <code>isalnum()</code> function reports that values such as <code>77.594667</code>, <code>2023-04-24T10:44:20.4912482+00:00</code> and <code>EKO_18006_26/11/20</code> are also not alphanumeric, so it deletes the entire dataframe. I think regex might help?</p> <p>Is there any better way to do this?</p>
<python><pandas><dataframe>
2023-04-25 08:46:38
2
930
zircon
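A regex-based answer sketch for the question above (an editorial addition, with made-up sample data). The assumption here is that "nonsense" means mojibake outside printable ASCII; legitimate punctuation in decimals, timestamps, and IDs then survives, where `isalnum()` rejected it. Widen the character range if real data contains accented text.

```python
import pandas as pd

df = pd.DataFrame({"c1": ["77.594667", "Ã¯Â»Â¿Ã Â¸", "EKO_18006_26/11/20"],
                   "c2": ["2023-04-24T10:44:20.4912482+00:00", "â€™â€œ", "ok"]})

# flag any cell containing a character outside printable ASCII (0x20-0x7E)
bad = df.apply(lambda col: col.astype(str)
                              .str.contains(r"[^\x20-\x7E]", regex=True))
clean = df[~bad.any(axis=1)].copy()
print(clean)
```

This vectorized mask also avoids the row-by-row `iloc` loop, which itself becomes slow on large frames.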
76,099,093
14,109,040
splitting a list into sublists based on the element immediately before another
<p>I have a list of flight details that I scraped from google flights:</p> <pre><code>result = ['12:10\u202fPM', ' – ', '5:20\u202fPM', 'AirAsia X', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '139 kg CO2', 'Avg emissions', 'A$540', 'round trip', '7:50\u202fPM', ' – ', '1:00\u202fAM+1', 'Jetstar', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '139 kg CO2', 'Avg emissions', 'A$669', 'round trip', '11:45\u202fAM', ' – ', '4:45\u202fPM', 'Separate tickets booked together', 'Air New Zealand', '3 hr', 'SYD–AKL', 'Nonstop', '170 kg CO2', '+16% emissions', 'A$697', 'round trip', '7:25\u202fAM', ' – ', '12:35\u202fPM', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '146 kg CO2', 'Avg emissions', 'A$707', 'round trip', '11:15\u202fAM', ' – ', '4:20\u202fPM', 'LATAMQantas', '3 hr 5 min', 'SYD–AKL', 'Nonstop', '134 kg CO2', '-8% emissions', 'A$718', 'round trip', 'Price insights', 'Prices are currently low — A$463 cheaper than usual for your search', 'View price history', 'Other departing flights', '9:55\u202fAM', ' – ', '3:05\u202fPM', 'Separate tickets booked together', 'Air New Zealand', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '128 kg CO2', '-12% emissions', 'A$697', 'round trip', '3:30\u202fPM', ' – ', '8:30\u202fPM', 'Separate tickets booked together', 'Air New Zealand', '3 hr', 'SYD–AKL', 'Nonstop', '128 kg CO2', '-12% emissions', 'A$697', 'round trip', '7:15\u202fPM', ' – ', '12:25\u202fAM+1', 'Separate tickets booked together', 'Air New Zealand', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '128 kg CO2', '-12% emissions', 'A$697', 'round trip', '7:15\u202fPM', ' – ', '12:25\u202fAM+1', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '146 kg CO2', 'Avg emissions', 'A$707', 'round trip', '11:15\u202fAM', ' – ', '4:25\u202fPM', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '151 kg CO2', 'Avg emissions', 'A$727', 'round trip', '5:35\u202fPM', ' – ', 
'10:45\u202fPM', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '146 kg CO2', 'Avg emissions', 'A$727', 'round trip', '9:05\u202fAM', ' – ', '2:15\u202fPM', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '146 kg CO2', 'Avg emissions', 'A$748', 'round trip', '9:40\u202fAM', ' – ', '2:50\u202fPM', 'Separate tickets booked together', 'Qantas', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '140 kg CO2', 'Avg emissions', 'A$748', 'round trip'] </code></pre> <p>I want to clean this list by extracting individual flight details. I can see that one way to do it is by splitting the list based on the element immediately before the ' - ' element.</p> <p>I have managed to split the list based on the ' - ' element, but I'm not sure how to do it based on the element immediately before it:</p> <pre><code>import itertools y = [list(x[1]) for x in itertools.groupby(result, lambda x: x == ' – ') if not x[0]] print(y) [['12:10\u202fPM'], ['5:20\u202fPM', 'AirAsia X', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '139 kg CO2', 'Avg emissions', 'A$540', 'round trip', '7:50\u202fPM'], ['1:00\u202fAM+1', 'Jetstar', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '139 kg CO2', 'Avg emissions', 'A$669', 'round trip', '11:45\u202fAM'], ['4:45\u202fPM', 'Separate tickets booked together', 'Air New Zealand', '3 hr', 'SYD–AKL', 'Nonstop', '170 kg CO2', '+16% emissions', 'A$697', 'round trip', '7:25\u202fAM'], ['12:35\u202fPM', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '146 kg CO2', 'Avg emissions', 'A$707', 'round trip', '11:15\u202fAM'], ['4:20\u202fPM', 'LATAMQantas', '3 hr 5 min', 'SYD–AKL', 'Nonstop', '134 kg CO2', '-8% emissions', 'A$718', 'round trip', 'Price insights', 'Prices are currently low — A$463 cheaper than usual for your search', 'View price history', 'Other departing flights', '9:55\u202fAM'], ['3:05\u202fPM', 'Separate tickets booked together', 'Air New 
Zealand', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '128 kg CO2', '-12% emissions', 'A$697', 'round trip', '3:30\u202fPM'], ['8:30\u202fPM', 'Separate tickets booked together', 'Air New Zealand', '3 hr', 'SYD–AKL', 'Nonstop', '128 kg CO2', '-12% emissions', 'A$697', 'round trip', '7:15\u202fPM'], ['12:25\u202fAM+1', 'Separate tickets booked together', 'Air New Zealand', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '128 kg CO2', '-12% emissions', 'A$697', 'round trip', '7:15\u202fPM'], ['12:25\u202fAM+1', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '146 kg CO2', 'Avg emissions', 'A$707', 'round trip', '11:15\u202fAM'], ['4:25\u202fPM', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '151 kg CO2', 'Avg emissions', 'A$727', 'round trip', '5:35\u202fPM'], ['10:45\u202fPM', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '146 kg CO2', 'Avg emissions', 'A$727', 'round trip', '9:05\u202fAM'], ['2:15\u202fPM', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '146 kg CO2', 'Avg emissions', 'A$748', 'round trip', '9:40\u202fAM'], ['2:50\u202fPM', 'Separate tickets booked together', 'Qantas', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '140 kg CO2', 'Avg emissions', 'A$748', 'round trip']] </code></pre> <p>Expected result:</p> <pre><code>[['12:10\u202fPM', '5:20\u202fPM', 'AirAsia X', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '139 kg CO2', 'Avg emissions', 'A$540', 'round trip'], ['7:50\u202fPM', '1:00\u202fAM+1', 'Jetstar', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '139 kg CO2', 'Avg emissions', 'A$669', 'round trip'], ['11:45\u202fAM', '4:45\u202fPM', 'Separate tickets booked together', 'Air New Zealand', '3 hr', 'SYD–AKL', 'Nonstop', '170 kg CO2', '+16% emissions', 'A$697', 'round trip'], ['7:25\u202fAM', '12:35\u202fPM', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 
'Nonstop', '146 kg CO2', 'Avg emissions', 'A$707', 'round trip'], ['11:15\u202fAM', '4:20\u202fPM', 'LATAMQantas', '3 hr 5 min', 'SYD–AKL', 'Nonstop', '134 kg CO2', '-8% emissions', 'A$718', 'round trip', 'Price insights', 'Prices are currently low — A$463 cheaper than usual for your search', 'View price history', 'Other departing flights'], ['9:55\u202fAM', '3:05\u202fPM', 'Separate tickets booked together', 'Air New Zealand', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '128 kg CO2', '-12% emissions', 'A$697', 'round trip'], ['3:30\u202fPM', '8:30\u202fPM', 'Separate tickets booked together', 'Air New Zealand', '3 hr', 'SYD–AKL', 'Nonstop', '128 kg CO2', '-12% emissions', 'A$697', 'round trip'], ['7:15\u202fPM', '12:25\u202fAM+1', 'Separate tickets booked together', 'Air New Zealand', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '128 kg CO2', '-12% emissions', 'A$697', 'round trip'], ['7:15\u202fPM', '12:25\u202fAM+1', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '146 kg CO2', 'Avg emissions', 'A$707', 'round trip'], ['11:15\u202fAM', '4:25\u202fPM', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '151 kg CO2', 'Avg emissions', 'A$727', 'round trip'], ['5:35\u202fPM', '10:45\u202fPM', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '146 kg CO2', 'Avg emissions', 'A$727', 'round trip'], ['9:05\u202fAM', '2:15\u202fPM', 'Separate tickets booked together', 'QantasAir Tahiti Nui', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '146 kg CO2', 'Avg emissions', 'A$748', 'round trip'], ['9:40\u202fAM', '2:50\u202fPM', 'Separate tickets booked together', 'Qantas', '3 hr 10 min', 'SYD–AKL', 'Nonstop', '140 kg CO2', 'Avg emissions', 'A$748', 'round trip']] </code></pre>
<python>
2023-04-25 08:29:03
3
712
z star
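A single-pass answer sketch for the question above (an editorial addition; the sample list is shortened, and the real separator is <code>' – '</code> with an en dash, while the times contain U+202F narrow spaces): whenever the separator appears, the element just before it (the departure time) is moved into a fresh group, and the separator itself is dropped.

```python
def split_flights(items, sep=" – "):
    groups, current = [], []
    for item in items:
        if item == sep:
            start = current.pop()   # the departure time opens the next flight
            if current:
                groups.append(current)
            current = [start]
        else:
            current.append(item)
    if current:
        groups.append(current)
    return groups

sample = ["12:10 PM", " – ", "5:20 PM", "AirAsia X", "A$540",
          "7:50 PM", " – ", "1:00 AM+1", "Jetstar", "A$669"]
print(split_flights(sample))
# [['12:10 PM', '5:20 PM', 'AirAsia X', 'A$540'],
#  ['7:50 PM', '1:00 AM+1', 'Jetstar', 'A$669']]
```

Note that trailing non-flight items ("Price insights", etc.) end up in whichever group precedes them, as in the expected output shown in the question.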
76,099,043
10,533,225
java.lang.IllegalStateException - Couldn't find column
<p>I have this code which corrects the MLE value to MALE. I first take the distinct values of GenderType, and then correct the wrong spelling.</p> <pre><code>genderTypeDim = ( qualificationConsolidatedData.select(&quot;GenderType&quot;) .distinct() .dropna(&quot;all&quot;) ) genderTypeDim.show() genderTypeDim = genderTypeDim .withColumn( &quot;GenderType&quot;, F.when(F.col(&quot;GenderType&quot;) == &quot;MLE&quot;, &quot;MALE&quot;).otherwise( F.col(&quot;GenderType&quot;) ), ).distinct() genderTypeDim.show() genderTypeDim = genderTypeDim .withColumn( &quot;GenderTypeSortOrder&quot;, F.when(F.col(&quot;GenderType&quot;) == &quot;Male&quot;, 1) .when(F.col(&quot;GenderType&quot;) == &quot;Female&quot;, 2) .otherwise(3), ) genderTypeDim.show() </code></pre> <p>However, when I run this code, it gives me an error like this (it prints the two show outputs, then the error):</p> <pre><code>+---------------+ | ChangeType| +---------------+ | Bespoke| |Multi-qualified| | Non-bespoke| +---------------+ +---------------+ | ChangeType| +---------------+ | Non-bespoke| |Multi-qualified| +---------------+ java.lang.IllegalStateException: Couldn&#39;t find GenderType#232317 in [CASE WHEN (GenderType#232317 = MLE) THEN MALE ELSE GenderType#232317 END#244149] </code></pre> <p>If I remove the distinct from the first snippet, the error goes away. Can you explain why?</p> <pre><code>genderTypeDim = ( qualificationConsolidatedData.select(&quot;GenderType&quot;) .dropna(&quot;all&quot;) ) </code></pre>
<python><pyspark><azure-databricks>
2023-04-25 08:23:17
1
583
Tenserflu
76,098,861
11,426,624
get minimum value which is greater than some value from column in pandas dataframe
<p>I have a pandas dataframe and would like to create a column <code>min_date_after_ref_date</code> which shows the minimum date after a given ref_date. I have the following code.</p> <pre><code>ref_date = datetime.strptime('2023-04-21 12:00', '%Y-%m-%d %H:%M') df = pd.DataFrame({'id':[1,2,1,1,3], 'time_stamp': ['2023-04-19 12:05', '2023-04-21 12:45', '2023-04-21 15:45', '2023-04-23 13:15', '2023-04-18 12:05']}) df = df.assign(time_stamp=pd.to_datetime(df.time_stamp)) df = df.assign(min_date_after_ref_date=df[df.time_stamp&gt;ref_date].groupby('id').time_stamp.transform('min')) </code></pre> <p>and I get this</p> <pre><code>id time_stamp min_date_after_ref_date 0 1 2023-04-19 12:05:00 NaT 1 2 2023-04-21 12:45:00 2023-04-21 12:45:00 2 1 2023-04-21 15:45:00 2023-04-21 15:45:00 3 1 2023-04-23 13:15:00 2023-04-21 15:45:00 4 3 2023-04-18 12:05:00 NaT </code></pre> <p>but I would like there to be <code>2023-04-21 15:45:00</code> also in the first row (and not <code>NaN</code>), so that for each <code>id</code> there is always the same value of <code>min_date_after_ref_date</code>. How can I change this?</p>
<python><pandas><dataframe><group-by>
2023-04-25 08:00:24
1
734
corianne1234
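An answer sketch for the question above (an editorial addition, reusing the question's own data): masking the timestamps to `NaT` first, then taking the group minimum with `transform` over the *full* frame, broadcasts the per-id minimum back to every row, including the rows that are themselves before `ref_date`.

```python
import pandas as pd

ref_date = pd.Timestamp("2023-04-21 12:00")
df = pd.DataFrame({"id": [1, 2, 1, 1, 3],
                   "time_stamp": pd.to_datetime(
                       ["2023-04-19 12:05", "2023-04-21 12:45",
                        "2023-04-21 15:45", "2023-04-23 13:15",
                        "2023-04-18 12:05"])})

# mask timestamps at or before ref_date to NaT, then group-minimize over
# the unfiltered frame so every row of an id receives the same value
df["min_date_after_ref_date"] = (df["time_stamp"]
                                 .where(df["time_stamp"] > ref_date)
                                 .groupby(df["id"])
                                 .transform("min"))
print(df)
```

Ids with no timestamp after `ref_date` (id 3 here) still get `NaT`, which matches the question's last row.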
76,098,857
5,581,413
Add a filter/classifier to the Python function for generating documentation for custom tags
<p>I want to generate the documentation for my Python project.</p> <p>I know that there are a lot of useful tools for this, such as pdoc and Sphinx.</p> <p>One of my biggest requirements is that I want to add a tag or classifier to some functions located in different modules.</p> <p><strong>Suppose I have three modules.</strong></p> <pre><code>ModuleA with functions: a, b, c ModuleB with functions: d, e, f ModuleC with functions: x, y, z </code></pre> <p><strong>Assume functions a, e, f, x share the same tag, called tag1</strong></p> <pre><code>a.py class ModuleA: def a(self): ''' This is docstring tag: tag1 ''' do_something... b.py class ModuleB: def e(self): ''' This is docstring tag: tag1 ''' do_something... def f(self): ''' This is docstring tag: tag1 ''' do_something... </code></pre> <p>How do I create a custom filter or section for tag1 in the documentation?</p> <p>I want to be able to use the tag1 filter to list functions a, e, f, x, their documentation, and which module each is located in.</p>
<python><documentation><python-sphinx><pdoc>
2023-04-25 07:59:59
0
387
codebrew
76,098,792
5,370,631
Explode a json string present in pyspark dataframe
<p>I have a JSON string <code>substitutions</code> as a column in dataframe which has multiple array elements that I want to explode and create a new row for each element present in that array. There are other columns present in the df</p> <p>My dataframe looks like this:</p> <pre><code>+--------------------+----------+--------------------+--------------------+----------+--------------------+---------+--------+--------------------+ | requestid|sourcepage| cartid| tm| dt| customerId| usItemId|prefType| substitutions| +--------------------+----------+--------------------+--------------------+----------+--------------------+---------+--------+--------------------+ |00-efbedfe05b4482...| CHECKOUT|808b44cc-1a38-4dd...|2023-04-25 00:07:...|2023-04-25|f1a34e16-a6d0-6f5...|862776084| NO_PREF|{&quot;id&quot;:{&quot;productId...| +--------------------+----------+--------------------+--------------------+----------+--------------------+---------+--------+--------------------+ </code></pre> <p>json string column <code>substitutions</code></p> <pre><code>[ { &quot;id&quot;: { &quot;productId&quot;: &quot;2N3UYGUTROQK&quot;, &quot;usItemId&quot;: &quot;32667929&quot; }, &quot;usItemId&quot;: &quot;32667929&quot;, &quot;itemRank&quot;: 1, &quot;customerChoice&quot;: false }, { &quot;id&quot;: { &quot;productId&quot;: &quot;2N3UYGUTRHQK&quot;, &quot;usItemId&quot;: &quot;32667429&quot; }, &quot;usItemId&quot;: &quot;32667429&quot;, &quot;itemRank&quot;: 2, &quot;customerChoice&quot;: true }, { &quot;id&quot;: { &quot;productId&quot;: &quot;2N3UYGUTRYQK&quot;, &quot;usItemId&quot;: &quot;32667529&quot; }, &quot;usItemId&quot;: &quot;32667529&quot;, &quot;itemRank&quot;: 3, &quot;customerChoice&quot;: false }, { &quot;id&quot;: { &quot;productId&quot;: &quot;2N3UYGUTIQK&quot;, &quot;usItemId&quot;: &quot;32667329&quot; }, &quot;usItemId&quot;: &quot;32667329&quot;, &quot;itemRank&quot;: 4, &quot;customerChoice&quot;: false }, {&quot;id&quot;: { &quot;productId&quot;: 
&quot;2N3UYGUTYOQK&quot;, &quot;usItemId&quot;: &quot;32663929&quot; }, &quot;usItemId&quot;: &quot;32663929&quot;, &quot;itemRank&quot;: 5, &quot;customerChoice&quot;: false } ] </code></pre> <p>I have tried the below but not getting desired results</p> <pre><code>df.select(&quot;*&quot;, f.explode(f.from_json(&quot;substitutions&quot;, MapType(StringType(),StringType())))) +--------------------+----------+--------------------+--------------------+----------+--------------------+---------+--------+--------------------+-------+ | requestid|sourcepage| cartid| tm| dt| customerId| usItemId|prefType| substitutions|entries| +--------------------+----------+--------------------+--------------------+----------+--------------------+---------+--------+--------------------+-------+ |00-efbedfe05b4482...| CHECKOUT|808b44cc-1a38-4dd...|2023-04-25 00:07:...|2023-04-25|f1a34e16-a6d0-6f5...|862776084| NO_PREF|[{&quot;id&quot;:{&quot;productI...| null| +--------------------+----------+--------------------+--------------------+----------+--------------------+---------+--------+--------------------+-------+ </code></pre> <p>What mistake am I doing here?</p>
<python><json><apache-spark><pyspark><apache-spark-sql>
2023-04-25 07:51:19
2
1,572
Shibu
76,098,609
4,723,732
Why Ansible molecule path is not detected? molecule: command not found
<p>I tried to init a role:</p> <pre><code>molecule init role mikitovcluster </code></pre> <p>and got</p> <pre><code>molecule: command not found </code></pre> <p>I listed my collections:</p> <pre><code>ansible-galaxy collection list # /usr/lib/python3/dist-packages/ansible_collections Collection Version ----------------------------- ------- amazon.aws 5.4.0 ansible.netcommon 4.1.0 ansible.posix 1.5.1 ansible.utils 2.9.0 ansible.windows 1.13.0 arista.eos 6.0.0 # /home/miki/.ansible/collections/ansible_collections Collection Version -------------------- ------- ansible.posix 1.5.2 ansible.utils 2.9.0 community.general 6.5.0 community.kubernetes 2.0.1 community.molecule 0.1.0 kubernetes.core 2.4.0 </code></pre> <p>Output of <code>ansible --version</code>:</p> <pre><code>ansible [core 2.14.4] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/miki/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3/dist-packages/ansible ansible collection location = /home/miki/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (/usr/bin/python3) jinja version = 3.0.3 libyaml = True </code></pre> <p>How come I have two paths for my collections?</p>
<python><ansible><molecule>
2023-04-25 07:29:31
0
8,250
Richard Rublev
76,098,566
12,304,000
intentionally raise an exception and fail the DAG
<p>In my Airflow dag, I am doing this:</p> <pre><code> try: response = requests.get(url, params=params, headers=headers) if response.status_code == 403: raise Exception(f&quot;Data Unavailable for metric: {m}, account: {account}&quot;) except Exception as e: logging.info(f&quot;Exception during API request: {e} for metric: {m}, account: {account}&quot;) </code></pre> <p>The except <code>Exception as e:</code> part only runs when there's an error from the API for example a wrong API etc. However, in case of <code>if response.status_code == 403</code>, the DAG does not fail, even though I manually wrote a &quot;raise Exception&quot;. It just moves on to the next steps and shows green or &quot;success&quot; for that task.</p> <p>How can I force the DAG to fail, throw an error and stop if this error occurs?</p>
<python><python-3.x><exception><airflow><airflow-2.x>
2023-04-25 07:24:07
1
3,522
x89
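An answer sketch for the question above (an editorial addition): the task succeeds because the manually raised exception is caught by the same `except` block that only logs it, so nothing propagates to Airflow. Re-raising after logging (or not catching the custom error at all) lets Airflow mark the task failed. The request is reduced to a bare status code here so the pattern runs standalone; the function and class names are illustrative.

```python
import logging

class DataUnavailableError(Exception):
    """Raised when the API answers 403 for a metric/account pair."""

def check_response(status_code, m="metric", account="acct"):
    try:
        if status_code == 403:
            raise DataUnavailableError(
                f"Data Unavailable for metric: {m}, account: {account}")
    except Exception as e:
        logging.error("Exception during API request: %s", e)
        raise  # re-raise so the Airflow task actually fails

check_response(200)  # fine, task would continue
try:
    check_response(403)
except DataUnavailableError as e:
    print(f"task would fail: {e}")
```

Airflow also ships `airflow.exceptions.AirflowFailException`, which fails the task immediately without triggering retries, if that behavior is preferred.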
76,098,535
4,481,287
Handling async streams in flask with twilio
<p>I have a basic flask app to serve twilio calls. When a call is generated via twilio it goes to my <code>/test</code> http api.</p> <p>In this api I'm generating a twiml conversation where the first thing that happens is a verb just saying &quot;hello world&quot;. Then, concurrently I start a stream in the background. I'm using <code>flask_sock</code> for handling websocket connections in my flask app.</p> <p>Here is my flask app route function that gets executed when the twilio call starts</p> <pre><code>@app.route('/test', methods=['POST']) def test(): response = VoiceResponse() connect = Connect() connect.stream(url=f'wss://{request.host}/stream') response.say(&quot;hello world&quot;) response.append(connect) response.pause(length=30) return str(response), 200, {'Content-Type': 'text/xml'} </code></pre> <p>Everything works when my stream is not async.</p> <pre><code>@sock.route('/stream') def stream(ws): do_something() </code></pre> <p>When I try to make my stream function async, like this:</p> <pre><code>@sock.route('/stream') async def stream(ws): await do_something() </code></pre> <p>I get <code>RuntimeWarning: coroutine 'stream' was never awaited</code> And my socket gets automatically terminated (<code>ConnectionError</code>)</p> <p>How can I make this work with async?</p> <p>For extra context: I'm running my flask app like this and then just running <code>flask run</code></p> <pre><code>app = Flask(__name__) sock = Sock(app) ... functions described above if __name__ == '__main__': app.run(debug=True) </code></pre> <p>I cannot <strong>await</strong> the stream function because I'm not calling it, it's just a websocket entrypoint for twilio to initiate..</p>
<python><python-3.x><flask><twilio><flask-sockets>
2023-04-25 07:20:18
1
1,371
Kevin Cohen
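A workaround sketch for the question above (an editorial addition; the names are illustrative): flask-sock handlers are plain synchronous Werkzeug views, so a coroutine registered with `@sock.route` is returned unawaited, hence the `RuntimeWarning`. One option is to drive the coroutine from a sync wrapper with `asyncio.run`; the other is moving to an async-native (ASGI) framework such as Quart.

```python
import asyncio
from functools import wraps

def run_async(coro_func):
    """Wrap an async handler so a sync framework can call it."""
    @wraps(coro_func)
    def wrapper(*args, **kwargs):
        # starts an event loop for this request and runs the coroutine
        return asyncio.run(coro_func(*args, **kwargs))
    return wrapper

# hypothetical stand-in for the @sock.route('/stream') handler body
@run_async
async def stream(ws):
    await asyncio.sleep(0)        # stands in for `await do_something()`
    return f"handled {ws}"

print(stream("fake-socket"))      # handled fake-socket
```

Note that `asyncio.run` creates a fresh event loop per call, so this fits a one-coroutine-per-connection handler but not handlers that must share a long-lived loop.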
76,099,791
758,979
Printing from the command-line on a modern continuous paper printer without wasting paper?
<p>I have a Peripage A4 continuous paper roll thermal printer, connected via USB-C to a MacBook with 13.3.1. Printing works fine from within all applications and via the commandline using <code>lpr</code>:</p> <pre><code>lorem --faust --lines 20 --randomize | lpr -P &quot;PeriPage_A4&quot; -o orientation-requested=1 </code></pre> <p>This prints 20 lines of random (randomized Goethe, in this case) lorem ipsum.</p> <p>Is there a way I can tell <code>lpr</code> to &quot;not fill an A4 page with nothingness&quot; but rather just print the text, the stop (so that only the lines that contain text are printed), or print &quot;one line at a time&quot; (again, of course, without filling the rest of the page with nothing)? Other than in this example I do not know the amount of text, it varies with each print.</p> <pre><code>Have: Want: ┌──────────────┐ ┌──────────────┐ │lorem ipsum │ │lorem ipsum │ │sit dolor et │ │sit dolor et │ │eniam unque. │ │eniam unque. │ │ │ └──────────────┘ │ │ ▲ │ │ │ │ │ │ │ │ │ │ │ │ │ │ printing stopped. │ │ │ │ │ │ lots of blank paper │ │ │ │ │ ◄─────────┘ │ │ └──────────────┘ </code></pre> <p>I have tried a Python script, but the printer does not seem to understand ESC/POS stuff:</p> <pre><code>#!/usr/bin/env python3 import sys import time from escpos.printer import Usb # Configure the printer printer = Usb(0x09c5, 0x0200, in_ep=0x81, out_ep=0x02) # Reset the printer printer.hw('reset') # Read the input text from stdin input_text = sys.stdin.read() # Print the input text printer.text(input_text) # Add a small delay time.sleep(1) # Cut the paper #printer.cut() # Close the printer connection printer.close() </code></pre>
<python><cups><lpr>
2023-04-25 07:19:41
1
1,301
Christian
76,098,496
19,674,402
Typehint tuple of ints gives IDE warning. List of ints does not. Why?
<p>Why does the typehint <code>tuple[int]</code> behave differently from <code>list[int]</code>?</p> <p>This gives me a visual warning:</p> <pre><code>def func_with_tuple(x: tuple[int]): pass func_with_tuple((1, 2, 3)) # ^ # warning _________| </code></pre> <blockquote> <p>Expected type 'tuple[int]', got 'tuple[int, int, int]' instead</p> </blockquote> <p>while this does not:</p> <pre><code>def func_with_list(x: list[int]): pass func_with_list([1, 2, 3]) </code></pre> <p>How should I typehint a <code>tuple</code> of <code>ints</code> then?</p>
<python><pycharm><type-hinting>
2023-04-25 07:15:21
1
496
PythonForEver
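An answer sketch for the question above (an editorial addition): unlike `list[int]`, whose single parameter covers every element, `tuple` hints are positional, so `tuple[int]` means a 1-tuple containing exactly one `int`. A homogeneous tuple of any length is spelled with an ellipsis.

```python
def func_with_tuple(x: tuple[int, ...]) -> None:
    """Accepts a tuple of any length whose elements are all ints."""

func_with_tuple((1, 2, 3))   # no warning: the length is unconstrained
func_with_tuple(())          # the empty tuple also matches
```

A fixed shape can still be pinned down explicitly, e.g. `tuple[int, int, int]` for exactly three ints.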
76,098,338
388,359
Validating a AWS ACM Certificate using aws.acm.CertificateValidation in Pulumi using python
<p>In Pulumi I create a ACM Certificate with a domain-name and a number of SAN. This is set to get verified using DNS.</p> <p>The Route53 records get created using the following function. This works as expected in that the records get created and the certificate status changes to valid in AWS.</p> <pre><code> # Create and add CNAME for DNS validate for ACM Cert def add_acm_validation_records(self, cert: aws.acm.Certificate) -&gt; list: valid_fqdns = cert.domain_validation_options.apply( lambda o: self.iterate_records(o) ) return valid_fqdns def iterate_records(self, validation_options): fqdns = [] for record_to_add in validation_options: record = aws.route53.Record( f&quot;r53-acm-verify-{record_to_add.domain_name}&quot;, allow_overwrite=True, name=record_to_add.resource_record_name, ttl=60, type=record_to_add.resource_record_type, records=[record_to_add.resource_record_value], zone_id=self.__customer_zone.zone_id, ) fqdns.append(record.fqdn) return fqdns </code></pre> <p>I am not able to verify the list of <code>fqdns</code> in the list. 
At the moment I can't figure out how to print them out.</p> <p>The list of <code>fqdns</code> gets passed to the following function:</p> <pre><code> def validate(self, fqdns: list) -&gt; aws.acm.Certificate: cert_validation = aws.acm.CertificateValidation( f&quot;{self.__customer_code}-cert-validation&quot;, certificate_arn=self.__customer_cert.arn, validation_record_fqdns=[pulumi.Output.all(fqdns).apply(lambda l: f&quot;{l}&quot;)], opts=pulumi.ResourceOptions(provider=self.__aws_provider_west_2), ) </code></pre> <p>This gives me the following error:</p> <pre><code> aws:acm:CertificateValidation (abcd-cert-validation): error: 1 error occurred: * 3 errors occurred: * missing *.api.abcd.sanda.XXXXXXXX.co.uk DNS validation record: _AAAAAAAA.api.abcd.sanda.XXXXXXXX.co.uk * missing *.web.abcd.sanda.XXXXXXXX.co.uk DNS validation record: _BBBBBBBB.web.abcd.sanda.XXXXXXXX.co.uk * missing *.abcd.sanda.XXXXXXXX.co.uk DNS validation record: _CCCCCCCC.abcd.sanda.XXXXXXXX.co.uk </code></pre> <p>I have verified that the 3 records mentioned above are in AWS Route53, so I am not sure why Pulumi thinks they are missing ...</p> <p>At this point the <code>Certificate</code> in <code>AWS</code> has changed its status from <code>Pending</code> to <code>Issued</code>, so I believe the configuration code is okay but not the verification part.</p> <p>If anyone can spot my mistake or a way to troubleshoot this, please let me know.</p>
<python><amazon-web-services><pulumi><pulumi-python>
2023-04-25 06:53:23
1
1,251
alexis
76,098,104
6,774,444
How to get SSL Cipher used when using boto3 to interact with AWS?
<p>We are using boto3 to authenticate to AWS using STS AssumeRoleWithSAML. I need to know the cipher suite used when establishing the connection with AWS. I added the code below, but it doesn't log any info about the SSL connection:</p> <pre class="lang-py prettyprint-override"><code>import logging import boto3 boto3.set_stream_logger('', logging.DEBUG) logging.getLogger('').setLevel(logging.DEBUG) sts_client = boto3.client('sts') </code></pre> <p>How can I log the SSL details here?</p>
<python><amazon-web-services><ssl><boto3><python-logging>
2023-04-25 06:16:56
0
485
Faiz Kidwai
76,097,925
9,848,043
AssertionError: query should be unbatched 2D or batched 3D tensor but received 4-D query tensor: Pytorch
<p>I have this <a href="https://paperswithcode.com/method/cct" rel="nofollow noreferrer">CCT</a> Encoder</p> <pre><code>class CctEncoder(nn.Module): def __init__(self, in_channels, cct_block_params, num_layers): super().__init__() self.conv = nn.Conv2d(in_channels, cct_block_params[0][0], kernel_size=3, padding=1) self.blocks = nn.ModuleList() for i in range(num_layers): in_channels, out_channels, num_heads, mlp_ratio = cct_block_params[i] block = CctBlock(in_channels, out_channels, num_heads, mlp_ratio) self.blocks.append(block) def forward(self, x): x = self.conv(x) for block in self.blocks: x = block(x) return x </code></pre> <p>where the CCTBlock is like:</p> <pre><code>class CctBlock(nn.Module): def __init__(self, in_channels, out_channels, num_heads, mlp_ratio=4.0): super().__init__() self.norm1 = nn.LayerNorm(in_channels) self.attn = nn.MultiheadAttention(in_channels, num_heads) self.norm2 = nn.LayerNorm(in_channels) self.mlp = nn.Sequential( nn.Linear(in_channels, int(in_channels * mlp_ratio)), nn.GELU(), nn.Linear(int(in_channels * mlp_ratio), out_channels), ) def forward(self, x): x_norm = self.norm1(x) attn_output, _ = self.attn(x_norm, x_norm, x_norm) x = x + attn_output x_norm = self.norm2(x) mlp_output = self.mlp(x_norm) x = x + mlp_output return x </code></pre> <p>Now I also have a CNN decoder.</p> <pre><code>class CnnDecoder(nn.Module): def __init__(self, in_channels, num_blocks, out_channels): super().__init__() self.blocks = nn.ModuleList() for i in range(num_blocks): self.blocks.append(nn.Conv2d(in_channels, in_channels // 2, kernel_size=3, padding=1)) in_channels //= 2 self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1) def forward(self, x): for block in self.blocks: x = F.interpolate(x, scale_factor=2, mode=&quot;nearest&quot;) x = block(x) x = F.relu(x) x = self.conv(x) return x </code></pre> <p>I want to execute this whole thing using the below code. 
Here, CCT will work as a primary encoder layer and CNN will work as a decoder layer.</p> <pre><code>import torch import torch.nn as nn import torch.nn.functional as F class InpaintingModel(nn.Module): def __init__(self, cct_block_params=((576, 128, 8, 2.0),) * 5, num_blocks=5): super().__init__() self.encoder = CctEncoder(3, cct_block_params, num_layers=len(cct_block_params)) self.grid_generator = nn.Sequential( nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(256, 512, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(512, 1024, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(1024, 2048, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.ConvTranspose2d(2048, 1024, kernel_size=4, stride=2, padding=1), nn.ReLU(), ) self.decoder = CnnDecoder(1024, num_blocks, out_channels=3) self.mask_conv = nn.Conv2d(3, 1, kernel_size=1) self.sigmoid = nn.Sigmoid() def forward(self, x, mask): encoded_x = self.encoder(x) batch_size, channels, height, width = encoded_x.size() mask = F.interpolate(mask, size=(height, width), mode='bilinear', align_corners=False) mask = self.sigmoid(self.mask_conv(mask)) masked_encoded_x = encoded_x * mask grid = self.grid_generator(mask.unsqueeze(1)) grid = grid.expand(batch_size, -1, -1, -1) deformed_masked_encoded_x = F.grid_sample(masked_encoded_x, grid, mode='bilinear', align_corners=False) decoded_x = self.decoder(deformed_masked_encoded_x) return decoded_x </code></pre> <p>I am getting the following error while running this model.</p> <pre><code>AssertionError: query should be unbatched 2D or batched 3D tensor but received 4-D query tensor </code></pre> <p><a href="https://gist.github.com/lepotatoguy/a137a748b5a35d3b14ade0c563ab6e65" rel="nofollow noreferrer">Traceback</a></p> <p><a href="https://gist.github.com/lepotatoguy/12aa82874503f986621b35d51a608d80" 
rel="nofollow noreferrer">Full code</a></p> <p>What should I do in this case? Am I missing anything here?</p>
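For reference, `nn.MultiheadAttention` accepts an unbatched 2-D or batched 3-D tensor, while the `Conv2d` in `CctEncoder` emits 4-D `(B, C, H, W)`. A minimal sketch of the usual workaround (flatten the spatial dims into a sequence, attend, restore), using illustrative sizes rather than the model's real ones:

```python
import torch
import torch.nn as nn

# Illustrative sizes, not the model's real ones.
attn = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)

x = torch.randn(2, 32, 8, 8)            # (B, C, H, W), as a conv produces
b, c, h, w = x.shape

seq = x.flatten(2).transpose(1, 2)      # (B, H*W, C): a batched 3-D tensor
out, _ = attn(seq, seq, seq)            # no assertion error in this layout

restored = out.transpose(1, 2).reshape(b, c, h, w)   # back to (B, C, H, W)
print(restored.shape)
```

Note the same layout question applies to `nn.LayerNorm(in_channels)`, which normalizes the last dimension of whatever tensor it receives.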
<python><deep-learning><pytorch><tensor>
2023-04-25 05:43:59
0
1,115
Joyanta J. Mondal
76,097,868
10,294,022
Python subprocess output stream stdout.read hangs forever for input prompt
<p>I am trying to run a subprocess with Popen, and at some point it shows an input prompt without any <code>EOF</code>, so <code>stdout.read()</code> blocks the while loop until an <code>EOF</code> is found, effectively forever, because there will be none.</p> <p>I am unable to detect whether an input prompt is coming next via</p> <ul> <li><code>proc.stdout.isatty()</code>: it stays False</li> <li><code>proc.stdout.writable()</code>: it stays False</li> </ul> <p>main1.py</p> <pre><code>from subprocess import Popen, PIPE import sys proc = Popen([&quot;python3&quot;, &quot;main2.py&quot;], stdin=PIPE, stdout=PIPE, stderr=PIPE) def read_until_it_finishes(): while proc.poll() is None: if proc.stdout.isatty() is True: # &lt;-- Why isn't this line detecting that we are at an input prompt? break if proc.stdout.writable() is True: # &lt;-- Why isn't this line detecting that we are at an input prompt? break line = proc.stdout.read(1).decode(sys.stdout.encoding) # https://stackoverflow.com/a/63137801/10294022 sys.stdout.write(line) sys.stdout.flush() read_until_it_finishes() proc.stdin.write(b&quot;My name is Catwoman\n&quot;) proc.stdin.flush() </code></pre> <p>main2.py</p> <pre><code>import sys sys.stdout.write(&quot;Hello my name is Batman\n&quot;) sys.stdout.flush() sys.stdout.write(&quot;I am awesome\n&quot;) sys.stdout.flush() name = input('And you what is your name:') sys.stdout.write(name) sys.stdout.flush() sys.stdout.close() </code></pre> <p>Then run</p> <pre><code>python3 main1.py </code></pre> <p>Would you please help me?</p>
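On POSIX, one common pattern is to stop waiting for an EOF and instead poll the pipe with `select`, treating a timeout with no new bytes as "the child is probably sitting at a prompt". A self-contained sketch (the inline child script stands in for `main2.py`):

```python
import os
import select
import subprocess
import sys

# Stand-in for main2.py: prints a prompt (no newline, no EOF) and waits.
child_src = "name = input('And you what is your name:'); print('PY2:', name)"
proc = subprocess.Popen(
    [sys.executable, "-c", child_src],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
)

output = b""
prompted = False
while proc.poll() is None:
    # Wait up to 0.5 s for data instead of blocking forever on read().
    ready, _, _ = select.select([proc.stdout], [], [], 0.5)
    if ready:
        chunk = os.read(proc.stdout.fileno(), 1024)
        if chunk:
            output += chunk
            continue
    # select timed out with no new bytes: assume the child is at a prompt.
    if not prompted and b"your name:" in output:
        proc.stdin.write(b"Catwoman\n")
        proc.stdin.flush()
        prompted = True

output += proc.stdout.read()
print(output.decode())
```

This is Unix-only (`select` does not work on pipes on Windows); when a child only prompts on a real terminal, `pty` or the third-party `pexpect` package are the usual alternatives.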
<python><stdout><popen>
2023-04-25 05:28:33
1
803
Thomas Aumaitre
76,097,825
13,581,071
Detect SIGUSR1 in C code and print Hello World
<p>I have a relatively trivial problem, with working reproducible code (you can run it direct from your environment</p> <p>Essentially</p> <p>I have some C and Python code.</p> <p>My python code</p> <p>Creates two pipes tomcat_1 and tomcat_2, these pipes can read and write respectively.</p> <p>Now using my tomcat_2 pipe I write the string F-14 TOMCAT, in my C code, I open the two pipes and read from tomcat_2 pipe the contents of what has been written</p> <p>This stuff works without any dramas.</p> <pre><code> import os import signal #Remove existing fifos if not cleaned up from last time. try: os.unlink('/tmp/tomcat_1') os.unlink('/tmp/tomcat_2') except FileNotFoundError: pass signal.signal(signal.SIGINT, lambda x,y: None) #Setup signal handler with no-op lambda expression os.mkfifo('/tmp/tomcat_1') #Fifos can be constructed in python os.mkfifo('/tmp/tomcat_2') r = open('/tmp/tomcat_1','r') w = open('/tmp/tomcat_2','w') w.write(&quot;F-14 TOMCAT&quot;) w.flush() w.close() with open('/tmp/tomcat_2', 'r') as f: pid = int(f.read()) os.kill(pid, signal.SIGUSR1) while True: data = r.read() if len(data) &gt; 0: print('PYTHON: Received ' + data) # FIFOs can be read from like any other file # Close reading pipe r.close() signal.pause() #mimics pause(2) print('PYTHON: ending process') r.close() #cleanup fifos os.unlink('/tmp/tomcat_1') os.unlink('/tmp/tomcat_2') </code></pre> <p>Now</p> <p>I have C code that can read the data written to the pipe here it is</p> <pre><code> void signal_handler(int SIG); volatile sig_atomic_t g_sigint_flag = 0; volatile sig_atomic_t g_sigusr1_flag = 0; int fd1; int fd2; void exchange_signal_handler(int signo); int main(int argc, char ** argv) { char trader[100]; fd2 = open(&quot;/tmp/tomcat_1&quot;,O_WRONLY); fd1 = open(&quot;/tmp/tomcat_2&quot;, O_RDONLY); if (fd1== -1) { perror(&quot;open pipe failed&quot;); return 1; } if (fd2== -1) { perror(&quot;open pipe failed&quot;); return 1; } if (signal(SIGINT, signal_handler) == SIG_ERR) { 
printf(&quot;Unable to register signal handler for SIGINT\n&quot;); return 1; } if (signal(SIGUSR1, exchange_signal_handler) == SIG_ERR) { printf(&quot;Unable to register exchange signal handler for SIGUSR1\n&quot;); return 1; } char buf[1024]; int buf_size = read(fd1, buf, sizeof(buf)); buf[buf_size] ='\0'; printf(&quot;Received data: %s\n&quot;, buf); while (!g_sigint_flag) { if (g_sigusr1_flag) { g_sigusr1_flag = 0; // Reset the flag } } close(fd2); close(fd1); return 0; } void signal_handler(int SIG) { if (SIG == SIGINT) { printf(&quot;Terminated\n&quot;); g_sigint_flag = 1; } } void exchange_signal_handler(int signo) { if (signo == SIGUSR1) { printf(&quot;Received SIGUSR1 signal from exchange\n&quot;); g_sigusr1_flag = 1; } } </code></pre> <p>Now my issue is, when my python code sends a SIGUSR1 signal to tomcat_2 pipe, in Python , my C Code should detect this and print a hello world</p> <p>How do I go about doing this?</p>
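As a sanity check for the signal half of this, independent of the FIFOs, here is a self-contained sketch: a Python parent starts a child that installs a `SIGUSR1` handler, signals it with `os.kill`, and the handler prints. (One apparent gap in the code above: the Python side reads a PID from `/tmp/tomcat_2`, but the C program never writes its PID anywhere, so `int(f.read())` cannot succeed as written.)

```python
import os
import signal
import subprocess
import sys
import time

# Stand-in for the C program: install a SIGUSR1 handler, then wait.
child_src = """
import signal, sys
def handler(signo, frame):
    print("Hello World", flush=True)
    sys.exit(0)
signal.signal(signal.SIGUSR1, handler)
signal.pause()
"""
child = subprocess.Popen([sys.executable, "-c", child_src],
                         stdout=subprocess.PIPE)

time.sleep(0.5)                        # crude: let the child install its handler
os.kill(child.pid, signal.SIGUSR1)     # the same call the Python code above uses

out, _ = child.communicate(timeout=5)
print(out.decode())
```

The same handshake should work against the C program once it writes its own PID (for example into one of the FIFOs) so the parent knows whom to signal.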
<python><c><pipe><signals>
2023-04-25 05:19:05
1
301
HyperCoderSuperion
76,097,788
8,294,738
Is there a way to get all namespace scoped resource objects by label selector using python kubernetes client?
<p>I want to achieve the equivalent of this command using the Kubernetes Python client.</p> <pre class="lang-bash prettyprint-override"><code>kubectl -n namespace get all -l key=value </code></pre> <p>I've seen several questions here, such as <a href="https://stackoverflow.com/questions/61787779/list-all-resources-in-a-namespace-using-the-kubernetes-python-client">List all resources in a namespace using the Kubernetes python client</a>,</p> <p>but that amounts to</p> <pre class="lang-bash prettyprint-override"><code>kubectl api-resources </code></pre> <p>which is not what I want.</p> <p>I can't find an example case, so what is the right way to do this?</p>
<python><kubernetes><kubernetes-python-client>
2023-04-25 05:09:57
1
657
romlym
76,097,697
755,934
send SSE only to specific clients using flask
<p><a href="https://github.com/singingwolfboy/flask-sse" rel="nofollow noreferrer">flask-sse</a> provides some example code in its <a href="https://flask-sse.readthedocs.io/en/latest/quickstart.html" rel="nofollow noreferrer">quickstart</a> section. However that code publishes events to all subscribed clients. To publish events to only some clients, the documentation suggests using channels. However, what if those events must be private (for example notifications that a new message has arrived for a specific user)? You don't want to do something like have a client pass a user ID, and for the server to push updates on the channel named after that user ID, because that presents a security/privacy issue.</p> <p>What's the right way, if any, to implement such functionality with <code>flask_sse</code> or simply <code>flask</code>?</p> <p>For example, a user can pass its <code>JWT</code> to the subscription API endpoint. But then what?</p> <pre><code>from flask import Flask, request from flask_jwt_extended import current_user, jwt_required from flask_sse import sse app = Flask(__name__) @app.route(&quot;/subscribe&quot;) @jwt_required() def subscribe(): ....? @app.route(&quot;/receive-message&quot;) def receive_message(): # message comes in for a user from outside the system # for example this API is triggered by a webhook from a third party service form_data = request.json() # figure out who the recipient is user_uuid = get_destination_user_id(form_data) user = db.session.query(User).filter_by(uuid=user_uuid).one() # now we want to publish an event, but only for this user... ...? </code></pre>
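One way to keep channels private is to never accept a channel name from the client at all: derive it on the server from the verified JWT identity. A minimal plain-Flask sketch of the idea (no `flask_sse`/Redis; in-process queues stand in for the broker, and the names are illustrative):

```python
import queue

from flask import Flask, Response

app = Flask(__name__)
subscribers = {}          # user_uuid -> queue.Queue of pending events

def push_to_user(user_uuid, data):
    """Called by /receive-message once the recipient is known."""
    q = subscribers.get(user_uuid)
    if q is not None:
        q.put(data)

@app.route("/subscribe")
def subscribe():
    # With flask_jwt_extended this would come from current_user after
    # @jwt_required(); the client never names the channel itself.
    user_uuid = "user-123"  # stand-in for the verified identity
    q = subscribers.setdefault(user_uuid, queue.Queue())

    def stream():
        while True:
            yield f"data: {q.get()}\n\n"

    return Response(stream(), mimetype="text/event-stream")
```

With `flask_sse` itself the analogous move is to publish with `sse.publish(..., channel=f"user.{user.uuid}")` and expose a custom authenticated stream endpoint, rather than the default blueprint view that trusts a client-supplied `?channel=` parameter.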
<python><flask><server-sent-events>
2023-04-25 04:43:50
0
5,624
Daniel Kats
76,097,693
3,507,127
Return results of Django Query as Dictionary
<p>Suppose I have a Django model that looks like this:</p> <pre><code>class User(models.Model): customer_id = models.TextField() date_time_period_start = models.DateField() date_time_period_end = models.DateField() total_sales = models.IntegerField() </code></pre> <p>The underlying table looks like this:</p> <pre><code>customer_id | date_time_period_start | date_time_period_end | total_sales 1 | 2023-04-01 | 2023-04-01 | 10 1 | 2023-04-02 | 2023-04-02 | 20 1 | 2023-04-03 | 2023-04-03 | 30 </code></pre> <p>I would like to query using Django and return a python dictionary where the key is the value of <code>date_time_period_start</code> and the values is a dictionary where the keys are the columns and the values are the column values. In this example, I would like to return</p> <pre><code>{ '2023-04-01': {'date_time_period_start': '2023-04-01', 'date_time_period_end': '2023-04-01', 'customer_id': 1, 'total_sales': 10}, '2023-04-02': {'date_time_period_start': '2023-04-02', 'date_time_period_end': '2023-04-02', 'customer_id': 1, 'total_sales': 20}, '2023-04-03': {'date_time_period_start': '2023-04-03', 'date_time_period_end': '2023-04-03', 'customer_id': 1, 'total_sales': 30}, } </code></pre> <p>I can achieve this by doing the following</p> <pre><code> records = list( self.model.objects .values( &quot;customer_id&quot;, &quot;date_time_period_start&quot;, &quot;date_time_period_end&quot;, &quot;total_sales&quot;, ) ) records_dict = {} for record in records: records_dict[record[&quot;date_time_period_start&quot;]] = { &quot;date_time_period_start&quot;: record[&quot;date_time_period_start&quot;], &quot;date_time_period_end&quot;: record[&quot;date_time_period_end&quot;], &quot;customer_id&quot;: record[&quot;customer_id&quot;], &quot;total_sales&quot;: record[&quot;total_sales&quot;], } </code></pre> <p>Is there a more efficient way of doing this rather than looping through the list of records?</p> <p>I was trying to use <code>values()</code> and 
<code>annotate()</code> so something like</p> <pre><code>from django.db.models import F result = self.model.objects.values('date_time_period_start').annotate(date_time_period_end=F('date_time_period_end'), total_sales=F('total_sales'), customer_id=F('customer_id'), ) </code></pre> <p>but that gave this</p> <pre><code>&lt;QuerySet [{'date_time_period_start': datetime.date(2023, 4, 1), 'date_time_period_end': datetime.date(2023, 4, 1), 'total_sales': 10, 'customer_id': 1}, {'date_time_period_start': datetime.date(2023, 4, 2), 'date_time_period_end': datetime.date(2023, 4, 2), 'total_sales': 20, 'customer_id': 1}, {'date_time_period_start': datetime.date(2023, 4, 3), 'date_time_period_end': datetime.date(2023, 4, 3), 'total_sales': 30, 'customer_id': 1}, ]&gt;} </code></pre>
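A dict comprehension avoids re-packing each row by hand: since `.values(...)` already yields one dict per row, you only need to key those dicts. A plain-Python sketch with stand-in rows:

```python
# Stand-in for list(qs.values(...)): one dict per row, as Django returns.
records = [
    {"date_time_period_start": "2023-04-01", "date_time_period_end": "2023-04-01",
     "customer_id": 1, "total_sales": 10},
    {"date_time_period_start": "2023-04-02", "date_time_period_end": "2023-04-02",
     "customer_id": 1, "total_sales": 20},
    {"date_time_period_start": "2023-04-03", "date_time_period_end": "2023-04-03",
     "customer_id": 1, "total_sales": 30},
]

# One pass, no field-by-field copying: key each row dict by its start date.
records_dict = {row["date_time_period_start"]: row for row in records}
print(records_dict["2023-04-02"]["total_sales"])   # 20
```

With the real queryset this is `{row['date_time_period_start']: row for row in qs.values(...)}`; the `annotate()` detour isn't needed, since `values()` can simply list every field you want in each row dict.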
<python><django>
2023-04-25 04:43:11
1
9,006
Vincent
76,097,512
4,083,786
Detail not found error using FastAPI's APIRouter
<p>I have a directory structure as follows:</p> <pre><code>app &gt;routers &gt;items.py __init__.py main.py </code></pre> <p>Inside main I have the following code:</p> <pre><code>from typing import Union import uvicorn from fastapi import FastAPI, APIRouter from routers import items app = FastAPI() app.include_router(items.router, prefix='/items', tags=['items']) @app.get(&quot;/&quot;) async def root(): return {&quot;message&quot;: &quot;World World&quot;} if __name__ == &quot;__main__&quot;: uvicorn.run(app, host=&quot;0.0.0.0&quot;, port=8000) </code></pre> <p>Within <code>items.py</code> I have the following:</p> <pre><code>from fastapi import APIRouter router = APIRouter( prefix=&quot;/items&quot;, tags=[&quot;items&quot;] ) @router.api_route(&quot;/items&quot;) async def items(): return {&quot;test&quot;: &quot;items&quot;} </code></pre> <p>When I run the code, I can go to my URL http://127.0.0.1:8000/ and I get the hello world message. But when I go to http://127.0.0.1:8000/items I'm seeing an error:</p> <pre><code>{&quot;detail&quot;: &quot;not found&quot;} </code></pre> <p>How do I fix this? I tried debugging this, but when I hit my debugger and type items.router, it tells me that I'm importing from the right path correctly.</p>
<python><fastapi>
2023-04-25 03:41:24
1
1,182
turtle_in_mind
76,097,342
1,461,972
Can PythonAnywhere Free Account code call outside https Apis?
<p>I'm writing a telegram bot to serve a set of relational card games, and used a blend of ChatGPT output with what I found at <a href="https://blog.pythonanywhere.com/148/" rel="nofollow noreferrer">https://blog.pythonanywhere.com/148/</a>.</p> <p>I can get the bot to start, which is great. The issue is that when I call my website <a href="http://www.relationalgames.com/cards/deck?Papo" rel="nofollow noreferrer">www.relationalgames.com/cards/deck?Papo</a> a Papo, I get the error:</p> <p>File &quot;/usr/local/lib/python3.10/site-packages/urllib3/util/retry.py&quot;, line 592, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='www.relationalgames.com', port=443): Max retries exceeded with url: /cards?deck=Papo%20a%20Papo (Caused by NewConnectionError('&lt;urllib3.connection.HTTPSConnection object at 0x7f46da0873a0&gt;: Failed to establish a new connection: [Errno 111] Connection refused'))</p> <p>I wonder if this is some limitation of the PythonAnywhere free account, or if I am doing something wrong... Any ideas on how to overcome this?</p> <p>Here is my current code</p> <pre><code>import telepot import json import random import urllib3 import time # You can leave this bit out if you're using a paid PythonAnywhere account proxy_url = &quot;http://proxy.server:3128&quot; telepot.api._pools = { 'default': urllib3.ProxyManager(proxy_url=proxy_url, num_pools=3, maxsize=10, retries=False, timeout=30), } telepot.api._onetime_pool_spec = (urllib3.ProxyManager, dict(proxy_url=proxy_url, num_pools=1, maxsize=1, retries=False, timeout=30)) # end of the stuff that's only needed for free accounts http = urllib3.PoolManager(timeout=60) def handle(msg): content_type, chat_type, chat_id = telepot.glance(msg) if content_type == 'text': if msg['text'] == '/start': bot.sendMessage(chat_id, &quot;Enter the name of a deck:&quot;) else: #try: deck_name = msg['text'] response = http.request('GET',
f'https://www.relationalgames.com/cards?deck={deck_name}') # requests.get(f'https://www.relationalgames.com/cards?deck={deck_name}') print(response) playable_deck = json.loads(response.text) random.shuffle(playable_deck) while len(playable_deck) &gt; 0: card = playable_deck.pop() if 'url' in card: bot.sendMessage(chat_id, card['url']) else: bot.sendMessage(chat_id, card['cardText']) if len(playable_deck) == 0: bot.sendMessage(chat_id, &quot;Do you want to restart the game? (yes or no)&quot;) break bot.sendMessage(chat_id, &quot;Press any key for next card&quot;) bot.getUpdates() #except: # bot.sendMessage(chat_id, &quot;Error processing your request&quot;) TOKEN = 'My Token' bot = telepot.Bot(TOKEN) bot.message_loop(handle) print ('Listening ...') # Keep the program running. while 1: time.sleep(10) </code></pre>
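Free PythonAnywhere accounts can only reach the outside world through the proxy, but the code above builds a second, direct `urllib3.PoolManager` for the relationalgames.com call, and `Connection refused` is exactly what a blocked direct connection looks like. A sketch of the change (the proxy URL is PythonAnywhere's documented one; no network call is made here):

```python
import urllib3

proxy_url = "http://proxy.server:3128"

# Route your own outbound API calls through the proxy too,
# not just telepot's internal pools.
http = urllib3.ProxyManager(proxy_url, timeout=60)

print(http.proxy.host, http.proxy.port)   # proxy.server 3128
```

Separately, note that `urllib3` responses expose the body as `response.data` (bytes), not `response.text`, so `json.loads(response.data)` is probably what the handler wants once the connection succeeds.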
<python><telegram-bot><pythonanywhere><urllib3>
2023-04-25 02:48:30
1
1,805
MrWater
76,097,199
5,383,733
Install Sentence-Transformers as a Workspace package on Azure Synapse Analytics
<p>I'm trying to install sentence-transformers python package as a workspace package for Azure Synapse Analytics.(To be used on Apache Spark Pool)</p> <p>Tried installing it through magic command <code>%pip install sentence-transformers</code> through a notebook and it works.</p> <p>Following the steps for uploading the <code>sentence-transformers-2.2.2.tar.gz</code> to Workspace and configuration throws the following error mentioned in the screenshot.</p> <p>I tested with couple of other python packages and they work fine.</p> <p>Any workarounds?</p> <p><a href="https://i.sstatic.net/4qxEi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4qxEi.png" alt="enter image description here" /></a></p>
<python><apache-spark><pip><azure-synapse><sentence-transformers>
2023-04-25 02:04:12
2
876
Haritha Thilakarathne
76,097,141
15,542,245
Why is a tuple with length=2 described as an 'int object' and therefore not iterable
<p>I have asked the question of ChatGPT, in Neeva, Google &amp; here. On SO there are a lot of similar questions but I can't relate any of them to my example.</p> <p>Small example that hopefully illustrates my problem:</p> <pre><code>import numpy as np # Small function that returns an array of values def npArrayReturn(index, numbers): results = np.array results = numbers * index return results # Code that calls the function to return modified values - logical steps &amp; print statements that ends with the problem # Return one set of modified values i = 2 smalls = np.array([1,2,3]) bigs = np.array bigs = npArrayReturn(i,smalls) print(&quot;The returned array is:&quot;,bigs) # Return multiple sets of modified values indexResults = [],[] print(&quot;indexResults is type:&quot;,type(indexResults)) print(&quot;The length of indexResults is:&quot;,len(indexResults)) # Why is a tuple with length=2 described as an 'int object' and therefore not iterable for i in len(indexResults): indexResults[i] = npArrayReturn(i,smalls) </code></pre> <p>Output</p> <pre><code>The returned array is: [2 4 6] indexResults is type: &lt;class 'tuple'&gt; The length of indexResults is: 2 </code></pre> <p>snip ----&gt;&lt;------</p> <pre><code>line 27, in &lt;module&gt; for i in len(indexResults): TypeError: 'int' object is not iterable </code></pre> <p>In my 'real' code I am trying to store <code>numpy</code> arrays representing text in an array of arrays. I then want to access each <code>numpy</code> array as I process the contents.</p> <p>BTW ChatGPT finishes its answer with this: &quot;In any case, a tuple with length=2 (or any length greater than 1) is certainly iterable. You can loop over its elements using a for loop, access individual elements using indexing (e.g., my_tuple[0] to access the first element), and perform other operations on it just like any other iterable object in Python.&quot;</p>
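Two separate things are going on in the code above: `([], [])` creates a tuple of two lists (hence `type == tuple` and `len == 2`), and `len(indexResults)` returns the `int` 2, so `for i in 2` is what raises the error; the tuple itself is perfectly iterable, as the quoted ChatGPT answer says. Also, even with `range(...)`, a tuple rejects item assignment, so a list is needed to collect results. A corrected sketch mirroring the question's logic:

```python
import numpy as np

def np_array_return(index, numbers):
    return numbers * index

smalls = np.array([1, 2, 3])

# len() gives an int; iterate over range(len(...)).  A tuple would also
# reject item assignment, so collect the results in a list instead.
index_results = [None, None]
for i in range(len(index_results)):
    index_results[i] = np_array_return(i, smalls)

print(index_results)   # [array([0, 0, 0]), array([1, 2, 3])]
```

For the real use case (an array of arrays of text vectors), appending to a plain list, or `enumerate`, is usually simpler than preallocating and indexing.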
<python><arrays><loops>
2023-04-25 01:45:19
2
903
Dave
76,097,074
5,680,504
How to restore objects in s3 using pyspark
<p>I am trying to restore an object that is in the Glacier storage class in S3 using PySpark. After searching, I found that the code below works for my case, converting the object from Glacier to the Standard class in S3.</p> <pre><code>import boto3 s3 = boto3.client('s3') bucket = 'bucket_name' key = 'object_name' response = s3.restore_object( Bucket=bucket, Key=key, RestoreRequest={ 'Days': 1, 'GlacierJobParameters': { 'Tier': 'Standard', }, }, ) </code></pre> <p>However, what I want is to do this with <code>pyspark</code> code, not plain <code>python</code> code. As you can see, the code above is pure Python running on a single node. I tried to find a way to restore an object using <code>pyspark</code> or the Spark APIs but failed, because I cannot find any documentation about it.</p> <p>So I think there is no way to call restore (or other AWS SDK APIs) from PySpark directly, but that is only my assumption and I am not sure.</p> <p>So my question is: is there any way to restore an object using <code>pyspark</code> rather than pure Python?</p>
<python><amazon-web-services><amazon-s3><pyspark>
2023-04-25 01:17:25
1
1,329
sclee1
76,096,999
1,797,498
How to create a custom bullet-shaped gauge using Plotly?
<p>How to create a custom bullet-shaped gauge using Plotly?</p> <pre><code>import plotly.graph_objects as go actual_value = 270 max_value = 300 fig = go.Figure( go.Indicator( mode = &quot;number+gauge+delta&quot;, gauge = {'shape': &quot;bullet&quot;, 'axis': {'range': [0, max_value]}, 'threshold': { 'line': {'color': &quot;black&quot;, 'width': 3}, 'thickness': 0.75, 'value': actual_value}, 'steps': [ {'range': [0, 100], 'color': &quot;#1E9600&quot;}, {'range': [100, 200], 'color': &quot;#FFF200&quot;}, {'range': [200,300], 'color': &quot;#FF0000&quot;}], 'bar': {'color': &quot;black&quot;} }, value = actual_value, delta = {'reference': 300,'font': {&quot;size&quot;: 12}}, number = {'prefix': &quot; &quot;,'font': {&quot;size&quot;: 12}}, domain = {'x': [0.20, 1], 'y': [0, 1]}, title = {'text': &quot;Price Range:&quot;,'font': {&quot;size&quot;: 12}} ) ) fig.update_layout(height = 200, font = {&quot;size&quot;: 12}) fig.show() </code></pre> <p>This Python code uses the Plotly library to create an indicator with a bullet-shaped gauge.</p> <p>The indicator displays the current value of a metric, along with a gauge and a delta value. The gauge has a black bar and three colored regions that represent ranges of values. The delta value represents the difference between the current value and a reference value.</p>
<python><plotly>
2023-04-25 00:50:30
0
1,385
J. Ceron
76,096,896
11,854,490
Scipy curve_fit RuntimeWarning: overflow encountered in exp
<p>I am trying to fit a function with two independent variables <code>a</code> and <code>k</code> to an exponential curve using scipy's <code>curve_fit</code>. I have defined the function and tried to calculate it like this:</p> <pre><code>print(np.min(x_data)) 1 print(np.max(x_data)) 44098 print(x_data.dtype) int64 print(np.size(x_data)) 44098 print(np.min(y_data)) -0.44383433 print(np.max(y_data)) 1.0 print(y_data.dtype) float32 print(np.size(y_data)) 44098 def exponential(x, a, k, b): return a*np.exp(x*k) + b popt, pcov = scipy.optimize.curve_fit(exponential, x_data, y_data, p0=[1, -0.5, 1]) print(popt) [ 1.28765636e+00 -3.27620187e-04 -8.91012481e-02] /software/repo/python/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:6: RuntimeWarning: overflow encountered in exp /software/repo/python/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:6: RuntimeWarning: overflow encountered in multiply </code></pre> <p>This results in a warning as my data is too large. Changing the datatype all the way up to <code>complex256</code> is still not big enough. So I tried updating the bounds; making large enough bounds overcomes the error, but I do not know if I am doing this correctly as the fitted variables become dependent on the bounds I choose.</p> <pre><code>popt, pcov = scipy.optimize.curve_fit(exponential, x_data, y_data, bounds=([1, -1e100, 1], [1e100, 1, 1e100])) print(popt) [ 5.e+99 -5.e+99 5.e+99] popt, pcov = scipy.optimize.curve_fit(exponential, x_data, y_data, bounds=([1, -1e1000, 1], [1e1000, 1, 1e1000])) print(popt) [ 1.00448191 -5.15058477 1. ] </code></pre> <p>What is going on here? What do I need to do to properly fit this function? I also tinkered with the <code>Decimal</code> module, without success. Thank you!</p>
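The overflow comes from the optimizer's trial steps rather than the data type: with `x` up to 44098, even a briefly positive `k` during the search sends `np.exp(k*x)` past float64's limit of roughly `exp(709)`, and no wider dtype changes that. Rescaling `x` to `[0, 1]` keeps the fitted rate at O(1) during the search; the true rate is recovered by dividing back. A self-contained sketch on synthetic data shaped like the question's:

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential(x, a, k, b):
    return a * np.exp(x * k) + b

# Synthetic data shaped like the question's (x up to 44098).
rng = np.random.default_rng(0)
x_data = np.linspace(1, 44098, 2000)
y_data = 1.3 * np.exp(-3.3e-4 * x_data) - 0.09 + rng.normal(0, 0.01, x_data.size)

scale = x_data.max()
# p0 for k is a rough eyeball of the decay on the rescaled axis.
popt, _ = curve_fit(exponential, x_data / scale, y_data, p0=[1.0, -10.0, 0.0])
a_fit, k_scaled, b_fit = popt
k_fit = k_scaled / scale        # undo the rescaling

print(a_fit, k_fit, b_fit)      # roughly 1.3, -3.3e-4, -0.09
```

This also explains why the fitted values moved with the bounds in the second attempt: huge bound values just hand the optimizer a badly scaled search box. An alternative to rescaling is `bounds=([0, -np.inf, -np.inf], [np.inf, 0, np.inf])`, which keeps `k` negative so `exp(k*x)` can only underflow to zero.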
<python><scipy>
2023-04-25 00:16:37
1
822
Christian Seitz
76,096,860
1,584,066
scrap 2 text values from html using python selenium chromedriver
<p>I have a webdriver script that extracts most of the fields I need from a webpage, but I am stuck on this part: I have to get 61 and 70% from the HTML below. Can someone please help? I have tried <code>driver.find_element_by_xpath(&quot;//p[@class='stock-score tykr-score-box']&quot;)</code>, but I am lost in the text after that: how do I handle that text, followed by one span class for the 1st value and then another span class for the 2nd value?</p> <p>Thanks a lot,<br /> Sudip</p> <pre><code>&lt;p class=&quot;data-score customer-score-box&quot;&gt;This customer has a score of &lt;span class=&quot;value-changes&quot;&gt;61&lt;/span&gt; &lt;span class=&quot;light-value value-changes&quot; style=&quot;margin-right: 5px; margin-left: 0px;&quot;&gt;/100&lt;/span&gt; and safety of &lt;span class=&quot;value-changes&quot; style=&quot;margin-right: 5px;&quot;&gt;70% &lt;/span&gt;therefore the case may &lt;p class=&quot;greenColor&quot; style=&quot;font-weight: 600;&quot;&gt; not &lt;/p&gt; be &lt;p class=&quot;greenColor&quot; style=&quot;font-weight: 600;&quot;&gt; suitable&lt;/p&gt;&lt;/p&gt; </code></pre>
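The two numbers sit in the `<span class="value-changes">` children, and the `/100` span has class `light-value value-changes`, so an exact `@class` match skips it. A sketch of the XPath applied with `lxml` to the snippet itself; the same expression should work with Selenium's `driver.find_elements(By.XPATH, ...)` against the live page:

```python
from lxml import html

# The snippet from the question (class names as shown there).
snippet = """<p class="data-score customer-score-box">This customer has a score of
<span class="value-changes">61</span>
<span class="light-value value-changes" style="margin-right: 5px; margin-left: 0px;">/100</span>
and safety of <span class="value-changes" style="margin-right: 5px;">70% </span>
therefore the case may ...</p>"""

tree = html.fromstring(snippet)
values = tree.xpath(
    '//p[contains(@class, "customer-score-box")]/span[@class="value-changes"]/text()'
)
print([v.strip() for v in values])   # ['61', '70%']
```

In Selenium, `.text` on each matched span gives the same strings; note the class in the question's attempt (`stock-score tykr-score-box`) differs from the one in the pasted HTML, so match whichever the real page uses.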
<python><selenium-chromedriver>
2023-04-25 00:07:09
1
426
SudipM
76,096,846
8,068,825
Python - Is it possible to get the init params of a Python class?
<p>So for example let's say I initialize a class like so</p> <pre><code>blah = SomeClass(a=10,b=20) </code></pre> <p>is it possible to get a dictionary of the initialization parameters and their values from <code>blah</code>? So like</p> <pre><code>get_init_params_and_values(blah) </code></pre> <p>should return {'a':10, 'b':20}. <code>SomeClass</code> can have other <code>__init__</code> parameters, but I only care about <code>a</code> and <code>b</code> since they were set at initialization.</p> <p>I'm not super familiar with this specific functionality, but I remember you can get the parameters of a function. So if that's possible for <code>__init__</code>, then let's say we get a list of param names; maybe we can then get each value from the instance like <code>blah.some_param_name</code>.</p>
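There is no built-in record of which arguments a finished `__init__` received, but `inspect.signature(...).bind(...)` captures exactly the caller-supplied ones (defaults excluded), so a small class decorator can stash them. A sketch (the decorator and attribute names are illustrative):

```python
import functools
import inspect

def record_init_args(cls):
    """Stash the arguments explicitly passed to __init__ on the instance."""
    original = cls.__init__

    @functools.wraps(original)
    def __init__(self, *args, **kwargs):
        bound = inspect.signature(original).bind(self, *args, **kwargs)
        # bound.arguments holds only what the caller actually supplied
        self._init_args = {k: v for k, v in bound.arguments.items() if k != "self"}
        original(self, *args, **kwargs)

    cls.__init__ = __init__
    return cls

@record_init_args
class SomeClass:
    def __init__(self, a=0, b=0, c=0):
        self.total = a + b + c

blah = SomeClass(a=10, b=20)
print(blah._init_args)   # {'a': 10, 'b': 20}
```

Reading `blah.a` afterwards only works if `__init__` happens to store each parameter as an attribute; the decorator approach records them regardless of what `__init__` does with them.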
<python>
2023-04-25 00:02:07
1
733
Gooby
76,096,800
2,735,009
IndexError: pop from an empty deque while using map in Python
<p>I have the following piece of code in which I am trying to calculate sentence embeddings in parallel.</p> <pre><code>import multiprocessing from tqdm import tqdm # Define the function to be executed in parallel def process_data(chunk): results = [] for row in tqdm(chunk): work_id = row[1] mentioning_work_id = row[3] if work_id in df_text and mentioning_work_id in df_text: title1 = df_text[work_id]['title'] title2 = df_text[mentioning_work_id]['title'] print(title1 + '/n' + title2) embeddings_title1 = embedding_model.encode(title1,convert_to_numpy=True) print(embeddings_title1) embeddings_title2 = embedding_model.encode(title2,convert_to_numpy=True) print(embeddings_title2) results.append(np.matmul(embeddings_title1, embeddings_title2.T)) print(results) else: continue return results from multiprocessing import Pool # Define the data to be processed data = df_rud_labels # Split the data into chunks chunk_size = len(data) // num_cores chunks = [data[i:i+chunk_size] for i in range(0, len(data), chunk_size)] # Create a pool of worker processes pool = multiprocessing.Pool(processes=num_cores) results = [] with tqdm(total=len(chunks)) as pbar: for i, result_chunk in enumerate(pool.imap_unordered(process_data, chunks)): # Update the progress bar pbar.update() # Add the results to the list results += result_chunk # Concatenate the results final_result = results </code></pre> <p>But I get the following error after interrupting the kernel:</p> <pre><code> 0%| | 0/2500 [00:00&lt;?, ?it/s] Financialization and Institutional Change in Capitalisms: A Comparison of the US and Germany/nSingle domain antibodies: promising experimental and therapeutic tools in infection and immunity 0%| | 0/2500 [00:00&lt;?, ?it/s] Encyclopedia of India-China Cultural Contacts, vol I/nToll-like receptors as a key regulator of mesenchymal stem cell function: An up-to-date review 0%| | 0/2500 [00:00&lt;?, ?it/s] Ioannis ROMANIDES, Dogmatica patristica ortodoxa, traducere de Dragos Dasca, Editura 
Ecclesiast, editie de protos Vasile Bîrzu, 2011/nANTHROPOMETRIC MEASUREMENTS, SOMATOTYPES AND PHYSICAL ABILITIES AS A FUNCTION TO PREDICT THE SELECTION OF TALENTS JUNIOR WEIGHTLIFTERS A prophet of old: Jesus the “public theologian”/nCurriculum alignment at undergraduate level: military geography at the South African Military Academy 0%| | 0/4 [00:09&lt;?, ?it/s]Process ForkPoolWorker-72: Process ForkPoolWorker-71: Traceback (most recent call last): Traceback (most recent call last): File &quot;/opt/conda/lib/python3.7/multiprocessing/process.py&quot;, line 297, in _bootstrap self.run() File &quot;/opt/conda/lib/python3.7/multiprocessing/process.py&quot;, line 99, in run self._target(*self._args, **self._kwargs) File &quot;/opt/conda/lib/python3.7/multiprocessing/process.py&quot;, line 297, in _bootstrap self.run() File &quot;/opt/conda/lib/python3.7/multiprocessing/process.py&quot;, line 99, in run self._target(*self._args, **self._kwargs) File &quot;/opt/conda/lib/python3.7/multiprocessing/pool.py&quot;, line 110, in worker task = get() File &quot;/opt/conda/lib/python3.7/multiprocessing/queues.py&quot;, line 351, in get with self._rlock: File &quot;/opt/conda/lib/python3.7/multiprocessing/synchronize.py&quot;, line 95, in __enter__ return self._semlock.__enter__() KeyboardInterrupt File &quot;/opt/conda/lib/python3.7/multiprocessing/pool.py&quot;, line 110, in worker task = get() File &quot;/opt/conda/lib/python3.7/multiprocessing/queues.py&quot;, line 351, in get with self._rlock: File &quot;/opt/conda/lib/python3.7/multiprocessing/synchronize.py&quot;, line 95, in __enter__ return self._semlock.__enter__() KeyboardInterrupt --------------------------------------------------------------------------- IndexError Traceback (most recent call last) /opt/conda/lib/python3.7/multiprocessing/pool.py in next(self, timeout) 732 try: --&gt; 733 item = self._items.popleft() 734 except IndexError: IndexError: pop from an empty deque During handling of the above exception, 
another exception occurred: KeyboardInterrupt Traceback (most recent call last) &lt;ipython-input-48-fcb9ab74a032&gt; in &lt;module&gt; 31 results = [] 32 with tqdm(total=len(chunks)) as pbar: ---&gt; 33 for i, result_chunk in enumerate(pool.imap_unordered(process_data, chunks)): 34 # Update the progress bar 35 pbar.update() /opt/conda/lib/python3.7/multiprocessing/pool.py in next(self, timeout) 735 if self._index == self._length: 736 raise StopIteration from None --&gt; 737 self._cond.wait(timeout) 738 try: 739 item = self._items.popleft() /opt/conda/lib/python3.7/threading.py in wait(self, timeout) 294 try: # restore state no matter what (e.g., KeyboardInterrupt) 295 if timeout is None: --&gt; 296 waiter.acquire() 297 gotit = True 298 else: KeyboardInterrupt: </code></pre> <p>I have tried to calculate these embedding using the same titles in the output and I have been successful. But I'm not sure why the code I've written isn't able to do this.</p> <p>Any help would be appreciated.</p>
<python><machine-learning><nlp><multiprocessing><mapreduce>
2023-04-24 23:48:44
1
4,797
Patthebug
76,096,512
3,144,108
Bash fails to create venv (python3 -m venv venv) remotely via SSH
<p>I'm trying to create a venv using &quot;python3 -m venv venv&quot; remotely. However, it fails:</p> <pre><code>&gt; ssh sv3-01 &quot;python3 -m venv venv&quot; Error: Command '['/home/ubuntu/venv/bin/python3', '-m', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1. </code></pre> <p>FYI:</p> <ul> <li>ssh config has the sv3-01 configuration set correctly and working</li> <li>ssh sv3-01 &quot;ls -lah&quot; works fine, returning the usual ls file listing from the sv3-01 remote server</li> <li>Executing &quot;python3 -m venv venv&quot; directly on the sv3-01 server works absolutely fine. Python3-dev and python3-venv are already installed successfully on the server.</li> </ul> <p>What am I doing wrong?</p>
<python><python-3.x><bash><ssh><python-venv>
2023-04-24 22:22:05
1
525
uzla
76,096,409
5,583,772
Conditional assignment to multiple columns in polars
<p>How can I make a conditional assignment to multiple columns in polars if I have an external condition and a list of columns to assign to? In my example I have:</p> <pre><code>df = pl.DataFrame({'a':[0,1,2,3,4], 'b':[10,20,30,40,50], 'c':[100,200,300,400,500]}) </code></pre> <p>Which gives:</p> <pre><code>┌─────┬─────┬─────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 0 ┆ 10 ┆ 100 │ │ 1 ┆ 20 ┆ 200 │ │ 2 ┆ 30 ┆ 300 │ │ 3 ┆ 40 ┆ 400 │ │ 4 ┆ 50 ┆ 500 │ └─────┴─────┴─────┘ </code></pre> <p>and</p> <pre><code>condition = [True, False, False, False, True] columns = ['a', 'c'] </code></pre> <p>In pandas, I might have used a statement like</p> <pre><code>df.loc[condition, columns] = None </code></pre> <p>How can I do this in polars and get the following?</p> <pre><code>┌─────┬─────┬─────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ None ┆ 10 ┆ None │ │ 1 ┆ 20 ┆ 200 │ │ 2 ┆ 30 ┆ 300 │ │ 3 ┆ 40 ┆ 400 │ │ None ┆ 50 ┆ None │ └─────┴─────┴─────┘ </code></pre>
<python><python-polars>
2023-04-24 22:02:32
2
556
Paul Fleming
76,096,345
16,305,340
gaussian mixture model is inconsistent in its results using sklearn library
<p>So I am trying to segment a hand from an image using the Gaussian mixture model. I read <a href="https://www.iosrjournals.org/iosr-jece/papers/Conf.MAM-2019/Volume-1/6.%2037-41.pdf" rel="nofollow noreferrer">this paper (Hand Gesture Based Recognition And Voice Conversion System For Physically Disabled People)</a>, which states that skin color follows a Gaussian distribution in YCbCr color space, so I wrote the following line:</p> <pre><code>gmm_model = GMM.GaussianMixture(n_components=3, covariance_type='full', warm_start=True, init_params='k-means++').fit(img2) </code></pre> <p>using the sklearn library, but my problem is that every time I run the program the results are completely different.</p> <h2>Here is the original image:</h2> <p><a href="https://i.sstatic.net/oGEy1.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oGEy1.jpg" alt="enter image description here" /></a></p> <h2>Here is the segmented image for the 1st run:</h2> <p><a href="https://i.sstatic.net/wVX2S.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wVX2S.jpg" alt="enter image description here" /></a></p> <h2>Here is the segmented image for the 2nd run:</h2> <p><a href="https://i.sstatic.net/HS7x0.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HS7x0.jpg" alt="enter image description here" /></a></p> <p>and so on. Why does that happen?</p>
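The run-to-run differences are expected: the EM fit starts from a random initialisation, so each run can converge to a different local optimum, and the component labels can permute. Pinning `random_state` makes the fit reproducible. A minimal sketch on synthetic stand-in data (the image loading and YCbCr conversion are omitted, and `init_params='k-means++'` is left out since it requires scikit-learn >= 1.1):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for flattened YCbCr pixels: three well-separated clusters.
pixels = np.vstack([rng.normal(loc, 1.0, size=(200, 3))
                    for loc in (0.0, 10.0, 20.0)])

# random_state pins the random initialisation, so repeated fits converge
# to the same parameters instead of a different local optimum each run.
gmm_a = GaussianMixture(n_components=3, covariance_type='full',
                        random_state=42).fit(pixels)
gmm_b = GaussianMixture(n_components=3, covariance_type='full',
                        random_state=42).fit(pixels)
```

Note that even with a fixed seed, which component index corresponds to "skin" is arbitrary; any downstream logic should pick the component by its mean, not by a hard-coded index.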
<python><image-processing><scikit-learn>
2023-04-24 21:48:24
1
1,893
abdo Salm
76,096,342
360,553
How to resolve "JSONDecodeError" when using LinkedIn API in Python?
<p>I'm trying to access the LinkedIn API in Python and I've been able to successfully obtain an access token and instantiate an <code>application</code> object using the <code>linkedin</code> library.</p> <pre><code># Set up LinkedIn API credentials and scope LINKEDIN_CLIENT_ID = &quot;myappclientid&quot; LINKEDIN_CLIENT_SECRET = &quot;myappclientsecret&quot; REDIRECT_URI = 'https://localhost:8000' SCOPE = ['r_emailaddress', 'r_liteprofile', 'w_member_social'] # Create LinkedInAuthentication object with credentials and scope authentication = linkedin.LinkedInAuthentication( LINKEDIN_CLIENT_ID, LINKEDIN_CLIENT_SECRET, REDIRECT_URI, SCOPE ) # Get the authorization URL to prompt user to grant access authorization_url = authentication.authorization_url print(authorization_url) # User should visit the URL and be redirected to a localhost URL with a 'code' parameter response_url = input(&quot;Please enter the redirected URL:&quot;) parsed_url = urlparse(response_url) query_params = parse_qs(parsed_url.query) code_param = query_params.get('code', [None])[0] # Set authorization code in the authentication object to retrieve access token authentication.authorization_code = code_param access_token = authentication.get_access_token() print(&quot;Access token:&quot;, access_token[0]) # Create LinkedInApplication object with access token application = linkedin.LinkedInApplication(token=access_token[0]) # Try to get the user's profile with the LinkedInApplication object try: application.get_profile() except Exception as e: print(&quot;Error: &quot;, e) </code></pre> <p>However, when I try to use any of the LinkedInApplication methods, I get the following error:</p> <pre><code>Traceback (most recent call last): File &quot;/usr/local/lib/python3.10/site-packages/requests/models.py&quot;, line 971, in json return complexjson.loads(self.text, **kwargs) File &quot;/usr/local/Cellar/python@3.10/3.10.11/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py&quot;, 
line 346, in loads return _default_decoder.decode(s) File &quot;/usr/local/Cellar/python@3.10/3.10.11/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py&quot;, line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File &quot;/usr/local/Cellar/python@3.10/3.10.11/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py&quot;, line 355, in raw_decode raise JSONDecodeError(&quot;Expecting value&quot;, s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/usr/local/lib/python3.10/site-packages/linkedin/utils.py&quot;, line 56, in raise_for_error </code></pre> <p>Can anyone help me understand what's causing this error and how I can fix it so I can access the LinkedIn API?</p>
<python><json><linkedin-api>
2023-04-24 21:47:19
0
5,762
Claus
76,096,103
8,068,825
Remove n duplicates from Pandas DataFrame
<p>I'm trying to make a way to find duplicates and remove the last n duplicate rows so for example if I have</p> <pre><code>df = pd.DataFrame({ 'brand': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'], }) </code></pre> <p>and I want to remove 2 duplicates. It should return</p> <pre><code>df = pd.DataFrame({ 'brand': ['Yum Yum', 'Yum Yum', 'Indomie'], }) </code></pre> <p>As 'Indomie' is a duplicate and it's the last 2 duplicates in the data frame.</p>
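One reading of "remove the last n duplicates": collect the index labels that `duplicated()` flags (first occurrences are kept) and drop the trailing n of them. A hedged sketch (the helper name is made up):

```python
import pandas as pd

df = pd.DataFrame({
    'brand': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'],
})

def drop_last_n_duplicates(frame, n):
    # Labels of rows that repeat an earlier row (keep='first' default).
    dup_labels = frame.index[frame.duplicated()]
    # Drop only the last n of those duplicate rows; n == 0 is a no-op.
    return frame.drop(dup_labels[-n:]) if n else frame

result = drop_last_n_duplicates(df, 2)
```

On the sample data the duplicate rows are at positions 1, 3 and 4; dropping the last two leaves `['Yum Yum', 'Yum Yum', 'Indomie']`, matching the desired output.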
<python><pandas>
2023-04-24 21:06:29
2
733
Gooby
76,096,026
968,132
Using Mammoth style_map to apply inline styles and custom attributes
<p>I'd like to use Mammoth's style map to apply inline styles, like <code>color:red</code>, and custom tags/attributes, though I cannot seem to find anything in <a href="https://github.com/mwilliamson/python-mammoth" rel="nofollow noreferrer">the docs</a> that suggests I can. Does Mammoth have the capability to do inline styling?</p> <p>Simple tags and classes work fine, e.g. <code>p[style-name='Heading 3'] =&gt; div.foo:fresh</code></p> <p>However, I'd like to do something like this: <code>p[style-name='Heading 3'] =&gt; div(color:red):fresh</code> or <code>p[style-name='Heading 3'] =&gt; div, cust_tag:val:fresh</code></p>
<python><mammoth>
2023-04-24 20:53:01
1
1,148
Peter
76,095,955
6,929,343
python make sure () aren't passed in parameter
<p>I have a function that works fine if <code>()</code> isn't added after the function name:</p> <pre class="lang-py prettyprint-override"><code>answer = message.AskQuestion( self.loc_top, thread=self.refresh_play_top, align='left', icon='warning', title=&quot;Forget Location Confirmation&quot;, text=text) </code></pre> <p>If I use <code>()</code> by accident, e.g. <code>thread=self.refresh_play_top()</code>, the function returns corrupt data:</p> <pre class="lang-py prettyprint-override"><code>answer = message.AskQuestion( self.loc_top, thread=self.refresh_play_top(), align='left', icon='warning', title=&quot;Forget Location Confirmation&quot;, text=text) </code></pre> <p>Inside the <code>.AskQuestion()</code> function, there is this:</p> <pre class="lang-py prettyprint-override"><code>self.thread = thread # The thread run before button click </code></pre> <p>How could this be changed?</p> <hr /> <h2>Solution</h2> <p>Using the accepted answer:</p> <ul> <li>Change the <code>self.refresh_play_top()</code> function to always return <code>True</code> or <code>False</code>, including the function end, which returns <code>None</code> by default.</li> <li>Add 7 lines of code to the <code>AskQuestion()</code> function (shown at bottom):</li> </ul> <pre class="lang-py prettyprint-override"><code> def __init__(self, parent, text=None, confirm='yes', align='center', thread=None, icon='warning'): self.top_level = parent # Allows .after() calls self.confirm = confirm # Append &quot;Are you sure?&quot; line? self.align = align # data (text lines) alignment if thread is None: self.thread = None elif callable(thread): self.thread = thread # The thread run before button click else: print(&quot;message.py, CommonSelf, invalid thread= passed&quot;) self.thread = None </code></pre> <ul> <li>Test, fix, test, fix, test, fix :)</li> </ul> <p>Thanks to the experts who promptly commented and answered.</p>
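The fix in the Solution section boils down to validating the argument with `callable()`. A minimal self-contained sketch (class and function names are illustrative; it raises instead of printing, which is a stricter variant of the same check):

```python
class AskQuestion:
    """Minimal stand-in for message.AskQuestion's thread= validation."""

    def __init__(self, thread=None):
        if thread is None:
            self.thread = None
        elif callable(thread):
            self.thread = thread      # store the function object itself
        else:
            # Caller wrote thread=f() instead of thread=f: fail loudly
            # rather than silently keep the function's return value.
            raise TypeError(
                f"thread= expects a callable, got {type(thread).__name__}")

def refresh():
    return True

ok = AskQuestion(thread=refresh)      # correct: pass the function object
try:
    AskQuestion(thread=refresh())     # mistake: passes True instead
    raised = False
except TypeError:
    raised = True
```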
<python>
2023-04-24 20:42:27
1
2,005
WinEunuuchs2Unix
76,095,579
2,954,547
Safely convert a Pandas datetime Series to Int64 epoch timestamp, preserving nulls
<p>Consider a datetime-dtype Series containing some null values:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd t = pd.date_range('2022-01-01', '2022-01-02', freq='4H').to_series('t') t.iat[2] = None </code></pre> <pre class="lang-none prettyprint-override"><code>2022-01-01 00:00:00 2022-01-01 00:00:00 2022-01-01 04:00:00 2022-01-01 04:00:00 2022-01-01 08:00:00 NaT 2022-01-01 12:00:00 2022-01-01 12:00:00 2022-01-01 16:00:00 2022-01-01 16:00:00 2022-01-01 20:00:00 2022-01-01 20:00:00 2022-01-02 00:00:00 2022-01-02 00:00:00 Freq: 4H, dtype: datetime64[ns] </code></pre> <p>The <code>dtype</code> of this Series is <code>&lt;M8[ns]</code>. As far as I understand, this means that the underlying data is physically stored as 64-bit integers representing nanoseconds since the start of the Unix epoch (<a href="https://numpy.org/devdocs/reference/arrays.scalars.html#numpy.datetime64" rel="nofollow noreferrer"><code>numpy.datetime64</code></a>). However I don't know how <code>NaT</code> is represented.</p> <p>I would like to obtain an integer timestamp representing the number of nanoseconds since the start of the Unix epoch, with null values wherever there was an <code>NaT</code> in the data. 
Therefore it seems like I should be able to somehow check the precision of the timestamp data type, extract the raw seconds, and then scale up or down as needed to obtain nanoseconds.</p> <p>Expected output:</p> <pre class="lang-none prettyprint-override"><code>2022-01-01 00:00:00 1640995200000000000 2022-01-01 04:00:00 1641009600000000000 2022-01-01 08:00:00 &lt;NA&gt; 2022-01-01 12:00:00 1641038400000000000 2022-01-01 16:00:00 1641052800000000000 2022-01-01 20:00:00 1641067200000000000 2022-01-02 00:00:00 1641081600000000000 Freq: 4H, dtype: Int64 </code></pre> <p>Things that did not work:</p> <pre class="lang-py prettyprint-override"><code>t.astype(pd.Int64Dtype()) # datetime64[ns] cannot be converted to IntegerDtype` t.astype(float).astype(pd.Int64Dtype()) # Cannot cast DatetimeArray to dtype float64 t.map(lambda t: t.timestamp()).astype(pd.Int64Dtype()) # NaTType does not support timestamp t.map(lambda t: t.timestamp(), na_action='ignore').astype(pd.Int64Dtype()) # 'values' contains non-numeric NA NaT </code></pre> <p>Notably the <code>.timestamp</code> logic gets hung up on the <code>NaT</code>, which I don't know how to cleanly convert to the &quot;null-aware&quot; <code>Int64Dtype</code>.</p> <p>Something that did work, but is very ugly:</p> <pre class="lang-py prettyprint-override"><code>((t - pd.Timestamp(1970, 1, 1)).dt.total_seconds() * 1e09).astype(pd.Int64Dtype()) </code></pre> <p>Is there an idiomatic way to do this?</p>
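One arguably more direct route than the subtraction trick: `datetime64[ns]` is already int64 nanoseconds underneath, so the raw buffer can be paired with an explicit missing-value mask via `pd.arrays.IntegerArray`. A sketch (view-based, so it assumes the usual nanosecond-precision dtype):

```python
import pandas as pd

t = pd.date_range('2022-01-01', '2022-01-02', freq='4h').to_series()
t.iat[2] = None

# datetime64[ns] stores int64 nanoseconds; NaT is an int64 sentinel.
raw = t.to_numpy().view('int64')
mask = t.isna().to_numpy()

# IntegerArray pairs the raw values with a validity mask, giving Int64.
epoch_ns = pd.Series(pd.arrays.IntegerArray(raw, mask), index=t.index)
```

This sidesteps both the `NaT.timestamp()` failure and the float round-trip, since no arithmetic is performed on the values at all.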
<python><pandas><numpy><datetime><unix-timestamp>
2023-04-24 19:49:45
1
14,083
shadowtalker
76,095,574
14,875,027
Python Union Types with Objects
<p>I am writing my own classes for int, str, bool, etc. that have generators on them. I am using this to fuzz the function based on the type annotations. This is going fine, with the exception of the <code>|</code> notation for union type. If I type something like:</p> <pre><code>def test_handles_none_with_arged_types( x: Int[0, 10] | List[Int] | Str | Dict[NoneType, List[NoneType]] ): assert type(x) in [int, list, str, dict, list] if type(x) == int: assert x &gt;= 0 and x &lt;= 10 if type(x) == list: assert all([el is None for el in x]) if type(x) == dict: for k, v in x.items(): assert k is None assert type(v) == list for el in v: assert el is None </code></pre> <p>Python gives me the following error:</p> <pre><code>TypeError: unsupported operand type(s) for |: 'Int' and 'List' </code></pre> <p>This seems to be because <code>Int[0,10]</code> has the type of its class <code>pybt.typing.Int</code>, not <code>type</code>. However, using <code>typing.Union</code> works just fine.</p> <pre><code>def test_handles_none_with_arged_types( x: Union[Int[0, 10], List[Int], Str, Dict[NoneType, List[NoneType]]] ): ... </code></pre> <p>Is there a way to get around this? 
Unfortunately, I can't think of a way to hold off instantiating <code>Int</code> or other types that are indexed via their <code>__class_getitem__</code>.</p> <p>EDIT:</p> <p>Here is the full class (for List):</p> <pre><code>class List: def __init__(self, sub_type=_DEFAULT_SUB_TYPE, max_len=_DEFAULT_MAX_LEN): self.max_len: int = _DEFAULT_MAX_LEN self.sub_type = sub_type if max_len is not None: self.max_len = max_len def __str__(self): return &quot;pybt.types.List&quot; def __class_getitem__(cls, parameters): sub_type = None max_len = None if type(parameters) != tuple: parameters = (parameters,) if len(parameters) &gt; 2: raise TypeError(&quot;Expected 2 arguments: List[sub_type, max_length]&quot;) if len(parameters): sub_type = parameters[0] if len(parameters) &gt; 1: max_len = parameters[1] if max_len and max_len &lt;= 0: raise TypeError(f&quot;Max Length of {max_len} is less than or equal to 0&quot;) return cls(sub_type, max_len) </code></pre>
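Since `Int[0, 10]` evaluates to an instance, `x | y` dispatches to `__or__` on the instance's class, and `typing.Union` itself rejects non-type arguments. One workaround is a small custom union container plus `__or__`/`__ror__` on a shared base. A simplified sketch (class names and the flattening behaviour are illustrative design choices, not the only option):

```python
class PybtUnion:
    """Stand-in for typing.Union, which rejects instances as arguments."""

    def __init__(self, *options):
        self.options = options

    def __or__(self, other):
        # Flatten chains so (A | B) | C keeps a single flat option list.
        return PybtUnion(*self.options, other)


class PybtType:
    """Shared base giving every pybt type the | operator."""

    def __or__(self, other):
        return PybtUnion(self, other)

    __ror__ = __or__


class Int(PybtType):
    def __init__(self, lo=None, hi=None):
        self.lo, self.hi = lo, hi

    def __class_getitem__(cls, params):
        return cls(*params) if isinstance(params, tuple) else cls(params)


class Str(PybtType):
    pass


u = Int[0, 10] | Str()
```

The fuzzer can then treat a `PybtUnion` annotation as "pick one of `options` and generate from it", mirroring how it would unpack `typing.Union` arguments.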
<python><python-typing>
2023-04-24 19:49:11
1
370
dvr
76,095,455
21,420,742
Finding last value and second to last value In Python
<p>I have a dataset and would like to see the last and second-to-last person to hold the same position.</p> <p>Sample Data</p> <pre><code> ID Date Name Job Job_ID 101 01/2022 Adam Sales 1234 101 02/2022 Adam Sales 1234 101 03/2022 Adam Sales 1234 102 01/2022 Blake Tech 2345 102 02/2022 Blake Tech 2345 102 02/2022 Blake Consultant 3456 103 01/2022 Chris Advisor 4567 103 02/2022 Chris Analyst 5678 103 03/2022 Chris Analyst 5678 104 01/2022 Debbie Consultant 3456 104 02/2022 Debbie Consultant 3456 104 03/2022 Debbie Tech 2345 105 01/2022 Eric Analyst 5678 105 02/2022 Eric Advisor 4567 105 03/2022 Eric Advisor 4567 </code></pre> <p>Desired Output:</p> <pre><code>ID Name Job JobID Latest Prior 101 Adam Sales 1234 Adam Adam 102 Blake Consultant 3456 Blake Debbie 103 Chris Analyst 5678 Chris Eric 104 Debbie Tech 2345 Debbie Blake 105 Eric Advisor 4567 Eric Chris </code></pre> <p>I have code to get the last:</p> <pre><code> manager_dict = df.groupby('ID').agg({'Name':'last'}).to_dict()['Name'] df['Latest'] = df['ID'].apply(lambda x: manager_dict[x]) </code></pre> <p>I just don't know how to get the second-to-last person to hold that position.</p>
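A hedged sketch of one way to get the prior holder: sort by date, group by job, take the last name as `Latest` and the most recent *different* name as `Prior` (falling back to `Latest` when the job never changed hands). The mini dataset below is simplified and unambiguous; the sample in the question has two rows with the same date for Job_ID 3456, which any approach would need an explicit tie-break rule for.

```python
import pandas as pd

# Simplified, unambiguous sample: one holder change per job.
df = pd.DataFrame({
    'Date':   ['01/2022', '02/2022', '03/2022', '01/2022', '02/2022', '03/2022'],
    'Name':   ['Eric',    'Chris',   'Chris',   'Adam',    'Adam',    'Adam'],
    'Job':    ['Analyst', 'Analyst', 'Analyst', 'Sales',   'Sales',   'Sales'],
    'Job_ID': [5678,      5678,      5678,      1234,      1234,      1234],
})

def prior_holder(names):
    # Most recent name that differs from the latest holder; if the job
    # never changed hands, fall back to the latest holder.
    latest = names.iloc[-1]
    earlier = names[names != latest]
    return earlier.iloc[-1] if len(earlier) else latest

out = (df.sort_values('Date')
         .groupby(['Job_ID', 'Job'])['Name']
         .agg(Latest='last', Prior=prior_holder)
         .reset_index())
```

Note the MM/YYYY strings only sort correctly within a single year; for real data, parse `Date` with `pd.to_datetime` first.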
<python><python-3.x><pandas><dataframe><numpy>
2023-04-24 19:33:39
3
473
Coding_Nubie
76,095,453
3,903,762
Unable to install and use R packages in Docker (along with Python)
<p>I am stuck on this issue and I am new to R.</p> <p>What is the <em>correct</em> way of installing R and Python within Docker? (general approach).</p> <p>I tried this to run Python and R in a Docker with these Dockerfiles (after checking multiple answers on SO):</p> <pre><code>FROM ubuntu:latest ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update &amp;&amp; apt-get install -y --no-install-recommends build-essential wget r-base python3.10 python3-pip python3-setuptools python3-dev WORKDIR /app COPY requirements.txt /app/requirements.txt RUN pip3 install -r requirements.txt RUN R -e &quot;install.packages('rjson')&quot; RUN R -e &quot;install.packages('reticulate')&quot; RUN R -e &quot;install.packages('Robyn')&quot; COPY . /app EXPOSE 80 RUN mkdir ~/.streamlit RUN cp ./config.toml ~/.streamlit/config.toml CMD [&quot;streamlit&quot;, &quot;run&quot;, &quot;Scripts/HomePage.py&quot;] </code></pre> <hr /> <pre><code>FROM ubuntu:latest ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update &amp;&amp; apt-get install -y --no-install-recommends build-essential cmake r-base r-base-dev python3.9 \ python3-pip vim wget libssl-dev \ libcurl4-openssl-dev libxml2-dev WORKDIR /app COPY requirements.txt /app/requirements.txt RUN pip3 install -r requirements.txt -v #RUN Rscript -e &quot;install.packages(c('rjson', 'readxl', 'reticulate', 'Robyn'))&quot; RUN R -e &quot;install.packages('rjson', dependencies=TRUE, repos='http://cran.us.r-project.org')&quot; RUN R -e &quot;install.packages('readxl', dependencies=TRUE, repos='http://cran.us.r-project.org')&quot; RUN R -e &quot;install.packages('reticulate', dependencies=TRUE, repos='http://cran.us.r-project.org')&quot; RUN R -e &quot;install.packages('Robyn', dependencies=TRUE, repos='http://cran.us.r-project.org')&quot; # http://cran.rstudio.com/ COPY . 
/app EXPOSE 80 RUN mkdir ~/.streamlit RUN cp ./config.toml ~/.streamlit/config.toml CMD [&quot;streamlit&quot;, &quot;run&quot;, &quot;Scripts/HomePage.py&quot;] </code></pre> <p>The problem is that when I invoke R script from Streamlit app, the Error 'Robyn' is not available comes up everytime. If I try to install again within the script I get other issues relating to version conflict.</p> <p>I even tried from <code>r-base</code> image from here: <a href="https://stackoverflow.com/a/60311313/3903762">LINK to SO answer</a></p> <p>But still no success for running the R code. What am I missing here? Do I need to configure R for something too?</p> <p>The way I call R script :</p> <pre><code>res = subprocess.call([&quot;Rscript&quot;, &quot;Scripts/new-mmm.R&quot;]) </code></pre> <p>I am trying to run Robyn demo.R code from here as part of Streamlit app (<a href="https://facebookexperimental.github.io/Robyn/" rel="nofollow noreferrer">Robyn</a>).</p> <p>I install <code>nevergrad</code> via Python before moving to R package installation and install <code>reticulate</code> too before <code>Robyn</code>. Even having these as <code>install.packages(&quot;reticulate&quot;)</code> within the script fails to run successfully.</p> <p>With <code>r-base</code> image:</p> <pre><code>FROM r-base:4.2.0 RUN apt-get update &amp;&amp; apt install -y python3.10 python3-pip WORKDIR /app COPY requirements.txt requirements.txt RUN pip3 install -r requirements.txt RUN install2.r --error rjson RUN install2.r --error reticulate RUN install2.r --error Robyn COPY . 
/app EXPOSE 80 RUN mkdir ~/.streamlit RUN cp ./config.toml ~/.streamlit/config.toml CMD [&quot;streamlit&quot;, &quot;run&quot;, &quot;Scripts/HomePage.py&quot;] </code></pre> <p>For this <code>Docker build logs</code>:</p> <pre><code>#8 2134.0 ERROR: dependencies ‘lares’, ‘nloptr’ are not available for package ‘Robyn’ #8 2134.0 * removing ‘/usr/local/lib/R/site-library/Robyn’ #8 2134.0 #8 2134.0 The downloaded source packages are in #8 2134.0 ‘/tmp/downloaded_packages’ #8 2134.0 Error: installation of package ‘Robyn’ had non-zero exit status #8 2134.0 In addition: Warning messages: #8 2134.0 1: In install.packages(pkgs, ...) : #8 2134.0 installation of package ‘curl’ had non-zero exit status #8 2134.0 2: In install.packages(pkgs, ...) : #8 2134.0 installation of package ‘xml2’ had non-zero exit status #8 2134.0 3: In install.packages(pkgs, ...) : #8 2134.0 installation of package ‘RCurl’ had non-zero exit status #8 2134.0 4: In install.packages(pkgs, ...) : #8 2134.0 installation of package ‘openssl’ had non-zero exit status #8 2134.0 5: In install.packages(pkgs, ...) : #8 2134.0 installation of package ‘h2o’ had non-zero exit status #8 2134.0 6: In install.packages(pkgs, ...) : #8 2134.0 installation of package ‘httr’ had non-zero exit status #8 2134.0 7: In install.packages(pkgs, ...) : #8 2134.0 installation of package ‘rvest’ had non-zero exit status #8 2134.0 8: In install.packages(pkgs, ...) : #8 2134.0 installation of package ‘lares’ had non-zero exit status #8 2134.0 9: In install.packages(pkgs, ...) : #8 2134.0 installation of package ‘nloptr’ had non-zero exit status #8 2134.0 10: In install.packages(pkgs, ...) 
: #8 2134.0 installation of package ‘Robyn’ had non-zero exit status #8 ERROR: executor failed running [/bin/sh -c install2.r --error Robyn]: exit code: 1 ------ &gt; [5/5] RUN install2.r --error Robyn: ------ executor failed running [/bin/sh -c install2.r --error Robyn]: exit code: 1 </code></pre> <h2><strong>Edit</strong></h2> <p>I also get this when trying to install directly from R script (within container):</p> <pre><code>2023-04-25 19:55:58 ERROR: dependency ‘lares’ is not available for package ‘Robyn’ 2023-04-25 19:55:58 * removing ‘/usr/local/lib/R/site-library/Robyn’ 2023-04-25 19:55:58 2023-04-25 19:55:58 The downloaded source packages are in 2023-04-25 19:55:58 ‘/tmp/Rtmp6FgtmT/downloaded_packages’ 2023-04-25 19:55:58 Warning messages: 2023-04-25 19:55:58 1: In install.packages(&quot;Robyn&quot;, configure.args = &quot;--with-ssl=openssl --with-ca-bundle=/etc/ssl/certs/ca-certificates.crt&quot;, : 2023-04-25 19:55:58 installation of package ‘curl’ had non-zero exit status 2023-04-25 19:55:58 2: In install.packages(&quot;Robyn&quot;, configure.args = &quot;--with-ssl=openssl --with-ca-bundle=/etc/ssl/certs/ca-certificates.crt&quot;, : 2023-04-25 19:55:58 installation of package ‘xml2’ had non-zero exit status 2023-04-25 19:55:58 3: In install.packages(&quot;Robyn&quot;, configure.args = &quot;--with-ssl=openssl --with-ca-bundle=/etc/ssl/certs/ca-certificates.crt&quot;, : 2023-04-25 19:55:58 installation of package ‘RCurl’ had non-zero exit status 2023-04-25 19:55:58 4: In install.packages(&quot;Robyn&quot;, configure.args = &quot;--with-ssl=openssl --with-ca-bundle=/etc/ssl/certs/ca-certificates.crt&quot;, : 2023-04-25 19:55:58 installation of package ‘openssl’ had non-zero exit status 2023-04-25 19:55:58 5: In install.packages(&quot;Robyn&quot;, configure.args = &quot;--with-ssl=openssl --with-ca-bundle=/etc/ssl/certs/ca-certificates.crt&quot;, : 2023-04-25 19:55:58 installation of package ‘h2o’ had non-zero exit status 2023-04-25 19:55:58 6: In 
install.packages(&quot;Robyn&quot;, configure.args = &quot;--with-ssl=openssl --with-ca-bundle=/etc/ssl/certs/ca-certificates.crt&quot;, : 2023-04-25 19:55:58 installation of package ‘httr’ had non-zero exit status 2023-04-25 19:55:58 7: In install.packages(&quot;Robyn&quot;, configure.args = &quot;--with-ssl=openssl --with-ca-bundle=/etc/ssl/certs/ca-certificates.crt&quot;, : 2023-04-25 19:55:58 installation of package ‘rvest’ had non-zero exit status 2023-04-25 19:55:58 8: In install.packages(&quot;Robyn&quot;, configure.args = &quot;--with-ssl=openssl --with-ca-bundle=/etc/ssl/certs/ca-certificates.crt&quot;, : 2023-04-25 19:55:58 installation of package ‘lares’ had non-zero exit status 2023-04-25 19:55:58 9: In install.packages(&quot;Robyn&quot;, configure.args = &quot;--with-ssl=openssl --with-ca-bundle=/etc/ssl/certs/ca-certificates.crt&quot;, : 2023-04-25 19:55:58 installation of package ‘Robyn’ had non-zero exit status </code></pre> <p><strong>EDIT</strong></p> <p>Using <code>rocker</code>:</p> <pre><code>FROM rocker/r-ubuntu ENV CURLOPT_SSL_VERIFYHOST=0 ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update -y &amp;&amp; apt-get upgrade -y --no-install-recommends RUN apt install -y --no-install-recommends build-essential curl openssl cmake python3.10 python3-pip WORKDIR /app COPY requirements.txt requirements.txt RUN pip3 install -r requirements.txt RUN install2.r --error --deps TRUE rjson RUN install2.r --error --deps TRUE reticulate RUN install2.r --error --deps TRUE Robyn COPY . 
/app EXPOSE 80 RUN mkdir ~/.streamlit RUN cp ./config.toml ~/.streamlit/config.toml CMD [&quot;streamlit&quot;, &quot;run&quot;, &quot;Scripts/HomePage.py&quot;] </code></pre> <p><strong>Error</strong></p> <pre><code> ERROR: dependency ‘lares’ is not available for package ‘Robyn’ #13 2077.7 * removing ‘/usr/local/lib/R/site-library/Robyn’ #13 2077.7 #13 2077.7 The downloaded source packages are in #13 2077.7 ‘/tmp/downloaded_packages’ #13 2077.7 Error: installation of package ‘Robyn’ had non-zero exit status #13 2077.7 In addition: Warning messages: #13 2077.7 1: In install.packages(pkgs, ...) : #13 2077.7 installation of package ‘curl’ had non-zero exit status #13 2077.7 2: In install.packages(pkgs, ...) : #13 2077.7 installation of package ‘xml2’ had non-zero exit status #13 2077.7 3: In install.packages(pkgs, ...) : #13 2077.7 installation of package ‘RCurl’ had non-zero exit status #13 2077.7 4: In install.packages(pkgs, ...) : #13 2077.7 installation of package ‘openssl’ had non-zero exit status #13 2077.7 5: In install.packages(pkgs, ...) : #13 2077.7 installation of package ‘h2o’ had non-zero exit status #13 2077.7 6: In install.packages(pkgs, ...) : #13 2077.7 installation of package ‘httr’ had non-zero exit status #13 2077.7 7: In install.packages(pkgs, ...) : #13 2077.7 installation of package ‘rvest’ had non-zero exit status #13 2077.7 8: In install.packages(pkgs, ...) : #13 2077.7 installation of package ‘lares’ had non-zero exit status #13 2077.7 9: In install.packages(pkgs, ...) : #13 2077.7 installation of package ‘Robyn’ had non-zero exit status #13 ERROR: executor failed running [/bin/sh -c install2.r --error --deps TRUE Robyn]: exit code: 1 ------ &gt; [ 9/12] RUN install2.r --error --deps TRUE Robyn: ------ executor failed running [/bin/sh -c install2.r --error --deps TRUE Robyn]: exit code: 1 </code></pre> <hr /> <p>Thanks for your help in advance.</p>
<python><r><docker>
2023-04-24 19:33:29
0
1,486
MANU
76,095,390
1,914,034
numpy - round columns to closest decimal value
<p>I have an array of floats of shape <code>(N,3)</code>. I would like to round the first two columns to the closest decimal value varying from 1 to 0, which is derived from a parameter.</p> <p>I already have a solution, but I am wondering if there is a more efficient or &quot;numpy&quot; way to achieve it.</p> <p>For instance:</p> <pre><code>import numpy as np def round_to(arr, round_to): col_1_rounded = np.round(arr[:,0] / round_to) * round_to col_2_rounded = np.round(arr[:,1] / round_to) * round_to return np.vstack((col_1_rounded, col_2_rounded, arr[:,2])).transpose() arr = np.array(( (0.24, 1.26, 45.0), (0.76, 1.43, 23.0), (2.923, 2.22, 29.0), (33.1202, 2.1, 5.0), (2.01, 2.55, 41.0) )) print(round_to(arr, 0.1)) &gt;&gt;&gt;[[ 0.2 1.2 45. ] [ 0.8 1.4 23. ] [ 2.9 2.2 29. ] [33.1 2.1 5. ] [ 2. 2.5 41. ]] print(round_to(arr, 0.5)) &gt;&gt;&gt;[[ 0. 1. 45. ] [ 1. 1.5 23. ] [ 3. 2. 29. ] [33. 2. 5. ] [ 2. 2.5 41. ]] </code></pre> <p><strong>Note:</strong></p> <p>I have found other similar rounding functions on SO that use <code>round(value * round_to) / round_to</code> instead, but swapping the <code>/</code> and <code>*</code> operators does not return the correct results with my demo data. Why?</p> <pre><code>def round_to(arr, round_to): col_1_rounded = np.round(arr[:,0] * round_to) / round_to col_2_rounded = np.round(arr[:,1] * round_to) / round_to return np.vstack((col_1_rounded, col_2_rounded, arr[:,2])).transpose() arr = np.array(( (0.24, 1.26, 45.0), (0.76, 1.43, 23.0), (2.923, 2.22, 29.0), (33.1202, 2.1, 5.0), (2.01, 2.55, 41.0) )) print(round_to(arr, 0.5)) &gt;&gt;&gt;[[ 0. 2. 45.] [ 0. 2. 23.] [ 2. 2. 29.] [34. 2. 5.] [ 2. 2. 41.]] </code></pre>
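Two quick observations: the per-column calls can be collapsed into one vectorised slice assignment, and the `round(value * round_to) / round_to` variants found elsewhere expect the *inverse* of the step (e.g. 10 to round to 0.1). Feeding them the step itself rounds to multiples of `1/step`; with 0.5 that is multiples of 2.0, which is exactly why the second demo produced whole even numbers. A sketch of the vectorised version:

```python
import numpy as np

def round_columns(arr, step, cols=slice(0, 2)):
    # Snap the selected columns to the nearest multiple of `step` in one
    # vectorised slice operation; remaining columns are left untouched.
    out = arr.copy()
    out[:, cols] = np.round(out[:, cols] / step) * step
    return out

arr = np.array((
    (0.24, 1.26, 45.0),
    (0.76, 1.43, 23.0),
    (2.923, 2.22, 29.0),
    (33.1202, 2.1, 5.0),
    (2.01, 2.55, 41.0),
))

snapped = round_columns(arr, 0.5)
```

Copying first keeps the input array unmodified; drop the `copy()` to snap in place.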
<python><numpy>
2023-04-24 19:23:20
1
7,655
Below the Radar
76,095,388
2,989,642
pandas: how to get pandas.compare() to return all columns of the other row?
<p>With dataframes <code>df_a</code> and <code>df_b</code>, how do I return the difference (meaning, the data in <code>other</code> at variance with <code>self</code>) as complete rows (e.g., all columns)? If I do</p> <pre><code>first = { 'Name': ['Bob', 'Mike', 'Alex'], 'Job': ['Forklift Operator', 'Forklift Operator', 'Master Forklift Operator'] } second = { 'Name': ['Bob', 'Mike', 'Allen'], 'Job': ['Forklift Operator', 'Forklift Operator', 'Master Forklift Operator'] } df_a = pd.DataFrame(first) df_b = pd.DataFrame(second) df_c = df_a.compare(df_b) print(df_c) </code></pre> <p>that gives me</p> <pre><code> Name self other 2 Alex Allen </code></pre> <p>What I would like to be able to get is the entire row from <code>other</code> that does not match the left:</p> <pre><code> Name Job 2 Allen Master Forklift Operator </code></pre>
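`compare()` only reports the differing cells. To pull back whole rows of `other` that have no exact match in `self`, an outer merge with `indicator=True` is one possible alternative (a sketch, assuming rows should be matched on all columns):

```python
import pandas as pd

df_a = pd.DataFrame({
    'Name': ['Bob', 'Mike', 'Alex'],
    'Job': ['Forklift Operator', 'Forklift Operator', 'Master Forklift Operator'],
})
df_b = pd.DataFrame({
    'Name': ['Bob', 'Mike', 'Allen'],
    'Job': ['Forklift Operator', 'Forklift Operator', 'Master Forklift Operator'],
})

# indicator=True labels each row by which frame(s) it came from;
# 'right_only' rows exist in df_b but have no exact match in df_a.
merged = df_a.merge(df_b, how='outer', indicator=True)
only_in_b = merged.loc[merged['_merge'] == 'right_only', df_b.columns]
```

Selecting `df_b.columns` at the end drops the helper `_merge` column, leaving complete rows from `other` only.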
<python><pandas>
2023-04-24 19:23:10
2
549
auslander
76,095,343
18,189,817
How do I descriptively debugprint objects of custom classes in Python? (Like Rust’s derive(Debug))
<p>I have been working in both Python and Rust for quite a while now, and have started to get irritated by Python’s lack of a proper default <code>__repr__</code> (or, since as I understand it <code>__repr__</code> is not meant for default printing: a good human-readable, debug-oriented representation).</p> <p>By default, when you print a custom object in Python you get something like <code>&lt;module.Class object at 0x7f6415259210&gt;</code>, which is simply not helpful.</p> <p>In Rust, I can do something like this with <code>#[derive(Debug)]</code>:</p> <pre class="lang-rust prettyprint-override"><code>#[derive(Debug, Default)] struct Hello { name: String, first: usize, second: usize, } fn main() { let hello = Hello::default(); dbg!(hello); } </code></pre> <p>Which prints the beautiful:</p> <pre><code>[src/main.rs:10] hello = Hello { name: &quot;&quot;, first: 0, second: 0, } </code></pre> <p>As Aran-Fey pointed out, something like this exists for dataclasses:</p> <pre><code>from dataclasses import dataclass @dataclass class Hello: name: str first: int second: int print(Hello(&quot;&quot;, 0, 0)) &gt;&gt;&gt; Hello(name='', first=0, second=0) </code></pre> <p>However, I cannot find an equivalent in Python for other cases.</p> <p>I know there are easy ways to solve this, like printing the variables you're interested in one by one or writing my own <code>__repr__</code> for every class; however, I am looking for something more convenient.</p> <p>Currently, I have been copy-pasting this class into my Python code when debugging:</p> <pre class="lang-py prettyprint-override"><code>class Debug: &quot;&quot;&quot;Inspired by Rust's #[derive(Debug)]. Inherit from this class to make the object easier to see when debug printing. &quot;&quot;&quot; if __debug__: def __repr__(self) -&gt; str: names = [f&quot;{attr}: {value}&quot; for attr, value in self.__dict__.items()] if hasattr(self, &quot;__slots__&quot;): names += [f&quot;{attr}: {getattr(self, attr)}&quot; for attr in getattr(self, &quot;__slots__&quot;)] return f&quot;{type(self).__name__}: [{', '.join(names)}]&quot; </code></pre> <p>which can then be used like this:</p> <pre class="lang-py prettyprint-override"><code>class Hello(Debug): def __init__(self, name: str = &quot;&quot;, first: int = 0, second: int = 0) -&gt; None: self.name = name self.first = first self.second = second def some_method(self) -&gt; Something: hello = something_that_makes_this_not_a_dataclass() self.set = 5 # This'd be printed too! return hello.special() hello = Hello() print(hello) &gt;&gt;&gt; Hello: [name: , first: 0, second: 0] </code></pre> <p>This isn't perfect (I deliberately printed an empty string, and inheritance is funky too), but it does the job for me. I understand that, by adding a bunch more if-statements and using something other than inheritance (as roganjosh suggested: class decorators), I can probably make it pretty (and, if there is interest, I'd have the free time to do that and turn it into a small Python package), but that is not the point. I feel like this should exist already, and I simply cannot find it.</p> <p>I also understand that my current code is quite messy (<code>__repr__</code> should return a string that, when executed, gives back the object; inheritance might not be the way to go for this; <code>__str__</code> would not make sense either, as that is usually for forwards-facing applications; using <code>__qualname__</code> instead of <code>__name__</code> might give cleaner printing), however, that is not the point. I am looking for the debug method, not what dunder method to define it as.</p> <p>What would the proper way of doing this be?</p>
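The class-decorator route roganjosh suggested can be sketched roughly like this (note that `derive_debug` is a made-up name for illustration, not an existing library API):

```python
# Sketch of a Rust-#[derive(Debug)]-style class decorator.
# "derive_debug" is a hypothetical name, not from any real package.
def derive_debug(cls):
    def __repr__(self):
        attrs = {}
        # Collect slot attributes first (instances of __slots__-only classes
        # have no __dict__), then anything stored in __dict__.
        for name in getattr(cls, "__slots__", ()):
            if hasattr(self, name):
                attrs[name] = getattr(self, name)
        attrs.update(getattr(self, "__dict__", {}))
        body = ", ".join(f"{k}: {v!r}" for k, v in attrs.items())
        return f"{cls.__name__} {{ {body} }}"
    cls.__repr__ = __repr__
    return cls

@derive_debug
class Hello:
    def __init__(self, name="", first=0, second=0):
        self.name = name
        self.first = first
        self.second = second

print(Hello())  # Hello { name: '', first: 0, second: 0 }
```

Unlike the inheritance version, this leaves the class's MRO untouched, which sidesteps the "inheritance is funky" issue the asker mentions, and `{v!r}` makes empty strings visible as `''`.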
<python><python-3.x><debugging><printing>
2023-04-24 19:15:16
0
579
Egeau
76,095,267
1,096,660
How to do type checking while using Django?
<p>There are linters that support type checking; however, they become a mess in a Django project. Is there any way to have type checking where I define type hints, but not for all the Django bits and pieces?</p> <p>I'm using Conda and VS Code.</p>
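For reference, one common approach is mypy with the third-party `django-stubs` plugin, which teaches the checker about Django's dynamic bits so your own annotated code can be checked without drowning in framework noise. A sketch of the config, from memory — the plugin path, section name, and `myproject.settings` are assumptions to verify against the django-stubs documentation:

```ini
; mypy.ini -- sketch; check option names against the django-stubs docs
[mypy]
plugins = mypy_django_plugin.main
; don't error on third-party packages that ship no type hints
ignore_missing_imports = True

[mypy.plugins.django-stubs]
; "myproject.settings" is a placeholder for your settings module
django_settings_module = myproject.settings
```

With that in place, running `mypy .` only flags code you have annotated plus genuine misuse of Django APIs, rather than every untyped corner of the framework.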
<python><django>
2023-04-24 19:04:44
1
2,629
JasonTS
76,095,123
881,411
pytest how to exit only on error not on failure
<p>Similar to <a href="https://stackoverflow.com/questions/36804181/long-running-py-test-stop-at-first-failure">this question</a>, except desiring to exit only on errors encountered, not on failures.</p> <p>In theory, this would be useful because:</p> <ul> <li>A &quot;failed&quot; test indicates something wrong with that test, however there should be no impact to other tests, and so the test suite continuing to execute would be valid.</li> <li>An &quot;error&quot; test indicates something is wrong with the testbed/setup/config, so the test suite continuing to run would be a waste of time.</li> </ul> <p>I surmise there would be elaborate ways to get this accomplished with decorators and pytest.mark, but looking for a clean, simple solution that would avoid that if possible.</p>
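One hedged sketch of that exact distinction: in pytest, an ordinary assertion failure surfaces during the `call` phase of a test, while fixture or teardown breakage — what pytest reports as an *error* — surfaces during `setup`/`teardown`. A `conftest.py` hook can therefore ask the session to stop only for the latter (hook and attribute names come from pytest's plugin API and are worth double-checking against your pytest version):

```python
# conftest.py (sketch) -- abort the run on test *errors*,
# keep going on ordinary test *failures*.
#
# A failed assert happens in the "call" phase; a broken fixture or
# teardown happens in "setup"/"teardown", which pytest labels an error.

def pytest_runtest_makereport(item, call):
    if call.excinfo is not None and call.when != "call":
        # A truthy `shouldstop` asks the session to stop after the
        # current test, much like -x does for any failure.
        item.session.shouldstop = f"errored during {call.when}: {item.nodeid}"
```

This avoids decorators and `pytest.mark` entirely — the suite runs to completion past plain failures, but halts as soon as the testbed itself is broken.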
<python><unit-testing><pytest>
2023-04-24 18:44:54
0
1,192
Jon