QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
76,391,152
| 10,912,170
|
Convert Thread to Thread-Pool On Messaging Queue Consumer
|
<p>I have a python code like below:</p>
<pre><code>import logging
import time
import threading
from concurrent.futures import ThreadPoolExecutor
from aio_pika import connect, Message, DeliveryMode
import json
from aio_pika.abc import AbstractIncomingMessage
class RevenueQueue(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.connection = None
self.channel = None
self.result_queue = None
self.problem_queue = None
self.queue = None
async def init(self, loop):
""" Init """
self.connection = await connect(cloudamqp_url, loop=loop)
self.channel = await self.connection.channel()
await self.channel.set_qos(prefetch_count=1)
self.queue = await self.channel.declare_queue(request_queue,
durable=True)
await self.queue.consume(self.callback)
self.result_queue = await self.channel.declare_queue(result_queue, durable=True)
self.problem_queue = await self.channel.declare_queue(result_problem_queue, durable=True)
async def callback(self, message: AbstractIncomingMessage):
""" Callback """
request = json.loads(message.body.decode("utf-8"))
try:
tic = time.perf_counter()
result = ...
result_message = Message(
bytes(json.dumps(result, default=str), encoding='utf8'),
delivery_mode=DeliveryMode.PERSISTENT)
await self.channel.default_exchange.publish(
result_message, routing_key=revenue_result_queue)
except Exception as e:
error_message = Message(
bytes(json.dumps(request, default=str), encoding='utf8'),
delivery_mode=DeliveryMode.PERSISTENT)
await self.channel.default_exchange.publish(
error_message, routing_key=result_problem_queue)
finally:
await message.ack()
async def listen_queue(loop):
for _ in range(5):
td = RevenueQueue()
td.init()
if __name__ == "__main__":
try:
loop = asyncio.get_event_loop()
asyncio.ensure_future(listen_queue(loop))
except Exception as e:
logging.error(e)
</code></pre>
<p>I want to convert this thread logic to ThreadPoolExecutor. How can I handle it? Is there any suggestion?</p>
<p>I want to use ThreadPoolExecutor because I don't want to struggle with zombie threads. I believe that ThreadPoolExecutor can handle it.</p>
<p>Code looks complex but it is easy indeed. It is a request listener. It listens requests and handles them then if the result exists send the result to result_queue otherwise sen d it to problem_queue.</p>
|
<python><threadpool>
|
2023-06-02 14:42:39
| 0
| 1,291
|
Sha
|
76,391,127
| 7,874,693
|
ImportError: PyO3 modules may only be initialized once per interpreter process
|
<p>I am working on a Django project which includes DRF.
The application is dockerized.</p>
<p>Everything was working fine then suddenly I got the following error in my logs and I am completely clueless how it arrived</p>
<pre><code>patients | [uWSGI] getting INI configuration from /patients/src/_settings/local-test/uwsgi.ini
patients | [uwsgi-static] added mapping for /static/ => /static/
patients | *** Starting uWSGI 2.0.21 (64bit) on [Fri Jun 2 13:59:34 2023] ***
patients | compiled with version: 10.2.1 20210110 on 02 June 2023 10:11:18
patients | os: Linux-5.19.0-42-generic #43~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Apr 21 16:51:08 UTC 2
patients | nodename: 9be86066078d
patients | machine: x86_64
patients | clock source: unix
patients | pcre jit disabled
patients | detected number of CPU cores: 8
patients | current working directory: /patients/src
patients | detected binary path: /usr/local/bin/uwsgi
patients | uWSGI running as root, you can use --uid/--gid/--chroot options
patients | setgid() to 33
patients | *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
patients | chdir() to /patients/src
patients | your memory page size is 4096 bytes
patients | detected max file descriptor number: 1048576
patients | building mime-types dictionary from file /etc/mime.types...1476 entry found
patients | lock engine: pthread robust mutexes
patients | thunder lock: disabled (you can enable it with --thunder-lock)
patients | uwsgi socket 0 bound to TCP address :8080 fd 3
patients | uWSGI running as root, you can use --uid/--gid/--chroot options
patients | setgid() to 33
patients | *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
patients | Python version: 3.9.16 (main, May 23 2023, 14:17:54) [GCC 10.2.1 20210110]
patients | Python main interpreter initialized at 0x5640005ce9b0
patients | uWSGI running as root, you can use --uid/--gid/--chroot options
patients | setgid() to 33
patients | *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
patients | python threads support enabled
patients | your server socket listen backlog is limited to 100 connections
patients | your mercy for graceful operations on workers is 60 seconds
patients | mapped 618762 bytes (604 KB) for 4 cores
patients | *** Operational MODE: preforking+threaded ***
patients | WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x5640005ce9b0 pid: 1 (default app)
patients | mounting /patients/src/patients/wsgi.py on /patients
patients | Traceback (most recent call last):
patients | File "/patients/src/patients/wsgi.py", line 16, in <module>
patients | application = get_wsgi_application()
patients | File "/usr/local/lib/python3.9/site-packages/django/core/wsgi.py", line 12, in get_wsgi_application
patients | django.setup(set_prefix=False)
patients | File "/usr/local/lib/python3.9/site-packages/django/__init__.py", line 24, in setup
patients | apps.populate(settings.INSTALLED_APPS)
patients | File "/usr/local/lib/python3.9/site-packages/django/apps/registry.py", line 114, in populate
patients | app_config.import_models()
patients | File "/usr/local/lib/python3.9/site-packages/django/apps/config.py", line 301, in import_models
patients | self.models_module = import_module(models_module_name)
patients | File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
patients | return _bootstrap._gcd_import(name[level:], package, level)
patients | File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
patients | File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
patients | File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
patients | File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
patients | File "<frozen importlib._bootstrap_external>", line 850, in exec_module
patients | File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
patients | File "/patients/src/./patient/models.py", line 3, in <module>
patients | from django_cryptography.fields import encrypt
patients | File "/usr/local/lib/python3.9/site-packages/django_cryptography/fields.py", line 9, in <module>
patients | from django_cryptography.core.signing import SignatureExpired
patients | File "/usr/local/lib/python3.9/site-packages/django_cryptography/core/signing.py", line 10, in <module>
patients | from cryptography.exceptions import InvalidSignature
patients | File "/usr/local/lib/python3.9/site-packages/cryptography/exceptions.py", line 9, in <module>
patients | from cryptography.hazmat.bindings._rust import exceptions as rust_exceptions
patients | ImportError: PyO3 modules may only be initialized once per interpreter process
</code></pre>
<p>I am unable to understand how it arrives.</p>
<p>my docker-compose.yml looks like</p>
<pre><code>version: '3'
services:
patients:
container_name: patients
image: patients:latest
restart: always
volumes:
- ./:/patients/
env_file:
- patients.env
command: ["uwsgi", "--ini", "/patients/src/_settings/local-test/uwsgi.ini"]
ports:
- 8003:8080
build: ./
</code></pre>
<p>and here is my uwsgi.ini</p>
<pre><code>[uwsgi]
http-socket = :8080
chdir =/patients/src
master = 1
processes = 2
threads = 2
static-map = /static/=/static/
mount =/patients=/patients/src/patients/wsgi.py
manage-script-name = true
buffer-size = 65535
module=patients.wsgi
vacuum = True
gid = www-data
env = PYTHONDONTWRITEBYTECODE=1
</code></pre>
<p>From my limited knowledge and experience, I think it is due to something in docker-compose.</p>
<p>I got this error suddenly when today I made the container Up by command</p>
<p><code>docker-compose up -d</code></p>
|
<python><django><docker><django-rest-framework><pyo3>
|
2023-06-02 14:38:31
| 1
| 1,034
|
Muhammad Safwan
|
76,391,066
| 1,150,683
|
Altair: only show related field on hover, not all of them
|
<p>This is a follow-up question to <a href="https://stackoverflow.com/questions/76384509/altair-showing-the-value-of-the-current-point-in-the-tooltip">a previous (solved) question</a> about Altair with the same toy dataset.</p>
<p>In the code below, we have a dataset that can be read as: "two cooks cook1, cook2 are doing a competition. They have to make four dishes, each time with two given ingredients ingredient1, ingredient2. A jury has scored the dishes and the grades are stored in _score.</p>
<p>My code is working well already:</p>
<ul>
<li>on the y-axis the score is given, both for cook1 and cook2</li>
<li>in the legend, cook1 and cook2 are separate</li>
<li>a vertical line is drawn over the nearest x-axis item when hovering</li>
</ul>
<p>I would like to make changes to the <strong>legend</strong>.</p>
<p>Now it shows the following properties:</p>
<ul>
<li>dish (x-axis number)</li>
<li>ingredient 1, ingredient 2</li>
<li><code>cook1</code>'s dish, <code>cook2</code>'s dish</li>
<li>score of the cook that we're hovering closest to</li>
<li>cookX_score (cook1_score, cook2_score) of the cook that we're hovering closest to</li>
</ul>
<p>Instead, I would like to change it so that the dishes (<code>cook1</code> and <code>cook2</code>) are not <em>both</em> shown but that, just like with <code>score</code>, only the respective dish is included. So if I hover closer to a point of <code>cook1_score</code> I only want to show <code>cook1</code> and not also <code>cook2</code>.</p>
<p>My attempts to restrict this have failed, but the reason is simply because Altair does not know that <code>cook1</code> and <code>cook1_score</code>, and <code>cook2</code> and <code>cook2_score</code> are linked. But I am not sure how I can tell Altair that so that it only shows the relevant fields when hovering close to a point.</p>
<pre class="lang-py prettyprint-override"><code>import altair as alt
import pandas as pd
alt.renderers.enable("altair_viewer")
df = pd.DataFrame({
"ingredient1": ["potato", "onion", "carrot", "beet"],
"ingredient2": ["tomato", "pepper", "zucchini", "lettuce"],
"dish": [1, 2, 3, 4],
"cook1": ["cook1 dish1", "cook1 dish2", "cook1 dish3", "cook1 dish4"],
"cook1_score": [0.4, 0.3, 0.7, 0.9],
"cook2": ["cook2 dish1", "cook2 dish2", "cook2 dish3", "cook2 dish4"],
"cook2_score": [0.6, 0.2, 0.5, 0.6],
})
value_vars = [c for c in df.columns if c.endswith("_score")]
cook_names = [c.replace("_score", "") for c in value_vars]
id_vars = ["dish", "ingredient1", "ingredient2"] + cook_names
df_melt = df.melt(id_vars=id_vars, value_vars=value_vars, var_name="cook", value_name="score")
nearest_dish = alt.selection(type="single", nearest=True, on="mouseover", fields=["dish"], empty="none")
# Main chart with marked circles
chart = alt.Chart(df_melt).mark_circle().encode(
x="dish:O",
y="score:Q",
color="cook:N",
tooltip=id_vars + ["score", "cook"]
).add_selection(
nearest_dish
)
# Draw a vertical rule at the location of the selection
vertical_line = alt.Chart(df_melt).mark_rule(color="gray").encode(
x="dish:O",
).transform_filter(
nearest_dish
)
# Combine the chart and vertical_line
layer = alt.layer(
chart, vertical_line
).properties(
width=600, height=300
).interactive()
layer.show()
</code></pre>
<p>Note: I'm stuck on <code>Altair<5</code>.</p>
|
<python><pandas><dataframe><visualization><altair>
|
2023-06-02 14:29:29
| 1
| 28,776
|
Bram Vanroy
|
76,390,824
| 4,984,061
|
Python Cerberus - Validating Schema with this Example
|
<p>I am using Cerberus to validate dataframes schema. Using this sample data and code below, the if-else statement should "data structure is valid", however it returns that the "data structure is not valid". Any insight would be appreciated.</p>
<pre><code>import pandas as pd
from cerberus import Validator
df = pd.DataFrame({
'name': ['Alice', 'Bob', 'Charlie'],
'age': [25, 30, 35],
'city': ['New York', 'Paris', 'London']
})
data = df.to_dict()
schema = {
'name': {'type': 'string'},
'age': {'type': 'integer', 'min': 18},
'city': {'type': 'string'}
}
validator = Validator(schema)
is_valid = validator.validate(data)
if is_valid:
print("Data structure is valid!")
else:
print("Data structure is not valid.")
print(validator.errors)
</code></pre>
<p>Which results:</p>
<pre><code>>>> Data structure is not valid.
>>> {'age': ['must be of integer type'], 'city': ['must be of string type'], 'name': ['must be of string type']}
</code></pre>
|
<python><validation><cerberus>
|
2023-06-02 14:00:09
| 1
| 1,578
|
Starbucks
|
76,390,794
| 12,336,422
|
How to detect variables defined outside the function (or undefined) without running the function in python
|
<p>I would like a function (let's call it <code>detect_undefined</code>) which detects variables used in a function, which are <strong>not</strong> defined inside the function without running it.</p>
<p>Examples:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def add(a):
return np.sum(a)
print(detect_undefined(add))
</code></pre>
<p>Output: <code>["np"]</code></p>
<pre class="lang-py prettyprint-override"><code>def add(a):
import numpy as np
return np.sum(a)
print(detect_undefined(add))
</code></pre>
<p>Output: <code>[]</code></p>
<pre class="lang-py prettyprint-override"><code>b = 3
def add(a):
return a + b
print(detect_undefined(add))
</code></pre>
<p>Output: <code>["b"]</code></p>
<pre class="lang-py prettyprint-override"><code>def add(a, b=3):
return a + b
print(detect_undefined(add))
</code></pre>
<p>Output: <code>[]</code></p>
<p>It is crucial that the algorithm works without running the function to be examined, i.e. I cannot do something like <code>try ... except</code>. ChatGPT suggested to use <code>inspect</code> and <code>ast</code>, but its suggestion didn't quite work.</p>
|
<python>
|
2023-06-02 13:55:43
| 1
| 733
|
sams-studio
|
76,390,776
| 12,790,501
|
How can I read the rest of a command line into the last argument in Python Click, ignoring existing options?
|
<p>I want to read the rest of the line into the last argument ignoring that it contains even existing options (as the script that I am going to run may contain the same options as the main script containing click.</p>
<p>Using
python 3.10.11
click 8.1.3</p>
<p>The best solution so far which unfortunately omits the <code>-d</code> option form <code>script_args</code></p>
<p><strong>launcher.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import pathlib
import click
@click.command(
context_settings=dict(
ignore_unknown_options=True,
)
)
@click.option(
"-d",
"--detach",
is_flag=True,
)
@click.argument("script_path", nargs=1, type=click.Path(exists=True, path_type=pathlib.Path))
@click.argument("script_args", nargs=-1, type=click.UNPROCESSED)
def run(detach: bool, script_path: pathlib.Path, script_args: tuple[str]) -> None:
print(f"Detach = {detach}")
print(f"Script args = {script_args}")
if __name__ == "__main__":
run()
</code></pre>
<p>when I run the script</p>
<pre><code>> python launcher.py my_script.py -a -b -c -d -e
Detach = True
Script args = ('-a', '-b', '-c', '-e')
</code></pre>
<p>as you can see the '-d' flag is missing and yet '-d' flag was triggered which is undesired. I would expect ('-a', '-b', '-c', '-d', '-e') as a result.</p>
<p>What is the proper way within the click to do this?</p>
|
<python><python-click>
|
2023-06-02 13:53:57
| 1
| 343
|
Petr Synek
|
76,390,753
| 15,098,472
|
Elementwise Multiplication of a Tensor with multiple scalars
|
<p>Assume the following input:</p>
<pre><code>x = torch.randint(1, 5, size=(2, 3, 3))
print(x.shape)
torch.Size([2, 3, 3])
</code></pre>
<p>I want to perform an elementwise multiplication with multiple scalars. The scalars are available in this tensor:</p>
<pre><code>weights = torch.tensor([2, 2, 2, 1])
print(weights.shape)
torch.Size([4])
</code></pre>
<p>So, basically, I want 4 operations :</p>
<pre><code>result_1 = x * weights[0]
result_2 = x * weights[1]
result_3 = x * weights[2]
result_4 = x * weights[3]
</code></pre>
<p>packed in one tensor. However, simply doing</p>
<pre><code>result = x * weights
</code></pre>
<p>will not work, since the dimensions are not correct for broadcasting. My current solution is rather ugly and I assume not efficient :</p>
<pre><code>x = x.unsqueeze(0).repeat_interleave(4, 0)
result = x * weights[:, None, None, None]
</code></pre>
<p>I am looking for a better way!</p>
|
<python><pytorch><tensor>
|
2023-06-02 13:51:20
| 1
| 574
|
kklaw
|
76,390,731
| 131,874
|
Parsing multiple date string languages and formats
|
<p>I'm parsing a list of emails in a text file and I need to parse dates in the email headers. The dates are in a multitude of formats and languages:</p>
<pre><code>sexta-feira, 26 de agosto de 2022 16:41
viernes, 26 de agosto de 2022 19:24
2022/08/26 13:30:56
26 de agosto de 2022 13:32:49 BRT
</code></pre>
<p>Mostly portuguese, spanish, italian and english.</p>
<p>What would be the best aproach? I have tried <code>Babel</code> but the date parsing is very basic. For now I only have access to the text files exported from <code>Outlook</code> not the <code>smpt</code> sources.</p>
|
<python><string><date><parsing><internationalization>
|
2023-06-02 13:48:15
| 1
| 126,654
|
Clodoaldo Neto
|
76,390,611
| 1,514,114
|
import a symbol from another package just to add it to an api
|
<p>Say, I want to add a symbol <code>some_function</code> from package <code>a</code> to the API of package <code>b</code>, because that symbol does exactly what it should, but to packages using <code>b</code>, the fact that <code>some_function</code> is really implemented in package <code>a</code> is an implementation detail of <code>b</code>.</p>
<p>Now, I know that just putting <code>from a import some_function</code> into <code>b</code> will do the trick, but first, my IDE will complain that <code>some_function</code> is never used, and secondly, anyone looking at that file will think that's just a redundant import, probably left-over from an earlier version of that code.</p>
<p>How do I add <code>some_function</code> to <code>b</code> in a way that makes it clear to the IDE and to readers of the code of <code>b</code> that <code>some_function</code> is imported there on purpose and meant as part of <code>b</code>'s API?</p>
<p>(one way to do this is to have something like <code>def another_function(): some_function()</code>, but that seems like a large footprint for so simple a task)</p>
|
<python><python-import><policy>
|
2023-06-02 13:28:59
| 2
| 548
|
Johannes Bauer
|
76,390,607
| 10,353,865
|
pandas - first value bigger than a given value
|
<p>Let's assume that x is a sorted column containing some data with a "<" relation (e.g. floats, integer, strings). I am given a constant - say 5 - and I want to extract the first value in x bigger than that constant.</p>
<p>Now, I don't want to use a boolean comparison like "x > 5", because this will involve some unnecessary comparisons. So my question: Is there a simple function which I can use that does the trick without implicitly looping over ALL elements?</p>
<p>In essence I want something akin to a for loop with a break statement - once the first value bigger than x was found. (But with C-speed)</p>
|
<python><pandas>
|
2023-06-02 13:28:39
| 2
| 702
|
P.Jo
|
76,390,597
| 2,289,030
|
What's the idiomatic way to add a calculated column to multiple data frames?
|
<p>I have a few dataframes, let's call them <code>rates, sensors</code> with <code>"session_start"</code>, <code>"value_timestamp"</code> (timestamps) and <code>"value"</code> (float) columns. I want to add an <code>"elapsed"</code> column, which I've done successfully using the following code:</p>
<pre><code>def add_elapsed_min(df):
df["elapsed"] = (
df["value_timestamp"] - df["session_start"].min()
).dt.total_seconds() / 60.0
for df in [rates, sensors]:
add_elapsed_min(df)
</code></pre>
<p>Now, this code does work, and the elapsed column is correct. The minor problem is that I keep getting the <code>SettingWithCopyWarning</code>. I've tried changing the code as suggested by the warning, tried adding a <code>contextlib.suppress</code>, but can't seem to remove this warning. This makes me think I must be breaking some idiomatic way to do this. So I'm wondering: <strong>If you want to add a calculated column to many dataframes at once, how are you supposed to do this?</strong></p>
|
<python><pandas>
|
2023-06-02 13:26:53
| 1
| 968
|
ijustlovemath
|
76,390,552
| 5,790,653
|
Python show output of remote SSH command in web page in Django
|
<p>I have a simple Django app to show the output of an SSH remote command.</p>
<p><code>views.py</code>:</p>
<pre><code>from django.http import HttpResponse
from subprocess import *
def index(request):
with open('/home/python/django/cron/sites.txt', 'r') as file:
for site in file:
# out = getoutput(f"ssh -o StrictHostKeyChecking=accept-new -p1994 root@{site} crontab -l")
out = run(["ssh", "-o StrictHostKeyChecking=accept-new", "-p1994", f"root@{site}".strip(), "crontab", "-l"])
return HttpResponse(out)
</code></pre>
<p><code>urls.py</code>:</p>
<pre><code>from django.contrib import admin
from django.urls import path
# imported views
from cron import views
urlpatterns = [
path('admin/', admin.site.urls),
# configured the url
path('',views.index, name="homepage")
]
</code></pre>
<p><code>sites.txt</code>:</p>
<pre><code>1.1.1.1
2.2.2.2
3.3.3.3
</code></pre>
<p>The issue is when I run <code>localhost:5000</code>, I see this:</p>
<pre><code>CompletedProcess(args=['ssh', '-o StrictHostKeyChecking=accept-new', '-p1994', 'root@3.3.3.3', 'crontab', '-l'], returncode=0)
</code></pre>
<p>While I should see something like this:</p>
<pre><code>* * * * * ls
* * * * * date
* * * * * pwd
</code></pre>
<p>I tried with both <code>run</code> and <code>getoutput</code>, but they either don't connect or the output is shown in terminal only.</p>
<p>How can I run this and show the output in the webpage?</p>
|
<python><django>
|
2023-06-02 13:21:02
| 1
| 4,175
|
Saeed
|
76,390,471
| 11,862,989
|
pip install pyodbc is not working and Facing Error : Could not find a version that satisfies the requirement of pyodbc
|
<p>Error :</p>
<pre><code> Could not find a version that satisfies the requirement of pyodbc (from versions: ).
No matching distribution found for Pyodbc.
</code></pre>
<p>Traceback :</p>
<pre><code>C:\Users\Administrator> pip install pyodbc
collecting pyodbc
Retrying (Retry(total=s, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolErr]
r (‘Connection aborted. *, ConnectionResetError (10034, 'An existing connection was forcibly closed by the remote host’,
ne, 10054, None)’: /simple/pyodbe/
Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolErr]
r (‘Connection aborted. *, ConnectionResetError (10034, 'An existing connection was forcibly closed by the remote host’,
ne, 10054, None))': /simple/pyodbe/
Retrying’ (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolErr]
r (‘Connection aborted. *, ConnectionResetError (10034, 'An existing connection was forcibly closed by the remote host’,
ne, 10054, None))': /simple/pyodbe/
Retrying (Retry(total=l, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolErr]
r (‘Connection aborted. , ConnectionResetError (10034, 'An existing connection was forcibly closed by the remote host’,
ne, 10054, None))': /simple/pyodbe/
Retrying’ (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolErr]
r (‘Connection aborted. *, ConnectionResetError (10034, 'An existing connection was forcibly closed by the remote host’,
ne. 10054, None)": /simple/pyodbe/.
</code></pre>
<p>Now one solution is that I need to download manually through link but I know that.</p>
|
<python><installation><pip><pyodbc>
|
2023-06-02 13:10:44
| 0
| 659
|
Abhishek Mane
|
76,390,211
| 10,755,032
|
WebScraping - BeautifulSoup Python
|
<p>I am trying to scrape the medium website. Here is my code.</p>
<pre><code>import requests
from bs4 import BeautifulSoup as bs
class Publication:
def __init__(self, publication):
self.publication = publication
self.headers = {'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36'}# mimics a browser's request
def get_articles(self):
"Get the articles of the user/publication which was given as input"
publication = self.publication
r = requests.get(f"https://{publication}.com/", headers=self.headers)
soup = bs(r.text, 'lxml')
elements = soup.find_all('h2')
for x in elements:
print(x.text)
publication = Publication('towardsdatascience')
publication.get_articles()
</code></pre>
<p>It is working somewhat good but it is not scraping all the titles. It is only getting the some of the articles from the top of the page. I want it to get all the article names from the page. It also getting the side bar stuff like who to follow and all. I dont want that. How do I do that?</p>
<p>Here is the output of my code:</p>
<pre><code>How to Rewrite and Optimize Your SQL Queries to Pandas in 5 Simple Examples
Storytelling with Charts
Simplify Your Data Preparation with These Four Lesser-Known Scikit-Learn Classes
Non-Parametric Tests for Beginners (Part 1: Rank and Sign Tests)
BigQuery Best Practices: Unleash the Full Potential of Your Data Warehouse
How to Test Your Python Code with Pytest
7 Signs You’ve Become an Advanced Sklearn User Without Even Realizing It
How Data Scientists Save Time
MLOps: What is Operational Tempo?
Finding Your Dream Master’s Program in AI
Editors
TDS Editors
Ben Huberman
Caitlin Kindig
Sign up for The Variable
</code></pre>
|
<python><web-scraping><beautifulsoup>
|
2023-06-02 12:35:55
| 1
| 1,753
|
Karthik Bhandary
|
76,390,109
| 880,874
|
How can I get my Python script to wrap strings with single quotes?
|
<p>I have the Python script below that reads data from an Excel spreadsheet and then inserts that data into SQL Server</p>
<pre><code>excel_file = 'prod_data.xlsx'
df = pd.read_excel(excel_file)
systemDesign_columns = ['dateCreated', 'systemSecure', 'isUsingFirewall','systemStartYear', 'firstName', 'lastName', 'systemDescription']
for _, row in df.iterrows():
systemDesign_values = [str(row[column]) for column in systemDesign_columns]
systemDesign_values_str = ', '.join(systemDesign_values)
commands = f"""
INSERT INTO dbo.systemDesign (dateCreated, systemSecure, isUsingFirewall, systemStartYear, firstName, lastName, systemDescription) VALUES ({systemDesign_values_str});
"""
insert_statements.append(commands)
for statement in insert_statements:
print(statement)
</code></pre>
<p>This script does run without errors, but how can I enclose the string values in single quotes?</p>
<p>Here is an example of what it generates:</p>
<pre><code>INSERT INTO dbo.systemDesign (dateCreated, systemSecure, isUsingFirewall,systemStartYear, firstName, lastName, systemDescription) VALUES (2023-06-01 00:00:00, 1, 0, 2022, Jim, Williams, A user network);
</code></pre>
<p>You can see that there are some string fields.</p>
<p>Is there a way to only enclose the string fields in single quotes?</p>
<p>Thanks!</p>
|
<python><python-3.x>
|
2023-06-02 12:23:00
| 2
| 7,206
|
SkyeBoniwell
|
76,389,970
| 1,911,091
|
How to transpose a range of columns in pandas and combine it with the current keys
|
<p>Lets imagine a pandas dataframe table like (in sql the keys would be [type, product]):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>type</th>
<th>product</th>
<th>...</th>
<th>a1</th>
<th>a2</th>
<th>a3</th>
<th>a4</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
<td>...</td>
<td>a</td>
<td>b</td>
<td>c</td>
<td>d</td>
</tr>
<tr>
<td>0</td>
<td>1</td>
<td>...</td>
<td>e</td>
<td>f</td>
<td>g</td>
<td>h</td>
</tr>
<tr>
<td>0</td>
<td>2</td>
<td>...</td>
<td>i</td>
<td>j</td>
<td>k</td>
<td>l</td>
</tr>
<tr>
<td>0</td>
<td>...</td>
<td>...</td>
<td>m</td>
<td>n</td>
<td>o</td>
<td>p</td>
</tr>
<tr>
<td>0</td>
<td>n</td>
<td>...</td>
<td>q</td>
<td>r</td>
<td>s</td>
<td>t</td>
</tr>
<tr>
<td>1</td>
<td>0</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>1</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>1</td>
<td>n</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>2</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>2</td>
<td>n</td>
<td>...</td>
<td>...</td>
<td>x</td>
<td>y</td>
<td>z</td>
</tr>
</tbody>
</table>
</div>
<p>what I need as output is :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>type</th>
<th>product</th>
<th>a index</th>
<th>a value</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
<td>a1</td>
<td>a</td>
</tr>
<tr>
<td>0</td>
<td>0</td>
<td>a2</td>
<td>b</td>
</tr>
<tr>
<td>0</td>
<td>0</td>
<td>a3</td>
<td>c</td>
</tr>
<tr>
<td>0</td>
<td>0</td>
<td>a4</td>
<td>d</td>
</tr>
<tr>
<td>0</td>
<td>1</td>
<td>a1</td>
<td>e</td>
</tr>
<tr>
<td>0</td>
<td>1</td>
<td>a2</td>
<td>f</td>
</tr>
<tr>
<td>0</td>
<td>1</td>
<td>a3</td>
<td>g</td>
</tr>
<tr>
<td>0</td>
<td>1</td>
<td>a4</td>
<td>h</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>....</td>
<td>....</td>
</tr>
<tr>
<td>2</td>
<td>n</td>
<td>a4</td>
<td>z</td>
</tr>
</tbody>
</table>
</div>
<p>So, transposing the rows from a1-a4 for every row and combining it with the keys.</p>
<p>Is this possible and if how.</p>
<p>Thanks</p>
|
<python><pandas>
|
2023-06-02 12:01:15
| 0
| 1,442
|
user1911091
|
76,389,849
| 9,488,023
|
Pandas drop_duplicates with a tolerance value for duplicates
|
<p>What I have is two Pandas dataframes of coordinates in xyz-format. One of these contains points that should be masked in the other one, but the values are slightly offset from each other, meaning a direct match with drop_duplicates is not possible. My idea was to round the values to the nearest significant number, but this also does not always work, since if some values are rounded to different numbers, they won't match and won't be removed. For example, if one point lies at x = 149 and another at x = 151, rounding them to the nearest hundred gives different values. My code looks something like this:</p>
<pre><code>import pandas as pd
import numpy as np
df_test_1 = pd.DataFrame(np.array([[123, 449, 756.102], [406, 523, 543.089], [140, 856, 657.24], [151, 242, 124.42]]), columns = ['x', 'y', 'z'])
df_test_2 = pd.DataFrame(np.array([[123, 451, 756.099], [404, 521, 543.090], [139, 859, 657.23], [633, 176, 875.76]]), columns = ['x', 'y', 'z'])
df_test_3 = pd.concat([df_test_1, df_test_2])
df_test_3['xr'] = df_test_3.x.round(-2)
df_test_3['yr'] = df_test_3.y.round(-2)
df_test_3['zr'] = df_test_3.z.round(1)
df_test_3 = df_test_3.drop_duplicates(subset=['xr', 'yr', 'zr'], keep=False)
</code></pre>
<p>What I want is to remove duplicates if the columns 'xr' and 'yr' are duplicates +-100 and 'zr' duplicates +-0.1. For example, if two coordinates are rounded to (100, 300, 756.2) and (200, 400, 756.1), they should be considered duplicates and should be removed. Any ideas are appreciated, thanks!</p>
|
<python><pandas><dataframe><duplicates>
|
2023-06-02 11:44:08
| 2
| 423
|
Marcus K.
|
76,389,797
| 8,207,754
|
ModuleNotFoundError: No module named 'virtualenvwrapper'
|
<p>I'm trying to get virtualenvwrapper working in zsh (for Python 3, MacOS Apple Silicon) and haven't find a solution in other posts.</p>
<p>I'm running <code>pip3 install virtualenvwrapper</code> which appears successful.</p>
<p>Then I run <code>mkvirtualenv helloworld</code> and get this error:</p>
<pre><code>(base) username@xxx ~ % mkvirtualenv helloworld
created virtual environment CPython3.10.6.final.0-64 in 195ms
creator CPython3Posix(dest=/Users/username/Users/username/.virtualenvs/helloworld, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/Users/username/Library/Application Support/virtualenv)
added seed packages: pip==23.0.1, setuptools==67.6.1, wheel==0.40.0
activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
/Users/username/miniconda3/bin/python3: Error while finding module specification for 'virtualenvwrapper.hook_loader' (ModuleNotFoundError: No module named 'virtualenvwrapper')`
</code></pre>
<p>My entire .zshrc looks like this, as recommended by this answer <a href="https://stackoverflow.com/a/62993760">https://stackoverflow.com/a/62993760</a> :</p>
<pre><code>(base) username@xxx ~ % cat .zshrc
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/Users/username/miniconda3/bin/conda' 'shell.zsh' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/Users/username/miniconda3/etc/profile.d/conda.sh" ]; then
. "/Users/username/miniconda3/etc/profile.d/conda.sh"
else
export PATH="/Users/username/miniconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda initialize <<<
export WORKON_HOME=~/Users/username/.virtualenvs
export VIRTUALENVWRAPPER_VIRTUALENV=$(which virtualenv)
export VIRTUALENVWRAPPER_PYTHON="$(which python3)"
source $(which virtualenvwrapper.sh)
</code></pre>
<p>(I also use conda, and would ideally like both conda and virtualenvwrapper to work in this terminal.)</p>
|
<python><virtualenv><zsh><virtualenvwrapper><zshrc>
|
2023-06-02 11:36:08
| 2
| 794
|
K--
|
76,389,532
| 3,363,228
|
Removing words from FastText Model / Converting a .vec file to a .bin file (vec2bin)
|
<p>I am working with FastText on a language (Tamil) and a task where I don't expect to encounter and simply don't care about character/words from other languages. I have both the text (<code>.vec</code>) and binary (<code>.bin</code>) files for this model. <strong>I want to know how to either:</strong></p>
<p><strong>1. Remove words from the vocabulary of the model</strong> (after loading it from the <code>.bin</code> file), and then save it to disk as <code>.bin</code> (I know how to do the saving, but not if/how I can delete words from its vocab).</p>
<p><strong>2. Convert the <code>.vec</code> file to a <code>.bin</code> file</strong> (this way, I can use simple text processing to drop the unnecesary rows).</p>
<p>Some context: The model files are huge, and the main operation I am interested in (computing vector similarity/finding nearest vectors) takes far too long, and sometimes consumes all of my RAM altogether (admittedly there are other concurrent memory-intensive processes). Doing some cursory text munging/regex matching/line counting in the Terminal, I know that I could <strong>massively</strong> shrink the model if I dropped all words in its vocabulary that simply aren't relevant to my task, viz, those which contains characters outside (1) the Tamil Unicode block and (2) outside ASCII 0-40 (numbers/punctuation/common translingual ASCII symbols).</p>
<p>Is there a way to do (1) or (2) above, or, if not, is there some other way to accomplish my end goals here? I am aware of quantizing but am treating it as a last resort; given the non-Tamil words are nothing but pure bloat given the nature of my project, it seems silly to engage in lossy compression when I could "losslessly" compress by deleting the cruft.</p>
|
<python><machine-learning><nlp><word2vec><fasttext>
|
2023-06-02 10:56:00
| 0
| 3,910
|
ubadub
|
76,389,489
| 11,222,963
|
NLTK lemmatizer changing "less" to "le". Text doesn't make sense anymore
|
<pre><code>from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
lemmatizer.lemmatize('Less'.lower())
'le'
</code></pre>
<p>What's going on here, and how can I avoid this?</p>
<p>The word <code>'le'</code> is now appearing all over my LDA topic model, and it doesn't make sense.</p>
<p>Who knows what other words it is affecting in the model. Should I avoid using the Lemmatizer or is there a way to fix this?</p>
|
<python><nltk><gensim><text-classification><wordnet>
|
2023-06-02 10:50:46
| 1
| 3,416
|
SCool
|
76,389,395
| 13,086,128
|
AttributeError: module 'numpy' has no attribute 'long'
|
<p>I am trying to find <code>9</code> raise to power <code>19</code> using numpy.</p>
<p>I am using <code>numpy 1.24.3</code></p>
<p>This is the code I am trying:</p>
<pre><code>import numpy as np
np.long(9**19)
</code></pre>
<p>This is the error I am getting:</p>
<pre><code>AttributeError: module 'numpy' has no attribute 'long'
</code></pre>
|
<python><python-3.x><numpy>
|
2023-06-02 10:36:57
| 3
| 30,560
|
Talha Tayyab
|
76,389,341
| 2,532,296
|
common/unified regex for a set of pattern
|
<p>I am trying to do some text processing and was interested to know if I can have a common/unified regex for a certain pattern. The pattern of interest is strings that ends with <code>{string}_{i}</code> where <code>i</code> is a number, on the second column of <code>test.csv</code>. Once the regex is matched, I wish to replace it with <code>{string}[i]</code>.</p>
<p>For now the python script works as expected for the strings for which I explicitly mention the regex pattern. I want to have a more generic regex pattern that will match all the strings that have <code>{string}_{i}</code> instead of writing a regex for all the patterns (which is not scalable).</p>
<h5>input test.csv</h5>
<pre><code>bom_a14 , COMP_NUM_0
bom_a17 , COMP_NUM_2
bom_a27 , COMP_NUM_11
bom_a35 , FUNC_1V8_OLED_OUT_7
bom_a38 , FUNC_1V8_OLED_OUT_9
bom_a39 , FUNC_1V8_OLED_OUT_10
bom_a46 , CAP_4
bom_a47 , CAP_3
bom_a48 , CAP_6
</code></pre>
<h5>test.py</h5>
<pre><code>import csv
import re
# Match the values in the first column of the second file with the first file's data
with open('test.csv', 'r') as file2:
reader = csv.reader(file2)
for row in reader:
row_1=row[1]
# for matching COMP_NUM_{X}
match_data = re.match(r'([A-Z]+)_([A-Z]+)_(\d+)',row_1.strip())
# for matching FUNC_1V8_OLED_OUT_{X}
match_data2 = re.match(r'([A-Z]+)_([A-Z0-9]+)_([A-Z]+)_([A-Z]+)_(\d+)',row_1.strip())
# if match found, reformat the data
if match_data:
new_row_1 = match_data.group(1) +'_'+ match_data.group(2)+ '[' + match_data.group(3) + ']'
elif match_data2:
new_row_1 = match_data2.group(1) +'_'+ match_data2.group(2)+ '_'+ match_data2.group(3)+'_'+ match_data2.group(4)+'[' + match_data2.group(5) + ']'
else:
new_row_1 = row_1
print new_row_1
</code></pre>
<h5>output</h5>
<pre><code>COMP_NUM[0]
COMP_NUM[2]
COMP_NUM[11]
FUNC_1V8_OLED_OUT[7]
FUNC_1V8_OLED_OUT[9]
FUNC_1V8_OLED_OUT[10]
CAP_4
CAP_3
CAP_6
</code></pre>
<h5>expected output</h5>
<pre><code>COMP_NUM[0]
COMP_NUM[2]
COMP_NUM[11]
FUNC_1V8_OLED_OUT[7]
FUNC_1V8_OLED_OUT[9]
FUNC_1V8_OLED_OUT[10]
CAP[4]
CAP[3]
CAP[6]
</code></pre>
|
<python><regex><match>
|
2023-06-02 10:30:44
| 2
| 848
|
user2532296
|
76,389,309
| 7,183,444
|
How to capture words with letters separated by a consistent symbol in Python regex?
|
<p>I am trying to write a Python regex pattern that will allow me to capture words in a given text that have letters separated by the same symbol or space.</p>
<p>For example, in the text "<code>This is s u p e r and s.u.p.e.r and s👌u👌p👌e👌r and s!u.p!e.r</code>", my goal is to extract the words "<code>s u p e r</code>", "<code>s.u.p.e.r</code>", and <code>s👌u👌p👌e👌r</code>. However, I want to exclude "<code>s!u.p!e.r</code>" because it does not have the same consistent separating symbol within the word.</p>
<p>I'm currently using the following:</p>
<pre><code>x="This is s u p e r and s.u.p.e.r and s👌u👌p👌e👌r and s!u.p!e.r"
pattern = r"(?:\b\w[^\w\d]){2,}"
re.findall(pattern, x)
['s u p e r ', 's.u.p.e.r ', 's👌u👌p👌e👌r ', 's!u.p!e.']
</code></pre>
<p>I'm just curious if it's possible to exclude the cases that do not have the same symbol.</p>
|
<python><regex><python-re>
|
2023-06-02 10:26:00
| 2
| 1,731
|
Billy Bonaros
|
76,389,296
| 11,098,908
|
Where should pygame.time.Clock().tick() be called in the script
|
<p>According to this <a href="https://eng.libretexts.org/Bookshelves/Computer_Science/Programming_Languages/Book%3A_Making_Games_with_Python_and_Pygame_(Sweigart)/03%3A_Pygame_Basics/3.18%3A_Frames_Per_Second_and_pygame.time.Clock_Objects#:%7E:text=fpsClock.tick(FPS)-,The,pygame.display.update(),-.%20The%20length%20of" rel="nofollow noreferrer">statement</a>, <code>pygame.time.Clock().tick()</code> should be called at the end of the main loop. However, I couldn't see any differences on my screen display regardless where I executed that method within the loop. Could someone please give some clarification on this? Thanks</p>
|
<python><pygame><pygame-clock>
|
2023-06-02 10:23:26
| 2
| 1,306
|
Nemo
|
76,389,191
| 3,482,266
|
NetworkX graph creating process from numpy array gets killed
|
<p>I have a similarity (dense) matrix of shape 20K by 20K, with float type data.</p>
<p>When I run <code>graph = nx.from_numpy_array(similarity_matrix)</code>, my computer starts to eat RAM memory like crazy, more than 10GB... and the python process ends up being killed.</p>
<p>I thought that networkx was able to deal with graphs in the order of magnitude of a few millions of nodes...</p>
<p>Is there a way to improve my process?</p>
|
<python><networkx>
|
2023-06-02 10:05:56
| 1
| 1,608
|
An old man in the sea.
|
76,389,148
| 19,390,849
|
Esprima for Python does not parse the JSX code
|
<p>In <a href="https://stackoverflow.com/questions/76387687/how-to-get-attributes-of-a-jsx-element-using-python-regular-expressions">How to get attributes of a JSX element using Python regular expressions?</a> I was introduced to <a href="https://pypi.org/project/esprima/" rel="nofollow noreferrer">esprima</a> for Python.</p>
<p>I installed it and tried to use it. But it complains about the syntax.</p>
<p>These are my component's start lines:</p>
<pre><code>import { Image } from 'Base'
import {
AddCircle,
ChevronRight,
} from 'Svg'
const Hero = ({
firstFloatingImage,
mobileBackgroundImage,
}) => {
return <>
<section class="relative bg-paydar-color2 bg-cover hero">
<Image
containerClass="sm:hidden absolute contain bottom-0 mx-auto"
imageClass=""
src={mobileBackgroundImage}
alt="background image"
priority
/>
</code></pre>
<p>And this is my Python code to test it:</p>
<pre><code>import esprima
code = open('path_to_my_component.jsx').read().strip()
ast=esprima.parseScript(code)
print(ast)
</code></pre>
<p>And this is the error I get:</p>
<pre><code> return esprima.parseScript(code)
File "/home/user/.local/lib/python3.10/site-packages/esprima/esprima.py", line 100, in parseScript
return parse(code, options, delegate, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/esprima/esprima.py", line 79, in parse
ast = parser.parseModule() if isModule else parser.parseScript()
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 2867, in parseScript
body.append(self.parseStatementListItem())
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1545, in parseStatementListItem
self.tolerateUnexpectedToken(self.lookahead, Messages.IllegalImportDeclaration)
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 240, in tolerateUnexpectedToken
self.errorHandler.tolerate(self.unexpectedTokenError(token, message))
File "/home/user/.local/lib/python3.10/site-packages/esprima/error_handler.py", line 60, in tolerate
raise error
esprima.error_handler.Error: Line 1: Unexpected token
</code></pre>
<p>I wanted to report this to their team, but <a href="https://github.com/Kronuz/esprima-python" rel="nofollow noreferrer">their GitHub</a> does not have the issues.</p>
<p>Does anybody know what might be wrong here?</p>
<p><strong>Update</strong></p>
<p>This is a test with another component:</p>
<pre><code>const About = ({
summary,
title,
}) => {
return <section>
<div class="max-w-4xl mx-auto px-3 xl:px-0 pb-10 overflow-hidden">
<div class="text-2xl sm:text-3xl md:text-4xl text-paydar-color31 my-6 md:my-10 text-center font-bold capitalize">
{title}
</div>
<p class="leading-7 text-paydar-color3">
{summary}
</p>
</div>
</section>
}
export default About
</code></pre>
<p>And this is the error:</p>
<pre><code> return esprima.parseScript(code)
File "/home/user/.local/lib/python3.10/site-packages/esprima/esprima.py", line 100, in parseScript
return parse(code, options, delegate, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/esprima/esprima.py", line 79, in parse
ast = parser.parseModule() if isModule else parser.parseScript()
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 2867, in parseScript
body.append(self.parseStatementListItem())
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1548, in parseStatementListItem
statement = self.parseLexicalDeclaration(Params(inFor=False))
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1628, in parseLexicalDeclaration
declarations = self.parseBindingList(kind, options)
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1601, in parseBindingList
lst = [self.parseLexicalBinding(kind, options)]
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1591, in parseLexicalBinding
init = self.isolateCoverGrammar(self.parseAssignmentExpression)
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 503, in isolateCoverGrammar
result = parseFunction()
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1465, in parseAssignmentExpression
body = self.parseFunctionSourceElements()
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 2362, in parseFunctionSourceElements
body.append(self.parseStatementListItem())
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1556, in parseStatementListItem
statement = self.parseStatement()
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 2320, in parseStatement
statement = self.parseReturnStatement()
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 2094, in parseReturnStatement
argument = self.parseExpression() if hasArgument else None
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1514, in parseExpression
expr = self.isolateCoverGrammar(self.parseAssignmentExpression)
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 503, in isolateCoverGrammar
result = parseFunction()
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1430, in parseAssignmentExpression
expr = self.parseConditionalExpression()
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1338, in parseConditionalExpression
expr = self.inheritCoverGrammar(self.parseBinaryExpression)
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 522, in inheritCoverGrammar
result = parseFunction()
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1280, in parseBinaryExpression
expr = self.inheritCoverGrammar(self.parseExponentiationExpression)
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 522, in inheritCoverGrammar
result = parseFunction()
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1247, in parseExponentiationExpression
expr = self.inheritCoverGrammar(self.parseUnaryExpression)
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 522, in inheritCoverGrammar
result = parseFunction()
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1240, in parseUnaryExpression
expr = self.parseUpdateExpression()
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1201, in parseUpdateExpression
expr = self.inheritCoverGrammar(self.parseLeftHandSideExpressionAllowCall)
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 522, in inheritCoverGrammar
result = parseFunction()
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 1096, in parseLeftHandSideExpressionAllowCall
expr = self.inheritCoverGrammar(self.parseNewExpression if self.matchKeyword('new') else self.parsePrimaryExpression)
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 522, in inheritCoverGrammar
result = parseFunction()
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 597, in parsePrimaryExpression
expr = self.throwUnexpectedToken(self.nextToken())
File "/home/user/.local/lib/python3.10/site-packages/esprima/parser.py", line 237, in throwUnexpectedToken
raise self.unexpectedTokenError(token, message)
esprima.error_handler.Error: Line 5: Unexpected token <
</code></pre>
|
<python><jsx>
|
2023-06-02 10:01:00
| 1
| 1,889
|
Big boy
|
76,389,051
| 2,749,397
|
Proper position for my colorbar in an AxesGrid collocation
|
<p><a href="https://i.sstatic.net/7oui1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7oui1.png" alt="enter image description here" /></a></p>
<p>The figure above was produced by</p>
<pre><code>
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import AxesGrid
mat = [[1,2],[3,4]]
fig = plt.figure()
grid = AxesGrid(fig, 111, nrows_ncols=(1,3),
axes_pad=0.2, share_all=False, label_mode='L',
cbar_location="bottom", cbar_mode="single")
for ax in grid.axes_all: im = ax.imshow(mat)
# plt.colorbar(im, cax=??)
plt.show()
</code></pre>
<p>To complete the job, I'd like to draw a colorbar, <em>probably</em> using the Axes at the bottom of the figure (but I'm not sure that using <code>cax=...</code> is what I need),</p>
<p>HOWEVER</p>
<p>I don't know how to recover the bottom Axes from <code>grid</code>.</p>
<p>After <code>dir(grid)</code>, I've tried to specify <code>ax=grid.cb_axes</code> but the result is amendable</p>
<p><a href="https://i.sstatic.net/D11lD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D11lD.png" alt="enter image description here" /></a></p>
<p>What must be done to have everything in its right place?</p>
|
<python><matplotlib><multiple-axes>
|
2023-06-02 09:47:45
| 1
| 25,436
|
gboffi
|
76,389,045
| 14,076,103
|
ADD end of month column Dynamically to spark Dataframe
|
<p>I have pyspark Dataframe as follows,</p>
<p><a href="https://i.sstatic.net/TD2J0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TD2J0.png" alt="enter image description here" /></a></p>
<p>I need to add EOM column to all the null values for each id dynamically based on last non null EOM value and it should be continuous.</p>
<p>My output dataframe looks like this,</p>
<p><a href="https://i.sstatic.net/SyTGQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SyTGQ.png" alt="enter image description here" /></a></p>
<p>I have tried this logic</p>
<pre><code>df.where("EOM IS not NULL").groupBy(df['id']).agg(add_months(first(df['EOM']),1))
</code></pre>
<p>but the expected format is different</p>
|
<python><pyspark><apache-spark-sql><spark-window-function>
|
2023-06-02 09:46:45
| 1
| 415
|
code_bug
|
76,389,016
| 5,868,293
|
How to get the index with the minimum value in a column avoiding duplicate selection
|
<p>I have the following dataframe:</p>
<pre><code>import pandas as pd
pd.DataFrame({'index': {0: 'x0',
1: 'x1',
2: 'x2',
3: 'x3',
4: 'x4',
5: 'x5',
6: 'x6',
7: 'x7',
8: 'x8',
9: 'x9',
10: 'x10'},
'distances_0': {0: 0.42394711275317537,
1: 0.40400179114038315,
2: 0.4077213959237454,
3: 0.3921048592156785,
4: 0.25293154279281627,
5: 0.2985576890173001,
6: 0.0,
7: 0.32563550923886675,
8: 0.33341592647322754,
9: 0.30653189426783256,
10: 0.31749957588191197},
'distances_1': {0: 0.06684300576184829,
1: 0.04524728117549289,
2: 0.04896118088709522,
3: 0.03557204741075342,
4: 0.10588973399963886,
5: 0.06178330590643222,
6: 0.0001,
7: 0.6821440376099591,
8: 0.027074111335967314,
9: 0.6638424898747833,
10: 0.674718181953208},
'distances_2': {0: 0.7373816871931514,
1: 0.7184619375104593,
2: 0.7225072199147892,
3: 0.7075191710741303,
4: 0.5679436864793461,
5: 0.6142446533143044,
6: 0.31652743219529056,
7: 0.010859948083988706,
8: 0.6475070638933254,
9: 0.010567926115431175,
10: 0.0027932480510772413}}
</code></pre>
<p>)</p>
<pre><code>index distances_0 distances_1 distances_2
0 x0 0.423947 0.066843 0.737382
1 x1 0.404002 0.045247 0.718462
2 x2 0.407721 0.048961 0.722507
3 x3 0.392105 0.035572 0.707519
4 x4 0.252932 0.105890 0.567944
5 x5 0.298558 0.061783 0.614245
6 x6 0.000000 0.000100 0.316527
7 x7 0.325636 0.682144 0.010860
8 x8 0.333416 0.027074 0.647507
9 x9 0.306532 0.663842 0.010568
10 x10 0.317500 0.674718 0.002793
</code></pre>
<p>I would like to get, for every <code>distances_</code> column, the <code>index</code> with the minimum value.</p>
<p>The requirement is that each <code>distances_</code> column, should have a different <code>index</code>: For instance <code>index=="x6"</code> has the minimum value for both <code>distances_0</code> and <code>distances_1</code>, columns, but it should be chosen only for one (and in this case it should be chosen for <code>distances_0</code>, since <code>0.000000 < 0.000100</code>).</p>
<p>How could I do that ?</p>
|
<python><pandas>
|
2023-06-02 09:42:27
| 2
| 4,512
|
quant
|
76,388,987
| 11,202,401
|
How can I sort a JSON array by a key inside of it?
|
<p>I have an unknown number of items and item categories in a json array like so:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"x_name": "Some Name",
"x_desc": "Some Description",
"id": 1,
"category": "Email"
},
{
"x_name": "Another name here",
"x_desc": "Another description",
"id": 2,
"category": "Email"
},
{
"x_name": "Random Name",
"x_desc": "Random Description",
"id": 3,
"category": "Email"
},
{
"x_name": "Owner Meetings",
"x_desc": "Total count",
"id": 167,
"category": "Owner Specific"
},
{
"x_name": "Owner Tasks",
"x_desc": "Total count of tasks",
"id": 168,
"category": "Owner Specific"
},
{
"x_name": "Owner Calls",
"x_desc": "Total count of calls",
"id": 169,
"category": "Owner Specific"
},
{
"x_name": "Overall Total Views",
"x_desc": "The total views",
"id": 15,
"category": "Totals Report"
}
......
]
</code></pre>
<p>I need to group these JSONObjects based on the property "category".</p>
<p>I've seen similar examples in JS using the reduce function but couldn't get a similar python solution. How can I efficiently do this in Python?</p>
<p>The desired outcome would be:</p>
<pre class="lang-json prettyprint-override"><code>{
"category": "Email",
"points": [
{
"x_name": "Some Name",
"x_desc": "Some Description",
"id": 1,
"category": "Email"
},
{
"x_name": "Another name here",
"x_desc": "Another description",
"id": 2,
"category": "Email"
},
{
"x_name": "Random Name",
"x_desc": "Random Description",
"id": 3,
"category": "Email"
}
]
}
</code></pre>
<p>and then:</p>
<pre class="lang-json prettyprint-override"><code>{
"category": "Owner Specific",
"points": [
{
"x_name": "Owner Meetings",
"x_desc": "Total count",
"id": 167,
"category": "Owner Specific"
},
{
"x_name": "Owner Tasks",
"x_desc": "Total count of tasks",
"id": 168,
"category": "Owner Specific"
},
{
"x_name": "Owner Calls",
"x_desc": "Total count of calls",
"id": 169,
"category": "Owner Specific"
}
]
}
</code></pre>
<p>and so on.</p>
<p>I do not know the value of the key "category" or the number of "categories" in the original JSON array.</p>
|
<python><arrays><json>
|
2023-06-02 09:38:31
| 1
| 605
|
nt95
|
76,388,981
| 10,755,032
|
WebScraping - Parsel: XPath in python
|
<p>I am trying to scrape medium.com<br />
I am using the following code:</p>
<pre><code>from parsel import Selector
def get_trending(html):
selector = Selector(text=html)
text = selector.xpath("//*[@id='root']/div/div[4]/div[1]/div/div/div/div[2]/div/div[1]/div/div/div[2]/div[2]/a/div/h2")
# h2 = text.xpath("//h2/text()")
print(text)
response = requests.get("https://medium.com")
opponents = get_trending(response.text)
opponents
</code></pre>
<p>For some reason, it is giving an empty list as a result. I tried it with another h2 and I got the result I wanted. What might be the problem?
Here is the screenshot when I try the xpath in the inspect tool it is working as shown in the image below.</p>
<p><a href="https://i.sstatic.net/bIai9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bIai9.png" alt="enter image description here" /></a></p>
|
<python><web-scraping><xpath>
|
2023-06-02 09:37:36
| 2
| 1,753
|
Karthik Bhandary
|
76,388,886
| 3,871,036
|
Python : Rounding errors between C-contiguous and F-contiguous arrays for matrix multiplication
|
<hr />
<p><em>Problem :</em></p>
<p>I stumbled across a problem that has deep (and very impactful) consequences in the industry, and does not seem to be documented anywhere :</p>
<p>It seems that in python, Matrix Multiplication (using either <code>@</code> or <code>np.matmul</code>) gives different answers between C-contiguous and F-contiguous arrays :</p>
<pre class="lang-py prettyprint-override"><code>import platform
print(platform.platform()) # Linux-5.10.0-23-cloud-amd64-x86_64-with-glibc2.31
import numpy as np
print(np.__version__) # 1.23.5
np.random.seed(0) # Will work with most of seed, I believe
M, N = 10, 5
X_c = np.random.normal(size=M*N).reshape(M,N)
X_f = np.asfortranarray(X_c)
assert (X_c == X_f).all()
p = np.random.normal(size=N)
assert (X_c @ p == X_f @ p).all() # FAIL
</code></pre>
<p><em><strong>To your best knowledge, is this problem documented anywhere ?</strong></em></p>
<hr />
<p><em>Example of Consequence :</em></p>
<p>In some cases, these errors can become <em>huge</em> :</p>
<p>Example : In <code>sklearn.linear_model.LinearRegression</code> class, the <code>fit</code> method will leads to very wrong parameters in the case of near-singular matrix X that is F-contiguous (typically, that came out of a pandas DataFrame).
This can leads to prediction all over the place and negative R2.</p>
|
<python><rounding-error><contiguous>
|
2023-06-02 09:28:25
| 1
| 1,497
|
Jean Lescut
|
76,388,839
| 1,145,666
|
How to prevent CherryPy to log to stdout when using the logging module?
|
<p>I am using the <code>logging</code> module to log in my application:</p>
<pre><code>import logging
logging.basicConfig(level=logging.INFO)
</code></pre>
<p>However, this causes CherryPy to log all lines two times (one in stdout, one in the logging module; which happens to be set to stdout currently):</p>
<pre><code>[02/Jun/2023:11:17:54] ENGINE Bus STARTING
INFO:cherrypy.error:[02/Jun/2023:11:17:54] ENGINE Bus STARTING
[02/Jun/2023:11:17:54] ENGINE Started monitor thread 'Autoreloader'.
INFO:cherrypy.error:[02/Jun/2023:11:17:54] ENGINE Started monitor thread 'Autoreloader'.
[02/Jun/2023:11:17:54] ENGINE Serving on http://0.0.0.0:5005
INFO:cherrypy.error:[02/Jun/2023:11:17:54] ENGINE Serving on http://0.0.0.0:5005
[02/Jun/2023:11:17:54] ENGINE Bus STARTED
INFO:cherrypy.error:[02/Jun/2023:11:17:54] ENGINE Bus STARTED
</code></pre>
<p>Of course, I only want the lines that are sent to the <code>logging</code> module, so I can control all logging in my application in the same way.</p>
<p>How can I prevent CherryPy to write the log lines also to stdout?</p>
|
<python><logging><cherrypy>
|
2023-06-02 09:23:04
| 1
| 33,757
|
Bart Friederichs
|
76,388,796
| 15,136,810
|
Why pybind11 can not recognize PyObject* as a Python object and how to fix it?
|
<p>I am trying to build a library using this C++ code:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <pybind11/pybind11.h>
namespace py = pybind11;
PyObject* func() {
return Py_BuildValue("iii", 1, 2, 3);
}
PYBIND11_MODULE(example, m) {
m.def("func", &func);
}
</code></pre>
<p>But when I tried to run this Python 3 code:</p>
<pre class="lang-py prettyprint-override"><code>import example
print(example.func())
</code></pre>
<p>It gives me following error:</p>
<pre class="lang-none prettyprint-override"><code>TypeError: Unregistered type : _object
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: Unable to convert function return value to a Python type! The signature was
() -> _object
</code></pre>
<p>Why it is happening and how to fix it?</p>
|
<python><c++><pybind11>
|
2023-06-02 09:18:49
| 1
| 316
|
Vad Sim
|
76,388,696
| 9,212,313
|
Should I use model.eval() when using transformers.pipeline for inference with a fine-tuned BERT model in PyTorch?
|
<p>When training transformer models with <code>Trainer()</code>, documentation shows the following usage:</p>
<pre class="lang-py prettyprint-override"><code>model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=small_train_dataset,
eval_dataset=small_eval_dataset,
compute_metrics=compute_metrics,
)
</code></pre>
<p>If you check <code>model.training</code> flag, it's set to <code>false</code> by default, but <code>Trainer</code> has a logic which calls <code>model.train()</code> to set it to <code>True</code>, which makes sense.</p>
<p>When using this fine-tuned model for inference, you can leverage <code>transformers.pipeline</code>, which accepts model object as an argument. But, <code>pipeline</code> doesn't have a logic to check if model is in training mode. I haven't found it in source code, and I don't see it anywhere in the documentation. When I use <code>pipeline</code> for predictions, results are not deterministic, which is another indicator that model is not in eval mode.</p>
<pre><code>generator = pipeline(
"some_task",
model=model,
tokenizer=tokenizer,
aggregation_strategy=aggregation_strategy,
ignore_labels=[],
)
generator("Example") # returns score X
generator("Example") # returns score Y
# but if model.eval() is used before creating pipeline object
generator("Example") # returns score X
generator("Example") # returns score X
</code></pre>
<p>Should I call <code>model.eval()</code> before using the model in <code>pipeline</code> or should <code>pipeline</code> handle it by itself but because of some reason it doesn't?</p>
|
<python><machine-learning><pytorch><nlp><huggingface-transformers>
|
2023-06-02 09:06:27
| 1
| 315
|
robocat314
|
76,388,695
| 15,904,492
|
Scraping data from web imbedded interactive graph
|
<p>I'm trying to retrieve data from a website's interactive graph using the underlying API that sends data from the server to the web browser but I'm no sure what I'm doing nor if this approach can work. The website is <a href="https://ember-climate.org/data/data-tools/carbon-price-viewer/" rel="nofollow noreferrer">located here</a> and the page only contains the grap that holds the data I'm trying to pull.</p>
<p>What I've tried so far:</p>
<ul>
<li>After inspecting the element I found the request url:</li>
</ul>
<pre><code> url ='https://ember-data-api-scg3n.ondigitalocean.app/ember.json?sql=select+variable%2C+year%2C+share_of_generation_pct+from+generation_yearly+where+variable+in+%28%27Clean%27%2C+%27Coal%27%29+and+country_or_region%3D%27World%27&_shape=array'
</code></pre>
<ul>
<li>Pulled the cookies:</li>
</ul>
<pre><code> res = r.get(url)
cookies = res.cookies
</code></pre>
<ul>
<li>Tried to retreive data using:</li>
</ul>
<pre><code>res_post = r.post(url, data={'method':'POST'}, cookies=cookies,
headers={'user-agent':user_agent})
</code></pre>
<p>However I get <code><Response [405]></code></p>
<p>Can you help ?</p>
<p>Thank you!</p>
|
<python><web-scraping><graphics><request>
|
2023-06-02 09:06:25
| 1
| 729
|
zanga
|
76,388,515
| 9,309,990
|
Why mocking HuggingFace datasets library does not work?
|
<p>I have a Python function that uses the HuggingFace <code>datasets</code> library to load a private dataset from HuggingFace Hub.</p>
<p>I want to write a unit test for that function, but it seems pytest-mock does not work for some reason. The real function keeps getting called, even if the mock structure should be correct.</p>
<p>This is the main function:</p>
<pre class="lang-py prettyprint-override"><code>def load_data(token: str):
dataset = load_dataset("MYORG/MYDATASET", use_auth_token=token, split="train")
return dataset
</code></pre>
<p>And this is the test function I wrote:</p>
<pre class="lang-py prettyprint-override"><code>def test_data(mocker):
# Mocked data
token_test = "test_token"
mocked_dataset = [
{'image': [[0.5, 0.3], [0.7, 0.9]], 'timestamp': datetime.date(2023, 1, 1)},
]
mocker.patch('datasets.load_dataset', return_value=mocked_dataset)
result = load_data(token_test)
assert len(result) == 1
</code></pre>
<p>Could it be that there are some "unmockable" libraries which do stuff under the hood and make their functions impossible to stub?</p>
|
<python><mocking><pytest><huggingface><huggingface-datasets>
|
2023-06-02 08:37:57
| 1
| 385
|
Fsanna
|
76,388,344
| 4,025,404
|
Apply a weighting to a 4 parameter regression curvefit
|
<p>The below code generates a plot and 4PL curve fit, but the fit is poor at lower values. This error can usually be addressed by ading a 1/y^2 weighting, but I dont know how to do it in this instance. Adding <code>sigma=1/Y_data**2</code> to the fit just makes it worse.</p>
<pre><code>from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
def fourPL(x, A, B, C, D):
return ((A-D) / (1.0 + np.power(x / C, B))) + D
X_data = np.array([700,200,44,11,3,0.7,0.2,0])
Y_data = np.array([600000,140000,30000,8000,2100,800,500,60])
popt, pcov = curve_fit(fourPL, X_data, Y_data)
fig, ax = plt.subplots()
ax.scatter(X_data, Y_data, label='Data')
X_curve = np.linspace(min(X_data[np.nonzero(X_data)]), max(X_data), 5000)
Y_curve = fourPL(X_curve, *popt)
ax.plot(X_curve, Y_curve)
ax.set_xscale('log')
ax.set_yscale('log')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/OmTia.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OmTia.png" alt="Plot" /></a></p>
|
<python><scipy><scipy-optimize>
|
2023-06-02 08:13:06
| 1
| 957
|
John Crow
|
76,388,331
| 3,416,774
|
Why does py not recognize all Python versions?
|
<p>If I use <code>py -0</code>, I get:</p>
<pre><code> -V:3.11 * Python 3.11 (Store)
</code></pre>
<p>But if I use <code>python -v</code>, I get:</p>
<pre><code>Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)] on win32
</code></pre>
<p>Why does py not recognize other Python versions? VS Code detects that I have 4 of them:
<img src="https://i.imgur.com/r4mKd8s.png" alt="" /></p>
<p>The result on cmd:</p>
<pre><code>where python
C:\Users\ganuo\miniconda3\python.exe
C:\Users\ganuo\AppData\Local\Microsoft\WindowsApps\python.exe
C:\Program Files\Inkscape\bin\python.exe
where python3
C:\Users\ganuo\AppData\Local\Microsoft\WindowsApps\python3.exe
</code></pre>
<p>Strangely it doesn't show the <a href="https://orangedatamining.com/" rel="nofollow noreferrer" title="Orange Data Mining - Data Mining">Orange</a> path.</p>
|
<python>
|
2023-06-02 08:11:22
| 1
| 3,394
|
Ooker
|
76,388,160
| 16,306,516
|
search method optimization for searching field area in odoo15
|
<p>I have a function</p>
<pre><code> def test(self):
tech_line = self.env['tech_line']
allocated_technician = self.env['allocated_technician']
users = self.env['res.users']
tech_line = tech_line.search(
[('service_type_id', '=', self.service_type_id.id)])
al_a6 = self.env['tech_line'].filtered(lambda rec: rec.service_type_id.id == self.service_type_id.id)
area = []
area_new = []
for tec in tech_line:
territory = self.env['territory']
territories = territory.search(
[('technicians', 'in', tec.technician_allociated_id.user_id.id)])
territories_lam = self.env['territory'].filtered(
lambda t_lam: t_lam.technicians.id in tec.technician_allociated_id.user_id.id)
for territory in territories:
area.append(territory.id)
for tet in territories_lam:
area_new.append(tet.id)
print('##################33', len(area))
print('%%%%%%%%%%%%%%%%%%%%', len(area_new))
print('$$$$$$$$$$$$$$$$$$$', tech_line)
print('***************8***', al_a6)
</code></pre>
<p>this method when executed screen gets loading and I need to optimize this method, please do share your thoughts on how to optimize this code</p>
<p>I cannot limit the value which is generated from the search method as we need all of its value so instead of that I thought to use <code>filtered</code> instead of <code>search</code> method, but when I use <code>filtered</code> it gives an empty recordset. need help with that</p>
|
<python><odoo><odoo-15>
|
2023-06-02 07:49:22
| 1
| 726
|
Sidharth Panda
|
76,388,131
| 10,035,190
|
how drop some dictionary key-value from a dataframe column pandas?
|
<p>I have dataframe with two columns <code>time</code> and <code>dic</code> in <code>dic</code> column i want to drop <code>1:00</code> and <code>4:00</code>. how to do that with pandas</p>
<pre><code>import pandas as pd
dic={'time':[1685660515,1685689306],
'dic':[{"0:00":"2", "1:00":"1","2:00":"0","4:00":"0"},
{"0:00": "0", "1:00": "0", "2:00":"3","4:00":"0" }]}
df= pd.DataFrame(dic)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>time</th>
<th>dic</th>
</tr>
</thead>
<tbody>
<tr>
<td>1685660515</td>
<td>"0:00": "2", "1:00": "1", "2:00": "0","4:00":"0"</td>
</tr>
<tr>
<td>1685689306</td>
<td>"0:00": "0", "1:00": "0", "2:00":3","4:00":"0"</td>
</tr>
</tbody>
</table>
</div>
|
<python><pandas>
|
2023-06-02 07:44:31
| 0
| 930
|
zircon
|
76,388,067
| 1,629,615
|
conda does not update to 23.5.0
|
<p>My <code>conda</code> won't update anymore. Why? What can I do?</p>
<p>While doing a <code>conda update --all</code> on one of my environments, I got the occasional</p>
<pre><code>==> WARNING: A newer version of conda exists. <==
current version: 23.3.1
latest version: 23.5.0
Please update conda by running
$ conda update -n base -c defaults conda
Or to minimize the number of packages updated during conda update use
conda install conda=23.5.0
</code></pre>
<p>So, as usual, I deactivated my current environment an did the <code>conda update -n base -c defaults conda</code>.</p>
<p>Strangely, the latter command runs smoothly, once again telling me that a newer version of <code>conda</code> exists and that I should update <strong>and</strong> then ... it just terminates with</p>
<pre><code># All requested packages already installed.
</code></pre>
<p>After that, <code>conda</code> is still not updated. When I try a somewhat more forceful approach by doing</p>
<pre><code>conda install -n base -c defaults 'conda>=23.5.0'
</code></pre>
<p>my machine goes into outer space for several hours, failing initial frozen solve, retrying flexible solve, analyzing conflicts, etc., seemingly without success ... until I stop the process with <code>CTRL C</code></p>
<p>I do have (only) two environments, i.e., <code>base</code> and <code>evb</code>. With <code>conda list</code> the former one shows a <code>conda 23.3.1</code> the latter one a <code>conda 23.5.0</code>. The only environment I keep up to date with <code>conda update --all</code> is the <code>evb</code> one.</p>
<p>So, now I'm stuck. Any suggestions would be welcome.</p>
<p>(B.t.w. I'm also confused why the 'non-base' environment contains a most recent <code>conda</code>. Does <code>conda update --all</code> within an environment also update the <code>conda</code> of that environment? And if so, why care about the <code>conda</code> of the base at all? But anyway, that is not the issue of this post.)</p>
|
<python><anaconda><conda>
|
2023-06-02 07:33:55
| 2
| 659
|
Mark
|
76,387,953
| 11,163,122
|
Image duplicated when using fig.canvas.tostring_rgb()
|
<p>I am plotting 3D data using <code>matplotlib==3.3.4</code>:</p>
<pre class="lang-py prettyprint-override"><code>fig = plt.figure(figsize=(15, 10))
ax = fig.gca(projection="3d")
ax.view_init(30, 0)
# facecolors is a 3D volume with some processing
ax.voxels(
x, y, z, facecolors[:, :, :, -1] != 0, facecolors=facecolors, shade=False
)
fig.canvas.draw()
image_flat = np.frombuffer(fig.canvas.tostring_rgb(), dtype="uint8")
image_shape = (*fig.canvas.get_width_height(), 3) # (1500, 1000, 3)
ax.imshow(image_flat.reshape(*image_shape))
plt.show()
</code></pre>
<p>(I am making some improvements on <a href="https://www.kaggle.com/code/polomarco/brats20-3dunet-3dautoencoder" rel="nofollow noreferrer">BraTS20_3dUnet_3dAutoEncoder</a> with inspiration from <a href="https://stackoverflow.com/questions/35355930/matplotlib-figure-to-image-as-a-numpy-array">Figure to image as a numpy array</a>).</p>
<p>However, when I actually plot the image, there are two copies:</p>
<p><a href="https://i.sstatic.net/gU3nCm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gU3nCm.png" alt="plotted image" /></a></p>
<p>What am I doing wrong? I can't figure out where the second image is coming from.</p>
|
<python><matplotlib><matplotlib-3d>
|
2023-06-02 07:17:09
| 1
| 2,961
|
Intrastellar Explorer
|
76,387,896
| 894,755
|
Switching Matplotlib backends programmatically in Spyder
|
<p>I am running pyhton inside Spyder. My code produces static figures and an animated figure. For each type of figures, I need to use a different backend. How can I switch between them programmatically inside the code?
I am trying things like</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib
plt.switch_backend('TkAgg')
#matplotlib.use('TkAgg')
#Draw Static Plots
plt.switch_backend('Qt5Agg')
#matplotlib.use('Qt5Agg')
#Draw Dynamic Plots
</code></pre>
<p>But I get different kinds of errors.</p>
|
<python><spyder>
|
2023-06-02 07:07:27
| 2
| 1,100
|
Tarek
|
76,387,796
| 13,615,987
|
How can I create fastapi strawberry mutation with nested data structure as input and output
|
<p>Using Python's fast-api, I implemented a POST endpoint which will take request body something like below and will do some sort of processing and will return resultant dictionary to the client as shown below.</p>
<p><strong>RequestBody:(user details)</strong></p>
<pre><code>{
"0": {
"details": [
{"name": "sam", "designation": "student"},
{"name": "raj", "designation": "worker"},
]
},
"1": {
"details": [
{"name": "ram", "designation": "employer"},
{"name": "sekhar", "designation": "engineer"},
]
},
}
</code></pre>
<p><strong>Response:(user details with ranks)</strong></p>
<pre><code>{
"ranks": [
{
"id": "0",
"rank": 1,
},
{
"id": "1",
"rank": 2,
},
],
"details": [
{"id": "0", "name": "sam", "rank": 1},
{"id": "0", "name": "raj", "rank": 1},
{"id": "1", "name": "ram", "rank": 2},
{"id": "1", "name": "sekhar", "rank": 2},
],
}
</code></pre>
<p>I achieved the proper result in fast-api(REST-apis).</p>
<p><strong>My Goal:</strong>
My goal is to achieve the similar thing in <code>Graphql(fastapi, Strawberry module)</code>. I did some research about Strawberry module and found that, I can achieve this using <code>mutations</code></p>
<p>I am going through some docs and tutorials about Strawberry library but didnt find any option(mutations) to work with this kind of nested data.</p>
<p>I am only finding simple approaches to work with mutations something like this: <a href="https://github.com/itsmaheshkariya/cautious-octo-disco/blob/main/type/user.py#L8" rel="nofollow noreferrer">https://github.com/itsmaheshkariya/cautious-octo-disco/blob/main/type/user.py#L8</a></p>
<p>Can any one help me in developing Graphql mutations for this nested data?</p>
|
<python><python-3.x><fastapi><strawberry-graphql>
|
2023-06-02 06:52:44
| 0
| 659
|
siva
|
76,387,687
| 19,390,849
|
How to get attributes of a JSX element using Python regular expressions?
|
<p>I want to write a simple parser for JSX code. I'm doing it for static analysis, to enforce some policies in our company.</p>
<p>These are possibilities:</p>
<pre><code><Component firstAttribute={dynamicValue} />
<Component firstAttribute="stringValue" secondAttribute={integerValue} />
<Component
firstAttribute={dynamicValue}
secondAttribute="stringValue"
thirdAttribute={numericValue}
booleanAttribute
/>
<Component
handlerAttribute={() => {
// JS code here
}}
/>
<Component
jsonAttribute={{
key: value,
key2: value2,
}}
/>
</code></pre>
<p>As you see, it's really a mess. And these are not the only possible code snippets.</p>
<p>For simple attributes, it's not a big deal. I can easily extract them using this regular expression:</p>
<pre><code><\w+(\s+[\w*]*=['\"\{][^'\"\}]+['\"\}])*
</code></pre>
<p>But it gets really complicated when JSX and JS are mixed or when developers use string interpolation to provide a dynamic value for an attribute.</p>
<p>It seems to me that I'm not on the right path and regular expression is not the correct tool here.</p>
<p>Am I on the right path? How should I deal with this complexity? How can I extract attributes of elements reliably?</p>
|
<python><jsx>
|
2023-06-02 06:35:54
| 1
| 1,889
|
Big boy
|
76,387,474
| 6,997,665
|
Wierd behavior with raising intgers to the negative powers in python
|
<p>I am observing this weird behavior when I am raising an integer to the negative power using an <code>np.array</code>. Specifically, I am doing</p>
<pre><code>import numpy as np
a = 10**(np.arange(-1, -8, -1))
</code></pre>
<p>and it results in the following error.</p>
<blockquote>
<p>ValueError: Integers to negative integer powers are not allowed.</p>
</blockquote>
<p>This is strange as the code <code>10**(-1)</code> works fine. However the following workaround (where <code>10</code> is a float instead of integer) works fine.</p>
<pre><code>import numpy as np
a = 10.**(np.arange(-1, -8, -1)
print(a) # Prints array([1.e-01, 1.e-02, 1.e-03, 1.e-04, 1.e-05, 1.e-06, 1.e-07])
</code></pre>
<p>Why is it not valid for integers? Any explanation is appreciated.</p>
|
<python><python-3.x><numpy>
|
2023-06-02 05:58:22
| 1
| 3,502
|
learner
|
76,387,448
| 3,677,173
|
Python ML / LSTM Prediction on multiple columns
|
<p>I am trying to learn ML, and I have this example which doesn't work as I expect.</p>
<p>Everything works if there is only one column, however, I would like to train model to use multiple columns, for example <code>open</code> and <code>close</code>. And as soon as I add the second column
<code>inverse_transform</code> fails. From what I understood is that there needs to be a special handling of inverse_transform when dealing with 2 column, But I have no idea on how to do that. To reproduce my issue, simply modify this line <code>features = ["open", "close]</code></p>
<p>Down below is a contained sample with dummy values.</p>
<pre><code>import math
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM
import sys, os
from keras.models import load_model
from keras.regularizers import l2
def get_source():
data = np.random.randint(1, 101, size=(1000, 2))
data[:, 1] = data[:, 0] + np.random.randint(1, 101, size=1000)
return [{'open': row[0], 'close': row[1]} for row in data]
def main():
TRAINING_STEP = 10
features = ["open"]
data = get_source()
df = pd.DataFrame(data, columns=features)
dataset = df.values
training_data_len = math.ceil(len(dataset) * .8)
scaler = StandardScaler()
scaled_data = scaler.fit_transform(dataset)
train_data = scaled_data[0:training_data_len, :]
x_train = []
y_train = []
for i in range(TRAINING_STEP, len(train_data)):
x_train.append(train_data[i-TRAINING_STEP:i, :])
y_train.append(train_data[i, 0])
x_train, y_train = np.array(x_train), np.array(y_train)
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], x_train.shape[2]))
model = Sequential()
model.add(LSTM(50, return_sequences=True, input_shape=(x_train.shape[1], x_train.shape[2])))
model.add(LSTM(50, return_sequences=False))
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(x_train, y_train, batch_size=1, epochs=1)
# predictin next price from stratch
data = get_source()
df = pd.DataFrame(data, columns=features)
scaler = StandardScaler()
scaled_data = scaler.fit_transform(dataset)
test_data = scaled_data[training_data_len - TRAINING_STEP:, :]
x_test = []
t = len(test_data)
x_test.append(test_data[t-TRAINING_STEP:t])
x_test = np.array(x_test)
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], x_test.shape[2]))
predictions = model.predict(x_test)
result = scaler.inverse_transform(predictions)
print("result is",result)
</code></pre>
|
<python><machine-learning><keras><lstm>
|
2023-06-02 05:51:38
| 1
| 2,619
|
user3677173
|
76,387,445
| 2,178,942
|
np.random.shuffle() is not working as expected
|
<p>I want to shuffle the rows of my matrices. Here is my code:</p>
<pre><code> #prepare the training data
dat_training = data_all_training[sbj]
labels_training = dat_training.select("Label")
labels_test = dat_test.select("Label")
y_w2v_train_n = return_normal_features_for_MEG_training(labels_training, type_emb)
y_w2v_pt_n = return_normal_features_for_MEG_validation(labels_test, type_emb)
y_w2v_train_s = y_w2v_train_n
y_w2v_pt_s = y_w2v_pt_n
np.random.shuffle(y_w2v_train_s)
np.random.shuffle(y_w2v_pt_s)
print("sim", y_w2v_train_s[0,0:10] == y_w2v_train_n[0, 0:10])
print(y_w2v_train_s[0, 0:10])
print(y_w2v_train_n[0, 0:10])
print("shapes", x_train.shape, y_w2v_train.shape, x_test_pt.shape, y_w2v_pt.shape)
</code></pre>
<p>Result is:</p>
<pre><code>sim [ True True True True True True True True True True]
[ 0.12567943 0.38765216 0.05903614 0.35545474 0.15695235 -0.09684472
0.20318605 0.09171303 0.19060805 0.19470002]
[ 0.12567943 0.38765216 0.05903614 0.35545474 0.15695235 -0.09684472
0.20318605 0.09171303 0.19060805 0.19470002]
shapes (7200, 99) (7200, 1000) (300, 99) (300, 1000)
</code></pre>
<p>How can I fix it? Where is wrong?</p>
|
<python><arrays><python-3.x><random><shuffle>
|
2023-06-02 05:50:49
| 0
| 1,581
|
Kadaj13
|
76,387,362
| 4,464,045
|
How to set UID and GID for the container using python sdk?
|
<p>How to set UID and GID for the container when <a href="https://docker-py.readthedocs.io/en/stable/containers.html" rel="nofollow noreferrer">Python SDK</a> is used to spin up the container?</p>
|
<python><docker>
|
2023-06-02 05:28:34
| 1
| 1,077
|
Dr.PB
|
76,387,360
| 2,628,868
|
raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command
|
<p>When I tried to install Python 3.11.3 using <code>pyenv 2.3.18</code> in mac book pro with M1 pro chip, shows error like this:</p>
<pre><code>> pyenv install 3.11.3
python-build: use openssl@1.1 from homebrew
python-build: use readline from homebrew
Downloading Python-3.11.3.tar.xz...
-> https://www.python.org/ftp/python/3.11.3/Python-3.11.3.tar.xz
Installing Python-3.11.3...
python-build: use readline from homebrew
python-build: use zlib from xcode sdk
BUILD FAILED (OS X 13.3.1 using python-build 20180424)
Inspect or clean up the working tree at /var/folders/1p/dz3r2rz55kd60_t8sgslkvvh0000gn/T/python-build.20230602131834.2670
Results logged to /var/folders/1p/dz3r2rz55kd60_t8sgslkvvh0000gn/T/python-build.20230602131834.2670.log
Last 10 log lines:
File "/private/var/folders/1p/dz3r2rz55kd60_t8sgslkvvh0000gn/T/python-build.20230602131834.2670/Python-3.11.3/Lib/ensurepip/__init__.py", line 202, in _bootstrap
return _run_pip([*args, *_PACKAGE_NAMES], additional_paths)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/1p/dz3r2rz55kd60_t8sgslkvvh0000gn/T/python-build.20230602131834.2670/Python-3.11.3/Lib/ensurepip/__init__.py", line 103, in _run_pip
return subprocess.run(cmd, check=True).returncode
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/1p/dz3r2rz55kd60_t8sgslkvvh0000gn/T/python-build.20230602131834.2670/Python-3.11.3/Lib/subprocess.py", line 571, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/private/var/folders/1p/dz3r2rz55kd60_t8sgslkvvh0000gn/T/python-build.20230602131834.2670/Python-3.11.3/python.exe', '-W', 'ignore::DeprecationWarning', '-c', '\nimport runpy\nimport sys\nsys.path = [\'/var/folders/1p/dz3r2rz55kd60_t8sgslkvvh0000gn/T/tmp14w0pyb5/setuptools-65.5.0-py3-none-any.whl\', \'/var/folders/1p/dz3r2rz55kd60_t8sgslkvvh0000gn/T/tmp14w0pyb5/pip-22.3.1-py3-none-any.whl\'] + sys.path\nsys.argv[1:] = [\'install\', \'--no-cache-dir\', \'--no-index\', \'--find-links\', \'/var/folders/1p/dz3r2rz55kd60_t8sgslkvvh0000gn/T/tmp14w0pyb5\', \'--root\', \'/\', \'--upgrade\', \'setuptools\', \'pip\']\nrunpy.run_module("pip", run_name="__main__", alter_sys=True)\n']' returned non-zero exit status 1.
make: *** [install] Error 1
</code></pre>
<p>does anyone facing the same issue? what should I do to fixed it? the macos OS is: <code>13.3.1 (a)</code>. I have tried:</p>
<pre><code>sudo rm -rf /Library/Developer/CommandLineTools
xcode-select --install
</code></pre>
<p>still could not fixed this issue.</p>
|
<python>
|
2023-06-02 05:28:27
| 0
| 40,701
|
Dolphin
|
76,387,340
| 736,662
|
Python Locust how to print to console and debug
|
<p>I have this Python - Locust script:</p>
<pre><code>from locust import HttpUser, between, task
class LoadValues(HttpUser):
def _run_read_ts(self, series_list, resolution, start, end):
tsIds = ",".join(series_list)
resp= self.client.get(f'/api/loadValues?tsIds={tsIds}&resolution={resolution}&startUtc= {start}&endUtc={end}',
headers={'X-API-KEY': 'sss='})
print("Response content:", resp.text)
@task
def test_get(self):
self._run_read_ts([98429], 'PT1H', '2020-05-11T09%3A17%3A47.478Z', '2023-05-11T09%3A17%3A47.478Z')
</code></pre>
<p>I have tried to print above using "print("Response content:", resp.text)"</p>
<p>But do not see anything in the power shell.</p>
<p>Where can I see the response?</p>
|
<python><locust>
|
2023-06-02 05:25:12
| 1
| 1,003
|
Magnus Jensen
|
76,387,309
| 392,233
|
Creating hierarchy using 4 columns in dataframe - pandas
|
<p>Dataframe is below</p>
<pre><code> ID ParentID Filter Text
0 98 97 NULL AA
1 99 98 NULL BB
2 100 99 NULL CC
3 107 100 1 DD
4 9999 1231 NULL EE
5 10000 1334 NULL FF
6 10001 850 2 GG
7 850 230 NULL HH
8 230 121 NULL II
9 121 96 NULL JJ
10 96 0 NULL KK
11 97 0 NULL LL
</code></pre>
<p>I need to add an additional column hierarchy like this:</p>
<pre><code> ID ParentID Filter Text Hierarchy
0 98 97 NULL AA
1 99 98 NULL BB
2 100 99 NULL CC
3 107 100 1 DD DD/CC/BB/AA/LL
4 9999 1231 NULL EE
5 10000 1334 NULL FF
6 10001 850 2 GG GG/HH/II/JJ/KK
7 850 230 NULL HH
8 230 121 NULL II
9 121 96 NULL JJ
10 96 0 NULL KK
11 97 0 NULL LL
</code></pre>
<p>The rules I am looking at are below:</p>
<ol>
<li><p>Only populate hierarchy column for rows which have filter value populated, the rest of the rows don't need hierarchy done.</p>
</li>
<li><p>When a row is found having filter value not null, lookup its parentID, then search this parentid in ID column. When found reclusively keep going up till, parent id is 0.</p>
</li>
<li><p>Trying to do this with itertools but the looping is taking too long as the original dataset is huge</p>
</li>
</ol>
<p>4)Recordset size is ~ 200k</p>
<p>The below solution provided kindly by mozway seems to work but for a recorset of 200k records, it takes a lot of time. Is there a tweak that can be done to this to get to the solution faster ?</p>
|
<python><pandas><dataframe><networkx>
|
2023-06-02 05:17:10
| 3
| 3,809
|
misguided
|
76,387,219
| 15,406,243
|
How can I use Language Constant in google ads python?
|
<p>I have a code that give me suggestion of keywords from google ads api.</p>
<p>in this code I have a section that select language with language id but in new version of Google Ads API (V13) it's deprecated and removed.</p>
<pre><code>language_rn = client.get_service(
"LanguageConstantService"
).language_constant_path(language_id)
</code></pre>
<p>What is the alternative of LanguageConstantService Now?
How can I set Language in my request?</p>
<p>I find my code from link below:
<a href="https://www.danielherediamejias.com/python-keyword-planner-google-ads-api/" rel="nofollow noreferrer">https://www.danielherediamejias.com/python-keyword-planner-google-ads-api/</a></p>
<hr />
<p>The Error is:</p>
<blockquote>
<p>ValueError: Specified service LanguageConstantService does not exist
in Google Ads API v13.</p>
</blockquote>
|
<python><google-ads-api>
|
2023-06-02 04:53:40
| 2
| 312
|
Ali Esmaeili
|
76,387,171
| 14,154,784
|
Beautiful Soup Img Src Scrape
|
<p><strong>Problem:</strong> I am trying to scrape the image source locations for pictures on a website, but I cannot get Beautiful Soup to scrape them successfully.</p>
<p><strong>Details:</strong></p>
<ul>
<li><p><a href="https://www.acefitness.org/resources/everyone/exercise-library/14/bird-dog/" rel="nofollow noreferrer">Here is the website</a></p>
</li>
<li><p>The three images I want have the following HTML tags:</p>
<ul>
<li><code><img src="https://ik.imagekit.io/02fmeo4exvw/exercise-library/large/14-1.jpg" style="display: none;"></code></li>
<li><code><img src="https://ik.imagekit.io/02fmeo4exvw/exercise-library/large/14-2.jpg" style="display: none;"></code></li>
<li><code><img src="https://ik.imagekit.io/02fmeo4exvw/exercise-library/large/14-3.jpg" style="display: none;"></code></li>
</ul>
</li>
</ul>
<p><strong>Code I've Tried:</strong></p>
<ul>
<li><code>soup.find_all('img')</code></li>
<li><code>soup.select('#imageFlicker')</code></li>
<li><code>soup.select('#imageFlicker > div')</code></li>
<li><code>soup.select('#imageFlicker > div > img:nth-child(1)')</code></li>
<li><code>soup.find_all('div', {'class':'exercise-post__step-image-wrap'})</code></li>
<li><code>soup.find_all('div', attrs={'id': 'imageFlicker'})</code></li>
<li><code>soup.select_all('#imageFlicker > div > img:nth-child(1)')</code></li>
</ul>
<p>The very first query of <code>soup.find_all('img')</code> gets every image on the page except the three images I want. I've tried looking at the children and sub children of each of the above, and none of that works either.</p>
<p>What am I missing here? I think there may be javascript that is changing the css <code>display</code> attribute from <code>block</code> to <code>none</code> and back so the three images look like a gif instead of three different images. Is that messing things up in a way I'm not understanding? Thank you!</p>
|
<python><python-3.x><web-scraping><beautifulsoup>
|
2023-06-02 04:43:16
| 1
| 2,725
|
BLimitless
|
76,387,041
| 4,872,065
|
Incorrect timestamps are shown on the x-axis
|
<p>I have the following bar plot being generated with the following code:</p>
<pre><code>import matplotlib as mplt
from matplotlib import dates, pyplot
from matplotlib.transforms import ScaledTranslation
import numpy as np
import pandas as pd
ts = pd.date_range('2023/01/01', '2023/01/06', freq='3H', tz='utc')
xs = np.arange(len(ts))
df = pd.DataFrame({'date':ts,'value':np.ones(shape=len(ts)), 'intensity':np.random.uniform(0, 10, len(ts))})
colors = []
for i in df.intensity:
if 0 <= i < 6:
colors.append('#75FF71')
elif 6 <= i < 8:
colors.append('#FFC53D')
else:
colors.append('#FF5C5C')
# pyplot.box
fig, ax = pyplot.subplots(figsize = (24,1), constrained_layout=False)
ax.yaxis.set_ticklabels(labels=[])
ax.yaxis.set_visible(False)
ax.grid(False)
ax.set_frame_on(False)
hour_locs = dates.HourLocator(byhour=[6, 12, 18])
hour_locs_fmt = dates.DateFormatter('%H:%M')
ax.xaxis.set_minor_locator(hour_locs)
ax.xaxis.set_minor_formatter(hour_locs_fmt)
day_locs = dates.DayLocator(interval=1)
day_locs_fmt = dates.DateFormatter('%B %-d')
ax.xaxis.set_major_locator(day_locs)
ax.xaxis.set_major_formatter(day_locs_fmt)
ax.xaxis.set_tick_params(which='major', pad=-10, length=40)
ax.bar(df.date, df.value, color=colors)
offset = ScaledTranslation(1.6, 0, fig.dpi_scale_trans)
for label in ax.xaxis.get_majorticklabels():
label.set_transform(label.get_transform() + offset)
</code></pre>
<p>The output:
<a href="https://i.sstatic.net/Zv7Ij.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zv7Ij.png" alt="Output" /></a></p>
<p>The timestamps start from 2023/01/01 00:00:00+000 (UTC), however the plot shows that the data is starting at ~15:00 the day before. I'm assuming that matplotlib is ignoring the timezone in the data.</p>
<p>I did try specifying TZ in the locators and formatter in vain.</p>
<p>How do I get matplotlib to plot in UTC?</p>
|
<python><matplotlib><x-axis>
|
2023-06-02 04:02:34
| 1
| 427
|
AGS
|
76,387,002
| 2,816,215
|
Python inheritance - Interfaces/classes
|
<pre><code>from langchain.schema import BaseMemory
class ChatMemory(BaseMemory):
def __init__(self, user_id: UUID, type: str):
self.user_id = user_id
self.type = type
# implemented abstract methods
class AnotherMem(ChatMemory):
def __init__(self, user_id: UUID, type: str):
super().__init__(user_id, type)
</code></pre>
<p>This seems simple enough - but I get an error: <code>ValueError: "AnotherMem" object has no field "user_id"</code>. What am I doing wrong?</p>
<p>Note that <a href="https://github.com/hwchase17/langchain/blob/db45970a66f39a32f2cdd83e7bde26a404efad7b/langchain/schema.py#L188" rel="nofollow noreferrer">BaseMemory</a> is an interface.</p>
|
<python><inheritance><langchain>
|
2023-06-02 03:50:09
| 1
| 441
|
user2816215
|
76,386,924
| 9,795,817
|
Iteratively apply a function that updates one of its arguments at each step (PySpark)
|
<p>I am working on a pyspark implementation of the Louvain algorithm for community detection. In a nutshell, the algorithm greedily takes one node and assigns it to the community that causes the greatest increase in a purity metric. This process is sequentially iterated to each node in the graph.</p>
<p>By construction, the communities change at each step. For example, assigning node <em>i</em> from community <em>X</em> to community <em>Y</em> implies that at the start of the next step, community <em>Y</em> will have grown by one member. This affects the optimal community of node <em>i+1</em>.</p>
<p>I wrote a function that reassigns a node to its optimal neighboring community. Currently, I am using a <code>for</code> loop to apply the function to each node in the graph and update the graph's edges at each step (and yes, it's as slow as you'd expect).</p>
<p>The function receives a node ID and a pyspark dataframe as arguments and returns a community ID. This ID is then used to update the pyspark dataframe for the next iteration.</p>
<p>This is what the dataframe that is being updated looks like:</p>
<pre class="lang-py prettyprint-override"><code>>>> edges.show()
+---+---+----+----+
|src|dst|cSrc|cDst|
+---+---+----+----+
| 0| 1| 40| 41|
| 0| 2| 40| 42|
| 1| 2| 41| 42|
+---+---+----+----+
</code></pre>
<p>It represents the edges of an undirected graph. <code>src</code> is the ID of the source node and <code>dst</code> is the ID of the destination node. Columns <code>cSrc</code> and <code>cDst</code> represent the community that <code>src</code> and <code>dst</code> currently belong to (respectively).</p>
<p>This is what my code looks like:</p>
<pre class="lang-py prettyprint-override"><code># Define function that finds the best community ID
def getOptimalCommunity(nodeId, edges) -> int:
# Do a ton of smart stuff here to determine `optimalCommunityId`
return optimalCommunityId
# Apply to each node ID in the graph
for i in [0, 1, 2]:
# Get optimal community
comm = getOptimalCommunity(nodeId=i, edges=edges)
# Update `edges`
edges = (edges
.selectExpr(
'src',
'dst',
# Update the community that `i` belongs to
f'case when src = i then {comm} else cSrc end as cSrc',
f'case when dst = i then {comm} else cDst end as cDst'
)
)
</code></pre>
<hr />
<h1>Question</h1>
<p>In PySpark, is there a more efficient way to apply a function that updates one of its arguments?</p>
|
<python><apache-spark><for-loop><pyspark>
|
2023-06-02 03:20:35
| 0
| 6,421
|
Arturo Sbr
|
76,386,917
| 14,639,992
|
cuckoo service web Unable to connect to MongoDB
|
<p>I am following to install cuckoo sandbox with docker: <a href="https://github.com/blacktop/docker-cuckoo" rel="nofollow noreferrer">https://github.com/blacktop/docker-cuckoo</a></p>
<p>I did docker-compose up -d.</p>
<p>Mongo start but web service does not start it tryes to connect mongo.</p>
<p>I got:</p>
<blockquote>
<p>CuckooCriticalError: Unable to connect to MongoDB: command
SON([('listCollections', 1), ('cursor', {})]) on namespace cuckoo.$cmd
failed: Unsupported OP_QUERY command: listCollections. The client
driver may require an upgrade. For more details see
<a href="https://dochub.mongodb.org/core/legacy-opcode-removal" rel="nofollow noreferrer">https://dochub.mongodb.org/core/legacy-opcode-removal</a>. In order to
operate Cuckoo as per your configuration, a running MongoDB server is
required.</p>
</blockquote>
<p>Mongo is running in a container I am able to access mongosh and show all the databases.</p>
<p>this is my docker-compose.yml</p>
<pre><code>version: "2"
services:
cuckoo:
image: blacktop/cuckoo:2.0
command: daemon
ports:
- "2042:2042"
volumes:
- ./cuckoo-tmp/:/tmp/cuckoo-tmp/
- ./storage/:/cuckoo/storage/
networks:
- cuckoo
env_file:
- ./2.0/config-file.env
web:
image: blacktop/cuckoo:2.0
ports:
- "80:31337"
links:
- mongo
- elasticsearch
- postgres
command: web
volumes:
- ./cuckoo-tmp/:/tmp/cuckoo-tmp/
- ./storage/:/cuckoo/storage/
networks:
- cuckoo
env_file:
- ./2.0/config-file.env
api:
depends_on:
- postgres
image: blacktop/cuckoo:2.0
ports:
- "8000:1337"
links:
- postgres
command: api
volumes:
- ./cuckoo-tmp/:/tmp/cuckoo-tmp/
- ./storage/:/cuckoo/storage/
networks:
- cuckoo
env_file:
- ./2.0/config-file.env
# nginx:
# build: nginx/.
# depends_on:
# - mongo
# ports:
# - "80:80"
# links:
# - mongo
# networks:
# - cuckoo
mongo:
image: mongo:latest
ports:
- 27017
volumes:
- mongo-data:/data/db
networks:
- cuckoo
elasticsearch:
image: blacktop/elasticsearch:5.6
ports:
- 9200
volumes:
- es-data:/usr/share/elasticsearch/data
networks:
- cuckoo
environment:
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
# mem_limit: 1g
postgres:
image: postgres
ports:
- 5432
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: cuckoo
PGDATA: /var/lib/postgresql/data/pgdata
# POSTGRES_INITDB_ARGS: --data-checksums
volumes:
- postgres-data:/var/lib/postgresql/data/pgdata
networks:
- cuckoo
networks:
cuckoo:
driver: bridge
volumes:
cuckoo-data:
mongo-data:
es-data:
postgres-data:
</code></pre>
<p>SOLVED:
I used mongodb version 3.6.8
and for elasticsearch I had to setup:
sudo sysctl -w vm.max_map_count=262144</p>
<p>Thank</p>
|
<python><mongodb><docker><sandbox><cuckoo>
|
2023-06-02 03:17:41
| 0
| 479
|
Raul Cejas
|
76,386,905
| 10,946,893
|
Validate number of decimal places in FastAPI
|
<p>Using FastAPI (and pydantic), is there a built-in way to validate the number of decimal places of a query parameter?</p>
<p>For example, I want to allow monetary values in USD only, such that only 2 decimal places are allowed. For example, 12.34 would be allowed, but 12.345 would not as it has 3 decimal places.</p>
<p>Here is my current code:</p>
<pre class="lang-py prettyprint-override"><code>@app.post("send_usd")
async def send_usd(amount: Decimal = Query(gt=0)):
pass
</code></pre>
<p>Is there a built-in way (without writing a custom validator or regex) similar to the following example:</p>
<pre class="lang-py prettyprint-override"><code>amount: Decimal = Query(gt=0, decimal_places=2)
</code></pre>
|
<python><fastapi><pydantic>
|
2023-06-02 03:11:07
| 1
| 1,030
|
mincom
|
76,386,889
| 2,635,090
|
pymongo and python: issue with find document using date filter
|
<p>I am trying to retrive document from mongodb using pymongo and python from these mongodb collection.</p>
<pre><code>{
"fees": "0.00",
"trigger": "immediate",
"price": "18.30000000",
"recordTimestamp": "2023-01-10T12:03:27.197000Z"
},
{
"fees": "4.00",
"trigger": "immediate",
"price": "12.30000000",
"recordTimestamp": "2023-02-10T12:03:27.197000Z"
},
{
"fees": "1.00",
"trigger": "immediate",
"price": "10.30000000",
"recordTimestamp": "2023-03-10T12:03:27.197000Z"
}
</code></pre>
<p>If I use mongoDB compass and do this filter</p>
<pre><code>{recordTimestamp: {$gte: '2023-01-24',$lte: '2023-03-01'}}
</code></pre>
<p>I get below expected result.</p>
<pre><code>{
"fees": "4.00",
"trigger": "immediate",
"price": "12.30000000",
"recordTimestamp": "2023-02-10T12:03:27.197000Z"
}
</code></pre>
<p>however when I use same filter in pymongo I get nothing.</p>
<pre><code>stdateval = datetime(2023,1,24)
endateval = datetime(2023,3,1)
dtFilter = { "recordTimestamp" : {"$gte": stdateval,"$lt": endateval}}
collection = DBModule.database[collectionName]
jsnObj_list = collection.find(dtFilter)
</code></pre>
<p>What am I doing wrong?</p>
<p>btw I tried this alternative and it also gets the expected result. but not with pymongo find option.</p>
<pre><code>stdateval = datetime(2023, 1, 24)
endateval = datetime(2023, 3, 1)
jsnObj_list = collection.find({})
filtered_list = []
for item in jsnObj_list:
recordTimestamp = datetime.strptime(item['recordTimestamp'], "%Y-%m-%dT%H:%M:%S.%fZ")
if stdateval <= recordTimestamp < endateval:
filtered_list.append(item)
</code></pre>
|
<python><python-3.x><mongodb><datetime><pymongo>
|
2023-06-02 03:03:33
| 2
| 787
|
digitally_inspired
|
76,386,788
| 2,769,240
|
How to pick up on streamlit text area (Command + Enter) action?
|
<p>So, I have a text_area in the app. A user enters something there and then press the button below "submit". Now I can use the click of a Button as an action callback like below:</p>
<pre><code>with st.container():
query = st.text_area("Ask a question here:", height= 100, key= "query_text")
button = st.button("Submit", key="button")
if button:
with st.spinner('Fetching Answer...'):
response = custom_qa.qa.run(query)
</code></pre>
<p>And run another piece of code tied to that action.</p>
<p>But the text area also has Command+Enter option (bottom right) for use to enter the text. How do I catch that to initiate the same action as Button clicked, as text_area doesn't return any value concerning that.</p>
<p><a href="https://i.sstatic.net/mhQNq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mhQNq.png" alt="enter image description here" /></a></p>
|
<python><streamlit>
|
2023-06-02 02:28:19
| 0
| 7,580
|
Baktaawar
|
76,386,641
| 11,317,931
|
Get indexes chosen when slicing a list
|
<p>I have a list:</p>
<pre class="lang-py prettyprint-override"><code>things = ["a", "b", "c", "d", "e", "f"]
</code></pre>
<p>I also have a slice object provided to me, intended for the list above. Here's an example:</p>
<pre class="lang-py prettyprint-override"><code>s = slice(-2, None, None)
print(things[-2:]) # ['e', 'f']
</code></pre>
<p>My goal is to extract the specific indexes the slice impacts. In this example, that would be:</p>
<pre class="lang-py prettyprint-override"><code>[4, 5]
</code></pre>
<p>Does anyone know how I would go about doing this more pythonically? My current solution is:</p>
<pre class="lang-py prettyprint-override"><code>print([i for i in range(len(things))][s]) # [4, 5]
</code></pre>
|
<python><slice>
|
2023-06-02 01:37:08
| 1
| 1,017
|
Krishnan Shankar
|
76,386,598
| 4,434,941
|
Issues with Python Scrapy script with multiple start urls
|
<p>I have a scraper written in Scrapy. It works perfectly fine as long as there is just one url in start_urls. As soon as I add 2 or more start urls, the behavior is erratic. The number of pages it scrapes is wrong, it stops midway etc. Each individual url works fine if entered individually. Any idea on what might be causing this? I have a large number of urls to scrape so can't do this one by one.</p>
<p>--</p>
<pre><code>import scrapy
from selenium import webdriver
from scrapy.selector import Selector #used to pull urls
from scrapy.http import Request
from time import sleep
from random import random
from datetime import datetime
class UsedaudiSpider(scrapy.Spider):
'''
This spider basically crawls a number of audi dealerships and gets a list of all the used vehicles available there
'''
name = 'usedaudi'
allowed_domains = [
'www.audimv.com',
]
start_urls = [
'https://www.audioxnard.com/used-inventory/index.htm',
'https://www.audimv.com/used-inventory/index.htm',
'https://www.montereyaudi.com/used-inventory/index.htm',
]
def __init__(self):
options = webdriver.ChromeOptions()
options.add_argument('headless')
self.driver = webdriver.Chrome(options=options)
def parse(self, response):
self.driver.get(response.url)
sleep(2)
try:
but = self.driver.find_element_by_xpath('//button[@aria-label="switch to List view"]')
but.click()
except:
pass
sleep(1.5)
for _ in range(10):
self.driver.execute_script("window.scrollBy(0, 700);")
sleep(0.25)
sel = Selector(text=self.driver.page_source)
listings = sel.xpath('//li[@class="box box-border vehicle-card vehicle-card-detailed vehicle-card-horizontal"]')
for listing in listings:
yr_brand = listing.xpath('.//span[@class="ddc-font-size-small"]/text()').extract_first()
if yr_brand is not None:
year = yr_brand.split(' ')[0]
brand = yr_brand.split(' ')[1]
else:
year=''
brand=''
model= listing.xpath('.//h2/a/text()').extract_first()
if model is not None:
model = model.strip()
base_model = model.split(' ')[0]
else:
base_model = ''
link = listing.xpath('.//h2/a/@href').extract_first()
url2 = response.urljoin(link)
price = listing.xpath('.//span[@class="price-value"]/text()').extract_first()
miles = listing.xpath('.//li[@class="odometer"]/text()').extract_first()
engine = listing.xpath('.//li[@class="engine"]/text()').extract_first()
awd = listing.xpath('.//li[@class="normalDriveLine"]/text()').extract_first()
stock = listing.xpath('.//li[@class="stockNumber"]/text()').extract_first()
#dealership name
dealership = self.driver.current_url
dealership = dealership.replace('https://', '').split("/")[0].replace("www.", "").replace(".com", "")
yield {
'dealership':dealership,
'year': year,
'brand': brand,
'base_model':base_model,
'model_detail': model,
'price': price,
'miles': miles,
'engine': engine,
'awd': awd,
'stock': stock,
'link': url2,
}
#find the next link and click it if it exists
try:
next = self.driver.find_element_by_xpath('//li[@class="pagination-next"]/a')
self.logger.info('NEXT IS:')
print(next)
except:
next = None
self.logger.info('No more pages to load.')
if next is not None:
next_url = next.get_attribute("href")
self.logger.info('THE NEXT URL IS')
self.logger.info(next_url)
yield scrapy.Request(next_url, callback=self.parse)
def close(self, reason):
self.driver.quit()
</code></pre>
<p>Any help would be much appreciated</p>
|
<python><selenium-webdriver><scrapy>
|
2023-06-02 01:18:56
| 1
| 405
|
jay queue
|
76,386,442
| 7,362,388
|
Model defined with SQLAlchemy and mypy requires a relationship parameter during initialization
|
<p>I have a model defined as following:</p>
<pre><code>class Base(MappedAsDataclass, DeclarativeBase):
"""subclasses will be converted to dataclasses"""
class Prompt(Base):
__tablename__ = "Prompt"
id = mapped_column(
"id",
UUID(as_uuid=True),
primary_key=True,
index=True,
server_default=sa.text("gen_random_uuid()"),
)
created_at = mapped_column(
"created_at", DateTime(timezone=True), server_default=func.now(), nullable=False
)
text: Mapped[str] = mapped_column(Text)
display_name: Mapped[str] = mapped_column("display_name", String)
# many to one relationship
owner_id: Mapped[uuid.UUID] = mapped_column(
"owner_id",
UUID(as_uuid=True),
ForeignKey("User.id"),
)
owner: Mapped[User] = relationship("User", back_populates="prompts")
# many-to-many relationship
transcripts: Mapped[List[Transcript]] = relationship(
"Transcript",
secondary=transcript_prompt_association,
back_populates="prompts",
)
deleted: Mapped[bool] = mapped_column("deleted", Boolean, default=False)
</code></pre>
<p>When I want to create an instance of the model:</p>
<pre><code>db_prompt = models.Prompt(text=text, display_name=display_name, owner_id=user_id)
</code></pre>
<p>I receive the following error:</p>
<pre><code>Missing positional arguments "owner", "transcripts" in call to "Prompt" [call-arg]mypy
</code></pre>
<p>How can I fix it?</p>
<p>I already tried to:</p>
<pre><code>owner: Optional[Mapped[User]] = relationship("User", back_populates="prompts")
</code></pre>
<p>=> Same error.</p>
<p>I thought mypy understands automatically that a relationship field is not required during init.</p>
<p>EDIT:</p>
<p>My mypy.ini</p>
<pre><code>[mypy]
python_version = 3.11
plugins = pydantic.mypy,sqlalchemy.ext.mypy.plugin
ignore_missing_imports = True
disallow_untyped_defs = True
exclude = (?x)(
alembic # files named "one.py"
)
</code></pre>
|
<python><sqlalchemy><mypy>
|
2023-06-02 00:13:36
| 1
| 1,573
|
siva
|
76,386,404
| 15,008,956
|
Calculating paths around complicated shape
|
<p>I have a several datasets for irregular 2D shapes. These datasets contain unordered points that make up the outline of shapes, but occasionally contain loops or islands of points. An example is shown below in <strong>fig 1</strong>. Coordinates for the shown shape are available at <a href="https://pastebin.com/0cHxPnua" rel="nofollow noreferrer">https://pastebin.com/0cHxPnua</a>.</p>
<p><strong>Figure 1</strong></p>
<p><a href="https://i.sstatic.net/M4Jb1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M4Jb1.png" alt="enter image description here" /></a></p>
<p><strong>Purpose:</strong> I am trying to calculate a continuous line around these outlines in order to measure their perimeter, and eventually the distance between 2 points in that perimeter.</p>
<p><strong>Problem:</strong> In the instances of complex shapes like the one shown in figure 1, the looped regions and disconnected regions can cause issues in generating a reasonable outline. I handle the islands by disregarding any connections that are much longer than the average distance between nearest point, but I haven't figured out what to do with the loops yet.</p>
<p>If you look below to <strong>fig 2</strong>, my pathfinding so far is able to handle most of the shape and successfully ignores island points, but fails after picking a bad initial path around the loop. It has to go to a new unlinked point, so it doubles back to a previously missed point near the intersection of the loop, but is then stuck as the next nearest unlinked points are beyond the distance threshold for ignoring island points.</p>
<p><strong>Figure 2</strong> Blue start, Red end. <strong>Note: The last point ALWAYS connects back to the first (red)</strong></p>
<p><a href="https://i.sstatic.net/QyHaj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QyHaj.png" alt="enter image description here" /></a></p>
<p><strong>Goal:</strong> My goal here is either to chage my pathfinding method so that it can consistently handle complicated loops like these, or to find a wholly alternative method for pathfinding that is known to be applicable to this issue.</p>
<p>A test version of my code is available below to trial. You need to copy the shape's points from pastebin and paste them on the <code>points =</code> line</p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import scipy.spatial
import sys
import os
import numpy as np
import pandas as pd
import re
from scipy.interpolate import CubicSpline
from scipy.signal import savgol_filter
from matplotlib.collections import LineCollection
def calculate_path_length(path):
"""
Calculate the path length.
Parameters:
path (array): An array of 2D points in the path.
Returns:
float: The total length of the path.
"""
return np.sum(np.sqrt(np.sum(np.diff(path, axis=0)**2, axis=1)))
def get_shortest_path(full_path, start_point, end_point):
"""
Get the shortest path between start and end points in a circular path.
Parameters:
full_path (array): An array of 2D points in the circular path.
start_point (array): The start point.
end_point (array): The end point.
Returns:
array: The shortest path.
"""
# Get the indices of the start and end points in the full path
start_index = np.where((full_path == start_point).all(axis=1))[0][0]
end_index = np.where((full_path == end_point).all(axis=1))[0][0]
# Calculate the paths
if start_index < end_index:
direct_path = full_path[start_index:end_index + 1]
indirect_path = np.concatenate((full_path[end_index:], full_path[:start_index + 1]))
else:
direct_path = full_path[end_index:start_index + 1]
indirect_path = np.concatenate((full_path[:end_index + 1], full_path[start_index:]))
# Calculate the lengths of the paths
direct_path_length = calculate_path_length(direct_path)
indirect_path_length = calculate_path_length(indirect_path)
# Return the shortest path
print(direct_path_length, indirect_path_length)
if direct_path_length < indirect_path_length:
return direct_path
else:
return indirect_path
def GetPointDistance(p1, p2):
return np.sqrt( ((p2[0] - p1[0])**2) + ((p2[1] - p1[1])**2) )
def AverageDistanceWithBuffer(points, buffer_size=20):
"""
Calculate the average distance of a group of points and add buffer_size standard deviations to
add a large ceiling for distance variances. This should still be able to exclude extremely
large deviations that occur when outliers are present.
Parameters:
points (list): 2D list of 2xN points.
buffer_size (int): Padding multiplier to use for outlier rejection.
Returns:
float: The average distance with buffer.
"""
distances = []
for i, point in enumerate(points):
if i != len(points) - 1:
distances.append(GetPointDistance(point, points[i+1]))
return np.mean(distances) + buffer_size * np.std(distances)
def random_downsample(arr, percentage):
"""
Randomly downsample an array by the given percentage.
Parameters:
arr (array): The input array.
percentage (float): The percentage of data to retain.
Returns:
array: The downsampled array.
"""
if percentage <= 0 or percentage > 100:
raise ValueError("Percentage should be between 0 and 100 (inclusive).")
sample_size = int(len(arr) * percentage / 100)
random_indices = np.random.choice(len(arr), sample_size, replace=False)
sampled_arr = arr[random_indices]
return sampled_arr
def shortest_path(points, start_point, end_point):
"""
Find the shortest path between start and end points.
Parameters:
points (array): An array of 2D points.
start_point (array): The start point.
end_point (array): The end point.
Returns:
array: The full path.
array: The shortest path between the start and end points.
"""
points = points.tolist()
start_point = start_point.tolist()
end_point = end_point.tolist()
points = points + [start_point, end_point]
current_point = start_point
full_path = [current_point]
points.remove(current_point)
use_cutoff = False
while points:
if len(full_path) == 1000:
buffered_avg_dist = AverageDistanceWithBuffer(full_path)
use_cutoff = True
if len(points) == 1 and points[0] == end_point:
full_path.append(end_point)
points.remove(end_point)
else:
closest_point = min(points, key=lambda x: np.linalg.norm(np.array(current_point) - np.array(x)))
closest_point_dist = GetPointDistance(current_point, closest_point)
if use_cutoff:
if closest_point_dist < buffered_avg_dist:
full_path.append(closest_point)
current_point = closest_point
else:
full_path.append(closest_point)
current_point = closest_point
points.remove(closest_point)
# To form a closed loop for the full perimeter path
full_path.append(start_point)
# Shortest path from start to end
shortest_path = get_shortest_path(np.array(full_path), start_point, end_point)
plt.scatter(shortest_path[:, 0], shortest_path[:, 1], c=range(len(shortest_path)), cmap='inferno')
plt.close()
# Convert lists into numpy arrays
full_path = np.array(full_path)
shortest_path = np.array(shortest_path)
return full_path, shortest_path
points = # Copy points from https://pastebin.com/0cHxPnua
points = np.array(points)
# Returns a full path and the shortest path between the start and end points
full, shortest = shortest_path(points, points[48], points[int(len(points)/2)]) # Arbitrary Start and End points
# Plot the outline
plt.scatter(points[0, 0], points[0, 1], s=200, facecolor='red', zorder=1000) # Plot start point
plt.scatter(points[int(len(points)/2), 0], points[int(len(points)/2), 1], s=200, facecolor='green', zorder=1000) # Plot end point
plt.scatter(points[:, 0], points[:, 1]) # Plot all points
plt.plot(full[:, 0], full[:, 1], color='k') # Plot path line
plt.axis('equal')
plt.show()
</code></pre>
|
<python><numpy><path-finding>
|
2023-06-02 00:00:50
| 1
| 305
|
Paul
|
76,386,280
| 10,237,506
|
How to run `pip install -e .` programatically
|
<p>Sorry if this question has been asked before.</p>
<p>How can I run equivalent of <code>pip install -e .</code> in python code itself? Or install directly from the egg or <code>setuptools</code> if it's possible.</p>
<p>Currently I can programatically generate the <code>*.egg_info</code> for a project that specifies <code>pyproject.toml</code> or <code>setup.py</code> in this manner:</p>
<pre class="lang-py prettyprint-override"><code>from setuptools import setup
from setuptools.dist import Distribution
distro: Distribution = setup(script_args=['egg_info'])
</code></pre>
<p>Can i extend the above code to run <code>pip install -e .</code> with the generated egg directory? Is that possible?</p>
|
<python><python-3.x>
|
2023-06-01 23:19:11
| 0
| 12,038
|
Wizard.Ritvik
|
76,386,269
| 2,059,584
|
Filter on exception from generator
|
<p>I would like to iterate over a generator/iterator and pass the value produced to the function.
The function can either return a new value or raise an error.
I then need to return the value or skip on error.</p>
<p>I came up with the following code:</p>
<pre class="lang-py prettyprint-override"><code>def yield_or_skip(iter_: Iterable, func: Callable, skip_on_errors: Iterable[Type[Exception]]) -> Iterator:
skip_on_errors = set(skip_on_errors)
for item in iter_:
try:
yield func(item)
except Exception as e:
if type(e) in skip_on_errors:
continue
raise
</code></pre>
<p>It seems to work, but I was wondering if there is a better way to do this?</p>
<p>Note: it has to be an iterator/generator, I can't gather results in a list.</p>
|
<python><exception>
|
2023-06-01 23:15:39
| 2
| 854
|
Rizhiy
|
76,385,999
| 891,441
|
How to export a Pydantic model instance as YAML with URL type as string
|
<p>I have a Pydantic model with a field of type <code>AnyUrl</code>.
When exporting the model to YAML, the <code>AnyUrl</code> is serialized as individual field slots, instead of a single string URL (perhaps due to how the <code>AnyUrl.__repr__</code> method is <a href="https://github.com/samuelcolvin/pydantic/blob/cc54acb612cb5144a34caebb9ac143324a0cb4a1/pydantic/networks.py#L333-L335" rel="nofollow noreferrer">implemented</a>).</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, AnyUrl
import yaml
class MyModel(BaseModel):
url: AnyUrl
data = {'url': 'https://www.example.com'}
model = MyModel.parse_obj(data)
y = yaml.dump(model.dict(), indent=4)
print(y)
</code></pre>
<p>Produces:</p>
<pre class="lang-yaml prettyprint-override"><code>url: !!python/object/new:pydantic.networks.AnyUrl
args:
- https://www.example.com
state: !!python/tuple
- null
- fragment: null
host: www.example.com
host_type: domain
password: null
path: null
port: null
query: null
scheme: https
tld: com
user: null
</code></pre>
<p>Ideally, I would like the serialized YAML to contain <code>https://www.example.com</code> instead of individual fields.</p>
<p>I have tried to override the <code>__repr__</code> method of <code>AnyUrl</code> to return the <code>AnyUrl</code> object itself, as it extends the <code>str</code> class, but no luck.</p>
|
<python><yaml><pydantic><pyyaml>
|
2023-06-01 22:01:58
| 2
| 2,435
|
morfys
|
76,385,964
| 9,531,047
|
Failure to run shutil.copytree when source folder on network drive
|
<p>Half of our developers are running into an issue using shutil.copytree.
I think this is a Python question, but it could likely be a networking/hardware question as well, so I am providing as much information as possible, which may or my not be relevant.</p>
<p>Our company has developers spread across 2 locations, we are all on Windows 10, and using python 3.7.9.</p>
<p>Network drives are mimicked to be the same at both locations:</p>
<ul>
<li>We have a Y: drive, which is a mounted network drive. Each location cannot access the Y: drive from the other location.</li>
<li>We have a <code>\\<domain_name>\code</code> DFS path that's another network path, where the code is "released" and served from, and synchronized between the 2 locations via DFS replication.</li>
</ul>
<p>When developers have code to release, they run a release script, the whole release is staged on their Y: drive, then the whole package is copied over to the <code>\\<domain_name>\code</code> via a python <code>shutil.copytree</code>.</p>
<p>This works flawlessly at our Location A, but has a considerable failure rate in our LocationB, where the devs often encounter the following error (double backslashes edited for legibility):</p>
<p><code>shutil.Error: [('Y:\package\src\file.py', '\\domain\code\package\src\file.py', "[Errno 2] No such file or directory: 'Y:\package\src\file.py\file.py'")]</code></p>
<p>We have occasionally encountered the same error when copying via shutil from <code>Y:</code> to <code>Y:</code>, though we rarely do this so 90% of the time the error occurs from Y: to DFS.</p>
<p>The folder structure we're trying to copy from might look something like this:</p>
<pre><code>Y:\
package\
src\
file.py
file2.py
</code></pre>
<p>I did notice that in the error message: <code>[Errno 2] No such file or directory: 'Y:\package\src\file.py\file.py'</code>, it seems to indicate that it's looking for a file <code>file.py\file.py</code> which indeed does not exist, and was never meant to exist.</p>
<p>While this usually happens as part of our deployment script, it can be reproduced by running a copy manually:</p>
<pre class="lang-py prettyprint-override"><code>import shutil
src = r'Y:\package'
dst = r'\\domain\code\package'
shutil.copytree(src, dst)
</code></pre>
<p>I have dug into the shutil code, and the error it raises in copytree would be:
<code>(srcname, dstname, str(the_OSError))</code>, so somehow the <code>srcname</code> and <code>dstname</code> are correct, but somewhere the <code>srcname</code> changes like if it did <code>os.path.join(srcname, os.path.basename(srcname))</code>.</p>
<p>To make things more fun, the error never seems to occur if I have a debugger attached, so I'm having a hard time figuring out where exactly the issue is happening. I've been trying to follow the code and imagine where things could go wrong but I'm not finding it. Maybe some race condition disappears when the debugger makes lines run one by one.</p>
<p>Usually, everything would succeed on a retry, or if not on the next retry, which points me at maybe it's one of the os.path methods being called on Y: returning a lie because of some weirdness with the network drive, but even then I don't see where the shutil code would go wrong exactly.</p>
<p>Does anyone have any clue as to what it may be?</p>
|
<python><shutil>
|
2023-06-01 21:53:44
| 0
| 330
|
Erwan Leroy
|
76,385,931
| 1,745,616
|
Validate csv by checking if enumeration columns contains any invalid coded values
|
<p>We recieve many different csv files from external labs and centers. When recieving such a file, we first need to do some QA checks before further processing. So make sure the data is correct, at least on a technical level.</p>
<p>We have some Python scripts to check the number of columns, check date values, min/max range etc. But now we also want to check wether the enumerated columns are correct. So for example, if a column visit is a coded value and may only contain <code>baseline</code>, <code>fup_6_m</code>, <code>fup_12_m</code> then it shouldn't contain anything else like <code>fup_36_m</code>.</p>
<p>We have the metadata specifications, so the column names and the lists of coded values (aka enumeration) are known beforehand.</p>
<p>This is the Python script I've got so far:</p>
<pre><code># check if coded values are correct
import pandas as pd
import io
## load data from csv files
##df = pd.read_csv (r'patlist_mcl2017.csv', sep = ",", decimal=".")
# TESTING: create data frame from text
str_patients = """patid,dob,sex,height,score,visit
1072,16-01-1981,M,154,1,fup_12_m
1091,20-12-1991,M,168,4,baseline
1126,25-12-1999,M,181,3,fup_6_m
1139,14-04-1980,Y,165,1,baseline
1171,05-11-1984,M,192,2,fup_12_m
1237,17-08-1983,F,170,3,fup_6_m
1334,26-08-1985,F,160,5,fup_6_m
1365,14-09-1976,M,184,3,fup_24_m
1384,28-12-1993,F,152,1,baseline
1456,27-09-1998,F,164,5,fup_12_m
"""
df = pd.read_csv(io.StringIO(str_patients), sep = ",", decimal=".")
print(df)
# allowed values for enumeration columnms
allowed_enum = {
'sex': ['M', 'F'],
'score': [0, 1, 2, 3, 4],
'visit': ['baseline', 'fup_6_m', 'fup_12_m']
}
# check enumeration
for column_name, allowed_values in allowed_enum.items():
df_chk = df[~df[column_name].isin(allowed_values)].groupby(column_name).size().reset_index(name='Count')
if not df_chk.empty:
print("Found invalid values for column '%s':" % column_name)
print(df_chk)
</code></pre>
<p>It works and the output is like this:</p>
<pre><code>Found invalid values for column 'sex':
sex Count
0 Y 1
Found invalid values for column 'score':
score Count
0 5 2
Found invalid values for column 'visit':
visit Count
0 fup_24_m 1
</code></pre>
<p>But the different files can contain many columns, and for better reporting we'd like to get the output as one dataframe, so something like this:</p>
<pre><code> Column_name Invalid Count
0 Sex Y 1
1 Score 5 2
2 visit fup_24_m 1
</code></pre>
<p>So my question is:</p>
<ul>
<li>What is the best way to collect the invalid values in a dataframe, like above?</li>
<li>Or, is there maybe a better way for checking/validating these kind of coded values?</li>
</ul>
|
<python><csv><validation><enumeration>
|
2023-06-01 21:49:08
| 2
| 3,128
|
BdR
|
76,385,794
| 1,088,536
|
Best way to implement --verbose flag globally in a python project
|
<p>I have a python project with multiple functions. I've implemented verbose printing in one of the functions the following way</p>
<pre><code>def func(a, b=None, verbose=False):
verboseprint = print if verbose else lambda *a, **k: None
verboseprint('only if verbose')
print('always printed')
>> func(2)
always printed
>> func(2, verbose=True)
only if verbose
always printed
</code></pre>
<p>It's working as a debugging tool for me. Now, I want to implement this in multiple other functions, and I don't know if defining this in each function is the best way. I tried defining it as a function in my config.py and import it, but it's not working as expected.</p>
<pre><code>config.py
----------
verbose = False
def verboseprint(text):
print(text) if verbose else lambda *a, **k: None
main.py
-------
from config import verboseprint
def func(a, b=None, verbose=False):
verboseprint('only if verbose')
print('always printed')
>> from main import func
>> func(a)
always printed
>> func(a, verbose=True)
always printed
</code></pre>
<p>As you can see, the verbose flag doesn't seem to be working, because the verboseprint definition seems to be going by the verbose value in its local module (config.py). I tried using nonlocal, but it's failing at config.py because there's no nonlocal verbose there.</p>
<p>How do I implement this function and import it elsewhere?</p>
<p>I'm able to implement it this way, but I would much rather not pass the verbose flag every time I call the function.</p>
<pre><code>config.py
----------
def verboseprint(text, verbose=False):
print(text) if verbose else lambda *a, **k: None
main.py
-------
from config import verboseprint
def func(a, b=None, verbose=False):
verboseprint('only if verbose', verbose)
print('always printed')
>> from main import func
>> func(a)
always printed
>> func(a, verbose=True)
only if verbose
always printed
</code></pre>
|
<python><python-3.x><function><scope>
|
2023-06-01 21:22:28
| 1
| 992
|
mankand007
|
76,385,758
| 9,620,383
|
Why do these two ways of using np.pad in python with matrixes result in seemingly different results and is there a way to fix that?
|
<p>I have a matrix that is [100, 94] full of numbers.
What I want to do:
<code>np.pad(matrix, [(0,0),(0,6)], 'constant', constant_values=(1))</code>
This code results in a matrix that is [100, 0], when I expected a [100, 100] matrix.</p>
<p>If I instead do
<code>np.pad(matrix, [(0,6),(0,0)], 'constant', constant_values=(1))</code>
I end up with a [106, 94] matrix as expected.</p>
<p>Is this just a bug in np.pad? I want to make it a 100 x 100 matrix where the 6 padded at the end numbers are all 1.</p>
|
<python><numpy>
|
2023-06-01 21:14:40
| 1
| 600
|
WorstCoder4Ever
|
76,385,663
| 12,553,730
|
Getting a "Client Error" when attempting to retrieve projects from Label-studio
|
<p>I am trying to fetch all the current projects from the Label-studio, here functions <code>check_connection</code> & <code>get_session</code> work just fine, however, I am getting the following error while fetching the projects using <code>get_projects</code>:</p>
<h4>Code:</h4>
<pre><code>from label_studio_sdk import Client
LABEL_STUDIO_URL = 'http://xx.xxx.xx.xxx:xxxx'
API_KEY = 'a6eecde17b14b768085a67a6657c44fac5e0244d'
ls = Client(url=LABEL_STUDIO_URL, api_key=API_KEY)
print(ls.check_connection()) # WORKS
print(ls.get_session()) # WORKS
print(ls.get_projects()) # DOES NOT WORK
</code></pre>
<h4>Error:</h4>
<pre><code>{'status': 'UP'}
<requests.sessions.Session object at 0x7f85f818a8b0>
Traceback (most recent call last):
File "/path/to/neuron-de-yolo/label_studio/test.py", line 18, in <module>
print(ls.get_projects())
File "/opt/anaconda3/lib/python3.9/site-packages/label_studio_sdk/client.py", line 143, in get_projects
return self.list_projects()
File "/opt/anaconda3/lib/python3.9/site-packages/label_studio_sdk/client.py", line 181, in list_projects
response = self.make_request(
File "/opt/anaconda3/lib/python3.9/site-packages/label_studio_sdk/client.py", line 374, in make_request
response.raise_for_status()
File "/opt/anaconda3/lib/python3.9/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: http://xx.xxx.xx.xxx:xxxx/api/projects?page_size=10000000
</code></pre>
|
<python><python-3.x><label-studio>
|
2023-06-01 20:58:34
| 1
| 309
|
nikhil int
|
76,385,658
| 15,067,358
|
Using enums in Django with error ModuleNotFoundError: No module named 'models'
|
<p>I am trying to use enums in my unit tests but I'm getting an error when I try to import them.
An excerpt from <code>models.py</code>:</p>
<pre><code>class SeekingConstants:
MEN = 'Men'
WOMEN = 'Women'
BOTH_MEN_AND_WOMEN = 'Both men and women'
</code></pre>
<p>An excerpt from <code>test_user_api.py</code>:</p>
<pre><code>from models import SeekingConstants
...
def test_update_user_seeking_choice(self):
"""Part 1: Update the seeking choice from nothing"""
payload = {
'email': 'test@example.com',
'seeking_choice': SeekingConstants.WOMEN
}
res = self.client.patch(ME_URL, payload)
self.user.refresh_from_db()
self.assertEqual(self.user.email, payload['email'])
self.assertTrue(self.user.seeking_choice, payload['seeking_choice'])
self.assertEqual(res.status_code, status.HTTP_200_OK)
"""Part 2: Update an existing seeking choice"""
new_payload = {
'email': 'test@example.com',
'seeking_choice': SeekingConstants.BOTH_MEN_AND_WOMEN
}
res = self.client.patch(ME_URL, new_payload)
self.user.refresh_from_db()
self.assertEqual(self.user.email, payload['email'])
self.assertTrue(self.user.seeking_choice, payload['seeking_choice'])
self.assertEqual(res.status_code, status.HTTP_200_OK)
</code></pre>
<p><a href="https://i.sstatic.net/Ey3Dm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ey3Dm.png" alt="My project directory" /></a></p>
<p>I'm not sure why I can't import this enum or how I should import this enum.</p>
|
<python><django>
|
2023-06-01 20:57:39
| 1
| 364
|
code writer 3000
|
76,385,383
| 21,420,742
|
Removing Prefix from column of names in python
|
<p>I have this dataset</p>
<pre><code>ID Name
101 DR. ADAM SMITH
102 BEN DAVIS
103 MRS. ASHELY JOHNSON
104 DR. CATHY JONES
105 JOHN DOE SMITH
</code></pre>
<p>Desired Output</p>
<pre><code>ID Name
101 ADAM SMITH
102 BEN DAVIS
103 ASHELY JOHNSON
104 CATHY JONES
105 JOHN DOE SMITH
</code></pre>
<p>I need to get rid of the prefix I tried <code>df['Name'] = df['Name'].replace(to_replace = 'DR. ', value = '')</code>I repeated the same code for all prefixes, but I have when I do it nothing happens. Any reason for this?</p>
<p>Thank you in advance.</p>
|
<python><python-3.x><pandas><dataframe>
|
2023-06-01 20:11:16
| 6
| 473
|
Coding_Nubie
|
76,385,353
| 2,620,838
|
SQLAlchemy 2.0 mock is inserting data
|
<p>I am trying to test a SQLAlchemy 2.0 repository and I am getting the error:</p>
<p>sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "professions_name_key"</p>
<p>So, although I am mocking the test, it inserts data into the database. What should I do to the test not insert data into the database?</p>
<p>I am using pytest-mock.</p>
<p>Here is the SQLAlchemy model</p>
<pre><code># File src.infra.db_models.profession_db_model.py
import uuid
from sqlalchemy import Column, String
from sqlalchemy.orm import Mapped, mapped_column
from sqlalchemy.dialects.postgresql import UUID
from src.infra.db_models.db_base import Base
class ProfessionsDBModel(Base):
""" Defines the professions database model.
"""
__tablename__ = "professions"
profession_id: Mapped[uuid.UUID] = mapped_column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
name: Mapped[str] = mapped_column(String(80), nullable=False, unique=True)
description: Mapped[str] = mapped_column(String(200), nullable=False)
</code></pre>
<p>Here is the repository:</p>
<pre><code># File src.infra.repositories.profession_postgresql_repository.py
from typing import Dict, Optional
import copy
import uuid
from src.domain.entities.profession import Profession
from src.interactor.interfaces.repositories.profession_repository \
import ProfessionRepositoryInterface
from src.domain.value_objects import ProfessionId
from src.infra.db_models.db_base import Session
from src.infra.db_models.profession_db_model import ProfessionsDBModel
class ProfessionPostgresqlRepository(ProfessionRepositoryInterface):
""" Postgresql Repository for Profession
"""
def __init__(self) -> None:
self._data: Dict[ProfessionId, Profession] = {}
def __db_to_entity(self, db_row: ProfessionsDBModel) -> Optional[Profession]:
return Profession(
profession_id=db_row.profession_id,
name=db_row.name,
description=db_row.description
)
def create(self, name: str, description: str) -> Optional[Profession]:
session = Session()
profession_id=uuid.uuid4()
profession = ProfessionsDBModel(
profession_id=profession_id,
name=name,
description=description
)
session.add(profession)
session.commit()
session.refresh(profession)
if profession is not None:
return self.__db_to_entity(profession)
return None
</code></pre>
<p>Here is the test:</p>
<pre><code>import uuid
import pytest
from src.infra.db_models.db_base import Session
from src.domain.entities.profession import Profession
from src.infra.db_models.profession_db_model import ProfessionsDBModel
from .profession_postgresql_repository import ProfessionPostgresqlRepository
from unittest.mock import patch
def test_profession_postgresql_repository(mocker, fixture_profession_developer):
mocker.patch(
'uuid.uuid4',
return_value=fixture_profession_developer["profession_id"]
)
professions_db_model_mock = mocker.patch(
'src.infra.db_models.profession_db_model.ProfessionsDBModel')
session_add_mock = mocker.patch.object(
Session,
"add"
)
session_commit_mock = mocker.patch.object(
Session,
"commit"
)
session_refresh_mock = mocker.patch.object(
Session,
"refresh"
)
repository = ProfessionPostgresqlRepository()
repository.create(
fixture_profession_developer["name"],
fixture_profession_developer["description"]
)
assert session_add_mock.add.call_once_with(professions_db_model_mock)
assert session_commit_mock.commit.call_once_with()
assert session_refresh_mock.refresh.call_once_with(professions_db_model_mock)
</code></pre>
|
<python><unit-testing><sqlalchemy><pytest-mock>
|
2023-06-01 20:06:59
| 2
| 1,003
|
Claudio Shigueo Watanabe
|
76,385,310
| 11,999,957
|
How do I turn 2 equal sized vectors into a matrix that's the pairwise product of the two vectors in Python?
|
<p>Surprised I can't find this answer, but I am looking to basically create a covariance matrix, but instead of each value being a covariance, I want each cell to be the product of the two vectors. So if I have 1x5 vector, I want to end up with a 5x5 matrix.</p>
<p>For example:</p>
<p>Input:</p>
<pre><code>[1, 2, 3, 4, 5]
</code></pre>
<p>Output:</p>
<pre><code>[[ 1, 2, 3, 4, 5],
[ 2, 4, 6, 8, 10],
[ 3, 6, 9, 12, 15],
[ 4, 8, 12, 16, 20],
[ 5, 10, 15, 20, 25]]
</code></pre>
<p>Is there a fast way without building a loop?</p>
|
<python><numpy>
|
2023-06-01 20:00:32
| 1
| 541
|
we_are_all_in_this_together
|
76,385,124
| 9,937,874
|
ONNX performance compared to sklearn
|
<p>I have converted a sklearn logistic regression model object to an ONNX model object and noticed that ONNX scoring takes significantly longer to score compared to the sklearn.predict() method. I feel like I must be doing something wrong b/c ONNX is billed as an optimized prediction solution. I notice that the difference is more noticeable with larger data sets so I created X_large_dataset as as proxy.</p>
<pre><code>from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import datetime
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import numpy as np
import onnxruntime as rt
# create training data
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
# fit model to logistic regression
clr = LogisticRegression()
clr.fit(X_train, y_train)
# convert to onnx format
initial_type = [('float_input', FloatTensorType([None, 4]))]
onx = convert_sklearn(clr, initial_types=initial_type)
with open("logreg_iris.onnx", "wb") as f:
f.write(onx.SerializeToString())
# create inference session from onnx object
sess = rt.InferenceSession(
"logreg_iris.onnx", providers=rt.get_available_providers())
input_name = sess.get_inputs()[0].name
# create a larger dataset as a proxy for large batch processing
X_large_dataset = np.array([[1, 2, 3, 4]]*10_000_000)
start = datetime.datetime.now()
pred_onx = sess.run(None, {input_name: X_large_dataset.astype(np.float32)})[0]
end = datetime.datetime.now()
print("onnx scoring time:", end - start)
# compare to scoring directly with model object
start = datetime.datetime.now()
pred_sk = clr.predict(X_large_dataset)
end = datetime.datetime.now()
print("sklearn scoring time:", end - start)
</code></pre>
<p>This code snippet on my machine shows that sklearn predict runs in less than a second and ONNX runs in 18 seconds.</p>
|
<python><onnx><onnxruntime>
|
2023-06-01 19:30:45
| 2
| 644
|
magladde
|
76,385,039
| 1,473,517
|
How to make an animated gif that shows each picture only once
|
<p>I want to make an animated gif with one second per picture that shows each picture exactly once. Here is a MWE:</p>
<pre><code>from PIL import Image
import seaborn as sns
import random
import matplotlib.pyplot as plt
# Make three random heatmaps and save them as "pic0.png", "pic1.png" and "pic2.png".
# You can use any three pictures you have instead.
for i in range(3):
new_list = [random.sample(range(80), 10)]*6
sns.heatmap(new_list)
plt.savefig("pic"+str(i)+".png")
plt.clf()
# Add the images to a list
images = []
for i in range(3):
images.append(Image.open("pic"+str(i)+".png"))
# Create the animated GIF. I was hoping loop=1 would show each image once but it doesn't.
images[0].save("out.gif", save_all=True, duration=1000,loop=1, append_images=images[1:])
</code></pre>
<p>The result of this is the following:</p>
<p><a href="https://i.sstatic.net/5S3dg.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5S3dg.gif" alt="enter image description here" /></a></p>
<ul>
<li>How do I make it show each image exactly once? Currently it shows them all twice.</li>
</ul>
|
<python><python-imaging-library>
|
2023-06-01 19:17:06
| 1
| 21,513
|
Simd
|
76,384,902
| 12,027,869
|
Different X Labels for Different Variables
|
<p>I have two data frames <code>t1</code> and <code>t2</code>. I want a seaborn plot where it plots side by side for every variable using the for loop. I was able to achieve this but I fail when I try to set the customized x labels. How do I incorporate <code>set_xlabel</code> in to the for loop?</p>
<pre><code>data1 = {
'var1': [1, 2, 3, 4],
'var2': [20, 21, 19, 18],
'var3': [5, 6, 7, 8]
}
data2 = {
'var1': [5. 2. 3. 5],
'var2': [21, 18, 3, 11]
'var3': [1, 9, 3, 6]
}
t1 = pd.DataFrame(data1)
t2 = pd.DataFrame(data2)
xlabel_list = ["new_var1", "new_var2", "new_var3"]
def fun1(df1, df2, numvar, new_label):
plt.tight_layout()
fig, ax = plt.subplots(1, 2)
sns.kdeplot(data = df1[numvar], linewidth = 3, ax=ax[0])
sns.kdeplot(data = df2[numvar], linewidth = 3, ax=ax[1])
ax[0].set_xlabel(new_label, weight='bold', size = 10)
ax[1].set_xlabel(new_label, weight='bold', size = 10)
for col in t1.columns: # how to incorporate the new_label parameter in the for loop along with col?
fun1(df1 = t1, df2 = t2, numvar = col, new_label??)
</code></pre>
|
<python><seaborn>
|
2023-06-01 18:55:30
| 1
| 737
|
shsh
|
76,384,722
| 5,165,649
|
When reading an excel file in Python can we know which column/field is filtered
|
<p>I want to capture the field or column name that is filtered in the excel file when reading through python. I saw that we can also capture only the filtered rows by using openpyxl and using hidden == False (<a href="https://stackoverflow.com/questions/46002159/how-to-import-filtered-excel-table-into-python">How to import filtered excel table into python?</a>). In my project it is important to identify which field/column is filtered in the excel file. Is it possible? and how to achieve?
Adding an example.</p>
<pre><code>pip install openpyxl
from openpyxl import load_workbook
wb = load_workbook('test_filter_column.xlsx')
ws = wb['data']
</code></pre>
<p><a href="https://i.sstatic.net/ItyPh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ItyPh.png" alt="![enter image description here" /></a></p>
<p>This is non hidden data
while if the gender column is filtered below<a href="https://i.sstatic.net/kXz7J.png." rel="nofollow noreferrer"><img src="https://i.sstatic.net/kXz7J.png." alt="[![enter image description here][2]][2] this." /></a></p>
<p>So what I am expecting is my output should be giving gender as that is filtered. If more than one field is filtered then expecting to provide all the filtered column names.</p>
|
<python>
|
2023-06-01 18:28:37
| 1
| 487
|
viji
|
76,384,679
| 14,293,020
|
Chunked xarray: load only 1 cell in memory efficiently
|
<p><strong>Context:</strong></p>
<p>I have a datacube with 3 variables (3D arrays, dims:<code>time,y,x</code>). The datacube is too big to fit in memory so I chunk it with <code>xarray/dask</code>. I want to apply a function to every cell in <code>x,y</code> of every variable in my datacube.</p>
<p><strong>Problem:</strong></p>
<p>My method takes a long time to load only one cell (1 minute) and I have to do that 112200 times. I use a <code>for loop</code> with <code>dataset.variable.isel(x=i, y=j).values</code> to load a single 1D array from my variables. Is there a better way to do that ? Also, knowing my dataset is chunked, is there a way to do that in parallel for all the chunks at once ?</p>
<p><strong>Code example:</strong></p>
<pre><code># Setup
import xarray as xr
import numpy as np
# Create the dimensions
x = np.linspace(0, 99, 100)
y = np.linspace(0, 349, 350)
time = np.linspace(0, 299, 300)
# Create the dataset
xrds= xr.Dataset()
# Add the dimensions to the dataset
xrds['time'] = time
xrds['y'] = y
xrds['x'] = x
# Create the random data variables with chunking
chunksize = (10, 100, 100) # Chunk size for the variables
data_var1 = np.random.rand(len(time), len(y), len(x))
data_var2 = np.random.rand(len(time), len(y), len(x))
data_var3 = np.random.rand(len(time), len(y), len(x))
xrds['data_var1'] = (('time', 'y', 'x'), data_var1, {'chunks': chunksize})
xrds['data_var2'] = (('time', 'y', 'x'), data_var2, {'chunks': chunksize})
xrds['data_var3'] = (('time', 'y', 'x'), data_var3, {'chunks': chunksize})
#### ---- My Attempt ---- ####
# Iterate through all the variables in my dataset
for var_name, var_data in xrds.data_vars.items():
# if variable is 3D
if var_data.shape == (xrds.dims['time'], xrds.dims['y'], xrds.dims['x']):
# Iterate through every cell of the variable along the x and y axis only
for i in range(xrds.dims['y']):
for j in range(xrds.dims['x']):
# Load a single 1D cell into memory (len(cell) = len(time))
print(xrds.v.isel(y=i,x=j).values)
</code></pre>
|
<python><for-loop><parallel-processing><dask><python-xarray>
|
2023-06-01 18:20:17
| 1
| 721
|
Nihilum
|
76,384,653
| 2,974,821
|
Sort pandas dataframe columns on second row order
|
<p>I need to sort a dataframe based on the order of the second row. For example:</p>
<pre><code>import pandas as pd
data = {'1a': ['C', 3, 1], '2b': ['B', 2, 3], '3c': ['A', 5, 2]}
df = pd.DataFrame(data)
df
</code></pre>
<p>Output:</p>
<pre><code> 1a 2b 3c
0 C B A
1 3 2 5
2 1 3 2
</code></pre>
<p>Desired output:</p>
<pre><code> 3c 2b 1a
0 A B C
1 5 2 3
2 2 3 1
</code></pre>
<p>So the columns have been order based on the zero index row, on the A, B, C.</p>
<p>Have tried many sorting options without success.</p>
<p>Having a quick way to accomplish this would be beneficial, but having granular control to both order the elements and move a specific column to the first position would be even better. For example move "C" to the first column.</p>
<p>Something like make a list, sort, move and reorder on list.</p>
<pre><code>mylist = ['B', 'A', 'C']
mylist.sort()
mylist.insert(0, mylist.pop(mylist.index('C')))
</code></pre>
<p>Then sorting the dataframe on ['C', 'A', 'B'] outputting</p>
<pre><code> 1a 3c 2b
0 C A B
1 3 5 2
2 1 2 3
</code></pre>
|
<python><pandas><dataframe><sorting>
|
2023-06-01 18:16:15
| 3
| 487
|
Stuber
|
76,384,645
| 1,270,603
|
How to directly use Flask's jsonify with custom classes like FireO Models?
|
<p>I'm using Flask version 2.3.2 and FireO ORM 2.1.0</p>
<p>FireO models are not directly compatible with Flask's jsonify. Meaning that they are not serializables by default and need to be converted into a dictionary before passing them to jsonify.</p>
<p>Ex:</p>
<pre><code>users_dicts = [user.to_dict() for user in users]
return jsonify(users_dicts), 200
# or
return jsonify(user.to_dict()), 200
</code></pre>
<p>So my idea was to extend either FireO Models or Flask so I could just do:</p>
<pre><code>jsonify(users), 200 # Users is a FireO QueryIterator
jsonify(user), 200 # User is a FireO model
</code></pre>
|
<python><flask><serialization><jsonify>
|
2023-06-01 18:15:57
| 2
| 517
|
Caponte
|
76,384,509
| 1,150,683
|
Altair: showing the value of the current point in the tooltip
|
<p>In the code below, we have a dataset that can be read as: "two cooks <code>cook1</code>, <code>cook2</code> are doing a competition. They have to make four dishes, each time with two given ingredients <code>ingredient1</code>, <code>ingredient2</code>. A jury has scored the dishes and the grades are stored in <code>_score</code>.</p>
<p>I want to use Altair to show a graph where the x-axis is each dish (1, 2, 3, 4) and the y-axis contains the scores of the two cooks separately. This currently works but the main issue is that on hover, the tooltip does not include the score of the current point that is being hovered.</p>
<pre class="lang-py prettyprint-override"><code>import altair as alt
import pandas as pd
df = pd.DataFrame({
"ingredient1": ["potato", "onion", "carrot", "beet"],
"ingredient2": ["tomato", "pepper", "zucchini", "lettuce"],
"dish": [1, 2, 3, 4],
"cook1": ["cook1 dish1", "cook1 dish2", "cook1 dish3", "cook1 dish4"],
"cook1_score": [0.4, 0.3, 0.7, 0.9],
"cook2": ["cook2 dish1", "cook2 dish2", "cook2 dish3", "cook2 dish4"],
"cook2_score": [0.6, 0.2, 0.5, 0.6],
})
value_vars = [c for c in df.columns if c.endswith("_score")]
cook_names = [c.replace("_score", "") for c in value_vars]
id_vars = ["dish", "ingredient1", "ingredient2",] + cook_names
df_melt = df.melt(id_vars=id_vars, value_vars=value_vars,
var_name="cook", value_name="score")
chart = alt.Chart(df_melt).mark_circle().encode(
x=alt.X("dish:O", title="Dish number"),
y=alt.Y("score:Q", title="Score"),
color="cook:N",
tooltip=id_vars
)
chart.show()
</code></pre>
<p>I tried explicitly adding the score columns to the tooltip:</p>
<pre><code> tooltip=id_vars+value_vars
</code></pre>
<p>But that yields the following error:</p>
<blockquote>
<p>ValueError: cook1_score encoding field is specified without a type; the type cannot be inferred because it does not match any column in the data.</p>
</blockquote>
<p>So how can I get altair to also show the score of (only) the currently hovered element?</p>
|
<python><pandas><altair><graph-visualization>
|
2023-06-01 17:54:21
| 1
| 28,776
|
Bram Vanroy
|
76,384,338
| 9,840,684
|
Looping through combinations of subsets of data for processing
|
<p><strong>I am processing sales data, sub-setting across a <em>combination</em> of two distinct dimensions.</strong></p>
<p>The first is a category as indicated by each of these three indicators <code>['RA','DS','TP']</code>. There are more indicators in the data; however, those are the only ones of interest, and the others not mentioned but in the data can be ignored.</p>
<p>In combination with those indicators, I want to subset across varying time intervals <code>7 days back, 30 days back, 60 days back, 90 days back, 120 days back, and no time constraint</code></p>
<p>Without looping through this would create 18 distinct functions for those combinations of dimensions 3 categories x 6 time which was what I first started to do</p>
<p>for example a function that subsets on DS and 7 days back:</p>
<pre><code>def seven_days_ds(df):
subset = df[df['Status Date'] > (datetime.now() - pd.to_timedelta("7day"))]
subset = subset[subset['Association Label']=="DS"]
grouped_subset = subset.groupby(['Status Labelled'])
status_counts_seven_ds = (pd.DataFrame(grouped_subset['Status Labelled'].count()))
status_counts_seven_ds.columns = ['Counts']
status_counts_seven_ds = status_counts_seven_ds.reset_index()
return status_counts_seven_ds #(the actual function is more complicated than this).
</code></pre>
<p>And then repeat this, but changing the subset criteria for each combination of category and time-delta for 18 combinations of the variables of interest. Obviously, this is not ideal.</p>
<p>Is there a way to have a single function that creates those 18 objects, or (ideally), a single object whose columns indicate the dimensions being subset on? ie <code>counts_ds_7</code> etc.</p>
<p>Or is this not possible, and I'm stuck doing it the long way doing them all separately?</p>
|
<python><pandas><loops><iterator><iteration>
|
2023-06-01 17:28:54
| 1
| 373
|
JLuu
|
76,384,189
| 17,254,677
|
Sort Rows Based on Tuple Index
|
<h2>Overview:</h2>
<p>Pandas dataframe with a <strong>tuple index</strong> and corresponding 'Num' column:</p>
<pre><code>Index Num
('Total', 'A') 23
('Total', 'A', 'Pandas') 3
('Total', 'A', 'Row') 7
('Total', 'A', 'Tuple') 13
('Total', 'B') 35
('Total', 'B', 'Rows') 12
('Total', 'B', 'Two') 23
('Total', 'C') 54
('Total', 'C', 'Row') 54
Total 112
</code></pre>
<p>The index and 'Num' column are already sorted with a lambda function by Alphabetical Order and based on the length of tuple elements:</p>
<pre><code>dataTable = dataTable.reindex(sorted(dataTable.index, key=lambda x: (not isinstance(x, tuple), x)))
</code></pre>
<h2>Problem:</h2>
<p>Now, I want to sort <em>only</em> the 3rd tuple index element based on it's corresponding 'Num' value. Here would be an updated example of the dataframe:</p>
<pre><code>Index Num
('Total', 'A') 23
('Total', 'A', 'Tuple') 13
('Total', 'A', 'Row') 7
('Total', 'A', 'Pandas') 3
('Total', 'B') 35
('Total', 'B', 'Two') 23
('Total', 'B', 'Rows') 12
('Total', 'C') 54
('Total', 'C', 'Row') 54
Total 112
</code></pre>
<h2>Question:</h2>
<p>What Lambda function can achieve this?</p>
|
<python><pandas><dataframe><lambda><tuples>
|
2023-06-01 17:05:51
| 1
| 737
|
Luke Hamilton
|
76,383,984
| 2,100,039
|
Fill Data from DataFrame When 2 Column Values Match
|
<p>I am trying to populate an empty column dataframe when two conditions (columns ['SITE','week'] are equal) are met when comparing two dataframes. Here is my example:</p>
<pre><code>df1_small:
week SITE LAL
0 1 BARTON CHAPEL 1.1
1 2 BARTON CHAPEL 1.8
2 3 BARTON CHAPEL 1.4
3 1 PENASCAL I 1.7
4 2 PENASCAL I 2.9
5 3 PENASCAL I 2.2
df2_large:
SITE hour day week POWER LAL
0 BARTON CHAPEL 1 1 1 54
1 BARTON CHAPEL 2 1 1 32
2 BARTON CHAPEL 3 1 1 56
3 BARTON CHAPEL 4 1 1 81
4 BARTON CHAPEL 5 1 1 90
5 BARTON CHAPEL 6 1 1 12
6 BARTON CHAPEL 7 1 1 10
7 BARTON CHAPEL 8 1 1 73
8 BARTON CHAPEL 9 1 1 55
9 BARTON CHAPEL 10 1 1 66
10 PENASCAL I 1 1 1 39
11 PENASCAL I 2 1 1 90
12 PENASCAL I 3 1 1 13
13 PENASCAL I 4 1 1 44
14 PENASCAL I 5 1 1 51
</code></pre>
<p>After the 2 condtions are met when the dataframe column values match, then df_large column 'LAL' is populated using the 'LAL' column from df1_small. The final df_large looks like this:</p>
<pre><code>df2_large:
SITE hour day week POWER LAL
0 BARTON CHAPEL 1 1 1 54 1.1
1 BARTON CHAPEL 2 1 1 32 1.1
2 BARTON CHAPEL 3 1 1 56 1.1
3 BARTON CHAPEL 4 1 1 81 1.1
4 BARTON CHAPEL 5 1 1 90 1.1
5 BARTON CHAPEL 6 1 1 12 1.1
6 BARTON CHAPEL 7 1 1 10 1.1
7 BARTON CHAPEL 8 1 1 73 1.1
8 BARTON CHAPEL 9 1 1 55 1.1
9 BARTON CHAPEL 10 1 1 66 1.1
10 PENASCAL I 1 1 1 39 1.7
11 PENASCAL I 2 1 1 90 1.7
12 PENASCAL I 1 1 2 13 2.9
13 PENASCAL I 2 1 2 44 2.9
14 PENASCAL I 3 1 2 51 2.9
</code></pre>
<p>thank you,</p>
|
<python><compare><vlookup><fill><populate>
|
2023-06-01 16:31:39
| 1
| 1,366
|
user2100039
|
76,383,900
| 13,097,194
|
Getting a libprotobuf error message when importing storage module of google.cloud
|
<p>When I run <code>from google.cloud import storage</code> in Python, I receive the following error:</p>
<p><code>ImportError: libprotobuf.so.23.1.0: cannot open shared object file: No such file or directory</code></p>
<p>How can I resolve this issue?</p>
|
<python><cloud><protocol-buffers>
|
2023-06-01 16:21:22
| 1
| 974
|
KBurchfiel
|
76,383,877
| 12,200,808
|
How to find out which package depends on "futures" in requirements.txt
|
<p>I have defined many <code>pip</code> packages in a <code>requirements.txt</code>, but I have not define the "<code>futures</code>" package:</p>
<pre><code>...
future == 0.18.3
six == 1.16.0
joblib == 1.2.0
...
</code></pre>
<p>And then download all packages with the following command on Ubuntu 22.04:</p>
<pre><code>pip3.9 download -r "/home/requirements.txt"
</code></pre>
<p><strong>The above command exited with the following error:</strong></p>
<pre><code>...
...
Collecting widgetsnbextension~=4.0.7
Downloading widgetsnbextension-4.0.7-py3-none-any.whl (2.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 3.9 MB/s eta 0:00:00
Collecting branca>=0.5.0
Downloading branca-0.6.0-py3-none-any.whl (24 kB)
Collecting traittypes<3,>=0.2.1
Downloading traittypes-0.2.1-py2.py3-none-any.whl (8.6 kB)
Collecting xyzservices>=2021.8.1
Downloading xyzservices-2023.5.0-py3-none-any.whl (56 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.5/56.5 KB 1.3 MB/s eta 0:00:00
Collecting futures
Downloading futures-3.0.5.tar.gz (25 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [25 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 14, in <module>
File "/python39/lib/python3.9/site-packages/setuptools/__init__.py", line 18, in <module>
from setuptools.dist import Distribution
File "/python39/lib/python3.9/site-packages/setuptools/dist.py", line 32, in <module>
from setuptools.extern.more_itertools import unique_everseen
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 666, in _load_unlocked
File "<frozen importlib._bootstrap>", line 565, in module_from_spec
File "/python39/lib/python3.9/site-packages/setuptools/extern/__init__.py", line 52, in create_module
return self.load_module(spec.name)
File "/python39/lib/python3.9/site-packages/setuptools/extern/__init__.py", line 37, in load_module
__import__(extant)
File "/python39/lib/python3.9/site-packages/setuptools/_vendor/more_itertools/__init__.py", line 1, in <module>
from .more import * # noqa
File "/python39/lib/python3.9/site-packages/setuptools/_vendor/more_itertools/more.py", line 5, in <module>
from concurrent.futures import ThreadPoolExecutor
File "/tmp/pip-download-jelw4tc2/futures/concurrent/futures/__init__.py", line 8, in <module>
from concurrent.futures._base import (FIRST_COMPLETED,
File "/tmp/pip-download-jelw4tc2/futures/concurrent/futures/_base.py", line 357
raise type(self._exception), self._exception, self._traceback
^
SyntaxError: invalid syntax
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> futures
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>How to find out which package depends on the "<code>futures</code>" from the "<code>requirements.txt</code>"?</p>
<p><strong>Here is the dummy code:</strong></p>
<pre><code># find_out_depends --requirement-file "/home/requirements.txt" --find-depends "futures"
</code></pre>
<p>Is there any "<code>find_out_depends</code>" command for accepting <code>requirements.txt</code> as argument and then print out the whole dependencies tree?</p>
|
<python><pip><ubuntu-22.04>
|
2023-06-01 16:17:37
| 2
| 1,900
|
stackbiz
|
76,383,841
| 3,247,006
|
How to set the current proper date and time to "DateField()" and "TimeField()" respectively as a default value by "TIME_ZONE" in Django Models?
|
<p>The doc says below in <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#django.db.models.DateField.auto_now_add" rel="nofollow noreferrer">DateField.auto_now_add</a>. *I use <strong>Django 4.2.1</strong>:</p>
<blockquote>
<p>Automatically set the field to now when the object is first created. ... If you want to be able to modify this field, set the following instead of <code>auto_now_add=True</code>:</p>
</blockquote>
<ul>
<li>For <code>DateField</code>: <code>default=date.today</code> - <code>from datetime.date.today()</code></li>
<li>For <code>DateTimeField</code>: <code>default=timezone.now</code> - <code>from django.utils.timezone.now()</code></li>
</ul>
<p>So, I set <a href="https://docs.djangoproject.com/en/4.2/ref/utils/#django.utils.timezone.now" rel="nofollow noreferrer">timezone.now</a> and <a href="https://docs.python.org/3/library/datetime.html#datetime.date.today" rel="nofollow noreferrer">date.today</a> to <code>datetime</code>'s <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#datetimefield" rel="nofollow noreferrer">DateTimeField()</a> and <code>date1</code>'s <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#datefield" rel="nofollow noreferrer">DateField()</a> respectively and I also set <code>current_date</code> which returns <code>timezone.now().date()</code> and <code>current_time</code> which returns <code>timezone.now().time()</code> to <code>date2</code>'s <code>DateField()</code> and <code>time</code>'s <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#timefield" rel="nofollow noreferrer">TimeField()</a> respectively as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "models.py"
from django.db import models
from datetime import date
from django.utils import timezone
def current_date():
return timezone.now().date()
def current_time():
return timezone.now().time()
class MyModel(models.Model):
datetime = models.DateTimeField(default=timezone.now) # Here
date1 = models.DateField(default=date.today) # Here
date2 = models.DateField(default=current_date) # Here
time = models.TimeField(default=current_time) # Here
</code></pre>
<p>Then, I set <code>'America/New_York'</code> to <a href="https://docs.djangoproject.com/en/4.2/ref/settings/#std-setting-TIME_ZONE" rel="nofollow noreferrer">TIME_ZONE</a> in <code>settings.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "settings.py"
LANGUAGE_CODE = "en-us"
TIME_ZONE = 'America/New_York' # Here
USE_I18N = True
USE_L10N = True
USE_TZ = True
</code></pre>
<p>But, <code>date1</code>'s <code>DateField()</code> and <code>time</code>'s <code>TimeField()</code> show UTC(+0 hours)'s date and time respectively on Django Admin as shown below:</p>
<p><a href="https://i.sstatic.net/PQ4Ou.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PQ4Ou.png" alt="enter image description here" /></a></p>
<p>Next, I set <code>'Asia/Tokyo'</code> to <code>TIME_ZONE</code> in <code>settings.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "settings.py"
LANGUAGE_CODE = "en-us"
TIME_ZONE = 'Asia/Tokyo' # Here
USE_I18N = True
USE_L10N = True
USE_TZ = True
</code></pre>
<p>But, <code>date2</code>'s <code>DateField()</code> and <code>time</code>'s <code>TimeField()</code> show UTC(+0 hours)'s date and time on Django Admin as shown below:</p>
<p><a href="https://i.sstatic.net/uaIQa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uaIQa.png" alt="enter image description here" /></a></p>
<p>So, how can I set the current proper date and time to <code>DateField()</code> and <code>TimeField()</code> respectively as a default value by <code>TIME_ZONE</code> in Django Models?</p>
<p>In addition, <code>DateField()</code> and <code>TimeField()</code> with <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#django.db.models.DateField.auto_now" rel="nofollow noreferrer">auto_now</a> or <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#django.db.models.DateField.auto_now_add" rel="nofollow noreferrer">auto_now_add</a> cannot save UTC(+0 hours)'s date and time respectively with any <code>TIME_ZONE</code> settings.</p>
|
<python><django><datetime><django-models><timezone>
|
2023-06-01 16:12:28
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,383,814
| 6,435,921
|
Parallelize `apply_along_axis` within a Class method
|
<h3>Minimal Working Example</h3>
<p>I have a class <code>MyClass</code> that is a abstract simplification of my actual problem. There's a method called <code>run_algorithm</code> that when it is called, it generates some <code>(100,10)</code> array named <code>z</code> and then at each iteration applies a method called <code>function</code> to the rows of <code>z</code>.</p>
<p>At the moment, this is done using <code>np.apply_along_axis(self.function, 1, z)</code>. However, I would like to parallelize this.</p>
<p>There are two levels of complexity:</p>
<ol>
<li>At each iteration in <code>run_algorithm()</code> we change what the method <code>self.function</code> actually does (in this case, the difference is minimal since it is a MWE).</li>
<li>The function <code>self.function</code> is a method of the class and it keeps changing, meaning that I cannot use the standard multiprocessing stuff.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import numpy as np
class MyClass:
def __init__(self, B=10, n_iterations=10):
self.B = B
self.n_iterations = n_iterations
def function1(self, z_row):
"""Takes numpy array z_row of dimension (d,) and returns
numpy array of dimension (B+1, d)"""
return np.arange(1, self.B+2).reshape(-1, 1) * z_row
def function2(self, z_row):
"""Similar to function1 in terms of shapes, but does something slightly
different."""
return np.arange(1, (2*self.B)+2).reshape(-1, 1) * z_row
def run_algorithm(self):
"""At each iteration chooses to apply either function1 or function2."""
z = np.random.randn(100, 10)
for i in range(self.n_iterations):
print("Iteration: ", i)
# Choose the function
self.function = self.function1 if (i % 2 == 1) else self.function2
# Apply the function using numpy (not in parallel)
z_final = np.apply_along_axis(self.function, 1, z)
# choose a random slice and start again
index = np.random.choice(a=np.arange(self.B+1), size=1)[0]
z = z_final[:, index, :]
return z
</code></pre>
<h3>What doesn't work</h3>
<p>If <code>self.function</code> wasn't a method of a class, I could use <code>multiprocessing</code> by doing something like this:</p>
<pre class="lang-py prettyprint-override"><code> def function_parallel(z):
"""Parallel version."""
try:
with Pool(8) as p:
results = p.map(self.function, product(z))
return results
except KeyboardInterrupt:
p.terminate()
except Exception as e:
print('Exception occurred: ', e)
p.terminate()
finally:
p.join()
self.function_parallel = function_parallel
</code></pre>
<p>but this fails because multiprocessing needs a function defined in <code>main</code> but I can't have it since in my example the function depends on the state of the class itself.</p>
|
<python><numpy><parallel-processing><multiprocessing>
|
2023-06-01 16:08:59
| 0
| 3,601
|
Euler_Salter
|
76,383,799
| 4,704,065
|
Split a list into list of tuple with overlapping elements
|
<p>I have a list which looks like this:<br />
<code>temp=[-39.5, -27.5, -15.5, -3.5, 8.5, 20.5, 32.5, 44.5, 56.5, 68.5, 80.5, 92.5,104.5]</code></p>
<p>I want to split the list into list of tuple where second element becomes the first element of the tuple</p>
<p>e.g.: Expected output:<br />
<code>temps = [(-39.5, -27.5), (-27.5,-15.5), (-15.5,-3.5), (-3.5,8.5), (8.5,20.5), (20.5,32.5), (32.5,44.5), (44.5,56.5), (56.5,68.5), (68.5,80.5), (80.5,92.5), (92.5,104.5)]</code></p>
|
<python><list><tuples>
|
2023-06-01 16:06:40
| 0
| 321
|
Kapil
|
76,383,744
| 9,784,337
|
Convert TTL output into RDF triples readable by BlazeGraphDB
|
<p>I am working on an NLP pipeline that takes a collection of textual records as input and extracts entities and relationships within the text of each record. The pipeline utilizes the spaCy library for named entity extraction and BLINK for linking entities to an external data source (wikidata). The pipeline currently outputs a ttl file in the following format:</p>
<pre><code>@prefix : <http://cna.outwebsite.ac.uk/our_Text_Collection/> .
@prefix cna: <http://cna.outwebsite.ac.uk/> .
@prefix dbr: <http://dbpedia.org/resource/> .
@prefix DBpedia: <http://dbpedia.org/ontology/> .
@prefix Schema: <http://schema.org/> .
@prefix Wikidata: <https://www.wikidata.org/wiki/> .
@prefix DUL: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
:15606 cna:text "Floods Newlyn Coombe St Peters Church visible centre rear built in 1866"^^xsd:string .
<<:15606 cna:mentions <https://en.wikipedia.org/wiki/Newlands_Church>>> cna:similarity 79.5187759399414;
cna:start 7 ;
cna:end 37 ;
cna:support 1 .
<<:15606 cna:mentions <https://en.wikipedia.org/wiki/1866_in_architecture>>> cna:similarity 78.223876953125;
cna:start 67 ;
cna:end 71 ;
cna:support 1 .
:15608 cna:text "View of beach and foreshore near the bowling green pavilion"^^xsd:string .
@prefix : <http://cna.outwebsite.ac.uk/our_Text_Collection/> .
@prefix cna: <http://cna.outwebsite.ac.uk/> .
@prefix dbr: <http://dbpedia.org/resource/> .
@prefix DBpedia: <http://dbpedia.org/ontology/> .
@prefix Schema: <http://schema.org/> .
@prefix Wikidata: <https://www.wikidata.org/wiki/> .
@prefix DUL: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
@prefix : <http://cna.outwebsite.ac.uk/our_Text_Collection/> .
@prefix cna: <http://cna.outwebsite.ac.uk/> .
@prefix dbr: <http://dbpedia.org/resource/> .
@prefix DBpedia: <http://dbpedia.org/ontology/> .
@prefix Schema: <http://schema.org/> .
@prefix Wikidata: <https://www.wikidata.org/wiki/> .
@prefix DUL: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
@prefix : <http://cna.outwebsite.ac.uk/our_Text_Collection/> .
@prefix cna: <http://cna.outwebsite.ac.uk/> .
@prefix dbr: <http://dbpedia.org/resource/> .
@prefix DBpedia: <http://dbpedia.org/ontology/> .
@prefix Schema: <http://schema.org/> .
@prefix Wikidata: <https://www.wikidata.org/wiki/> .
@prefix DUL: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
:15620 cna:text "Location bottom of Morrab Road at the junction with the promenade London"^^xsd:string .
<<:15620 cna:mentions <https://en.wikipedia.org/wiki/Morchard_Road>>> cna:similarity 82.10499572753906;
cna:start 19 ;
cna:end 30 ;
cna:support 1 .
<<:15620 cna:mentions <https://en.wikipedia.org/wiki/London>>> cna:similarity 83.20065307617188;
cna:start 66 ;
cna:end 74 ;
cna:support 1 .
:15640 cna:text "Damage to the Bolitho Gardens at Wherrytown Bijou House in view"^^xsd:string .
<<:15640 cna:mentions <https://en.wikipedia.org/wiki/Bolitho,_Cornwall>>> cna:similarity 79.88461303710938;
cna:start 14 ;
cna:end 29 ;
cna:support 1 .
<<:15640 cna:mentions <https://en.wikipedia.org/wiki/Merriville_House_and_Gardens>>> cna:similarity 79.99214935302734;
cna:start 33 ;
cna:end 55 ;
cna:support 1 .
</code></pre>
<p>I need to extract Subject Predicate Object triples in RDF format from this ttl file to be uploaded as an input to BlazeGraph. I initially attempted to achieve this through string manipulation, but I faced challenges due to variations in file content across different collections.</p>
<p>I was advised to use Displacy code to extract the desired triples. However, the current code I am using does not provide the exact relationship I need. I want the triples in the simple format of Subject, Predicate, Object, like this example :</p>
<pre><code><http://cna.outwebsite.ac.uk/our_Text_Collection/15606>
<http://cna.outwebsite.ac.uk/mentions>
<https://en.wikipedia.org/wiki/Newlands_Church>
</code></pre>
<p>Here is the Displacy code I am currently using:</p>
<pre><code>import spacy
from spacy import displacy
from rdflib import Graph, Literal, Namespace, RDF, URIRef
# Load English model
nlp = spacy.load('en_core_web_sm')
# Create RDF graph
graph = Graph()
# Define prefixes
tanc = Namespace('http://cna.outwebsite.ac.uk/')
dbr = Namespace('http://dbpedia.org/resource/')
DBpedia = Namespace('http://dbpedia.org/ontology/')
Schema = Namespace('http://schema.org/')
Wikidata = Namespace('https://www.wikidata.org/wiki/')
DUL = Namespace('http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#')
# Process the content
with open("our_collection.ttl","r") as ff:
content = ff.read()
# Split the content into statements
statements = content.split('\n\n')
# Process each statement
triples = []
for statement in statements:
if statement.strip():
# Parse the statement using spaCy
doc = nlp(statement)
# Extract the subject, predicate, and object
subject = doc[0].text.strip(':')
predicate = doc[1].text.strip()
obj = doc[2].text.strip('"')
# Create RDF triples
triples.append((subject, predicate, obj))
# Create a new Doc object from the triples
text = ' '.join([f'{subj} {pred} {obj}' for subj, pred, obj in triples])
doc = nlp(text)
# # Generate the displacy visualization
# displacy.serve(doc, style='dep')
# Generate the displacy visualization
html = displacy.render(doc, style='dep', options={'compact': True, 'bg': '#ffffff'})
# Save the visualization to a file
with open('visualization.html', 'w', encoding='utf-8') as file:
file.write(html)
# Generate the displacy visualization
displacy.serve(graph, style='dep', port=8000, auto_select_port=True)
</code></pre>
<p>I would appreciate any guidance on a more effective approach to extracting the triples from the ttl file, as the above code doesn't achieve what I want . Is there a better way to achieve this?</p>
|
<python><python-3.x><database><rdf><blazegraph>
|
2023-06-01 15:59:03
| 0
| 1,273
|
Youcef B
|
76,383,727
| 249,199
|
How do I prevent an arbitrary object's destructor from ever running in Python?
|
<h2>Context</h2>
<p>I have a troublesome object from third-party code in a Python (3.10) program. The object is <em>not</em> written in pure Python; it's provided by an extension module.</p>
<p>Due to a bug in the third party code, when the object's destructor runs, the program blocks indefinitely (waiting for a mutex it will never get).</p>
<p>Attempts to fix the bug causing the indefinite blocking have failed, though I'm still trying.</p>
<h2>Question</h2>
<p>How can I intentionally <em>prevent</em> a destructor from being called, ever, when Python shuts down?</p>
<p>For example, let's say I have the following code in <code>test.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>
class MyClass:
def __del__(self):
print("bye")
x = MyClass()
</code></pre>
<p>What can I do to make sure that <code>x</code>'s destructor never runs, and "bye" is never printed--either when the program shuts down or when <code>x</code> goes out of scope?</p>
<h2>What I've tried</h2>
<ul>
<li>I've tried forcing the program to exit via <code>os._exit</code> in a destructor that's guaranteed to run before the troublesome object's destructor. However, that causes other destructors that I do want to run to not do so. While I know destructor execution can't be guaranteed, this approach is throwing out rather a lot of good with the bad.</li>
<li>I've tried intentionally creating a reference cycle involving this object, but the cyclic GC is too good for me to defeat simply.</li>
<li>I've tried code like this, but it did not work:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import ctypes
incref = ctypes.pythonapi.Py_IncRef
incref.argtypes = [ctypes.py_object]
incref.restype = None
incref(x)
</code></pre>
|
<python><garbage-collection><automatic-ref-counting><destructor><reference-counting>
|
2023-06-01 15:57:12
| 0
| 4,292
|
Zac B
|
76,383,659
| 372,086
|
How to write a Faiss index to memory?
|
<p>I want to write a <code>faiss</code> index to back it up on the cloud.
I can write it to a local file by using <code>faiss.write_index(filename, f)</code>.
However, I would rather dump it to memory to avoid unnecessary disk IO.</p>
<p>I tried passing in a <code>StringIO</code> or <code>ByteIO</code> stream, but I'm getting these error from the <code>swig</code> layer.</p>
<pre><code>TypeError: Wrong number or type of arguments for overloaded function 'write_index'.
Possible C/C++ prototypes are:
faiss::write_index(faiss::Index const *,char const *)
faiss::write_index(faiss::Index const *,FILE *)
faiss::write_index(faiss::Index const *,faiss::IOWriter *)
</code></pre>
<p>Any idea on how to <code>write_index</code> to memory?</p>
|
<python><swig><stringio><faiss>
|
2023-06-01 15:46:49
| 2
| 2,351
|
Lizozom
|
76,383,607
| 1,608,276
|
How can i give type(None) a name, or use NoneType in any python3 subversion?
|
<p>I'm doing some kind of Python3 code generation, and I hope to generate strongly typed code, but I found it is hard to process with <code>NoneType</code>.</p>
<p>When I got an type reference on hand, I might want to generate it's representing code in type annotation in some case, and also might want to generate it's representing code as type reference at runtime code in some other case. But it's not consistant, in the first case, I need to generate code like <code>None</code> in context like <code>f: int -> None</code>, but in the second case, I have to generate code like <code>type(None)</code> in context like <code>ty = type(None)</code>, I hope to use <code>NoneType</code> to make it consistent.</p>
<p>I found that there exists <code>types.NoneType</code> in some earlier subversion of Python3, but removed then, and in Python3.10 it is added back. So there are many Python3 subversions which doesn't export the <code>NoneType</code> from <code>types</code>.</p>
<p><a href="https://stackoverflow.com/questions/21706609/where-is-the-nonetype-located">Where is the NoneType located?</a></p>
<p>So, is there any way I can define a backport of <code>NoneType</code> so I can use it consistently and be compatibility with as much Python3 subversion as possible.</p>
|
<python><python-typing><nonetype>
|
2023-06-01 15:41:47
| 1
| 3,895
|
luochen1990
|
76,383,444
| 7,076,616
|
Passing R object (plot/image) to Python environment in Python
|
<p>I am creating some plots using <code>ggplot2</code>in R which I would like to pass to the Python environment an eventually use a already developed script that writes the images to an excel.</p>
<p>Is this possible somehow in Databricks?</p>
<p>(I am aware of the <code>plotnine</code> packages in Python to create <code>ggplot2</code> imagine but I wish not to use it)</p>
|
<python><r><databricks>
|
2023-06-01 15:21:17
| 1
| 2,579
|
MLEN
|
76,383,413
| 7,848,740
|
Get value of a field from a foreigKey in Django Models
|
<p>I have two classes in my Database in Django</p>
<pre><code>class Test(models.Model):
sequenzaTest = models.ForeignKey("SequenzaMovimento", null=True, on_delete=models.SET_NULL)
class SequenzaMovimento(models.Model):
nomeSequenza = models.CharField(max_length=50, blank=False, null=False)
serieDiMovimenti = models.TextField(blank=False, null=False, default="")
</code></pre>
<p>Now, every <code>Test</code> object created can be associated with just one <code>SequenzaMovimento</code> object. Different <code>Test</code> objects can have the same <code>SequenzaMovimento</code></p>
<p>Now, I know the primary key of my Test. How do I find the <code>serieDiMovimenti</code> inside the <code>SequenzaMovimento</code> object which is linked to the <code>Test</code> object?</p>
<p>I can get the <code>sequenzaTest</code> from the <code>Test</code> object with</p>
<pre><code>testo_sequenza = Test.objects.get(pk=idObj)
testo_sequenza.sequenzaTest
</code></pre>
<p>but I can't find to understand how access serieDiMovimenti</p>
|
<python><django><django-models>
|
2023-06-01 15:17:34
| 2
| 1,679
|
NicoCaldo
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.