QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 |
|---|---|---|---|---|---|---|---|---|
76,800,382 | 10,262,805 | OpenAI from Langchain requires "openai_api_key" even though it is loaded | <p>this is my code:</p>
<pre><code>import os
from dotenv import load_dotenv,find_dotenv
load_dotenv(find_dotenv())
print(os.environ.get("OPEN_AI_KEY"))
from langchain.llms import OpenAI
llm=OpenAI(model_name="text-davinci-003",temperature=0.7,max_tokens=512)
print(llm)
</code></pre>
<p>When I execute the above code, I get this error:</p>
<pre><code>ValidationError: 1 validation error for OpenAI
__root__
Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)
</code></pre>
<p>The docs say:</p>
<blockquote>
<p>If you'd prefer not to set an environment variable you can pass the
key in directly via the openai_api_key named parameter when initiating
the OpenAI LLM class:</p>
</blockquote>
<p>But I have already set it, and it prints correctly:</p>
<p><a href="https://i.sstatic.net/0wxKN.png" rel="noreferrer"><img src="https://i.sstatic.net/0wxKN.png" alt="enter image description here" /></a></p>
<p>When I set the <code>llm</code> by passing named param:</p>
<pre><code>llm=OpenAI(openai_api_key="PASSINGCORRECTKEY", model_name="text-davinci-003",temperature=0.7,max_tokens=512)
llm("Tell me a joke")
</code></pre>
<p>then I get this error:</p>
<pre><code>raise ValueError(
"Argument `prompt` is expected to be a string. Instead found "
f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
"`generate` instead."
)
</code></pre>
<h2>UPDATE</h2>
<p>The env variable was initially named <code>OPEN_AI_KEY</code>, since I copied and pasted from one of my other projects, which calls the <code>chat/completions</code> api. I changed the env variable to <code>OPENAI_API_KEY</code>; now I get this error:</p>
<pre><code>AuthenticationError: Incorrect API key provided: org-Wz3J****************2XK6. You can find your API key at https://platform.openai.com/account/api-keys.
</code></pre>
<p>But the same api key works when I call the <code>"https://api.openai.com/v1/chat/completions"</code> endpoint.</p>
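<p>Two things worth noting here (hedged observations, not verified against your environment): the wrapper looks for the exact name <code>OPENAI_API_KEY</code>, and the <code>AuthenticationError</code> above shows a value starting with <code>org-</code>, which looks like an OpenAI <em>organization ID</em> rather than a secret key (<code>sk-...</code>), so the renamed variable may be holding the wrong value. A small stdlib-only sketch of the naming issue (key value is made up):</p>

```python
import os

# Illustration only: the OpenAI wrapper looks for the exact name
# "OPENAI_API_KEY"; a variable named "OPEN_AI_KEY" is invisible to it.
os.environ.pop("OPENAI_API_KEY", None)
os.environ["OPEN_AI_KEY"] = "sk-hypothetical-key"   # wrong name
print(os.environ.get("OPENAI_API_KEY"))             # None

# Rename the variable (or export it under the right name in your .env)
os.environ["OPENAI_API_KEY"] = os.environ.pop("OPEN_AI_KEY")
print(os.environ.get("OPENAI_API_KEY"))             # sk-hypothetical-key
```

<p>It is also worth double-checking that the value stored under the new name is the <code>sk-...</code> secret key and not the organization ID.</p>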
| <python><python-3.x><openai-api><langchain><large-language-model> | 2023-07-31 02:38:50 | 9 | 50,924 | Yilmaz |
76,800,020 | 11,028,689 | Shape problem with Sequential model for multiclassification task in Tensorflow | <p>I'm building a simple multiclass classifier in TensorFlow with the softmax function for my 36 classes.</p>
<pre><code>#my data
X_train data shape - (155648, 384)
y_train data shape - (155648,)
X_test data shape - (34167, 384)
y_test data shape - (34167,)
# to use 'categorical_crossentropy' on my labels
y_train = tf.keras.utils.to_categorical(y_train, num_classes=36)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=36)
</code></pre>
<p>my model is defined like this:</p>
<pre><code>batch_size = 32
epochs = 50
num_nodes = 64
dropout = 0.1
input_dim = 384
model = tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(384,)),
tf.keras.layers.Dense(64, activation='relu', use_bias = False),
tf.keras.layers.Dropout(dropout),
tf.keras.layers.Dense(36, activation="softmax", use_bias = False)
])
model.summary()
Model: "sequential_12"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_25 (Dense) (None, 64) 24576
dropout_12 (Dropout) (None, 64) 0
dense_26 (Dense) (None, 36) 2304
=================================================================
Total params: 26880 (105.00 KB)
Trainable params: 26880 (105.00 KB)
Non-trainable params: 0 (0.00 Byte)
</code></pre>
<p>Next I compile the model:</p>
<pre><code>optimizer=tf.keras.optimizers.Adam()
loss = 'categorical_crossentropy'
metrics =['accuracy', tf.keras.metrics.AUC(multi_label = True)]
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
</code></pre>
<p>Then I get this ValueError when training:</p>
<pre><code>model.fit(X_train, y_train,
epochs=epochs,
validation_data=(X_test, y_test))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[62], line 1
----> 1 model.fit(X_train, y_train,
2 epochs=epochs,
3 validation_data=(X_test, y_test))
File /usr/local/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File /tmp/__autograph_generated_filekny6h0ur.py:15, in outer_factory.<locals>.inner_factory.<locals>.tf__train_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
16 except:
17 do_return = False
ValueError: in user code:
File "/usr/local/lib/python3.10/site-packages/keras/src/engine/training.py", line 1338, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.10/site-packages/keras/src/engine/training.py", line 1322, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.10/site-packages/keras/src/engine/training.py", line 1303, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.10/site-packages/keras/src/engine/training.py", line 1081, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "/usr/local/lib/python3.10/site-packages/keras/src/engine/training.py", line 1139, in compute_loss
return self.compiled_loss(
File "/usr/local/lib/python3.10/site-packages/keras/src/engine/compile_utils.py", line 265, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "/usr/local/lib/python3.10/site-packages/keras/src/losses.py", line 142, in __call__
losses = call_fn(y_true, y_pred)
File "/usr/local/lib/python3.10/site-packages/keras/src/losses.py", line 268, in call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "/usr/local/lib/python3.10/site-packages/keras/src/losses.py", line 2122, in categorical_crossentropy
return backend.categorical_crossentropy(
File "/usr/local/lib/python3.10/site-packages/keras/src/backend.py", line 5560, in categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
ValueError: Shapes (32, 36, 36) and (32, 36) are incompatible
</code></pre>
<p>Can anybody tell me how to fix this error? Also, is it bad that I am getting 0 parameters with my Dropout layer?</p>
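<p>A shape of <code>(32, 36, 36)</code> against <code>(32, 36)</code> is the classic signature of one-hot encoding applied twice: if <code>y_train</code> is already one-hot with shape <code>(N, 36)</code>, calling <code>to_categorical</code> on it again yields <code>(N, 36, 36)</code>. A numpy-only sketch (with a hypothetical <code>to_one_hot</code> standing in for <code>tf.keras.utils.to_categorical</code>):</p>

```python
import numpy as np

def to_one_hot(y, num_classes):
    # minimal stand-in for tf.keras.utils.to_categorical
    return np.eye(num_classes)[np.asarray(y, dtype=int)]

y = np.array([0, 5, 35, 2])        # integer class labels
once = to_one_hot(y, 36)
twice = to_one_hot(once, 36)       # encoding an already-encoded array
print(once.shape)   # (4, 36)  -> what categorical_crossentropy expects
print(twice.shape)  # (4, 36, 36) -> produces the "incompatible shapes" error
```

<p>So it is worth checking whether <code>y_train</code> really holds integer labels before the <code>to_categorical</code> call. (The 0 parameters for the Dropout layer are normal: dropout has nothing to learn.)</p>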
| <python><tensorflow><keras><deep-learning> | 2023-07-30 23:57:00 | 1 | 1,299 | Bluetail |
76,799,902 | 3,285,817 | Python keyboard library not sending keypresses to external program | <p>I'm building a media player using a Raspberry Pi. My current goal is to configure a passive infrared motion detector to pause the music when it detects motion (so that I can pause and resume the music when I wave my hand in front of the player).</p>
<p>I've got the motion detector hooked up and configured to pull down GPIO Pin 17 whenever motion is detected. I'm using the basic Linux terminal music player <code>mplayer</code>, which will pause and resume whatever it's playing when the space bar is pressed.</p>
<p>Right now, it is not working. The LED indicator light on the motion sensor shows that it is registering motion events. Previous experiments show that I am correctly reading the events on GPIO 17. For some reason, <code>mplayer</code> isn't registering the commands from my Python script as keypresses. Here is my code as it stands now (I know it's not pretty, I'm just tinkering at the moment):</p>
<pre><code>import RPi.GPIO as GPIO
from time import sleep
import keyboard
SENSOR_PIN = 17
GPIO.setmode(GPIO.BCM)
GPIO.setup(SENSOR_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
def motion_detected_callback(channel):
keyboard.send("space")
GPIO.add_event_detect(SENSOR_PIN, GPIO.RISING, callback=motion_detected_callback, bouncetime=2000)
try:
while True:
sleep(0.01)
except KeyboardInterrupt:
GPIO.cleanup()
</code></pre>
<p>I have tried all the <code>keyboard</code> library functions that seemed appropriate: neither <code>keyboard.write("space", delay=0)</code>, <code>keyboard.send("space")</code>, nor <code>keyboard.press("space")</code> worked. [EDIT: None of these worked for the scancode of the spacebar either, e.g. <code>keyboard.send(39)</code>.]</p>
<p>I also tried several ways of invoking the script. Neither <code>sudo python3 pir_sensor_listener.py & mplayer music.mp3</code> nor <code>sudo python3 pir_sensor_listener | mplayer music.mp3</code> worked.</p>
<p>Any advice?</p>
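<p>One possible culprit: <code>keyboard</code> synthesizes keystrokes for the session's input devices, but <code>mplayer</code> running from a pipe or background job may never see them. A hedged alternative is mplayer's <em>slave mode</em>, which accepts text commands (<code>pause</code> toggles pause/resume) on stdin; the helper below is illustrative and is exercised here with an in-memory pipe:</p>

```python
import io

def toggle_pause(pipe):
    """Write mplayer's slave-mode 'pause' command (toggles pause/resume)."""
    pipe.write(b"pause\n")
    pipe.flush()

# With mplayer launched in slave mode, e.g.:
#   player = subprocess.Popen(["mplayer", "-slave", "-quiet", "music.mp3"],
#                             stdin=subprocess.PIPE)
# the GPIO callback would call: toggle_pause(player.stdin)
fake_pipe = io.BytesIO()  # stand-in for player.stdin in this demonstration
toggle_pause(fake_pipe)
print(fake_pipe.getvalue())  # b'pause\n'
```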
| <python><events><triggers><raspberry-pi><keyboard> | 2023-07-30 22:56:17 | 1 | 1,039 | OnlyDean |
76,799,847 | 16,004,568 | Serialize list of a specific field of Many To Many relation in Django | <p>What I want is to have a list of a specific field from a ManyToMany relation instead of a list of dictionaries that the field is in.</p>
<p>For example, with the following code, I get the result shown below</p>
<pre class="lang-py prettyprint-override"><code># models.py
from django.db import models
class Bar(models.Model):
title = models.CharField(max_length=255)
class Foo(models.Model):
title = models.CharField(max_length=255)
bars = models.ManyToManyField(Bar, related_name="foos")
# serializers.py
from rest_framework.serializers import ModelSerializer
from .models import Bar, Foo
class BarSerializer(ModelSerializer):
class Meta:
model = Bar
fields = ("title",)
class FooSerializer(ModelSerializer):
bars = BarSerializer(many=True)
class Meta:
model = Foo
fields = ("title", "bars")
# views.py
from rest_framework.generics import ListAPIView
from .serializers import FooSerializer
from .models import Foo
class FooAPIView(ListAPIView):
queryset = Foo.objects.prefetch_related("bars")
serializer_class = FooSerializer
</code></pre>
<p>Result:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"title": "foo 1",
"bars": [
{ "title": "bar title 1" },
{ "title": "bar title 2" },
{ "title": "bar title 3" }
]
},
{
"title": "foo 2",
"bars": [
{ "title": "bar title 4" },
{ "title": "bar title 5" },
{ "title": "bar title 6" }
]
},
{
"title": "foo 3",
"bars": [
{ "title": "bar title 7" },
{ "title": "bar title 8" },
{ "title": "bar title 9" }
]
}
]
</code></pre>
<p>But what I want is a list of titles for the <code>bars</code> field.</p>
<p>Desired Result:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"title": "foo 1",
"bars": ["bar title 1", "bar title 2", "bar title 3"]
},
{
"title": "foo 2",
"bars": ["bar title 4", "bar title 5", "bar title 6"]
},
{
"title": "foo 3",
"bars": ["bar title 7", "bar title 8", "bar title 9"]
}
]
</code></pre>
<p>Is there any way to do this without having to override Django methods?</p>
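<p>A commonly suggested approach (a hedged sketch, not verified against this exact codebase; it assumes the <code>Bar</code>/<code>Foo</code> models shown above) is DRF's <code>SlugRelatedField</code>, which renders a related queryset as a flat list of a single field, so no method overriding is needed:</p>

```python
# serializers.py -- sketch, assuming the models shown above
from rest_framework.serializers import ModelSerializer, SlugRelatedField

from .models import Foo

class FooSerializer(ModelSerializer):
    # renders each related Bar as just its "title" string
    bars = SlugRelatedField(many=True, read_only=True, slug_field="title")

    class Meta:
        model = Foo
        fields = ("title", "bars")
```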
| <python><django><serialization><django-rest-framework><django-serializer> | 2023-07-30 22:33:16 | 3 | 638 | Amrez |
76,799,822 | 3,847,117 | Why can't I import a module in a Python script, but I can import it in a REPL? | <p>Why is it that this code does not work (run with <code>python3 x.py</code>):</p>
<pre><code>import os
os.chdir("/app/my-favorite-module")
from my_favorite_module.foo import Bar
os.chdir("/app")
</code></pre>
<p>(it says there is no module called <code>my_favorite_module</code>)</p>
<p>but using the Python REPL works (launched in the same directory as the script):</p>
<pre><code>root@afad0fa67aa8:/content# python3
Python 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os; os.chdir("/app/my-favorite-module")
>>> from my_favorite_module.foo import Bar
>>> os.chdir("/app")
</code></pre>
<p>I found a workaround using <code>sys.path</code>, but I'm curious about the underlying reason.</p>
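<p>The underlying reason (to the best of my understanding): for <code>python3 x.py</code>, <code>sys.path[0]</code> is set once at startup to the directory containing <code>x.py</code>, while the REPL puts <code>''</code> on <code>sys.path</code>, and that empty entry is re-resolved against the <em>current</em> working directory on every import. That is why <code>os.chdir</code> changes what is importable interactively but not in a script. A small sketch of the <code>sys.path</code> workaround (package name reused for illustration):</p>

```python
import os
import sys
import tempfile

# Build a throwaway package to import
workdir = tempfile.mkdtemp()
pkg = os.path.join(workdir, "my_favorite_module")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()

os.chdir(workdir)
# In a script, chdir alone does not help: no sys.path entry points at workdir
visible = any(os.path.isdir(os.path.join(p, "my_favorite_module"))
              for p in sys.path if p)
print(visible)  # False in a normal script run

sys.path.insert(0, workdir)  # the explicit workaround
import my_favorite_module
print(my_favorite_module.__name__)  # my_favorite_module
```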
| <python><python-3.x> | 2023-07-30 22:24:29 | 0 | 8,659 | Foobar |
76,799,381 | 1,246,950 | bybit rest api set take profit order | <p>I am trying to set a take-profit order with the Bybit REST API (testnet).<br>
I followed the docs at <a href="https://bybit-exchange.github.io/docs/v5/position/trading-stop" rel="nofollow noreferrer">https://bybit-exchange.github.io/docs/v5/position/trading-stop</a>,<br>
but I just can't get it to work.
I tried changing
<code>position_idx</code> to 0, 1, and 2,<br>
as well as changing <code>tpsl_mode</code> to Full, removing <code>stop_loss</code>, and changing and removing all non-required fields.<br>
Nothing worked.
On this last attempt I get the error:<br><code>'{"retCode":10001,"retMsg":"empty value: apiTimestamp[] apiKey[] apiSignature[]","result":{},"retExtInfo":{},"time":1690745497593}'</code></p>
<pre><code>def set_take_profit_bybit(symbol,profit_price,quantity):
BASE_URL = 'https://api-testnet.bybit.com'
SET_ISO_ENDPOINT='/v5/position/trading-stop'
endpoint = BASE_URL + SET_ISO_ENDPOINT
timestamp = int(time.time() * 1000)
stop_loss=0
tp_trigger_by='MarkPrice'
sl_trigger_by='IndexPrice'
category='linear'
tpsl_mode='Partial'
tp_order_type='Limit'
sl_order_type='Limit'
tp_size=quantity
sl_size=quantity
tp_limit_price='0.49'
sl_limit_price='0.21'
position_idx=2
payload = {
"api_key": API_KEY,
"category": category,
"symbol": symbol,
"takeProfit": profit_price,
"stopLoss": stop_loss,
"tpTriggerBy": tp_trigger_by,
"slTriggerBy": sl_trigger_by,
"tpslMode": tpsl_mode,
"tpOrderType": tp_order_type,
"slOrderType": sl_order_type,
"tpSize": tp_size,
"slSize": sl_size,
"tpLimitPrice": tp_limit_price,
"slLimitPrice": sl_limit_price,
"positionIdx": position_idx,
"timestamp": timestamp,
}
signature = create_signature(API_SECRET, payload)
payload["sign"] = signature
headers = {
"Content-Type": "application/x-www-form-urlencoded",
}
response = requests.post(endpoint, json=payload, headers=headers)
</code></pre>
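<p>The <code>retCode 10001</code> message <code>"empty value: apiTimestamp[] apiKey[] apiSignature[]"</code> suggests the v5 endpoint expects authentication in the request <em>headers</em>, not in the form payload. A hedged sketch of header-based signing (credentials and body are illustrative; check the signature recipe against the official docs before relying on it):</p>

```python
import hashlib
import hmac
import json
import time

def bybit_v5_sign(api_secret, api_key, timestamp, recv_window, body):
    """Bybit v5 signs timestamp + api_key + recv_window + body with HMAC-SHA256."""
    payload = timestamp + api_key + recv_window + body
    return hmac.new(api_secret.encode(), payload.encode(), hashlib.sha256).hexdigest()

# Hypothetical credentials, for illustration only
api_key, api_secret, recv_window = "demo-key", "demo-secret", "5000"
timestamp = str(int(time.time() * 1000))
body = json.dumps({"category": "linear", "symbol": "BTCUSDT", "takeProfit": "0.49"})

headers = {
    "X-BAPI-API-KEY": api_key,
    "X-BAPI-TIMESTAMP": timestamp,
    "X-BAPI-RECV-WINDOW": recv_window,
    "X-BAPI-SIGN": bybit_v5_sign(api_secret, api_key, timestamp, recv_window, body),
    "Content-Type": "application/json",
}
# requests.post(endpoint, data=body, headers=headers)
```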
| <python><bybit> | 2023-07-30 19:44:00 | 1 | 1,102 | user1246950 |
76,799,208 | 12,170,032 | String of letters/numbers being added to the end of my filename | <pre><code>import re
import time
import requests
import os, os.path
from urllib.error import HTTPError
import pandas as pd
import sqlalchemy as sa
from sqlalchemy import (
create_engine,
MetaData,
Table,
Column,
Integer,
String,
Boolean,
Date,
)
import boto3
import random
import botocore
from datetime import datetime
from urllib.error import HTTPError
import mysql.connector as connection
engine = create_engine(
"mysql+pymysql://"
+ user
+ ":"
+ passw
+ "@"
+ host
+ ":"
+ str(port)
+ "/"
+ database,
echo=False,
)
try:
mydb = connection.connect(
host="",
port=int(3306),
database="db",
user="",
passwd="",
use_pure=True,
)
query = "select * from rentalPropertyDetails;"
df = pd.read_sql(query, mydb)
mydb.close() # close the connection
except Exception as e:
mydb.close()
print(str(e))
client = boto3.client("s3")
paginator = client.get_paginator("list_objects_v2")
result = paginator.paginate(Bucket="zillowhousephotos")
s3 = boto3.client(
"s3",
region_name="us-east-2",
aws_access_key_id="",
aws_secret_access_key="",
)
zpids = list(set(df["zpid"].tolist()))
keyStrings = []
picNums = [random.randint(10, 50) for _ in range(len(zpids))]
for page in result:
if "Contents" in page:
for item in page["Contents"]:
keyStrings.append(item["Key"])
for zpid, picNum in zip(zpids, picNums):
keyString = f"{zpid}/{picNum}.jpg"
if keyString in keyStrings:
zpidPicNum = keyString.split(".")[0].replace("/", "_")
s3.download_file(
"zillowhousephotos",
keyString,
f"/Users/erinoefelein/Desktop/HomeFlip/notFrontOfHouse/{zpidPicNum}"+".jpg",
)
</code></pre>
<p>I'm trying to download images from an Amazon S3 bucket and keep getting a string of numbers/letters added to the end of my filename.</p>
<p>Here is the error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/erinoefelein/Desktop/HomeFlip/read_images_to_aws_rds.py", line 88, in <module>
s3.download_file(
File "/Users/erinoefelein/.local/share/virtualenvs/HomeFlip-71a8tVCO/lib/python3.10/site-packages/boto3/s3/inject.py", line 190, in download_file
return transfer.download_file(
File "/Users/erinoefelein/.local/share/virtualenvs/HomeFlip-71a8tVCO/lib/python3.10/site-packages/boto3/s3/transfer.py", line 326, in download_file
future.result()
File "/Users/erinoefelein/.local/share/virtualenvs/HomeFlip-71a8tVCO/lib/python3.10/site-packages/s3transfer/futures.py", line 103, in result
return self._coordinator.result()
File "/Users/erinoefelein/.local/share/virtualenvs/HomeFlip-71a8tVCO/lib/python3.10/site-packages/s3transfer/futures.py", line 266, in result
raise self._exception
File "/Users/erinoefelein/.local/share/virtualenvs/HomeFlip-71a8tVCO/lib/python3.10/site-packages/s3transfer/tasks.py", line 139, in __call__
return self._execute_main(kwargs)
File "/Users/erinoefelein/.local/share/virtualenvs/HomeFlip-71a8tVCO/lib/python3.10/site-packages/s3transfer/tasks.py", line 162, in _execute_main
return_value = self._main(**kwargs)
File "/Users/erinoefelein/.local/share/virtualenvs/HomeFlip-71a8tVCO/lib/python3.10/site-packages/s3transfer/download.py", line 642, in _main
fileobj.seek(offset)
File "/Users/erinoefelein/.local/share/virtualenvs/HomeFlip-71a8tVCO/lib/python3.10/site-packages/s3transfer/utils.py", line 378, in seek
self._open_if_needed()
File "/Users/erinoefelein/.local/share/virtualenvs/HomeFlip-71a8tVCO/lib/python3.10/site-packages/s3transfer/utils.py", line 361, in _open_if_needed
self._fileobj = self._open_function(self._filename, self._mode)
File "/Users/erinoefelein/.local/share/virtualenvs/HomeFlip-71a8tVCO/lib/python3.10/site-packages/s3transfer/utils.py", line 272, in open
return open(filename, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/Users/erinoefelein/Desktop/HomeFlip/notFrontOfHouse/29417540_17.jpg.ed7c7cDb'
</code></pre>
<p>Looking specifically at this:
<code>29417540_17.jpg<strong>.ed7c7cDb</strong></code>
What's going on with that suffix? How do I fix this?</p>
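<p>The trailing suffix is most likely not part of your final filename at all: <code>s3transfer</code> downloads to a temporary file named <code>&lt;target&gt;.&lt;random suffix&gt;</code> and renames it on completion, so the <code>FileNotFoundError</code> usually means the destination <em>directory</em> does not exist. A hedged sketch (using a temp directory in place of the hard-coded Desktop path):</p>

```python
import os
import tempfile

# s3transfer writes "<final_name>.<random-suffix>" during the download and
# renames it at the end; if the destination directory is missing, opening
# that temp name fails with FileNotFoundError.
dest_dir = os.path.join(tempfile.mkdtemp(), "notFrontOfHouse")
os.makedirs(dest_dir, exist_ok=True)  # create the folder before download_file
target = os.path.join(dest_dir, "29417540_17.jpg")
# s3.download_file("zillowhousephotos", keyString, target)  # now safe to call
print(os.path.isdir(dest_dir))  # True
```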
| <python><amazon-web-services><amazon-s3> | 2023-07-30 18:47:04 | 1 | 495 | Erin |
76,799,196 | 1,942,868 | Django separate own log from library log? | <p>I have a Django project which prints logs such as the one below.</p>
<pre><code>import logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
.
.
.
.
logger.debug("test")
</code></pre>
<p>However, I am using the boto3 library,</p>
<p>so there is a lot of debugging information on the console from the boto3 library.</p>
<p>I want to see only my own log messages.</p>
<p>Is there any good method to do this?</p>
<p>My logging settings are as follows.</p>
<pre><code>DEFAULT_LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse',
},
'require_debug_true': {
'()': 'django.utils.log.RequireDebugTrue',
},
},
'formatters': {
'django.server': {
'()': 'django.utils.log.ServerFormatter',
'format': '[%(server_time)s] %(message)s',
}
},
'handlers': {
'console': {
'level': 'INFO',
'filters': ['require_debug_true'],
'class': 'logging.StreamHandler',
},
'django.server': {
'level': 'INFO',
'class': 'logging.StreamHandler',
'formatter': 'django.server',
},
'mail_admins': {
'level': 'ERROR',
'filters': ['require_debug_false'],
'class': 'django.utils.log.AdminEmailHandler'
}
},
'loggers': {
'django': {
'handlers': ['console', 'mail_admins'],
'level': 'INFO',
},
'django.server': {
'handlers': ['django.server'],
'level': 'INFO',
'propagate': False,
},
}
}
</code></pre>
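<p>One approach (a hedged sketch; <code>myapp</code> is a placeholder for your project's logger namespace) is to avoid <code>logging.basicConfig(level=logging.DEBUG)</code>, which raises the root logger and therefore boto3's loggers too, and instead give your own namespace DEBUG while pinning <code>boto3</code>/<code>botocore</code> at WARNING:</p>

```python
import logging
import logging.config

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {"console": {"class": "logging.StreamHandler", "level": "DEBUG"}},
    "loggers": {
        # your own code: verbose (modules use logging.getLogger("myapp..."))
        "myapp": {"handlers": ["console"], "level": "DEBUG", "propagate": False},
        # noisy third-party libraries: warnings only
        "boto3": {"handlers": ["console"], "level": "WARNING", "propagate": False},
        "botocore": {"handlers": ["console"], "level": "WARNING", "propagate": False},
    },
}
logging.config.dictConfig(LOGGING)
print(logging.getLogger("myapp").isEnabledFor(logging.DEBUG))      # True
print(logging.getLogger("botocore").isEnabledFor(logging.DEBUG))   # False
```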
| <python><django> | 2023-07-30 18:42:09 | 0 | 12,599 | whitebear |
76,799,165 | 11,058,930 | Selenium + webdriver_manager results in WebDriverException: Message: Can not connect to the Service | <p><strong>Solution: Selenium's new tool, Selenium Manager, will do what ChromeDriverManager used to do. No need to use <code>webdriver_manager</code>.</strong></p>
<pre><code>
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--no-sandbox') # Run Chrome without the sandbox
options.add_argument('--headless') # Run Chrome in headless mode (no GUI)
driver = webdriver.Chrome(options=options)
</code></pre>
<hr />
<p>I'm trying to run the code below, but getting the following error:</p>
<blockquote>
<p>WebDriverException: Message: Can not connect to the Service /Users/abc/.wdm/drivers/chromedriver/mac64/115.0.5790.114/chromedriver-mac-arm64/chromedriver</p>
</blockquote>
<pre><code>from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium.webdriver.chrome.options import Options
options = Options()
options.add_argument('--no-sandbox') # Run Chrome without the sandbox
options.add_argument('--headless') # Run Chrome in headless mode (no GUI)
options.binary_location="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"
driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options = options)
</code></pre>
<p>I've done the following:</p>
<ol>
<li>Ensuring ChromeDriver binary has executable permission for the non-root user.</li>
<li>Ensure that /etc/hosts file contains <code>127.0.0.1 localhost</code></li>
<li>ChromeDriver executable file is the correct version for the installed version of Chrome (see image below)</li>
<li>Verify that the path to the ChromeDriver executable is correct: <code>/Users/abc/.wdm/drivers/chromedriver/mac64/115.0.5790.114/chromedriver-mac-arm64/chromedriver</code></li>
<li>I've added <code>/Users/abc/.wdm/drivers/chromedriver/</code> as a path environment variable.</li>
</ol>
<p><a href="https://i.sstatic.net/AHG8O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AHG8O.png" alt="enter image description here" /></a></p>
<p>Any ideas?</p>
| <python><google-chrome><selenium-webdriver> | 2023-07-30 18:34:00 | 2 | 1,747 | mikelowry |
76,799,021 | 23,213 | Unable to use py_binary target as executable in a custom rule | <p>I have a py_binary executable target, and want to use this from a custom rule. I am able to get this to work, but only by duplicating the dependencies of my py_binary target with my custom rule. Is there any way to avoid this duplication and automatically include the dependencies of the py_binary?</p>
<p>I have simplified my problem down to the following (reproduce by running the //foo:main target)</p>
<p><code>greet/BUILD.bazel</code>:</p>
<pre><code>py_binary(
name = "greet",
srcs = ["greet.py"],
deps = ["@rules_python//python/runfiles"],
visibility = ["//visibility:public"],
)
</code></pre>
<p><code>greet/greet.py</code>:</p>
<pre><code>from rules_python.python.runfiles import runfiles
print("hello world")
</code></pre>
<p><code>foo/BUILD.bazel</code>:</p>
<pre><code>load("//bazel:defs.bzl", "foo_binary")
foo_binary(
name = "main",
)
</code></pre>
<p><code>bazel/BUILD.bazel</code>: (empty)</p>
<p><code>bazel/defs.bzl</code>:</p>
<pre><code>def _foo_binary(ctx):
shell_script = ctx.actions.declare_file("run.sh")
ctx.actions.write(
output = shell_script,
content = """#!/bin/bash
$0.runfiles/svkj/{runtime}
""".format(runtime=ctx.executable._greet.short_path),
is_executable = True,
)
return [
DefaultInfo(
executable = shell_script,
runfiles = ctx.runfiles(
files = ctx.files._greet + ctx.files.deps + ctx.files._python,
collect_data = True
),
),
]
foo_binary = rule(
implementation = _foo_binary,
attrs = {
"deps": attr.label_list(
allow_files=True,
),
"_greet": attr.label(
default = "//greet:greet",
cfg = "exec",
executable = True,
),
# TODO - Why is it necessary to add this explicitly?
"_python": attr.label_list(
allow_files=True,
default = ["@python3_9//:python3", "@rules_python//python/runfiles"],
),
},
executable = True,
)
</code></pre>
<p>WORKSPACE:</p>
<pre><code>workspace(name = "svkj")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "rules_python",
sha256 = "5fa3c738d33acca3b97622a13a741129f67ef43f5fdfcec63b29374cc0574c29",
strip_prefix = "rules_python-0.9.0",
url = "https://github.com/bazelbuild/rules_python/archive/refs/tags/0.9.0.tar.gz",
)
load("@rules_python//python:repositories.bzl", "python_register_toolchains")
python_register_toolchains(
name = "python3_9",
python_version = "3.9",
)
load("@python3_9//:defs.bzl", "interpreter")
</code></pre>
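<p>One way to avoid the duplication (a hedged sketch, not tested against this workspace): <code>py_binary</code> already publishes its full runfiles tree through <code>DefaultInfo</code>, so the custom rule can merge <code>ctx.attr._greet[DefaultInfo].default_runfiles</code> instead of re-listing the interpreter and the runfiles library by hand:</p>

```starlark
# Sketch of a foo_binary implementation that reuses py_binary's runfiles
def _foo_binary(ctx):
    shell_script = ctx.actions.declare_file("run.sh")
    ctx.actions.write(
        output = shell_script,
        content = "#!/bin/bash\nexec {runtime} \"$@\"\n".format(
            runtime = ctx.executable._greet.short_path,
        ),
        is_executable = True,
    )
    runfiles = ctx.runfiles(files = [shell_script])
    # Pull in everything the py_binary target needs, including the
    # interpreter toolchain and the rules_python runfiles library:
    runfiles = runfiles.merge(ctx.attr._greet[DefaultInfo].default_runfiles)
    return [DefaultInfo(executable = shell_script, runfiles = runfiles)]
```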
| <python><bazel><starlark> | 2023-07-30 17:54:10 | 1 | 1,455 | Steve Vermeulen |
76,799,001 | 17,741,308 | Range of Pixel Data for a Dicom File | <p>I am trying to combine several DICOM files into a larger one using Python 3 with the <code>pydicom</code> library, and I encounter an error message I do not understand.</p>
<p>Let several DICOM files be given. We assume that each DICOM file contains a video. That is, after calling <code>pydicom.dcmread</code> and accessing <code>pixel_array</code>, I get a numpy array of shape <code>(number_of_frames, rows, columns, 3)</code>, with <code>number_of_frames > 0</code>. We will also assume that each DICOM differs only in the number of frames.</p>
<p>So I read their <code>pixel_array</code>, use <code>numpy</code> to concatenate them, and get a bigger video with more frames. I would like to save this bigger video as another DICOM. So I write the new video into a <code>filedataset</code> and call <code>save_as</code> in the <code>pydicom</code> library. It turns out that:</p>
<ol>
<li>If my concatenated video is small, then my code runs without problem.</li>
<li>If my concatenated video is sufficiently large (one example would be <code>(6000,500,500,3)</code>), then I get the following error:</li>
</ol>
<pre><code>Traceback (most recent call last):
File "...\Lib\site-packages\pydicom\tag.py", line 28, in tag_in_exception
yield
File "...\Lib\site-packages\pydicom\filewriter.py", line 662, in write_dataset
write_data_element(fp, dataset.get_item(tag), dataset_encoding)
File "...\Lib\site-packages\pydicom\filewriter.py", line 620, in write_data_element
fp.write_UL(0xFFFFFFFF if is_undefined_length else value_length)
File "...\Lib\site-packages\pydicom\filebase.py", line 119, in write_leUL
self.write(pack(b"<L", val))
^^^^^^^^^^^^^^^^
struct.error: argument out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "...\dicom_concatenate.py", line 56, in work
first_dicom.save_as(absolute_save_path)
File "...\Lib\site-packages\pydicom\dataset.py", line 2061, in save_as
pydicom.dcmwrite(filename, self, write_like_original)
File "...\Lib\site-packages\pydicom\filewriter.py", line 1153, in dcmwrite
_write_dataset(fp, dataset, write_like_original)
File "...\Lib\site-packages\pydicom\filewriter.py", line 889, in _write_dataset
write_dataset(fp, get_item(dataset, slice(0x00010000, None)))
File "...\Lib\site-packages\pydicom\filewriter.py", line 661, in write_dataset
with tag_in_exception(tag):
File "...\Lib\contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "...\Lib\site-packages\pydicom\tag.py", line 32, in tag_in_exception
raise type(exc)(msg) from exc
struct.error: With tag (7fe0, 0010) got exception: argument out of range
Traceback (most recent call last):
File "...\Lib\site-packages\pydicom\tag.py", line 28, in tag_in_exception
yield
File "...\Lib\site-packages\pydicom\filewriter.py", line 662, in write_dataset
write_data_element(fp, dataset.get_item(tag), dataset_encoding)
File "...\Lib\site-packages\pydicom\filewriter.py", line 620, in write_data_element
fp.write_UL(0xFFFFFFFF if is_undefined_length else value_length)
File "...\Lib\site-packages\pydicom\filebase.py", line 119, in write_leUL
self.write(pack(b"<L", val))
^^^^^^^^^^^^^^^^
struct.error: argument out of range
</code></pre>
<p>I do not understand this error message. What exactly goes wrong? It seems my video is too large (?). Is there any way to help me save my concatenated video?</p>
<p>GitHub Version: <a href="https://github.com/pydicom/pydicom/issues/1849" rel="nofollow noreferrer">https://github.com/pydicom/pydicom/issues/1849</a></p>
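<p>The failing call is <code>pack(b"&lt;L", val)</code>: pydicom writes the element length as an unsigned <em>32-bit</em> integer, so DICOM's explicit-length encoding caps a single element at <code>0xFFFFFFFE</code> bytes (about 4&nbsp;GiB). A quick size check for the example above, assuming 8-bit samples:</p>

```python
frames, rows, cols, samples = 6000, 500, 500, 3
nbytes = frames * rows * cols * samples      # 1 byte per 8-bit sample
limit = 0xFFFFFFFE                           # largest even explicit length in a 32-bit field
print(nbytes)          # 4500000000
print(nbytes > limit)  # True -> struct.error when packing the length as "<L"
```

<p>So the concatenated Pixel Data simply cannot be stored with an explicit 32-bit length; possible ways out (hedged, not verified here) include splitting the video across several files or using a compressed/encapsulated transfer syntax, which stores fragments under an undefined length.</p>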
| <python><numpy><dicom><pydicom> | 2023-07-30 17:49:38 | 1 | 364 | 温泽海 |
76,798,963 | 11,028,689 | How to make label column in pandas dataframe while grouping by counts? | <p>I have a dataframe with the category column similar to this:</p>
<pre><code>import pandas as pd
data = {'category': ['POLITICS','WELLNESS', 'ENTERTAINMENT', 'TRAVEL','POLITICS', 'ENTERTAINMENT','POLITICS'],
'dates': ["2013-01-31","2013-01-31","2013-02-02", "2013-02-02","2013-02-03", "2013-02-03", "2013-02-04"]}
df1=pd.DataFrame(data, columns=['category', 'dates'])
df1
</code></pre>
<p>I want a dictionary which would be like this for some 30+ different categories, which I can later use to make another column, label.</p>
<pre><code>label_dict = {'POLITICS':0, 'ENTERTAINMENT':1, 'WELLNESS':2 ,'TRAVEL':3, .....}
df1['label'] = df1['category'].map(label_dict).fillna(6).astype(int)
</code></pre>
<p>I want my labelling to begin with the category containing the most counts (e.g. 'POLITICS', then 'ENTERTAINMENT', and so on).</p>
<p>I have tried this:</p>
<pre><code>df1["category"].value_counts().to_dict()
</code></pre>
<p>which gives me</p>
<pre><code>{'POLITICS': 3, 'ENTERTAINMENT': 2, 'WELLNESS': 1, 'TRAVEL': 1}
</code></pre>
<p>How can I create <code>label_dict</code>? I.e., how can I assign the labels: e.g. 0 instead of 3, 1 instead of 2, 2 instead of 1, and 3 instead of 1?</p>
<p>my final dataframe should look like this:</p>
<pre><code> category dates label
0 POLITICS 2013-01-31 0
1 WELLNESS 2013-01-31 1
2 ENTERTAINMENT 2013-02-02 2
3 TRAVEL 2013-02-02 3
4 POLITICS 2013-02-03 0
5 ENTERTAINMENT 2013-02-03 2
6 POLITICS 2013-02-04 0
</code></pre>
<p>thank you.</p>
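<p>Since <code>value_counts()</code> already returns categories sorted by descending count, enumerating its index gives the mapping directly. (Caveat: the relative order of tied categories, such as WELLNESS and TRAVEL here, is not guaranteed and may need an explicit tie-break.) A sketch using the data above:</p>

```python
import pandas as pd

data = {'category': ['POLITICS', 'WELLNESS', 'ENTERTAINMENT', 'TRAVEL',
                     'POLITICS', 'ENTERTAINMENT', 'POLITICS']}
df1 = pd.DataFrame(data, columns=['category'])

# enumerate the value_counts index: most frequent category -> label 0, etc.
label_dict = {cat: i for i, cat in enumerate(df1['category'].value_counts().index)}
df1['label'] = df1['category'].map(label_dict).fillna(6).astype(int)
print(label_dict['POLITICS'], label_dict['ENTERTAINMENT'])  # 0 1
```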
| <python><pandas><dataframe> | 2023-07-30 17:40:39 | 3 | 1,299 | Bluetail |
76,798,851 | 5,947,182 | Python unittest async doesn't display print | <p>I want to use Python's async unittest and followed the instructions from <a href="https://bbc.github.io/cloudfit-public-docs/asyncio/testing.html" rel="nofollow noreferrer">here</a>. I tried the examples from the source and they all worked except for not displaying the <code>print</code> as demonstrated. <strong>How do you display <code>print</code> using Python's async unittest?</strong></p>
<pre><code>import asyncio
import unittest
async def my_func():
print("my_func", flush=True)
class TestStuff(unittest.IsolatedAsyncioTestCase):
async def test_my_func(self):
_ = await my_func()
</code></pre>
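<p>A plain <code>unittest</code> run does not swallow <code>print()</code> unless output buffering is enabled (<code>buffer=True</code>, or <code>-b</code> on the command line); if the text vanishes, it is usually the harness doing it (for example pytest's default capture, disabled with <code>-s</code>, or an IDE runner). A self-contained check that the coroutine's print does reach stdout under a plain runner:</p>

```python
import contextlib
import io
import unittest

async def my_func():
    print("my_func", flush=True)

class TestStuff(unittest.IsolatedAsyncioTestCase):
    async def test_my_func(self):
        await my_func()

# Run the case programmatically; with buffer=False (the default) the
# print() output goes straight to stdout, which we capture here.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestStuff)
    unittest.TextTestRunner(stream=io.StringIO(), buffer=False).run(suite)
print("my_func" in buf.getvalue())  # True
```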
| <python><unit-testing><asynchronous><python-asyncio> | 2023-07-30 17:08:49 | 0 | 388 | Andrea |
76,798,827 | 12,300,981 | LMFIT vs. LSMR am I getting different fits due to machine precision? | <p>I've been playing around with using LMFIT vs. LSMR for fitting of linear systems. The coefficients of this system are in fact derived from global parameters that are also being fit (i.e. I have a nested solver). But the solutions I get using these different solvers are different, despite similar setups.</p>
<p>Here is my MVE</p>
<pre><code>import numpy as np
import scipy.optimize as so
from scipy.sparse.linalg import lsmr
from lmfit import minimize,Parameters
data_1=[[117.417, 117.423, 117.438, 117.501], [124.16, 124.231, 124.089, 124.1], [115.632, 115.645, 115.828, 115.947], [118.314, 118.317, 118.287, 118.228], [108.407, 108.419, 108.396, 108.564], [116.636, 116.648, 116.684, 116.729], [122.874, 122.905, 122.851, 122.894], [114.958, 115.059, 115.044, 115.341], [110.322, 110.258, 110.177, 110.216], [129.049, 129.13, 129.139, 129.165], [122.679, 122.699, 122.685, 122.668], [120.965, 120.965, 120.946, 120.915], [120.709, 120.73, 120.723, 120.686], [115.351, 115.362, 115.389, 115.345], [118.593, 118.687, 118.738, 118.71], [114.48, 114.402, 114.601, 114.502], [118.714, 118.768, 118.869, 119.148], [116.406, 116.322, 116.288, 116.143], [122.459, 122.475, 122.424, 122.411], [122.001, 121.961, 121.955, 121.965]]
data_2=[[9.05, 9.044, 9.057, 9.079], [9.178, 9.167, 9.16, 9.176], [7.888, 7.893, 7.911, 7.895], [7.198, 7.202, 7.197, 7.213], [7.983, 7.976, 7.979, 8.02], [8.215, 8.218, 8.223, 8.218], [8.099, 8.114, 8.109, 8.13], [8.781, 8.769, 8.778, 8.778], [9.605, 9.612, 9.59, 9.589], [8.985, 8.997, 8.998, 8.986], [9.361, 9.364, 9.368, 9.39], [8.123, 8.123, 8.106, 8.108], [9.192, 9.194, 9.201, 9.168], [8.78, 8.784, 8.798, 8.804], [8.47, 8.473, 8.482, 8.483], [8.567, 8.563, 8.552, 8.536], [8.022, 8.023, 8.04, 8.04], [10.063, 10.052, 10.026, 10.01], [7.745, 7.738, 7.74, 7.709], [8.741, 8.73, 8.739, 8.75]]
def chemical_shift_model(new_paramters,populations,experimental_shifts):
return (populations@new_paramters)-experimental_shifts
def fit_residues(populations,scipy_flag,lm_fit_flag):
local_chi2_list=[]
local_chi2=0
for experimental_shifts_n,experimental_shifts_h in zip(data_1,data_2):
experimental_shifts_n=(np.array([experimental_shifts_n])/10)*800
experimental_shifts_h=(np.array([experimental_shifts_h]))*800
new_paramters_n=Parameters()
new_paramters_n.add('free',value=110)
new_paramters_n.add('open',value=110)
new_paramters_n.add('closed',value=110)
new_paramters_h=Parameters()
new_paramters_h.add('free',value=7)
new_paramters_h.add('open',value=7)
new_paramters_h.add('closed',value=7)
if lm_fit_flag is True:
solution_n=minimize(chemical_shift_model,new_paramters_n,args=(populations,experimental_shifts_n))
solution_h=minimize(chemical_shift_model,new_paramters_h,args=(populations,experimental_shifts_h))
local_chi2_list.append(solution_n.residual)
local_chi2_list.append(solution_h.residual)
local_chi2+=(solution_n.chisqr+solution_h.chisqr)
else:
least_squared_fit_n=lsmr(populations,experimental_shifts_n,maxiter=10)
least_squared_fit_h=lsmr(populations,experimental_shifts_h,maxiter=10)
local_chi2_list.append(populations@least_squared_fit_n[0]-experimental_shifts_n)
local_chi2_list.append(populations@least_squared_fit_h[0]-experimental_shifts_h)
local_chi2+=((least_squared_fit_n[3])**2+least_squared_fit_h[3]**2)
if scipy_flag is True:
return local_chi2
return (np.array(local_chi2_list)).flatten()
def get_populations(initial,io,scipy_flag,lm_fit_flag):
if scipy_flag is True:
k,k1,x,y=float(initial[0]),float(initial[1]),initial[2],initial[3]
else:
k,k1,x,y=float(initial['kvar'].value),float(initial['k1var'].value),float(initial['xvar'].value),float(initial['yvar'].value)
k_array=np.array([k,k*x,k*y,k*x*y])
k1_array=np.array([k1,k1*x,k1*y,k1*x*y])
partial_free_concentration=(np.sqrt((k_array*k1_array)**2+(8*io*k_array*k1_array)+(8*io*k_array*k1_array**2))-(k_array*k1_array))/(4*(1+k1_array))
partial_closed_concentration=(((4*io)+(4*io*k1_array))/(4*(1+k1_array)**2))-(partial_free_concentration/(1+k1_array))
partial_open_concentration=k1_array*partial_closed_concentration
populations=(np.stack((partial_free_concentration,partial_open_concentration,partial_closed_concentration),axis=1)/io)
return fit_residues(populations,scipy_flag,lm_fit_flag)
io=270_000
</code></pre>
<p>So I am doing 2 fits here. One is for parameters k,k1,x,y, which are used to calculate the coefficients for the next system to solve, with data coming from data_1,data_2. The reason for the flags is that scipy and lmfit use different methods and require different input and output formats (scipy takes a list and returns a scalar; lmfit takes a dict of Parameters and returns an array of residuals). So I have a setup here that tests 4 different scenarios using the flags:</p>
<ol>
<li>Scipy for global (non linear), scipy/LSMR for local (linear)</li>
<li>Scipy for global (non linear), LMFIT for local (linear)</li>
<li>LMFIT for global (non linear), scipy/LSMR for local (linear)</li>
<li>LMFIT for both</li>
</ol>
<p>The problem is the output, so say we test using Scipy/LSMR and compare this to Scipy/LMFIT</p>
<pre><code>#scenario 1
scipy_flag=True
lm_fit_flag=False
print(so.minimize(get_populations,args=(io,scipy_flag,lm_fit_flag),bounds=((0,np.inf),)*4,method='Nelder-Mead',options={'maxiter':1000},x0=np.array([500,2e-2,7,30])))
#scenario 2
scipy_flag=True
lm_fit_flag=True
print(so.minimize(get_populations,args=(io,scipy_flag,lm_fit_flag),bounds=((0,np.inf),)*4,method='Nelder-Mead',options={'maxiter':1000},x0=np.array([500,2e-2,7,30])))
</code></pre>
<p>The respective solutions</p>
<pre><code>#scenario 1
message: Optimization terminated successfully.
success: True
status: 0
fun: 481.9183017741435
x: [ 1.127e+03 4.283e-07 1.448e+02 6.792e+02]
nit: 279
nfev: 538
#scenario 2
message: Optimization terminated successfully.
success: True
status: 0
fun: 482.14321287433705
x: [ 7.043e+02 1.588e-11 9.453e+00 4.316e+01]
nit: 254
nfev: 521
</code></pre>
<p>We can see both runs found solutions, but the solutions are completely different from each other. Now is this a setup issue, or is this a minimizer issue? If I just plug in values (so no minimization):</p>
<pre><code>#scenario 1
scipy_flag=True
lm_fit_flag=False
print(get_populations([500,0.02,7,30],io,scipy_flag,lm_fit_flag))
#scenario 2
scipy_flag=True
lm_fit_flag=True
print(get_populations([500,0.02,7,30],io,scipy_flag,lm_fit_flag))
</code></pre>
<p>The solutions for the sum of residuals are almost identical</p>
<pre><code>#scenario 1
508.04726728449936
#scenario 2
508.04726728447883
</code></pre>
<p>So we can see here that I have set things up correctly, and both LMFIT and LSMR are fitting the same way, giving answers that agree to about 1e-10 (I presume this is approaching machine precision).</p>
<p>Now if we truncate data_1 and data_2, so we only use the first half of each dataset, we can see the solutions do end up converging:</p>
<pre><code>data_1=[[117.417, 117.423, 117.438, 117.501], [124.16, 124.231, 124.089, 124.1], [115.632, 115.645, 115.828, 115.947], [118.314, 118.317, 118.287, 118.228], [108.407, 108.419, 108.396, 108.564], [116.636, 116.648, 116.684, 116.729], [122.874, 122.905, 122.851, 122.894], [114.958, 115.059, 115.044, 115.341], [110.322, 110.258, 110.177, 110.216], [129.049, 129.13, 129.139, 129.165]]
data_2=[[9.05, 9.044, 9.057, 9.079], [9.178, 9.167, 9.16, 9.176], [7.888, 7.893, 7.911, 7.895], [7.198, 7.202, 7.197, 7.213], [7.983, 7.976, 7.979, 8.02], [8.215, 8.218, 8.223, 8.218], [8.099, 8.114, 8.109, 8.13], [8.781, 8.769, 8.778, 8.778], [9.605, 9.612, 9.59, 9.589], [8.985, 8.997, 8.998, 8.986]]
</code></pre>
<p>Using the same 2 scenarios with the same conditions as above but now using this shortened dataset, we can get identical solutions:</p>
<pre><code>#scenario 1
message: Optimization terminated successfully.
success: True
status: 0
fun: 329.3893192091266
x: [ 4.898e+02 2.367e-02 4.986e+00 3.908e+01]
nit: 121
nfev: 241
#scenario 2
message: Optimization terminated successfully.
success: True
status: 0
fun: 329.3893192092007
x: [ 4.898e+02 2.367e-02 4.986e+00 3.908e+01]
nit: 110
nfev: 225
</code></pre>
<p>So, <strong>assuming</strong> my setup is correct, the code is correct, and I have set up the solvers properly, the solution simply diverges once I add more data.</p>
<p>Is this occurring because my chi2 landscape is so poorly defined/flat, that the true "minima" is within machine precision error resulting in me getting different answers when trying to fit using different solvers?</p>
<p>Basically I'm just trying to ensure it's my data that is garbage, not my code or how I've set everything up.</p>
<p>I should note I haven't shown any code for when you use LMFIT for the global parameters (scenario 3 and 4), but I have the same issue there.</p>
| <python><numpy><scipy><minimize> | 2023-07-30 17:03:55 | 0 | 623 | samman |
76,798,715 | 10,542,284 | Getting UNEXPECTED_MESSAGE while handling SSL/TLS in python | <p>I'm writing a custom web proxy which will work with the <code>FoxyProxy</code> addon in Firefox. I generated some self-signed certificates and I think I'm loading them as they should be loaded; I have also pinned the <code>min</code> and <code>max</code> <code>TLS</code> versions, but it still fails.</p>
<pre><code>import socket
import signal
import ssl
import traceback
import select
import sys
import threading
def start_server(host, port):
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind((host, port))
server_socket.listen(4096)
print(f"Server listening on {host}:{port}")
signal.signal(signal.SIGINT, signal_handler)
while True:
client_socket, client_address = server_socket.accept()
print(f"Connection established with {client_address}")
# Create a new thread to handle the connection
t = threading.Thread(target=handle_connection, args=(client_socket,))
t.start()
def handle_connection(client_socket):
try:
data = read_full_request(client_socket) # Read full HTTP request from the client
if not data:
# No data received, close the connection gracefully
client_socket.close()
return
# Handle HTTP/HTTPS based on the destination port
dest_host, dest_port = extract_host_port(data)
print(f"h:{dest_host}, p:{dest_port}")
if dest_port == 443:
handle_https(client_socket, data)
else:
handle_http(client_socket, data)
except Exception as e:
print(f"Error occurred during connection handling: {e}")
traceback.print_exc()
finally:
# Close the client socket
client_socket.close()
def create_https_context():
context = ssl.SSLContext(ssl.PROTOCOL_TLS)
context.load_verify_locations(cafile="./keys/bundle.pem")
context.load_cert_chain(certfile="./keys/cert.pem",
keyfile="./keys/privatekey.pem")
# Explicitly set the minimum and maximum TLS versions to TLSv1.3
context.min_version = ssl.TLSVersion.TLSv1_3
context.max_version = ssl.TLSVersion.TLSv1_3
return context
def read_full_request(client_socket):
buffer = b""
while True:
data = client_socket.recv(4096)
buffer += data
if b"\r\n\r\n" in buffer:
break
return buffer
def handle_https(client_socket, data):
try:
# Handle HTTPS connections (TLS handshake)
dest_host, dest_port = extract_host_port(data)
print(f"Received HTTPS CONNECT request: {dest_host}:{dest_port}")
# Create a client socket and wrap it using the client-side SSL context
context = create_https_context()
client_ssl_socket = context.wrap_socket(client_socket, server_hostname=dest_host)
# Forward the CONNECT request to the destination server
forward(dest_host, dest_port, data, client_ssl_socket)
# After forwarding the CONNECT request, establish a tunnel
tunnel_established = False
while not tunnel_established:
server_data = client_ssl_socket.recv(4096)
forward(dest_host, dest_port, server_data, client_ssl_socket)
tunnel_established = b"\r\n\r\n" in server_data
print("HTTPS tunnel established.")
# Once the tunnel is established, forward the encrypted data without interpretation
while True:
server_data = client_ssl_socket.recv(4096)
if not server_data:
break
forward(dest_host, dest_port, server_data, client_ssl_socket)
except ssl.SSLError as e:
print(f"SSL Error occurred during HTTPS handling: {e}")
traceback.print_exc()
except Exception as e:
traceback.print_exc()
def handle_http(client_socket, data):
try:
# Handle HTTP connections
http_request = data
while b"\r\n\r\n" not in http_request:
data = client_socket.recv(4096)
http_request += data
print(f"Received data over HTTP:\n{http_request.decode('utf-8')}")
dest_host, dest_port = extract_host_port(http_request) # Pass data directly
forward(dest_host, dest_port, http_request, None)
except Exception as e:
print(f"Error occurred during HTTP handling: {e}")
traceback.print_exc()
def extract_host_port(http_request):
host, port = None, None
lines = http_request.split(b"\r\n")
for line in lines:
if line.startswith(b"Host: "):
host_port = line[6:]
if b":" in host_port:
host, port = host_port.split(b":")
port = int(port)
else:
host = host_port
port = 80
break
return host, port
def forward(hosts, ports, datas, context):
# Forward the packet to the extracted destination
forward_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
if ports == 443:
forward_socket = context.wrap_socket(forward_socket, server_hostname=hosts)
forward_socket.connect((hosts, ports))
forward_socket.sendall(datas)
forward_socket.close()
print("Packet forwarded successfully.")
def signal_handler(sig, frame):
print("Server terminated by Ctrl+C")
sys.exit(0)
if __name__ == "__main__":
HOST = "127.0.0.1"
PORT = 8080
start_server(HOST, PORT)
</code></pre>
<p>I get the error below; I have explored different approaches but have failed. Could someone lend me a hand?</p>
<p>The error:</p>
<pre class="lang-none prettyprint-override"><code>h:b'push.services.mozilla.com', p:443
Received HTTPS CONNECT request: b'push.services.mozilla.com':443
SSL Error occurred during HTTPS handling: [SSL: UNEXPECTED_MESSAGE] unexpected message (_ssl.c:997)
Traceback (most recent call last):
File "burplike.py", line 83, in handle_https
client_ssl_socket = context.wrap_socket(client_socket, server_hostname=dest_host)
File "/usr/lib/python3.10/ssl.py", line 513, in wrap_socket
return self.sslsocket_class._create(
File "/usr/lib/python3.10/ssl.py", line 1071, in _create
self.do_handshake()
File "/usr/lib/python3.10/ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: UNEXPECTED_MESSAGE] unexpected message (_ssl.c:997)
</code></pre>
| <python><ssl> | 2023-07-30 16:36:22 | 0 | 473 | Jugert Mucoimaj |
76,798,643 | 189,247 | Quantizing normally distributed floats in Python and NumPy | <p>Let the values in the array <code>A</code> be sampled from a Gaussian
distribution. I want to replace every value in <code>A</code> with one of <code>n_R</code>
"representatives" in <code>R</code> so that the total quantization error is
minimized.</p>
<p>Here is NumPy code that does linear quantization:</p>
<pre><code>n_A, n_R = 1_000_000, 256
mu, sig = 500, 250
A = np.random.normal(mu, sig, size = n_A)
lo, hi = np.min(A), np.max(A)
R = np.linspace(lo, hi, n_R)
I = np.round((A - lo) * (n_R - 1) / (hi - lo)).astype(np.uint32)
L = np.mean(np.abs(A - R[I]))
print('Linear loss:', L)
-> Linear loss: 2.3303939600700603
</code></pre>
<p>While this works, the quantization error is large. Is there a smarter
way to do it? I'm thinking that one could take advantage of <code>A</code> being
normally distributed or perhaps use an iterative process that
minimizes the "loss" function.</p>
<p><strong>Update</strong> While researching this question, I found a <a href="https://stackoverflow.com/questions/15051624/python-weighted-linspace">related question</a> about "weighting" the quantization. Adapting their method sometimes gives better quantization results:</p>
<pre><code>from scipy.stats import norm
dist = norm(loc = mu, scale = sig)
bounds = dist.cdf([mu - 3*sig, mu + 3*sig])
pp = np.linspace(*bounds, n_R)
R = dist.ppf(pp)
# Find closest matches
lhits = np.clip(np.searchsorted(R, A, 'left'), 0, n_R - 1)
rhits = np.clip(np.searchsorted(R, A, 'right') - 1, 0, n_R - 1)
ldiff = R[lhits] - A
rdiff = A - R[rhits]
I = lhits
idx = np.where(rdiff < ldiff)[0]
I[idx] = rhits[idx]
L = np.mean(np.abs(A - R[I]))
print('Gaussian loss:', L)
-> Gaussian loss: 1.6521974945326285
</code></pre>
<p>K-means clustering might be better but seems to be too slow to be
practical on large arrays.</p>
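<p>For what it's worth, here is a hedged sketch (my own, not a claimed best answer) of the iterative idea: a Lloyd-Max style loop that is fully vectorized, so it avoids the cost of generic k-means on 1-D data. Note that moving representatives to bin means minimizes the mean <em>squared</em> error; for the mean-absolute loss used above, per-bin medians would be the matching update.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(500, 250, size=100_000)
n_R = 256

# Start from the linear grid, then repeat: cut the line at midpoints
# between representatives, assign samples, and move each representative
# to the mean of its assigned samples.
R = np.linspace(A.min(), A.max(), n_R)
for _ in range(20):
    edges = (R[:-1] + R[1:]) / 2        # decision boundaries
    I = np.searchsorted(edges, A)       # index of nearest representative
    sums = np.bincount(I, weights=A, minlength=n_R)
    counts = np.bincount(I, minlength=n_R)
    nonempty = counts > 0
    R[nonempty] = sums[nonempty] / counts[nonempty]

print('Lloyd-Max loss:', np.mean(np.abs(A - R[I])))
```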
| <python><numpy><floating-point><k-means><quantization> | 2023-07-30 16:19:22 | 2 | 20,695 | Gaslight Deceive Subvert |
76,798,488 | 7,658,051 | ImportError: relative import with no known parent package - Importing a script from a directory "one level above" the current script's one | <p>In my Python script <code>L47_trial.py</code>, which has the following relative path with respect to my project:</p>
<pre><code>my_scripts/04-trial/L47_trial.py
</code></pre>
<p>there is this import line</p>
<pre><code>from ..starting_template import load_img
</code></pre>
<p>which should import the function <code>load_img</code> from the script <code>starting_template</code>, which has the following relative path with respect to my project:</p>
<pre><code>my_scripts/starting_template.py
</code></pre>
<p>So, to be clear:</p>
<pre><code>- my_scripts/
- 04-trial/
- L47_trial.py
- starting_template.py
</code></pre>
<p>However, when I run</p>
<pre><code>python my_scripts/04-trial/L47_trial.py
</code></pre>
<p>from my project directory (the folder one level upper with respect to <code>my_scripts</code>), I get this error:</p>
<pre><code>from ..starting_template import load_img
ImportError: attempted relative import with no known parent package
</code></pre>
<p>My thinking is that as I run the file, Python should do a relative import starting from the path of <code>my_scripts/04-trial/L47_trial.py</code>, so,<br>
if <code>starting_template</code> were in the same directory as <code>L47_trial.py</code> (that is, <code>04-trial/</code>), I would do</p>
<pre><code>from starting_template import load_img
</code></pre>
<p>but since <code>starting_template</code> is one level upper, then I would expect</p>
<pre><code>from ..starting_template import load_img
</code></pre>
<p>to work.</p>
<p>What is the problem with my import?</p>
<p>Is it forbidden in python to import a script "from directories one level above the current script's one"?</p>
<h2>What I tried</h2>
<p>1)</p>
<p>I tried to change</p>
<pre><code>from ..starting_template import load_img
</code></pre>
<p>to</p>
<pre><code>from my_scripts.starting_template import load_img
</code></pre>
<p>but I get</p>
<pre><code>from my_scripts.resources.starting_template import load_img
ModuleNotFoundError: No module named 'my_scripts'
</code></pre>
<p>Why can't it find the <code>my_scripts</code> folder?</p>
<p>2)</p>
<p>I tried to move <code>starting_template.py</code> into the folder <code>my_scripts/resources/</code> and change</p>
<pre><code>from ..starting_template import load_img
</code></pre>
<p>into</p>
<pre><code>from my_scripts.resources.starting_template import load_img # refactoring made by VScode
</code></pre>
<p>but I get again</p>
<pre><code>from my_scripts.resources.starting_template import load_img
ModuleNotFoundError: No module named 'my_scripts'
</code></pre>
<h2>Notes</h2>
<p><strong>Note</strong>: I have verified that moving <code>starting_template.py</code> into <code>my_scripts/04-trial</code> and changing</p>
<pre><code>from ..starting_template import load_img
</code></pre>
<p>to</p>
<pre><code>from starting_template import load_img
</code></pre>
<p>solves the issue, but I would like to keep the file where it is.</p>
<p><strong>Note</strong>: The directories <code>my_scripts</code> and <code>my_scripts/04-trial</code> each contain an empty <code>__init__.py</code> file.</p>
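<p>For reference (illustrative, not the fix): the values Python actually uses to resolve relative imports can be inspected directly. When a file is run as a script there is no parent package for <code>..</code> to climb out of, and <code>sys.path[0]</code> is the script's own directory rather than the project root.</p>

```python
import sys

# Put this at the top of the script and run it both ways to compare.
print(__name__)      # '__main__' when run directly
print(__package__)   # None or '' when run directly -> no parent package
print(sys.path[0])   # when run directly: the script's own directory
```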
| <python><import> | 2023-07-30 15:38:48 | 1 | 4,389 | Tms91 |
76,798,465 | 16,383,578 | How to combine overlapping ranges efficiently? | <p>I have data from multiple CSV files and I need to merge the tables into one table. The data in question is a text dump of the GeoLite2 database; there are literally millions of rows, and simply loading the data into lists takes 2927 MiB.</p>
<p>The tables contain information about IP networks, some tables contain information about ASN, some about city, and some others about country, these tables have different keys (IP networks), and they may contain common keys, I intend to merge these tables into one table containing information about ASN, country and city of all networks listed.</p>
<p>The question is related to my previous <a href="https://stackoverflow.com/questions/76693414/how-to-optimize-splitting-overlapping-ranges">question</a>, but it is different.</p>
<p>Imagine infinitely many boxes arranged in a line; they are numbered using unique integers, and all are initially empty. This time, all boxes can hold infinitely many values, but they can only hold unique values. That is, if you put A into box 0, box 0 contains A, but after that, no matter how many times you put A into box 0, box 0 always contains exactly 1 instance of A. But if you put B into box 0, box 0 now contains A and B. But if you put B into the box again, box 0 still contains 1 instance of A and 1 instance of B.</p>
<p>Now there are many triplets. The first two elements of each are integers corresponding to the start and end of an integer range (inclusive); each triplet describes a contiguous range of boxes (the number of every box is the number of the previous box plus one) that all receive the same object.</p>
<p>For example, <code>(0, 10, 'A')</code> means boxes 0 to 10 contain an instance of <code>'A'</code>.</p>
<p>The task is to combine the information from the triplets and describe the state of the boxes in the least amount of triplets, in this case the third elements are <code>set</code>s.</p>
<p>Input <code>(0, 10, 'A')</code> -> Output <code>(0, 10, {'A'})</code>, explanation: boxes 0 to 10 contain an instance of <code>'A'</code>.</p>
<p>Input <code>(0, 10, 'A'), (11, 20, 'A')</code> -> Output <code>(0, 20, {'A'})</code>, explanation: boxes 0 to 10 contain an instance of <code>'A'</code>, and boxes 11 to 20 also contain an instance of <code>'A'</code>, 11 is 10 + 1, so boxes 0 to 20 contain an instance of <code>'A'</code>.</p>
<p>Input <code>(0, 10, 'A'), (20, 30, 'A')</code> -> Output <code>(0, 10, {'A'}), (20, 30, {'A'})</code>, explanation: boxes 0 to 10 contain an instance of <code>'A'</code>, and boxes 20 to 30 also contain an instance of <code>'A'</code>, all other boxes are empty, and 20 is not adjacent to 10, don't merge.</p>
<p>Input <code>(0, 10, 'A'), (11, 20, 'B')</code> -> Output <code>(0, 10, {'A'}), (11, 20, {'B'})</code></p>
<p>Input <code>(0, 10, 'A'), (2, 8, 'B')</code> -> Output <code>(0, 1, {'A'}), (2, 8, {'A', 'B'}), (9, 10, {'A'})</code>, explanation: boxes 0 to 10 have <code>'A'</code>, while boxes 2 to 8 have <code>'B'</code>, so boxes 2 to 8 have <code>{'A', 'B'}</code>.</p>
<p>Input <code>(0, 10, 'A'), (5, 20, 'B')</code> -> Output <code>(0, 4, {'A'}), (5, 10, {'A', 'B'}), (11, 20, {'B'})</code> explanation: same as above.</p>
<p>Input <code>(0, 10, 'A'), (5, 10, 'A')</code> -> Output <code>(0, 10, {'A'})</code>, explanation: boxes 0 to 10 have <code>'A'</code>, the second triplet adds no new information and is garbage, discard it.</p>
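<p>For clarity, the rules above can be checked mechanically. The snippet below is just the <code>defaultdict(set)</code> logic of the <code>bruteforce_combine</code> reference further down, restated as a tiny self-contained demo of the expected semantics, not a proposed solution.</p>

```python
from collections import defaultdict

def bruteforce(ranges):
    boxes = defaultdict(set)
    for start, end, data in ranges:
        for n in range(start, end + 1):
            boxes[n].add(data)
    # Collapse runs of adjacent boxes holding identical sets.
    out = []
    for n in sorted(boxes):
        if out and out[-1][1] + 1 == n and out[-1][2] == boxes[n]:
            out[-1] = (out[-1][0], n, out[-1][2])
        else:
            out.append((n, n, boxes[n]))
    return out

print(bruteforce([(0, 10, 'A'), (2, 8, 'B')]))
```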
<p>My current code that produces correct output for some test cases but raises <code>KeyError</code> for others:</p>
<pre><code>import random
from collections import defaultdict
from typing import Any, List, Tuple
def get_nodes(ranges: List[Tuple[int, int, Any]]) -> List[Tuple[int, int, Any]]:
nodes = []
for ini, fin, data in ranges:
nodes.extend([(ini, False, data), (fin, True, data)])
return sorted(nodes)
def combine_gen(ranges):
nodes = get_nodes(ranges)
stack = set()
actions = []
for node, end, data in nodes:
if not end:
if (action := (data not in stack)):
if stack and start < node:
yield start, node - 1, stack.copy()
stack.add(data)
start = node
actions.append(action)
elif actions.pop(-1):
if start <= node:
yield start, node, stack.copy()
start = node + 1
stack.remove(data)
def merge(segments):
start, end, data = next(segments)
for start2, end2, data2 in segments:
if end + 1 == start2 and data == data2:
end = end2
else:
yield start, end, data
start, end, data = start2, end2, data2
yield start, end, data
def combine(ranges):
return list(merge(combine_gen(ranges)))
</code></pre>
<p>It produces correct output for the following test cases:</p>
<pre><code>sample1 = [(0, 20, 'A'), (10, 40, 'B'), (32, 50, 'C'), (40, 50, 'D'), (45, 50, 'E'), (70, 80, 'F'), (90, 100, 'G'), (95, 120, 'H'), (131, 140, 'I'), (140, 150, 'J')]
sample2 = [(0, 10, 'A'), (0, 1, 'B'), (2, 5, 'C'), (3, 4, 'C'), (6, 7, 'C'), (8, 8, 'D'), (110, 150, 'E'), (250, 300, 'C'), (256, 270, 'D'), (295, 300, 'E'), (500, 600, 'F')]
sample3 = [(0, 100, 'A'), (10, 25, 'B'), (15, 25, 'C'), (20, 25, 'D'), (30, 50, 'E'), (40, 50, 'F'), (60, 80, 'G'), (150, 180, 'H')]
sample4 = [(0, 16, 'red'), (0, 4, 'green'), (2, 9, 'blue'), (2, 7, 'cyan'), (4, 9, 'purple'), (6, 8, 'magenta'), (9, 14, 'yellow'), (11, 13, 'orange'), (18, 21, 'green'), (22, 25, 'green')]
</code></pre>
<p>I won't include the expected output for them here, run my code and you will find out what the outputs are, the outputs are correct.</p>
<p>I have written a function to make test cases and a guaranteed correct but inefficient solution, and my efficient code raises <code>KeyError</code> when fed machine-generated inputs.</p>
<pre><code>def make_generic_case(num, lim, dat):
ranges = []
for _ in range(num):
start = random.randrange(lim)
end = random.randrange(lim)
if start > end:
start, end = end, start
ranges.append([start, end, random.randrange(dat)])
ranges.sort(key=lambda x: (x[0], -x[1]))
return ranges
def bruteforce_combine(ranges):
boxes = defaultdict(set)
for start, end, data in ranges:
for n in range(start, end + 1):
boxes[n].add(data)
boxes = sorted(boxes.items())
output = []
lo, cur = boxes.pop(0)
hi = lo
for n, data in boxes:
if cur == data and n - hi == 1:
hi = n
else:
output.append((lo, hi, cur))
lo = hi = n
cur = data
output.append((lo, hi, cur))
return output
</code></pre>
<p>Because my code <em><strong>isn't working properly, I CAN'T post it on Code Review</strong></em>: Code Review only reviews working code, and mine isn't.</p>
<p><em><strong>Answers are required to use <code>make_generic_case(512, 4096, 16)</code> to get test cases and verify the proposed solution's correctness against the output of <code>bruteforce_combine</code></strong></em>; <code>bruteforce_combine</code> is by definition correct (my logic is <code>defaultdict(set)</code>).</p>
<p>What is a more efficient way to combine the overlapping ranges?</p>
<hr />
<p>Neither of the existing answers is ideal; the first gives the correct result but is very inefficient and will never finish processing my millions of rows:</p>
<pre><code>In [5]: for _ in range(256):
...: case = make_generic_case(512, 4096, 16)
...: assert bruteforce_combine(case) == combine(case)
In [6]: case = make_generic_case(512, 4096, 16)
In [7]: %timeit combine(case)
9.3 ms ยฑ 35 ยตs per loop (mean ยฑ std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>The second is much more efficient, but I haven't tested it thoroughly yet.</p>
<hr />
<p>I have confirmed the correctness of the code from the second answer, and I have rewritten it to the following:</p>
<pre><code>from collections import Counter
def get_nodes(ranges):
nodes = []
for start, end, label in ranges:
nodes.extend(((start, 0, label), (end + 1, 1, label)))
return sorted(nodes)
def combine(ranges):
if not ranges:
return []
nodes = get_nodes(ranges)
labels = set()
state = Counter()
result = []
start = nodes[0][0]
for node, is_end, label in nodes:
state[label] += [1, -1][is_end]
count = state[label]
if (is_end, count) in {(0, 1), (1, 0)}:
if start < node:
if not count or labels:
result.append((start, node - 1, labels.copy()))
start = node
(labels.remove, labels.add)[count](label)
return result
</code></pre>
<p>And it is still very inefficient; I need to process literally millions of rows:</p>
<pre><code>In [2]: for _ in range(128):
...: case = make_generic_case(256, 4096, 16)
...: assert bruteforce_combine(case) == combine(case)
In [3]: for _ in range(2048):
...: case = make_generic_case(512, 2048, 16)
...: assert bruteforce_combine(case) == combine(case)
In [4]: case = make_generic_case(2048, 2**64, 32)
In [5]: %timeit combine(case)
4.19 ms ยฑ 112 ยตs per loop (mean ยฑ std. dev. of 7 runs, 100 loops each)
In [6]: case = make_generic_case(32768, 2**64, 32)
In [7]: %timeit combine(case)
116 ms ยฑ 1.11 ms per loop (mean ยฑ std. dev. of 7 runs, 10 loops each)
In [8]: case = make_generic_case(1048576, 2**64, 32)
In [9]: %timeit combine(case)
5.12 s ยฑ 30.3 ms per loop (mean ยฑ std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>I have data from 6 gigantic CSV files, the total number of rows is:</p>
<pre><code>In [74]: 495209+129884+3748518+1277097+429639+278661
Out[74]: 6359008
</code></pre>
<p>That is well over 6 million; merely loading the data into RAM takes 2.9 GiB, and I have only 16 GiB of RAM. I need a solution that is much more efficient, both in time complexity and space complexity.</p>
| <python><python-3.x><algorithm><performance> | 2023-07-30 15:32:57 | 3 | 3,930 | ฮฮญฮฝฮท ฮฮฎฮนฮฝฮฟฯ |
76,798,323 | 616,460 | Built-in function to find first character NOT in a set | <p>Is there a built-in function in Python (3.11.1) that, given a string and a set of characters and a starting position, can find the index of the first character in the string that is <em>not</em> in the set of characters, beginning the search at the specified start position?</p>
<p>C++ has <a href="https://cplusplus.com/reference/string/string/find_first_not_of/" rel="nofollow noreferrer"><code>string::find_first_not_of</code></a> and Fortran has <a href="https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/FORTRAN/spicelib/ncpos.html" rel="nofollow noreferrer"><code>NCPOS</code></a>, is there a Python equivalent?</p>
<p>For example, if I did (making up <code>ncpos</code>):</p>
<pre><code> pos = " 012345 hello".ncpos(" eh", 10)
</code></pre>
<p>Then <code>pos</code> would have the value <code>14</code> (the first character that isn't in <code>" eh"</code>, starting at location 10). Is there something like this?</p>
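<p>For illustration only (this is a sketch, not a claimed built-in): a one-line equivalent can be built on the standard <code>re</code> module with a negated character class. The example string below is my own, since the exact indices in the question's example depend on its original spacing.</p>

```python
import re

def ncpos(s: str, chars: str, start: int = 0) -> int:
    """Index of the first character of s, at or after start, that is
    NOT in chars; -1 if every remaining character is in chars."""
    m = re.search('[^' + re.escape(chars) + ']', s[start:])
    return start + m.start() if m else -1

print(ncpos("  eeehhh  world", " eh"))      # -> 10 (the 'w')
print(ncpos("  eeehhh  world", " eh", 11))  # -> 11 (the 'o')
```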
| <python><string> | 2023-07-30 14:59:51 | 1 | 40,602 | Jason C |
76,798,185 | 12,313,380 | Can't modify html file with python script | <p>I'm attempting to change the text of certain elements with specific IDs in my HTML file. Despite running the code without any reported errors, the elements' content remains unchanged, and the modifications do not take effect.</p>
<p>Also, when I run my code it says "Saved correctly" and "HTML file successfully updated".</p>
<p>html code:</p>
<pre><code>
<div class="container">
<div class="row">
<div class="col-md-12">
<div class="mt-n6 bg-white mb-10 rounded-3 shadow-sm p-lg-10 p-5">
<div class="mb-8">
<!DOCTYPE html>
<html>
<head>
<title>
Display HTML Inside iFrame
</title>
<style>
#iframeBox {
border: 2px solid #ccc;
width: 100%; /* You can adjust the width as needed */
height: 400px; /* You can adjust the height as needed */
}
</style>
</head>
<body>
<div class="center-container" style="text-align: center; margin-bottom: 10px">
<label for="iframeBox" style="font-weight: bold">
<h3 style="color: black">
Veuillez signer le
contrat ci-dessous
</h3>
</label>
</div>
<iframe frameborder="0" id="iframeBox">
</iframe>
<script>
// Get the iframe element
const iframeBox = document.getElementById('iframeBox');
// Set the content of the iframe to your HTML code
iframeBox.contentDocument.open();
iframeBox.contentDocument.write(`
<body class="c35 doc-content">
<div><p class="c20"><span class="c1 c26">ย ย ย ย ย ย ย ย - </span><span class="c1 c26">ย -</span><span
style="overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 97.33px; height: 97.33px;"></span>
</p></div>
<p class="c19"><span class="c13 c24">CONTRAT DE PRรT</span></p>
<p class="c4 c18"><span class="c1 c0"></span></p>
<p class="c4"><span class="c0">INTERVENU LE </span><span class="c0 c16" id="date1">___ AVRIL 2023</span><span class="c0">, ร</span>
</p>
<p class="c4 c11 c18"><span class="c1 c0"></span></p>
<p class="c4 c11"><span class="c0">ENTRE:ย ย ย ย ย ย ย ย .</span><span class="c2">, constituรฉe en vertu de la </span><span
class="c2 c30">Loi canadienne sur les sociรฉtรฉs par actions)</span><span class="c2">ย ayant son siรจge social au 3, </span><span
class="c2 c12">dรปment reprรฉsentรฉe par, sa prรฉsidente, et , sa Vice-Prรฉsidente</span><span
class="c2">ย </span><span class="c1 c0">;</span></p>
<p class="c28"><span class="c2">ย ย ย ย ย ย ย ย (ci-aprรจs dรฉnommรฉe, le ยซย </span><span class="c0">Prรชteur</span><span
class="c1 c2">ย ยป)</span></p>
<p class="c4 c32"><span class="c0">ET :ย ย ย ย ย ย ย ย [</span><span class="c0 c16" id="nom1">INSรRER LE NOM</span><span
</code></pre>
<p>and here is my python code</p>
<pre><code>from bs4 import BeautifulSoup


def update_html_file(file_path, date_text, nom_text):
    with open(file_path, 'r', encoding='utf-8') as file:
        html_content = file.read()

    # Parse the HTML content using Beautiful Soup
    soup = BeautifulSoup(html_content, 'html.parser')

    # Find the element with the id 'date1' and update its text
    date_element = soup.find('span', {'id': 'date1'})
    if date_element:
        date_element.string = date_text

    # Find the element with the id 'nom1' and update its text
    nom_element = soup.find('span', {'id': 'nom1'})
    if nom_element:
        nom_element.string = nom_text

    # Serialize the updated HTML content
    updated_html = soup.prettify()

    # Write the updated HTML content back to the file
    with open(file_path, 'w', encoding='utf-8') as file:
        file.write(updated_html)


# Provide the path to your HTML file
html_file_path = 'templates/main/contrat_template.html'

# Provide the new text for date1 and nom1
new_date_text = 'NEW DATE TEXT'
new_nom_text = 'NEW NOM TEXT'

# Update the HTML content and save to the file
update_html_file(html_file_path, new_date_text, new_nom_text)
</code></pre>
| <python><html><beautifulsoup> | 2023-07-30 14:25:19 | 1 | 576 | tiberhockey |
76,798,090 | 9,877,065 | PyQt get sender/source of DragLeaveEvent | <p>Hi, I've got this bit of code in <strong>PyQt5</strong> (PyQt v5.15.7); basically I am trying to track widgets that are dragged outside the QMainWindow:</p>
<p><a href="https://i.sstatic.net/ig47d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ig47d.png" alt="enter image description here" /></a></p>
<pre><code>#!/usr/bin/env python3

from PyQt5.QtWidgets import (QApplication, QWidget, QMainWindow,
                             QVBoxLayout, QPushButton, QGridLayout
                             )
from PyQt5.QtCore import Qt, QMimeData, pyqtSignal, pyqtSlot
from PyQt5.QtGui import QDrag, QPixmap, QCursor


class DragWidget(QWidget):
    def __init__(self, *args, name, **kwargs):
        super().__init__(*args, **kwargs)

    def mouseMoveEvent(self, e):
        if e.buttons() == Qt.LeftButton:
            drag = QDrag(self)
            mime = QMimeData()
            drag.setMimeData(mime)

            pixmap = QPixmap(self.size())
            self.render(pixmap)
            drag.setPixmap(pixmap)

            drag.exec_(Qt.MoveAction)


class MainWindow(QMainWindow):
    whereDropped = pyqtSignal(int, int)
    dragleaves = pyqtSignal()

    def __init__(self):
        super().__init__()
        self.setGeometry(200, 200, 400, 500)

        self.container = QWidget()
        self.layout = QGridLayout()
        self.container.setLayout(self.layout)
        self.setCentralWidget(self.container)

        for i in range(4):
            self.x = DragWidget(self, name='widget_' + str(i))
            self.layout.addWidget(self.x)
            self.x.setObjectName('widget_' + str(i))
            self.b = QPushButton('widget___' + str(i))
            self.layout_2 = QVBoxLayout()
            self.x.setLayout(self.layout_2)
            self.layout_2.addWidget(self.b)
            self.x.show()

        self.installEventFilter(self)
        self.setAcceptDrops(True)
        self.whereDropped.connect(self.attdeta)
        self.dragleaves.connect(self.removeParent)

    @pyqtSlot()
    def removeParent(self):
        print("\n\nself.dragleaves.connect(self.removeParent))")
        pass

    @pyqtSlot(int, int)
    def attdeta(self, tupx, tupy):
        print("\n\nself.whereDropped.connect(self.attdeta)")
        print('tupxy : ', tupx, tupy)

    def dropEvent(self, e):
        pos = e.pos()
        widget = e.source()
        print('\n\ndropEvent _________')
        print('widget = e.source() :', widget)
        print('e.pos() : ', pos, '\n\n')
        self.whereDropped.emit(e.pos().x(), e.pos().y())
        e.accept()

    def dragEnterEvent(self, e):
        print('dragEnterEvent _________', e.source(), '\n')
        print('dragEnterEvent _________event-source-name :', e.source().objectName())
        print('e.pos() : ', e.pos().x())
        print('e.pos() : ', e.pos().y(), '\n\n')
        e.accept()

    def dragLeaveEvent(self, event):
        print('\n\nDragLeaveEvent event : ', event)
        # print(event.sender()) ### AttributeError: 'QDragLeaveEvent' object has no attribute 'sender'
        # print(event.source()) ### AttributeError: 'QDragLeaveEvent' object has no attribute 'source'
        print("Drag left at: " + str(self.mapFromGlobal(QCursor.pos())))
        print('self.dragleaves.emit()')
        self.whereDropped.emit(self.mapFromGlobal(QCursor.pos()).x(), self.mapFromGlobal(QCursor.pos()).y())  ## see [https://stackoverflow.com/questions/50022465/how-do-i-get-the-exit-point-from-a-qdragleaveevent][1]
        self.dragleaves.emit()
        event.accept()


app = QApplication([])
w = MainWindow()
w.show()
app.exec_()
</code></pre>
<p>Is there any way of getting the sender/source widget of <code>dragLeaveEvent</code>, like there is for <code>dragEnterEvent</code>? :</p>
<pre><code>def dragEnterEvent(self, e):
    print('dragEnterEvent _________', e.source(), '\n')
    print('dragEnterEvent _________event-source-name :', e.source().objectName())
    print('e.pos() : ', e.pos().x())
    print('e.pos() : ', e.pos().y(), '\n\n')
    e.accept()
</code></pre>
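<p><code>QDragLeaveEvent</code> genuinely carries no <code>source()</code>; a common workaround is to remember the dragged widget yourself when the drag starts and read it back when the leave event fires. A Qt-free sketch of that pattern (the strings stand in for real widgets; in the code above, <code>DragWidget.mouseMoveEvent</code> would call <code>begin_drag(self)</code> before <code>drag.exec_()</code>, and <code>MainWindow.dragLeaveEvent</code> would call <code>end_drag()</code>):</p>

```python
class DragTracker:
    """Remember the widget being dragged, since QDragLeaveEvent has no source()."""
    current_source = None

    @classmethod
    def begin_drag(cls, widget):
        # Called where the drag starts, e.g. in mouseMoveEvent before exec_()
        cls.current_source = widget

    @classmethod
    def end_drag(cls):
        # Called in dragLeaveEvent (or dropEvent) to recover the source
        widget, cls.current_source = cls.current_source, None
        return widget


DragTracker.begin_drag("widget_0")   # stand-in for a real QWidget
source = DragTracker.end_drag()      # what dragLeaveEvent would use
print(source)  # widget_0
```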
| <python><qt><pyqt><pyqt5> | 2023-07-30 14:01:12 | 2 | 3,346 | pippo1980 |
76,797,935 | 3,758,232 | `sorted()` objects with custom `__lt__` logic gives inconsistent results | <p>I have a list of strings that I want sorted so that, if string A is entirely contained in string B and B is longer, B is sorted before A; e.g. <code>abc</code> would come before <code>ab</code> but after <code>aba</code>.</p>
<p>To do that, I implemented a class with a custom <code>__lt__()</code> method that I can use on <code>sorted()</code>:</p>
<pre class="lang-py prettyprint-override"><code>class Token(str):
"""
Token class: minimal unit of text parsing.
This class overrides the `<` operator for strings, so that sorting is done
in a way that prioritizes a longer string over a shorter one with identical
root.
"""
def __init__(self, content):
self.content = content
def __lt__(self, other):
logger.debug(f"lt called on {self.content}, {other.content}")
if self.content == other.content:
return False
# If one of the strings is entirely contained in the other string, then
# the containing string has precedence (is "less").
if self.content in other.content:
logger.debug(f"{other.content} comes before {self.content}")
return False
# Other way around.
if other.content in self.content:
logger.debug(f"{self.content} comes before {other.content}")
return True
# If neither of the strings contains the other, perform a normal
# string comparison.
logger.debug(f"neither {other.content} nor {self.content} are subs.")
return self.content < other.content
def __hash__(self):
return hash(self.content)
</code></pre>
<p>For safety I implemented all other comparison functions, which I have omitted here, but none of them is called in <code>sorted()</code> anyway.</p>
<p>then I use a sample <code>Token</code> list:</p>
<pre class="lang-py prettyprint-override"><code>sl = ['ABCD', 'ZABCD', 'AB', 'A', 'BCDE', 'BCD', 'BEFGH', 'ZAB', 'B']
tokens = [Token(s) for s in sl]
sorted(tokens)
DEBUG:lt called on ABCD, ZABCD
DEBUG:ZABCD comes before ABCD
DEBUG:lt called on AB, ABCD
DEBUG:ABCD comes before AB
DEBUG:lt called on A, AB
DEBUG:AB comes before A
DEBUG:lt called on BCDE, A
DEBUG:neither A nor BCDE are subs.
DEBUG:lt called on BCD, BCDE
DEBUG:BCDE comes before BCD
DEBUG:lt called on BEFGH, BCD
DEBUG:neither BCD nor BEFGH are subs.
DEBUG:lt called on ZAB, BEFGH
DEBUG:neither BEFGH nor ZAB are subs.
DEBUG:lt called on B, ZAB
DEBUG:ZAB comes before B
-> ['ZABCD', 'ABCD', 'AB', 'A', 'BCDE', 'BCD', 'BEFGH', 'ZAB', 'B']
</code></pre>
<p>Some elements have been sorted correctly, some others not (<code>ZAB</code> should be before <code>AB</code>). It looks like this is dependent on the original order, because if I change the order of the original list, the result is different:</p>
<pre class="lang-py prettyprint-override"><code>sl = ['B', 'BCD', 'BCDE', 'BEFGH', 'A', 'AB', 'ABCD', 'ZABCD', 'ZAB']
tokens = [Token(s) for s in sl]
sorted(tokens)
DEBUG:lt called on BCD, B
DEBUG:BCD comes before B
DEBUG:lt called on BCDE, BCD
DEBUG:BCDE comes before BCD
DEBUG:lt called on BEFGH, BCDE
DEBUG:neither BCDE nor BEFGH are subs.
DEBUG:lt called on BEFGH, BCD
DEBUG:neither BCD nor BEFGH are subs.
DEBUG:lt called on BEFGH, B
DEBUG:BEFGH comes before B
DEBUG:lt called on A, BEFGH
DEBUG:neither BEFGH nor A are subs.
DEBUG:lt called on A, BCD
DEBUG:neither BCD nor A are subs.
DEBUG:lt called on A, BCDE
DEBUG:neither BCDE nor A are subs.
DEBUG:lt called on AB, BCD
DEBUG:neither BCD nor AB are subs.
DEBUG:lt called on AB, BCDE
DEBUG:neither BCDE nor AB are subs.
DEBUG:lt called on AB, A
DEBUG:AB comes before A
DEBUG:lt called on ABCD, BCD
DEBUG:ABCD comes before BCD
DEBUG:lt called on ABCD, A
DEBUG:ABCD comes before A
DEBUG:lt called on ABCD, AB
DEBUG:ABCD comes before AB
DEBUG:lt called on ZABCD, BCDE
DEBUG:neither BCDE nor ZABCD are subs.
DEBUG:lt called on ZABCD, BEFGH
DEBUG:neither BEFGH nor ZABCD are subs.
DEBUG:lt called on ZABCD, B
DEBUG:ZABCD comes before B
DEBUG:lt called on ZAB, BCD
DEBUG:neither BCD nor ZAB are subs.
DEBUG:lt called on ZAB, ZABCD
DEBUG:ZABCD comes before ZAB
DEBUG:lt called on ZAB, B
DEBUG:ZAB comes before B
-> ['ABCD', 'AB', 'A', 'BCDE', 'BCD', 'BEFGH', 'ZABCD', 'ZAB', 'B']
</code></pre>
<p>The order is now different, for example, <code>ZABCD</code> comes after <code>ABCD</code>. Note that the comparison between the two is not called in the debug statements.</p>
<p>Why would this comparison be skipped depending on the original order?</p>
<p>EDIT: FYI, I'm using Python 3.11.3.</p>
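<p>One observation that may help: the containment rule combined with the plain-string fallback is not a strict weak ordering, which <code>sorted()</code> assumes; since Timsort only performs the comparisons it needs, the result then depends on the input order. A stripped-down sketch of the same <code>__lt__</code> logic (logging removed, comparing the <code>str</code> value directly) exhibits a comparison cycle:</p>

```python
class Token(str):
    # Same comparison rule as above, with the logging and the .content
    # attribute stripped out (a str subclass can compare itself directly)
    def __lt__(self, other):
        if self == other:
            return False
        if self in other:        # the containing string sorts first
            return False
        if other in self:
            return True
        return str.__lt__(self, other)  # plain string comparison fallback


a, b, c = Token("ZAB"), Token("AB"), Token("BCD")
print(a < b, b < c, c < a)  # True True True: a cycle, so no consistent order exists
```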
| <python><sorting> | 2023-07-30 13:19:14 | 1 | 928 | user3758232 |
76,797,934 | 1,581,875 | add_dll_directory to path where toto.pyd is but error "No module named toto" at execution of python script | <p>I use 64-bit Python 3.11.4. I built a toto.pyd with pybind11 (latest commit) in C++. I call a function from this library in a Python script. When the library pyd file is in the same folder as the Python script, no problem, the code runs as expected. If I move the library file <code>toto.pyd</code> to <code>C:\path\to\toto</code> for instance, add</p>
<pre><code>import os
os.add_dll_directory(r"C:\path\to\toto")
</code></pre>
<p>in my script before the <code>import toto</code> line and execute the python script I surprisingly have a <code>ModuleNotFoundError: No module named 'toto'</code> error at the <code>import toto</code> line in my python script.</p>
<p>(This happens btw independently of the fact that <code>C:\path\to\toto</code> is in the <code>PATH</code> or <code>PYTHONPATH</code> or not.)</p>
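<p>For what it's worth, <code>os.add_dll_directory</code> only influences where Windows searches for DLLs that an already-located extension module <em>depends on</em>; the interpreter finds <code>toto.pyd</code> itself via <code>sys.path</code>. A sketch (the folder path is the hypothetical one from the question):</p>

```python
import os
import sys

module_dir = r"C:\path\to\toto"  # hypothetical folder that holds toto.pyd

# Only helps Windows locate DLLs that toto.pyd itself depends on
if hasattr(os, "add_dll_directory") and os.path.isdir(module_dir):
    os.add_dll_directory(module_dir)

# This is what lets the interpreter find toto.pyd in the first place
if module_dir not in sys.path:
    sys.path.append(module_dir)

# import toto  # should now resolve, assuming toto.pyd lives in module_dir
```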
| <python><c++><python-3.x><pybind11> | 2023-07-30 13:19:01 | 1 | 3,881 | Olรณrin |
76,797,456 | 292,291 | In SQLAlchemy 2, how do I subquery in insert .. on conflict update | <p>I want to build an insert statement with a subquery, like:</p>
<pre><code>INSERT INTO services
(name, tags)
VALUES
('service 1', '{"new one"}')
ON CONFLICT (name) DO UPDATE SET
name = EXCLUDED.name,
tags = (
SELECT coalesce(ARRAY_AGG(x), ARRAY[]::VARCHAR[])
FROM
UNNEST(EXCLUDED.tags || ARRAY['new 2']) AS x
LEFT JOIN
UNNEST(ARRAY['new one']) AS y
ON x = y
WHERE y IS NULL
)
RETURNING *
</code></pre>
<p>How do I do something like that in ORM?</p>
<p>I tried</p>
<pre><code>stmt = insert(Service).values(
    name=input.name,
)
stmt = stmt.on_conflict_do_update(
    index_elements=[Service.name],
    set_={
        Service.name: stmt.excluded.name,
        Service.tags: select(
            func.array_agg(column("t")),
        ).select_from(
            func.unnest(
                Service.tags + tags_list
            ).alias("t")
        ).outerjoin(
            func.unnest(
                remove_tags
            ).alias("r"),
            column("t") == column("r")
        ).where(column("r") == None)
    }
).returning(Service)
</code></pre>
<p>But I am getting</p>
<pre><code>asyncpg.exceptions.AmbiguousFunctionError: function unnest(unknown) is not unique
HINT: Could not choose a best candidate function. You might need to add explicit type casts.
</code></pre>
<p>The generated SQL looks similar and seems to work fine:</p>
<pre><code>INSERT INTO services (name, tags)
VALUES ($1::VARCHAR, $2::VARCHAR []) ON CONFLICT (name) DO
UPDATE
SET name = excluded.name,
tags = (
SELECT array_agg(t) AS array_agg_1
FROM unnest(services.tags || $3::VARCHAR []) AS t
LEFT OUTER JOIN unnest($4) AS r ON t = r
WHERE r IS NULL
)
RETURNING services.name,
services.tags,
services.id,
services.created_at,
services.updated_at
</code></pre>
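<p>The <code>AmbiguousFunctionError</code> points at the untyped bind parameter <code>$4</code>: <code>unnest($4)</code> gives Postgres no element type to resolve the overload. One sketch of a fix (names are illustrative, not the exact statement from the question) is to give that parameter an explicit array type so the dialect renders <code>unnest(CAST(... AS VARCHAR[]))</code>:</p>

```python
from sqlalchemy import String, bindparam, cast, column, func, select
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ARRAY

# Hypothetical stand-in for the tags you want removed
remove_tags = ["new one"]

# unnest(CAST(:remove_tags AS VARCHAR[])) instead of the untyped unnest($4)
removed = func.unnest(
    cast(bindparam("remove_tags", remove_tags), ARRAY(String))
).alias("r")

stmt = select(column("r")).select_from(removed)
sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)
```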
| <python><postgresql><sqlalchemy> | 2023-07-30 11:08:28 | 1 | 89,109 | Jiew Meng |
76,797,415 | 20,920,790 | How to make schema graph with sqlalchemy_schemadisplay? | <p>How to make schema graph?</p>
<pre><code>meta = MetaData()
meta.reflect(bind=engine, views=True, resolve_fks=False)

graph = create_schema_graph(
    metadata=meta,
    show_indexes=False,
    concentrate=False,
    rankdir='LR'
)
graph.write_png('dbschema.png')

imgplot = plt.imshow(mpimg.imread('dbschema.png'))
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.show()
</code></pre>
<p>I get error in create_schema_graph:</p>
<pre><code>AttributeError: 'MetaData' object has no attribute 'bind'
</code></pre>
<p>That's a shame, because I've done this before, but I can't figure out what I'm doing wrong this time.</p>
<p><a href="https://i.sstatic.net/XUOzz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XUOzz.png" alt="enter image description here" /></a></p>
| <python><sqlalchemy><metadata><database-metadata> | 2023-07-30 10:53:29 | 1 | 402 | John Doe |
76,797,243 | 8,068,825 | Pandas - Replace outliers on a groupby basis with the largest and lowest values in the interquartile range | <p>So I'm trying to replace outliers on a groupby basis. If a value is an outlier above the range for a particular group, it should be set to quartile 3 + 1.5 * interquartile range, and if it's below the range it should be set to quartile 1 - 1.5 * interquartile range. In the image below, ignore the row with index 6 because it's null for <code>temperature</code> and <code>windspeed</code>; but say we group by <code>event</code>, the median for snow is 20, and the upper bound for determining an outlier is 24: then we should set the row with id=2 to 24, because its <code>temperature</code> of 28 is an outlier.</p>
<p><a href="https://i.sstatic.net/qxi56.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qxi56.png" alt="enter image description here" /></a></p>
| <python><pandas> | 2023-07-30 09:57:20 | 1 | 733 | Gooby |
76,797,213 | 9,072,753 | How to implement thread counted singleton with constructor and destructor? | <p>I have multiple threads that construct a parametrized object and call start and stop on it. The issue is that the underlying service for each parameter should be started only on the first start and stopped only on the last stop, exactly once each. I tried the following:</p>
<pre><code># /usr/bin/env python3
import multiprocessing.pool
import random
import threading
import time
import uuid


def log(args):
    print(f"{threading.get_ident()}: {args}")


class Myobj:
    lock = threading.Lock()
    count = 0

    def __init__(self, name: str):
        self.name = name

    def log(self, args):
        log(f"{self.name}: {args}")

    def start(self):
        with self.lock:
            self.count += 1
            if self.count == 1:
                self.value = uuid.uuid1()
                self.log(f"starting {self.value}")
            self.log(f"ref up {self.count}")

    def stop(self):
        with self.lock:
            self.count -= 1
            if self.count == 0:
                self.log(f"stopping {self.value}")
            self.log(f"ref down {self.count}")


def thread(i):
    # references service with specific name
    myobj = Myobj(f"name{i % 2}")
    # only the first thread that gets here should start the service with that name
    myobj.start()
    # some_computation()
    time.sleep(random.uniform(0, 2))
    # only the last thread that gets here should stop the service with that name
    myobj.stop()


size = 6
with multiprocessing.pool.ThreadPool(size) as tp:
    tp.map(thread, range(size))
</code></pre>
<p>However, the counter is not unique:</p>
<pre><code>140385706604224: name0: starting ab142d3c-2ebd-11ee-9d78-901b0e12b878
140385706604224: name0: ref up 1
140385698211520: name1: starting ab143264-2ebd-11ee-a67e-901b0e12b878
140385698211520: name1: ref up 1
140385689818816: name0: starting ab1435fc-2ebd-11ee-99f3-901b0e12b878
140385689818816: name0: ref up 1
140385681426112: name1: starting ab1439b2-2ebd-11ee-932c-901b0e12b878
140385681426112: name1: ref up 1
140385673033408: name0: starting ab143d36-2ebd-11ee-9959-901b0e12b878
140385673033408: name0: ref up 1
140385664640704: name1: starting ab1443f8-2ebd-11ee-a1b9-901b0e12b878
140385664640704: name1: ref up 1
140385673033408: name0: stopping ab143d36-2ebd-11ee-9959-901b0e12b878
140385673033408: name0: ref down 0
140385706604224: name0: stopping ab142d3c-2ebd-11ee-9d78-901b0e12b878
140385706604224: name0: ref down 0
140385689818816: name0: stopping ab1435fc-2ebd-11ee-99f3-901b0e12b878
140385689818816: name0: ref down 0
140385681426112: name1: stopping ab1439b2-2ebd-11ee-932c-901b0e12b878
140385681426112: name1: ref down 0
140385664640704: name1: stopping ab1443f8-2ebd-11ee-a1b9-901b0e12b878
140385664640704: name1: ref down 0
140385698211520: name1: stopping ab143264-2ebd-11ee-a67e-901b0e12b878
140385698211520: name1: ref down 0
</code></pre>
<p>Ideally, the starting and stopping lines for <code>name0</code> and <code>name1</code> should each be printed exactly once; output like the following would mean that service <code>name0</code> and service <code>name1</code> were each started once and then stopped once. The relative order between the <code>name0</code> and <code>name1</code> services is not relevant. So only output like the following:</p>
<pre><code> 140385706604224: name0: starting ab142d3c-2ebd-11ee-9d78-901b0e12b878
140385698211520: name1: starting ab143264-2ebd-11ee-a67e-901b0e12b878
140385681426112: name0: stopping ab142d3c-2ebd-11ee-9d78-901b0e12b878
140385664640704: name1: stopping ab143264-2ebd-11ee-a67e-901b0e12b878
</code></pre>
<p>What would be a good design for a class that starts and stops a uniquely named service exactly once across multiple threads? Can this be implemented as a generic class?</p>
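<p>One thing worth knowing: <code>self.count += 1</code> reads the class attribute <code>count</code> but then binds a new <em>instance</em> attribute, so every <code>Myobj</code> starts its own counter at zero; and even without that, each thread builds a separate instance, so nothing is shared per name. A sketch that instead keeps the reference counts in a class-level dict keyed by service name (the class and method names are illustrative):</p>

```python
import threading
import uuid


class NamedService:
    """Per-name reference counting shared across instances and threads."""
    _lock = threading.Lock()
    _counts: dict = {}   # name -> number of active references
    _values: dict = {}   # name -> value created on first start

    def __init__(self, name: str):
        self.name = name

    def start(self):
        with NamedService._lock:
            NamedService._counts[self.name] = NamedService._counts.get(self.name, 0) + 1
            if NamedService._counts[self.name] == 1:
                # First reference for this name: actually start the service
                NamedService._values[self.name] = uuid.uuid1()
                print(f"{self.name}: starting {NamedService._values[self.name]}")

    def stop(self):
        with NamedService._lock:
            NamedService._counts[self.name] -= 1
            if NamedService._counts[self.name] == 0:
                # Last reference gone: actually stop the service
                print(f"{self.name}: stopping {NamedService._values.pop(self.name)}")
```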
| <python><python-3.x><multithreading><thread-safety> | 2023-07-30 09:46:26 | 1 | 145,478 | KamilCuk |
76,797,146 | 877,329 | Separating channels with OpenImageIO python | <p>I want to render level curves from an image, and for that I need only one channel. When using <code>read_image</code>, it generates RGBRGBRGB, which is incompatible with <code>matplotlib</code> contour. From the documentation of <code>ImageInput</code>, it looks like I should be able to do the following:</p>
<pre class="lang-py prettyprint-override"><code>pixels = numpy.zeros((spec.nchannels, spec.height, spec.width), "uint8")
for channel in range(spec.nchannels) :
pixels[channel] = file.read_image(0, 0, channel, channel + 1, "uint8")
</code></pre>
<p>Since I am using <code>float</code>, my code now looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import OpenImageIO as oiio
import matplotlib.pyplot
import numpy
inp = oiio.ImageInput.open('test.exr')
if inp:
spec = inp.spec()
xres = spec.width
yres = spec.height
nchannels = spec.nchannels
print(nchannels)
pixels = numpy.zeros((spec.nchannels, spec.height, spec.width), 'float')
for channel in range(spec.nchannels) :
pixels[channel] = inp.read_image(0, 0, channel, channel + 1, 'float')
inp.close()
matplotlib.pyplot.contour(pixels[0], extent = [0, 49152, 0, 49252])
matplotlib.pyplot.colorbar()
matplotlib.pyplot.show()
</code></pre>
<p>But the example seems buggy:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/home/torbjorr/Dokument/terraformer/experiments/levelcurves.py", line 14, in <module>
pixels[channel] = inp.read_image(0, 0, channel, channel + 1, 'float')
ValueError: could not broadcast input array from shape (1024,1024,1) into shape (1024,1024)
</code></pre>
<p>The correct shape in my case is <code>(1024,1024)</code>.</p>
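<p>The traceback suggests each per-channel read comes back with shape <code>(height, width, 1)</code>, i.e. with an explicit trailing channel axis, which cannot broadcast into a <code>(height, width)</code> slot. A numpy-only sketch (stand-in arrays instead of OpenImageIO) of dropping that axis:</p>

```python
import numpy as np

height, width, nchannels = 4, 4, 3
pixels = np.zeros((nchannels, height, width), dtype=np.float32)

for channel in range(nchannels):
    # Stand-in for inp.read_image(0, 0, channel, channel + 1, 'float'),
    # which comes back as (height, width, 1): one trailing channel axis
    data = np.full((height, width, 1), channel, dtype=np.float32)
    pixels[channel] = data[:, :, 0]  # or np.squeeze(data, axis=2)

print(pixels.shape)  # (3, 4, 4)
```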
| <python><numpy><openimageio> | 2023-07-30 09:28:19 | 1 | 6,288 | user877329 |
76,797,043 | 16,383,578 | How to group two element tuples for a continuous range of first element with same second element? | <p>I have a list of two element tuples, the first element of each tuple is an integer, these tuples are equivalent to key-value pairs, in fact these tuples are flattened <code>dict_items</code>, generated using <code>list(d.items())</code>.</p>
<p>The element in the first position is guaranteed to be unique (they are keys), and there are lots of key-value pairs where the value is the same and the keys are in a continuous range, meaning there are lots of consecutive key-value pairs where one key is equal to the previous key plus one.</p>
<p>I would like to group the pairs into triplets, where the first two elements are integers and are start and end of such consecutive pairs, and the third element is the value.</p>
<p>The logic is simple, if the input is <code>[(0, 0), (1, 0), (2, 0)]</code>, the output should be <code>[(0, 2, 0)]</code>, the first number is the start of the key range, and the second number is the end of the key range, the third is the value. 0, 1, 2 are consecutive integers.</p>
<p>Given <code>[(0, 0), (1, 0), (2, 0), (3, 1), (4, 1)]</code>, the output should be <code>[(0, 2, 0), (3, 4, 1)]</code>, consecutive keys with same values are grouped.</p>
<p>Given <code>[(0, 0), (1, 0), (2, 0), (3, 1), (4, 1), (5, 2), (7, 2), (9, 2)]</code>, the output should be <code>[(0, 2, 0), (3, 4, 1), (5, 5, 2), (7, 7, 2), (9, 9, 2)]</code>, because 5, 7, 9 are not consecutive integers, <code>5 + 1 != 7 and 7 + 1 != 9</code>.</p>
<p>Input:</p>
<pre><code>[(3, 0),
(4, 0),
(5, 0),
(6, 2),
(7, 2),
(8, 2),
(9, 2),
(10, 2),
(11, 2),
(12, 2),
(13, 1),
(14, 1),
(15, 3),
(16, 3),
(17, 3),
(18, 3),
(19, 3),
(20, 3),
(21, 3),
(22, 3),
(23, 3),
(24, 3),
(25, 3),
(26, 3),
(27, 1),
(28, 1)]
</code></pre>
<p>Output:</p>
<pre><code>[(3, 5, 0), (6, 12, 2), (13, 14, 1), (15, 26, 3), (27, 28, 1)]
</code></pre>
<p>My code gives the correct output but isn't efficient:</p>
<pre><code>def group_numbers(numbers):
    l = len(numbers)
    i = 0
    output = []
    while i &lt; l:
        di = 0
        curn, curv = numbers[i]
        while i != l and curn + di == numbers[i][0] and curv == numbers[i][1]:
            i += 1
            di += 1
        output.append((curn, numbers[i - 1][0], curv))
    return output
</code></pre>
<p>Code to generate test cases:</p>
<pre><code>import random


def make_test_case(num, lim, dat):
    numbers = {}
    for _ in range(num):
        start = random.randrange(lim)
        end = random.randrange(lim)
        if start &gt; end:
            start, end = end, start
        x = random.randrange(dat)
        numbers |= {n: x for n in range(start, end + 1)}
    return sorted(numbers.items())
</code></pre>
<p>How to do this more efficiently, such as using <code>itertools.groupby</code>?</p>
<p>Answers are required to be verified against my correct approach.</p>
<hr />
<p>Note that there can be gaps where there are no values in the input, I wanted to make the question short so I didn't include such a test case, but do know that such gaps should not be filled. The more efficient approach should produce the same output as mine.</p>
<p>A manual test case to demonstrate this:</p>
<pre><code>In [35]: group_numbers([(0, 0), (1, 0), (2, 0), (3, 0), (10, 1), (11, 1), (12, 1), (13, 1)])
Out[35]: [(0, 3, 0), (10, 13, 1)]
</code></pre>
<p>Clarification for a test case suggested in the comments, the expected output is:</p>
<pre><code>In [61]: group_numbers([(3, 0), (5, 0), (7, 0)])
Out[61]: [(3, 3, 0), (5, 5, 0), (7, 7, 0)]
</code></pre>
<hr />
<p>The output for <code>[(1, 0), (1, 1), (2, 0)]</code> should be undefined, it should raise an exception if encountered. Inputs like this aren't valid inputs. As you can see from my code to generate the sample, all numbers can only have one value.</p>
<p>And output for <code>[(1, 0), (3, 0), (5, 0)]</code> is <code>[(1, 1, 0), (3, 3, 0), (5, 5, 0)]</code>.</p>
<hr />
<h2>Edit</h2>
<p>I am not a native English speaker, in fact I am bad with languages in general (though hopefully not programming languages), and I have no people skills (I really have no one to talk to), so my question may be confusing originally, I feared that if I made it long it would surely contain grammatical mistakes.</p>
<p>I have edited my question to include more details and explain things more thoroughly, to hopefully make the question less confusing.</p>
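<p>Here is one way to phrase it with <code>itertools.groupby</code> (a sketch; verify it against the reference implementation above): within a run of consecutive keys that share a value, <code>key - index</code> is constant, so the pair <code>(key - index, value)</code> can serve as the grouping key.</p>

```python
from itertools import groupby


def group_numbers_groupby(numbers):
    """Group sorted, unique (key, value) pairs into (start, end, value)
    triplets over runs of consecutive keys sharing the same value."""
    output = []
    # Within such a run, key - index stays constant, so
    # (key - index, value) identifies the run.
    for _, run in groupby(enumerate(numbers),
                          key=lambda t: (t[1][0] - t[0], t[1][1])):
        run = list(run)
        (_, (start, value)), (_, (end, _)) = run[0], run[-1]
        output.append((start, end, value))
    return output


print(group_numbers_groupby([(0, 0), (1, 0), (2, 0), (3, 1), (4, 1)]))
# [(0, 2, 0), (3, 4, 1)]
```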
| <python><python-3.x><itertools-groupby> | 2023-07-30 09:00:29 | 2 | 3,930 | ฮฮญฮฝฮท ฮฮฎฮนฮฝฮฟฯ |
76,797,026 | 21,787,377 | form with multiple image value return list object has no attribute _committed when I try to submit it | <p>I have a form inside my <code>forms.py</code> file when I try to submit the form it throws me an error: <code>'list' object has no attribute '_committed'</code>. In that form, I want to allow user to upload multiple image in one model field, for that I have to <a href="https://docs.djangoproject.com/en/4.2/topics/http/file-uploads/#uploading-multiple-files" rel="nofollow noreferrer">follow this documentation</a>, but I don't have an idea why it's throws the error every time I try to submit it.</p>
<pre><code># models.py
class LevelType(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    images = models.FileField(upload_to='images-level')
</code></pre>
<pre><code># forms.py
class MultipleFileInput(forms.ClearableFileInput):
    allow_multiple_selected = True


class MultipleFileField(forms.FileField):
    def __init__(self, *args, **kwargs):
        kwargs.setdefault("widget", MultipleFileInput())
        super().__init__(*args, **kwargs)

    def clean(self, data, initial=None):
        single_file_clean = super().clean
        if isinstance(data, (list, tuple)):
            result = [single_file_clean(d, initial) for d in data]
        else:
            result = single_file_clean(data, initial)
        return result


class LevelTypesForms(forms.ModelForm):
    images = MultipleFileField()

    class Meta:
        model = LevelType
        fields = [
            'level', 'bank_name', 'account_number',
            'account_name', 'images', 'price', 'currency',
            'description', 'accept_payment_with_card',
        ]
</code></pre>
<pre><code># views.py
class CreateLevelType(CreateView):
    model = LevelType
    form_class = LevelTypesForms
    success_url = reverse_lazy('Level-Types')
    template_name = 'Account/create_level_type.html'

    def get_form(self, form_class=None):
        form = super().get_form(form_class)
        form.fields['level'].queryset = Level.objects.filter(user=self.request.user)
        return form

    def post(self, request, *args, **kwargs):
        form_class = self.get_form_class()
        form = self.get_form(form_class)
        if form.is_valid():
            return self.form_valid(form)
        else:
            return self.form_invalid(form)

    def form_valid(self, form):
        form.instance.user = self.request.user
        form.save()
        images = form.cleaned_data["images"]
        for image in images:
            LevelType.objects.create(
                user=self.request.user,
                image=image
            )
        return redirect('Level-Types')
        return super(CreateLevelType, self).form_valid(form)
</code></pre>
| <python><django><django-views><django-forms> | 2023-07-30 08:53:23 | 1 | 305 | Adamu Abdulkarim Dee |
76,796,990 | 138,624 | "Module Not Found Error" in Virtual environment | <p>I'm a newbie with Python and have been trying, unsuccessfully, to install modules using pip in my small project.</p>
<p>Following advice online, I've created my own virtual environment and imported my first module <code>cowsay</code> fine. I can definitely see the module being installed in my project:</p>
<p><a href="https://i.sstatic.net/popVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/popVo.png" alt="enter image description here" /></a></p>
<p>BUT, when attempting to run the file in my terminal, I keep getting a <code>ModuleNotFoundError</code>.</p>
<pre><code>(env) sr@python-virtual-env >> pip install cowsay
Collecting cowsay
Using cached cowsay-5.0-py2.py3-none-any.whl
Installing collected packages: cowsay
Successfully installed cowsay-5.0
(env) sr@python-virtual-env >> python say.py John
Traceback (most recent call last):
File "/Users/sr/Sites/python-virtual-env/say.py", line 1, in <module>
import cowsay
ModuleNotFoundError: No module named 'cowsay'
</code></pre>
<p>What am I missing here? Thanks in advance!</p>
| <python> | 2023-07-30 08:42:01 | 4 | 1,136 | Teknotica |
76,796,848 | 5,760,832 | How to enable Intel Iris Xe GPU for TensorFlow+Keras on Windows? | <p>I am trying to leverage Intel Iris Xe Graphics to train my models using keras & tensorflow.
I found the script to check if the MKL flag is enabled & my system detects the GPU.
But the script does not detect the Intel GPU.</p>
<pre><code>import tensorflow as tf
import os


def get_mkl_enabled_flag():
    mkl_enabled = False
    major_version = int(tf.__version__.split(".")[0])
    minor_version = int(tf.__version__.split(".")[1])
    if major_version &gt;= 2:
        if minor_version &lt; 5:
            from tensorflow.python import _pywrap_util_port
        else:
            from tensorflow.python.util import _pywrap_util_port
        onednn_enabled = int(os.environ.get('TF_ENABLE_ONEDNN_OPTS', '0'))
        mkl_enabled = _pywrap_util_port.IsMklEnabled() or (onednn_enabled == 1)
    else:
        mkl_enabled = tf.pywrap_tensorflow.IsMklEnabled()
    return mkl_enabled


print("We are using Tensorflow version", tf.__version__)
print("MKL enabled :", get_mkl_enabled_flag())

# Check available physical devices (GPUs)
physical_devices = tf.config.list_physical_devices('GPU')
if len(physical_devices) == 0:
    print("No GPU devices found.")
else:
    for device in physical_devices:
        print("GPU:", device)

# Check if Intel GPU is being used
intel_gpu_in_use = False
for device in physical_devices:
    if 'Intel' in tf.config.experimental.get_device_name(device):
        intel_gpu_in_use = True

print("Is Intel GPU being used?", intel_gpu_in_use)
</code></pre>
<p>This script reports that the Intel GPU is not being used.</p>
| <python><windows><tensorflow><keras><gpu> | 2023-07-30 07:58:51 | 1 | 891 | Aqua 4 |
76,796,833 | 6,218,849 | How to add a horizontal line that spans the x axis in a plotly subplot figure of indicators and scatter? | <p>I am building a dashboard application that will display live data from a web API. To make the updates as efficient as possible I am currently rewriting my code to use partial property updates when new data is available instead of building the entire figure from scratch each time (approx. every 10-15 seconds).</p>
<p>Some of the time series that get displayed have operational limits, and I want to show these as horizontal lines that span the x axis. For a non-subplot figure (<code>px.line</code> or <code>go.Figure(go.Scatter())</code>), the normal <code>fig.add_hline</code> works, but I get an error if I try to do this with the subplot shown below.</p>
<pre><code>Traceback (most recent call last):
File "/home/brakjen/dev/akerbp/W2W_dashboard_frontend/src/app.py", line 173, in <module>
app = main()
File "/home/brakjen/dev/akerbp/W2W_dashboard_frontend/src/app.py", line 168, in main
app.layout = layout_partialupdate.make_layout(app, APP_TITLE, INTERVAL_REFRESH)
File "/home/brakjen/dev/akerbp/W2W_dashboard_frontend/src/layout_partialupdate.py", line 53, in make_layout
dbc.Col(utils.make_kpi_graph_card(
File "/home/brakjen/dev/akerbp/W2W_dashboard_frontend/src/utils.py", line 84, in make_kpi_graph_card
fig.add_hline(
File "/home/brakjen/dev/akerbp/W2W_dashboard_frontend/.venv/lib/python3.10/site-packages/plotly/graph_objs/_figure.py", line 1084, in add_hline
return super(Figure, self).add_hline(
File "/home/brakjen/dev/akerbp/W2W_dashboard_frontend/.venv/lib/python3.10/site-packages/plotly/basedatatypes.py", line 4122, in add_hline
self._process_multiple_axis_spanning_shapes(
File "/home/brakjen/dev/akerbp/W2W_dashboard_frontend/.venv/lib/python3.10/site-packages/plotly/basedatatypes.py", line 4044, in _process_multiple_axis_spanning_shapes
self.add_shape(
File "/home/brakjen/dev/akerbp/W2W_dashboard_frontend/.venv/lib/python3.10/site-packages/plotly/graph_objs/_figure.py", line 24028, in add_shape
return self._add_annotation_like(
File "/home/brakjen/dev/akerbp/W2W_dashboard_frontend/.venv/lib/python3.10/site-packages/plotly/basedatatypes.py", line 1590, in _add_annotation_like
not self._subplot_not_empty(
File "/home/brakjen/dev/akerbp/W2W_dashboard_frontend/.venv/lib/python3.10/site-packages/plotly/basedatatypes.py", line 4231, in _subplot_not_empty
for t in [
File "/home/brakjen/dev/akerbp/W2W_dashboard_frontend/.venv/lib/python3.10/site-packages/plotly/basedatatypes.py", line 4235, in <listcomp>
"x" if d[xaxiskw] is None else d[xaxiskw],
File "/home/brakjen/dev/akerbp/W2W_dashboard_frontend/.venv/lib/python3.10/site-packages/plotly/basedatatypes.py", line 4718, in __getitem__
self._raise_on_invalid_property_error(_error_to_raise=PlotlyKeyError)(
File "/home/brakjen/dev/akerbp/W2W_dashboard_frontend/.venv/lib/python3.10/site-packages/plotly/basedatatypes.py", line 5092, in _ret
raise _error_to_raise(
_plotly_utils.exceptions.PlotlyKeyError: Invalid property specified for object of type plotly.graph_objs.Indicator: 'xaxis'
</code></pre>
<p>I have the following plotly subplots figure, here without trying to add the hlines (not an MWE, but the function I use in my application):</p>
<pre class="lang-py prettyprint-override"><code>def make_kpi_graph_card(
component_id: Union[str, None]=None,
col: Union[str, None]=None,
color: Union[str, None]=None,
limits: Union[Tuple[float, float], Tuple[None, None]]=(None, None)
) -> dbc.Card:
specs = [
[{"type": "indicator"}] * 4,
[{"type": "scatter", "colspan": 4}] + [None] * 3
]
fig = ps.make_subplots(
rows=2,
cols=4,
specs=specs,
vertical_spacing=0.2,
row_heights=(0.1, 0.9)
)
# Add the indicators
for i in range(4):
kpi_title=KPI_ORDER[i]
fig.add_trace(go.Indicator(
value=0,
title={"text": KPI_TITLE_MAPPER[kpi_title], "font": {"color": color, "size": 35}},
mode="number+delta",
number={"valueformat": ".2s", "font": {"color": color, "size": 40}},
delta={"font": {"size": 35}, "valueformat": ".2s"}
),
row=1, col=i+1)
# Add the scatter plot
fig.add_trace(go.Scatter(
x=None,
y=None,
line={"width": 4, "color": color},
showlegend=False
),
row=2, col=1)
# Define x axis properties
xaxis_props = dict(
xaxis_tickfont={"size": 20},
xaxis_ticks="outside",
xaxis_mirror=True,
xaxis_showline=True
)
# Define y axis properties
yaxis_props = dict(
yaxis_tickfont={"size": 30},
yaxis_ticks="outside",
yaxis_mirror=True,
yaxis_showline=True,
)
# Update layout and return the Card
fig.update_layout(template="plotly_dark", **xaxis_props, **yaxis_props)
return dbc.Card(
dbc.CardBody([
html.H1(col, style={"color": color, "textAlign": "center"}),
dcc.Graph(figure=fig, id=component_id)
])
)
</code></pre>
<p>Which looks like this:</p>
<p><a href="https://i.sstatic.net/0jPKF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0jPKF.png" alt="enter image description here" /></a></p>
<p><strong>How can I add a hline to the figure without knowing the xaxis data at the point when the figure is created?</strong> Data for the figures will come in the callbacks as <code>dcc.Patch</code> objects.</p>
| <python><plotly-dash> | 2023-07-30 07:54:30 | 1 | 710 | Yoda |
76,796,430 | 7,587,176 | pre_order_traversal vs post_order_traversal | <p>I have the two functions below.
"PRE_ORDER_TRAVERSAL":</p>
<pre><code>def pre_order_traversal(root: Node):
    if root is not None:
        print(root.val)
        pre_order_traversal(root.left)
        pre_order_traversal(root.right)
</code></pre>
<p>"POST_ORDER_TRAVERSAL":</p>
<pre><code>def post_order_traversal(root: Node):
    if root is not None:
        post_order_traversal(root.left)
        post_order_traversal(root.right)
        print(root.val)
</code></pre>
<p>So for post-order traversal, is it a recursive call that pushes root.left onto the call stack all the way down until there are no more root.lefts, then does the same for root.right, and then print(root.val) just pops each value from the stack?</p>
<p>If so, how (or where) is Python instructing it to go down a level in the tree? Also, what is the datatype of root.left that lets it hold multiple values? I get that it is recursive, but I'm having trouble understanding the operations that actually enable so much going on -- it does not seem like enough code to enable the above.</p>
<p>For instance -- if this is the input:</p>
<pre><code>5 4 3 x x 8 x x 6 x x
</code></pre>
<p>how does it know to go from 4 to 3?</p>
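<p>For context, here is a minimal <code>Node</code> class and the tree I mean, with the print replaced by a list append so I can watch the visit order (my own sketch, not the original assignment code):</p>

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left    # each child is itself a Node (or None), not a list
        self.right = right

visited = []

def post_order_traversal(root):
    if root is not None:
        post_order_traversal(root.left)   # descend the left subtree first
        post_order_traversal(root.right)  # then the right subtree
        visited.append(root.val)          # "visit" the node on the way back up

# The input "5 4 3 x x 8 x x 6 x x" (x = empty child) as an actual tree:
tree = Node(5, Node(4, Node(3), Node(8)), Node(6))
post_order_traversal(tree)
print(visited)  # [3, 8, 4, 6, 5]
```

<p>Tracing this by hand is what prompted my question: each recursive call suspends the caller mid-function, so when <code>3</code>'s call returns, execution resumes inside <code>4</code>'s frame at the <code>root.right</code> line, which is how it "knows" to move on to <code>8</code> next.</p>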
| <python><traversal> | 2023-07-30 05:15:42 | 1 | 1,260 | 0004 |
76,796,370 | 1,473,517 | How to show progress of optimizer without using a global variable? | <p>Take this MWE:</p>
<pre><code>import dlib
from scipy.optimize import rosen

def rosen_f(*x):
    result = rosen(x)
    global OPT_FOUND
    if result < OPT_FOUND:
        OPT_FOUND = result
        print("New min", x, result)
    return result

dlib.find_min_global(rosen_f, [0]*5, [0.5]*5, 1000)
</code></pre>
<p>If I set <code>OPT_FOUND = 2**30</code>, this does print out all the improvements that <code>find_min_global</code> finds during the optimization.</p>
<p>How can I:</p>
<ul>
<li>Avoid using a global variable</li>
<li>Add the improvements to a list that I can use in the main part of my code?</li>
</ul>
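<p>One idea I have been playing with: wrap the objective in a small callable class that keeps its own state, so there is no global and the improvements end up in a plain list. Sketched here with a hand-written Rosenbrock so it runs without dlib installed (the commented line shows where <code>find_min_global</code> would go):</p>

```python
class ProgressTracker:
    """Callable objective that records every improvement it sees."""

    def __init__(self, func):
        self.func = func
        self.best = float("inf")
        self.improvements = []  # (args, value) pairs, usable afterwards

    def __call__(self, *x):
        result = self.func(x)
        if result < self.best:
            self.best = result
            self.improvements.append((x, result))
        return result

def rosen(x):
    # plain-Python Rosenbrock, standing in for scipy.optimize.rosen
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

tracker = ProgressTracker(rosen)
# dlib.find_min_global(tracker, [0]*5, [0.5]*5, 1000)  # would go here
for point in [(0.0, 0.0), (0.3, 0.1), (0.2, 0.04), (1.0, 1.0)]:
    tracker(*point)
print(tracker.improvements[-1])  # ((1.0, 1.0), 0.0)
```

<p>Passing <code>tracker</code> to <code>dlib.find_min_global</code> should work since it is just a callable, but I have not confirmed that dlib accepts class instances the same way it accepts functions.</p>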
| <python><scipy><dlib> | 2023-07-30 04:41:56 | 1 | 21,513 | Simd |
76,796,199 | 687,739 | Parse string timezone in pandas | <p>I have a list of tuples that looks like this:</p>
<pre><code>[('20230728 09:30:00 US/Eastern', 194.7, 195.26)]
</code></pre>
<p>I am using this list to create a pandas <code>DataFrame</code> like this:</p>
<pre><code>df = pd.DataFrame(
    received_historic_data, columns=["time", "open", "close"]
).set_index("time")
</code></pre>
<p>I want to convert the <code>Index</code> from dtype <code>object</code> (e.g. string) to a <code>datetime</code>.</p>
<p>I used this:</p>
<pre><code>df.index = pd.to_datetime(df.index)
</code></pre>
<p>But pandas can't parse the string timezone "US/Eastern".</p>
<p>How do I parse this string to a datetime object?</p>
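<p>The only workaround I have found so far is splitting the zone name off and attaching it after parsing; shown with the stdlib so the steps are explicit (I assume the same split followed by <code>tz_localize</code> would work on the pandas index):</p>

```python
from datetime import datetime
from zoneinfo import ZoneInfo

raw = "20230728 09:30:00 US/Eastern"

# Split the IANA zone name off the end, parse the rest, then attach the zone.
timestamp, zone = raw.rsplit(" ", 1)
dt = datetime.strptime(timestamp, "%Y%m%d %H:%M:%S").replace(tzinfo=ZoneInfo(zone))
print(dt.isoformat())  # 2023-07-28T09:30:00-04:00
```

<p>This feels clunky for a whole index, though, so a vectorized pandas way would be preferable.</p>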
| <python><pandas><timezone><datetime-format> | 2023-07-30 03:12:38 | 2 | 15,646 | Jason Strimpel |
76,796,168 | 11,065,874 | fastapi pytest: how to override a fastapi dependency that accepts arguments? | <p>I have this small <a href="https://meta.stackoverflow.com/questions/366988/what-does-mcve-mean">MCVE</a> FastAPI application which works fine as expected:</p>
<pre><code># run.py
import uvicorn
from fastapi import FastAPI, Depends

app = FastAPI()

###########
def dep_with_arg(inp):
    def sub_dep():
        return inp
    return sub_dep

@app.get("/a")
def a(v: str = Depends(dep_with_arg("a"))):
    return v

###########
def dep_without_arg():
    return "b"

@app.get("/b")
def b(v: str = Depends(dep_without_arg)):
    return v

###########
def main():
    uvicorn.run(
        "run:app",
        host="0.0.0.0",
        reload=True,
        port=8000,
        workers=1,
    )

if __name__ == "__main__":
    main()
</code></pre>
<p>Notice the difference between the dependencies of the <code>/a</code> and <code>/b</code> endpoints. In the <code>/a</code> endpoint, I have created a dependency that accepts an argument. Both endpoints work as expected when I call them.</p>
<p>I now try to test the endpoints while overriding the dependencies as below:</p>
<pre><code>from run import app, dep_without_arg, dep_with_arg
from starlette.testclient import TestClient


def test_b():
    def dep_override_for_dep_without_arg():
        return "bbb"

    test_client = TestClient(app=app)
    test_client.app.dependency_overrides[dep_without_arg] = dep_override_for_dep_without_arg
    resp = test_client.get("/b")
    resp_json = resp.json()
    assert resp_json == "bbb"


###########
def test_a_method_1():
    def dep_override_for_dep_with_arg(inp):
        def sub_dep():
            return "aaa"
        return sub_dep

    test_client = TestClient(app=app)
    test_client.app.dependency_overrides[dep_with_arg] = dep_override_for_dep_with_arg
    resp = test_client.get(
        "/a",
    )
    resp_json = resp.json()
    assert resp_json == "aaa"


def test_a_method_2():
    def dep_override_for_dep_with_arg(inp):
        return "aaa"

    test_client = TestClient(app=app)
    test_client.app.dependency_overrides[dep_with_arg] = dep_override_for_dep_with_arg
    resp = test_client.get(
        "/a",
    )
    resp_json = resp.json()
    assert resp_json == "aaa"


def test_a_method_3():
    def dep_override_for_dep_with_arg(inp):
        return "aaa"

    test_client = TestClient(app=app)
    test_client.app.dependency_overrides[dep_with_arg] = dep_override_for_dep_with_arg("aaa")
    resp = test_client.get(
        "/a",
    )
    resp_json = resp.json()
    assert resp_json == "aaa"


def test_a_method_4():
    def dep_override_for_dep_with_arg():
        return "aaa"

    test_client = TestClient(app=app)
    test_client.app.dependency_overrides[dep_with_arg] = dep_override_for_dep_with_arg
    resp = test_client.get(
        "/a",
    )
    resp_json = resp.json()
    assert resp_json == "aaa"
</code></pre>
<p><code>test_b</code> passes as expected but all the other tests fail:</p>
<pre><code>FAILED test.py::test_a_method_1 - AssertionError: assert 'a' == 'aaa'
FAILED test.py::test_a_method_2 - AssertionError: assert 'a' == 'aaa'
FAILED test.py::test_a_method_3 - AssertionError: assert 'a' == 'aaa'
FAILED test.py::test_a_method_4 - AssertionError: assert 'a' == 'aaa'
</code></pre>
<p>How should I override <code>dep_with_arg</code> in the example above?</p>
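<p>While debugging I think I found why the override is ignored: <code>dependency_overrides</code> is keyed by the exact callable stored in <code>Depends(...)</code>, and <code>Depends(dep_with_arg("a"))</code> stores the inner <code>sub_dep</code> closure, not <code>dep_with_arg</code> itself. A plain-Python sketch of the mismatch (no FastAPI needed):</p>

```python
def dep_with_arg(inp):
    def sub_dep():
        return inp
    return sub_dep

dep_a = dep_with_arg("a")                  # this closure is what Depends() holds
overrides = {dep_with_arg: lambda: "aaa"}  # keyed by the factory -> never consulted

# FastAPI looks the *stored* dependency up in the overrides dict:
resolved = overrides.get(dep_a, dep_a)
assert resolved() == "a"                   # override silently ignored

# Keeping a reference to the closure and overriding *that* works:
overrides = {dep_a: lambda: "aaa"}
resolved = overrides.get(dep_a, dep_a)
assert resolved() == "aaa"
```

<p>If this reasoning is right, the fix would be to keep a module-level reference (<code>dep_a = dep_with_arg("a")</code>), use <code>Depends(dep_a)</code> in the route, and set <code>dependency_overrides[dep_a]</code> in the test; I would still like confirmation.</p>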
<hr />
<p>Related question: <a href="https://stackoverflow.com/questions/74689457/overriding-fastapi-dependencies-that-have-parameters">Overriding FastAPI dependencies that have parameters</a></p>
| <python><dependency-injection><pytest><fastapi> | 2023-07-30 02:50:23 | 0 | 2,555 | Amin Ba |
76,796,044 | 14,182,249 | Django View - How to Efficiently Filter Combined Querysets from Multiple Models? | <p>I have a Django view that combines and sorts querysets from three different models (Event, Subject, and Article) to create a feed. I'm using the sorted function along with chain to merge the querysets and sort them based on the 'created_at' attribute. The feed is then returned as a list.</p>
<p>However, I'm facing challenges when it comes to filtering the feed based on user-provided search and author parameters. Since the feed is in list form, I can't directly use the filter method from the Django QuerySet.</p>
<p>The <code>created_by__username__icontains</code> lookup is not working. Only the <code>ModelChoiceField</code> works with it, but I want to use a <code>CharField</code> with it.</p>
<p>forms.py</p>
<pre><code>class FeedFilterForm(forms.Form):
    author = forms.CharField(  # ModelChoiceField
        label='Author',
        required=False,
        # queryset=User.objects.all(),
        widget=forms.TextInput(attrs={'placeholder': 'Author'}),
    )
</code></pre>
<p>Views.py</p>
<pre><code>class FeedView(View):
    form_class = FeedFilterForm
    template_name = 'your_template.html'

    def get_queryset(self):
        """
        Get the combined and sorted queryset from events, subjects, and articles.
        """
        events = Event.objects.all()
        subjects = Subject.objects.all()
        articles = Article.objects.all()

        feed = sorted(
            chain(articles, subjects, events),
            key=attrgetter('created_at'),
            reverse=True,
        )

        form = self.form_class(self.request.GET)
        if form.is_valid():
            data = form.cleaned_data
            author = data.get('author')
            if author:
                feed = filter(
                    lambda x: author
                    == getattr(x, 'created_by__username__icontains', ''),
                    feed,
                )
        return list(feed)

    def get(self, request, *args, **kwargs):
        queryset = self.get_queryset()
        context = {
            'object_list': queryset,
        }
        return render(request, self.template_name, context)
</code></pre>
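<p>I suspect part of the problem is that <code>getattr(x, 'created_by__username__icontains', '')</code> is not a real attribute; double-underscore lookups only exist inside QuerySet filters. Doing the icontains check by hand on plain objects seems to behave (sketch with stand-in objects, not my real models):</p>

```python
from types import SimpleNamespace

# Stand-ins for model instances; in the view these would be Event/Subject/Article.
feed = [
    SimpleNamespace(created_by=SimpleNamespace(username="alice")),
    SimpleNamespace(created_by=SimpleNamespace(username="Bob")),
    SimpleNamespace(created_by=SimpleNamespace(username="bobby")),
]

author = "bob"
# Case-insensitive "contains", done in Python since the feed is a plain list
filtered = [
    item for item in feed
    if item.created_by and author.lower() in item.created_by.username.lower()
]
print([i.created_by.username for i in filtered])  # ['Bob', 'bobby']
```

<p>Is this the right way to do it, or should I instead filter each queryset with <code>created_by__username__icontains</code> before chaining them?</p>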
| <python><django><django-models><django-views><django-class-based-views> | 2023-07-30 01:33:54 | 1 | 593 | afi |
76,795,935 | 1,684,426 | OSError: exception: access violation reading|writing when calling any function after library instance creation | <p>I'm trying to make use of a SecuGen fingerprint scanner from Python; the SDK library I'm using is <code>sgfplib.dll</code>. I have a C++ program that makes use of this DLL and accesses the hardware using C++ and the very same DLL.
I can also access the device using the Windows Biometric Framework (<code>winbio.dll</code>), Python and ctypes, but that framework doesn't have the functionality I need.</p>
<p>The thing is, using the <code>sgfplib.dll</code>, after creation of the main library object I get a handle of the dll just fine, but anytime I try to call any other function I'll get an <code>OSError: exception: access violation reading|writing 0x###########</code>.</p>
<p>I've searched both the site and Google looking for similar errors, but nothing seems close to my problem. I have the DLLs in my <code>system32</code> folder, and I also tried putting them in the script's directory, without any luck.</p>
<p>The address in the error changes if I change the devname or if I call a different function.</p>
<p>Can anyone point me in the right direction or provide some more info on this kind of error?</p>
<p>Here's a minimal reproducible example:</p>
<pre><code>import ctypes
from ctypes import wintypes
lib = ctypes.WinDLL(r"..\pathToTheDLLsDir\sgfplib.dll")
session_handle = ctypes.c_uint32()
devname = ctypes.wintypes.DWORD()
devname = "SG_DEV_FDU05" # 5
err = lib.SGFPM_Create(ctypes.byref(session_handle)) # create an SGFPM object and
print('CREATE', err) # 0 # return a handle to the object
err = lib.SGFPM_Init(ctypes.byref(session_handle), devname) # OSError: exception: access
print('INIT', err) # violation reading 0x00000000AFE3FB10
</code></pre>
| <python><dll><ctypes><fingerprint> | 2023-07-30 00:35:35 | 1 | 930 | Solrac |
76,795,920 | 2,453,202 | Why do plot tick labels overlap only for Imported CSV data? | <p>When I create a simple matplotlib plot with numpy arrays, the tick labels are well-behaved, chosen intelligently to not overlap and spaced to span the data range evenly.</p>
<p>However, when I import data from a file into numpy arrays, the tick labels are a mess. It appears that a tick label has been added for <em>each datapoint</em>, rather than auto-generating a sensible scale.</p>
<p>Why is my data causing it to not be automatic?<br />
How do I get MPL to do this automatically for real-world data with irregularly-spaced X/Y data?</p>
<hr />
<p>"Normal" behavior:</p>
<pre><code>
import matplotlib.pyplot as plt
import numpy as np
import numpy.random as rnd
x = np.array( range(1000000) )
y = rnd.rand(1,1000000)[0]
fig, ax = plt.subplots()
ax.plot(x,y)
</code></pre>
<p>Resulting plot, as expected:
<a href="https://i.sstatic.net/1sQQw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1sQQw.png" alt="Plot of 2 numpy arrays, sensible axes" /></a></p>
<hr />
<p>Real-world data with non-equally-spaced X-axis data, imported from file.</p>
<p>Snippet of data file:</p>
<pre><code>-1900.209922,-106.022
-1900.176409,-103.902
-1900.142897,-112.337
-1900.109384,-109.252
...
</code></pre>
<p>Plotting script:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import csv
# Read CSV file
with open(r"graph.csv", encoding='utf-8-sig') as fp:
    reader = csv.reader(fp, delimiter=",", quotechar='"')
    data_read = [row for row in reader]
#end with file
d = np.array(data_read).T # transpose
x = d[0][0:10]
y = d[1][0:10]
fig, ax = plt.subplots()
ax.plot( x, y, "." )
fig.show()
</code></pre>
<p>I get messy tick labels:
<a href="https://i.sstatic.net/xWIkV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xWIkV.png" alt="collided x-tick labels" /></a></p>
<p>Zoomed in, you can see it added ticks at exactly my data points:
<a href="https://i.sstatic.net/Szuq6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Szuq6.png" alt="messy x-ticks, zoomed in" /></a></p>
<hr />
<p>If I change the X-data to a linear array, then it auto-ticks the x-axis, putting labels at intuitive locations (not at datapoints):</p>
<pre><code>y = d[1][0:100]
x = range( len(y) ) # integer x-axis points
fig, ax = plt.subplots()
ax.plot( x, y, "." )
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/Y5zIf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y5zIf.png" alt="integer x-axes" /></a></p>
<p>By the way, even if I load 20,000 data points, such that the y axis spans from -106 --> -88 (in case the values were too closely spaced), the y-axis labels still collide:</p>
<pre><code>y[-1]
Out[31]: '-88.109'
y[0]
Out[32]: '-106.022'
</code></pre>
<p><a href="https://i.sstatic.net/KWPsZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KWPsZ.png" alt="20,000 datapoints" /></a></p>
<p>Ultimately I'll be loading a large number of datapoints (200,000), so I need this solved.</p>
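<p>While experimenting I noticed that <code>csv.reader</code> yields strings, so <code>x</code> and <code>y</code> end up as string arrays, and matplotlib appears to treat string data as categorical with one tick per value. A stdlib sketch of the conversion step I am now trying before plotting:</p>

```python
import csv, io

raw = "-1900.209922,-106.022\n-1900.176409,-103.902\n-1900.142897,-112.337\n"
reader = csv.reader(io.StringIO(raw))
data_read = [row for row in reader]

# Everything is still text at this point -> categorical axis in matplotlib
assert all(isinstance(v, str) for row in data_read for v in row)

# Convert to floats so matplotlib can auto-tick numerically
x = [float(row[0]) for row in data_read]
y = [float(row[1]) for row in data_read]
print(x[0], y[0])  # -1900.209922 -106.022
```

<p>With numpy I assume the equivalent would be <code>np.array(data_read, dtype=float)</code>, but I would like confirmation that the string dtype really is what breaks the auto-ticking.</p>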
| <python><matplotlib><plot> | 2023-07-30 00:23:23 | 1 | 5,767 | Demis |
76,795,498 | 12,358,733 | Doing async SSL/TLS handshake Python | <p>I have this traditional synchronous code which connects to a website and gathers the SSL/TLS version along with server certificate details:</p>
<pre><code>import ssl, socket

SERVER_HOSTNAME = "www.example.com"

try:
    sock = socket.create_connection((SERVER_HOSTNAME, 443), timeout=3)
    ssl_context = ssl.create_default_context()
    ssock = ssl_context.wrap_socket(sock, server_hostname=SERVER_HOSTNAME)
    tls_info = {
        'version': ssock.version(),
        'peercert': ssock.getpeercert(),
    }
except Exception as e:
    quit(e)
</code></pre>
<p>How can I port this to async so I can gather this information for multiple sites in parallel? I found asyncore, but it's being deprecated. aioopenssl looks like a possibility, but I can't find a good usage example. aiohttp seems to support it, although it would be overkill since I'm just looking to do a handshake and don't need to do an actual HTTP request.</p>
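<p>For completeness, this is the direction I have been experimenting with using plain <code>asyncio</code>: <code>open_connection</code> with an SSL context, then <code>get_extra_info("ssl_object")</code> to reach the handshake details. Sketch only; I am not sure it is the idiomatic approach:</p>

```python
import asyncio
import ssl

async def probe_tls(hostname: str) -> dict:
    """Handshake with one host and return TLS version + peer certificate."""
    ctx = ssl.create_default_context()
    reader, writer = await asyncio.wait_for(
        asyncio.open_connection(hostname, 443, ssl=ctx, server_hostname=hostname),
        timeout=3,
    )
    try:
        ssl_obj = writer.get_extra_info("ssl_object")
        return {"version": ssl_obj.version(), "peercert": ssl_obj.getpeercert()}
    finally:
        writer.close()
        await writer.wait_closed()

async def probe_many(hostnames):
    # gather() runs all handshakes concurrently; exceptions become results
    return await asyncio.gather(
        *(probe_tls(h) for h in hostnames), return_exceptions=True
    )

# results = asyncio.run(probe_many(["www.example.com", "www.python.org"]))
```

<p>The part I am unsure about is whether opening a full stream connection just to inspect the handshake is wasteful compared to whatever aioopenssl does internally.</p>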
| <python><openssl><python-asyncio><aiohttp><python-sockets> | 2023-07-29 21:15:58 | 0 | 931 | John Heyer |
76,795,268 | 11,152,224 | Error on application startup after adding pg_context to main.py | <p>I am making the polls application from the <a href="https://aiohttp-demos.readthedocs.io/en/latest/tutorial.html#creating-connection-engine" rel="nofollow noreferrer">docs</a>. The PostgreSQL version is 15. I am using poetry; here are my dependencies from <code>pyproject.toml</code>:</p>
<pre><code>python = "^3.11"
aiohttp = "^3.8.5"
aiodns = "^3.0.0"
pyyaml = "^6.0.1"
aiopg = {extras = ["sa"], version = "^1.4.0"}
</code></pre>
<p>main.py:</p>
<pre><code>app.cleanup_ctx.append(pg_context) # Added this
</code></pre>
<p>db.py</p>
<pre><code>async def pg_context(app):
    conf = app['config']['postgres']
    engine = await aiopg.sa.create_engine(
        database=conf['database'],
        user=conf['user'],
        password=conf['password'],
        host=conf['host'],
        port=conf['port'],
        minsize=conf['minsize'],
        maxsize=conf['maxsize'],
    )
    app['db'] = engine

    yield

    app['db'].close()
    await app['db'].wait_closed()
</code></pre>
<p>If I remove <code>app.cleanup_ctx.append(pg_context)</code> from <code>main.py</code>, the server runs successfully. But when I try to run the server with <code>python main.py</code> I get the error below. What could the problem be?</p>
<p>From the traceback: the exception occurs when I call <code>engine = await aiopg.sa.create_engine(...</code> in db.py.</p>
<pre><code>(aiohttp-example-py3.11) F:\python\AIOHTTP\polls>python app_polls/main.py
unhandled exception during asyncio.run() shutdown
task: <Task finished name='Task-1' coro=<_run_app() done, defined at C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiohttp\web.py:289> exception=NotImplementedError()>
Traceback (most recent call last):
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiohttp\web.py", line 516, in run_app
loop.run_until_complete(main_task)
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiohttp\web.py", line 323, in _run_app
await runner.setup()
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiohttp\web_runner.py", line 279, in setup
self._server = await self._make_server()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiohttp\web_runner.py", line 375, in _make_server
await self._app.startup()
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiohttp\web_app.py", line 417, in startup
await self.on_startup.send(self)
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiosignal\__init__.py", line 36, in send
await receiver(*args, **kwargs) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiohttp\web_app.py", line 539, in _on_startup
await it.__anext__()
File "F:\python\AIOHTTP\polls\app_polls\db.py", line 32, in pg_context
engine = await aiopg.sa.create_engine(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiopg\sa\engine.py", line 94, in _create_engine
pool = await aiopg.create_pool(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiopg\pool.py", line 300, in from_pool_fill
await self._fill_free_pool(False)
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiopg\pool.py", line 336, in _fill_free_pool
conn = await connect(
^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiopg\connection.py", line 65, in connect
connection = Connection(
^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiopg\connection.py", line 772, in __init__
self._loop.add_reader(
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\asyncio\events.py", line 530, in add_reader
raise NotImplementedError
NotImplementedError
Traceback (most recent call last):
File "F:\python\AIOHTTP\polls\app_polls\main.py", line 11, in <module>
web.run_app(app)
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiohttp\web.py", line 516, in run_app
loop.run_until_complete(main_task)
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiohttp\web.py", line 323, in _run_app
await runner.setup()
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiohttp\web_runner.py", line 279, in setup
self._server = await self._make_server()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiohttp\web_runner.py", line 375, in _make_server
await self._app.startup()
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiohttp\web_app.py", line 417, in startup
await self.on_startup.send(self)
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiosignal\__init__.py", line 36, in send
await receiver(*args, **kwargs) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiohttp\web_app.py", line 539, in _on_startup
await it.__anext__()
File "F:\python\AIOHTTP\polls\app_polls\db.py", line 32, in pg_context
engine = await aiopg.sa.create_engine(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiopg\sa\engine.py", line 94, in _create_engine
pool = await aiopg.create_pool(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiopg\pool.py", line 300, in from_pool_fill
await self._fill_free_pool(False)
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiopg\pool.py", line 336, in _fill_free_pool
conn = await connect(
^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiopg\connection.py", line 65, in connect
connection = Connection(
^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiopg\connection.py", line 772, in __init__
self._loop.add_reader(
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\asyncio\events.py", line 530, in add_reader
raise NotImplementedError
NotImplementedError
Exception ignored in: <function Connection.__del__ at 0x000001CF28EEC360>
Traceback (most recent call last):
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiopg\connection.py", line 1188, in __del__
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiopg\connection.py", line 995, in close
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\aiohttp-example-zDtRzW9K-py3.11\Lib\site-packages\aiopg\connection.py", line 977, in _close
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\asyncio\events.py", line 533, in remove_reader
NotImplementedError:
</code></pre>
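<p>After more digging, the raise comes from <code>add_reader</code>, which the Proactor event loop (the Windows default since Python 3.8) does not implement. The workaround I am about to try is forcing the selector-based loop before the app starts (sketch, not yet verified on my machine):</p>

```python
import asyncio
import sys

# The Proactor loop (Windows default since Python 3.8) lacks add_reader(),
# which aiopg/psycopg2 needs; fall back to the selector-based loop there.
if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

# ... then build the aiohttp app and call web.run_app(app) as before
```

<p>Is this the right fix, or does aiopg simply not support Windows here?</p>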
| <python><postgresql><aiohttp><aiopg> | 2023-07-29 20:03:26 | 1 | 569 | WideWood |
76,795,201 | 7,093,241 | Get actual value that failed an assertion | <p>Is there a way to get <code>assert</code> to show the actual value that failed the assertion, on top of listing the line that failed? Is there a way of doing this in <strong>pure Python</strong>? I have seen <code>gtest</code> in <code>C++</code> show the <code>Actual</code> value right where the <code>Expected</code> value is shown.</p>
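<p>The closest I have come in pure Python is attaching the values to the assertion message by hand, which is exactly the boilerplate I am hoping to avoid:</p>

```python
def check(actual, expected):
    # the message after the comma is shown when the assertion fails
    assert actual == expected, f"Expected: {expected!r}, Actual: {actual!r}"

try:
    check(2 + 2, 5)
except AssertionError as e:
    print(e)  # Expected: 5, Actual: 4
```

<p>Is there something automatic, the way pytest rewrites <code>assert</code> statements to show both sides?</p>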
| <python><assert> | 2023-07-29 19:45:29 | 0 | 1,794 | heretoinfinity |
76,795,024 | 4,913,660 | Choose colormap for a "flat" function with steep gradient regions (Euler Gamma function) | <p>I am trying to produce some decent plots and as an example I would like to plot the Euler Gamma function in the complex plane, like done on <a href="https://mathworld.wolfram.com/GammaFunction.html" rel="nofollow noreferrer">this page</a>, see first plot on the left, second row, somewhere mid-page.
The function diverges on the real axis and "squashes" the colorscale; I tried to eliminate values above a threshold by masking the numpy array, but it looks awful.</p>
<p>My best attempt so far uses transparency as follows</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize
from scipy.special import gamma
x = np.linspace(-5,5,250)
y = np.linspace(-5,5,250)
xx,yy = np.meshgrid(x,y)
Z = np.real(gamma(xx + 1j*yy))
# Alpha channel based on Z values
# Null transparency to values whose absolute value is > .0001
alphas = Normalize(0, .3, clip=True)(np.abs(Z))
alphas = np.clip(alphas, .3, 1) # alpha value clipped at the bottom at .3
# Figure
fig, ax = plt.subplots()
ax.imshow(Z, alpha=alphas, cmap = "hsv")
# Some contours for the show
ax.contour(Z[::-1], levels=[-.1, .1], colors='k', linestyles='-')
ax.set_axis_off()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/HHLua.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HHLua.jpg" alt="Real part of Euler Gamma function on the complex plane" /></a></p>
<p>It is still not very informative or pretty; for example, I lose all the details, such as the "striations" starting from the real axis on the page I linked.
I think the colormap could be improved, or maybe there are other techniques. I would be grateful for any advice, thanks.</p>
| <python><numpy><matplotlib> | 2023-07-29 18:48:21 | 1 | 414 | user37292 |
76,794,979 | 6,936,582 | How to split a string of email addresses | <p>I have strings with email addresses which I want to split into individual addresses:</p>
<pre><code>import re
emails = "person1@gmail.comperson2@gmail.comperson3@gmail.com"
re.split("(.com)", emails)
#['person1@gmail', '.com', 'person2@gmail', '.com', 'person3@gmail', '.com', '']
#Expected result: ["person1@gmail.com", "person2@gmail.com", "person3@gmail.com"]
</code></pre>
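<p>While writing this up I found two candidates that seem to give the output I want; are these the right approach?</p>

```python
import re

emails = "person1@gmail.comperson2@gmail.comperson3@gmail.com"

# Lazy match: shortest run of characters ending in a literal ".com"
print(re.findall(r".+?\.com", emails))
# ['person1@gmail.com', 'person2@gmail.com', 'person3@gmail.com']

# Equivalent with a zero-width lookbehind split (drop the trailing empty piece)
parts = [p for p in re.split(r"(?<=\.com)", emails) if p]
```

<p>Both keep the delimiter attached to each address instead of emitting it as a separate element, which is what my capturing-group split was doing.</p>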
| <python><python-re> | 2023-07-29 18:32:11 | 2 | 2,220 | Bera |
76,794,944 | 10,836,714 | How do I make a redirect policy on authentik and apply it to a flow without error 'flow_plan'? | <p>I am using authentik and I want to redirect to a different website when the user logs out.
This is done by applying a policy to the last invalidation stage of the logout flow.</p>
<p>It seems my policy runs BEFORE the flow, which is why I get 'flow_plan' not defined, and consequently context['flow'].redirect is not defined either.</p>
<p>How do I get the policy to come AFTER the flow in the chart?
<a href="https://i.sstatic.net/NMhC7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NMhC7.png" alt="chart flow" /></a></p>
<p><a href="https://i.sstatic.net/o8OTh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o8OTh.png" alt="error" /></a></p>
<p>Here is the yaml for the flow:</p>
<pre><code>context: {}
entries:
- attrs:
authentication: none
denied_action: message_continue
designation: invalidation
layout: stacked
name: Logout
policy_engine_mode: any
title: Nouveau Invalidation Flow
conditions: []
id: null
identifiers:
pk: be191d0e-a41b-483b-aa79-d570ed2022bb
slug: nouveau-invalidation-flow
model: authentik_flows.flow
state: present
- attrs:
execution_logging: true
expression: "plan = request.context.get(\"flow_plan\")\nif not plan:\n return\
\ False\nplan.redirect(\"https://foo.bar\")\nreturn False"
name: nouveau-redirect-policy
conditions: []
id: null
identifiers:
pk: 68ecaf99-0d18-48bb-956b-b69df515a685
model: authentik_policies_expression.expressionpolicy
state: present
- attrs: {}
conditions: []
id: null
identifiers:
name: nouveau-invalidation-stage
pk: 73056828-cff0-4ec6-af49-e72feaf1df0d
model: authentik_stages_user_logout.userlogoutstage
state: present
- attrs:
invalid_response_action: retry
policy_engine_mode: any
re_evaluate_policies: true
conditions: []
id: null
identifiers:
order: 20
pk: 090b0350-e22e-4f46-9010-2ae9979885b1
stage: 73056828-cff0-4ec6-af49-e72feaf1df0d
target: be191d0e-a41b-483b-aa79-d570ed2022bb
model: authentik_flows.flowstagebinding
state: present
- attrs:
enabled: true
group: bea3a21e-5c8e-42cf-ac80-03eb07e3c858
timeout: 30
conditions: []
id: null
identifiers:
order: 100
pk: fda33932-856f-41ea-9d9c-989748d19441
policy: 68ecaf99-0d18-48bb-956b-b69df515a685
target: be191d0e-a41b-483b-aa79-d570ed2022bb
model: authentik_policies.policybinding
state: present
metadata:
labels:
blueprints.goauthentik.io/generated: 'true'
name: authentik Export - 2023-07-29 18:26:43.853730+00:00
version: 1
</code></pre>
| <python><openid-connect> | 2023-07-29 18:19:52 | 1 | 1,221 | Mark Wagner |
76,794,744 | 2,826,018 | Generate all permutations for all rows of a pandas DataFrame - fast | <p>Is there a faster way to generate all permutations for all rows of a pandas DataFrame (and get it back as a dataframe)?</p>
<p>My current approach looks like this:</p>
<pre><code>dataframe = pd.DataFrame(columns=["RX1", "RX2", "RX3", "RX4", "module"])

for idx, row in df.iterrows():
    vals = row.values
    for permutation in itertools.permutations(vals[0:4], 4):
        data = np.stack(permutation)
        dataframe.loc[len(dataframe)] = {
            "RX1": data[0], "RX2": data[1], "RX3": data[2], "RX4": data[3],
            "module": row["module"]
        }
</code></pre>
<p>But it takes quite a bit of time. Is there a faster way to do this?</p>
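<p>For what it's worth, I suspect the per-row <code>.loc[len(dataframe)]</code> append is the slow part; collecting plain tuples and building the frame once is the fastest variant I have tried so far (sketch with made-up data):</p>

```python
import itertools
import pandas as pd

df = pd.DataFrame({
    "RX1": [1, 5], "RX2": [2, 6], "RX3": [3, 7], "RX4": [4, 8],
    "module": ["a", "b"],
})

rows = []
for vals in df.itertuples(index=False):
    *rx, module = vals
    # 4! = 24 orderings of the four RX values, keeping this row's module
    for perm in itertools.permutations(rx, 4):
        rows.append((*perm, module))

# Build the DataFrame once instead of growing it row by row
out = pd.DataFrame(rows, columns=["RX1", "RX2", "RX3", "RX4", "module"])
print(len(out))  # 2 rows x 24 permutations = 48
```

<p>Is there a fully vectorized way (numpy fancy indexing, say) that would beat this?</p>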
| <python><pandas><permutation> | 2023-07-29 17:28:57 | 0 | 1,724 | binaryBigInt |
76,794,391 | 11,092,636 | difference between all() and .all() for checking if an iterable is True everywhere | <p>I believe there are two ways of checking whether a <code>torch.Tensor</code> has values all greater than or equal to 0: either with <code>.all()</code> or with <code>all()</code>. A minimal reproducible example will illustrate my idea:</p>
<pre class="lang-py prettyprint-override"><code>import torch
walls = torch.tensor([-1, 0, 1, 2])
result1 = (walls >= 0.0).all() # DIFFERENCE WITH BELOW???
result2 = all(walls >= 0.0) # DIFFERENCE WITH ABOVE???
print(result1) # Output: False
print(result2) # Output: False
</code></pre>
<p><code>all()</code> is a builtin, so I think I would prefer using it, but most code I see on the internet uses <code>.all()</code>, so I'm afraid there is unexpected behaviour.</p>
<p>Are they both behaving the exact same?</p>
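<p>To check my own understanding on plain Python objects (no torch involved): both spellings should reduce to the same boolean for 1-D data, and the difference would only be that the builtin iterates element-by-element at Python level while <code>.all()</code> reduces natively:</p>

```python
# Stand-in for the boolean tensor produced by (walls >= 0.0)
mask = [False, True, True, True]

# Built-in all(): a Python-level loop that truth-tests each element
builtin_result = all(mask)

# What a vectorized .all() reduction computes (True iff no element is falsy)
reduction_result = not any(not m for m in mask)

assert builtin_result is False
assert builtin_result == reduction_result
```

<p>If that is right, my remaining worry would be multi-dimensional tensors, where the builtin iterates over sub-tensors rather than scalars.</p>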
| <python><pytorch> | 2023-07-29 16:03:03 | 1 | 720 | FluidMechanics Potential Flows |
76,794,241 | 7,169,895 | How to hide legend box in pyside6 | <p><a href="https://i.sstatic.net/MkMqF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MkMqF.png" alt="I have a graph here in QtGraphs" /></a></p>
<p>However, removing setName only makes the legend name disappear. Using <code>chart.legend().hide()</code> makes the chart axis disappear as well.</p>
<p>I do not see a hide function in the documentation.</p>
<p>Here is my code.</p>
<pre><code># Add Federal Reserve Data
unemployment_data = pandas.DataFrame.from_dict({
    u'2012-06-08': 388,
    u'2012-06-09': 388,
    u'2012-06-10': 388,
    u'2012-06-11': 389,
    u'2012-06-12': 389,
    u'2012-07-05': 392,
    u'2012-07-06': 392,
}, orient='index', columns=['foo'])

self.unemployment_chart = QLineSeries()
for date, value in unemployment_data.items():
    date = QDateTime(date).toSecsSinceEpoch()
    print(date, value)
    self.unemployment_chart.append(date, value)

chart2 = QChart()
chart2.setTitle('Unemployment')
chart2.addSeries(self.unemployment_chart)
# chart2.createDefaultAxes()
chartView2 = QChartView(chart2)
grid_box.addWidget(chartView2, 0, 1)

# Formatting for Date / DateTime Data
axis_x = QDateTimeAxis()
axis_x.setFormat("yyyy-MM-dd")
axis_x.setTitleText("Date")
chart2.setAxisX(axis_x)
self.unemployment_chart.attachAxis(axis_x)
</code></pre>
| <python><pyside6> | 2023-07-29 15:24:04 | 0 | 786 | David Frick |
76,794,133 | 4,159,461 | How to improve performance of a loop in python | <p>I have to iterate over a big list and find the first element matching a condition. If I use the built-in <code>filter</code> and <code>list</code> functions it is faster, but it has to complete the whole iteration; if I use a traditional <code>for</code> loop, it can return at the first True condition.
But both take almost the same time to complete. Here is an example.</p>
<pre><code>def for_first_true_1(l_list, string):
for x in l_list:
# may be a more complex logic
if string in x:
return True
return False
</code></pre>
<p>The example above is slower in execution time, but its logic can stop at the first match.</p>
<pre><code>def for_first_true_2(l_list, string):
def has_string(x):
if string in x:
return True
else:
return False
value = list(filter(has_string, l_list))
return bool(value)
</code></pre>
<p>The example above makes more calls, since it has to iterate over the whole list, but each call is more efficient.</p>
<p>The result is that both take almost the same time to complete.</p>
<p>The list had about 100 elements. The result below is based on the original code, which I can't expose:</p>
<pre><code># equivalent of for_first_true_2
RUNNING TIME: 0.0005581378936767578
909 function calls (906 primitive calls)
# equivalent of for_first_true_1
RUNNING TIME: 0.0004608631134033203
264 function calls (261 primitive calls)
</code></pre>
<p>How do I improve the performance of the for loop?</p>
<p>What have I considered as an alternative?</p>
<p>1 - Build a Cython operation, but I don't know how. It takes a list of strings (<code>ll</code>), and <code>func</code> is the Python function to be called:</p>
<pre><code>def bool_for_loop(func, ll: list, arg: str) -> bool:
value = False
for x in ll:
value = func(x, arg)
if value:
return True
return False
</code></pre>
<p>something like</p>
<pre><code>def for_first_true_cython(l_list, string: str):
def has_string(x):
if string in x:
return True
else:
return False
return bool_for_loop(has_string, l_list)
</code></pre>
<p>The <code>any()</code> built-in doesn't seem to be that optimized.</p>
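<p>For reference, the short-circuiting <code>any()</code> variant mentioned above would look like this (a sketch; whether it beats the plain loop depends on the data and on how expensive the per-element check is):</p>

```python
def for_first_true_any(l_list, string):
    # any() with a generator expression stops at the first match,
    # like the explicit for loop, but drives the iteration in C.
    return any(string in x for x in l_list)

print(for_first_true_any(["alpha", "beta", "gamma"], "et"))  # True
print(for_first_true_any(["alpha", "beta", "gamma"], "zz"))  # False
```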
| <python><performance><cython> | 2023-07-29 14:58:22 | 0 | 571 | DevBush |
76,794,117 | 4,865,723 | GNU Gettext class-based API and plural forms with ngettext()? | <p>I use Python's <a href="https://docs.python.org/3/library/gettext.html" rel="nofollow noreferrer"><code>gettext</code></a> module with its so-called <a href="https://docs.python.org/3/library/gettext.html#localizing-your-application" rel="nofollow noreferrer">Class-based API</a>.</p>
<p>I "install" the current translation by calling <code>gettext.install()</code>. This installs the function <code>_()</code> in Python's builtin namespace. The consequence is that I don't have to set up gettext in each module of my application; it works everywhere.</p>
<pre><code>translation = gettext.translation(
domain=MY_DOMAIN,
localedir=MY_LOCALE_DIR,
languages=[language_code, ] if language_code else None,
fallback=True
)
translation.install()
</code></pre>
<p>I can use <code>_()</code> everywhere without problems.</p>
<p>But it seems that <code>gettext.ngettext()</code> for plural forms does not work as expected. It does not translate my plural forms when used like this:</p>
<pre><code>import gettext
translated = gettext.ngettext('singular %d', 'plural %d', val) % val
</code></pre>
<p>The reason is that this <code>ngettext()</code> does not know about the installed translation with its specific domain and context.</p>
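<p>A minimal sketch of one workaround, assuming the class-based API shown above: <code>install()</code> accepts a <code>names</code> argument, so the translation object's own <code>ngettext</code> can be placed into builtins next to <code>_()</code>. Here <code>NullTranslations</code> stands in for the real <code>gettext.translation(...)</code> object:</p>

```python
import gettext

# install(names=...) puts the listed methods into builtins alongside _(),
# bound to this translation (and therefore to its domain and catalog).
translation = gettext.NullTranslations()
translation.install(names=["ngettext"])

val = 3
print(ngettext("singular %d", "plural %d", val) % val)  # plural 3
```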
| <python><gettext> | 2023-07-29 14:53:25 | 1 | 12,450 | buhtz |
76,793,927 | 2,523,254 | Homebrew install: PHP dependency re2c can't find Python interpreter | <p>I broke my PHP installation on MacOS Mojave, so I'm trying:</p>
<pre><code>brew reinstall shivammathur/php/php@7.4
</code></pre>
<p>This goes well until reaching <code>re2c</code>:</p>
<pre><code>==> Installing shivammathur/php/php@7.4 dependency: re2c
</code></pre>
<p>which fails with:</p>
<pre><code>checking for a Python interpreter with version >= 3.7... none
configure: error: no suitable Python interpreter found
</code></pre>
<p>This is very odd, because python is installed and has multiple versions that should pass:</p>
<pre><code>~$ which python
/usr/local/bin/python
~$ python --version
Python 3.11.4
~$ python3 --version
Python 3.11.4
~$ python3.9 --version
Python 3.9.17
</code></pre>
<p>Digging through the <code>config.log</code> of <code>re2c</code>, I find a lot of failed checks:</p>
<pre><code>...
configure:3672: checking for a Python interpreter with version >= 3.7
configure:3690: python -c import sys # split strings by '.' and convert to numeric. Append some zeros # because we need at least 4 digits for the hex conversion. # map returns an iterator in Python 3.0 and a list in 2.x minver = list(map(int, '3.7'.split('.'))) + [0, 0, 0] minverhex = 0 # xrange is not present in Python 3.0 and range returns an iterator for i in list(range(0, 4)): minverhex = (minverhex << 8) + minver[i] sys.exit(sys.hexversion < minverhex)
configure:3693: $? = 1
configure:3690: python2 -c import sys # split strings by '.' and convert to numeric. Append some zeros # because we need at least 4 digits for the hex conversion. # map returns an iterator in Python 3.0 and a list in 2.x minver = list(map(int, '3.7'.split('.'))) + [0, 0, 0] minverhex = 0 # xrange is not present in Python 3.0 and range returns an iterator for i in list(range(0, 4)): minverhex = (minverhex << 8) + minver[i] sys.exit(sys.hexversion < minverhex)
./configure: line 3691: python2: command not found
configure:3693: $? = 127
configure:3690: python3 -c import sys # split strings by '.' and convert to numeric. Append some zeros # because we need at least 4 digits for the hex conversion. # map returns an iterator in Python 3.0 and a list in 2.x minver = list(map(int, '3.7'.split('.'))) + [0, 0, 0] minverhex = 0 # xrange is not present in Python 3.0 and range returns an iterator for i in list(range(0, 4)): minverhex = (minverhex << 8) + minver[i] sys.exit(sys.hexversion < minverhex)
./configure: line 3691: python3: command not found
configure:3693: $? = 127
configure:3690: python3.9 -c import sys # split strings by '.' and convert to numeric. Append some zeros # because we need at least 4 digits for the hex conversion. # map returns an iterator in Python 3.0 and a list in 2.x minver = list(map(int, '3.7'.split('.'))) + [0, 0, 0] minverhex = 0 # xrange is not present in Python 3.0 and range returns an iterator for i in list(range(0, 4)): minverhex = (minverhex << 8) + minver[i] sys.exit(sys.hexversion < minverhex)
./configure: line 3691: python3.9: command not found
...
</code></pre>
<p>It seems that the <code>python</code> command executes, but fails the check, while <code>python3</code> and <code>python3.9</code> return <code>command not found</code>.</p>
<p>How can this be if my shell clearly thinks otherwise?</p>
<p>Thank you for any suggestions!!</p>
| <python><php><homebrew><re2c> | 2023-07-29 14:08:29 | 0 | 1,181 | gl03 |
76,793,865 | 2,558,463 | scrapy shell works fine but not for scrapy crawl | <p>Today I stumbled on a problem using Scrapy (I don't have prior experience with Scrapy).</p>
<pre><code>import scrapy
from scrapy_splash import SplashRequest
class AtmSpider(scrapy.Spider):
name = 'scrapebca'
allowed_domains = ['www.bca.co.id']
starts_url = ['https://www.bca.co.id/id/lokasi-bca']
def parse(self, response):
data = response.xpath('//div[@class="a-card shadow0 m-maps-location-container-wrapper"]')
# terminal_ids = response.xpath('//p[@class="a-text a-text-small m-maps-location-container-wrapper-code"]')
# terminal_names = response.xpath('//p[@class="a-text a-text-subtitle a-text-ellipsis-single m-maps-location-container-wrapper-title"]')
# terminal_locations = response.xpath('//p[@class="a-text a-text-body a-text-ellipsis-address m-maps-location-container-wrapper-address"]')
# services = response.xpath('//p[@class="a-text a-text-small m-maps-location-container-wrapper-code service-value"]')
# longitudes = response.xpath('//div[@class="action-link maps-show-route"]/@data-long"]').extract()
# latitudes = response.xpath('//div[@class="action-link maps-show-route"]/@data-lat"]').extract()
for item in data:
terminal_id = item.xpath('.//p[@class="a-text a-text-small m-maps-location-container-wrapper-code"]/text()').getall()
yield {
'terminal_id': terminal_id
}
</code></pre>
<p>The problem is that the Scrapy shell works fine:</p>
<ol>
<li>fetch('https://www.bca.co.id/id/lokasi-bca')</li>
<li>data = response.xpath('//div[@class="a-card shadow0 m-maps-location-container-wrapper"]')</li>
<li>data.xpath('.//p[@class="a-text a-text-small m-maps-location-container-wrapper-code"]/text()').getall() --> it returns something</li>
</ol>
<p>But when I try to execute it using the crawler:</p>
<pre><code>scrapy crawl <appname>
</code></pre>
<p>it returns this error:</p>
<pre><code>c:\phyton373\lib\site-packages\OpenSSL\_util.py:6: UserWarning: You are using cryptography on a 32-bit Python on a 64-bit Windows Operating System. Cryptography will be significantly faster if you switch to using a 64-bit Python.
from cryptography.hazmat.bindings.openssl.binding import Binding
2023-07-29 20:45:18 [scrapy.utils.log] INFO: Scrapy 2.9.0 started (bot: scrapebca)
2023-07-29 20:45:18 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.3, cssselect 1.2.0, parsel 1.8.1, w3lib 2.1.1, Twisted 22.10.0, Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 21:26:53) [MSC v.1916 32 bit (Intel)], pyOpenSSL 23.2.0 (OpenSSL 3.1.1 30 May 2023), cryptography 41.0.2, Platform Windows-10-10.0.19041-SP0
2023-07-29 20:45:18 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'scrapebca',
'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter',
'FEED_EXPORT_ENCODING': 'utf-8',
'HTTPCACHE_STORAGE': 'scrapy_splash.SplashAwareFSCacheStorage',
'NEWSPIDER_MODULE': 'scrapebca.spiders',
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'ROBOTSTXT_OBEY': True,
'SPIDER_MODULES': ['scrapebca.spiders'],
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2023-07-29 20:45:18 [asyncio] DEBUG: Using selector: SelectSelector
2023-07-29 20:45:18 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2023-07-29 20:45:18 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2023-07-29 20:45:18 [scrapy.extensions.telnet] INFO: Telnet Password: 47c5b5514938a8f3
2023-07-29 20:45:18 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2023-07-29 20:45:18 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy_splash.SplashCookiesMiddleware',
'scrapy_splash.SplashMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2023-07-29 20:45:18 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy_splash.SplashDeduplicateArgsMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2023-07-29 20:45:18 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2023-07-29 20:45:18 [scrapy.core.engine] INFO: Spider opened
2023-07-29 20:45:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2023-07-29 20:45:18 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2023-07-29 20:45:18 [scrapy.core.engine] INFO: Closing spider (finished)
2023-07-29 20:45:18 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.003025,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2023, 7, 29, 13, 45, 18, 446634),
'log_count/DEBUG': 3,
'log_count/INFO': 10,
'start_time': datetime.datetime(2023, 7, 29, 13, 45, 18, 443609)}
2023-07-29 20:45:18 [scrapy.core.engine] INFO: Spider closed (finished)
</code></pre>
<p>Are there any steps that I missed?</p>
| <python><scrapy> | 2023-07-29 13:47:04 | 1 | 511 | galih |
76,793,801 | 19,661,530 | Why does this google.auth.exceptions.RefreshError occur in the firebase_admin lib? | <p>I followed a detailed YouTube video tutorial and wrote the code:</p>
<pre><code>import firebase_admin
from firebase_admin import db, credentials
cred = credentials.Certificate("C:/Users/Administrator/Desktop/firebase/dsffd-a55b6-firebase-adminsdk-o3dda-42a1614ac9.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://dsffd-a55b6-default-rtdb.europe-west1.firebasedatabase.app/"})
ref = db.reference("/")
ref.get()
</code></pre>
<p>This is the error that appears:</p>
<pre><code>google.auth.exceptions.RefreshError: ('invalid_grant: Invalid JWT: Token
must be a short-lived token (60 minutes) and in a reasonable timeframe.
Check your iat and exp values in the
JWT claim.', {'error': 'invalid_grant', 'error_description': 'Invalid
JWT: Token must be a short-lived token (60 minutes) and in a reasonable
timeframe. Check your iat and exp values in the JWT claim.'})
</code></pre>
| <python><python-3.x><firebase-admin> | 2023-07-29 13:33:11 | 0 | 664 | islam abdelmoumen |
76,793,704 | 1,942,868 | Sending json and file(binary) together by requests() of python | <p>I have this curl command which sends a file and data to my API.</p>
<p>It works correctly.</p>
<pre><code>curl --location 'localhost:8088/api/' \
--header 'Content-Type: multipart/form-data' \
--header 'Accept: application/json' \
--form 'file=@"image.png"' \
--form 'metadata="{
\"meta\": {
\"number\": 400
}}"'
</code></pre>
<p>Now I want to do the equivalent thing inside Python.</p>
<p>So I use <code>requests</code>; however, it says <code>TypeError: request() got an unexpected keyword argument 'file'</code>.</p>
<p>How can I send the JSON and image data together?</p>
<pre><code>headers = {
'Content-Type': 'multipart/form-data',
'Accept': 'application/json'
}
metadata = {"number":400}
response = requests.post('https://localhost:8088/api/',
headers=headers, data={
metadata:metadata},
file = {
open("image.png",'rb')
}
)
</code></pre>
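<p>For reference, a sketch of how <code>requests</code> usually handles this: multipart parts go in the <code>files=</code> argument (a <code>(filename, fileobj)</code> tuple for the file, <code>(None, value)</code> for a plain form field), and the <code>Content-Type</code> header is left out so the library can generate the multipart boundary itself:</p>

```python
import io
import json

metadata = {"meta": {"number": 400}}
payload = {
    # (None, value) sends a plain form field with no filename
    "metadata": (None, json.dumps(metadata)),
    # (filename, file-like) sends a file part; BytesIO stands in
    # here for open("image.png", "rb")
    "file": ("image.png", io.BytesIO(b"\x89PNG...")),
}
# response = requests.post("http://localhost:8088/api/", files=payload,
#                          headers={"Accept": "application/json"})
print(json.loads(payload["metadata"][1])["meta"]["number"])  # 400
```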
| <python><django><request> | 2023-07-29 13:01:54 | 2 | 12,599 | whitebear |
76,793,693 | 1,340,744 | how to simplify a huge arithmetic expression? | <p>I have a huge expression like:</p>
<pre><code>x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52] + FUNC1(z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51] + FUNC0(h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50] + FUNC0(a49) + FUNC2(a49, x49, y49)) + FUNC2(h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50] + FUNC0(a49) + FUNC2(a49, x49, y49), a49, x49) + y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51] + FUNC1(h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50] + FUNC0(a49) + FUNC2(a49, x49, y49) + w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50] + FUNC1(a49 + v49 + FUNC1(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52]) + RET(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 
+ h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52], y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + movr[53] + m1[53]) + RET(a49 + v49 + FUNC1(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52]) + RET(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52], y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + movr[53] + m1[53], x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) 
+ RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52], y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + movr[54] + m1[54]) + RET(h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50] + FUNC0(a49) + FUNC2(a49, x49, y49) + w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50] + FUNC1(a49 + v49 + FUNC1(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52]) + RET(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52], y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + movr[53] + m1[53]) + RET(a49 + v49 + FUNC1(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + 
FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52]) + RET(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52], y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) ...
</code></pre>
<p>I need to simplify this expression by replacing repeated subexpressions with variables.
For example:</p>
<pre><code>a = RET(v49, t49, z49)
b= w49 + h49 + FUNC1(v49) + a + movr[50] + m1[50]
and so on...
</code></pre>
<p>My problem is that this is a really huge expression (about 2 MB long), and doing this manually without mistakes is nearly impossible.</p>
<p>Now my question is: is there any app that will do such a thing, or any Python program to do so?</p>
<p>I can program in Python easily, but I lack knowledge of such algorithms.</p>
<p>Any help is appreciated.</p>
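<p>A minimal sketch of one possible approach, assuming the candidate subexpressions can be listed (longest first) and that plain textual replacement is safe here; for genuinely symbolic input, <code>sympy.cse()</code> performs this kind of common-subexpression elimination properly:</p>

```python
def factor_subexpressions(expr, candidates):
    # Replace each repeated candidate subexpression with a short variable.
    # `candidates` should be ordered longest-first so larger subexpressions
    # are factored out before the pieces they contain.
    defs = []
    for i, sub in enumerate(candidates):
        if expr.count(sub) > 1:
            name = f"s{i}"
            defs.append(f"{name} = {sub}")
            expr = expr.replace(sub, name)
    return defs, expr

expr = "RET(v49, t49, z49) + w49 + FUNC1(RET(v49, t49, z49)) + RET(v49, t49, z49)"
defs, reduced = factor_subexpressions(expr, ["RET(v49, t49, z49)"])
print(defs)     # ['s0 = RET(v49, t49, z49)']
print(reduced)  # s0 + w49 + FUNC1(s0) + s0
```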
| <python><math><simplify> | 2023-07-29 12:57:51 | 1 | 382 | HabibS |
76,793,620 | 9,258,134 | Regex - Difference between ^(abc|a)$ and ^abc|a$ | <p>I have the input word <code>abcab</code> and the two following regular expressions:</p>
<p><code>^abc|(abc(abc|a))$</code><br>
<code>^(abc|(abc(abc|a)))$</code></p>
<p>Why does the first regex return the match <code>abc</code>, while the second one matches nothing (intended behavior)?</p>
<p>The only difference is the parentheses between <code>^</code> and <code>$</code>, and as far as I can tell this shouldn't change anything.</p>
<p>I tested it on regex101.com with Python regex.</p>
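<p>A quick sketch of the difference in Python: the anchors bind to the individual alternatives, not to the whole alternation, unless parentheses group it:</p>

```python
import re

# ^abc|(abc(abc|a))$ parses as (^abc) | ((abc(abc|a))$),
# so "abc" at the start of the string is enough to match.
print(re.search(r"^abc|(abc(abc|a))$", "abcab"))    # matches "abc"

# With the outer parentheses, both anchors apply to every alternative,
# and no alternative matches the whole string "abcab".
print(re.search(r"^(abc|(abc(abc|a)))$", "abcab"))  # None
```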
| <python><regex> | 2023-07-29 12:39:21 | 1 | 467 | Sheldon |
76,793,574 | 1,942,868 | How to get the file object from request | <p>I send a file with JavaScript like this,</p>
<p>fetching a PNG from the server and sending it to Python:</p>
<pre><code> fetch(fileUrl,{headers: headers})
.then(response => response.blob())
.then(blob => new File([blob], "image.png"))
.then(file => {
var formData = new FormData();
formData.append("metadata",JSON.stringify(metadata));
formData.append("file",file);
axios.post(
ai_api,formData,{}
).then(function (response) {
console.log("sent success");
}
</code></pre>
<p>then in django</p>
<pre><code>@api_view(["POST"])
def jobs(request):
metadata = request.POST.get("metadata")
file = request.POST.get("file")
print(metadata) # I can get the metadata here!!
print(file) # None
</code></pre>
<p>Why is this file <code>None</code>?</p>
<p>How can I get the file itself?</p>
| <javascript><python><django> | 2023-07-29 12:28:06 | 1 | 12,599 | whitebear |
76,793,554 | 4,231,821 | How to merge two datasets in python | <p>I have one dataset:</p>
<pre><code>NotesData = [{"Labels" : "Q1-17" , "EPCCO" : "This is Eppco Note" , "QACCO" : "This is QACCO Notes"}]
</code></pre>
<p>I have another dataset:</p>
<pre><code>ChartData = [ {'ForDate': '2020-12-31T00:00:00', 'ForYear': 2020, 'Labels': 'Q1-17', 'EPCCO': 29.459162790697675, 'QACCO': 20.10097777777778}]
</code></pre>
<p>I want to check whether the label of any object is present in both datasets; if so, I want to add to the object in the second dataset all the keys from the first dataset, except Labels.</p>
<p>For example, my data would look like this:</p>
<pre><code>[ {'ForDate': '2020-12-31T00:00:00', 'ForYear': 2020, 'Labels': 'Q1-17', 'EPCCO': 29.459162790697675, 'QACCO': 20.10097777777778 , 'EPCCO_Note' : 'This is Eppco Note' , 'QACCO_Note' : 'This is QACCO Notes'}]
</code></pre>
<p>I can afford to loop over my first dataset because there will not be many objects, but I cannot afford to loop over my second dataset, as the data is too large; it can be thousands of objects. Also, the keys EPCCO and QACCO are dynamic, so they can be any keys.</p>
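<p>A sketch of one approach, assuming labels are unique within the notes list: build a dict keyed by Labels from the small dataset, then walk the big dataset exactly once:</p>

```python
notes_data = [{"Labels": "Q1-17", "EPCCO": "This is Eppco Note",
               "QACCO": "This is QACCO Notes"}]
chart_data = [{"ForDate": "2020-12-31T00:00:00", "ForYear": 2020,
               "Labels": "Q1-17", "EPCCO": 29.459162790697675,
               "QACCO": 20.10097777777778}]

# One pass over the small list builds an O(1) lookup table.
notes_by_label = {note["Labels"]: note for note in notes_data}

# One pass over the big list; the copied keys are whatever the note
# dict contains, so EPCCO/QACCO being dynamic is not a problem.
for row in chart_data:
    note = notes_by_label.get(row["Labels"])
    if note is not None:
        for key, value in note.items():
            if key != "Labels":
                row[f"{key}_Note"] = value

print(chart_data[0]["EPCCO_Note"])  # This is Eppco Note
```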
| <python><dataset> | 2023-07-29 12:21:44 | 2 | 527 | Faizan Naeem |
76,793,386 | 14,037,055 | I have a data frame with two columns, A and B. Now, how can I store the data frame values in a list, separating them with commas and parentheses? | <p>I have the following DataFrame:</p>
<pre><code>import pandas as pd
data = [[0.8,0.23], [0.3,.40], [0.6,0.8], [0.7,0.9]]
df = pd.DataFrame(data, columns=['A','B'])
df
</code></pre>
<p>My expected output is</p>
<pre><code>List= [(0.8,0.23), (0.3,.40), (0.6,0.8), (0.7,0.9)]
</code></pre>
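<p>The underlying idea, sketched with plain lists; with a DataFrame, the equivalents would be <code>list(zip(df["A"], df["B"]))</code> or <code>list(df.itertuples(index=False, name=None))</code>:</p>

```python
col_a = [0.8, 0.3, 0.6, 0.7]
col_b = [0.23, 0.40, 0.8, 0.9]

# zip pairs the columns row by row; list(...) materialises the tuples.
pairs = list(zip(col_a, col_b))
print(pairs)  # [(0.8, 0.23), (0.3, 0.4), (0.6, 0.8), (0.7, 0.9)]
```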
| <python><pandas><dataframe> | 2023-07-29 11:38:21 | 2 | 469 | Pranab |
76,793,331 | 12,462,568 | Python script using `asyncio` library won't stop running | <p>I am using the libraries <code>telethon</code> and <code>asyncio</code> to scrape messages on Telegram, but somehow my Python script below won't stop running. It got to the stage where it's printing out all the filtered messages correctly (under code line <code>print(message.date, message.text, message.sender_id)</code>), but it's not outputting the CSV file (under code line <code>df.to_csv('filename.csv', encoding='utf-8')</code>). It just keeps running.</p>
<pre><code>import asyncio
import nest_asyncio
from telethon.sync import TelegramClient
import pandas as pd
username = "MY-USERNAME" # your Telegram account username
api_id = MY-API-ID # your Telegram account API ID
api_hash = "MY-API-HASH" # your Telegram account API hash
phone = "MY-PHONE-NO" # your Telegram account mob. no. with country code
channel_username = "clickhouse_en" # channel username
start_date_value = "2023-07-01 00:00:00" # Specify the date and time range (in UTC)
end_date_value = "2023-07-25 23:59:59" # Specify the date and time range (in UTC)
keywords_value = [] # Specify the keywords to filter, eg. keywords_value = ["data", "report"]
# Apply nest_asyncio to enable running an event loop within a running loop
nest_asyncio.apply()
async def main(start_date=None, end_date=None, keywords=None):
# Convert date and time range to pandas timestamp format
start_date = pd.Timestamp(start_date)
end_date = pd.Timestamp(end_date)
# Convert keywords to lowercase
keywords = [keyword.lower() for keyword in keywords]
data = []
async with TelegramClient(username, api_id, api_hash) as client:
async for message in client.iter_messages("https://t.me/" + channel_username):
if start_date.timestamp() <= pd.Timestamp(message.date).timestamp() <= end_date.timestamp():
# Conver message to lowercase and split the message into individual words
words_lower_in_msg = str(message.text).lower().split()
# 'if not keywords' means when no keywords are given, ie. keywords_value = []
if not keywords or any(keyword == word_lower_in_msg for word_lower_in_msg in words_lower_in_msg for keyword in keywords):
print(message.date, message.text, message.sender_id)
data.append([message.date, message.text, message.sender_id])
# creates a new dataframe
df = pd.DataFrame(data, columns=["message.date", "message.text", "message.sender_id"])
# creates a csv file
df.to_csv('filename.csv', encoding='utf-8')
# Get the event loop and run the main function
loop = asyncio.get_event_loop()
loop.run_until_complete(main(start_date = start_date_value, end_date = end_date_value, keywords = keywords_value))
</code></pre>
<p>I am using Jupyter Notebook, which means I need to use the <code>nest_asyncio</code> library in the script above.</p>
<p>I would really appreciate it if anyone could help with the above and let me know where I have gone wrong. Many thanks in advance!</p>
| <python><python-asyncio><telegram><telethon><nest-asyncio> | 2023-07-29 11:23:21 | 1 | 2,190 | Leockl |
76,793,329 | 8,222,791 | Selenium (python) : finding an element relative to a previously-found one | <p>I'm web-scraping a page with the Selenium module in Python. I've looked for a certain element and found it, like this:</p>
<pre><code>drv = webdriver.Firefox(options=opt)
drv.get(url)
x = drv.find_element('xpath', './/div[@class="col-lg-2 col-md-3 col-xs-6 guest-item "]')
</code></pre>
<p>Now I'd like to do a search for another element, using an XPath expression, but only inside the sub-tree that is below the <code>x</code> element, not across the entire document.</p>
<p>How can I specify that the XPath search should start from the <code>x</code> element and not from the document's root ?</p>
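<p>The same idea in miniature with the standard library's ElementTree; with Selenium, the equivalent is calling <code>x.find_element('xpath', './/...')</code> on the element itself, where the leading <code>.</code> anchors the search at that node rather than at the document root:</p>

```python
import xml.etree.ElementTree as ET

root = ET.fromstring(
    "<doc><div id='a'><span>inner</span></div><span>outer</span></doc>"
)
x = root.find(".//div")      # first search, like drv.find_element(...)
inner = x.find(".//span")    # relative search, scoped to x's subtree only
print(inner.text)            # inner
```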
| <python><selenium-webdriver><xpath> | 2023-07-29 11:22:13 | 1 | 2,303 | joao |
76,792,890 | 10,735,143 | How to return JSON data sorted in FastAPI? | <p>Although I am returning a dictionary sorted by value, the FastAPI response is not sorted.</p>
<p>I'm using</p>
<ul>
<li>python: 3.8.10</li>
<li>fastapi: 0.95.2</li>
</ul>
<h5>This is my endpoint:</h5>
<pre class="lang-bash prettyprint-override"><code>@classifier_router.post("/groupClassifier")
async def group_classifier(
# current_user: User = Depends(get_current_user),
group_id: str = Query(default="1069302375", description=GROUP_ID_DESC),
starttime: Union[datetime , None] = DEFAULT_START_TIME,
endtime: Union[datetime , None] = DEFAULT_END_TIME):
result = handler.group_classifier(
[group_id],
starttime,
endtime)
if result==None:
raise HTTPException(
status_code=500
)
else:
return result
</code></pre>
<h5>Result which returned in swagger:</h5>
<pre><code>{
"class_scores": {
"1": 34.8,
"2": 22.1,
"3": 16.9,
"4": 7.8,
"5": 26,
"6": 23.6
}
}
</code></pre>
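<p>For reference, a plain-Python sketch of sorting a dict by value: since Python 3.7, dicts preserve insertion order, and <code>json.dumps</code> keeps that order unless <code>sort_keys=True</code> is passed, so if the order is lost it is worth checking what re-serialises the payload along the way:</p>

```python
import json

scores = {"1": 34.8, "2": 22.1, "3": 16.9, "4": 7.8, "5": 26, "6": 23.6}

# Sort by value, descending, and rebuild a dict in that order.
sorted_scores = dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
print(list(sorted_scores))   # ['1', '5', '6', '2', '3', '4']
print(json.dumps(sorted_scores))
```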
| <python><fastapi> | 2023-07-29 09:14:06 | 2 | 634 | Mostafa Najmi |
76,792,848 | 13,370,214 | Create sample dataset for benchmark conversion in PySpark | <p>I have a DataFrame df1 which has around 4,000 rows and a string column 'variable'. Now I want to create a sample dataset for df1 from DataFrame df, which has 1M rows. Both datasets have the "variable" column in common.</p>
<ol>
<li>The sample dataset to be created should be 2X the size of df1.</li>
<li>The sample dataset to be created should contain the same proportion of variable count as df1.</li>
<li>For example, the count of the variable value 'BLR' is 750 out of 4,000 records, which is around 18% in df1. The sample dataset should then contain 1,500 out of 8,000 records, which is also about 18%.</li>
</ol>
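<p>A sketch of the idea with plain Python lists; in PySpark, the usual tool is stratified sampling via <code>df.sampleBy("variable", fractions, seed)</code>, where <code>fractions</code> maps each key to the fraction of that key's rows to keep:</p>

```python
import random
from collections import Counter

random.seed(0)
small = ["BLR"] * 750 + ["DEL"] * 3250                          # stands in for df1["variable"]
big = [random.choice(["BLR", "DEL"]) for _ in range(100_000)]   # stands in for df

target = 2 * len(small)                                         # 2X the size of df1
proportions = {k: n / len(small) for k, n in Counter(small).items()}

# Index the big dataset by key, then draw each stratum's share of the target.
by_key = {}
for i, v in enumerate(big):
    by_key.setdefault(v, []).append(i)

sample_idx = []
for k, p in proportions.items():
    sample_idx += random.sample(by_key[k], round(target * p))

print(len(sample_idx))  # 8000
```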
| <python><dataframe><pyspark> | 2023-07-29 09:00:52 | 1 | 431 | Harish reddy |
76,792,728 | 5,272,967 | Documenting Literal Types in Python for IDE Tooltips | <p>Using Python's <code>TypedDict</code>, one can easily constrain the input dictionaries and provide documentation for each dictionary key:</p>
<pre class="lang-py prettyprint-override"><code>import typing

class MyType(typing.TypedDict):
    a: str
    """Parameter a"""
    b: int
    """Parameter b"""

def fun(x: MyType):
    ...
</code></pre>
<p>Most IDEs will list all possible declared keys, and keys' documentation can be displayed in tooltips.</p>
<p><a href="https://i.sstatic.net/PzHiM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PzHiM.png" alt="example1" /></a></p>
<p><a href="https://i.sstatic.net/xZDYE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xZDYE.png" alt="example2" /></a></p>
<p>Now, let's consider a similar approach for string literals. Suppose we have the following function:</p>
<pre class="lang-py prettyprint-override"><code>import typing

def fun(x: typing.Literal["a", "b", "c"]):
    ...
</code></pre>
<p>In my IDE (VS Code), it hints at all possible inputs, i.e., <code>"a"</code>, <code>"b"</code>, <code>"c"</code>. Is there a way to document these parameters and have it displayed in tooltips, similar to the <code>TypedDict</code> example?</p>
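<p>As an aside, one workaround (a design alternative, not a feature of <code>Literal</code> itself) is an <code>Enum</code>, since some IDEs surface docstrings placed after enum members the same way they do for <code>TypedDict</code> keys:</p>

```python
import enum

class Option(enum.Enum):
    """Allowed inputs for fun()."""
    A = "a"
    """Parameter a"""
    B = "b"
    """Parameter b"""

def fun(x: Option) -> None:
    ...

fun(Option.A)
```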
| <python><python-typing> | 2023-07-29 08:28:17 | 0 | 313 | andywiecko |
76,792,247 | 1,800,928 | Unable to Generate Summary in Bullet Points using Langchain | <p>I am currently working with a chat agent from the Langchain project. My goal is to generate a summary of a video content in bullet points, but I've run into an issue. The agent is capable of summarizing the content, but it doesn't format the summary in bullet points as I'm instructing it to.</p>
<p>Here's the agent initialization code I've used:</p>
<pre><code>agent = initialize_agent(
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    early_stopping_method='generate',
    memory=memory,
)
</code></pre>
<p>Then, I'm feeding the following prompt to the agent:</p>
<pre><code>prompt = "summarize in bullets: https://www.youtube.com/watch?v=n2Fluyr3lbc"
</code></pre>
<p>The agent provides a summary but doesn't format the output into bullet points as expected. Instead, it simply generates the content in paragraph form.</p>
<p>Can anyone provide any guidance on why the agent might not be following the 'bullet point' instruction in the prompt? Is there a specific way I should be formulating my prompt or is there a setting in the agent initialization I might be missing?</p>
<p>Any help or guidance is appreciated.</p>
<p>What I Tried and Expected:</p>
<p>I have initialized the Langchain agent with the appropriate settings and then passed it a prompt to summarize a YouTube video link. My expectation was that the agent, given the prompt "summarize in bullets: [YouTube Link]", would provide a bullet-point summary of the content in the video.</p>
<p>I chose this approach as I believed the agent should be capable of interpreting and executing this instruction based on the understanding and processing capabilities the Langchain models typically exhibit. I expected the output to be a concise list of key points extracted from the video, each point presented as a separate bullet point.</p>
<p>What Actually Resulted:</p>
<p>In reality, the agent did provide a summary of the video content, indicating that it correctly processed the video and carried out the 'summarize' part of the instruction. However, it did not format the summary in bullet points as I instructed. Instead, the summary was provided in the form of a single, unstructured paragraph.</p>
<p>Therefore, while the agent demonstrated its ability to summarize the content, it did not adhere to the formatting instruction. The challenge, in this case, is figuring out why the 'bullet point' aspect of the instruction was not carried out.</p>
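<p>For context, a common first thing to try (an assumption, not verified against this agent type) is making the formatting requirement explicit and unambiguous in the prompt itself, rather than the terse "summarize in bullets":</p>

```python
# Hypothetical reworded prompt; the URL is the one from the question
prompt = (
    "Summarize this video: https://www.youtube.com/watch?v=n2Fluyr3lbc\n"
    "Format the answer ONLY as a bullet list: one key point per line, "
    "each line starting with '- '. Do not write any paragraphs."
)
```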
| <python><chatbot><langchain><agent><large-language-model> | 2023-07-29 05:59:36 | 1 | 2,185 | Ayyaz Zafar |
76,792,243 | 2,558,016 | How to use the free domain given by ngrok? | <p>In their release of ngrok <a href="https://ngrok.com/blog-post/new-ngrok-domains" rel="nofollow noreferrer">https://ngrok.com/blog-post/new-ngrok-domains</a>, they now provide free domains to free users as of April 2023. I can see my one free domain at <a href="https://dashboard.ngrok.com/cloud-edge/domains" rel="nofollow noreferrer">https://dashboard.ngrok.com/cloud-edge/domains</a>. However, I don't know how to use it.</p>
<p>I have tried
<code>ngrok http -subdomain=xxx.ngrok-free.app 80</code>
and
<code>ngrok http -subdomain=xxx 80</code></p>
<p>But it will give me an error</p>
<pre><code>Custom subdomains are a feature on ngrok's paid plans.
Failed to bind the custom subdomain 'xxx.ngrok-free.app' for the account 'James Arnold'.
This account is on the 'Free' plan.
</code></pre>
<p>Is there documentation on how to use this free ngrok domain?</p>
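<p>For what it's worth, under ngrok v3 a claimed free domain is attached with the <code>--domain</code> flag rather than <code>-subdomain</code> (a sketch; the domain name is a placeholder to replace with the one shown on your dashboard):</p>

```shell
# Attach the claimed free domain (replace xxx.ngrok-free.app with yours)
ngrok http 80 --domain=xxx.ngrok-free.app
```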
| <python><django><flask><ngrok> | 2023-07-29 05:59:13 | 2 | 790 | James Arnold |
76,791,962 | 5,053,475 | Sort lists keeping maximum distance between repeating elements | <p>Given a list, e.g.:
[1,1,2,2,3,5]</p>
<p>I'd like to reorder it:
[1,2,3,5,1,2]</p>
<hr />
<p><strong>EDIT after the comment: the final goal is to maximize the smallest distance between any two occurrences of the same number, or in other words to minimize the closeness of repeating elements and spread them out as much as possible in the list</strong></p>
<p>Duplicate questions:
<a href="https://stackoverflow.com/questions/12375831/algorithm-to-separate-items-of-the-same-type/76792233#76792233">Algorithm to separate items of the same type</a></p>
<p><a href="https://stackoverflow.com/questions/75701937/maximize-the-minimum-distance-between-the-same-elements/76792326#76792326">Maximize the minimum distance between the same elements</a></p>
<p>I came up with this solution, which is the one I'll use if nothing better is proposed:
<a href="https://stackoverflow.com/a/76792326/5053475">https://stackoverflow.com/a/76792326/5053475</a></p>
<hr />
<p>What algorithm can I use?</p>
<p>Proceeding by logic:
I was thinking of counting the occurrences; for example:
[1, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 44, 4, 5, 6, 1, 4, 5, 3, 6, 7, 8, 9, 11, 123, 22, 44, 1, 5, 5]</p>
<p>would give this "occurrence count" list of (value, count) pairs:
[(3, 10), (1, 4), (5, 4), (44, 2), (4, 2), (6, 2), (7, 1), (8, 1), (9, 1), (11, 1), (123, 1), (22, 1)]</p>
<p>so I would:</p>
<ul>
<li>calculate all the numbers that have more than 1 repetition, in this case, 6 numbers</li>
<li>There are 3 numbers with 2 occurrences:
<ul>
<li>The 6 will go in positions 3 (0+5-2) and 24 (29-5)</li>
<li>The 4 will go in positions 4 (0+5-1) and 25 (29-5+1)</li>
<li>The 44 will go in positions 5 (0+5) and 26 (29-5+2)</li>
</ul>
</li>
<li>There are 2 numbers with 4 occurrences
<ul>
<li>The 5 will go between positions 1 (0+6-3-1) and 27 (29-6+3), approximately every 6 numbers</li>
<li>The 1 will go between positions 2 (0+6-3) and 28 (29-6+3+1), approximately every 6 numbers</li>
</ul>
</li>
<li>There is 1 number with 10 occurrences
<ul>
<li>The 3 will go between positions 0 and 29, approximately every 3 numbers.</li>
</ul>
</li>
</ul>
<p>And so on.
Then fill the empty positions with the non-repeating numbers.
On a collision (position already occupied), alternate +1, -1, +2, -2, and so on until a free position is found.
I came up with this code, which closely follows my algorithm:</p>
<pre><code>from collections import defaultdict

def find_nearest_none(arr, position):
    if arr[position] is None:
        return position
    step = 1
    while True:
        left_pos = position - step
        right_pos = position + step
        if left_pos >= 0 and arr[left_pos] is None:
            return left_pos
        elif right_pos < len(arr) and arr[right_pos] is None:
            return right_pos
        elif left_pos < 0 and right_pos >= len(arr):
            # Both left and right positions are out of bounds
            return False
        step += 1

def max_distance_list(nums):
    num_occurrences = {}
    t = len(nums)
    out = [None] * t
    for num in nums:
        num_occurrences[num] = num_occurrences.get(num, 0) + 1
    num_occurrences = sorted(num_occurrences.items(), key=lambda item: item[1], reverse=True)
    grouped_data = defaultdict(list)
    for key, value in num_occurrences:
        grouped_data[value].append(key)
    print(grouped_data)
    start_pos = 0
    for x, y in dict(grouped_data).items():
        print("Start pos:", start_pos)
        for z in y:
            sep = t // x
            pos = start_pos
            for i in range(x):
                free_pos = find_nearest_none(out, pos)
                out[free_pos] = z
                pos += sep
            start_pos += 1
    return out
</code></pre>
<p>Output: [3, 1, 5, 3, 44, 4, 3, 6, 1, 3, 5, 7, 3, 8, 1, 3, 5, 44, 3, 4, 6, 3, 1, 5, 3, 9, 11, 3, 123, 22]</p>
<p>but that's not an optimal result; it's merely acceptable, and far from fast.
Is there an existing function that does this, better and faster?</p>
<p>If not, I'll proceed in chunks and use this 'acceptable' result, since
it's for creating a crawler queue, to space out calls to the same IP and/or domain</p>
<p>Appreciate any help</p>
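<p>For comparison, here is a common greedy sketch (a heuristic: it spreads repeats apart, but is not proven optimal for the max-min-distance objective): sort values by frequency, then fill the even indices first, then the odd ones:</p>

```python
from collections import Counter

def spread_out(nums):
    # Most frequent values first, expanded to one entry per occurrence
    order = [v for v, c in Counter(nums).most_common() for _ in range(c)]
    out = [None] * len(nums)
    # Fill even slots left-to-right, then odd slots
    slots = list(range(0, len(nums), 2)) + list(range(1, len(nums), 2))
    for slot, val in zip(slots, order):
        out[slot] = val
    return out

print(spread_out([1, 1, 2, 2, 3, 5]))
```

Note this guarantees no two equal neighbors whenever that is possible at all, but it does not maximize the minimum gap in every case.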
| <python><list><logic> | 2023-07-29 03:35:57 | 3 | 734 | Daniele Rugginenti |
76,791,619 | 2,497,309 | Switch VLC screens using python-vlc | <p>I have multiple live streams running which I'm playing with python-vlc.</p>
<p>So something like this:</p>
<pre><code>stream1 = vlc.MediaPlayer("rtsp://...")
stream2 = vlc.MediaPlayer("rtsp://...")
stream1.play()
stream1.audio_toggle_mute()
stream2.play()
stream2.toggle_fullscreen()
</code></pre>
<p>I can use toggle_fullscreen to display one, but how can I switch between multiple streams and bring a specific one to the foreground? Would I just have to stop and start it again, or is there an easier way?</p>
| <python><vlc><python-vlc> | 2023-07-29 00:22:18 | 1 | 947 | asm |
76,791,616 | 4,268,602 | Saving results of plot_RGB_chromaticities_in_chromaticity_diagram_CIE1931 with higher resolution | <p>I am using the <code>colour</code> library in python.</p>
<p>I am calling <code>plot_RGB_chromaticities_in_chromaticity_diagram_CIE1931</code> as such:</p>
<pre><code>fig, axes = plot_RGB_chromaticities_in_chromaticity_diagram_CIE1931(RGB=RGB, scatter_kwargs={'c': colors, 'marker': '.', 'label': "e"}, filename='e.jpeg', dpi=8000, show_pointer_gamut=False)
</code></pre>
<p>No matter what <code>filename</code> format I pass, I cannot get a higher resolution. <code>dpi</code> does not appear to do anything. <code>fig.dpi</code>, once returned, shows <code>200.0</code> no matter what I pass.</p>
<p>Is there a way around this? How can I save at a higher resolution?</p>
<p>Is it possible for me to edit the <code>dpi</code> after the fact and save it?</p>
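<p>For reference, the usual matplotlib escape hatch (a sketch with a plain figure standing in for the one returned by the colour-science function) is to re-save the returned figure yourself, since <code>Figure.savefig</code> takes its own <code>dpi</code> independent of <code>fig.dpi</code>:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
# The dpi passed here controls the resolution of the written file,
# regardless of what fig.dpi reports
fig.savefig("e_highres.png", dpi=600)
```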
| <python><matplotlib><colors> | 2023-07-29 00:20:51 | 1 | 4,156 | Daniel Paczuski Bak |
76,791,609 | 1,207,193 | Decorator to create @property getters for all my class _attributes Python3+ | <p>I am writing a class using SQLAlchemy that represents an object. I want many of its member attributes (in fact all) starting with <code>_</code> (underscore) to have a read-only @property. I don't want to write that boilerplate many times, so I thought about a decorator to apply to my class that reads all its attributes and creates a <code>@property</code> for each one starting with <code>_</code>, using the name without the underscore. For example, <code>self._age</code> gets a <code>@property def age(self): return self._age</code>.</p>
<p>I am struggling a bit since I have forgotten some things about decorators; so far I have written the code below:</p>
<pre><code>def make_readonly_properties(cls):
    for attr_name in dir(cls):
        if attr_name.startswith('_'):
            setattr(cls, attr_name[1:], property(getattr(cls, attr_name)))
    return cls
</code></pre>
<p>And this is a sample class to test the idea, not working so far.</p>
<pre><code>@make_readonly_properties
class X:
    def __init__(self):
        self._age = 34
        self._address = 'Unknown street at City Z'
        self.dog_name = 'Saitama'
</code></pre>
<p>I just started to wonder if there is not a PEP for this? Could be useful for <code>__</code> two underscores too.</p>
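<p>For reference, a sketch of how a working variant could look (my reading of the intent, not the original code fixed in place): instance attributes such as <code>self._age</code> do not exist on the class at decoration time, so the decorator can install a <code>__getattr__</code> hook instead of scanning <code>dir(cls)</code>:</p>

```python
def make_readonly_properties(cls):
    # self._age only exists after __init__ runs, so hook attribute lookup
    # instead of creating property objects at decoration time
    def __getattr__(self, name):
        private = "_" + name
        if not name.startswith("_") and private in self.__dict__:
            return self.__dict__[private]
        raise AttributeError(name)
    cls.__getattr__ = __getattr__
    return cls

@make_readonly_properties
class X:
    def __init__(self):
        self._age = 34
        self.dog_name = "Saitama"

x = X()
print(x.age)       # resolved through __getattr__ -> self._age
print(x.dog_name)  # normal attribute, untouched
```

Note this makes the values readable without the underscore but does not forbid `x.age = ...`, which would shadow the hook; true read-only behavior would also need a `__setattr__` guard.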
| <python><python-3.x><python-decorators> | 2023-07-29 00:18:23 | 2 | 7,852 | imbr |
76,791,326 | 10,061,193 | Cookiecutter Template Dependency Management | <p>I'm developing a cookiecutter template. My template has some dependencies. For example, in <code>hooks/post_gen_project.py</code> I want to show a message using <code>rich.print</code>. How can I satisfy such a requirement?</p>
<p>Also, I've seen many cookiecutter templates with either <code>setup.py</code>, <code>pyproject.toml</code>, or <code>setup.cfg</code> files included in their root directory. What is the point of that?! At first, I thought that might be a way to add dependencies to my template, but that didn't work, and I'm wondering about the whole idea behind it. <em>Unfortunately, cookiecutter's documentation is weak on all these concepts and details!</em></p>
<p>Thanks.</p>
| <python><templates><jinja2><dependency-management><cookiecutter> | 2023-07-28 22:27:25 | 1 | 394 | Sadra |
76,791,157 | 3,755,036 | where are logs for GCP API calls surfaced? | <p>I have python code, calling an API to list all my instances. Here's the API I'm calling. <a href="https://cloud.google.com/compute/docs/reference/rest/v1/instances/list" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/reference/rest/v1/instances/list</a></p>
<p>However, I can't seem to locate the logs in log explorer. In AWS this was pretty straightforward. Any idea where GCP surfaces them?</p>
| <python><google-cloud-platform><google-api><google-cloud-logging> | 2023-07-28 21:40:51 | 1 | 818 | chickenman |
76,791,066 | 1,591,921 | Call trio from inside a greenlet | <p>I've tried this:</p>
<pre><code>import trio
from gevent import monkey, spawn
monkey.patch_all()

async def async_double(x):
    return 2 * x

def run():
    return trio.run(async_double, 3)

g = spawn(run)
g.join()
</code></pre>
<p>But I get an error:</p>
<pre><code>Traceback (most recent call last):
  File "src/gevent/greenlet.py", line 908, in gevent._gevent_cgreenlet.Greenlet.run
  File "/Users/xxx/t.py", line 10, in run
    return trio.run(async_double, 3)
  File "/Users/xxx/Library/Python/3.9/lib/python/site-packages/trio/_core/_run.py", line 1990, in run
    runner = setup_runner(
  File "/Users/xxx/Library/Python/3.9/lib/python/site-packages/trio/_core/_run.py", line 1885, in setup_runner
    io_manager = TheIOManager()
  File "<attrs generated init trio._core._io_kqueue.KqueueIOManager>", line 15, in __init__
    self.__attrs_post_init__()
  File "/Users/xxx/Library/Python/3.9/lib/python/site-packages/trio/_core/_io_kqueue.py", line 33, in __attrs_post_init__
    force_wakeup_event = select.kevent(
AttributeError: module 'select' has no attribute 'kevent'
2023-07-28T21:06:31Z <Greenlet at 0x105d22370: run> failed with AttributeError
</code></pre>
<p>I've also tried importing and patching gevent first, but that fails even before running:</p>
<pre><code>from gevent import monkey, spawn
monkey.patch_all()
import trio
</code></pre>
<p>Traceback:</p>
<pre><code>  File "/Users/xxx/t.py", line 3, in <module>
    import trio
  File "/Users/xxx/Library/Python/3.9/lib/python/site-packages/trio/__init__.py", line 18, in <module>
    from ._core import (
  File "/Users/xxx/Library/Python/3.9/lib/python/site-packages/trio/_core/__init__.py", line 27, in <module>
    from ._run import (
  File "/Users/xxx/Library/Python/3.9/lib/python/site-packages/trio/_core/_run.py", line 2458, in <module>
    raise NotImplementedError("unsupported platform")
NotImplementedError: unsupported platform
</code></pre>
<p>I think I managed to get the first approach (importing trio first) working at times (but that code was a little different and a lot more complex), but I want to know if that was just lucky and would have issues later on.</p>
<p>This is on MacOS, but it needs to be cross platform.</p>
<p>Edit: Note that "This is impossible, don't even try" is a completely acceptable answer, as long as it is properly motivated :)</p>
<p>Edit: My end goal is to allow users of Locust (a load test framework which uses gevent for concurrency) to also run things that depend on trio (like for example Playwright)</p>
| <python><gevent><python-trio> | 2023-07-28 21:16:09 | 2 | 11,630 | Cyberwiz |
76,791,009 | 3,842,845 | How to connect to Azure File Share and read csv file using Python | <p>I am trying to write Python code to connect to an Azure File Share by URL with some credentials/keys:</p>
<p>I am running this code from an IDE (Visual Studio Code).</p>
<p>But I am not sure where to put the <strong>URL info</strong>.</p>
<pre><code>from azure.storage.file import FileService
storageAccount='cocacola'
accountKey='xxxdrinksomethingxxxx'
file_service = FileService(account_name=storageAccount, account_key=accountKey)
share_name = 'dietcoke'
directory_name = 'test'
file_name = '20230728.csv'
file = file_service.get_file_to_text(share_name, directory_name, file_name)
print(file.content)
</code></pre>
<p>Currently, the error message is "azure.common.AzureMissingResourceHttpError: The specified parent path does not exist. ErrorCode: ParentNotFound"</p>
<p>How do I add the URL info code here?</p>
| <python><azure><azure-storage><azure-file-share> | 2023-07-28 21:01:52 | 1 | 1,324 | Java |
76,790,980 | 1,330,381 | How to make vscode navigate generated python files from bazel? | <p>I'm curious if anyone has any tips and techniques for navigating to definitions and modules that are generated code and referenced from python modules checked into the repo. I'm basically looking for a streamlined experience for code navigation as it pertains to that code present in the bazel cache.</p>
<p>To give a concrete example</p>
<pre class="lang-py prettyprint-override"><code># my_in_repo_source.py
from my.generated.code.module import generated_symbol
</code></pre>
<p>The goal is to be able to command click the symbol <code>generated_symbol</code> and jump down into the bazel cache and locate the definition.</p>
<p>By way of analogy, C++ has the so called <code>includePath</code> setting that can be configured in <code>c_cpp_properties.json</code> which can include things like the bazel convenience symlinks of <code>bazel-bin</code>, <code>bazel-generated</code>, etc.</p>
<p><a href="https://code.visualstudio.com/docs/cpp/c-cpp-properties-schema-reference" rel="nofollow noreferrer">https://code.visualstudio.com/docs/cpp/c-cpp-properties-schema-reference</a></p>
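<p>For Python, the analogue of <code>includePath</code> (assuming the Pylance language server) is <code>python.analysis.extraPaths</code> in <code>settings.json</code>; the convenience-symlink entries below are an untested sketch, to be replaced with whichever symlinks your workspace actually has:</p>

```json
{
  "python.analysis.extraPaths": [
    "bazel-bin",
    "bazel-genfiles"
  ]
}
```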
| <python><visual-studio-code><code-generation><bazel> | 2023-07-28 20:54:40 | 1 | 8,444 | jxramos |
76,790,953 | 14,120,046 | GPT fine-tuning stuck at "pending" status in Python | <p>I'm trying to fine-tune GPT with a simple dataset, but the fine-tune response status never changes from "pending"; I have been waiting for more than a day.</p>
<p>This is my training data:</p>
<pre><code>training_data = [{
    "prompt": "Where is the billing ->",
    "completion": " You find the billing in the left-hand side menu.\n"
}, {
    "prompt": "How do I upgrade my account ->",
    "completion": " Visit you user settings in the left-hand side menu, then click 'upgrade account' button at the top.\n"
}]
</code></pre>
<p>Function to check the fine-tuning status:</p>
<pre><code>def check_fine_tuning_status(fine_tune_id):
    response = openai.FineTune.retrieve(id=fine_tune_id)
    return response.status, response.fine_tuned_model
</code></pre>
<p>This is the response:</p>
<pre><code>{
  "object": "fine-tune",
  "id": "ft-xxxxxxxxxxxxx",
  "hyperparams": {
    "n_epochs": 4,
    "batch_size": null,
    "prompt_loss_weight": 0.01,
    "learning_rate_multiplier": null
  },
  "organization_id": "org-xxxxxxxxx",
  "model": "davinci",
  "training_files": [
    {
      "object": "file",
      "id": "file-xxxxxxxxxxxxxxx",
      "purpose": "fine-tune",
      "filename": "file",
      "bytes": 274,
      "created_at": 1690568560,
      "status": "processed",
      "status_details": null
    }
  ],
  "validation_files": [],
  "result_files": [],
  "created_at": 1690568561,
  "updated_at": 1690568561,
  **"status": "pending",**
  "fine_tuned_model": null,
  "events": [
    {
      "object": "fine-tune-event",
      "level": "info",
      "message": "Created fine-tune: ft-xxxxxxxxxxxxx",
      "created_at": 1690568561
    }
  ]
}
</code></pre>
<p>And the rest of the code:</p>
<pre><code>file_name = "train00.json"
with open(file_name, "w") as output_file:
    for entry in training_data:
        json.dump(entry, output_file)
        output_file.write("\n")

print("upload res\n")
upload_response = openai.File.create(
    file=open(file_name, "rb"),
    purpose='fine-tune'
)
file_id = upload_response.id
print("fileid", file_id)

print("\nfine tune response\n")
fine_tune_response = openai.FineTune.create(training_file=file_id, model="davinci")
fine_tune_id = fine_tune_response.id
print("fine_tune_id", fine_tune_id)

print("\n retrieve job\n")
status, fine_tuned_model_id = check_fine_tuning_status(fine_tune_id)
while status == "pending":
    print("Fine-tuning is still in progress. Waiting...")
    time.sleep(30)  # Wait for 30 seconds before checking again
    status, fine_tuned_model_id = check_fine_tuning_status(fine_tune_id)

# Check the final status
if status == "succeeded":
    print("Fine-tuning completed successfully!")
    print("Fine-tuned model ID:", fine_tuned_model_id)
elif status == "failed":
    print("Fine-tuning failed.")
else:
    print("Unknown status:", status)
</code></pre>
<p>I can see in my usage dashboard that the fine-tune training request count is increasing, but the status is still in the pending state. If someone could also explain what these dashboard request numbers mean, that would be helpful too.
<a href="https://i.sstatic.net/QlZWQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QlZWQ.png" alt="enter image description here" /></a>
I also tried changing the API key and changing the training data and models, but nothing has helped. If someone could help me solve this, it would be much appreciated.</p>
| <python><python-3.x><openai-api><chatgpt-api><fine-tuning> | 2023-07-28 20:48:38 | 0 | 308 | Arvind suresh |
76,790,888 | 2,052,153 | How can a legacy function returning PyObject * be integrated in a solution based on pybind11? | <p>I am working on migrating from SWIG to pybind11 for exposing C++ to Python.</p>
<p>One of the functions is implemented purely in the basic Py* API, such as PyDict_New, PyUnicode_FromString, and PyDict_SetItemString. It returns a PyObject *, i.e.:</p>
<pre><code>PyObject *foo(const std::string& file) {
    ...
}
</code></pre>
<p>For now I would just like to reuse this piece of C code. But how can I integrate it into the pybind11 solution?</p>
| <python><pybind11> | 2023-07-28 20:37:32 | 1 | 309 | user2052153 |
76,790,729 | 2,417,922 | Why does indexing a Numpy int array cause "TypeError: 'bool' object is not subscriptable"? | <p>I have a Python program that is failing with the error:</p>
<p><code>TypeError: 'bool' object is not subscriptable</code></p>
<p>I understand what the message means, but I do not understand why it applies because I am trying to index a Numpy 2D <code>int</code> array, using a two-tuple <code>(row,col)</code> as the index.</p>
<p>Here is the code and I've put a large <<<==== arrow next to the line that is getting the error message; can you please explain what's going wrong and how to fix it? I think it might be related to the line I put <code>$$$$</code> next to.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

# An "island" is a 4-connected component of 0 cells, and a "closed island"
# is an island surrounded on all sides by 1's. An "open island" is an
# island NOT surrounded on all sides by 1's, which is (I hope) equivalent
# to having at least one cell adjacent to the grid border. A cell is
# indexed by a 2-tuple (row,col)

class Solution:
    def closedIsland(self, grid: list[list[int]]) -> int:
        nrows = len( grid )
        ncols = len( grid[ 0 ] )
        conn_comp = self.ConnectedComponents( nrows, ncols )
        # Gather the connected components of 0's (!)
        for irow in range( nrows ):
            for icol in range( ncols ):
                if grid[ irow ][ icol ] == 1:
                    continue
                conn_comp.onborder = irow == 0 or irow == nrows - 1 or icol == 0 or icol == ncols - 1 $$$$
                # Look North
                if irow > 0 and grid[ irow - 1][ icol ] == 0:
                    conn_comp.union( ( irow, icol ), ( irow - 1, icol ) )
                # Look West
                if icol > 0 and grid[ irow ][ icol - 1 ] == 0:
                    conn_comp.union( ( irow, icol ), ( irow, icol - 1 ) )
        return 0

    class ConnectedComponents:
        def __init__( self, nrows, ncols ):
            self.parent = np.zeros( ( nrows, ncols ), dtype = object )
            for row in range( nrows ):
                for col in range( ncols ):
                    self.parent[ row, col ] = ( row, col )
            self.size = np.zeros( ( nrows, ncols ), dtype = int )
            self.onborder = np.zeros( ( nrows, ncols ), dtype = int )
            print( "type(self.onborder)", type( self.onborder ), "self.onborder", self.onborder )

        def find( self, rowcol ):
            while rowcol != self.parent[ rowcol ]:
                rowcol = self.parent[ rowcol ]
            return rowcol

        def union( self, rowcol_1, rowcol_2 ):
            comp1 = self.find( rowcol_1 )
            comp2 = self.find( rowcol_2 )
            print( "comp1", comp1, "comp2", comp2 )
            if self.size[ comp1 ] >= self.size[ comp2 ]:
                self.parent[ comp2 ] = comp1
                self.size[ comp1 ] += self.size[ comp2 ]
                self.onborder[ comp1 ] += self.onborder[ comp2 ] <<<===========
            else:
                self.parent[ comp1 ] = comp2
                self.size[ comp2 ] += self.size[ comp1 ]
                self.onborder[ comp2 ] += self.onborder[ comp1 ]

grid = [[1,1,1,1,1,1,1,0],[1,0,0,0,0,1,1,0],[1,0,1,0,1,1,1,0],[1,0,0,0,0,1,0,1],[1,1,1,1,1,1,1,0]]
print( "result", Solution().closedIsland( grid ) )
</code></pre>
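<p>As an aside, a minimal reproduction of the likely cause: the line marked <code>$$$$</code> rebinds <code>conn_comp.onborder</code> from the NumPy array created in <code>__init__</code> to a plain Python bool, so the later <code>self.onborder[comp1]</code> indexes a bool:</p>

```python
import numpy as np

# Starts as an indexable ndarray, as in ConnectedComponents.__init__
onborder = np.zeros((2, 2), dtype=int)
# The $$$$ line then rebinds the name to a scalar bool
onborder = True
try:
    onborder[(0, 0)]
except TypeError as err:
    print(err)  # 'bool' object is not subscriptable
```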
| <python><numpy><numpy-ndarray> | 2023-07-28 20:03:16 | 0 | 1,252 | Mark Lavin |
76,790,718 | 2,403,672 | Need to get absolute list of urls for files stored on file server using curl/wget | <p>I have a bunch of files (more than 50K) stored on the server:</p>
<pre><code>http://fileserver1.com/sec-test/compressed/mango/exe/small/
</code></pre>
<p>I need to get a file that lists the absolute URLs of these files for my tests, without downloading the files. For instance:</p>
<p>file_output:</p>
<pre><code>http://fileserver1.com/sec-test/compressed/mango/exe/small/1.exe
http://fileserver1.com/sec-test/compressed/mango/exe/small/2.exe
http://fileserver1.com/sec-test/compressed/mango/exe/small/3.exe
...
http://fileserver1.com/sec-test/compressed/mango/exe/small/50.exe
</code></pre>
<p>I tried a couple of curl and wget commands. I am able to download the files recursively, but I can't get a listing of the file paths from the file server saved somewhere.</p>
<p><code>wget -r -np -k http://fileserver1/sec-test/compressed/mango/exe/small/ > /dev/null</code></p>
<p>Can someone please help me achieve this? Thanks in advance.</p>
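<p>For reference, a hedged Python sketch of a different route (since the question is also tagged python): parse the Apache/nginx-style directory index for links instead of downloading recursively. The sample page below is a stand-in for what the real server would return:</p>

```python
# Parse a directory-index page for .exe links; BASE is the listing URL
from html.parser import HTMLParser
from urllib.parse import urljoin

BASE = "http://fileserver1.com/sec-test/compressed/mango/exe/small/"

class IndexLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.endswith(".exe"):
                self.urls.append(urljoin(BASE, href))

# In real use: html = urllib.request.urlopen(BASE).read().decode()
html = '<a href="1.exe">1.exe</a> <a href="2.exe">2.exe</a>'  # stand-in page
parser = IndexLinkParser()
parser.feed(html)
print("\n".join(parser.urls))
```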
| <python><linux><shell><curl><wget> | 2023-07-28 20:01:49 | 1 | 736 | deep |
76,790,705 | 2,077,648 | How to calculate the average of multiple iterations using Python? | <p>Below is a table with results from multiple runs of different scenarios (like Test_2k, Result_2k, Test_1K, Result_5k). I need to calculate the average score for each scenario, as shown in the table.
Could you please suggest how we can calculate the average score for each scenario, as in the table below, using Python?</p>
<p><a href="https://i.sstatic.net/eimpE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eimpE.png" alt="enter image description here" /></a></p>
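<p>Since the table is only a screenshot, here is a hedged pandas sketch with made-up numbers standing in for the real ones; the per-scenario average is just a column-wise <code>mean()</code>:</p>

```python
import pandas as pd

# Hypothetical data: one row per run, one column per scenario
df = pd.DataFrame({
    "Test_2k": [10.0, 12.0, 14.0],
    "Result_2k": [1.0, 2.0, 3.0],
    "Result_5k": [20.0, 22.0, 24.0],
})
averages = df.mean()  # Series indexed by scenario name
print(averages)
```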
| <python><pandas><dataframe><openpyxl><xlsxwriter> | 2023-07-28 19:59:55 | 1 | 967 | user2077648 |
76,790,589 | 3,314,876 | Get coordinates of rectangle contour | <p>I am trying to get the 4 points of a rectangle in an image. I have code that gets the contours of the image and detects if it has 4 points. But I don't know how to get those 4 points from the contour data. Here is what I have so far. It comes from code I've scrounged from various tutorials on image processing I've found.</p>
<pre><code>import cv2
import numpy as np

green = (0, 255, 0)  # for drawing contours, etc.

# a normal video capture loop
cap = cv2.VideoCapture(1)
if not cap.isOpened():
    print("Cannot open camera")
    exit()

while True:
    ret, frame = cap.read()
    if not ret:
        print("Can't receive frame (stream end?). Exiting ...")
        break
    image = frame.copy()  # can tweak this one
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edged = cv2.Canny(blur, 75, 200)
    contours, _ = cv2.findContours(edged, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)
    cv2.drawContours(image, contours, -1, green, 3)

    # go through each contour looking for the one with 4 points
    # This is a rectangle, and the first one will be the biggest because
    # we sorted the contours from largest to smallest
    doc_cnts = None
    if len(contours) >= 1:
        for contour in contours:
            # we approximate the contour
            peri = cv2.arcLength(contour, True)
            approx = cv2.approxPolyDP(contour, 0.05 * peri, True)
            if len(approx) == 4:
                doc_cnts = approx
                break
    if doc_cnts is not None:
        print(doc_cnts)

    cv2.imshow('original', frame)
    cv2.imshow('changed', edged)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>When I run this, the edges image looks like it is finding something rectangular in the image. I point my camera at an envelope on the floor. I see lots of small background edges from the carpet patterns, and I do see the larger rectangle one for the envelope.</p>
<p>I don't understand the format of the actual contour data for the rectangle and how I can use it to get actual x,y coordinates. I don't just want to draw contours, I want to test the rectangle to ensure the image is taken front-on.</p>
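<p>To illustrate the data layout (a sketch with made-up coordinates, no camera needed): <code>cv2.approxPolyDP</code> returns an integer array of shape <code>(4, 1, 2)</code> for a quadrilateral, so reshaping yields the four <code>(x, y)</code> corners:</p>

```python
import numpy as np

# Shape (4, 1, 2), as approxPolyDP returns it; values are illustrative
doc_cnts = np.array([[[10, 20]], [[110, 22]], [[108, 80]], [[12, 78]]])
points = doc_cnts.reshape(-1, 2)
for x, y in points:
    print(x, y)
```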
| <python><opencv><contour> | 2023-07-28 19:33:56 | 2 | 480 | shawnlg |
76,790,573 | 1,887,261 | Correct/Pythonic way to use @cached_property to set a permanent attribute on a Python class | <p>I am trying to set a permanent attribute on a <code>class</code> to be used by other methods throughout the life of the instance of the <code>class</code>.</p>
<pre><code>class Test:
    @cached_property
    def datafile_num_rows(self) -> int:
        return self

    def set_num_rows(self):
        # insert some calculations
        calcs = 2
        self.datafile_num_rows = calcs
</code></pre>
<p>Given the above class, the following yields the proper output, but it feels quite odd to me to just return <code>self</code>. What would be the most correct/pythonic way to accomplish this?</p>
<pre><code>In [118]: x = Test()
In [119]: x.set_num_rows()
In [120]: x.datafile_num_rows
Out[120]: 2
</code></pre>
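<p>For comparison, a sketch of the more idiomatic shape (assuming the calculation can live inside the property itself): let <code>cached_property</code> compute and cache the value lazily on first access, so no separate setter method and no <code>return self</code> placeholder are needed:</p>

```python
from functools import cached_property

class Test:
    @cached_property
    def datafile_num_rows(self) -> int:
        # insert some calculations here; the result is cached per instance
        return 2

x = Test()
print(x.datafile_num_rows)  # computed on first access, then cached on x
```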
| <python><python-class> | 2023-07-28 19:28:49 | 1 | 12,666 | metersk |
76,790,438 | 9,997,212 | How can I make a context manager returning a generator exit successfully when not fully consuming the generator? | <p>I'm reading a CSV file using the <code>with</code>-block in Python. This emulates a file being opened (<code>__enter__</code>), the lines of the file being generated on demand (<code>yield (...)</code>) and the file being closed (<code>__exit__</code>).</p>
<pre class="lang-py prettyprint-override"><code>@contextlib.contextmanager
def open_file():
    print('__enter__')
    yield (f'line {i}' for i in (1, 2, 3))
    print('__exit__')
</code></pre>
<p>I want to read the lines of this CSV and want to return a model for each row. So I'm doing something like that:</p>
<pre class="lang-py prettyprint-override"><code>class Model:
    def __init__(self, value: int):
        self.value = value

def create_models():
    with open_file() as f:
        for line in f:
            yield Model(int(line.split(' ')[-1]))
</code></pre>
<p>If I run through the whole file, it will call both <code>__enter__</code> and <code>__exit__</code> callbacks:</p>
<pre class="lang-py prettyprint-override"><code>def main1():
    for model in create_models():
        print(model.value)

main1()
# __enter__
# 1
# 2
# 3
# __exit__
</code></pre>
<p>However, if I don't need all models from the file, it'll not call the <code>__exit__</code> callback:</p>
<pre class="lang-py prettyprint-override"><code>def main2():
    for model in create_models():
        print(model.value)
        if model.value % 2 == 0:
            break

main2()
# __enter__
# 1
# 2
<p>How can I implement a context manager that exits even if I don't iterate over the whole child generator? Is it even possible?</p>
<p>Note that, in this example, I can't change the <code>open_file</code> context manager, since in my code it's the <a href="https://docs.python.org/3/library/functions.html#open" rel="nofollow noreferrer"><code>open</code> built-in</a> from Python (I used <code>open_file</code> here just to be a minimal and reproducible example).</p>
<p>What I want to do is something similar to <code>Stream</code>s in Dart:</p>
<pre class="lang-dart prettyprint-override"><code>import 'dart:convert';
class Model {
final int value;
const Model(this.value);
}
void main() {
File('file.csv')
.openRead()
.transform(utf8.decoder)
.transform(const LineSplitter())
.map((line) => int.parse(line.split(',')[1]))
.map((value) => Model(value))
.takeWhile((model) => model.value % 2 == 0)
.forEach((model) {
print(model.value);
});
}
</code></pre>
<p>This would close the file successfully, even if I don't consume all lines from it (<code>takeWhile</code>). Since my Python codebase is all synchronous code, I'd like to avoid introducing <code>async</code> and <code>await</code>, because it would take too long to refactor.</p>
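<p>For context, one pattern that guarantees cleanup without touching <code>open_file</code> itself is to wrap the outer generator in <code>contextlib.closing</code>, so abandoning the iteration early still finalizes it. A minimal sketch; the <code>try/finally</code> is added only so the toy <code>open_file</code> releases its resource the way the real <code>open</code> built-in does when an exception passes through the <code>with</code> block:</p>

```python
import contextlib

log = []

@contextlib.contextmanager
def open_file():
    log.append('__enter__')
    try:
        yield (f'line {i}' for i in (1, 2, 3))
    finally:
        log.append('__exit__')  # runs even when closed early

def create_models():
    with open_file() as f:
        for line in f:
            yield int(line.split(' ')[-1])

# closing() calls .close() on the generator, which raises GeneratorExit
# at its suspended yield and unwinds the with-block inside it
with contextlib.closing(create_models()) as models:
    for value in models:
        if value % 2 == 0:
            break

print(log)  # -> ['__enter__', '__exit__']
```

<p>The same <code>closing()</code> wrapper stays fully synchronous, so it would not require any <code>async</code> refactor.</p>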
| <python><generator><contextmanager> | 2023-07-28 18:58:31 | 1 | 11,559 | enzo |
76,790,435 | 7,169,895 | Adding button next to tabs in TabWidget | <p>I am looking to add a button for a new tab right next to the open tabs (much like a browser).
However, none of the default Layouts support stacking items on top of each other and <strong>displaying both</strong>. Can anyone give some pointers on how I can accomplish this? A grid layout at best puts them side by side, but puts the button to the far side.</p>
<p>Minimum runnable code:</p>
<pre><code>from PySide6.QtWidgets import (
    QMainWindow, QWidget, QVBoxLayout, QTabWidget, QLabel, QGridLayout,
    QPushButton
)

class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.container = QWidget() # Controls container widget.
        self.controlsLayout = QVBoxLayout() # Controls container layout. Might need to be horizontal with vertical for some.
# Add stuff here before
# See https://pythonbasics.org/pyqt-tabwidget/ on tab widgets
self.tabs_layout = QGridLayout()
self.tabs = QTabWidget()
        label1 = QLabel("Widget in Tab 1.")
        label2 = QLabel("Widget in Tab 2.")
        # have classes for each page
        self.tabs.addTab(label1, "First")
        self.tabs.addTab(label2, "Second")
self.tabs_layout.addWidget(self.tabs, 1, 0)
# Add new tab button
self.new_tab_button = QPushButton("+")
self.tabs_layout.addWidget(self.new_tab_button, 0, 1)
self.controlsLayout.addLayout(self.tabs_layout)
</code></pre>
<p>I know this is achievable because apps like Konsole (which uses Qt behind the scenes) have this functionality.</p>
<p>This has absolute positioning, which I might need
<a href="https://www.loekvandenouweland.com/content/pyside6-widget-absolute-position.html" rel="nofollow noreferrer">https://www.loekvandenouweland.com/content/pyside6-widget-absolute-position.html</a></p>
| <python><pyside6> | 2023-07-28 18:57:44 | 1 | 786 | David Frick |
76,790,322 | 2,748,409 | Using `awkward` array to keep track of two different combinations of particles | <p>I'm building on a previous <a href="https://stackoverflow.com/questions/72834275/using-awkward-array-with-zip-unzip-with-two-different-physics-objects">question</a> about the best way to use <code>awkward</code> efficiently with combinations of particles.</p>
<p>Suppose I have a final state of 4 muons and my hypothesis is that these come from some resonance or particle (Higgs?) that decays to two Z bosons, which themselves decay to 2 muons. The Z's should be neutral. So let me start building a test case.</p>
<pre><code>!curl http://opendata.cern.ch/record/12361/files/SMHiggsToZZTo4L.root --output SMHiggsToZZTo4L.root
</code></pre>
<p>Then import the usual suspects</p>
<pre><code>import awkward as ak
import uproot
import vector
vector.register_awkward()
</code></pre>
<p>Read in the data and create a <code>MomentumArray4D</code>.</p>
<pre><code>infile = uproot.open("SMHiggsToZZTo4L.root")
muon_branch_arrays = infile["Events"].arrays(filter_name="Muon_*")
muons = ak.zip({
"pt": muon_branch_arrays["Muon_pt"],
"phi": muon_branch_arrays["Muon_phi"],
"eta": muon_branch_arrays["Muon_eta"],
"mass": muon_branch_arrays["Muon_mass"],
"charge": muon_branch_arrays["Muon_charge"],
}, with_name="Momentum4D")
</code></pre>
<p>Now physics!</p>
<pre><code>quads = ak.combinations(muons, 4)
mu1, mu2, mu3, mu4 = ak.unzip(quads)
p4 = mu1 + mu2 + mu3 + mu4
</code></pre>
<p>This all works swimmingly and I could plot the mass from the <code>p4</code> object if I wanted.</p>
<p>However, I want to select the data based on the masses and charge combinations of the muon pairs. For example, if I have 4 muons, I have multiple combinations that could have come from the Z.</p>
<ul>
<li>Z1 --> mu1 + mu2, Z2 --> mu3 + mu4</li>
<li>Z1 --> mu1 + mu3, Z2 --> mu2 + mu4</li>
<li>Z1 --> mu1 + mu4, Z2 --> mu2 + mu3</li>
</ul>
<p>So I might require that both Z1 and Z2 are neutral and that they are in some mass window and then use that requirement to mask the <code>p4</code> observables.</p>
<p>I thought <em>maybe</em> I could just do</p>
<pre><code>pairs = ak.combinations(quads,2)
zmu1,zmu2 = ak.unzip(pairs)
zp4 = zmu1 + zmu2
</code></pre>
<p>But I get a lot of errors at the last step. <code>muons</code> and <code>quads</code> are <code><class 'vector.backends.awkward.MomentumArray4D'></code> and <code><class 'awkward.highlevel.Array'></code> objects respectively, so I get that they won't behave the same.</p>
<p>So what's the proper <code>awkward</code> syntax to handle this?</p>
<p>Thanks!</p>
<p>Matt</p>
| <python><physics><awkward-array> | 2023-07-28 18:40:40 | 0 | 440 | Matt Bellis |
76,790,308 | 5,113,701 | How to use QSqlDatabase to import database from disk to in-memory? | <p>I want to import a database from disk to memory at application start and on exit export back from memory to disk. I converted <a href="https://forum.qt.io/topic/101018/qsqldatabase-sqlite-how-to-load-into-memory-and-save-memory-to-disk/18" rel="nofollow noreferrer">this code</a> from C++ to Python. It successfully exports the database from memory to disk:</p>
<pre><code>import sys
from PySide6.QtSql import QSqlDatabase, QSqlQuery
# Create the connection
con_mem = QSqlDatabase.addDatabase("QSQLITE", 'con_mem')
con_mem.setDatabaseName("file::memory:")
con_mem.setConnectOptions('QSQLITE_OPEN_URI;QSQLITE_ENABLE_SHARED_CACHE')
con_disk = QSqlDatabase.addDatabase("QSQLITE", 'con_disk')
con_disk.setDatabaseName("db.sqlite")
# Open the connection
if not (con_disk.open() and con_mem.open()):
print("Databases open error")
sys.exit(1)
db_mem = QSqlDatabase.database('con_mem')
db_disk = QSqlDatabase.database('con_disk')
def clone_db(scr, des, des_string):
print('Before creating table')
print('scr: ', scr.tables())
print('des: ', des.tables())
# Create a test table at scr
createTableQuery = QSqlQuery(scr)
createTableQuery.exec(
"""
CREATE TABLE contacts (
id INTEGER PRIMARY KEY AUTOINCREMENT UNIQUE NOT NULL,
name VARCHAR(40) NOT NULL,
job VARCHAR(50),
email VARCHAR(40) NOT NULL
)
"""
)
print('After creating table')
print('scr: ', scr.tables())
print('des: ', des.tables())
VacumQuery = QSqlQuery(scr)
VacumQuery.exec(f'VACUUM main INTO "{des_string}"')
print('After vacum')
print('scr: ', scr.tables())
print('des: ', des.tables())
clone_db(db_mem, db_disk, 'db.sqlite')
</code></pre>
<p>Output:</p>
<pre><code>Before creating table
scr: []
des: []
After creating table
scr: ['contacts', 'sqlite_sequence']
des: []
After vacum
scr: ['contacts', 'sqlite_sequence']
des: ['contacts', 'sqlite_sequence']
</code></pre>
<p>But when I swap <code>scr</code> and <code>des</code> this function doesn't work:</p>
<pre><code>clone_db(db_disk, db_mem, 'file::memory:')
</code></pre>
<p>Output:</p>
<pre><code>Before creating table
scr: []
des: []
After creating table
scr: ['contacts', 'sqlite_sequence']
des: []
After vacum
scr: ['contacts', 'sqlite_sequence']
des: []
</code></pre>
<p>It does not clone the disk database to an in-memory one. What is the problem?</p>
<p>I changed <code>VacumQuery</code>:</p>
<pre><code>VacumQuery = QSqlQuery(scr)
if not VacumQuery.exec(f'VACUUM main INTO "{des_string}"'):
print(VacumQuery.lastError().text())
</code></pre>
<p>Result:</p>
<blockquote>
<p>unable to open database: file::memory: Unable to fetch row</p>
</blockquote>
<p>Result of different combinations of in-memory database setup and vacuum statement:</p>
<pre><code># Test 1: No error. Database not imported. A 'file' is created in current directory.
con_mem.setDatabaseName('file:db?mode=memory&cache=shared')
con_mem.setConnectOptions('QSQLITE_OPEN_URI;QSQLITE_ENABLE_SHARED_CACHE')
......
clone_db(db_disk, db_mem, 'file:db?mode=memory&cache=shared')
</code></pre>
<pre><code># Test 2: No error. Database not imported.
con_mem.setDatabaseName(':memory:')
con_mem.setConnectOptions('QSQLITE_OPEN_URI;QSQLITE_ENABLE_SHARED_CACHE')
......
clone_db(db_disk, db_mem, ':memory:')
</code></pre>
<pre><code># Test 3: No error. Database not imported.
con_mem.setDatabaseName(':memory:')
con_mem.setConnectOptions('QSQLITE_ENABLE_SHARED_CACHE')
......
clone_db(db_disk, db_mem, ':memory:')
</code></pre>
| <python><sqlite><pyside6><qsqldatabase> | 2023-07-28 18:38:04 | 1 | 1,354 | tom |
76,790,236 | 6,087,667 | Groupby one year interval with the start as first datapoint of the series | <p>How can I group a time series with 1 year intervals such that the start of the first interval is the first datapoint and the new series is labeled by that starting point?</p>
<p>E.g. here I have a series that starts at <code>2000-01-11</code>, so the first interval should have all datapoints between <code>2000-01-11</code> and <code>2001-01-10</code>, second <code>2001-01-11</code> and <code>2002-01-10</code> etc; the labels of the new series 2000-01-11, 2001-01-11 etc?</p>
<pre><code>import pandas as pd
import numpy as np
i = pd.date_range('2000-01-11', '2022-02-10', freq='D')
t = pd.Series(index=i, data=np.random.randint(0,100,len(i)))
print(t)
t.groupby(pd.Grouper(freq='1Y', origin='start', label='left')).mean()
</code></pre>
<p>This code seems to bin at calendar-year boundaries and label each bin by the end of the year.</p>
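<p>For what it's worth, a sketch of one way to get anniversary-based buckets labelled by their start date (the bucketing logic here is hand-rolled, not a pandas option):</p>

```python
import numpy as np
import pandas as pd

i = pd.date_range('2000-01-11', '2003-02-10', freq='D')
t = pd.Series(index=i, data=np.random.randint(0, 100, len(i)))

start = t.index[0]
# whole years elapsed since the first datapoint, respecting month/day
elapsed = [
    (d.year - start.year) - ((d.month, d.day) < (start.month, start.day))
    for d in t.index
]
# label every row with the anniversary date that starts its bucket
labels = pd.DatetimeIndex([start + pd.DateOffset(years=n) for n in elapsed])
result = t.groupby(labels).mean()
print(result.index)  # buckets start 2000-01-11, 2001-01-11, ...
```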
| <python><pandas><datetime><group-by><intervals> | 2023-07-28 18:24:08 | 2 | 571 | guyguyguy12345 |
76,790,233 | 3,934,038 | How to make Pylance and Pydantic understand each other when instantiating BaseModel class from external data? | <p>I am trying to instantiate <code>user = User(**external_data)</code>, where <code>User</code> is a Pydantic <code>BaseModel</code>, but I am getting an error from Pylance, which doesn't like my <code>external_data</code> dictionary and is unable to figure out that the data in the dict is actually correct (see first screenshot).</p>
<p><a href="https://i.sstatic.net/kd0wE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kd0wE.png" alt="Type error" /></a></p>
<p>I found a workaround by creating <code>TypedDict</code> with the same declaration as for <code>User(BaseModel)</code>. Now Pylance is happy, but I am not, because I need to repeat myself (see second screenshot).</p>
<p><a href="https://i.sstatic.net/0nbHi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0nbHi.png" alt="Repetition" /></a></p>
<p>Any ideas on how to make Pylance and Pydantic understand each other without repetition?</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
from pydantic import BaseModel
from typing import TypedDict
class UserDict(TypedDict, total=False):
id: int
name: str
signup_ts: datetime
friends: list[int]
class User(BaseModel):
id: int
name: str = "John Doe"
signup_ts: datetime | None = None
friends: list[int] = []
external_data: UserDict = {
'id': 123,
'name': 'Vlad',
'signup_ts': datetime.now(),
'friends': [1, 2, 3],
}
user = User(**external_data)
print(user)
print(user.id)
</code></pre>
<p>Pylance error for the case with no <code>UserDict</code>:</p>
<pre><code>Argument of type "int | str | datetime | list[int]" cannot be assigned to parameter "id" of type "int" in function "__init__"
Type "int | str | datetime | list[int]" cannot be assigned to type "int"
"datetime" is incompatible with "int"PylancereportGeneralTypeIssues
Argument of type "int | str | datetime | list[int]" cannot be assigned to parameter "name" of type "str" in function "__init__"
Type "int | str | datetime | list[int]" cannot be assigned to type "str"
"datetime" is incompatible with "str"PylancereportGeneralTypeIssues
Argument of type "int | str | datetime | list[int]" cannot be assigned to parameter "signup_ts" of type "datetime | None" in function "__init__"PylancereportGeneralTypeIssues
Argument of type "int | str | datetime | list[int]" cannot be assigned to parameter "friends" of type "list[int]" in function "__init__"
Type "int | str | datetime | list[int]" cannot be assigned to type "list[int]"
"datetime" is incompatible with "list[int]"PylancereportGeneralTypeIssues
(variable) external_data: dict[str, int | str | datetime | list[int]]
</code></pre>
| <python><python-typing><pydantic><pyright> | 2023-07-28 18:23:32 | 2 | 2,056 | Vlad Ankudinov |
76,790,222 | 4,701,426 | Can't find and click on element on webpage | <p>I am trying to get Selenium to click on "see more hours" and grab the hours shown on the next page on Google Maps. My issue is performing the first step: clicking.</p>
<p><a href="https://i.sstatic.net/mjLkS.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mjLkS.jpg" alt="enter image description here" /></a></p>
<p>What I have tried:</p>
<pre><code>from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup
import time
url = 'https://www.google.com/maps/place/211+Clover+Lane/@38.2567237,-85.6849465,13z/data=!4m10!1m2!2m1!1srestuarant!3m6!1s0x886974dd922d909f:0x9c4b766c51b27adc!8m2!3d38.2567237!4d-85.6499276!15sCgpyZXN0dWFyYW50WgwiCnJlc3R1YXJhbnSSARNldXJvcGVhbl9yZXN0YXVyYW504AEA!16s%2Fg%2F1tcvcn03?entry=ttu'
options = Options()
driver = webdriver.Chrome(executable_path=ChromeDriverManager(log_level=0).install(), options=options)
driver.get(url)
time.sleep(1)
response_overview = BeautifulSoup(driver.page_source, 'html.parser')
try:
wait = WebDriverWait(driver, 10)
# hours_bt = wait.until(EC.element_to_be_clickable((By.XPATH, "//button[contains(@aria-label, 'See more hours')]")))
hours_bt = driver.find_element_by_xpath("//button[contains(@aria-label, 'Close')]")
print(hours_bt.text)
hours_bt.click()
time.sleep(1)
hours_page = BeautifulSoup(driver.page_source, 'html.parser')
hours = hours_page.find('div', class_='t39EBf GUrTXd')['aria-label'].replace('\u202f', ' ')
except Exception as e1:
print('e1', e1)
try:
hours_bt = wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[11]/div[4]/button')))
hours_bt.click()
time.sleep(1)
hours_page = BeautifulSoup(driver.page_source, 'html.parser')
hours = hours_page.find('div', class_='t39EBf GUrTXd')['aria-label'].replace('\u202f', ' ')
except Exception as e2:
hours = None
print('e2', e2)
pass
</code></pre>
<p>The first try says that the element is not interactable and the second try times out. I'm at my wit's end and would appreciate some help. Versions: selenium==3.14.0,</p>
| <python><selenium-webdriver><beautifulsoup> | 2023-07-28 18:21:16 | 3 | 2,151 | Saeed |
76,790,124 | 14,617,547 | nested strings for random.choice python | <p>I have 4 lists and want to choose a random list and then choose a random item from it. This is my code, but it doesn't work properly. Can you help, please?</p>
<pre><code>import random
w=['*', '&' ,'^', '%' ,'$' ,'#', '@' ,'!']
w1=['w','a','s','r','i','t','e','n','d','c']
w2=['W','A','S','R','I','T','E','N','D','C',]
w3=['1','2']
p=''
j=8
while j:
e=random.choice(['w','w1','w2','w3'])
q=random.choice(f'{f"{e}"}')
p+=q
j-=1
print(p)
</code></pre>
<p>And this is the wrong output, which doesn't select from the lists:</p>
<pre><code>w3www3ww
</code></pre>
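<p>For comparison, storing references to the lists themselves (rather than their names as strings) lets <code>random.choice</code> pick real items; a minimal sketch:</p>

```python
import random

w  = ['*', '&', '^', '%', '$', '#', '@', '!']
w1 = ['w', 'a', 's', 'r', 'i', 't', 'e', 'n', 'd', 'c']
w2 = ['W', 'A', 'S', 'R', 'I', 'T', 'E', 'N', 'D', 'C']
w3 = ['1', '2']

groups = [w, w1, w2, w3]  # the list objects, not the strings 'w', 'w1', ...
p = ''.join(random.choice(random.choice(groups)) for _ in range(8))
print(p)
```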
| <python><random> | 2023-07-28 18:04:34 | 2 | 410 | Beryl Amend |
76,790,080 | 15,140,177 | Why are my PHP and Python files not running on my Mac? | <p>I have installed PHP and Python on my Mac and made the required changes in the <strong>httpd.conf</strong> file as well.</p>
<p>I have restarted the apache web server as well.</p>
<p><strong>Here is my php webpage</strong></p>
<p><a href="https://i.sstatic.net/YB2et.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YB2et.png" alt="enter image description here" /></a></p>
<p><strong>Here is my python webpage</strong></p>
<p><a href="https://i.sstatic.net/WM2nn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WM2nn.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/egF5I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/egF5I.png" alt="enter image description here" /></a></p>
<p>Please find my httpd.conf file <a href="https://drive.google.com/file/d/10iD1xZZcd6VgC66cy2ErH5JHULb_ND3A/view?usp=sharing" rel="nofollow noreferrer">here</a></p>
<p>If anyone can help me understand the issue , I'd appreciate it.</p>
<p>This is to be noted that plain html has no issue. PFB<a href="https://i.sstatic.net/FXfbW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FXfbW.png" alt="enter image description here" /></a></p>
<p>Here is my python version information</p>
<blockquote>
<p>Python 3.11.4 (v3.11.4:d2340ef257, Jun 6 2023, 19:15:51) [Clang
13.0.0 (clang-1300.0.29.30)] on darwin</p>
</blockquote>
<p>Here is my php version information</p>
<blockquote>
<p>PHP 8.2.4 (cli) (built: Mar 16 2023 16:10:27) (NTS) Copyright (c) The
PHP Group Zend Engine v4.2.4, Copyright (c) Zend Technologies
with Zend OPcache v8.2.4, Copyright (c), by Zend Technologies</p>
</blockquote>
<p>Here is my apache server version</p>
<blockquote>
<p>Server version: Apache/2.4.56 (Unix)
Server built: Apr 15 2023 04:26:33</p>
</blockquote>
<p>Here is my macos</p>
<blockquote>
<p>13.4.1 (22F82) Ventura</p>
</blockquote>
<p>FYI:</p>
<ul>
<li>PHP used to work on my machine a couple of months ago; I don't know what happened.</li>
<li>I have installed Visual Studio on my Mac and I'm able to execute Python programs from Visual Studio.</li>
</ul>
| <python><php><macos><apache><cgi> | 2023-07-28 17:58:16 | 1 | 437 | Amogam |
76,790,056 | 2,112,406 | Calculating some constants only when python package is being installed | <p>I have a python module that I'm installing using <code>setup.py</code>. In one of the submodules, say, <code>module_name.submodule</code>, I'm doing a one-time calculation that takes a lot of time. The numbers that are produced won't change, and I will need them for various other calculations. Consequently, every time I load the python module in a script, it takes a long time. Is there any way to calculate and save these numbers only when the python package is being installed?</p>
<p><code>setup.py</code> looks like this:</p>
<pre><code>from setuptools import setup, find_packages
import module_name
setup(
name = 'module_name',
...
packages = find_packages()
)
</code></pre>
<p><code>module_name/submodule.py</code> looks like this:</p>
<pre><code>def func1():
...
CONST_VARS = []
... calculate here (takes a long time)...
</code></pre>
<p>What is the best way to handle this situation? I don't want to write the constant vars into the script by hand (or to a file or something).</p>
| <python><module><package> | 2023-07-28 17:54:30 | 0 | 3,203 | sodiumnitrate |
76,790,026 | 1,473,517 | How to iterate over a list of pairs that fit in gaps in a list | <p>I have an input list, say:</p>
<pre><code>L = [1, 1.3, 2, 2.5]
</code></pre>
<p>I want to iterate over sorted pairs so that each end of the pair fits between two different values in <code>L</code>. In this case an acceptable created list of pairs would be:</p>
<pre><code>[(0.5, 1.2), (0.5, 1.4), (0.5, 2.1), (0.5, 2.6), (1.1, 1.4), (1.1, 2.1), (1.1, 2.6), (1.4, 2.1), (1.4, 2.6), (2.1, 2.6)]
</code></pre>
<p>The first element in a pair is allowed to be less than the smallest value in L and the second element in a pair can be larger than the largest value in L as in the example.</p>
<p>I don't mind what the exact values in the pairs are as long as they are not the same as any value in <code>L</code>.</p>
<p>How can you do this? I would like to iterate over this list of pairs rather than compute the full list and store it.</p>
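<p>A minimal generator sketch of what I mean, using one midpoint per gap (the <code>0.5</code> padding below the minimum and above the maximum is an arbitrary choice):</p>

```python
import itertools

def gap_pairs(L):
    # one representative value per gap: below the min, between each
    # consecutive pair of sorted values, and above the max
    s = sorted(L)
    points = [s[0] - 0.5]
    points += [(a + b) / 2 for a, b in zip(s, s[1:])]
    points += [s[-1] + 0.5]
    # lazily yield pairs whose ends sit in two *different* gaps
    yield from itertools.combinations(points, 2)

pairs = list(gap_pairs([1, 1.3, 2, 2.5]))
print(len(pairs))  # -> 10, matching the example above
```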
| <python><list><iteration> | 2023-07-28 17:48:46 | 3 | 21,513 | Simd |
76,790,016 | 11,170,350 | How to fix text encoding in Python | <p>I have some text which is not in proper UTF-8 encoding, so when the BERT tokenizer tokenizes it, it fails.
Here is an example:</p>
<pre><code>from transformers import AutoTokenizer
s ="๐๐ ๐ฝะฐ๐๐๐ถ๐ะต ๐ถ๐ป๐ฐะพ๐บะต ๐๐ต๐ฎ๐ ๐๐ผ๐'๐ฟ๐ฒ ๐น๐ผ๐ผ๐ธ๐ถ๐ป๐ด ๐ณ๐ผ๐ฟ? \nโก๏ธ๐๐ป๐๐ฒ๐๐๐บ๐ฒ๐ป๐ ๐ฝ๐น๐ฎ๐๐ณ๐ผ๐ฟ๐บ ๐ณ๐ผ๐ฟ ๐ฏ๐ฒ๐ด๐ถ๐ป๐ป๐ฒ๐ฟ๐! \nโก๏ธ ๐๐๐๐ ๐ผ๐ป๐ฒ ๐๐บ๐ฎ๐ฟ๐ ๐ถ๐ป๐๐ฒ๐๐๐บ๐ฒ๐ป๐ ๐ฐ๐ฎ๐ป ๐บ๐ผ๐ฑ๐ถ๐ณ๐ ๐ฒ๐๐ฒ๐ฟ๐๐๐ต๐ถ๐ป๐ด! \n๐๐๐ฒ ๐๐ผ ๐๐ต๐ฒ ๐น๐ฎ๐๐ฒ๐๐ ๐๐ฎ๐ป๐ฐ๐๐ถ๐ผ๐ป๐ ๐ถ๐ป ๐๐ต๐ฒ ๐๐ผ๐ฟ๐น๐ฑ, ๐๐ต๐ถ๐ ๐ถ๐ ๐๐ผ๐๐ฟ ๐ฐ๐ต๐ฎ๐ป๐ฐ๐ฒ ๐ณ๐ผ๐ฟ ๐๐๐ฎ๐ฏ๐ถ๐น๐ถ๐๐ ๐ถ๐ป ๐ณ๐ถ๐ณ๐๐ต ๐บ๐ถ๐ป๐๐๐ฒ๐. ๐๐๐๐ ๐ฟ๐ฒ๐ด๐ถ๐๐๐ฒ๐ฟ ๐ฎ๐ป๐ฑ ๐ฑ๐ถ๐๐ฐ๐ผ๐๐ฒ๐ฟ ๐ผ๐๐ ๐บ๐ผ๐ฟ๐ฒ."
model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
print(tokenizer.tokenize(s))
</code></pre>
<p>Output</p>
<pre><code>['[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', "'", '[UNK]', '[UNK]', '[UNK]', '?', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '!', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '!', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', ',', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '.', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '.']
</code></pre>
<p>How to fix that?</p>
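<p>For reference, styled characters like these are usually Unicode "Mathematical Alphanumeric Symbols", and NFKC compatibility normalization folds them back to plain ASCII that BERT's vocabulary does cover. A minimal sketch with a short hypothetical sample (the original text above did not survive this page's encoding):</p>

```python
import unicodedata

# "What" written with Mathematical Sans-Serif Bold letters
s = "\U0001D5EA\U0001D5F5\U0001D5EE\U0001D601"
fixed = unicodedata.normalize("NFKC", s)
print(fixed)  # -> What
```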
| <python><nlp><huggingface-transformers> | 2023-07-28 17:47:04 | 1 | 2,979 | Talha Anwar |
76,789,925 | 550,621 | rdd.zipWithIndex() throwing IllegalArgumentException on very large data set | <p>I'm running a python notebook in Azure Databricks. I am getting an IllegalArgumentException error when trying to add a line number with rdd.zipWithIndex(). The file is 2.72 GB and 1238951 lines (I think, text editor acts wonky with this big of a file). It ran for over 4 hours before it failed. I'm wondering if we are reaching some sort of size limit since the Exception is IllegalArgumentException. I'd like to know how to prevent this exception, and/or any way to make it faster. I was thinking I may have to break it down into smaller files. Any help is appreciated.</p>
<p>Code snippet</p>
<pre><code>runKey = "cca2e0f0-bec0-408a-a5cb-341d26e8b7e0" # this is new id for every file
filePath = "/mnt/my_file_path/my_file.txt"
rdd = sc.textFile(filePath)
rdd = rdd.zipWithIndex().map(lambda line: "{}{}{}{}{}".format(str(runKey), delimiter, str(line[1]+1), delimiter, line[0]))
</code></pre>
<p>Output from error</p>
<pre><code> File "<command-3893172145851236>", line 26, in OpenFileRDD
rdd = rdd.zipWithIndex().map(lambda line: "{}{}{}{}{}".format(str(runKey), delimiter, str(line[1]+1), delimiter, line[0]))
File "/databricks/spark/python/pyspark/rdd.py", line 2524, in zipWithIndex
nums = self.mapPartitions(lambda it: [sum(1 for i in it)]).collect()
File "/databricks/spark/python/pyspark/rdd.py", line 967, in collect
sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
return_value = get_return_value(
File "/databricks/spark/python/pyspark/sql/utils.py", line 117, in deco
return f(*a, **kw)
File "/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 4 in stage 1234573.0 failed 4 times, most recent failure: Lost task 4.3 in stage 1234573.0 (TID 46064376) (10.0.2.5 executor 5455): java.lang.IllegalArgumentException
at java.nio.CharBuffer.allocate(CharBuffer.java:334)
at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:810)
at org.apache.hadoop.io.Text.decode(Text.java:412)
at org.apache.hadoop.io.Text.decode(Text.java:389)
at org.apache.hadoop.io.Text.toString(Text.java:280)
at org.apache.spark.SparkContext.$anonfun$textFile$2(SparkContext.scala:1065)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:442)
at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:797)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:521)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2241)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:313)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2978)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2925)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2919)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2919)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1357)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1357)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1357)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:3186)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3127)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3115)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:1123)
at org.apache.spark.SparkContext.runJobInternal(SparkContext.scala:2500)
at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1071)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:165)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:125)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:454)
at org.apache.spark.rdd.RDD.collect(RDD.scala:1069)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:260)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.GeneratedMethodAccessor6189.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:295)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.IllegalArgumentException
at java.nio.CharBuffer.allocate(CharBuffer.java:334)
at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:810)
at org.apache.hadoop.io.Text.decode(Text.java:412)
at org.apache.hadoop.io.Text.decode(Text.java:389)
at org.apache.hadoop.io.Text.toString(Text.java:280)
at org.apache.spark.SparkContext.$anonfun$textFile$2(SparkContext.scala:1065)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:442)
at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:797)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:521)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2241)
at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:313)
</code></pre>
| <python><pyspark><databricks><rdd><azure-databricks> | 2023-07-28 17:30:53 | 1 | 361 | zBomb |
76,789,888 | 6,814,713 | Python test to mock/patch to change internal function arguments, while still running function | <p>I'm looking to mock (or patch) a function so that I can replace the arguments it receives. An example of what I want to do:</p>
<pre class="lang-py prettyprint-override"><code># my_module.my_submodule
from some_library import some_module as x
def do_thing(a, b=None):
return a + x.random_number(b)
# my_module.my_other_submodule
from my_module.my_submodule import do_thing
def do_more_complex_thing(a):
# Note: we do not pass 'b' here
return do_thing(a)
# test.py
from my_module.my_other_submodule import do_more_complex_thing
def test_do_more_complex_thing():
# I want to test this function, but I need to make sure that
# when x.random_number(b) is called, it receives a particular argument.
# Note: I do not want to mock the return from x.random_number, only the
# arguments it receives.
assert do_more_complex_thing(1) == 50
</code></pre>
<p>More concretely, I have some code that calls <code>rand(seed)</code> from <code>pyspark.sql.functions</code> at some point (deeply nested beyond the function I am testing), and I need to override the value of the seed to ensure my tests are deterministic. Changing the function signatures to pass the seed through is not an option, so we need to mock this.</p>
<p>I've taken a look through the unittest.mock documentation and other answers on here, and most of them seem to focus on replacing the mocked function with either a different function, or mocking out the return values. In my case, I still want the function to run like it should, but I just need to change the arguments it receives.</p>
<p>I have tried patching the mocked function with a function where I have returned the target function with manually set arguments, but this ends up with recursion errors, so I am clearly doing something wrong. Closer to my real world example:</p>
<pre class="lang-py prettyprint-override"><code># my_module.spark
import pyspark.sql.functions as f
def do_spark_thing():
...
a = 1 # not settable from method signature
f.rand(a)
# test.py
import pyspark.sql.functions as f
from my_module.spark import do_spark_thing
def test_do_more_complex_thing():
def _set_seed(*args, **kwargs):
return f.rand(1)
with patch('my_module.spark.f.rand', _set_seed):
# Gives recursion error
do_spark_thing()
</code></pre>
<p>I am also using <code>pytest</code> as my core test framework, if there are any better options there vs <code>unittest</code>. What is the recommended way forward for this case?</p>
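<p>To make the recursion issue concrete outside of pyspark, here is a minimal self-contained sketch: capture the real function <em>before</em> patching, and have the replacement delegate to that saved reference with the fixed argument. Calling the patched name from inside the replacement is what recurses:</p>

```python
import random
from unittest.mock import patch

def deep(a):
    # stand-in for the nested call whose argument we cannot reach
    return a + random.randint(0, 100)

_real_randint = random.randint  # saved before the patch is applied

def _fixed_randint(*args, **kwargs):
    return _real_randint(0, 0)  # still runs the real function, with a fixed argument

with patch.object(random, 'randint', _fixed_randint):
    result = deep(5)

print(result)  # -> 5
```

<p>The pyspark equivalent would presumably save <code>f.rand</code> first and have the replacement call the saved reference with the chosen seed.</p>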
| <python><pyspark><pytest><python-unittest><python-unittest.mock> | 2023-07-28 17:21:48 | 1 | 2,124 | Brendan |
76,789,849 | 2,983,568 | How to create a heatmap of month/hour combinations count from datetime column? | <p>My dataframe column looks like this:</p>
<pre class="lang-python prettyprint-override"><code>df["created_at"].head()
0 2016-06-01T11:42:22.908Z
1 2016-06-01T11:42:25.111Z
2 2016-06-01T11:42:25.900Z
3 2016-06-01T11:42:26.184Z
4 2016-06-01T11:42:26.350Z
Name: created_at, dtype: object
</code></pre>
<p>I would like a heatmap of this data with hours from 1 to 23 on x-axis and months from 1 to 12 on y-axis. Here's one of my attempts:</p>
<pre class="lang-python prettyprint-override"><code>created_hours = pd.to_datetime(df.created_at).dt.hour
created_months = pd.to_datetime(df.created_at).dt.month
df_new = pd.DataFrame({"Month": created_months, "Hour": created_hours})
grouped = df_new.groupby(["Month", "Hour"])["Hour"].count().to_frame()
grouped.rename(columns={"Hour":"Count"})
res = grouped.pivot_table(index="Month", columns="Hour", values="Count", aggfunc=sum)
# -> ERROR
</code></pre>
<p>I also tried without grouping: creating a Series instead of a DataFrame, using <code>pivot</code> instead of <code>pivot_table</code>, and calling <code>sns.heatmap()</code> on variations of the above, but nothing works. I have looked at many Q&amp;As and tried to adapt the code with no luck, and the code above does not look like the right approach to me. I don't know whether I am supposed to convert this to a DataFrame, group it, or create two separate month/hour columns. How can I get this heatmap displayed?</p>
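<p>For what it's worth, a <code>pd.crosstab</code> of month against hour yields the count matrix in one call, without the <code>groupby</code>/<code>rename</code>/<code>pivot_table</code> chain, and the result can be passed straight to <code>sns.heatmap</code>. A minimal sketch (the sample timestamps are assumptions for illustration):</p>

```python
import pandas as pd

# A few synthetic timestamps standing in for df["created_at"]
df = pd.DataFrame({"created_at": [
    "2016-06-01T11:42:22.908Z",
    "2016-06-01T11:42:25.111Z",
    "2016-07-15T03:10:00.000Z",
]})

ts = pd.to_datetime(df["created_at"])
# crosstab counts every (month, hour) combination in one step,
# producing exactly the matrix that sns.heatmap(counts) expects
counts = pd.crosstab(ts.dt.month, ts.dt.hour)
counts.index.name, counts.columns.name = "Month", "Hour"
print(counts)
```

<p>Months or hours with no observations can be forced onto the axes with <code>counts.reindex(index=range(1, 13), columns=range(24), fill_value=0)</code>.</p>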
| <python><pandas><dataframe><seaborn><heatmap> | 2023-07-28 17:15:11 | 1 | 4,665 | evilmandarine |
76,789,843 | 12,176,250 | Converting multiple dates, using regex and strptime and applying results using lamda apply | <p>I'm mapping date formats using regex, and I want to convert my dates to <code>"%Y-%m-%d"</code></p>
<p>When testing with <code>"%Y-%m-%d %H:%M:%S"</code>, my dates don't convert correctly. Can anyone see what I'm doing wrong? I have no idea how to deal with the unconverted data remaining.</p>
<p>Here is the test code for your reference.</p>
<pre><code>from datetime import datetime
import re

import pandas as pd


def conv_date(dte: str) -> datetime:  # actual return type is datetime
    acceptable_mappings = {
        r"\d{4}-\d{2}-\d{2}": "%Y-%m-%d",
        r"\d{2}-\d{2}-\d{4}": "%d-%m-%Y",
        r"\d{4}/\d{2}/\d{2}": "%Y/%m/%d",
        r"\d{2}/\d{2}/\d{4}": "%d/%m/%Y",
        r"\d{8}": "%d%m%Y",
        r"\d{2}\s\d{2}\s\d{4}": "%d %m %Y",
        r"\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}": "%Y-%m-%d %H:%M:%S",
    }
    for regex in acceptable_mappings:
        if re.match(regex, dte):
            return datetime.strptime(dte, acceptable_mappings[regex])
    raise Exception(f"Expected date in one of supported formats, got {dte}")


def full_list_parse(unclean_list: list) -> list:
    return [conv_date(dte) for dte in unclean_list]


mock_dict = [
    {"name": "dz", "role": "legend", "date": "2023-07-26"},
    {"name": "mc", "role": "sounds like a dj", "date": "26-07-2023"},
    {"name": "xc", "role": "loves xcom", "date": "2023/07/26"},
    {"name": "lz", "role": "likes a to fly", "date": "26/07/2023"},
    {"name": "wc", "role": "has a small bladder", "date": "26072023"},
    {"name": "aa", "role": "warrior of the crystal", "date": "26 07 2023"},
    {"name": "xx", "role": "loves only-fans", "date": "2023-07-26 12:46:21"},
]

df = pd.DataFrame(mock_dict)

if __name__ == "__main__":
    print(df)
    df['date_clean'] = df['date'].apply(lambda x: conv_date(x))
    print(df)
</code></pre>
<p>my results:</p>
<pre><code>  name                    role                 date
0   dz                  legend           2023-07-26
1   mc        sounds like a dj           26-07-2023
2   xc              loves xcom           2023/07/26
3   lz          likes a to fly           26/07/2023
4   wc     has a small bladder             26072023
5   aa  warrior of the crystal           26 07 2023
6   xx         loves only-fans  2023-07-26 12:46:21

ValueError: unconverted data remains: 12:46:21
</code></pre>
<p>my desired results:</p>
<pre><code>  name                    role                 date  date_clean
0   dz                  legend           2023-07-26  2023-07-26
1   mc        sounds like a dj           26-07-2023  2023-07-26
2   xc              loves xcom           2023/07/26  2023-07-26
3   lz          likes a to fly           26/07/2023  2023-07-26
4   wc     has a small bladder             26072023  2023-07-26
5   aa  warrior of the crystal           26 07 2023  2023-07-26
6   xx         loves only-fans  2023-07-26 12:46:21  2023-07-26
</code></pre>
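<p>One likely culprit: <code>re.match</code> only anchors at the <em>start</em> of the string, so the bare-date pattern <code>\d{4}-\d{2}-\d{2}</code> already matches the prefix of <code>"2023-07-26 12:46:21"</code> and selects <code>"%Y-%m-%d"</code>, which is exactly why <code>strptime</code> reports unconverted data. A sketch of the behaviour with <code>re.fullmatch</code>, trimmed to the two relevant patterns (the <code>ValueError</code> message is an assumption for illustration):</p>

```python
import re
from datetime import datetime

acceptable_mappings = {
    r"\d{4}-\d{2}-\d{2}": "%Y-%m-%d",
    r"\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}": "%Y-%m-%d %H:%M:%S",
}

def conv_date(dte: str) -> datetime:
    for regex, fmt in acceptable_mappings.items():
        # fullmatch must consume the WHOLE string, so the bare-date
        # pattern can no longer win on a date-plus-time string
        if re.fullmatch(regex, dte):
            return datetime.strptime(dte, fmt)
    raise ValueError(f"Expected date in one of supported formats, got {dte}")

print(conv_date("2023-07-26 12:46:21"))  # datetime(2023, 7, 26, 12, 46, 21)
print(conv_date("2023-07-26"))           # datetime(2023, 7, 26, 0, 0)
```

<p>To get the date-only column in the desired output, the parsed value can then be truncated with <code>.date()</code> or re-formatted with <code>.strftime("%Y-%m-%d")</code>.</p>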
| <python><python-3.x><regex><dataframe><lambda> | 2023-07-28 17:13:13 | 0 | 346 | Mizanur Choudhury |
76,789,581 | 12,176,250 | strptime not formatting my date correctly | <p>I was wondering if anyone smarter than me can see what I'm doing wrong.</p>
<p>I'm mapping date formats using regex, and I want to convert my dates to <code>"%Y-%m-%d"</code></p>
<p>When testing with <code>"%Y-%m-%d %H:%M:%S"</code>, my dates don't convert correctly. Can anyone see what I'm doing wrong? I have no idea how to deal with the unconverted data remaining.</p>
<p>Here is the test code for your reference.</p>
<pre><code>from datetime import datetime
import re

import pandas as pd


def conv_date(dte: str) -> datetime:  # actual return type is datetime
    acceptable_mappings = {
        r"\d{4}-\d{2}-\d{2}": "%Y-%m-%d",
        r"\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}": "%Y-%m-%d %H:%M:%S",
    }
    for regex in acceptable_mappings:
        if re.match(regex, dte):
            return datetime.strptime(dte, acceptable_mappings[regex])
    raise Exception(f"Expected date in one of supported formats, got {dte}")


def full_list_parse(unclean_list: list) -> list:
    return [conv_date(dte) for dte in unclean_list]


mock_dict = [
    {"name": "xx", "role": "loves only-fans", "date": "2023-07-26 12:46:21"},
    {"name": "dz", "role": "legend", "date": "2023-07-26"},
]

df = pd.DataFrame(mock_dict)

if __name__ == "__main__":
    print(df)
    df['date_clean'] = df['date'].apply(lambda x: conv_date(x))
    print(df)
</code></pre>
<p>my results:</p>
<pre><code>  name             role                 date
0   xx  loves only-fans  2023-07-26 12:46:21
1   dz           legend           2023-07-26

ValueError: unconverted data remains: 12:46:21
</code></pre>
<p>my desired results:</p>
<pre><code>  name             role        date
0   xx  loves only-fans  2023-07-26
1   dz           legend  2023-07-26
</code></pre>
| <python><python-3.x><regex> | 2023-07-28 16:30:33 | 1 | 346 | Mizanur Choudhury |