QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,156,086 | 22,113,674 | Python JSON - Remove 'value' key and its value if only 'key' is present | <p>I have some JSON data. I want to remove an entry entirely when only 'key' is present (with no 'value').</p>
<p>I want to remove 'Job' and 'Mob No' and keep the rest as they are.</p>
<p>Actual output:</p>
<pre class="lang-json prettyprint-override"><code>{
  "Name": "Nelson",
  "Country": "India",
  "Main": [
    {
      "Property": "House",
      "details": [
        {
          "key": "No",
          "value": "1"
        },
        {
          "key": "Place",
          "value": "KAR"
        },
        {
          "key": "Job"
        },
        {
          "key": "Mob No"
        }
      ]
    }
  ]
}
</code></pre>
<p>Expected output:</p>
<pre class="lang-json prettyprint-override"><code>{
  "Name": "Nelson",
  "Country": "India",
  "Main": [
    {
      "Property": "House",
      "details": [
        {
          "key": "No",
          "value": "1"
        },
        {
          "key": "Place",
          "value": "KAR"
        }
      ]
    }
  ]
}
</code></pre>
<p>I tried using the below function:</p>
<pre><code>def remove_empty_values(data):
    if isinstance(data, dict):
        for key, value in list(data.items()):
            if isinstance(value, list) or isinstance(value, dict):
                remove_empty_values(value)
            elif key == "attributes":
                for detail in value:
                    if isinstance(detail, dict) and "key" in detail and "value" in detail:
                        if "key" in detail and "value" in detail:
                            if not detail["value"]:
                                del detail["value"]
            elif key == "value" and not value:
                del data[key]
</code></pre>
<p>I call the function above on the result of <code>json.loads</code>, but the output is <code>None</code>.</p>
<p>I am not sure what is wrong.</p>
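<p>For what it's worth: a function that only mutates its argument in place and has no <code>return</code> statement will print <code>None</code>, which matches the symptom above. A minimal sketch (not the asker's code) of the pruning the expected output describes, dropping any detail entry whose only key is <code>"key"</code>:</p>

```python
import json

def prune_key_only(node):
    # Recursively drop list items that are dicts containing ONLY "key"
    # (i.e. no "value"), matching the expected output in the question.
    if isinstance(node, dict):
        for v in node.values():
            prune_key_only(v)
    elif isinstance(node, list):
        node[:] = [item for item in node
                   if not (isinstance(item, dict) and set(item) == {"key"})]
        for item in node:
            prune_key_only(item)
    return node  # returning the node avoids the printed None
```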
| <python><json> | 2023-09-22 08:44:53 | 2 | 339 | mpr |
77,156,063 | 958,967 | dependency_injector Python for decoupled projects | <p>I have 3 main modules</p>
<ul>
<li>application (knows about domain)</li>
<li>domain (no dependencies)</li>
<li>persistence (knows about application)</li>
<li>main.py (knows about everything)</li>
</ul>
<p>I want to use mediatr and dependency_injector while keeping the inter module dependencies as above.</p>
<p>I'm running into the following error during injection:
<code>AttributeError: 'Provide' object has no attribute 'get_person'</code></p>
<p>In my domain layer, I have a simple Person model.</p>
<pre class="lang-py prettyprint-override"><code># ./domain/models.py
from dataclasses import dataclass

@dataclass
class Person:
    id: str
    name: str
    gender: str
</code></pre>
<p>In my application layer, I have 3 files,</p>
<pre class="lang-py prettyprint-override"><code># ./application/interfaces/database.py
from abc import ABC, abstractmethod
from domain.models import Person

class IDBConnection(ABC):
    # This has to be implemented by persistence layer
    @abstractmethod
    def get_person(self, id: str) -> Person:
        pass
</code></pre>
<pre class="lang-py prettyprint-override"><code># ./application/containers.py
from dependency_injector import containers, providers

class ApplicationContainer(containers.DeclarativeContainer):
    dbconnection = providers.Dependency()
</code></pre>
<pre class="lang-py prettyprint-override"><code># ./application/personcontext/queries/person.py
from domain.models import Person
from mediatr import Mediator
from application.interfaces.database import IDBConnection
from dependency_injector.wiring import inject, Provide
from application.containers import ApplicationContainer

class PersonQuery:
    def __init__(self, id: str) -> None:
        self.id = id

@inject
@Mediator.handler
class PersonQueryHandler:
    def __init__(self, db: IDBConnection = Provide[ApplicationContainer.dbconnection]) -> None:
        self.db: IDBConnection = db

    def handle(self, request: PersonQuery):
        return self.db.get_person(request.id)
</code></pre>
<p>In my persistence layer,</p>
<pre class="lang-py prettyprint-override"><code># ./persistence/containers.py
from dependency_injector import containers, providers
from application.interfaces.database import IDBConnection
from persistence.storage import DBConnection
from application.containers import ApplicationContainer

class PersistenceContainer(containers.DeclarativeContainer):
    database = providers.Singleton(IDBConnection, DBConnection())
    application_container = providers.Container(
        ApplicationContainer,
        dbconnection=database
    )
</code></pre>
<pre class="lang-py prettyprint-override"><code># ./persistence/storage.py
from application.interfaces.database import IDBConnection
from domain.models import Person

class DBConnection(IDBConnection):
    def get_person(self, id: int) -> Person:
        return Person(id=id, name="Razali", gender="Male")
</code></pre>
<p>Finally, in my main:</p>
<pre class="lang-py prettyprint-override"><code>from mediatr import Mediator
from application.personcontext.queries.person import PersonQuery
from dependency_injector import containers, providers
from dependency_injector.wiring import inject, Provide
from application.interfaces.database import IDBConnection
from persistence.storage import DBConnection
from persistence.containers import PersistenceContainer
from application.containers import ApplicationContainer

class Container(containers.DeclarativeContainer):
    mediator = providers.Singleton(Mediator)
    application_container = providers.Container(ApplicationContainer)
    persistence_container = providers.Container(PersistenceContainer)

@inject
def main(mediator: Mediator = Provide[Container.mediator]):
    response = mediator.send(PersonQuery(1))
    print(response)

if __name__ == '__main__':
    container = Container()
    container.init_resources()
    container.wire(modules=[__name__])
    main()
</code></pre>
| <python><dependency-injection> | 2023-09-22 08:41:39 | 1 | 885 | RStyle |
77,156,051 | 6,290,062 | Trouble reading json-like input with python library types | <p>I have some data read from a database (whose structure I cannot change) where a single row might be something like:</p>
<pre><code>'[{
    "field1": "test",
    "numericfield2": Decimal("1.8333333"),
    "datetimefield3": datetime.datetime(2023, 9, 21, 12, 00)
}]'
</code></pre>
<p>Obviously, to make this intelligible in Python I could do something like:</p>
<pre><code>import json
json.loads('[{"field1": "test"}]')
</code></pre>
<p>however, when trying that on the whole payload I get a <code>JSONDecodeError: Expecting value: line [i] column [j] (char [j-1])</code> (where i and j refer to the positions of the problematic fields).</p>
<p>The error is caused by the string data being of python (datetime and decimal) types when evaluated. Is there some way to get the json decoder to use something like:</p>
<pre><code>import decimal
float(decimal.Decimal("1.8333333"))
# 1.83333
import datetime
datetime.datetime(2023, 9, 21, 12, 00).strftime("%Y-%m-%d %H:%M")
# '2023-09-21 12:00'
</code></pre>
<p>when attempting to read in the data? I.e. convert all fields of <code>Decimal</code> type to <code>float</code> and all fields of <code>datetime</code> type to a datetime string. I know a little about using <code>object_hook</code> with the <code>json.loads</code> method and it seems like the right approach, but I haven't been able to quite get it to work for this problem.</p>
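<p>One hedged approach, safe only if the database contents are trusted (it uses <code>eval</code>): the payload is a Python repr rather than JSON, so <code>json.loads</code> and <code>object_hook</code> never get a chance. Evaluating the string with a namespace that provides exactly the two constructors it uses, then normalising the values recursively, is a sketch of one workaround:</p>

```python
import datetime
import decimal

def parse_pythonish(row):
    # The row is a Python repr, not JSON: Decimal(...) and
    # datetime.datetime(...) are constructor calls json.loads can't parse.
    # eval with a locked-down namespace supplies just those two names.
    # CAUTION: only do this with trusted input.
    namespace = {"__builtins__": {}, "Decimal": decimal.Decimal, "datetime": datetime}
    return normalise(eval(row, namespace))

def normalise(obj):
    # Convert Decimal -> float and datetime -> formatted string, recursively.
    if isinstance(obj, decimal.Decimal):
        return float(obj)
    if isinstance(obj, datetime.datetime):
        return obj.strftime("%Y-%m-%d %H:%M")
    if isinstance(obj, list):
        return [normalise(v) for v in obj]
    if isinstance(obj, dict):
        return {k: normalise(v) for k, v in obj.items()}
    return obj
```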
| <python><json><decode> | 2023-09-22 08:39:48 | 1 | 917 | Robert Hickman |
77,155,902 | 10,767,925 | check updates on each subfolder of subfolders dcmp python | <p>I am trying to build a back-up script to back up my entire PC to an external SSD (I have lost all my data 3 times by now).</p>
<p>It works great for files and top-level folders, but so far I am not able to check whether there were any updates in the subfolders.</p>
<p>The logic is this:
We have 1 source folder and 1 destination folder. If we have a new folder or a new file created in the source folder, copy it to the destination folder. If we have a new or modified file or folder inside any subfolder, copy what is new or modified to the same path in the destination folder.</p>
<p>Example:</p>
<pre><code>dir_src = r"C:\Users\UNIX\Desktop\backup\x"
dir_dst = r"C:\Users\UNIX\Desktop\backup\y"
</code></pre>
<p>This works perfectly if I have a new or modified file or folder in the source folder.</p>
<p>But here is the tricky part: if I have a new or modified file or folder in C:\Users\UNIX\Desktop\backup\x\w\a or C:\Users\UNIX\Desktop\backup\x\w\a\new.txt, the code won't catch it; it is stuck at the top level of the hierarchy. I do not want to copy the entire folder (100 GB+) for a modified file or folder of maybe 7 KB, because that would take more time than needed.</p>
<p>Attaching the entire code below:</p>
<pre><code>import os, shutil
import os.path, time
import filecmp
from filecmp import dircmp

fob = open(r"C:\Users\UNIX\Desktop\backup\log.txt", "a")
dir_src = r"C:\Users\UNIX\Desktop\backup\x"
dir_dst = r"C:\Users\UNIX\Desktop\backup\y"
left_folder = []
right_folder = []

def start_log():
    fob.write("\n")
    fob.write("===============")
    fob.write("Started at: %s" % time.strftime("%c"))
    fob.write("\n")

def write_log():
    fob.write("File name: %s" % os.path.basename(pathname))
    fob.write(" Last modified date: %s" % time.ctime(os.path.getmtime(pathname)))
    fob.write(" Copied on: %s" % time.strftime("%c"))
    fob.write("\n")

def print_diff_files(dcmp):
    global name, left_folder, right_folder, left_file, right_file
    for name in dcmp.diff_files:
        left_file = f"{dcmp.left}\{name}"
        right_file = f"{dcmp.right}\{name}"
        left_folder.append(left_file)
        right_folder.append(right_file)
    for sub_dcmp in dcmp.subdirs.values():
        print_diff_files(sub_dcmp)

start_log()
for w in os.listdir(dir_src):
    name = ""
    pathname = os.path.join(dir_src, w)
    try:
        spathname = os.path.join(dir_dst, w)
        if os.path.isdir(spathname):
            dcmp = dircmp(pathname, spathname)
            print_diff_files(dcmp)
            if name != "":
                for file in left_folder:
                    shutil.copy2(left_file, right_file)
                    write_log()
                print("New folder version added: " + str(pathname))
                print(f"Modified files: {right_file}")
        elif not filecmp.cmp(pathname, spathname):
            shutil.copy(pathname, dir_dst)
            write_log()
            print("New file version added: " + str(spathname))
    except:
        if os.path.isdir(pathname):
            shutil.copytree(pathname, spathname, dirs_exist_ok=True)
            write_log()
            print("New folder added: " + str(spathname))
        else:
            spathname = os.path.join(dir_dst, w)
            shutil.copy(pathname, dir_dst)
            write_log()
            print("New file added: " + str(spathname))

fob.close()
print("Done")
</code></pre>
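<p>A hedged sketch of an alternative that recurses all the way down: walk the source tree with <code>os.walk</code> and copy only files that are missing or have changed, at any depth (logging omitted for brevity):</p>

```python
import filecmp
import os
import shutil

def sync_tree(dir_src, dir_dst):
    # Walk EVERY level of the source tree, not just the top one.
    for root, dirs, files in os.walk(dir_src):
        rel = os.path.relpath(root, dir_src)
        target = dir_dst if rel == "." else os.path.join(dir_dst, rel)
        os.makedirs(target, exist_ok=True)  # creates new subfolders as needed
        for name in files:
            src_file = os.path.join(root, name)
            dst_file = os.path.join(target, name)
            # Copy only files that are missing or changed (size/mtime check),
            # so a 7 KB change never re-copies a 100 GB folder.
            if not os.path.exists(dst_file) or not filecmp.cmp(src_file, dst_file, shallow=True):
                shutil.copy2(src_file, dst_file)
```

<p><code>shutil.copy2</code> preserves the modification time, which is what makes the shallow <code>filecmp.cmp</code> check reliable on the next run.</p>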
| <python><backup><shutil><os.path> | 2023-09-22 08:16:16 | 0 | 302 | Catalin |
77,155,783 | 3,336,412 | Alembic doesn't detect unique constraints | <p>I'm using tiangolo sqlmodel and in my database-model I wanted to add unique-constraints, so one is kind of a business-key and the other is for mapping tables - both aren't recognized by alembic.</p>
<p>The one for the business-key is in a base-table (because all non-mapping-tables inherit from this class) and looks like this:</p>
<pre class="lang-py prettyprint-override"><code>class BusinessKeyModel(PydanticBase):
    businessKey: Optional[str] = Field(
        alias="businessKey",
        max_length=255,
        description=DescriptionConstants.BUSINESS_KEY,
        nullable=True,
        unique=True  # <-- added this before generating new migration
    )

class BaseTableModel(SQLModel, BusinessKeyModel):
    ...

class User(GUIDModel, BaseTableModel):
    guid: Optional[UUID] = Field(
        ...,
        primary_key=True,
        description=DescriptionConstants.GUID,
        sa_column=Column(
            "guid",
            UNIQUEIDENTIFIER,
            nullable=False,
            primary_key=True,
            server_default=text("newsequentialid()"),
        ),
    )
</code></pre>
<p>So when I add <code>unique=True</code> to <code>BusinessKeyModel.businessKey</code> and try to generate a new migration with alembic (with autogenerate), it doesn't detect the changes.</p>
<p>The same goes for my mapping-tables; after I added <code>UniqueConstraint</code> to my <code>__table_args__</code>, I think it should detect the changes:</p>
<pre class="lang-py prettyprint-override"><code>class UserRoleMappingBase(BaseMappingModel, GUIDModel):
    userId: UUID
    roleId: UUID

class UserRoleMapping(UserRoleMappingBase, table=True):
    __table_args__ = (
        UniqueConstraint("userId", "roleId"),  # <-- added this before generating new migration
        {"schema": "dbx_v2"}
    )
</code></pre>
| <python><sqlalchemy><alembic><sqlmodel> | 2023-09-22 07:57:29 | 1 | 5,974 | Matthias Burger |
77,155,768 | 6,333,797 | How to demonstrate numpy gradient usage with a given 2-d function? | <p>I'm quite confused by numpy gradient usage on N-D arrays. I wrote some code snippets to understand its usage on a 1-D array, as follows:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
# 1-d
t = np.linspace(0, 0.04, 101)
w = 2*np.pi*50
y = np.sin(w*t)
dy = w * np.cos(w*t)
dy_numeric = np.gradient(y, 0.0004, edge_order=1)
plt.plot(t, y, label='f')
plt.plot(t, dy, label='df')
plt.plot(t, dy_numeric, label='df_num')
plt.legend()
plt.show()
</code></pre>
<p>The example above shows how it works and its internal relationships. But I can't do the same with a given 2-d function, as in the following:</p>
<pre class="lang-py prettyprint-override"><code>x = np.linspace(-10, 10, 201)
y = np.linspace(-10, 10, 201)
f = x ** 2 + y ** 2
df_dx = 2 * x
df_dy = 2 * y
# the exact gradient expression of this 2-d function is [2*x, 2*y]
# but how can I use np.gradient to compute its numeric values?
# df_vals = np.gradient(?,?)
</code></pre>
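<p>A sketch of what I believe the intended usage is: for a function of two variables, <code>np.gradient</code> needs <code>f</code> sampled on a 2-d grid (via <code>meshgrid</code>), not on two 1-d vectors. It then returns one array per axis:</p>

```python
import numpy as np

x = np.linspace(-10, 10, 201)
y = np.linspace(-10, 10, 201)
X, Y = np.meshgrid(x, y)      # 2-d sample grid
F = X**2 + Y**2               # f evaluated on the grid

# np.gradient returns one array per axis: axis 0 varies y (rows),
# axis 1 varies x (columns). Passing the coordinate vectors gives
# the spacing; edge_order=2 keeps the boundary rows accurate.
dF_dy, dF_dx = np.gradient(F, y, x, edge_order=2)
```

<p>For this quadratic, the central differences reproduce the exact gradient <code>[2*x, 2*y]</code> to floating-point precision.</p>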
| <python><numpy><math> | 2023-09-22 07:55:24 | 1 | 308 | Coneain |
77,155,538 | 1,866,775 | Why does the AUROC of the validation set differ between training and subsequent evaluation? | <p>Given the following minimal example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.python.keras.layers import Dense
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.optimizer_v2.adam import Adam
negatives = np.random.beta(0.6, 1.2, 100)
positives = np.random.beta(1.7, 0.9, 100)
x = np.concatenate((negatives, positives))
y = np.concatenate((np.full(negatives.size, 0), np.full(positives.size, 1)))
model = Sequential()
model.add(Dense(1, input_shape=(1,), activation="sigmoid"))
model.compile(optimizer=Adam(learning_rate=0.1),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(curve="ROC", name="AUROC")])

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=42)
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=100, verbose=True, batch_size=10000)
auroc_val = tf.keras.metrics.AUC(curve="ROC", name="AUROC")(y_test, model.predict(x_test).flatten()).numpy()
print(f"{auroc_val=}")
</code></pre>
<p>I'd expect <code>auroc_val</code> to be equal to <code>val_AUROC</code> (from the last training epoch). However, in reality, it is not.</p>
<p>Output of one run:</p>
<pre><code>...
Epoch 98/100
5/5 [==============================] - 0s 5ms/step - loss: 0.5247 - AUROC: 0.8144 - val_loss: 0.5277 - val_AUROC: 0.8143
Epoch 99/100
5/5 [==============================] - 0s 5ms/step - loss: 0.5242 - AUROC: 0.8143 - val_loss: 0.5290 - val_AUROC: 0.8144
Epoch 100/100
5/5 [==============================] - 0s 5ms/step - loss: 0.5250 - AUROC: 0.8145 - val_loss: 0.5264 - val_AUROC: 0.8145
auroc_val=0.83203566
</code></pre>
<p>Output of another run:</p>
<pre><code>...
Epoch 100/100
5/5 [==============================] - 0s 5ms/step - loss: 0.4771 - AUROC: 0.8055 - val_loss: 0.6642 - val_AUROC: 0.8054
auroc_val=0.7224694
</code></pre>
<p>Why is this?</p>
| <python><tensorflow><machine-learning><keras> | 2023-09-22 07:15:00 | 1 | 11,227 | Tobias Hermann |
77,155,405 | 6,653,602 | Why is Postman returning empty responses for SSE requests? | <p>I am trying to create a Flask endpoint that will stream the response from the OpenAI ChatGPT API:</p>
<pre><code>def get_report_stream(msg):
    completion = openai.ChatCompletion.create(engine="gpt-4", messages=[
        {"role": "system", "content": prompt()},
        {"role": "user", "content": str(msg['Val'])},
    ], stream=True)
    for line in completion:
        if 'content' in line['choices'][0]['delta']:
            yield line['choices'][0]['delta']['content']

@app.route("/load", methods=["POST"])
def load_bs_into_df():
    stream = request.form.get('stream', default=False, type=bool)
    file_path = 'input.xlsx'
    input_data = load_data_from_excel(file_path)
    if stream:
        return Response(stream_with_context(get_report_stream(input_data)), mimetype='text/event-stream')
    else:
        report = get_report_whole(input_data)
        return {"df": input_data, "report": report}
</code></pre>
<p>When I run curl command:</p>
<pre><code>curl -X POST -d "stream=true" http://127.0.0.1:5000/load
</code></pre>
<p>It will start streaming the response in the console.</p>
<p>But when I send the request through Postman, it gives some empty responses and that's it:</p>
<p><a href="https://i.sstatic.net/eH1fY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eH1fY.png" alt="enter image description here" /></a></p>
<p>Any idea what is happening?</p>
| <python><postman><openai-api> | 2023-09-22 06:53:00 | 1 | 3,918 | Alex T |
77,155,340 | 2,805,482 | Selenium scroll flickr page to get all the images | <p>Hello, I am trying to get public images from a Flickr group. I am able to parse the HTML and get the image hrefs, but I am struggling to find a way to get all the images that load as the page scrolls down. My code below only returns the hrefs for images on the front page; how can I get all the hrefs after scrolling to the bottom?</p>
<pre><code>from bs4 import BeautifulSoup
import urllib.request
from selenium import webdriver
import time
op = webdriver.ChromeOptions()
op.add_argument('headless')
driver = webdriver.Chrome(options=op)
url = "https://www.flickr.com/groups/allfreepictures/pool/page3041"
driver.get(url=url)
html1 = driver.page_source
soup = BeautifulSoup(html1, 'html.parser')
image_urls = [link['href'] for link in soup.findAll("a", {"class": "overlay"})]
print(image_urls)
</code></pre>
| <python><html><selenium-webdriver><python-requests-html> | 2023-09-22 06:37:54 | 2 | 1,677 | Explorer |
77,155,115 | 4,027,373 | How to type hint subclass method with base class method hint without explicitly annotating it | <p>I want to hint that <code>A</code>'s constructor has a parameter <code>c</code> of type <code>int</code>.</p>
<p>For the original code:</p>
<pre class="lang-py prettyprint-override"><code>class B:
    def __init__(self, c: int):
        pass

class A(B):
    def __init__(self, a: str, **kwargs):
        super().__init__(**kwargs)
</code></pre>
<p>I can do it explicitly like:</p>
<pre class="lang-py prettyprint-override"><code>class B:
    def __init__(self, c: int):
        pass

class A(B):
    def __init__(self, a: str, c: int):
        super().__init__(c=c)
</code></pre>
<p>I am just wondering if we could have something like:</p>
<pre class="lang-py prettyprint-override"><code>class B:
    def __init__(self, c: int):
        pass

class A(B):
    def __init__(self, a: str, **kwargs: GetHintFrom(B.__init__)):
        super().__init__(**kwargs)
</code></pre>
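<p>There is no built-in <code>GetHintFrom</code>; for static type checkers the closest answer is PEP 692's <code>**kwargs: Unpack[SomeTypedDict]</code>. At runtime, though, one can at least make <code>help()</code> and <code>inspect</code> show the inherited parameter by copying it from the base initializer. A hedged sketch of that runtime approach:</p>

```python
import inspect

class B:
    def __init__(self, c: int):
        self.c = c

class A(B):
    def __init__(self, a: str, **kwargs):
        super().__init__(**kwargs)
        self.a = a

# Replace A.__init__'s **kwargs with B.__init__'s parameters (as
# keyword-only) so introspection tools show `c: int` on A as well.
own = [p for p in inspect.signature(A.__init__).parameters.values()
       if p.kind is not inspect.Parameter.VAR_KEYWORD]
inherited = [p.replace(kind=inspect.Parameter.KEYWORD_ONLY)
             for name, p in inspect.signature(B.__init__).parameters.items()
             if name != "self"]
A.__init__.__signature__ = inspect.Signature(own + inherited)
```

<p>This only affects introspection (IDEs that rely on static analysis won't see it), but it keeps the call site untouched.</p>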
| <python><python-typing> | 2023-09-22 05:48:28 | 0 | 938 | Deo Leung |
77,154,965 | 5,736,835 | Compile error for my code in Kattis problem | <p>I have this bit of Python code for the Left Beehind problem, but I keep getting a compile error and no information as to why:</p>
<pre><code>def decision(sweetJars, sourJars):
    if sweetJars == sourJars == 0:
        return
    if sweetJars + sourJars == 13:
        print("Never speak again.")
    elif sweetJars > sourJars:
        print("To the convention.")
    elif sourJars > sweetJars:
        print("Left beehind.")
    else:
        print("Undecided.")

for _ in range(int(input())):
    values = input().split()
    a = int(values[0])
    b = int(values[1])
    decision(a, b)
</code></pre>
<p>I'm using Python 3. What am I getting wrong here?</p>
| <python> | 2023-09-22 04:57:51 | 1 | 1,532 | WDUK |
77,154,893 | 1,330,381 | How to create a pytest fixture that is a list of other fixtures in the test session | <p>I have a use case where I have some code that needs iterate over a family of fixtures and execute a common set of operations upon each fixture value. Right now I'm handling it with statically defined lists that I'm pushing off to plugins but this is somewhat tedious. I'd like to instead have some dynamic means of populating the list of fixtures. Here's a snippet of a static creation of the <code>fix_list</code> fixture where I hardcode the list of other fixtures I know in advance I wish to be part of the list...</p>
<pre class="lang-py prettyprint-override"><code>import pytest

@pytest.fixture
def fix_foo():
    print("fix_foo")
    return "foo"

@pytest.fixture
def fix_fiz_foo():
    print("fix_fiz_foo")
    return "fiz_foo"

@pytest.fixture
def fix_other():
    print("fix_other")
    return "other"

@pytest.fixture
def fix_other2():
    print("fix_other2")
    return "other2"

def test_foo(fix_list):
    assert not fix_list

@pytest.fixture
def fix_list(fix_foo, fix_fiz_foo):
    return [fix_foo, fix_fiz_foo]
</code></pre>
<p>How could I populate <code>fix_list</code> dynamically at runtime without this static connection to other fixtures?</p>
<p>Acceptable solutions would be me being able to select by fixture name where I impose a naming convention like the example above where each fixture name ends with <code>"_foo"</code> in its identifier, or I could go by the fixture value type even or some attribute present in its value with some distinct attribute value I could test against.</p>
<hr />
<h2>Failed Attempt:</h2>
<p>I've tried snooping around the fixture manager, but I'm finding that <code>cached_result</code> is always <code>None</code>, because this fixture holds no explicit reference to any other fixture in its function signature. When I statically reference a desired fixture, the <code>cached_result</code> does get populated, but that defeats the purpose: I'm trying to handle variability across different test projects and avoid writing out all the static combinations.</p>
<p>Here's my attempt and the shortcomings I'm hitting a dead end on:</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture(autouse=True)
def fix_list(request):
    print("fix_list")
    foo_fixtures = [fix_name for fix_name in request._fixturemanager._arg2fixturedefs if fix_name.endswith("_foo")]
    for fix_name in foo_fixtures:
        fix_def = request._fixturemanager._arg2fixturedefs[fix_name][0]
        if fix_def.cached_result is None:
            print(f"fix_name = {fix_name}, null cached_result")
        else:
            print(f"fix_name = {fix_name}, cached_result = {fix_def.cached_result}")
</code></pre>
<p>executing this test module yields the following output</p>
<pre><code>$ pytest -s test_proj/test_module.py
============================================== test session starts ============================================
platform linux -- Python 3.8.13, pytest-7.0.1, pluggy-1.2.0
rootdir: /home/USERX/test_proj
plugins: order-1.0.1, dependency-0.5.1, depends-1.0.1, anyio-3.7.1, rerunfailures-10.1, metadata-3.0.0, json-report-1.4.1, pytest_check-1.0.5
collected 1 item
test_proj/test_module.py fix_list
fix_name = fix_fiz_foo, null cached_result
fix_name = fix_foo, null cached_result
.
=============================================
</code></pre>
<p>I can see the two fixtures are filtered by name, but I have no <code>cached_result</code> to produce a list with any meaningful values. I hope I'm not crashing into some pytest design limitation trying to tease out these dynamic lookups. There's a bit of a chicken-and-egg scenario with <code>cached_result</code>; I'm hoping I'm just missing something that would let me access the values without referencing the fixtures statically anywhere.</p>
| <python><pytest><pytest-fixtures> | 2023-09-22 04:31:40 | 1 | 8,444 | jxramos |
77,154,728 | 2,112,406 | How to add extra python-only functions in a pybind11 project built with scikit | <p>I'm trying to build upon this amazing example: <a href="https://github.com/pybind/scikit_build_example" rel="nofollow noreferrer">https://github.com/pybind/scikit_build_example</a></p>
<p>I basically want to figure out how to add more functions or classes that are purely python. I thought I would need to add them to <code>src/scikit_build_example</code> as separate <code>*.py</code> files. For instance, add a file called <code>cube.py</code>:</p>
<pre><code>from _core import square

def cube(num):
    return num * square(num)
</code></pre>
<p>where I've defined an additional <code>square</code> function on the C++ side, as a part of the <code>_core</code> module.</p>
<p>But, when I do <code>pip install</code> and try to use it, I get</p>
<pre><code>AttributeError: module 'scikit_build_example' has no attribute 'cube'
</code></pre>
<p>How am I supposed to do this?</p>
| <python><c++><pybind11> | 2023-09-22 03:36:20 | 0 | 3,203 | sodiumnitrate |
77,154,717 | 3,649,629 | How to scrape node's text which has <!-- --> inside? | <p>I am writing a web crawler for scraping information from job boards. I completed my first crawler, but it has some more issues to resolve.</p>
<p>For some company titles I get <code>ПАО\xa0</code> as a result. This text is in Cyrillic, but I request and save it in <code>UTF-8</code> encoding. Inspecting the node's attributes and content for these cases showed this text content:</p>
<pre><code>ПАО
<!---->
'company's name'
</code></pre>
<p>This <code><!----></code> prevents the scraper from working well, and I haven't resolved this issue yet. Have you faced this in your scraping experience, and can you suggest the right way to handle it?</p>
<p>I use <code>scrapy</code> to process it.</p>
<p><strong>UPDATE</strong> here is the code (the endpoint is hidden because their <code>/robots.txt</code> denies all crawlers except the key search engines)</p>
<pre><code>import scrapy

class HHSpider(scrapy.Spider):
    name = 'hh-spider'
    start_urls = [
        'https:<ENDPOINT>'
    ]

    def __init__(self):
        self.BASE_URL = 'https://hh.ru'
        self.JOB_SELECTOR = '.vacancy-serp-item-body'
        self.JOB_TITLE_SELECTOR = '.serp-item__title::text'
        self.JOB_COMPANY_SELECTOR = '.bloko-link_kind-tertiary::text'
        self.JOB_COMPANY_URL_SELECTOR = '.bloko-link_kind-tertiary::attr(href)'
        self.JOB_COMPENSATION_SELECTOR = '.bloko-header-section-2::text'
        self.NEXT_SELECTOR = '.bloko-button[data-qa="pager-next"]::attr(href)'

    def parse(self, response):
        for vacancy in response.css(self.JOB_SELECTOR):
            yield {
                'jobTitle': vacancy.css(self.JOB_TITLE_SELECTOR).get(),
                'compensation': vacancy.css(self.JOB_COMPENSATION_SELECTOR).get(),
                'company': vacancy.css(self.JOB_COMPANY_SELECTOR).get(),
                'companyUrl': self.BASE_URL + vacancy.css(self.JOB_COMPANY_URL_SELECTOR).get()
            }

        next_page = response.css(self.NEXT_SELECTOR).get()
        if next_page is not None:
            yield scrapy.Request(response.urljoin(next_page))
</code></pre>
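<p>What appears to be happening: the HTML comment splits the element's text into two text nodes, and <code>::text</code> with <code>.get()</code> returns only the first node (hence <code>ПАО\xa0</code> alone). In scrapy the usual fix is <code>.getall()</code> plus a join, e.g. <code>''.join(vacancy.css(self.JOB_COMPANY_SELECTOR).getall()).strip()</code>. A stdlib-only sketch of the same idea (markup below is illustrative):</p>

```python
from html.parser import HTMLParser

class TextCollector(HTMLParser):
    # Gathers every text node, so a comment that splits the text
    # into two nodes no longer truncates the result.
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

html = '<a class="bloko-link_kind-tertiary">\u041f\u0410\u041e\xa0<!---->Company</a>'
collector = TextCollector()
collector.feed(html)
# Join all text nodes, then normalise the non-breaking space.
company = "".join(collector.parts).replace("\xa0", " ").strip()
```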
| <python><web-scraping><scrapy> | 2023-09-22 03:32:51 | 1 | 7,089 | Gleichmut |
77,154,695 | 3,777,717 | Are comparisons really allowed to return numbers instead of booleans, and why? | <p>I've found a surprising sentence in the Python documentation under <a href="//docs.python.org/3/library/stdtypes.html#truth-value-testing" rel="nofollow noreferrer">Truth Value Testing</a>:</p>
<blockquote>
<p>Operations and built-in functions that have a Boolean result always return 0 or False for false and 1 or True for true, unless otherwise stated.</p>
</blockquote>
<p>Relational operators seem to have no excepting statements, so IIUC they could return <code>0</code> and <code>1</code> instead of <code>False</code> and <code>True</code> on values of some built-in types (e.g. <code>7 < 3</code>), even nondeterministically.</p>
<p>Thus, in order to satisfy a specification requiring of my code to produce values of type <code>bool</code> or for defensive programming (whenever that's important), should I wrap logical expressions in calls to <code>bool</code>?</p>
<p>Additional question: why does this latitude exist? Does it make things easier somehow for CPython or another implementation?</p>
<p><strong>EDIT</strong>
The question has been answered and I've accepted, but I'd like to add that in <a href="//peps.python.org/pep-0285/" rel="nofollow noreferrer">PEP 285 – Adding a bool type</a> I've found the following statements:</p>
<ol>
<li>All built-in operations that conceptually return a Boolean result will be changed to return False or True instead of 0 or 1; for example, comparisons, the “not” operator, and predicates like isinstance().</li>
<li>All built-in operations that are defined to return a Boolean result will be changed to return False or True instead of 0 or 1. In particular, this affects comparisons (<, <=, ==, !=, >, >=, is, is not, in, not in), the unary operator ‘not’, the built-in functions callable(), hasattr(), isinstance() and issubclass(), the dict method has_key(), the string and unicode methods endswith(), isalnum(), isalpha(), isdigit(), islower(), isspace(), istitle(), isupper(), and startswith(), the unicode methods isdecimal() and isnumeric(), and the ‘closed’ attribute of file objects. The predicates in the operator module are also changed to return a bool, including operator.truth().</li>
<li>The only thing that changes is the preferred values to represent truth values when returned or assigned explicitly. Previously, these preferred truth values were 0 and 1; the PEP changes the preferred values to False and True, and changes built-in operations to return these preferred values.</li>
</ol>
<p>However, PEPs seem to be less authoritative than the documentation (of which the language and library references are the main parts) and there are numerous deviations from PEPs (many of them mentioned explicitly). So until the team updates it, the stronger guarantees of the PEP aren't to be trusted, I think.</p>
<p><strong>EDIT</strong>
I've reported it on GH <a href="//github.com/python/cpython/issues/109791" rel="nofollow noreferrer">as a Python issue</a></p>
| <python><types><type-conversion><boolean><truthiness> | 2023-09-22 03:23:56 | 1 | 1,201 | ByteEater |
77,154,484 | 21,370,869 | Is there a way to view/peek into the uri that Invoke-RestMethod is constructing? | <p>I don't know much about REST APIs; I am using PowerShell to learn more about them.</p>
<p>Currently I want to see the final URI that is constructed by clients such as <code>Invoke-RestMethod</code> and Python's <code>requests</code> library.</p>
<p>Looking at the documentation examples for <code>Invoke-RestMethod</code>, <a href="https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/invoke-restmethod?view=powershell-7.3#example-4-simplified-multipart-form-data-submission" rel="nofollow noreferrer">Example 4: Simplified Multipart/Form-Data Submission</a> constructs a URI (I think) using the <code>$Form</code> hashtable, and then <code>Invoke-RestMethod</code> is invoked with it.</p>
<p>Is there a way to intercept this message? If not, is there by any chance a public REST API out there that lets you see the call you made to it, as it was received?</p>
<p>For example, <a href="http://jsonplaceholder.typicode.com/" rel="nofollow noreferrer">Placeholder</a> is a useful tool I recently came across; it's a "free fake API for testing and prototyping". It would be nice to have something similar for seeing the URI that is being sent to a website.</p>
<p>The API documentation for Regex101 has a few nice examples like this one in <a href="https://github.com/firasdib/Regex101/wiki/API#python-3" rel="nofollow noreferrer">Python</a>, but none of them are in PowerShell. I am trying to convert the Python ones to PowerShell, but it's hard without seeing what the final result is.</p>
<p>I am working in the dark here and I feel that I need to actually see what I am doing.</p>
<pre><code>$Uri = 'https://api.contoso.com/v2/profile'
$Form = @{
    firstName = 'John'
    lastName  = 'Doe'
    email     = 'john.doe@contoso.com'
    avatar    = Get-Item -Path 'c:\Pictures\jdoe.png'
    birthday  = '1980-10-15'
    hobbies   = 'Hiking','Fishing','Jogging'
}
$Result = Invoke-RestMethod -Uri $Uri -Method Post -Form $Form
</code></pre>
<p>With the above example, what URI string will <code>Invoke-RestMethod -Uri $Uri -Method Post -Form $Form</code> transmit to <code>api.contoso.com</code>?</p>
<p>PS: I am not designing a REST API, merely learning how to call them.</p>
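<p>On the Python side, the <code>requests</code> library lets you inspect exactly this without sending anything: <code>Request.prepare()</code> builds the final URL and body offline (assumes <code>requests</code> is installed; the field values below are illustrative). Note that with a form POST, the fields travel in the request body, not in the URI:</p>

```python
import requests

# prepare() builds the request without sending it, so the final URL,
# headers and encoded body can be inspected offline.
req = requests.Request(
    "POST",
    "https://api.contoso.com/v2/profile",
    params={"debug": "1"},          # illustrative query parameter
    data={"firstName": "John", "lastName": "Doe"},
)
prepared = req.prepare()
# prepared.url  -> full URI including the query string
# prepared.body -> form-encoded POST body (sent separately, not in the URI)
```

<p>For the server-side view, httpbin.org is a well-known echo service that reflects back the request it received.</p>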
| <python><powershell><rest> | 2023-09-22 01:50:51 | 0 | 1,757 | Ralf_Reddings |
77,154,413 | 20,358 | Converting a Python object To XML without namespaces | <p>I am using the XmlSerializer from <a href="https://xsdata.readthedocs.io/en/latest/api/reference/xsdata.formats.dataclass.serializers.XmlSerializer.html" rel="nofollow noreferrer">xsdata</a> to convert a python3.9 dataclass object into XML.</p>
<p>This is the code</p>
<pre><code># create serializer
XML_SERIALIZER = XmlSerializer(config=SerializerConfig(xml_declaration=False))
# initialize class with multiple properties
my_obj = MyDataClass(prop1='some val', prop2='some val' , , ,)
# serialize object
serialized_value = XML_SERIALIZER.render(my_obj)
</code></pre>
<p>This generates an XML representation of the object, but with things I don't want in the XML, such as the <code>xmlns:xsi</code> declaration and the <code>xsi:type</code> attribute:</p>
<pre><code> <SomeProperty xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="SomeTypePropetryInMyDataClass">
<code>
XYZ
</code>
</SomeProperty>
</code></pre>
<p>I tried render like this too <code>XML_SERIALIZER.render(my_obj, ns_map=None)</code>, but that didn't work either.</p>
<p>Does anyone know how to render that without the namespaces and type information added? Is there another XML serializer/deserializer for python that is more flexible?</p>
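I'm not aware of an xsdata option that reliably suppresses this, so treat the following as a workaround sketch: post-process the rendered string with the standard library's `ElementTree`, deleting any attributes in the `xsi` namespace. Once no attribute uses that namespace, the `xmlns:xsi` declaration is no longer emitted either:

```python
import xml.etree.ElementTree as ET

XSI = "{http://www.w3.org/2001/XMLSchema-instance}"

def strip_xsi(xml_text: str) -> str:
    """Remove xsi:* attributes so the namespace declaration disappears too."""
    root = ET.fromstring(xml_text)
    for el in root.iter():
        for attr in list(el.attrib):
            if attr.startswith(XSI):
                del el.attrib[attr]
    return ET.tostring(root, encoding="unicode")
```

For example, `strip_xsi('<SomeProperty xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="SomeType"><code>XYZ</code></SomeProperty>')` yields `<SomeProperty><code>XYZ</code></SomeProperty>`.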
| <python><xml><serialization><deserialization><xml-deserialization> | 2023-09-22 01:21:20 | 1 | 14,834 | user20358 |
77,154,353 | 5,631,598 | How to interpret a span of text as literals in regex | <p>I am dynamically building a regex pattern that does the following:</p>
<ul>
<li><code>{token}</code> - a search string that is generated and passed to my regex function during runtime</li>
<li><code>page</code> - after the <code>token</code>, find the first occurrence of the word <code>page</code></li>
<li>number - extract the number after the word <code>page</code></li>
</ul>
<p>The pattern that I came up with is <code>{token}.*?page.*?(\d+)</code>, where <code>{token}</code> gets replaced by a string that is generated at runtime. This is working for the most part except when <code>{token}</code> contains special characters such as <code>\^$?*+()[]{}</code>.</p>
<p><strong>Is there anything I can wrap around <code>{token}</code> so that everything inside is taken as literals?</strong> Right now whenever I encounter special characters I'm getting errors such as</p>
<pre><code>re.error: unbalanced parenthesis at position 26
</code></pre>
<p>If it matters, I'm using Python. So for example:</p>
<pre><code>import re
# The values for these two are determined at runtime
sampleText = 'These requirements result in a financial institution having to: search for certain indicia linked to an account holder (see paragraph 7.24 for a list of indicia); and/or</bbox></span></p><p text-alignment="left"><span class="text css_983095220"><bbox page="66" x="70.25" y="251.5419921875" height="10.7080078125" width="440.99853515625">request that account holders self-certify their residence status.</bbox></span></p><p text-alignment="left"><span class="text css_983095220"><bbox page="66" x="40.25" y="285.2919921875" height="109.7080078125" width="508.53515625">'
token = "7.24 for a list of indicia); and/or request that a"
# sometimes throws an error
regexResult = re.search(token+'.*?page.*?(\d+)', sampleText)
print(regexResult)
</code></pre>
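`re.escape` exists for exactly this: it backslash-escapes every regex metacharacter in a string so the result matches those characters literally. A sketch with a short made-up token and text (illustrative, not the question's full sample):

```python
import re

token = "indicia); and/or (see $4.2)?"   # contains regex metacharacters
text = 'indicia); and/or (see $4.2)? ... <bbox page="66" x="70.25">'

# re.escape neutralizes \^$?*+()[]{} and friends inside the token
pattern = re.escape(token) + r".*?page.*?(\d+)"
m = re.search(pattern, text)
print(m.group(1))  # 66
```

Also note that the pattern suffix is written as a raw string (`r"...\d..."`); a plain `'...\d...'` works today but raises a `DeprecationWarning` on recent Python versions.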
| <python><regex> | 2023-09-22 00:54:04 | 0 | 381 | Calseon |
77,154,257 | 3,402,703 | selecting a file in python to attach to a whatsapp web chat with selenium | <p>I want to use python and selenium to send some text and images to a group of my contacts. I already have working code for sending the text, but have failed to find a way to send the images. I'm particularly stuck on selecting the image.</p>
<p>My code is this:</p>
<pre><code>from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.firefox.service import Service
import time
import pyautogui
service = Service(log_path = "log.log")
driver = webdriver.Firefox(service = service)
driver.get("https://web.whatsapp.com")
print("Scan QR Code, And then Enter")
input()
print("Logged In")
contacts = ["John Doe", "Mary Jane"]
text = """This text works"""
file_path = "/home/xyz/Downloads/abcdef.jpeg"
for contact in contacts:
input_box_search = driver.find_element(By.XPATH, "/html/body/div[1]/div/div/div[4]/div/div[1]/div/div/div[2]/div/div[1]")
input_box_search.click()
time.sleep(2)
for i in contact:
input_box_search.send_keys(i)
time.sleep(0.05)
selected_contact = driver.find_element(By.XPATH, "//span[@title='"+contact+"']")
selected_contact.click()
inp_xpath = "/html/body/div[1]/div/div/div[5]/div/footer/div[1]/div/span[2]/div/div[2]/div[1]/div/div[1]"
input_box = driver.find_element(By.XPATH, inp_xpath)
time.sleep(2)
input_box.click()
for i in text:
input_box.send_keys(i)
time.sleep(0.05)
input_box.send_keys(Keys.ENTER)
time.sleep(2)
add_sign = driver.find_element(By.XPATH, "/html/body/div[1]/div/div/div[5]/div/footer/div[1]/div/span[2]/div/div[1]/div[2]/div/div")
add_sign.click()
time.sleep(1)
photo_icon = driver.find_element(By.XPATH, "/html/body/div[1]/div/div/div[5]/div/footer/div[1]/div/span[2]/div/div[1]/div[2]/div/span/div/ul/div/div[2]/li/div/span")
photo_icon.click()
### This one fails. So does keyboard
pyautogui.typewrite(file_path)
send_icon = driver.find_element(By.XPATH, "/html/body/div[1]/div/div/div[3]/div[2]/span/div/span/div/div/div[2]/div/div[2]/div[2]/div/div")
send_icon.click()
driver.quit()
</code></pre>
<p>I am able to click the plus sign and select the "photos and videos" item in the menu, but then a file selector window opens and I haven't been able to select the file I want. I'm working on an Ubuntu machine. If I just Ctrl-V the path to the file after the window pops up, the pasted text is taken as a search, the file is selected, and an additional ENTER confirms the selection (closing the window and leaving the code ready to click the send button <code>send_icon</code>).</p>
<p>My problem is I can't find how to send the file path to that specific window, so I can't select the file to send it.<a href="https://i.sstatic.net/qxiBX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qxiBX.png" alt="This is the window in which I can't select the file" /></a></p>
<p>I'm open to a different approach (I've been working on this for some hours now), as long as it doesn't involve the whatsapp API (I want to use selenium for this specific task).</p>
<h3>What I've Tried</h3>
<p>I went through the following SO questions that seem to be related: <a href="https://stackoverflow.com/questions/76538449/how-to-send-file-attachment-to-whatsapp-web-using-puppeteer">using Puppeteer</a>, <a href="https://stackoverflow.com/questions/72264185/how-to-send-image-media-messages-on-whatsapp-web?rq=2">using JavaScript</a>, <a href="https://stackoverflow.com/questions/75167870/how-to-send-a-file-to-specific-contact-in-whatsapp-using-url-launcher-flutter?rq=2">using Flutter</a>. All answers involve using the API, so none of them work in my case.</p>
<p>I also tried to use <code>pyautogui.typewrite(file_path)</code> and <code>keyboard.press_and_release(file_path)</code>, but they send the key presses to the window from which I'm running the script (the terminal or emacs), and not to the one opened by the browser.</p>
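A common Selenium pattern avoids the OS file dialog entirely: instead of typing into the dialog, locate the hidden `<input type="file">` element that the page uses under the hood and `send_keys` the absolute path to it. The CSS selector below is an assumption (WhatsApp Web's markup changes frequently, so inspect the DOM to confirm it):

```python
def attach_file(driver, file_path):
    """Bypass the OS file picker by writing the path into the hidden file input."""
    # "css selector" is the string behind Selenium's By.CSS_SELECTOR;
    # the selector itself is an assumption, so verify it against the live DOM
    file_input = driver.find_element("css selector", "input[type='file']")
    file_input.send_keys(file_path)

# attach_file(driver, "/home/xyz/Downloads/abcdef.jpeg")  # then click send_icon
```

Since the dialog never opens, no `pyautogui`/`keyboard` focus juggling is needed.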
| <python><selenium-webdriver><whatsapp> | 2023-09-22 00:12:24 | 1 | 6,507 | PavoDive |
77,154,241 | 9,571,463 | Chunk a JSON Array of Objects until each Array item is of byte length < a Static Threshold | <p>I have a <code>list</code> of <code>dict</code> that follow a consistent structure where each <code>dict</code> has a <code>list</code> of integers. However, I need to make sure each <code>dict</code> has a bytesize (when converted to a JSON string) less than a specified threshold.</p>
<p>If the <code>dict</code> exceeds that bytesize threshold, I need to chunk that dict's integer <code>list</code>.</p>
<p>Attempt:</p>
<pre><code>
import json
payload: list[dict] = [
{"data1": [1,2,3,4]},
{"data2": [8,9,10]},
{"data3": [1,2,3,4,5,6,7]}
]
# Max size in bytes we can allow. This is static and a hard limit that is not variable.
MAX_SIZE: int = 25
def check_and_chunk(arr: list):
def check_size_bytes(item):
return True if len(json.dumps(item).encode("utf-8")) > MAX_SIZE else False
def chunk(item, num_chunks: int=2):
for i in range(0, len(item), num_chunks):
yield item[i:i+num_chunks]
# First check if the entire payload is smaller than the MAX_SIZE
if not check_size_bytes(arr):
return arr
# Lets find the items that are small and items that are too big, respectively
small, big = [], []
# Find the indices in the payload that are too big
big_idx: list = [i for i, j in enumerate(list(map(check_size_bytes, arr))) if j]
# Append these items respectively to their proper lists
item_append = (small.append, big.append)
for i, item in enumerate(arr):
item_append[i in set(big_idx)](item)
# Modify the big items until they are small enough to be moved to the small_items list
for i in big:
print(i)
# This is where I am unsure of how best to proceed. I'd like to essentially split the big dictionaries in the 'big' list such that it is small enough where each element is in the 'small' result.
</code></pre>
<p>Example of a possible desired result:</p>
<pre><code>payload: list[dict] = [
{"data1": [1,2,3,4]},
{"data2": [8,9,10]},
{"data3": [1,2,3,4]},
{"data3": [5,6,7]}
]
</code></pre>
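One way to finish the `big` branch is to halve each oversized dict's list recursively until every piece serializes under the limit. A sketch (it assumes each dict has exactly one key, as in the example; the exact split points may differ from the expected output above, since halving splits [1..7] into [1,2,3] and [4,5,6,7]):

```python
import json

MAX_SIZE = 25  # bytes, as in the question

def too_big(item) -> bool:
    return len(json.dumps(item).encode("utf-8")) > MAX_SIZE

def split_dict(d: dict) -> list[dict]:
    """Recursively halve the (single) list value until each piece fits."""
    if not too_big(d):
        return [d]
    (key, values), = d.items()   # assumes exactly one key per dict
    mid = len(values) // 2
    if mid == 0:                 # a single element is still too big: give up
        return [d]
    return split_dict({key: values[:mid]}) + split_dict({key: values[mid:]})

payload = [
    {"data1": [1, 2, 3, 4]},
    {"data2": [8, 9, 10]},
    {"data3": [1, 2, 3, 4, 5, 6, 7]},
]
result = [part for item in payload for part in split_dict(item)]
```

Every element of `result` is then guaranteed to serialize to at most `MAX_SIZE` bytes (except for the unsplittable single-element case noted in the comment).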
| <python><json><list><dictionary> | 2023-09-22 00:05:51 | 4 | 1,767 | Coldchain9 |
77,154,149 | 8,481,155 | BigQuery changing order of columns while WRITE_TRUNCATE | <p>I create a BigQuery table with a few columns from Terraform.</p>
<pre><code>column1, column2, column3, column4
</code></pre>
<p>In a Python script I write a Pandas Dataframe to this BigQuery table. But the dataframe has the columns in a different order.</p>
<p>df.head()</p>
<pre><code>column4, column1, column2, column3.
</code></pre>
<p>My python script to write.</p>
<pre><code>dataset_ref = client.dataset("dataset")
job_config = bigquery.LoadJobConfig()
job_config.write_disposition = 'WRITE_TRUNCATE'
load_job = client.load_table_from_dataframe(
pandas_df, dataset_ref.table("table"),
job_config=job_config)
</code></pre>
<p>After the script is run, I could see that the order of the columns in the BigQuery table is changed to <code>column4, column1, column2, column3</code>.</p>
<p>So the next time I run Terraform apply, it sees this as a change and deletes and recreates the table, and I'm losing all the contents of the table.</p>
<p>Is there any way to write the dataframe without forcefully changing the order of the original schema?</p>
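One low-tech option (a sketch; the column names are assumed from the question) is to reindex the DataFrame to the Terraform-managed order before loading, since with `WRITE_TRUNCATE` the recreated table appears to follow the DataFrame's column order. Alternatively, setting `job_config.schema` explicitly should pin the schema regardless of DataFrame order:

```python
import pandas as pd

SCHEMA_ORDER = ["column1", "column2", "column3", "column4"]  # order defined in Terraform

# DataFrame arrives with columns in the "wrong" order
df = pd.DataFrame({"column4": [4], "column1": [1], "column2": [2], "column3": [3]})
df = df[SCHEMA_ORDER]  # reorder to match the Terraform-managed schema

print(list(df.columns))
```

After this, `client.load_table_from_dataframe(df, ...)` receives the columns in the same order Terraform created them, so Terraform has nothing to reconcile.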
| <python><python-3.x><google-bigquery> | 2023-09-21 23:32:02 | 1 | 701 | Ashok KS |
77,154,059 | 11,277,108 | Is it possible to perform validation on a field amended after initialisation of the model? | <p>The below:</p>
<pre><code>import pydantic as pdc
class Test(pdc.BaseModel):
test: int | None
test = Test(test=None)
test.test = "10"
print(type(test.test))
</code></pre>
<p>Results in <code><class 'str'></code>.</p>
<p>Is it possible to ensure that validation is performed on a field amended after initialisation of the model?</p>
| <python><pydantic> | 2023-09-21 22:57:48 | 0 | 1,121 | Jossy |
77,154,013 | 13,142,245 | FastAPI.Path : TypeError: Path() missing 1 required positional argument: 'default' | <p>From tutorials I've seen the below used. However, when I try to replicate it (running a Docker container locally) I observe the below error.</p>
<pre><code>@app.get("/get-student/{student_id}") # path parameters and query parameters have no overlap
def get_student(student_id: int = Path(
description="student ID",
gt=0 #minimum ID = 1,
)
):
return students[student_id]
</code></pre>
<p>Error:</p>
<pre><code> File "/code/app/main.py", line 37, in <module>
def get_student(student_id: int = Path(
TypeError: Path() missing 1 required positional argument: 'default'
</code></pre>
<p>When I review <a href="https://fastapi.tiangolo.com/tutorial/path-params-numeric-validations/#import-path" rel="nofollow noreferrer">official documentation</a>, default is not passed in as an argument. Nor do I see any data on how it should be used.</p>
<p>This leads me to two questions:</p>
<ol>
<li>Why is default required in my use case?</li>
<li>Is using a Path best practice / what problem does it solve?</li>
</ol>
<p>Edit: the documentation claims that adding <code>*</code> as the first argument will resolve this. However, I have not observed this to be the case.</p>
<pre><code>from fastapi import FastAPI, Path
app = FastAPI()
@app.get("/items/{item_id}")
async def read_items(*, item_id: int = Path(title="The ID of the item to get"), q: str):
results = {"item_id": item_id}
if q:
results.update({"q": q})
return results
</code></pre>
<p>Additionally, the docs also call for using <code>Annotated</code>; however, I'm still observing the same error:</p>
<pre><code>@app.get("/get-student/{student_id}") # path parameters and query parameters have no overlap
def get_student(student_id = Annotated[int, Path(
description="student ID",
gt=0 #minimum ID = 1,
)]):
return students[student_id]
</code></pre>
| <python><docker><fastapi> | 2023-09-21 22:41:24 | 1 | 1,238 | jbuddy_13 |
77,153,903 | 16,717,009 | Finding the largest power of 10 that divides evenly into a list of numbers | <p>I'm trying to scale down a set of numbers to feed into a DP subset-sum algorithm. (It blows up if the numbers are too large.) Specifically, I need to find the largest power of 10 I can divide into the numbers without losing precision. I have a working routine but since it will run often in a loop, I'm hoping there's a faster way than the brute force method I came up with. My numbers happen to be Decimals.</p>
<pre><code>from decimal import Decimal
import math
def largest_common_power_of_10(numbers: list[Decimal]) -> int:
"""
Determine the largest power of 10 in list of numbers that will divide into all numbers
without losing a significant digit left of the decimal point
"""
min_exponent = float('inf')
for num in numbers:
if num != 0:
# Count the number of trailing zeros in the number
exponent = 0
while num % 10 == 0:
num //= 10
exponent += 1
min_exponent = min(min_exponent, exponent)
# The largest power of 10 is 10 raised to the min_exponent
return int(min_exponent)
decimal_numbers = [Decimal("1234"), Decimal("5000"), Decimal("200")]
result = largest_common_power_of_10(decimal_numbers)
assert(result == 0)
decimal_numbers = [Decimal(470_363_000.0000), Decimal(143_539_000.0000), Decimal(1_200_000.0000)]
result = largest_common_power_of_10(decimal_numbers)
assert(result == 3)
divisor = 10**result
# Later processing can use scaled_list
scaled_list = [x/divisor for x in decimal_numbers]
assert(scaled_list == [Decimal('470363'), Decimal('143539'), Decimal('1200')])
reconstituted_list = [x * divisor for x in scaled_list]
assert(reconstituted_list == decimal_numbers)
</code></pre>
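`Decimal` already tracks its exponent, so the per-digit loop can be replaced by `normalize()`, which strips trailing zeros and reports the result in `as_tuple().exponent`. A sketch (it assumes finite, non-NaN inputs; fractional values yield negative exponents, clamped to 0 here, and an all-zero list returns 0):

```python
from decimal import Decimal

def largest_common_power_of_10(numbers: list[Decimal]) -> int:
    # Decimal("5000").normalize() is Decimal("5E+3"): exponent 3 == 3 trailing zeros
    exponents = [num.normalize().as_tuple().exponent
                 for num in numbers if num != 0]
    if not exponents:
        return 0
    return max(0, min(exponents))

print(largest_common_power_of_10([Decimal("1234"), Decimal("5000"), Decimal("200")]))  # 0
```

This avoids repeated `Decimal` division entirely; each number is examined once, in C-level code.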
| <python><math> | 2023-09-21 22:09:07 | 4 | 343 | MikeP |
77,153,861 | 5,960,363 | Parsing raw multipart/form data with FastAPI | <h2>Context</h2>
<p>I'd like to use FastAPI to ingest SendGrid's <a href="https://docs.sendgrid.com/for-developers/parsing-email/setting-up-the-inbound-parse-webhook" rel="nofollow noreferrer">inbound parse webhook</a>, which comes in as a raw multipart/form-data (see below). My code is receiving the payload, but I get a "422 unprocessable entity" error when I try to parse it with FastAPI (indicating Pydantic rejects it).</p>
<p><strong>Is there a way to parse this kind of data with FastAPI, or do I need to do so manually and then pass it to FastAPI in a different format?</strong> (If so, a FastAPI example would be much appreciated)</p>
<h2>Minimal Reproducible Example</h2>
<p>Simple FastAPI app to receive the data:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any
from fastapi import FastAPI, Form
app = FastAPI()
@app.post("/inbound_email")
async def inbound_email(
headers: Any = Form(...),
subject: Any = Form(...),
):
try:
print(headers)
print(subject)
except Exception as e:
print(f"inbound_email exception: {e}")
# Make code accessible to the web: I'm starting uvicorn on a port I've told ngrok to listen to
if __name__ == "__main__":
import uvicorn
uvicorn.run("min_reproducible_example:app", host="127.0.0.1", port=54200, reload=True)
</code></pre>
<p>SendGrid docs contain a full <a href="https://stackoverflow.com/questions/77153861/parsing-raw-multipart-form-data-with-fastapi#:%7E:text=inbound%20parse%20webhook">sample of the webhook payload</a>, but to simplify I'll use a truncated version for our example (keeping the <code>--xYzZY</code> boundaries consistent).</p>
<p>Here's the <code>curl</code> from zsh terminal:</p>
<pre><code>curl --location 'https://myurlhere.ngrok-free.app/inbound_email' \
--data '--xYzZY
Content-Disposition: form-data; name="headers"
Content-Type: multipart/alternative; boundary="00000000000021b9ea0605bbd8ac"
--xYzZY
Content-Disposition: form-data; name="subject"
Inbound test 5
--xYzZY--
'
</code></pre>
<p>The app prints:</p>
<blockquote>
<p>"POST /inbound_email HTTP/1.1" 422 Unprocessable Entity</p>
</blockquote>
<p>If I make the same request from Postman, I get a more detailed response from the app:</p>
<pre><code>*422 Unprocessable Entry*
{
"detail": [
{
"type": "missing",
"loc": [
"body",
"headers"
],
"msg": "Field required",
"input": null,
"url": "https://errors.pydantic.dev/2.3/v/missing"
}
]
}
</code></pre>
<p>I'm wondering if SendGrid's raw multipart format is something FastAPI doesn't have the tools to parse, as if I try to parse the <code>request</code> directly into <code>headers</code> and <code>subject</code>, I am unsuccessful:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, Request
app = FastAPI()
@app.post("/inbound_email")
async def inbound_email(
request: Request,
):
try:
# Access the raw request body
raw_body = await request.body()
print(raw_body.decode("utf-8"))
# Access the form data directly from request.form()
form_data = await request.form()
body_headers = form_data.get("headers")
subject = form_data.get("subject")
print("body_headers:", body_headers)
print("subject:", subject)
except Exception as e:
print(f"inbound_email exception: {e}")
if __name__ == "__main__":
import uvicorn
uvicorn.run("min_reproducible_example:app", host="127.0.0.1", port=54200, reload=True)
</code></pre>
<p>Terminal output is:</p>
<pre><code>--xYzZY
Content-Disposition: form-data; name="headers"
Content-Type: multipart/alternative; boundary="00000000000021b9ea0605bbd8ac"
--xYzZY
Content-Disposition: form-data; name="subject"
Inbound test 5
--xYzZY--
body_headers: None
subject: None
</code></pre>
<h3>A few additional notes:</h3>
<p>When sending from Postman, I send body text in "raw" format, with the default headers. (I have tried other headers explicitly specifying format, boundary etc, but that hasn't worked).</p>
<p>Notably, I can send POST requests to FastAPI in any other format (including Postman's "form-data") and get everything to work. It's just this raw text format that I can't get to parse.</p>
<p>Thank you for any help!</p>
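One thing worth checking: `curl --data` (and Postman's raw mode with default headers) sends `Content-Type: application/x-www-form-urlencoded`, so Starlette's form parser never learns the `multipart/form-data` boundary and finds no fields, which would explain both the 422 and the empty `request.form()`. A sketch of replaying the raw body with the boundary declared in the header (URL and boundary taken from the question; the request is built but not sent, so the assembled message can be inspected):

```python
import requests

raw_body = (
    "--xYzZY\r\n"
    'Content-Disposition: form-data; name="subject"\r\n'
    "\r\n"
    "Inbound test 5\r\n"
    "--xYzZY--\r\n"
)

req = requests.Request(
    "POST",
    "https://myurlhere.ngrok-free.app/inbound_email",
    data=raw_body,
    # the boundary in the header must match the one used in the body
    headers={"Content-Type": "multipart/form-data; boundary=xYzZY"},
).prepare()

print(req.headers["Content-Type"])
# requests.Session().send(req) would transmit it; request.form() on the
# FastAPI side should then see the "subject" field.
```

SendGrid itself sends that header, so the FastAPI app may well work against the real webhook even when hand-rolled curl reproductions fail.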
| <python><forms><http><fastapi><sendgrid> | 2023-09-21 21:57:21 | 0 | 852 | FlightPlan |
77,153,859 | 616,460 | Unexpected results using "in" with an Enum | <p>The following program (checked on Python 3.9 and 3.10) outputs <code>False</code>, but I expected it to output <code>True</code>:</p>
<pre><code>import enum
class OtherTest(str, enum.Enum):
CLIP = "clip"
class Test(str, enum.Enum):
PACK = "pack"
CLIP = OtherTest.CLIP
print("clip" in (Test.PACK, Test.CLIP))
</code></pre>
<p>Why is <code>"clip"</code> not in <code>(Test.PACK, Test.CLIP)</code>?</p>
<p>It outputs <code>True</code> if I do this instead:</p>
<pre><code>import enum
class OtherTest(str, enum.Enum):
CLIP = "clip"
class Test(str, enum.Enum):
PACK = "pack"
CLIP = "clip" # Set to "clip" instead of OtherTest.CLIP
print("clip" in (Test.PACK, Test.CLIP))
</code></pre>
<p>I can't figure out what is going on in the first program.</p>
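What appears to be happening (on 3.9/3.10): when `Test.CLIP` is created, the `str` mixin builds the member's string content by calling `str()` on the assigned value, and `str(OtherTest.CLIP)` is `"OtherTest.CLIP"`, not `"clip"`. Since `in` falls back to `str.__eq__`, `"clip" == Test.CLIP` compares `"clip"` against `"OtherTest.CLIP"` and is `False`. Reusing the underlying value instead of the member avoids this:

```python
import enum

class OtherTest(str, enum.Enum):
    CLIP = "clip"

class Test(str, enum.Enum):
    PACK = "pack"
    CLIP = OtherTest.CLIP.value   # reuse the string, not the enum member

print(str.__str__(Test.CLIP))            # the raw str content: clip
print("clip" in (Test.PACK, Test.CLIP))  # True
```

`str.__str__(...)` is a handy diagnostic here because it bypasses the enum's own `__str__` and shows what the member actually holds as a string.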
| <python><enums> | 2023-09-21 21:57:12 | 1 | 40,602 | Jason C |
77,153,478 | 1,913,367 | Get the name of a function passed as an argument inside another function | <p>I am trying to measure elapsed time for different functions with a simple method like this:</p>
<pre><code>def taketime(executable,exname='method'):
tic = time.time()
exname = executable.__name__
out = executable
toc = time.time()
print('Time taken for ',exname,' ',toc-tic,' sec')
return out
</code></pre>
<p>Executable is another method; it can be whatever. Nevertheless, the program does not work: calling <code>taketime(hello())</code> evaluates <code>hello()</code> before <code>taketime</code> even starts, so <code>executable</code> holds the return value rather than the function, and reading <strong>name</strong> via <code>exname = executable.__name__</code> fails.</p>
<p>What is the correct way to get the name of the executable passed to another function?</p>
<p>Small test program (not working):</p>
<pre><code>import time
def taketime(executable,exname=None):
tic = time.time()
if exname is None: exname = executable.__name__
out = executable
toc = time.time()
print('Time taken for ',exname,' ',toc-tic,' sec')
return out
# -----------------------------------------
def hello():
print('Hello!')
return
dd = taketime(hello())
</code></pre>
<p>Of course it works when I do this:
<code>dd = taketime(hello(),'hello')</code></p>
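The usual pattern is to pass the function object itself (no parentheses) and let the timer make the call, so both the name and the timing are correct. A sketch:

```python
import time

def taketime(func, *args, **kwargs):
    """Time a call to func; pass the function itself, not its result."""
    tic = time.time()
    out = func(*args, **kwargs)   # the call happens here, inside the timer
    toc = time.time()
    print('Time taken for', func.__name__, toc - tic, 'sec')
    return out

def hello():
    print('Hello!')

dd = taketime(hello)              # note: hello, not hello()
```

`*args, **kwargs` lets the same wrapper time functions that take arguments, e.g. `taketime(some_func, 1, x=2)`.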
| <python><function> | 2023-09-21 20:31:21 | 1 | 2,080 | msi_gerva |
77,153,464 | 5,213,015 | DJANGO REST API - How do I restrict user access to API? | <p>So I’ve been putting together a REST API using William Vincent’s REST API’s with Django book.
I have everything up and going according to the book but I’m a bit of a noob so I need some clarification from the pros.</p>
<p>How can I restrict a user with a token to see certain information within my API?</p>
<p>Created user with Token:
<a href="https://i.sstatic.net/tKNJA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tKNJA.png" alt="enter image description here" /></a></p>
<p>I added <code>authentication_classes = [TokenAuthentication]</code> to <code>class UserList</code>, thinking that a user logged in with a token would be able to access that information of my API, but I get the below:</p>
<p><a href="https://i.sstatic.net/c8FOB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c8FOB.png" alt="enter image description here" /></a></p>
<p>When I remove <code>authentication_classes = [TokenAuthentication]</code>, I get the below.</p>
<p><a href="https://i.sstatic.net/AGrn4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AGrn4.png" alt="enter image description here" /></a></p>
<p>All users are able to see my API and I don't want that; I only want users with a token to view my API.</p>
<p>Any help is gladly appreciated!</p>
<p>Thanks!
Code below</p>
<p><strong># api/views.py</strong></p>
<pre><code>from django.contrib.auth import get_user_model
from rest_framework import generics, permissions
from rest_framework.authentication import TokenAuthentication
from .serializers import UserSerializer
# Display List View - User
class UserList(generics.ListAPIView):
queryset = get_user_model().objects.all()
serializer_class = UserSerializer
permission_classes = (permissions.IsAuthenticated,)
authentication_classes = [TokenAuthentication]
</code></pre>
<p><strong># api/serializers.py</strong></p>
<pre><code>from django.contrib.auth import get_user_model
from rest_framework import serializers
class UserSerializer(serializers.ModelSerializer):
class Meta:
model = get_user_model()
fields =('id', 'username', 'email',)
</code></pre>
<p><strong>#api/urls.py</strong></p>
<pre><code>from django.urls import path
from .views import (UserList)
urlpatterns = [
path('users/', UserList.as_view()),
]
</code></pre>
<p><strong>#master_application/urls.py</strong></p>
<pre><code>urlpatterns = [
path('admin/', admin.site.urls),
path('', include('users.urls')),
path('api/', include('api.urls')),
path('api-auth/', include('rest_framework.urls')),
path('api/rest-auth/', include('rest_auth.urls')),
path('api/rest-auth/registration/', include('rest_auth.registration.urls')),
path('', include('django.contrib.auth.urls')),
path('users/', include('users.urls')),
path('users/', include('django.contrib.auth.urls')),
]
</code></pre>
<p><strong>settings.py</strong></p>
<pre><code>REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': [
'rest_framework.permissions.IsAdminUser',
'rest_framework.permissions.IsAuthenticated',
],
'DEFAULT_AUTHENTICATION_CLASSES': [
'rest_framework.authentication.SessionAuthentication',
'rest_framework.authentication.TokenAuthentication',
]
}
</code></pre>
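If the goal is token-only access, two pieces matter (a sketch; the host and token below are placeholders): remove `SessionAuthentication` from `DEFAULT_AUTHENTICATION_CLASSES` (it is what lets a browser-logged-in user see the browsable API without a token), and have clients send the token in the `Authorization` header on every request. Building such a request with only the standard library:

```python
from urllib.request import Request

# placeholder host and token; substitute your server and a real user's token
req = Request(
    "http://127.0.0.1:8000/api/users/",
    headers={"Authorization": "Token 0123456789abcdef0123456789abcdef01234567"},
)
print(req.get_header("Authorization"))
# urllib.request.urlopen(req) would perform the call once the server is up
```

With `TokenAuthentication` alone on the view, any request lacking this header gets a 401, which is exactly the restriction being asked for; the 401 seen in the browser happens because the browser sends no such header.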
| <python><django><django-rest-framework><django-rest-auth> | 2023-09-21 20:29:58 | 2 | 419 | spidey677 |
77,153,433 | 6,471,140 | how to get history of metrics using Sagemaker SDK | <p>In sagemaker</p>
<pre><code>estimator.training_job_analytics.dataframe()
</code></pre>
<p>allows you to get a dataframe that contains the final value for every metric, but how to get the history of values for the metrics? I'm looking for something similar to what keras allows using</p>
<pre><code>history = model.fit(X,y)
plt.plot(history.history["val_accuracy"]
</code></pre>
<p>but for SageMaker built-in training jobs using the SageMaker SDK.</p>
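In my experience `TrainingJobAnalytics.dataframe()` actually returns one row per emitted data point, with `timestamp`, `metric_name`, and `value` columns (an assumed layout; check your own frame), so the history is there once you filter by metric and sort by time. A sketch:

```python
import pandas as pd

def metric_history(df: pd.DataFrame, metric_name: str) -> pd.DataFrame:
    """Filter TrainingJobAnalytics rows for one metric, ordered by time."""
    hist = df[df["metric_name"] == metric_name].sort_values("timestamp")
    return hist[["timestamp", "value"]]

# In SageMaker ("validation:accuracy" is an assumed metric name; use one of
# the metric_definitions configured on your estimator):
# df = estimator.training_job_analytics.dataframe()
# acc = metric_history(df, "validation:accuracy")
# plt.plot(acc["timestamp"], acc["value"])
```

If you only see a single row per metric, the training job may be too short for CloudWatch to have captured intermediate points.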
| <python><amazon-web-services><machine-learning><amazon-sagemaker> | 2023-09-21 20:24:44 | 1 | 3,554 | Luis Leal |
77,153,334 | 3,387,716 | How to revert N choose 2 combinations? | <p>I have a set of pairs that can contain the result of mutliple "N choose 2" combinations:</p>
<pre class="lang-py prettyprint-override"><code>inputs = {
('id1', 'id2'), ('id1', 'id3'), ('id1', 'id4'),
('id2', 'id3'), ('id2', 'id4'),
('id3', 'id4'), ('id3', 'id5'),
('id4', 'id5'),
('id5', 'id6'),
}
</code></pre>
<p>And I would like to reverse those combinations, like this:</p>
<pre class="lang-py prettyprint-override"><code>recombinations = [
('id1', 'id2', 'id3', 'id4'),
('id3', 'id4', 'id5'),
('id5', 'id6'),
]
</code></pre>
<p>I managed to do it using brute-force:</p>
<pre class="lang-py prettyprint-override"><code>ids = list(sorted( {i for i in itertools.chain(*inputs)} ))
excludes = set()
recombinations = {tuple(i) for i in map(sorted, inputs)}
for i in range(3, len(ids)+1):
for subset in itertools.combinations(ids, i):
for j in range(i-1, len(subset)):
combs = set(itertools.combinations(subset, j))
if all(tup in recombinations for tup in combs):
recombinations.add(subset)
excludes = excludes.union(combs)
for tup in excludes:
recombinations.remove(tup)
print(recombinations)
</code></pre>
<pre class="lang-py prettyprint-override"><code>{('id1', 'id2', 'id3', 'id4'), ('id3', 'id4', 'id5'), ('id5', 'id6')}
</code></pre>
<p>Is there a smarter way to do it? Or some optimizations that I can add to the code?</p>
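A way to reframe the problem: each pair is an edge of a graph, and the groups you want are exactly the graph's maximal cliques, so a clique-enumeration algorithm replaces the brute-force subset scan (`networkx.find_cliques` does this out of the box). A minimal Bron-Kerbosch sketch, using the question's inputs:

```python
inputs = {
    ('id1', 'id2'), ('id1', 'id3'), ('id1', 'id4'),
    ('id2', 'id3'), ('id2', 'id4'),
    ('id3', 'id4'), ('id3', 'id5'),
    ('id4', 'id5'),
    ('id5', 'id6'),
}

def maximal_cliques(edges):
    """Enumerate maximal cliques (Bron-Kerbosch, no pivoting)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)

    cliques = []

    def expand(r, p, x):
        # r: current clique, p: candidates, x: already-explored vertices
        if not p and not x:
            cliques.append(tuple(sorted(r)))
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)

    expand(set(), set(adj), set())
    return sorted(cliques)

print(maximal_cliques(inputs))  # the three groups from the question
```

Clique enumeration is exponential in the worst case, but it only explores subsets that are actually cliques, instead of all `itertools.combinations` of all IDs.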
| <python> | 2023-09-21 20:05:39 | 1 | 17,608 | Fravadona |
77,153,264 | 2,083,119 | speed up files uploading using aiofiles & aiohttp? | <p>I need to speed up a file-uploading script.
Please advise how to fix the error <strong>Enable tracemalloc to get the object allocation traceback</strong>,
or is there already a known way to do it differently?</p>
<pre><code>import asyncio
import aiofiles
import aiohttp
async def upload_file(session, local_path):
file_data = {
'me': 'yo'
}
async with aiofiles.open(local_path, 'rb') as fp:
file_content = await fp.read()
response = session.post('http:/my_url', data=file_content, json=file_data)
async def upload_files(paths):
async with aiohttp.ClientSession() as session:
await asyncio.gather(*[upload_file(session, **path) for path in paths])
async def main():
await upload_files([
{'local_path': '1.txt'},
])
if __name__ == "__main__":
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
</code></pre>
| <python><python-asyncio><python-aiofiles><pytest-aiohttp> | 2023-09-21 19:53:52 | 1 | 1,809 | Leo |
77,153,150 | 15,587,184 | How to Select First N Key-ordered Values of column within a grouping variable in Pandas DataFrame | <p>I have a dataset:</p>
<pre><code>import pandas as pd
data = [
('A', 'X'),
('A', 'X'),
('A', 'Y'),
('A', 'Z'),
('B', 1),
('B', 1),
('B', 2),
('B', 2),
('B', 3),
('B', 3),
('C', 'L-7'),
('C', 'L-9'),
('C', 'L-9'),
('T', 2020),
('T', 2020),
('T', 2025)
]
df = pd.DataFrame(data, columns=['ID', 'SEQ'])
print(df)
</code></pre>
<p>I want to create a key grouping ID and SEQ in order to select the first 2 rows of each distinct SEQ within each ID group.</p>
<p>For instance, ID A has 3 distinct keys: "A X", "A Y" and "A Z". In the order of the dataset, the first two keys are "A X" and "A Y", so I must select the first two rows (if available) of each, giving:</p>
<p>"A X", "A X", "A Y". Why? Because "A Z" is another key.</p>
<p>I've tried using the groupby and head functions, but I couldn't find a way to achieve this specific result. What can I try next?</p>
<pre><code>(df
.groupby(['ID','SEQ'])
.head(2)
)
</code></pre>
<p>This code is returning the original dataset and I wonder if I can solve this problem using method chaining, as it is my preferred style in Pandas.</p>
<p>The final correct output is:</p>
<p><a href="https://i.sstatic.net/hr7pG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hr7pG.png" alt="enter image description here" /></a></p>
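The reason `groupby(['ID','SEQ']).head(2)` returns everything is that no (ID, SEQ) group in this data has more than two rows; the missing step is limiting each ID to its first two *distinct* SEQ values. One chained way (a sketch consistent with the rule described above; `seq_rank` is a temporary helper column):

```python
import pandas as pd

data = [('A', 'X'), ('A', 'X'), ('A', 'Y'), ('A', 'Z'),
        ('B', 1), ('B', 1), ('B', 2), ('B', 2), ('B', 3), ('B', 3),
        ('C', 'L-7'), ('C', 'L-9'), ('C', 'L-9'),
        ('T', 2020), ('T', 2020), ('T', 2025)]
df = pd.DataFrame(data, columns=['ID', 'SEQ'])

out = (df
       # 0-based rank of each SEQ's first appearance within its ID
       .assign(seq_rank=lambda d: d.groupby('ID')['SEQ']
                                   .transform(lambda s: pd.factorize(s)[0]))
       .query('seq_rank < 2')              # first two distinct SEQs per ID
       .groupby(['ID', 'SEQ'], sort=False)
       .head(2)                            # at most two rows of each
       .drop(columns='seq_rank')
       .reset_index(drop=True))
print(out)
```

`sort=False` matters here because SEQ mixes strings and integers, which pandas cannot sort as group keys.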
| <python><pandas><chaining><method-chaining> | 2023-09-21 19:31:28 | 5 | 809 | R_Student |
77,153,141 | 6,162,679 | Why variable assignment behaves differently in Python? | <p>I am new to Python and I am quite confused about the following code:</p>
<p>Example 1:</p>
<pre><code>n = 1
m = n
n = 2
print(n) # 2
print(m) # 1
</code></pre>
<p>Example 2:</p>
<pre><code>names = ["a", "b", "c"]
visitor = names
names.pop()
print(names) # ['a', 'b']
print(visitor) # ['a', 'b']
</code></pre>
<p>Example 1 shows that <code>n</code> is 2 while <code>m</code> is still 1. However, example 2 shows that <code>names</code> is ['a', 'b'], and <code>visitor</code> is also ['a', 'b'].</p>
<p>To me, example 1 and example 2 are similar, so I wonder why the results are so different? Thank you.</p>
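The difference is rebinding versus mutation: `n = 2` rebinds the name `n` to a different object and leaves `m` alone, whereas `names.pop()` mutates the one list object that both `names` and `visitor` refer to. A sketch making the shared object visible:

```python
names = ["a", "b", "c"]
visitor = names            # no copy: both names refer to the SAME list
print(visitor is names)    # True

names.pop()                # mutates the shared object
print(visitor)             # ['a', 'b']

names = ["x", "y"]         # rebinds `names` to a NEW list; visitor is unaffected
print(visitor)             # ['a', 'b']

visitor = names[:]         # a slice (or list(names)) makes an independent copy
```

Integers behave the same way; they just have no mutating methods, so the only thing you can do to `n` is rebind it, which never affects `m`.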
| <python><variables> | 2023-09-21 19:30:16 | 3 | 922 | Yang Yang |
77,153,102 | 247,542 | How to make Django respect locale when accessing localflavor? | <p>I'm trying to apply multi-language support to a Django application, and I have the basic configuration working so I can set a language session key and automatically pull the correct translations for each <code>gettext</code> call around all my field labels.</p>
<p>However, I have some Ajax calls that populate fields for country/state/province names, pulling data from the <a href="https://pypi.org/project/django-localflavor/" rel="nofollow noreferrer">django-localflavor</a> package, which includes full translation support.</p>
<p>However, when I return, say, <code>localflavor.jp.jp_prefectures.JP_PREFECTURES</code> in my JSON response, even though my language session key is set to Japanese, and the values in that dictionary are wrapped with gettext, the values returned are in English, not Japanese.</p>
<p>My Ajax view looks something like:</p>
<pre><code>from localflavor.jp.jp_prefectures import JP_PREFECTURES
def get_country_states(request):
lang = request.session.get('l', 'en')
code = request.GET.get('code')
if code == 'ja':
states = {k: str(v) for k, v in JP_PREFECTURES}
elif code == ...
handle other cases
return HttpResponse(json.dumps({'states': states}))
</code></pre>
<p>How do I ensure <code>str(v)</code> renders the correct translated value based on the current language <code>l</code> session setting? I thought this happened automatically?</p>
<p>Even though the values in JP_PREFECTURES are gettext instances, if I try and return them directly in the JSON response, I get the error:</p>
<pre><code>Object of type __proxy__ is not JSON serializable
</code></pre>
<p>I tried <a href="https://stackoverflow.com/a/58432346/247542">this solution</a> by doing:</p>
<pre><code>from django.utils import translation
def get_country_states(request):
    lang = request.session.get('l', 'en')
    code = request.GET.get('code')
    if code == 'ja':
        with translation.override('ja'):
            states = {k: translation.gettext(v) for k, v in JP_PREFECTURES}
    elif code == ...
        handle other cases
    return HttpResponse(json.dumps({'states': states}))
</code></pre>
<p>but that had no effect. It still renders everything in English.</p>
| <python><django><django-localflavor><django-translated-fields> | 2023-09-21 19:23:25 | 0 | 65,489 | Cerin |
77,153,093 | 3,155,240 | Numpy logically merging boolean arrays with numpy.NaN | <p>Let's say I have 2 numpy arrays. The <strong>first</strong> one will <strong>always</strong> contain <strong>ONLY</strong> True or False values. The <strong>second</strong> may contain <strong>True</strong> or <strong>False</strong>, or it could have <strong>numpy.NaN</strong>. I want to merge the 2 arrays such that if the second array doesn't have a True or False value (it's value is numpy.NaN), it will take the value from the same location as the first array, otherwise, it will take on the True or False value. Confusing? Great! Here are some examples:</p>
<pre><code># example 1
a1 = np.array([True, False, True, False, True], dtype=object)
a2 = np.array([np.NaN, np.NaN, np.NaN, np.NaN, np.NaN], dtype=object)
output = a1.combinaficate(a2)
# prints [True, False, True, False, True]
# example 2
a1 = np.array([True, False, True, False, True], dtype=object)
a2 = np.array([np.NaN, True, np.NaN, False, np.NaN], dtype=object)
output = a1.combinaficate(a2)
# prints [True, True, True, False, True]
# example 3
a1 = np.array([True, True, True, True, True], dtype=object)
a2 = np.array([np.NaN, np.NaN, np.NaN, False, np.NaN], dtype=object)
output = a1.combinaficate(a2)
# prints [True, True, True, False, True]
# example 4
a1 = np.array([False, False, False, False, False], dtype=object)
a2 = np.array([np.NaN, np.NaN, True, False, np.NaN], dtype=object)
output = a1.combinaficate(a2)
# prints [False, False, True, False, False]
</code></pre>
<p>I know that I could write a for loop, but the spirit of the question is "Is there a way to use strictly numpy to make this computation?".</p>
<p>Thank you.</p>
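A loop-free way to express this coalescing with plain NumPy is <code>np.where</code> with a NaN mask; since NaN is the only value in these object arrays that is not equal to itself, <code>a2 != a2</code> marks exactly the missing slots. A minimal sketch using example 2's data (the made-up <code>combinaficate</code> is just this <code>np.where</code> call):

```python
import numpy as np

a1 = np.array([True, False, True, False, True], dtype=object)
a2 = np.array([np.nan, True, np.nan, False, np.nan], dtype=object)

# NaN is the only value here that is not equal to itself, so the
# elementwise comparison (a2 != a2) marks exactly the missing slots
merged = np.where(a2 != a2, a1, a2)
```

The same expression works unchanged for the other three examples, since the mask depends only on where <code>a2</code> holds NaN.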
| <python><numpy> | 2023-09-21 19:22:05 | 3 | 2,371 | Shmack |
77,153,068 | 9,462,829 | Conda error http 000 when trying to install libraries on Windows | <p>Since yesterday I have been having problems with <code>conda</code>. To preface, I've tried everything in <a href="https://stackoverflow.com/questions/50125472/issues-with-installing-python-libraries-on-windows-condahttperror-http-000-co/62483686#62483686">this thread</a>. The only thing I'm not sure I've done right is the proxy servers because I've never done something like that.</p>
<p>Now, the background. I work at an organization that has many network blocks, but <code>conda</code> worked without problem. I was able to create environments and install libraries. But, yesterday I needed to create an environment with <code>tensorflow</code> and Windows told me the path was too long. Wasn't able to extend path length because of the blocks, so I tried to uninstall miniconda and reinstall it in C:/ . Since then, problems emerged. Libraries installed from <code>conda-forge</code> were returning a Timeout with the dreaded</p>
<pre><code> CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://conda.anaconda.org/anaconda/win-64
</code></pre>
<p>But, packages from other channels did work, I could do something like this: <code>conda create -n myenv tensorflow gensim</code>. I googled and tried everything, uninstalled, reinstalled in the original path, but the problem just kept growing. Now I'm getting that error every time, I can't install anything using <code>conda install</code>.</p>
<p>After running the debugger I noticed the problem is I can't make a request to a page like <code>https://conda.anaconda.org/conda-forge/win-64/current_repodata.json</code>. Even if I try on Python using requests, on R, using <code>curl</code> or Postman, I'll get a Timeout. BUT, that same request works on the browser without problem.</p>
<p>That's all the background I've got. Does anyone have any idea? Any way to further understand why I'm getting a TimeoutError ?</p>
<p>Thanks a lot</p>
| <python><ssl><anaconda><conda> | 2023-09-21 19:18:07 | 1 | 6,148 | Juan C |
77,152,992 | 867,889 | How to create lazy initialization (proxy) wrapper in python? | <p>Given an expensive to initialize object and 2 consumers:</p>
<pre><code>import time

class A:
    def __init__(self):
        time.sleep(42)
        self.foo = 1

def func1(obj: A):
    pass

def func2(obj: A):
    print(obj.foo)
</code></pre>
<p>How to create a <code>wrapper</code> that would delay initialization of <code>A</code> until any of its fields are accessed?</p>
<pre><code>proxy = wrapper(A)
func1(proxy) # executes immediately
func2(proxy) # causes object initialization and waits 42 seconds.
</code></pre>
<p>In other words, how to delay object initialization until any of its properties are accessed in a conventional way <code>a.foo</code> or at worst <code>a.__dict__['foo']</code>.</p>
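A minimal sketch of such a wrapper using <code>__getattr__</code>, which only fires for attributes not found on the proxy itself, so the internal fields never recurse (<code>LazyProxy</code> is a name introduced here):

```python
class LazyProxy:
    """Delays construction of the wrapped object until an attribute is read."""

    def __init__(self, factory):
        self._factory = factory
        self._instance = None

    def __getattr__(self, name):
        # __getattr__ is only called for attributes NOT found by normal
        # lookup, so reads of _factory / _instance never re-enter here
        if self._instance is None:
            self._instance = self._factory()
        return getattr(self._instance, name)
```

Attributes on the wrapped object named <code>_factory</code> or <code>_instance</code> would be shadowed by the proxy's own fields; a more robust version might store state via <code>object.__setattr__</code> under mangled names, or use something like <code>wrapt.ObjectProxy</code>.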
| <python><proxy><lazy-initialization> | 2023-09-21 19:03:22 | 1 | 10,083 | y.selivonchyk |
77,152,847 | 10,982,755 | How do I call a specific function when a SIGTERM or SIGKILL system signal is received? | <p>I need to execute a particular function in my Flask app before the app shuts down completely (essentially, a graceful shutdown).</p>
<p>I'm running a flask server with uwsgi (1 process, 4 threads) and I have tried the following to capture signals.</p>
<ol>
<li>Use Python signal and register signal in main thread. Unable to propagate them to worker or sub threads.</li>
<li>Used the above solution with <code>"py-call-osafterfork": "True"</code> in the uwsgi config file and still of no use</li>
<li>Tried preStop lifecycle hook in Kube but it does not seem to do anything. Even the echo command is not printing. And I also assume the preStop hook runs as a separate process instead of using the existing process</li>
</ol>
<p>I have searched and there is no proper documentation for graceful shutdown with flask and uwsgi. There's a mention of master fifo with uwsgi but there are no examples of how to use them.</p>
<p>What could be a viable setup to perform graceful shutdown for my application? I'm open to other web frameworks (FastAPI?) or other web servers (maybe gunicorn) as well.</p>
<p>TLDR: Unable to capture SIGTERM and SIGKILL signals using above mentioned methods for my existing setup (flask framework with uwsgi web server). What could be a viable option for me to perform graceful shutdown? Reference articles or example are much appreciated. Thank you!</p>
| <python><flask><webserver><uwsgi> | 2023-09-21 18:39:18 | 0 | 617 | Vaibhav |
77,152,606 | 395,857 | How can I only download the .parquet files of a HuggingFace dataset without generating the .arrow files once the download is completed? | <p>I want to download all the .parquet files of HuggingFace dataset, e.g. <a href="https://huggingface.co/datasets/uonlp/CulturaX" rel="nofollow noreferrer"><code>uonlp/CulturaX</code></a>, without generating the .arrow files once the download is completed.</p>
<p>If I use:</p>
<pre><code>from datasets import load_dataset
ds = load_dataset("uonlp/CulturaX", "ar")
</code></pre>
<p>It will download all the .parquet files of HuggingFace dataset but it will also generate the .arrow files once the download is completed:</p>
<p><a href="https://i.sstatic.net/bgp3u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bgp3u.png" alt="enter image description here" /></a></p>
<ul>
<li><code>downloads</code> contains the .parquet files.</li>
<li><code>uonlp___cultura_x</code> contains the .arrow files.</li>
</ul>
<p>How can I only download the .parquet files of a HuggingFace dataset without generating the .arrow files once the download is completed?</p>
| <python><download><parquet><huggingface><huggingface-datasets> | 2023-09-21 18:00:17 | 1 | 84,585 | Franck Dernoncourt |
77,152,470 | 10,164,750 | Shifting Columns values towards left based on the value in them | <p>I have a business scenario where I have an <code>id</code> column and <code>value</code> columns.
There are multiple <code>value</code> columns like value_1, value_2 etc.</p>
<p>The idea is to shift values of <code>value</code> column to left.</p>
<p>The Input is as below:</p>
<pre><code>Input
+------------+-----------+---------+----------+---------+---------+---------+
| id| value_1| value_2| value_3| value_4| value_5| value_6|
+------------+-----------+---------+----------+---------+---------+---------+
| 1| 1011| 1012| 1011| 2018| null| 1011|
| 2| null| 1012| null| 2018| null| 2022|
| 7| 1011| 1011| 1016| 2018| null| null|
| 8| 1011| 1012| 1014| 2018| null| null|
+------------+-----------+---------+----------+---------+---------+---------+
</code></pre>
<p>The next step is to replace the null values by shifting the columns towards the left: all columns with valid values should be kept to the left, and columns holding <code>null</code> should be shifted right.
The intermediate output looks as below:</p>
<pre><code>Output1
+------------+-----------+---------+----------+---------+---------+---------+
| id| value_1| value_2| value_3| value_4| value_5| value_6|
+------------+-----------+---------+----------+---------+---------+---------+
| 1| 1011| 1012| 1011| 2018| 1011| null|
| 2| 1012| 2018| 2022| null| null| null|
| 7| 1011| 1011| 1016| 2018| null| null|
| 8| 1011| 1012| 1014| 2018| null| null|
+------------+-----------+---------+----------+---------+---------+---------+
</code></pre>
<p>Then, duplicate values within each row must be removed and, if possible, the remaining valid values shifted further left.
The sample output looks as below:</p>
<pre><code>Output2
+------------+-----------+---------+----------+---------+---------+---------+
| id| value_1| value_2| value_3| value_4| value_5| value_6|
+------------+-----------+---------+----------+---------+---------+---------+
| 1| 1011| 1012| 2018| null| null| null|
| 2| 1012| 2018| 2022| null| null| null|
| 7| 1011| 1016| 2018| null| null| null|
| 8| 1011| 1012| 1014| 2018| null| null|
+------------+-----------+---------+----------+---------+---------+---------+
</code></pre>
<p>Any leads on this would be helpful. Thanks.</p>
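The per-row logic here is independent of Spark: drop nulls, drop duplicates keeping the first occurrence, left-align, and pad with nulls. A plain-Python sketch of that step (<code>compact_row</code> is a name introduced here; in PySpark this could then be expressed over an <code>array()</code> of the value columns using array/higher-order functions or a UDF, which is not shown):

```python
def compact_row(values, width):
    """Drop nulls and duplicates (keeping first occurrence),
    left-align the survivors, and pad with None up to `width`."""
    seen, kept = set(), []
    for v in values:
        if v is not None and v not in seen:
            seen.add(v)
            kept.append(v)
    return kept + [None] * (width - len(kept))
```

Applied to each row of the input table, this produces the rows shown in Output2 directly, combining both the shift-left and the deduplication steps.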
| <python><apache-spark><pyspark> | 2023-09-21 17:36:54 | 2 | 331 | SDS |
77,152,372 | 7,321,700 | Incorrect Sum by Groupby Dataframe on multiple columns | <p><strong>Scenario:</strong> After generating a dataframe by concatenating a dict of dataframes, I want to group the result by 2 columns and output the sum.</p>
<p><strong>Dataframe:</strong> testoutput2</p>
<pre><code>+-----+-------+---------+-----------+--------------------+----------+----------+----------+----------+
| Key | Index | Level 4 | IEA WEM22 | Region IEA Level 1 | 2018 IEA | 2019 IEA | 2020 IEA | 2021 IEA |
+-----+-------+---------+-----------+--------------------+----------+----------+----------+----------+
| FRA | - | 12,000 | 12,000 | Advanced economies | 54 | 37 | 26 | 0.00 |
| FRA | 1 | 11,000 | 11,000 | Advanced economies | 8 | 8 | 7 | 11 |
| FRA | 3 | 31,100 | 31,100 | Advanced economies | 4 | 3 | 3 | 0.00 |
| BEL | - | 12,000 | 12,000 | Advanced economies | 8 | 9 | 7 | 0.00 |
| BEL | 1 | 11,000 | 11,000 | Advanced economies | 1 | 1 | 1 | 2 |
| BEL | 3 | 31,100 | 31,100 | Advanced economies | 1 | 1 | 1 | 0.00 |
+-----+-------+---------+-----------+--------------------+----------+----------+----------+----------+
</code></pre>
<p><strong>Wanted output:</strong> A dict of dataframes, where each key is a year. The content of a dataframe inside the dict would be:</p>
<pre><code>key 2018 IEA
+---------+--------------------+-----+
| Level 4 | Region IEA Level 1 | Sum |
+---------+--------------------+-----+
| 12,000 | Advanced economies | 62 |
| 11,000 | Advanced economies | 9 |
| 31,100 | Advanced economies | 5 |
+---------+--------------------+-----+
</code></pre>
<p><strong>Issue:</strong> When running the code below, I get an incorrect result, where the sums are not the expected values:</p>
<pre><code>+---------+--------------------+-----+
| Level 4 | Region IEA Level 1 | Sum |
+---------+--------------------+-----+
| 12,000 | Advanced economies | 61 |
| 11,000 | Advanced economies | 8 |
| 31,100 | Advanced economies | 4 |
+---------+--------------------+-----+
</code></pre>
<p><strong>Obs:</strong> I am currently testing this code with a smaller snippet of data (as per the example here). However, the objective is to run this with a much larger dataset. In this case, there is only one entry in the region column, whereas in the main dataset there are more.</p>
<p><strong>Code:</strong></p>
<pre><code>scen_name = "IEA"
scen_reg_out_dict={}
year_list_2 = [2018,2019,2020,2021]
for year_var in year_list_2:
    scen_reg_out_dict[str(year_var) + " " + scen_name] = testoutput2.groupby(['Level 4','Region IEA Level 1'])[str(year_var) + " " + scen_name].agg(['sum']).astype('int64')
</code></pre>
<p><strong>Question:</strong> What could be causing the calculation to fail?</p>
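For what it's worth, typing the sample rows in directly and running the same groupby reproduces the expected 62 / 9 / 5, which suggests the discrepancy comes from the real concatenated data rather than from the aggregation itself; one thing worth checking is the dtype of the year columns (<code>df['2018 IEA'].dtype</code>), since object or mixed-type columns can aggregate differently than numeric ones. A minimal sketch with the 2018 column:

```python
import pandas as pd

rows = [
    ("FRA", "12,000", "Advanced economies", 54),
    ("FRA", "11,000", "Advanced economies", 8),
    ("FRA", "31,100", "Advanced economies", 4),
    ("BEL", "12,000", "Advanced economies", 8),
    ("BEL", "11,000", "Advanced economies", 1),
    ("BEL", "31,100", "Advanced economies", 1),
]
df = pd.DataFrame(rows, columns=["Key", "Level 4", "Region IEA Level 1", "2018 IEA"])

# with clean integer data, the grouped sums come out as expected (62 / 9 / 5)
sums = df.groupby(["Level 4", "Region IEA Level 1"])["2018 IEA"].sum()
```

If the real frame's year columns are not numeric after the concat, coercing them first (e.g. <code>pd.to_numeric</code>) before grouping would be one way to rule that cause out.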
| <python><pandas><dataframe> | 2023-09-21 17:18:48 | 2 | 1,711 | DGMS89 |
77,152,258 | 2,772,805 | How to find centroids or geolocate ISO 3166 codes? | <p>Is there a way to geolocate ISO 3166-1 alpha-3 codes (see <a href="https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes</a>) and get the centroid of the item?</p>
<p>I would like to place a marker (a disk proportional to a quantity) for different countries indicated by the ISO 3166.</p>
<p>I work with python and have tested geopy.geocoders but without success.</p>
<p>Any help welcomed</p>
| <python><geospatial><geopy><iso-3166> | 2023-09-21 16:59:23 | 1 | 429 | PBrockmann |
77,152,235 | 1,767,106 | Python build error with Stable Diffusion repository: `ERROR: Could not find a version that satisfies the requirement triton==2.0.0` | <p>I'm trying to run the Python build for the Stable Diffusion SDXL project: <a href="https://github.com/Stability-AI/generative-models" rel="noreferrer">https://github.com/Stability-AI/generative-models</a></p>
<p>I'm mostly following the simple instructions provided on the github page. The project page says it supports Python 3.8 and Python 3.10. I tried Python 3.10, got an error that some dependency required <code>Requires-Python >=3.7,<3.10</code>, so I tried Python 3.8, and I get a different error, which I show below:</p>
<pre class="lang-bash prettyprint-override"><code>git clone git@github.com:Stability-AI/generative-models.git
cd generative-models
rm -rf .pt2
/opt/homebrew/opt/python@3.8/bin/python3.8 -m venv .pt2
source .pt2/bin/activate
pip3 install -r requirements/pt2.txt
</code></pre>
<p>I'm snipping out some of the logs and including the error and a few lines before the error. I'm not sure what is wrong and what can I do to get this working?</p>
<pre><code><snip>
Collecting torchmetrics>=1.0.1
Using cached torchmetrics-1.1.2-py3-none-any.whl (764 kB)
Collecting torchvision>=0.15.2
Using cached torchvision-0.15.2-cp38-cp38-macosx_11_0_arm64.whl (1.4 MB)
Collecting tqdm>=4.65.0
Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Collecting transformers==4.19.1
Using cached transformers-4.19.1-py3-none-any.whl (4.2 MB)
ERROR: Ignored the following versions that require a different python version: 0.55.2 Requires-Python <3.5; 1.11.0 Requires-Python <3.13,>=3.9; 1.11.0rc1 Requires-Python <3.13,>=3.9; 1.11.0rc2 Requires-Python <3.13,>=3.9; 1.11.1 Requires-Python <3.13,>=3.9; 1.11.2 Requires-Python <3.13,>=3.9; 1.25.0 Requires-Python >=3.9; 1.25.0rc1 Requires-Python >=3.9; 1.25.1 Requires-Python >=3.9; 1.25.2 Requires-Python >=3.9; 1.26.0 Requires-Python <3.13,>=3.9; 1.26.0b1 Requires-Python <3.13,>=3.9; 1.26.0rc1 Requires-Python <3.13,>=3.9; 2.1.0 Requires-Python >=3.9; 2.1.0rc0 Requires-Python >=3.9; 2.1.1 Requires-Python >=3.9; 3.8.0 Requires-Python >=3.9; 3.8.0rc1 Requires-Python >=3.9
ERROR: Could not find a version that satisfies the requirement triton==2.0.0 (from versions: none)
ERROR: No matching distribution found for triton==2.0.0
</code></pre>
| <python><python-3.x> | 2023-09-21 16:55:35 | 2 | 20,816 | clay |
77,152,217 | 580,937 | How to properly collect a VARIANT column in Snowpark | <p>I have a Snowpark dataframe that retrieves a single VARIANT column from a table.
<code>regions = session.table("REGIONS").select("features")</code>
How do I take the value of that features column and assign it to a dictionary variable? <code>to_dict()</code> adds <code>“FEATURES”</code> as a key, and the json as the value. Not what I want.
I’ve tried: <code>regions_dict = json.loads(regions.select(F.col("features")))</code> That gets the error <code>TypeError: the JSON object must be str, bytes or bytearray, not DataFrame</code></p>
| <python><dataframe><snowflake-cloud-data-platform> | 2023-09-21 16:53:14 | 1 | 2,758 | orellabac |
77,152,143 | 580,937 | Is it possible for a Snowpark stored procedure to produce a DataFrame as its output? | <p>I want to create a Snowflake stored procedure using Snowpark. Inside the procedure I will create some dataframes and perform some operations like filter, sort, join, etc. At the end I just want to return that "dataframe"; does Snowpark support that? Can someone provide an example of how to achieve this?</p>
| <python><dataframe><stored-procedures><snowflake-cloud-data-platform> | 2023-09-21 16:41:01 | 1 | 2,758 | orellabac |
77,152,116 | 580,937 | How are anaconda packages "deployed" when using snowpark in my UDF or proc | <p>I'd like to gain a deeper understanding of the implications of accepting Anaconda's terms while working with Snowpark. Specifically, I'm curious about what happens to Anaconda packages in the context of Snowflake. Does accepting the Anaconda terms lead to the physical installation of these packages within my Snowflake account? Additionally, I'm interested in whether this installation process occurs before accepting the terms or after.</p>
| <python><anaconda><snowflake-cloud-data-platform><libraries> | 2023-09-21 16:36:41 | 1 | 2,758 | orellabac |
77,152,114 | 20,712 | For a function decorator, how can a single optional kwarg item be type-constrained, but otherwise kept generic? | <p>I have a decorator that looks for an optional <code>kwargs</code> item, <code>foo</code>, with all other <code>*args</code> and <code>**kwargs</code> passed through to the decorated function.</p>
<p>I want the typing hints to specify which type that specific item must be, if it is present.</p>
<hr />
<p><strong>For the following snippet, how can I create the typing partial constraint on <code>FnParams</code> such that <em>IFF</em> <code>foo</code> item is present, then its type MUST be <code>str</code> ?</strong></p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, ParamSpec, TypeVar, TypedDict, Required, NotRequired, Unpack, cast, Any
FnRet = TypeVar("FnRet")
# how can this be partially constrained to _required_ type `str`,
# if `foo` item is present?
FnParams = ParamSpec("FnParams")
FnType = Callable[FnParams, FnRet]
def decorator(fn: FnType) -> FnType:
    def wrapper(*args: FnParams.args, **kwargs: FnParams.kwargs) -> FnRet:  # type: ignore[type-var]
        if "foo" in kwargs:
            assert type(kwargs["foo"]) == str
        return cast(FnRet, fn(*args, **kwargs))
    return wrapper

## Should pass:
@decorator
def ok_1() -> int:
    return 42

class RequiredFooKwArgs(TypedDict):
    foo: Required[str]

@decorator
def ok_required_foo(*_args: Any, **kwargs: Unpack[RequiredFooKwArgs]) -> int:
    return 42

class NotRequiredFooKwArgs(TypedDict):
    foo: NotRequired[str]

@decorator
def ok_not_required_foo(*_args: Any, **kwargs: Unpack[NotRequiredFooKwArgs]) -> int:
    return 42

## The Badies:
class BadTypeRequiredFooKwArgs(TypedDict):
    foo: Required[int]  # <--- this should be flagged as the wrong type

@decorator
def bad_type_required_foo(*_args: Any, **kwargs: Unpack[BadTypeRequiredFooKwArgs]) -> int:
    return 42

class BadTypeNotRequiredFooKwArgs(TypedDict):
    foo: NotRequired[int]  # <--- this should be flagged as the wrong type

@decorator
def bad_type_not_required_foo(*_args: Any, **kwargs: Unpack[BadTypeNotRequiredFooKwArgs]) -> int:
    return 42
</code></pre>
<hr />
<p>When I type check this code with <code>mypy</code> 1.5.1, I get:</p>
<pre class="lang-bash prettyprint-override"><code>mypy --enable-incomplete-feature=Unpack ~/tmp.py
Success: no issues found in 1 source file
</code></pre>
<p>What I'd like is to augment <code>FnParams</code> with a <code>foo</code> field type constraint and then have some python type-checker fail the static analysis.</p>
| <python><python-typing> | 2023-09-21 16:36:24 | 1 | 24,426 | Ross Rogers |
77,152,103 | 6,716,760 | 3D plot with orthogonal style with diagonal axis | <p>I would like to produce a 3D plot in the style we are used to drawing in school. Here is an example:</p>
<p><a href="https://i.sstatic.net/r7zIa.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r7zIa.jpg" alt="This is the projection I want ("like in school")" /></a></p>
<p>Summary:</p>
<ul>
<li>3D Plot</li>
<li>the y-z plane is parallel to the screen (horizontal y, vertical z)</li>
<li>the x axis is diagonal</li>
</ul>
<p>The Y-Z axis is parallel to the screen (horizontal y, vertical z). Usually the X axis would now point toward the screen. But I would like to change that so that it is directed diagonally downwards (like one would draw it on a sheet of paper sometimes). Unfortunately, I am not aware of how this projection is named (oblique image), but I am pretty sure it is orthogonal and that I need some kind of additional projection.</p>
<p>I have tried it with a custom projection for the axis like this:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d', proj_type='ortho')
# Some sample data
x, y, z = np.random.rand(3, 100)
ax.scatter(x, y, z)
# Set labels and view
ax.view_init(elev=0, azim=0)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
# Manual transformation matrix
c = np.cos(np.deg2rad(30))
s = np.sin(np.deg2rad(30))
transform = np.array([
[1, 0, 0, 0],
[0, c, -s, 0],
[0, s, c, 0],
[0, 0, 0, 1]
])
# Apply the transformation
ax.get_proj = lambda: np.dot(Axes3D.get_proj(ax), transform)
plt.show()
</code></pre>
<p>This is my result (not very impressive so far :)
<a href="https://i.sstatic.net/Jngk6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jngk6.png" alt="enter image description here" /></a></p>
<p>How can I fix this to look like the cube?</p>
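For reference, the textbook-style cube drawing is an oblique (cavalier/cabinet) projection: the y-z plane is kept front-parallel and the x coordinate is sheared diagonally onto the page. A minimal NumPy sketch of that point mapping (the 45° angle and the 0.5 foreshortening factor are illustrative choices, not taken from the question):

```python
import numpy as np

alpha = np.deg2rad(45)  # direction in which the x axis recedes on the page
f = 0.5                 # cabinet projection: depth is drawn at half length

def oblique(point):
    """Map (x, y, z) to 2D screen coords: y stays horizontal, z stays
    vertical, and x is sheared diagonally down-left."""
    x, y, z = point
    return np.array([y - f * np.cos(alpha) * x,
                     z - f * np.sin(alpha) * x])
```

In matplotlib, a shear of this shape could be folded into the 4x4 matrix composed in <code>get_proj</code> (similar to the attempt above), applied together with the orthographic projection rather than as a rotation about x.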
| <python><matplotlib><projection><diagonal><matplotlib-3d> | 2023-09-21 16:35:06 | 1 | 345 | Torben |
77,152,080 | 580,937 | Operations on groups in Snowpark | <p>I'm applying a function on a grouped dataframe in Pandas.
Has anyone had to convert this into the Snowpark API? If so, what functions did you use?</p>
| <python><pandas><dataframe><snowflake-cloud-data-platform> | 2023-09-21 16:31:53 | 1 | 2,758 | orellabac |
77,152,064 | 580,937 | Are there cost differences when running a query using snowpark in jupyter notebook? | <p>I would like to know if running queries using snowpark from a Jupyter Notebook has a particular toll on cost.</p>
| <python><jupyter-notebook><snowflake-cloud-data-platform> | 2023-09-21 16:30:09 | 1 | 2,758 | orellabac |
77,151,844 | 5,892,689 | ValueError: Invalid frequency when resampling | <p>I want to resample a time series index with pandas resample. In my case I divide the total period by the number of intervals to get the step. Sometimes I then get the error:</p>
<pre><code>ValueError: Invalid frequency: 65.117651L
</code></pre>
<p>Below I show the index and the error.
You can reproduce the error:</p>
<pre><code>pd.DataFrame(index=index).resample('65.117651L')
...
ValueError: Invalid frequency: 65.117651L
</code></pre>
<p>If I modify the frequency a bit, the ValueError disappears.</p>
<p>Any idea?</p>
<p>The index is:</p>
<pre><code>index = DatetimeIndex(['2023-09-14 10:41:11.816999936',
'2023-09-14 10:41:12.002000128',
'2023-09-14 10:41:12.101999872',
'2023-09-14 10:41:12.112999936',
'2023-09-14 10:41:12.210000128',
'2023-09-14 10:41:12.308999936',
'2023-09-14 10:41:12.311000064',
'2023-09-14 10:41:12.408999936',
'2023-09-14 10:41:12.424000',
'2023-09-14 10:41:12.523000064',
'2023-09-14 10:41:12.828000',
'2023-09-14 10:41:12.829999872',
'2023-09-14 10:41:12.924000'],
dtype='datetime64[ns]', name='TS', freq=None)
</code></pre>
| <python><pandas><datetimeindex><resample> | 2023-09-21 15:56:31 | 1 | 688 | Guido |
77,151,808 | 20,122,390 | How can I group tasks with asyncio in Python to execute them combined serially and concurrently? | <p>I have the following function:</p>
<pre><code>async def pre_processing_list_sic_codes(
    self,
    start_date: datetime,
    final_date: datetime,
    sic_codes: List[str]
) -> Any:
    batch_length = 80
    batches = [sic_codes[i : i + batch_length] for i in range(0, len(sic_codes), batch_length)]
    task = []
    for batch in batches:
        task.append(
            self.processing(
                sic_codes=batch,
                start_date=start_date,
                final_date=final_date,
            )
        )
    # here
</code></pre>
<p>Then, the task list contains all the tasks that need to be executed. What I normally do is simply:</p>
<pre><code>await gather(*task)
</code></pre>
<p>However, these tasks involve operations with a database and since there are so many threads, I reach the connection limit. My idea is to group these tasks (for example 3 or 4) so that the tasks are executed serially within the group and at the same time the groups are executed asynchronously. How can I implement it?</p>
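One way to express this is to split the coroutines into groups, await each group serially inside a small helper, and <code>gather</code> the groups concurrently; a sketch (<code>run_in_groups</code> is a name introduced here):

```python
import asyncio

async def run_in_groups(coros, group_size):
    """Run `coros` so that each group of `group_size` runs serially,
    while the groups themselves run concurrently."""
    async def run_serially(group):
        return [await coro for coro in group]

    groups = [coros[i:i + group_size] for i in range(0, len(coros), group_size)]
    nested = await asyncio.gather(*(run_serially(g) for g in groups))
    # flatten the per-group result lists back into one list, in order
    return [r for group_result in nested for r in group_result]
```

An alternative with the same effect on connection pressure is a single <code>asyncio.Semaphore(n)</code> acquired at the top of each task, which caps concurrency at <code>n</code> without fixing which tasks share a "lane".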
| <python><asynchronous><async-await><python-asyncio> | 2023-09-21 15:51:49 | 1 | 988 | Diego L |
77,151,685 | 7,188,690 | transpose the columns and make dictionaries as values in the other column | <p>I have this Spark data frame. I am trying to convert the columns to rows, using the <code>gw</code> column values as the keys of the dictionaries in the other column.</p>
<pre><code>+----------------+--------+--------+
| gw             | rrc    | re_est |
+----------------+--------+--------+
| 210.142.27.137 | 1400.0 | 26.0   |
| 210.142.27.202 | 2300   | 12     |
+----------------+--------+--------+
</code></pre>
<p>expected output like this:</p>
<pre><code>+--------+------------------------------------------------+
| index  | gw_mapping                                     |
+--------+------------------------------------------------+
| rrc    | {210.142.27.137: 1400.0, 210.142.27.202: 2300} |
| re_est | {210.142.27.137: 26.0, 210.142.27.202: 12}     |
+--------+------------------------------------------------+
</code></pre>
<p>What I have done:</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import array, collect_list, concat_ws, lit, map_from_arrays, expr
# Create a SparkSession
spark = SparkSession.builder.appName("DataFramePivot").getOrCreate()
# Your initial DataFrame
data = [("210.142.27.137", 1400.0, 26.0),
("210.142.27.202", 2300, 12)]
columns = ["gw", "rrc", "re_est"]
df = spark.createDataFrame(data, columns)
# Pivot the DataFrame and format the output
pivot_df = df.groupBy().agg(
map_from_arrays(collect_list(lit("gw")), collect_list("rrc")).alias("rrc"),
map_from_arrays(collect_list(lit("gw")), collect_list("re_est")).alias("re_est")
)
# Create an array with 'rrc' and 're_est' keys
keys_array = lit(["rrc", "re_est"])
# Combine the 'rrc' and 're_est' maps into a single map
combined_map = map_from_arrays(keys_array, array(pivot_df['rrc'], pivot_df['re_est']))
# Explode the combined map into separate rows
result_df = combined_map.selectExpr("explode(map) as (index, gw_mapping)")
# Show the result with the desired formatting
result_df.show(truncate=False)
</code></pre>
<p>I am unable to get the output somehow.</p>
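Setting Spark aside for a moment, the reshaping itself amounts to building one <code>{gw: value}</code> dict per non-key column. A plain-Python sketch of that logic (in PySpark itself, one possible route is unpivoting, e.g. with <code>stack</code>, into <code>(index, gw, value)</code> rows and then aggregating each index group back into a map, but that is not shown here):

```python
rows = [("210.142.27.137", 1400.0, 26.0),
        ("210.142.27.202", 2300, 12)]
cols = ["gw", "rrc", "re_est"]

# one {gw: value} dict per value column; the column name becomes the index
gw_mapping = {
    col: {row[0]: row[i] for row in rows}
    for i, col in enumerate(cols)
    if col != "gw"
}
```

The keys of <code>gw_mapping</code> ("rrc", "re_est") correspond to the <code>index</code> column of the expected output, and each value is the dict wanted in <code>gw_mapping</code>.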
| <python><pyspark> | 2023-09-21 15:34:15 | 1 | 494 | sam |
77,151,492 | 1,874,170 | Can this algorithm be reversed in better than O(N^2) time? | <p>Say I've got the following algorithm to <a href="https://en.wikipedia.org/wiki/Bijective_function" rel="nofollow noreferrer">biject</a> <code>bytes -> Natural</code>:</p>
<pre class="lang-py prettyprint-override"><code>def rank(s: bytes) -> int:
    k = 2**8
    result = 0
    offset = 0
    for i, w in enumerate(s):
        result *= k
        result += w
        offset += (k**i)
    return result + offset
</code></pre>
<p>The decoder (at least to the best of my limited abilities) is as follows:</p>
<pre class="lang-py prettyprint-override"><code>def unrank(value: int) -> bytes:
    k = 2**8
    # 1. Get length
    import itertools
    offset = 0
    for length in itertools.count():  #! LOOP RUNS O(N) TIMES !#
        offset += (k**length)  #! LONG ADDITION IS O(N) !#
        if offset > value:
            value = value - (offset - k**length)
            break
    # 2. Get value
    result = bytearray(length)
    for i in reversed(range(length)):
        value, result[i] = divmod(value, k)  # (Can be done with bit shifts, ignore for complexity)
    return bytes(result)
</code></pre>
<p>Letting <code>N ≈ len(bytes) ≈ log(int)</code>, this decoder clearly has a worst-case runtime of <code>O(N^2)</code>. Granted, it performs well (<2s runtime) for practical cases (≤32KiB of data), but I'm still curious if it's fundamentally possible to beat that into something that swells less as the inputs get bigger.</p>
<hr />
<pre class="lang-py prettyprint-override"><code># Example / test cases:
assert rank(b"") == 0
assert rank(b"\x00") == 1
assert rank(b"\x01") == 2
...
assert rank(b"\xFF") == 256
assert rank(b"\x00\x00") == 257
assert rank(b"\x00\x01") == 258
...
assert rank(b"\xFF\xFF") == 65792
assert rank(b"\x00\x00\x00") == 65793
assert unrank(0) == b""
assert unrank(1) == b"\x00"
assert unrank(2) == b"\x01"
# ...
assert unrank(256) == b"\xFF"
assert unrank(257) == b"\x00\x00"
assert unrank(258) == b"\x00\x01"
# ...
assert unrank(65792) == b"\xFF\xFF"
assert unrank(65793) == b"\x00\x00\x00"
assert unrank(2**48+1) == b"\xFE\xFE\xFE\xFE\xFF\x00"
</code></pre>
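Since the offset accumulated for a string of length n is just the geometric sum 1 + 256 + ... + 256^(n-1) = (256^n - 1) / 255, the length can be found in closed form from the bit length of the value instead of by looping. A hedged sketch (<code>unrank_fast</code> and <code>_offset</code> are names introduced here); note that <code>256**n</code> is computed as a shift and the divisions are by the single-word constant 255, so the dominant operations are linear-time big-int operations rather than the original loop of growing additions:

```python
def _offset(n: int) -> int:
    # sum of 256**i for i < n; (1 << (8 * n)) is 256**n computed as a shift,
    # and dividing a big int by the small constant 255 is linear time
    return ((1 << (8 * n)) - 1) // 255

def unrank_fast(value: int) -> bytes:
    if value == 0:
        return b""
    n = (value * 255 + 1).bit_length() // 8   # closed-form length guess
    while n > 1 and _offset(n) > value:       # corrects an overshoot of at most 1
        n -= 1
    while _offset(n + 1) <= value:            # corrects an undershoot of at most 1
        n += 1
    return (value - _offset(n)).to_bytes(n, "big")
```

The two adjustment loops each run O(1) times, since the bit-length estimate can only be off by one; <code>int.to_bytes</code> replaces the per-byte <code>divmod</code> loop entirely.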
| <python><algorithm><time-complexity><ranking><lexicographic-ordering> | 2023-09-21 15:06:06 | 3 | 1,117 | JamesTheAwesomeDude |
77,151,389 | 2,504,762 | pandas dataframe how to add diff column which calculates difference between two consecutive rows | <p>I have a dataframe with two rows, and I want to add one more row which shows the difference between two rows.</p>
<pre class="lang-py prettyprint-override"><code> data = [
(10, 20, 30, 40, 50, 60, 70),
(10, 30, 30, 40, 50, 60, 100)
]
df = pd.DataFrame(data, columns=["a", "b", "c", "d", "d", "f", "g"])
</code></pre>
<p>The following works, but it adds an extra row with <code>nan</code>:</p>
<pre class="lang-py prettyprint-override"><code>pd.concat([df, df.diff()])
</code></pre>
<pre><code> a b c d d f g
0 10.0 20.0 30.0 40.0 50.0 60.0 70.0
1 10.0 30.0 30.0 40.0 50.0 60.0 100.0
0 NaN NaN NaN NaN NaN NaN NaN
1 0.0 10.0 0.0 0.0 0.0 0.0 30.0
</code></pre>
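One way to get exactly the three expected rows is to drop the all-<code>NaN</code> row that <code>diff()</code> always produces for the first row before concatenating:

```python
import pandas as pd

data = [(10, 20, 30, 40, 50, 60, 70),
        (10, 30, 30, 40, 50, 60, 100)]
df = pd.DataFrame(data, columns=["a", "b", "c", "d", "d", "f", "g"])

# diff()'s first row is always all-NaN; dropping it leaves only real deltas
out = pd.concat([df, df.diff().dropna()], ignore_index=True)
```

<code>ignore_index=True</code> also avoids the duplicated index labels (0, 1, 0, 1) visible in the output above.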
| <python><pandas> | 2023-09-21 14:53:05 | 1 | 13,075 | Gaurang Shah |
77,151,033 | 10,194,070 | Python cryptography can't be installed on RHEL 8.6 when Python version is 3.9 | <p>We have tried to install the <code>cryptography</code> module on our RHEL 8.6 Linux machine, but without success.</p>
<p>Here are short details about our server:</p>
<pre><code>pip3 --version
pip 23.2.1 from /usr/local/lib/python3.9/site-packages/pip (python 3.9)
python3 --version
Python 3.9.16
pip3 list | grep setuptools
setuptools 68.2.2
more /etc/redhat-release
Red Hat Enterprise Linux release 8.6 (Ootpa)
uname -a
4.18.0-372.9.1.el8.x86_64 #1 SMP Fri Apr 15 22:12:19 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<p>And examples from the installation process:</p>
<pre><code>pip3 install --no-cache-dir --no-index "/tmp/1/cryptography-41.0.4-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl"
ERROR: cryptography-41.0.4-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl is not a supported wheel on this platform.
</code></pre>
<p>and with diff version</p>
<pre><code>pip3 install --no-cache-dir --no-index "/tmp/cryptography-37.0.2-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl"
ERROR: cryptography-37.0.2-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl is not a supported wheel on this platform.
</code></pre>
<p>or different version</p>
<pre><code> pip3 install --no-cache-dir --no-index "/tmp/cryptography-37.0.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl"
ERROR: cryptography-37.0.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl is not a supported wheel on this platform.
</code></pre>
<p>or</p>
<pre><code>pip3 install --no-cache-dir --no-index "/tmp/cryptography-37.0.3-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl"
ERROR: cryptography-37.0.3-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl is not a supported wheel on this platform.
</code></pre>
<p>or</p>
<pre><code>pip3 install --no-cache-dir --no-index "/tmp/cryptography-41.0.4.tar.gz"
Processing /tmp/cryptography-41.0.4.tar.gz
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [2 lines of output]
ERROR: Could not find a version that satisfies the requirement setuptools>=61.0.0 (from versions: none)
ERROR: No matching distribution found for setuptools>=61.0.0
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>I also downgraded setuptools to version 38.4.0 but got the same error, so we reinstalled setuptools at the original version (68.2.2).</p>
<p>So we do not understand whether <code>cryptography</code> can be installed on RHEL 8.6 with Python 3.9.</p>
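<p>For what it's worth, the <code>pp39</code> tag in those wheel filenames denotes PyPy 3.9, not CPython 3.9, which is why a CPython interpreter rejects every one of them. A quick stdlib sketch that parses the tag out of a wheel filename (filename parsing only, no installation):</p>

```python
import sys

def wheel_tags(wheel_filename):
    # wheel filenames end in {python}-{abi}-{platform}.whl
    stem = wheel_filename[: -len(".whl")]
    python_tag, abi_tag, platform_tag = stem.split("-")[-3:]
    return python_tag, abi_tag, platform_tag

py_tag, abi, plat = wheel_tags(
    "cryptography-41.0.4-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl")
# 'pp39' = PyPy 3.9; CPython 3.9 needs a 'cp39' tag, hence "not a supported wheel"
needs_pypy = py_tag.startswith("pp")
runs_on_pypy = sys.implementation.name == "pypy"
```

<p>The sdist failure is a separate issue: pip's isolated build environment tries to fetch <code>setuptools>=61</code>, which <code>--no-index</code> blocks; passing <code>--no-build-isolation</code> may sidestep that, assuming the build requirements are already installed locally.</p>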
| <python><python-3.x><pip><rhel> | 2023-09-21 14:07:40 | 1 | 1,927 | Judy |
77,150,937 | 12,013,353 | Incorrect Fourier coefficients signs resulting from scipy.fft.fft | <p>I analysed a triangle wave using the Fourier series by hand, and then using the <code>scipy.fft</code> package. What I'm getting are the same absolute values of the coefficients, but with opposite signs.<br />
<a href="https://i.sstatic.net/cgQN2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cgQN2.png" alt="Triangular wave" /></a></p>
<p><strong>By hand:</strong><br />
I took the interval [-1,1], calculated the integral to get <strong>a0 = 1</strong>, then <strong>a1</strong>, <strong>a3</strong> and <strong>a5</strong> using the result of integration:<br />
<a href="https://i.sstatic.net/ByRdW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ByRdW.png" alt="integral1" /></a></p>
<p>And all the <strong>bn</strong> are 0.<br />
Then I simply constructed the series, where the 1st term is <strong>a0/2</strong>, the second <strong>a1×cos(n Pi t/T)</strong> and so on, and plotted these waves, which summed give a good approximation of the original signal.<br />
The coefficients are:</p>
<pre><code>a0 = 0.5
a1 = -0.4053
a3 = -0.0450
a5 = -0.0162
</code></pre>
<p><a href="https://i.sstatic.net/vCF5d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vCF5d.png" alt="individual waves" /></a> <a href="https://i.sstatic.net/7WPmm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7WPmm.png" alt="summed waves" /></a></p>
<p><strong>Scipy.fft:</strong><br />
I defined the sampling frequency fs=50, and created the space for the function with <code>space=np.linspace(-1,1,2*fs)</code>.<br />
Then defined the <code>fft1 = fft.fft(triang(space))</code>, where "triang" is the function which generates the signal. I immediately scaled the results by dividing by the number of samples (100) and multiplying each term except the 0th by 2. Next were the frequencies <code>freq1 = fft.fftfreq(space.shape[-1], d=1/fs)</code>.
The resulting coefficients (the real parts pertaining to <strong>an</strong>) are:</p>
<pre><code>a0 = 0.5051
a1 = 0.4091
a3 = 0.0452
a5 = 0.0161
</code></pre>
<p>As you can see, the absolute values are correct, but the signs are not. What am I missing?<br />
<a href="https://i.sstatic.net/PwIAn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PwIAn.png" alt="individual waves-scipy" /></a> <a href="https://i.sstatic.net/LRYIM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRYIM.png" alt="summed waves-scipy" /></a></p>
<p>And one bonus question if I may - when doing it this way (using scipy) and plotting each wave separately, should I add the phase angle from <code>np.angle(fft1[n])</code> into each term, like <strong>a1×cos(n Pi t/T + theta) + b1×sin(n Pi t/T + theta)</strong>? I'd say yes, but in this example all the <strong>bn</strong> are 0, and so are the phase angles, so I couldn't really test it.</p>
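<p>If the sign discrepancy comes from where the FFT "starts" the period (index 0 of the sampled array corresponds to t = -1, not t = 0), then a half-period shift should flip the odd harmonics, since shifting by N/2 multiplies harmonic n by (-1)^n. A small numpy check, assuming the triangle has its valley at t = 0 as in the plot:</p>

```python
import numpy as np

fs = 50
N = 2 * fs
t = np.arange(N) / fs - 1.0          # samples on [-1, 1); index 0 is t = -1
tri = np.abs(t)                      # assumed triangle: valley at t = 0

coeffs = np.fft.fft(tri) / N
coeffs[1:] *= 2
a1_raw = coeffs[1].real              # positive, matching the scipy.fft result

# re-reference so that index 0 corresponds to t = 0: a half-period shift,
# which flips the sign of every odd harmonic
tri0 = np.roll(tri, -fs)
coeffs0 = np.fft.fft(tri0) / N
coeffs0[1:] *= 2
a1_shifted = coeffs0[1].real         # negative, matching the hand calculation
```

<p>So the magnitudes agree and only the time reference differs between the two computations.</p>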
| <python><scipy><signal-processing><fft> | 2023-09-21 13:55:05 | 1 | 364 | Sjotroll |
77,150,911 | 9,318,372 | Equivalent to `pip install -r requirements-dev.txt` for `pyproject.toml`? | <p>Is there an equivalent <code>pip</code> command to install development dependencies (without installing the main project) if they are listed in (subgroups of) the <code>[project.optional-dependencies]</code> table of <code>pyproject.toml</code> instead of a separate <code>requirements-dev.txt</code> file?</p>
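<p>For context, such a group typically lives under <code>[project.optional-dependencies]</code> (hypothetical package names):</p>

```toml
[project.optional-dependencies]
dev = ["pytest", "ruff"]
```

<p><code>pip install '.[dev]'</code> pulls in that group, but it also builds and installs the project itself; to my knowledge plain pip has no switch to install only the extras without the project, which is why many projects keep a separate <code>requirements-dev.txt</code>.</p>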
| <python><pip><requirements.txt> | 2023-09-21 13:50:41 | 0 | 1,721 | Hyperplane |
77,150,855 | 9,069,780 | Python crashes with LD_PRELOAD and ThreadSanitizer library | <p>I have a scenario where a Python script loads a shared object on an Ubuntu 20 x64 system. The shared object is instrumented with ThreadSanitizer. However, once the library loads it raises a "cannot allocate memory in static TLS block" error. The usual method to deal with this is to preload libtsan.so.0 with LD_PRELOAD, but prepending this to the python call yields a segmentation fault.</p>
<p>Are there special precautions for using TSan-instrumented shared objects with Python?</p>
| <python><sanitizer><thread-sanitizer> | 2023-09-21 13:44:24 | 0 | 1,073 | Desperado17 |
77,150,840 | 3,103,767 | pandas: Use apply to get columns with two highest values | <p>I have a pandas dataframe, and for each row I would like to get the indices of the columns containing the two highest values.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({"A": [10,20,30], "B": [20, 30, 10], "C": [3,4,5], "D": [4,5,6], "E": [7,6,7]})
df.apply(lambda x: np.argsort(x)[::-1][0:2], raw=True, axis=1)
# eventually i would like to assign them to two columns, but the above already fails...
df['first'], df['second'] = zip(*df.apply(lambda x: np.argsort(x)[::-1][0:2], raw=True, axis=1))
</code></pre>
<p>I get the error <code>ValueError: Shape of passed values is (3, 2), indices imply (3, 5)</code>; how do I prevent this?</p>
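<p>One way that sidesteps the shape problem entirely, assuming the goal is the column labels of the two largest values per row: argsort the underlying array once instead of applying per row.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [10, 20, 30], "B": [20, 30, 10], "C": [3, 4, 5],
                   "D": [4, 5, 6], "E": [7, 6, 7]})

cols = df.columns.to_numpy()
# argsort each row ascending, reverse for descending, keep the top two positions
order = np.argsort(df.to_numpy(), axis=1)[:, ::-1][:, :2]
df["first"] = cols[order[:, 0]]
df["second"] = cols[order[:, 1]]
```

<p>This is vectorised, so it also avoids the per-row Python overhead of <code>apply</code>.</p>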
| <python><pandas><dataframe><apply> | 2023-09-21 13:42:38 | 3 | 983 | Diederick C. Niehorster |
77,150,818 | 4,021,436 | Converting (or getting) Matplotlib to find Helvetica (or equivalent) non-tff font files | <p>I am working on RHEL 8.6. I am using <code>Python-3.6.8</code> and <code>matplotlib-3.0.3</code>. I am trying to get Matplotlib to use Helvetica (or an Open Source Clone). I followed Red Hat's <a href="https://access.redhat.com/solutions/6957193" rel="nofollow noreferrer">instructions</a> on installing it :</p>
<pre><code>$ dnf install xorg-x11-fonts-*
$ cat /etc/fonts/conf.d/01-xorg-x11-fonts.conf # Created this file...
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<dir>/usr/share/X11/fonts/75dpi</dir>
<dir>/usr/share/X11/fonts/100dpi</dir>
</fontconfig>
$ fc-cache -fv
$ fc-list | grep -i helvetica
/usr/share/X11/fonts/75dpi/helvR24-ISO8859-1.pcf.gz: Helvetica:style=Regular
/usr/share/X11/fonts/75dpi/helvBO08.pcf.gz: Helvetica:style=Bold Oblique
.
.
.
/usr/share/X11/fonts/100dpi/helvB24-ISO8859-1.pcf.gz: Helvetica:style=Bold
/usr/share/X11/fonts/75dpi/helvBO18.pcf.gz: Helvetica:style=Bold Oblique
$ rm -rf ~/.cache/matplotlib/ # As suggested by several posts
</code></pre>
<p>In Python, I checked to see what fonts are available and so I did :</p>
<pre><code>$ python
Python 3.6.8 (default, May 31 2023, 10:28:59)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import matplotlib.font_manager
>>> matplotlib.font_manager.findSystemFonts(fontpaths=None, fontext='ttf')
['/usr/share/fonts/msttcorefonts/ariali.ttf', '/usr/share/fonts/msttcorefonts/courbi.ttf', '/usr/share/fonts/msttcorefonts/trebucbd.ttf', '/usr/share/fonts/msttcorefonts/ariblk.ttf', '/usr/share/fonts/google-droid/DroidSansDevanagari-Regular.ttf', '/usr/share/fonts/dejavu/DejaVuSans-Bold.ttf', '/usr/share/fonts/urw-base35/NimbusMonoPS-Italic.otf', '/usr/share/fonts/urw-base35/NimbusRoman-Italic.otf', '/usr/share/fonts/google-droid/DroidSansGeorgian.ttf', '/usr/share/fonts/google-droid/DroidSansHebrew-Bold.ttf', '/usr/share/fonts/msttcorefonts/georgiai.ttf', '/usr/share/fonts/msttcorefonts/trebucbi.ttf', '/usr/share/fonts/dejavu/DejaVuSansCondensed-Bold.ttf', '/usr/share/fonts/liberation-sans/LiberationSans-Regular.ttf', '/usr/share/fonts/msttcorefonts/arial.ttf', '/usr/share/fonts/google-droid/DroidSansJapanese.ttf', '/usr/share/fonts/msttcorefonts/impact.ttf', '/usr/share/fonts/google-droid/DroidSansEthiopic-Regular.ttf', '/usr/share/fonts/google-droid/DroidSans.ttf', '/usr/share/fonts/msttcorefonts/webdings.ttf', '/usr/share/fonts/urw-base35/NimbusMonoPS-Bold.otf', '/usr/share/fonts/urw-base35/URWGothic-Demi.otf', '/usr/share/fonts/dejavu/DejaVuSansMono-Bold.ttf', '/usr/share/fonts/msttcorefonts/andalemo.ttf', '/usr/share/fonts/msttcorefonts/cour.ttf', '/usr/share/fonts/msttcorefonts/verdanab.ttf', '/usr/share/fonts/msttcorefonts/verdana.ttf', '/usr/share/fonts/urw-base35/NimbusRoman-Bold.otf', '/usr/share/fonts/google-droid/DroidSansArabic.ttf', '/usr/share/fonts/urw-base35/P052-Roman.otf', '/usr/share/fonts/urw-base35/URWBookman-Light.otf', '/usr/share/fonts/urw-base35/URWGothic-DemiOblique.otf', '/usr/share/fonts/urw-base35/C059-Bold.otf', '/usr/share/fonts/msttcorefonts/times.ttf', '/usr/share/fonts/urw-base35/NimbusSansNarrow-Regular.otf', '/usr/share/fonts/abattis-cantarell/Cantarell-BoldOblique.otf', '/usr/share/fonts/urw-base35/NimbusRoman-Regular.otf', '/usr/share/fonts/urw-base35/C059-BdIta.otf', '/usr/share/fonts/urw-base35/P052-Italic.otf', 
'/usr/share/fonts/urw-base35/URWBookman-Demi.otf', '/usr/share/fonts/urw-base35/Z003-MediumItalic.otf', '/usr/share/fonts/urw-base35/NimbusSans-Bold.otf', '/usr/share/fonts/urw-base35/URWGothic-BookOblique.otf', '/usr/share/fonts/msttcorefonts/arialbi.ttf', '/usr/share/fonts/urw-base35/NimbusSansNarrow-Bold.otf', '/usr/share/fonts/google-droid/DroidSansArmenian.ttf', '/usr/share/fonts/msttcorefonts/comicbd.ttf', '/usr/share/fonts/urw-base35/C059-Roman.otf', '/usr/share/fonts/google-droid/DroidSansThai.ttf', '/usr/share/fonts/dejavu/DejaVuSansMono-BoldOblique.ttf', '/usr/share/fonts/msttcorefonts/courbd.ttf', '/usr/share/fonts/urw-base35/NimbusSans-Italic.otf', '/usr/share/fonts/msttcorefonts/timesbd.ttf', '/usr/share/fonts/msttcorefonts/verdanaz.ttf', '/usr/share/fonts/abattis-cantarell/Cantarell-Bold.otf', '/usr/share/fonts/msttcorefonts/couri.ttf', '/usr/share/fonts/google-droid/DroidSans-Bold.ttf', '/usr/share/fonts/dejavu/DejaVuSans-BoldOblique.ttf', '/usr/share/fonts/urw-base35/URWBookman-DemiItalic.otf', '/usr/share/fonts/msttcorefonts/verdanai.ttf', '/usr/share/fonts/dejavu/DejaVuSans-Oblique.ttf', '/usr/share/fonts/urw-base35/NimbusSansNarrow-Oblique.otf', '/usr/share/fonts/dejavu/DejaVuSans.ttf', '/usr/share/fonts/google-droid/DroidSansFallback.ttf', '/usr/share/fonts/urw-base35/URWGothic-Book.otf', '/usr/share/fonts/urw-base35/NimbusMonoPS-Regular.otf', '/usr/share/fonts/msttcorefonts/trebuc.ttf', '/usr/share/fonts/dejavu/DejaVuSansMono.ttf', '/usr/share/fonts/urw-base35/P052-BoldItalic.otf', '/usr/share/fonts/urw-base35/P052-Bold.otf', '/usr/share/fonts/dejavu/DejaVuSansCondensed-Oblique.ttf', '/usr/share/fonts/msttcorefonts/timesbi.ttf', '/usr/share/fonts/google-droid/DroidSansTamil-Bold.ttf', '/usr/share/fonts/google-droid/DroidSansTamil-Regular.ttf', '/usr/share/fonts/urw-base35/NimbusSans-BoldItalic.otf', '/usr/share/fonts/urw-base35/C059-Italic.otf', '/usr/share/X11/fonts/TTF/GohaTibebZemen.ttf', 
'/usr/share/fonts/google-droid/DroidSansHebrew-Regular.ttf', '/usr/share/fonts/urw-base35/NimbusSansNarrow-BoldOblique.otf', '/usr/share/fonts/msttcorefonts/georgia.ttf', '/usr/share/fonts/google-droid/DroidSansEthiopic-Bold.ttf', '/usr/share/fonts/dejavu/DejaVuSansCondensed.ttf', '/usr/share/fonts/urw-base35/NimbusSans-Regular.otf', '/usr/share/fonts/dejavu/DejaVuSans-ExtraLight.ttf', '/usr/share/fonts/liberation-sans/LiberationSans-Bold.ttf', '/usr/share/fonts/msttcorefonts/comic.ttf', '/usr/share/fonts/abattis-cantarell/Cantarell-Oblique.otf', '/usr/share/fonts/urw-base35/NimbusRoman-BoldItalic.otf', '/usr/share/fonts/urw-base35/D050000L.otf', '/usr/share/fonts/msttcorefonts/arialbd.ttf', '/usr/share/fonts/dejavu/DejaVuSansMono-Oblique.ttf', '/usr/share/fonts/msttcorefonts/trebucit.ttf', '/usr/share/fonts/urw-base35/URWBookman-LightItalic.otf', '/usr/share/fonts/dejavu/DejaVuSansCondensed-BoldOblique.ttf', '/usr/share/fonts/liberation-sans/LiberationSans-BoldItalic.ttf', '/usr/share/fonts/msttcorefonts/tahoma.ttf', '/usr/share/fonts/msttcorefonts/timesi.ttf', '/usr/share/fonts/abattis-cantarell/Cantarell-Regular.otf', '/usr/share/fonts/liberation-sans/LiberationSans-Italic.ttf', '/usr/share/fonts/urw-base35/NimbusMonoPS-BoldItalic.otf', '/usr/share/fonts/msttcorefonts/georgiab.ttf', '/usr/share/fonts/msttcorefonts/georgiaz.ttf']
</code></pre>
<p>Clearly, the fonts in <code>/usr/share/X11/fonts/*/*.pcf.gz</code> are not being found by matplotlib.
I also have Tex Gyre installed (where <a href="https://www.gust.org.pl/projects/e-foundry/tex-gyre/heros/index_html" rel="nofollow noreferrer">Tex Gyre Heros</a> is also a Helvetica replacement)</p>
<pre><code>$ dnf repoquery -l texlive-tex-gyre
Not root, Subscription Management repositories not updated
Last metadata expiration check: 0:23:21 ago on Thu 21 Sep 2023 08:58:10 AM EDT.
/usr/share/licenses/texlive-tex-gyre
.
.
.
$ find /usr/share/texlive/texmf-dist/doc/fonts/tex-gyre/ -name "*.ttf" -print
$ find /usr/share/texlive/texmf-dist/doc/fonts/tex-gyre/ -name "*.otf" -print
</code></pre>
<p>According to the <a href="https://matplotlib.org/stable/api/font_manager_api.html" rel="nofollow noreferrer">documentation</a>, matplotlib's font manager can only use <code>*.ttf</code> and <code>*.afm</code>.</p>
<p>Question:</p>
<ol>
<li>Is there a way to convert or otherwise get matplotlib to find (and use) the non-ttf fonts installed in <code>/usr/share/texlive/texmf-dist/doc/fonts/tex-gyre/</code> or <code>/usr/share/X11/fonts/*/*.pcf.gz</code>?</li>
</ol>
| <python><matplotlib><fonts> | 2023-09-21 13:39:47 | 0 | 5,207 | irritable_phd_syndrome |
77,150,721 | 4,628,597 | Filter xarray ZARR dataset with GeoDataFrame | <p>I am reading a ZARR file from an s3 bucket with xarray. I managed to filter by time and latitude/longitude:</p>
<pre class="lang-py prettyprint-override"><code> def read_zarr(self, dataset: str, region: Region) -> Any:
# Read ZARR from s3 bucket
fs = s3fs.S3FileSystem(key="KEY", secret="SECRET")
mapper = fs.get_mapper(f"{self.S3_PATH}{dataset}")
zarr_ds = xr.open_zarr(mapper, decode_times=True)
# Filter by time
time_period = pd.date_range("2013-01-01", "2023-01-31")
zarr_ds = zarr_ds.sel(time=time_period)
# Filter by latitude/longitude
region_gdf = region.geo_data_frame
latitude_slice = slice(region_gdf.bounds.miny[0], region_gdf.bounds.maxy[0])
longitude_slice = slice(region_gdf.bounds.minx[0], region_gdf.bounds.maxx[0])
return zarr_ds.sel(latitude=latitude_slice, longitude=longitude_slice)
</code></pre>
<p>The problem is that this returns a rectangle of data (actually a cuboid, if we consider the time dimension). For geographical regions that are long and thin, this will represent a huge waste, as I will first download years of data, to then discard most of it. Example with California:</p>
<p><a href="https://i.sstatic.net/mTV9K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mTV9K.png" alt="enter image description here" /></a></p>
<p><strong>I would like to intersect the ZARR coordinates with the region ones. How can I achieve it?</strong></p>
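<p>In xarray terms this usually means building a boolean mask over the latitude/longitude grid from the region polygon and applying <code>ds.where(mask, drop=True)</code> (or <code>rioxarray</code>'s clip, if that dependency is acceptable). The point-in-polygon core of that mask can be sketched without the geo stack; region and grid here are hypothetical:</p>

```python
def point_in_polygon(x, y, polygon):
    """Ray casting: count edge crossings of a rightward ray from (x, y)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):          # edge spans the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# a long, thin region whose bounding box would mostly be wasted download
region = [(0.0, 0.0), (10.0, 0.0), (10.0, 1.0), (0.0, 1.0)]
mask = [point_in_polygon(px, py, region) for px, py in [(5, 0.5), (5, 3.0)]]
```

<p>In practice you would evaluate this (or a library equivalent such as <code>shapely.contains</code>) over the dataset's coordinate grid and select only the chunks the mask touches, so the thin diagonal shape no longer forces downloading the whole bounding cuboid.</p>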
| <python><geojson><geopandas><python-xarray><zarr> | 2023-09-21 13:26:43 | 1 | 2,509 | 0xSwego |
77,150,588 | 11,064,604 | Python subprocess communicating with Powershell and Pipeline Operators | <p>I am looking to find the starttime of all chrome instances on my windows computer. In powershell, I can do this via <code>get-process chrome | Format-Table StartTime</code>.</p>
<p>I want to do this in a python script and use the output of this. My code is below:</p>
<pre><code>import subprocess
call = "powershell get-process chrome | powershell Format-Table ProcessName, StartTime"
process = subprocess.Popen(call, stdout=subprocess.PIPE, stderr=None, shell=True)
outputs = process.communicate()
print(outputs)
</code></pre>
<p>The output for this command is <code>['']</code>, even with chrome open.</p>
<h3>Observations</h3>
<p>If I change <code>call</code> to</p>
<pre><code>call = "powershell get-process chrome"
</code></pre>
<p>this outputs the table, as expected. I think the error has to do with the pipeline operator.</p>
| <python><powershell><subprocess> | 2023-09-21 13:12:07 | 2 | 353 | Ottpocket |
77,150,574 | 9,392,446 | Iterate through nested lists in pandas column and create dictionary based on values in nested lists | <p>I have a dataframe like:</p>
<pre><code>id some_binary_col some_amount_col nested_lists
123 0 100 ['email_rule','phone_rule','score_rule']
456 1 500 ['address_rule','zip_rule']
121 1 300 ['zip_rule','phone_rule']
122 0 100 ['score_rule','phone_rule','new_rule']
133 1 200 ['email_rule','address_rule','zip_rule']
</code></pre>
<p>reproducible:</p>
<pre><code>ids = [123,456,121,122,133]
some_binary_col = [0,1,1,0,1]
some_amount_col = [100,500,300,100,200]
nested_lists = [
['email_rule','phone_rule','score_rule']
,['address_rule','zip_rule']
,['zip_rule','phone_rule']
,['score_rule','phone_rule','new_rule']
,['email_rule','address_rule','zip_rule']
]
df = pd.DataFrame()
df['id'] = ids
df['some_binary_col'] = some_binary_col
df['some_amount_col'] = some_amount_col
df['nested_lists'] = nested_lists
</code></pre>
<p>I'm trying to build a new dictionary that, for each rule in the <code>nested_lists</code> column, sums <code>some_binary_col</code> (i.e. counts the rows where it equals 1), like:</p>
<pre><code>rule_binary_col_dict = {
'email_rule': 1
,'phone_rule': 1
,'score_rule': 0
,'address_rule': 2
,'zip_rule': 3
    ,'new_rule': 0
}
</code></pre>
<p>The <code>nested_lists</code> column can contain an arbitrary number of unique lists/elements.</p>
<p>What I'm not very good at is looping through each element in the nested lists, e.g.:</p>
<pre><code>for i in df['nested_lists']:
for j in i:
**some condition**
</code></pre>
<p>I don't know how to get to each element in the nested lists.</p>
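<p>A plain double loop over <code>zip(df['nested_lists'], df['some_binary_col'])</code> already reaches every element; it is sketched here on the raw lists so it runs standalone. (In pandas, <code>df.explode('nested_lists').groupby('nested_lists')['some_binary_col'].sum()</code> should give the same counts.)</p>

```python
from collections import defaultdict

some_binary_col = [0, 1, 1, 0, 1]
nested_lists = [
    ['email_rule', 'phone_rule', 'score_rule'],
    ['address_rule', 'zip_rule'],
    ['zip_rule', 'phone_rule'],
    ['score_rule', 'phone_rule', 'new_rule'],
    ['email_rule', 'address_rule', 'zip_rule'],
]

rule_binary_col_dict = defaultdict(int)
for rules, flag in zip(nested_lists, some_binary_col):  # one row at a time
    for rule in rules:                                  # one element at a time
        rule_binary_col_dict[rule] += flag              # adds 1 only when flag == 1
```

<p><code>defaultdict(int)</code> means every previously unseen rule starts at 0, so rules that only appear with flag 0 (like <code>score_rule</code>) still end up in the result.</p>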
| <python><pandas> | 2023-09-21 13:10:34 | 2 | 693 | max |
77,150,535 | 1,609,428 | What are the "word types" in gensim? | <p>I am training a large <code>word2vec</code> model with <code>gensim</code> and using logging to follow the training process. The log shows that</p>
<pre><code>PROGRESS: at sentence #3060000, processed 267654284 words, keeping 940042 word types
</code></pre>
<p>What are these word types? The unique words among the 200M+ tokens in the data? I cannot find anything in the documentation.</p>
| <python><gensim> | 2023-09-21 13:06:36 | 1 | 19,485 | ℕʘʘḆḽḘ |
77,150,223 | 9,542,989 | Connect to Couchbase Capella with Python: UnAmbiguousTimeoutException | <p>I am trying to connect to Couchbase Capella using the Python SDK. I have tried doing this exactly as outlined here,
<a href="https://docs.couchbase.com/python-sdk/current/hello-world/start-using-sdk.html" rel="nofollow noreferrer">https://docs.couchbase.com/python-sdk/current/hello-world/start-using-sdk.html</a></p>
<p>This is what my code looks like,</p>
<pre><code>from datetime import timedelta
# needed for any cluster connection
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
# needed for options -- cluster, timeout, SQL++ (N1QL) query, etc.
from couchbase.options import (ClusterOptions, ClusterTimeoutOptions,
QueryOptions)
# Update this to your cluster
endpoint = "<cluster>.cloud.couchbase.com"
username = "<my-user>"
password = "<my-password>"
bucket_name = "travel-sample"
# User Input ends here.
# Connect options - authentication
auth = PasswordAuthenticator(username, password)
# get a reference to our cluster
options = ClusterOptions(auth)
# Sets a pre-configured profile called "wan_development" to help avoid latency issues
# when accessing Capella from a different Wide Area Network
# or Availability Zone(e.g. your laptop).
options.apply_profile('wan_development')
cluster = Cluster('couchbases://{}'.format(endpoint), options)
</code></pre>
<p>However, I keep getting the following error,</p>
<pre><code>couchbase.exceptions.UnAmbiguousTimeoutException: UnAmbiguousTimeoutException(<ec=14, category=couchbase.common, message=unambiguous_timeout (14), C Source=/home/ec2-user/workspace/python/sdk/python-packaging-pipeline/py-client/src/connection.cxx:199>)
</code></pre>
<p>This does not help, because the exception is raised at the call to <code>connect()</code>,
<br>
<a href="https://stackoverflow.com/questions/74934711/couchbase-python-access-to-sample-project-throws-error-unambiguoustimeoutexcep">Couchbase / Python Access to sample project throws error UnAmbiguousTimeoutException error code 14</a></p>
<p>I saw a few other similar posts, but they have not helped.</p>
<p>Note that I have allowed my IP address as well.</p>
<p>What is going on here?</p>
<p>EDIT: Here are the DEBUG logs for my script,</p>
<pre><code>[2023-09-21 23:08:13.362] [34212,34212] [debug] 1632ms, Selected nameserver: "127.0.0.53" from "/etc/resolv.conf"
[2023-09-21 23:08:13.416] [34212,34212] [debug] 54ms, open cluster, id: "db5876-95a9-4e45-b130-b8be0b1a2cc516", core version: "1.0.0-dp.8", {"bootstrap_nodes":[{"hostname":"cb.qx8kjhcigkz1-bsp.cloud.couchbase.com","port":"11207"}],"options":{"analytics_timeout":"120000ms","bootstrap_timeout":"120000ms","config_idle_redial_timeout":"300000ms","config_poll_floor":"50ms","config_poll_interval":"2500ms","connect_timeout":"20000ms","disable_mozilla_ca_certificates":false,"dns_config":{"nameserver":"127.0.0.53","port":53,"timeout":"20000ms"},"dump_configuration":false,"enable_clustermap_notification":false,"enable_compression":true,"enable_dns_srv":true,"enable_metrics":true,"enable_mutation_tokens":true,"enable_tcp_keep_alive":true,"enable_tls":true,"enable_tracing":true,"enable_unordered_execution":true,"idle_http_connection_timeout":"4500ms","key_value_durable_timeout":"20000ms","key_value_timeout":"20000ms","management_timeout":"120000ms","max_http_connections":0,"metrics_options":{"emit_interval":"600000ms"},"network":"auto","query_timeout":"120000ms","resolve_timeout":"20000ms","search_timeout":"120000ms","show_queries":false,"tcp_keep_alive_interval":"60000ms","tls_verify":"peer","tracing_options":{"analytics_threshold":"1000ms","key_value_threshold":"500ms","management_threshold":"1000ms","orphaned_emit_interval":"10000ms","orphaned_sample_size":64,"query_threshold":"1000ms","search_threshold":"1000ms","threshold_emit_interval":"10000ms","threshold_sample_size":64,"view_threshold":"1000ms"},"transactions_options":{"cleanup_config":{"cleanup_client_attempts":false,"cleanup_lost_attempts":false,"cleanup_window":"0ms","collections":[]},"durability_level":"none","expiration_time":"0ns","query_config":{"scan_consistency":"not_bounded"}},"trust_certificate":"","use_ip_protocol":"any","user_agent_extra":"pycbc/4.1.8 (python/3.8.18)","view_timeout":"120000ms"}}
[2023-09-21 23:08:13.425] [34212,34214] [debug] 8ms, Query DNS-SRV: address="cb.qx8kjhcigkz1-bsp.cloud.couchbase.com", service="_couchbases", nameserver="127.0.0.53:53"
[2023-09-21 23:08:13.897] [34212,34214] [debug] 472ms, DNS UDP returned 1 records
[2023-09-21 23:08:13.897] [34212,34214] [info] 0ms, replace list of bootstrap nodes with addresses from DNS SRV of "cb.qx8kjhcigkz1-bsp.cloud.couchbase.com": ["svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207"]
[2023-09-21 23:08:13.898] [34212,34214] [debug] 0ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516]: use default CA for TLS verify
[2023-09-21 23:08:13.898] [34212,34214] [debug] 0ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516]: loading 141 CA certificates from Mozilla bundle. Update date: "Tue Aug 22 03:12:04 2023 GMT", SHA256: "23c2469e2a568362a62eecf1b49ed90a15621e6fa30e29947ded3436422de9b9"
[2023-09-21 23:08:13.952] [34212,34214] [debug] 53ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> attempt to establish MCBP connection
[2023-09-21 23:08:14.054] [34212,34214] [debug] 101ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> connecting to 18.224.61.170:11207, timeout=20000ms
[2023-09-21 23:08:34.057] [34212,34214] [debug] 20003ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> reached the end of list of bootstrap nodes, waiting for 500ms before restart
[2023-09-21 23:08:34.557] [34212,34214] [debug] 500ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> attempt to establish MCBP connection
[2023-09-21 23:08:34.605] [34212,34214] [debug] 47ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> connecting to 18.224.61.170:11207, timeout=20000ms
[2023-09-21 23:08:54.606] [34212,34214] [debug] 20000ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> reached the end of list of bootstrap nodes, waiting for 500ms before restart
[2023-09-21 23:08:55.106] [34212,34214] [debug] 500ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> attempt to establish MCBP connection
[2023-09-21 23:08:55.146] [34212,34214] [debug] 40ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> connecting to 18.224.61.170:11207, timeout=20000ms
[2023-09-21 23:09:15.147] [34212,34214] [debug] 20001ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> reached the end of list of bootstrap nodes, waiting for 500ms before restart
[2023-09-21 23:09:15.648] [34212,34214] [debug] 500ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> attempt to establish MCBP connection
[2023-09-21 23:09:15.768] [34212,34214] [debug] 120ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> connecting to 18.224.61.170:11207, timeout=20000ms
[2023-09-21 23:09:35.769] [34212,34214] [debug] 20000ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> reached the end of list of bootstrap nodes, waiting for 500ms before restart
[2023-09-21 23:09:36.269] [34212,34214] [debug] 500ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> attempt to establish MCBP connection
[2023-09-21 23:09:36.313] [34212,34214] [debug] 44ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> connecting to 18.224.61.170:11207, timeout=20000ms
[2023-09-21 23:09:56.317] [34212,34214] [debug] 20003ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> reached the end of list of bootstrap nodes, waiting for 500ms before restart
[2023-09-21 23:09:56.817] [34212,34214] [debug] 500ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> attempt to establish MCBP connection
[2023-09-21 23:09:56.850] [34212,34214] [debug] 32ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> connecting to 18.224.61.170:11207, timeout=20000ms
[2023-09-21 23:10:13.952] [34212,34214] [debug] 17102ms, all nodes failed to bootstrap, triggering DNS-SRV refresh, ec=unambiguous_timeout (14), last endpoint="svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207"
[2023-09-21 23:10:13.952] [34212,34214] [warning] 0ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> unable to bootstrap in time
[2023-09-21 23:10:13.953] [34212,34214] [debug] 0ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> stop MCBP connection, reason=do_not_retry
[2023-09-21 23:10:13.953] [34212,34214] [debug] 0ms, Query DNS-SRV: address="cb.qx8kjhcigkz1-bsp.cloud.couchbase.com", service="_couchbases", nameserver="127.0.0.53:53"
[2023-09-21 23:10:13.953] [34212,34214] [debug] 0ms, PYCBC: create conn callback completed
Traceback (most recent call last):
File "connect_couchbase.py", line 30, in <module>
[2023-09-21 23:10:13.956] [34212,34214] [debug] 2ms, [db5876-95a9-4e45-b130-b8be0b1a2cc516/4b242f-59b5-cd40-0ae3-e094269c2ca35a/tls/-] <svc-dqis-node-001.qx8kjhcigkz1-bsp.cloud.couchbase.com:11207> destroy MCBP connection
cluster = Cluster('couchbases://{}'.format(endpoint), options)
File "/home/minura/anaconda3/envs/couchbase/lib/python3.8/site-packages/couchbase/cluster.py", line 99, in __init__
[2023-09-21 23:10:14.031] [34212,34214] [debug] 75ms, DNS UDP returned 1 records
self._connect()
File "/home/minura/anaconda3/envs/couchbase/lib/python3.8/site-packages/couchbase/logic/wrappers.py", line 98, in wrapped_fn
raise e
File "/home/minura/anaconda3/envs/couchbase/lib/python3.8/site-packages/couchbase/logic/wrappers.py", line 82, in wrapped_fn
ret = fn(self, *args, **kwargs)
File "/home/minura/anaconda3/envs/couchbase/lib/python3.8/site-packages/couchbase/cluster.py", line 105, in _connect
raise ErrorMapper.build_exception(ret)
couchbase.exceptions.UnAmbiguousTimeoutException: UnAmbiguousTimeoutException(<ec=14, category=couchbase.common, message=unambiguous_timeout (14), C Source=/home/ec2-user/workspace/python/sdk/python-packaging-pipeline/py-client/src/connection.cxx:199>)
[2023-09-21 23:10:14.061] [34212,34212] [debug] 29ms, PYCBC: exception_base_dealloc completed
[2023-09-21 23:10:14.061] [34212,34212] [debug] 0ms, dealloc transaction_config
[2023-09-21 23:10:14.076] [34212,34212] [debug] 15ms, dealloc transaction_query_options
</code></pre>
| <python><couchbase> | 2023-09-21 12:25:45 | 0 | 2,115 | Minura Punchihewa |
77,150,045 | 22,113,674 | Pyspark - How to assign column names to default key 'key' and its values to 'value' | <p>Consider below sample data -</p>
<p><a href="https://i.sstatic.net/VhEMp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VhEMp.png" alt="enter image description here" /></a></p>
<p>Doing below code -</p>
<pre><code>from pyspark.sql.functions import *
df_result1 = df_data.groupBy(col("Name").alias("Name"), col("Country").alias("Country"), col("Property").alias("Property")).agg(
(collect_list(struct(col("No"), col("Place"))).alias("details")),
)
df_result2 = df_result1.groupBy(col("Name").alias("Name"), col("Country").alias("Country")).agg(collect_list(struct(col("Property"), col("details"))).alias("Main"))
final = df_result2.toJSON().collect()
</code></pre>
<p>Actual Output -</p>
<pre><code>{
"Name": "David",
"Country": "Dubai",
"Main": [
{
"Property": "House",
"details": [
{
"No": "1",
"Place": "JLT"
}
]
}
]
}
</code></pre>
<p>Desired Output -</p>
<pre><code>{
"Name": "David",
"Country": "Dubai",
"Main": [
{
"Property": "House",
"details": [
{
"key": "No",
"value": "1"
},
{
"key": "Place",
"value": "JLT"
}
]
}
]
}
</code></pre>
<p>Kindly suggest how to achieve this.</p>
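<p>For clarity, the per-row reshaping I am after is, in plain Python terms, the following (the helper name here is just for illustration):</p>

```python
# Plain-Python illustration of the wanted reshaping: each details dict
# like {"No": "1", "Place": "JLT"} should become a list of
# {"key": ..., "value": ...} dicts.
def to_key_value_list(detail):
    return [{"key": k, "value": v} for k, v in detail.items()]

print(to_key_value_list({"No": "1", "Place": "JLT"}))
```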
| <python><json><pyspark> | 2023-09-21 11:59:32 | 1 | 339 | mpr |
77,150,017 | 7,486,038 | Dash Visualization to highlight a Table Row Based on Hover data | <p>I am trying to build an interactive dash plotly visual , containing a bunch of images and a table. The following code generates some synthetic data, with X having 536 normalized heatmaps of dimension 96x96 and y containing integers 0 to 9. Thus 0 has 200 samples , 1 has 10 samples and so on. For each element in y , there's a 96x96 heatmap with values element/10.</p>
<pre><code>from dash import Dash, dcc, html, Input, Output,callback
from dash import dash_table
import plotly.express as px
import plotly.graph_objects as go
import pandas as pd
import numpy as np
import math
samples = [200,10,5,40,60,100,2,100,9,10]
idx = [n for n in range(10)]
for i,smp in zip(idx,samples):
if i == 0:
X = np.full((smp,96, 96), i/10)
y = np.full((smp,),i)
else:
tempx = np.full((smp,96, 96), i/10)
tempy = np.full((smp,),i)
X = np.r_[X,tempx]
        y = np.r_[y,tempy]
</code></pre>
<p>Now when I build the visualization, I get the expected behavior. When I select a number from the dropdown, the corresponding images show up along with a data-frame showing the chosen digit and the sample index. I want to add a behavior such that when I hover over an image, the corresponding row in the data-frame gets highlighted.</p>
<pre><code>app = Dash(__name__)
tbl_cols = ['Choice','Samples']
app.layout = html.Div([
dcc.Dropdown([i for i in range(10)],
0,
id='my_dropdown'),
html.Div([
html.Div([
dcc.Graph(id='my_picbox',style={'display':'inline-block'})
],style={'width': '40%', 'display': 'inline-block'}),
html.Div([
dash_table.DataTable(
id = 'table',
columns = [{'name': i, 'id': i} for i in tbl_cols]),
],style={'width': '60%', 'display': 'inline-block'})
],style={'display': 'flex'})
])
@callback(
[Output(component_id='my_picbox', component_property='figure'),
Output(component_id='table', component_property='data')],
Input(component_id='my_dropdown', component_property='value')
)
def update_plot(digit):
if digit is not None:
samples = np.where(y==digit)[0]
if len(samples)>20:
samples = np.random.choice(samples,20)
imgs = np.empty((20,96,96),)
imgs[:]= np.nan
imgs[0:len(samples)]=X[samples, :, :]
fig = px.imshow(imgs[:, :, :],
binary_string=False,
zmin=0,
zmax=1,
facet_col=0,
aspect = 'auto',
facet_col_wrap=5,
facet_row_spacing = 0,
color_continuous_scale='rdylgn')
for i in fig.layout.annotations:
n = int(i['text'].split('=')[1])
try:
i['text']=str(samples[n])
except:
i['text']=' '
fig.update_layout(margin = dict(t=70, l=50, r=0, b=5),
#coloraxis_showscale = False,
height = 600,
width = 600,)
df = pd.DataFrame(data = {'Samples':samples})
df['Choice'] = digit
data = df[tbl_cols].to_dict('records')
return fig,data
if __name__ == '__main__':
app.run_server(debug=True, port=8056)
</code></pre>
<p><a href="https://i.sstatic.net/e3mRI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e3mRI.png" alt="enter image description here" /></a></p>
| <python><plotly><plotly-dash> | 2023-09-21 11:55:48 | 1 | 326 | Arindam |
77,149,812 | 3,387,716 | Intersection and difference of multiple sets | <p>I have multiple lists of "associations" as input:</p>
<pre class="lang-py prettyprint-override"><code>input1 = [
('id1', 'id2', 'id3', 'id4'),
('id5',),
('id6', 'id7', 'id8')
]
input2 = [
('id1', 'id2', 'id4'),
('id3',),
('id5',),
('id7', 'id6', 'id8')
]
input3 = [
('id1', 'id2'),
('id3', 'id4'),
('id5',),
('id8', 'id7', 'id6')
]
</code></pre>
<ul>
<li><p>Each input contains all the ids.</p>
</li>
<li><p>Each id appears a single time in an input.</p>
</li>
<li><p>The number of inputs can vary from 1 to 4.</p>
</li>
</ul>
<hr />
<p>I would like to process those inputs and generate something like this:</p>
<pre class="lang-py prettyprint-override"><code>assocs = {
# never associated
0: [
('id5',),
],
# associated a single time
1: [
('id1', 'id3'),
('id2', 'id3'),
],
# associated twice
2: [ #
('id1', 'id4'),
('id2', 'id4'),
('id3', 'id4'),
],
# associated everytime
3: [
('id1', 'id2'),
('id6', 'id7', 'id8'),
]
}
</code></pre>
<p>I can somehow count the number of times that each association is encountered with:</p>
<pre class="lang-py prettyprint-override"><code>import itertools
from collections import defaultdict

temp = defaultdict(int)
for ids in map(sorted, itertools.chain(input1, input2, input3)):
for i in range(1, len(ids)+1):
for comb in itertools.combinations(ids, i):
temp[comb] += 1
</code></pre>
<p>But now I'm stuck with the following step, which would be cleaning the <code>temp</code> dict; not to mention that I'm not sure that I chose the right strategy in the first place.</p>
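<p>To make the goal concrete, here is a stripped-down, runnable version of the counting part on its own (pairs only; merging the pairs back into maximal groups per count is exactly where I am stuck):</p>

```python
from collections import defaultdict
from itertools import combinations

# Count, for every pair of ids, in how many inputs they appear together.
def count_pair_associations(*inputs):
    counts = defaultdict(int)
    for groups in inputs:
        for group in groups:
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return dict(counts)

demo1 = [('id1', 'id2'), ('id3',)]
demo2 = [('id1', 'id2', 'id3')]
print(count_pair_associations(demo1, demo2))
```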
| <python> | 2023-09-21 11:28:30 | 1 | 17,608 | Fravadona |
77,149,797 | 6,703,783 | How to create a new column in dataframe whose value is derived from other columns of the dataframe | <p>I have a dataframe which has columns <code>a</code>,<code>b</code>. I want to create, in the same dataframe, another column whose value (for each row) should be <code>a*b</code>. How do I do that?</p>
<p>I tried a few examples, but none of them work:</p>
<pre><code>short_df['Revenue'] = short_df.(lambda row: (row['UnitPrice']*row['Quantity']))
display(short_df.limit(10))
</code></pre>
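<p>In plain Python terms (outside Spark), what I want is just the elementwise product of the two columns:</p>

```python
# The new column should hold, for each row, the product of the two
# existing columns -- elementwise, like this list-based illustration.
unit_price = [2.0, 3.5, 1.0]
quantity = [3, 2, 5]
revenue = [p * q for p, q in zip(unit_price, quantity)]
print(revenue)  # [6.0, 7.0, 5.0]
```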
| <python><dataframe><pyspark><apache-spark-sql> | 2023-09-21 11:25:37 | 1 | 16,891 | Manu Chadha |
77,149,732 | 1,652,219 | Can a primary_key be avoided in SQLAlchemy when creating a table? | <p>I have a database with a bunch of huge tables, and I would like to</p>
<ol>
<li>Use SQLAlchemy to interact with these tables</li>
<li>Create similar tables (with a few rows) in a local sqlite database for testing</li>
<li>Test SQLAlchemy queries against sqlite tables for unittests</li>
</ol>
<p>So I found out that one can get metadata from a remote table in the following way:</p>
<pre><code># Imports
from sqlalchemy import create_engine, Table, MetaData
# Creating engine for interacting with db
engine = create_engine("mssql+pyodbc://my_server/my_database?trusted_connection=yes&driver=my_driver")
# Fetching table metadata
metadata = MetaData()
my_table = Table("my_table",
metadata,
schema="my_database.my_schema",
autoload_with=engine)
# Listing columns, their types and settings
list(my_table.columns)
</code></pre>
<p>In practice I would like the definition of this table to live in Python, such that my unit tests are not dependent on connecting to a remote database. So I tried to construct it in SQLAlchemy.</p>
<pre><code># Imports
from sqlalchemy import Column
from sqlalchemy.orm import declarative_base
from sqlalchemy.dialects.mssql import DATE, VARCHAR, TINYINT
# Declaring base
Base = declarative_base()
# Constructing table
class MyTable(Base):
__tablename__ = 'my_table'
    col1 = Column('col1', DATE(), nullable=False)
    col2 = Column('col2', VARCHAR(length=15), nullable=False)
col3= Column('col3', TINYINT(), nullable=False)
</code></pre>
<p>When I construct the table with the same types as those identified above I get the following error:</p>
<pre><code>ArgumentError: Mapper mapped class Targets->my_table could not assemble any primary key columns for mapped table 'my_table'
</code></pre>
<p>Apparently the SQLAlchemy ORM requires tables to have a primary_key if you construct them yourself. But the table I want to "mirror" does not have a primary_key in the database, and that is something I cannot change due to the company's organization and setup.</p>
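<p>As a side note, I checked that sqlite itself is happy with such a table, so the restriction really comes from the ORM mapper rather than from the database:</p>

```python
import sqlite3

# sqlite happily creates and queries a table with no primary key,
# so the primary-key requirement comes from the ORM layer only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (col1 DATE NOT NULL, col2 VARCHAR(15) NOT NULL)")
conn.execute("INSERT INTO my_table VALUES ('2023-01-01', 'abc')")
print(conn.execute("SELECT * FROM my_table").fetchall())  # [('2023-01-01', 'abc')]
```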
<p>Is there a smarter way to construct a table with identical types?</p>
| <python><sql-server><sqlite><sqlalchemy> | 2023-09-21 11:17:09 | 1 | 3,944 | Esben Eickhardt |
77,149,625 | 21,859,039 | How to extract values from a dictionary based on index | <p>Let's say I have one dictionary, a list of indexes that are not NA, and a list of indexes that are NA. I want to extract the information for each object inside the dictionary using these lists of indexes, but I also want to keep the key name of each object.</p>
<pre><code>notna_idxs = [1, 2, 3, 4, 6, 7, 8, ]
na_idxs = [0, 5, 9]
dd = {
0: [0, 1, 2, 3, 4],
1: [0, 1, 2, 3, 4],
}
</code></pre>
<p>Then we can match the indexes continuously. In this case, the object 0 has the first 5 indexes (0,1,2,3,4) and the object 1 has the other indexes (5,6,7,8,9).
The final dictionary I want to obtain is</p>
<pre><code>#dd
{
0: [1, 2, 3, 4],
1: [1, 2, 3],
}
</code></pre>
<p>I have tried to create a new list with all the values and extract those values with the correct indexes, but don't understand how to create the dictionary again with the corresponding keys and values on each object.</p>
<pre><code>final = []
for row in dd:
final.extend(dd[row])
corrected_final = []
for idx in notna_idxs:
corrected_final.append(final[idx])
</code></pre>
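<p>To restate the mapping I am after as runnable code on the sample data (a flat offset per key, dropping the global NA positions):</p>

```python
# Walk the flattened index space key by key and keep only the values
# whose global position is not in na_idxs.
notna_idxs = [1, 2, 3, 4, 6, 7, 8]
na_idxs = [0, 5, 9]
dd = {0: [0, 1, 2, 3, 4], 1: [0, 1, 2, 3, 4]}

result = {}
offset = 0
for key, values in dd.items():
    result[key] = [v for i, v in enumerate(values) if (offset + i) not in na_idxs]
    offset += len(values)
print(result)  # {0: [1, 2, 3, 4], 1: [1, 2, 3]}
```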
| <python><dictionary> | 2023-09-21 10:58:43 | 2 | 301 | Jose M. González |
77,149,581 | 3,019,905 | How do I build a grpc Enum invariant from an int in python? | <p>I have a grpc method used to query the internal state of some remote object on the server. This internal state includes an enum for the status, which I have replicated in the expected gRPC response message like this:</p>
<pre><code>rpc QueryState (QueryRequest) returns (QueryResponse) {}
enum Status {
STATUS_UNSPECIFIED = 0;
STATUS_ONLINE = 1;
STATUS_OFFLINE = 2;
}
message QueryRequest {}
message QueryResponse {
    Status status = 1;
}
</code></pre>
<p>The problem is that in the server-side implementation of <code>QueryState</code> I don't know how to properly build the <code>QueryResponse</code> object. This is because the actual value of the <code>status</code> field in the internal state object is an int (EnumInt, actually), which I then need to translate to the corresponding gRPC Enum invariant, and I don't know how.</p>
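<p>As a stand-in illustration (a plain <code>IntEnum</code>, not the generated protobuf class), the translation I need is essentially this int-to-member lookup:</p>

```python
from enum import IntEnum

# Plain-Python stand-in for the generated enum: translating the stored
# int into the corresponding member.
class Status(IntEnum):
    STATUS_UNSPECIFIED = 0
    STATUS_ONLINE = 1
    STATUS_OFFLINE = 2

internal_value = 1  # what my internal state object holds
status = Status(internal_value)
print(status.name)  # STATUS_ONLINE
```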
<p>Any help would be greatly appreciated.</p>
| <python><python-3.x><grpc><grpc-python> | 2023-09-21 10:53:13 | 1 | 1,518 | Dash83 |
77,149,470 | 8,176,763 | using sklearn pipeline with xgboost and onehotencoder | <p>I have a pipeline in sklearn like this:</p>
<pre><code>from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
import xgboost as xgb
pipe = make_pipeline(OneHotEncoder(drop='first'),xgb.XGBRegressor())
</code></pre>
<p>My data comprises of 2 columns that are categorical in X:</p>
<pre><code>from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2,random_state=42)
</code></pre>
<p>So X_train looks like this:</p>
<pre><code>array([['Bob Dylan', 'WMQ'],
['James Doke', 'WMQ'],...
</code></pre>
<p>and y_train is a numeric array.</p>
<p>Now I want to use GridSearchCV with my pipeline:</p>
<pre><code>from sklearn.model_selection import GridSearchCV

gbm_param_grid = {
'xgbregressor__colsample_bytree': [0.3,0.7],
'xgbregressor__n_estimators': [50],
'xgbregressor__max_depth': [2, 5]
}
grid_mse = GridSearchCV(pipe,param_grid=gbm_param_grid,scoring="neg_mean_squared_error",cv=4,verbose=1)
</code></pre>
<p>And I fit into the data:</p>
<pre><code>grid_mse.fit(X_train,y_train)
</code></pre>
<p>And get this strange error:</p>
<pre><code>xgboost.core.XGBoostError: [18:35:22] C:\buildkite-agent\builds\buildkite-windows-cpu-autoscaling-group-i-0cec3277c4d9d0165-1\xgboost\xgboost-ci-windows\src\data\array_interface.h:135: Check failed: typestr.size() == 3 || typestr.size() == 4: `typestr' should be of format <endian><type><size of type in bytes>.
</code></pre>
<p>What am I doing wrong here?</p>
| <python><scikit-learn><xgboost> | 2023-09-21 10:37:44 | 0 | 2,459 | moth |
77,149,445 | 21,185,825 | PyQt - cannot add buttons to tree children | <p>I create a tree-view that should look like:</p>
<pre><code>parent1
path branch [button]
path branch [button]
...
</code></pre>
<p>I create the tree like this</p>
<pre><code>def create_tree(self,containerWidget):
self.repo_tree = QTreeView(self)
self.repo_tree.setGeometry(10, 10, 580, 380)
self.repo_tree_model = QStandardItemModel()
self.repo_tree.setModel(self.repo_tree_model)
containerWidget.addWidget(self.repo_tree)
</code></pre>
<p>I add a parent and the children like this</p>
<pre><code>def add_tree_repo(self,repo_url, repo_name, branch, repo_path,config,worktrees):
root_item = QStandardItem(repo_name)
self.repo_tree_model.appendRow(root_item)
for worktree in worktrees:
child1 = QStandardItem(worktree["path"])
child2 = QStandardItem("[" + worktree["branch"] + "]")
root_item.appendRow([child1,child2])
button = QToolButton()
button.setMaximumSize(button.sizeHint())
self.repo_tree.setIndexWidget(child2.index(), button)
self.repo_tree.update()
self.repo_tree.expandAll()
</code></pre>
<p>only the path displays:</p>
<pre><code>parent1
path
path
...
</code></pre>
<p>What am I missing? Any help appreciated.</p>
| <python><button><pyqt><tree> | 2023-09-21 10:32:59 | 1 | 511 | pf12345678910 |
77,149,259 | 10,771,559 | plotly go.bar add legend for categorical colour variable | <p>I am trying to use go.Bar to make subplot bar graphs. I want to colour the bars by the categorical materials variable. I have managed to do this, but I also want a shared legend between the subplots showing the colour by material.</p>
<pre><code>from plotly.subplots import make_subplots
import plotly.graph_objects as go

materials=['food', 'flowers', 'chess', 'garden', 'kitchen']
x=['browns', 'highes', 'jacks', 'johns', 'harrys']
y=[1,3,5,6,7]
x2=['trevors', 'elens', 'georges', 'marias', 'franks']
y2=[3,7,8,9,5]
fig = make_subplots(rows=3, cols=1)
color_map = {'food':'red',
'flowers':'blue',
'chess':'yellow',
'garden':'green',
'kitchen':'grey'}
colors = [color_map[category] for category in materials]
fig.add_trace(
go.Bar(x=x,
y=y,
# name=materials,
marker_color=colors),
row=1, col=1
)
fig.add_trace(
go.Bar(x=x2,
y=y2,
# name=materials,
marker_color=colors),
row=2, col=1
)
fig.show()
</code></pre>
<p>This is what it currently looks like:
<a href="https://i.sstatic.net/y0J1Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y0J1Q.png" alt="enter image description here" /></a></p>
<p>how can I get the legend to show the bars are coloured by material?</p>
| <python><plotly> | 2023-09-21 10:09:45 | 1 | 578 | Niam45 |
77,149,255 | 11,887,333 | Executable scripts installed via `pip` of different Python versions | <p>To my knowledge, <code>pip</code> can install both packages and executable scripts; in the latter case it creates a command-line wrapper so we can run the package from the command line.</p>
<p>Let's say I have different versions of Python installed on my machine, such as <code>python3.10</code> and <code>python3.11</code>. If I install a script, say <code>pytest</code>, to <code>python3.11</code> by running <code>python3.11 -m pip install pytest</code>, I can then invoke command <code>pytest</code> from the command line:</p>
<pre class="lang-bash prettyprint-override"><code>$ pytest --version
pytest 7.4.0
</code></pre>
<p>Now, I decide to install <code>pytest</code> to <code>python3.10</code> as well using <code>python3.10 -m pip install pytest</code>. With pytest installed in both versions of Python, if I run <code>pytest</code> in command line, how do I know which <code>pytest</code> will be invoked? Is there a way to specify and invoke <code>pytest</code> of a particular version as opposed to the other?</p>
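<p>So far the only way I found to inspect this from the standard library is to ask which wrapper wins on <code>PATH</code>:</p>

```python
import shutil
import sys

# PATH order decides which wrapper a bare command resolves to;
# shutil.which reports the winner (or None if nothing is found).
print(sys.executable)          # interpreter running this script
print(shutil.which("pytest"))  # first pytest wrapper on PATH, if any
print(shutil.which("no-such-command-hopefully-12345"))  # None
```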
<p>As an aside, <code>pytest</code> here could be any other script that could be installed via <code>pip</code>.</p>
| <python><pip><command-line-interface> | 2023-09-21 10:09:14 | 1 | 861 | oeter |
77,149,205 | 6,515,755 | Gotenberg img conversion for pdf size optimization | <p>I use <a href="https://gotenberg.dev/" rel="nofollow noreferrer">https://gotenberg.dev/</a> docker image for converting html with img to pdf.</p>
<p>I have a webp image of reasonable size.</p>
<p>I generate the pdf with Python code like</p>
<pre><code>with io.BytesIO() as tmp_index_html:
tmp_index_html.write(b"""
<html>
<head>
<title>My img</title>
</head>
<body>
        <img src="img.webp" />
</body>
</html>
""")
tmp_index_html.seek(0)
with open(img_webp, "rb") as img_webp:
response = requests.post(
HTML_TO_PDF_URL,
files={
"index.html": tmp_index_html,
"img.webp": img_webp,
},
timeout=2400,
)
with open("resulf.pdf", "wb") as pdf:
pdf.write(response.content)
</code></pre>
<p>The problem is that the size of the pdf is rather bigger than the size of the original webp.</p>
<p>I found that webp is not something native for pdf
<a href="https://en.wikipedia.org/wiki/PDF#Imaging_model" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/PDF#Imaging_model</a>
=> I guess the img is converted while "printing to pdf" with params that are not the best for size optimization.</p>
<p>Question: how should I preprocess my image to get a reasonably small pdf?</p>
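<p>One preprocessing idea I am currently testing (the function name and quality value are just my guesses) is re-encoding the image as JPEG with Pillow before sending it, since JPEG streams can usually be embedded in a PDF without being re-encoded:</p>

```python
from io import BytesIO
from PIL import Image

# Re-encode an image as quality-controlled JPEG before handing it to
# the HTML-to-PDF converter, hoping the PDF size stays close to the
# JPEG size.
def to_jpeg_bytes(image, quality=80):
    buf = BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

img = Image.new("RGB", (100, 100), "red")  # stand-in for my webp input
jpeg_bytes = to_jpeg_bytes(img)
print(len(jpeg_bytes))
```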
| <python><image><pdf><gotenberg> | 2023-09-21 10:02:39 | 1 | 12,736 | Ryabchenko Alexander |
77,149,187 | 9,816,919 | opuslib decoding fails with "corrupt stream" on the opus header packet | <p>I have a valid ogg audio file (works in several players including <a href="https://opus-codec.org/release/dev/2018/09/18/opus-tools-0_2.html" rel="nofollow noreferrer">opus-tools opusdec</a>; ffmpeg correctly identifies the file and reports no problems). The audio stream has two channels, a sample rate of 48kHz, and a 20ms frame duration.</p>
<p>You can recreate a very similar file using ffmpeg</p>
<pre class="lang-bash prettyprint-override"><code>ffmpeg -i "a media file with audio" \
-f opus -frame_duration 20 -ar 48000 -ac 2 \
audio.ogg
</code></pre>
<p>In a hex editor, I was able to extract the first Opus packet/segment/frame according to the <a href="https://en.wikipedia.org/wiki/Ogg#File_format" rel="nofollow noreferrer">Ogg file format</a>.</p>
<p>Here it is:
<code>4F 70 75 73 48 65 61 64 01 02 38 01 80 BB 00 00 00 00 00</code></p>
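<p>Decoding the first bytes by hand shows an ASCII signature, so this first packet is the <code>OpusHead</code> identification header (per the Ogg Opus encapsulation, the first two packets of the stream are the <code>OpusHead</code> and <code>OpusTags</code> headers rather than audio frames):</p>

```python
# The extracted packet starts with the ASCII magic "OpusHead",
# i.e. it is the identification header, not an audio frame.
packet = bytes.fromhex("4F 70 75 73 48 65 61 64 01 02 38 01 80 BB 00 00 00 00 00")
print(packet[:8])                      # b'OpusHead'
print(packet.startswith(b"OpusHead"))  # True
```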
<p><a href="https://i.sstatic.net/QjIM7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QjIM7.png" alt="hex editor screenshot of opus packet in ogg file" /></a></p>
<p>Then, I attempt to use <a href="https://opus-codec.org/release/stable/2023/04/20/libopus-1_4.html" rel="nofollow noreferrer">libopus v1.4</a> (via the <a href="https://pypi.org/project/opuslib/" rel="nofollow noreferrer">opuslib</a> Python bindings) to begin decoding the stream:</p>
<pre class="lang-py prettyprint-override"><code>from opuslib import Decoder
SAMPLE_RATE = 48000 # hertz
FRAME_DURATION = 20 # milliseconds
FRAMES_PER_SECOND = 1000 // FRAME_DURATION
SAMPLES_PER_FRAME = SAMPLE_RATE // FRAMES_PER_SECOND
CHANNELS = 2
SAMPLES_PER_CHANNEL_FRAME = SAMPLES_PER_FRAME // CHANNELS
packet = bytes.fromhex("4F 70 75 73 48 65 61 64 01 02 38 01 80 BB 00 00 00 00 00")
decoder = Decoder(SAMPLE_RATE, CHANNELS)
result = decoder.decode(packet, SAMPLES_PER_CHANNEL_FRAME)
print(result)
</code></pre>
<p>Which produces this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\...\test.py", line 50, in <module>
result = decoder.decode(packet, SAMPLES_PER_CHANNEL_FRAME)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\...\site-packages\opuslib\classes.py", line 56, in decode
return opuslib.api.decoder.decode(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\...\site-packages\opuslib\api\decoder.py", line 246, in decode
raise opuslib.exceptions.OpusError(result)
opuslib.exceptions.OpusError: b'corrupted stream'
</code></pre>
<p>I have tried multiple variations of the <code>CONSTANT</code> calculations.
I have tried decoding the second packet, the third packet, the first and second packet concatenated, etc.
All to no avail.</p>
<p>I get the same error for every ogg file I test. What am I doing wrong?</p>
<p>I'm on Windows 11 and built <code>libopus</code> from source using the included VS .sln files with preset ReleaseDLL_fixed x64.</p>
| <python><audio><opus> | 2023-09-21 10:00:06 | 1 | 853 | Gaberocksall |
77,149,178 | 14,244,880 | Python Process fails map() silently | <p>When assigning tasks to <code>ProcessPoolExecutor</code> using <code>.map()</code>, any failures in mapping are swallowed silently. Is there any way to force an error to be raised?</p>
<p>Simplest way to replicate:</p>
<pre><code>from concurrent.futures import ProcessPoolExecutor

def hello(x):
print("hello", x)
if __name__ == '__main__':
a = [x for x in range(10)]
with ProcessPoolExecutor() as ppe:
ppe.map(hello) # Missing one parameter here, but no error reported!!!
print("bye")
</code></pre>
<p>Prints out:</p>
<pre><code>bye <<< No errors!!!
</code></pre>
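<p>Two details I noticed while replicating (using <code>ThreadPoolExecutor</code> below only to keep the snippet light; <code>ProcessPoolExecutor</code> exposes the same <code>map</code>):</p>

```python
from concurrent.futures import ThreadPoolExecutor

# 1) map() with the iterable missing simply schedules zero tasks, so
#    nothing runs and nothing is reported;
# 2) exceptions raised in workers only resurface once the iterator
#    returned by map() is consumed.
def boom(x):
    raise ValueError(f"bad input {x}")

with ThreadPoolExecutor() as ppe:
    print(list(ppe.map(boom)))  # [] -- missing iterable, zero tasks
    results = ppe.map(boom, [1])
    try:
        list(results)           # consuming re-raises the worker error
    except ValueError as exc:
        print("raised:", exc)
```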
| <python><multiprocessing> | 2023-09-21 09:59:26 | 1 | 306 | z.x.99 |
77,149,150 | 17,884,397 | Iterating over a 2d sliding window over a numpy array | <p>I have a function <code>my_fun</code> which works on a <code>kxk</code> window of an image and returns a scalar.</p>
<p>I want to iterate over the image pixels and extract, for each pixel, a window of size <code>kxk</code> around it.<br />
The <code>k</code> parameter is an odd number.</p>
<p>I am looking for an iterator, similar to Matlab <code>nlfilter</code>.</p>
<p>What would be fast while also keeping memory overhead minimal?</p>
<p>It should be like:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
my_array = np.arange(25).reshape((5,5))
win_radius = (1, 1) # k = 3
for i in range(my_array.shape[0]):
for j in range(my_array.shape[1]):
win_rows = max(0, i - win_radius[0]), min(my_array.shape[0], i + win_radius[0] + 1)
win_cols = max(0, j - win_radius[1]), min(my_array.shape[1], j + win_radius[1] + 1)
my_win = my_array[win_rows[0]:win_rows[1], win_cols[0]:win_cols[1]]
print(my_win)
</code></pre>
<p>the output is:</p>
<pre class="lang-py prettyprint-override"><code>[[0 1]
[5 6]]
[[0 1 2]
[5 6 7]]
[[1 2 3]
[6 7 8]]
[[2 3 4]
[7 8 9]]
[[3 4]
[8 9]]
[[ 0 1]
[ 5 6]
[10 11]]
[[ 0 1 2]
[ 5 6 7]
[10 11 12]]
[[ 1 2 3]
[ 6 7 8]
[11 12 13]]
[[ 2 3 4]
[ 7 8 9]
[12 13 14]]
[[ 3 4]
[ 8 9]
[13 14]]
...
[[17 18 19]
[22 23 24]]
[[18 19]
[23 24]]
</code></pre>
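<p>A zero-copy variant I am experimenting with (note it edge-pads, so border windows are full <code>kxk</code> rather than truncated like in my loop above):</p>

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Pad the borders so every pixel gets a full k x k window, then take
# strided views over the padded array (no per-window allocation).
k = 3
my_array = np.arange(25).reshape(5, 5)
padded = np.pad(my_array, k // 2, mode="edge")
windows = sliding_window_view(padded, (k, k))
print(windows.shape)  # (5, 5, 3, 3): one k x k view per pixel
print(windows[2, 2])  # the window centred on my_array[2, 2]
```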
| <python><numpy><performance><image-processing> | 2023-09-21 09:57:43 | 1 | 736 | Eric Johnson |
77,148,941 | 14,468,588 | How to reconstruct MMW image using electrical field in each direction at Receiver points | <p>I have used a structure including 109 receivers and 4 transmitters in order to do MMW imaging (multistatic imaging). Imaging was done at 3 different frequencies. So, now I have the electrical field in each direction at each receiver point, resulting from each transmitter, at those frequencies. I just want to know how I can implement it in Matlab or Python.
Can you give me a step-by-step guideline, sample code or document for implementation?<br />
Actually, this is the best article I found on this subject, but I cannot understand its implementation perfectly:</p>
<p>"Walk-through millimeter wave imaging testbed
based on double multistatic cross arrays"</p>
<p><strong>UPDATE</strong></p>
<p>Here is the link to the available data:</p>
<p><a href="https://drive.google.com/file/d/1k1pyp4LvzBPAo5wf6O2omFKrfwU8FxK0/view?usp=sharing" rel="nofollow noreferrer">nearfield data</a></p>
| <python><matlab><signal-processing><imaging><back-projection> | 2023-09-21 09:31:11 | 0 | 352 | mohammad rezza |
77,148,846 | 997,381 | Python: Get substring of a string with a closest match to another string | <p>A nice algorithmic trivia for you today. :)</p>
<p>I have two strings – one is a longer sentence and the other is a shorter sentence that was discovered by an LLM within the longer one. Let's see an example:</p>
<ul>
<li><strong>Long sentence:</strong> "If you're a coder you should consider buying a MacBook Pro 15inch with an M2 from Apple that will provide you with a plenty of computing power for your AI use-cases."</li>
<li><strong>Short sentence:</strong> "Apple MacBook Pro 15" M2"</li>
</ul>
<p>I need to mark the long sentence string with what is closest to the short string. The outcome would be the char <code>[start:end]</code> position indexes.</p>
<p>Acceptable outcomes could be like this (one of):</p>
<pre><code>If you're a coder you should consider buying a MacBook Pro 15inch with an M2 from Apple that will provide you with a plenty of computing power for your AI use-cases.
^^^^^^^^^^^^^^^^^^ [47:65]
/or/
If you're a coder you should consider buying a MacBook Pro 15inch with an M2 from Apple that will provide you with a plenty of computing power for your AI use-cases.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [47:76]
/or/
If you're a coder you should consider buying a MacBook Pro 15inch with an M2 from Apple that will provide you with a plenty of computing power for your AI use-cases.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [47:87]
</code></pre>
<p>I considered:</p>
<ul>
<li>membership operator,</li>
<li><code>difflib</code> methods,</li>
<li>regex,</li>
<li>Levenshtein,</li>
</ul>
<p>but nothing really suits the case.</p>
<p>The closest to what I can think of is this:</p>
<ol>
<li>Get the <code>length = len(short_string)</code>.</li>
<li>Split <code>long_string</code> by whitespace into a set of substrings of <code>length</code> length.</li>
<li>Calculate Levenshtein difference between <code>short_string</code> and each substring.</li>
<li>The closest distance wins it.</li>
</ol>
<pre><code>import Levenshtein

short_string = "four five eight"
long_string = "one two three four five six seven eight nine"
length = 3
substrings = [
"one two three",
"two three four",
"three four five",
"four five six",
"five six seven",
"six seven eight",
"seven eight nine"
]
for sentence in substrings:
Levenshtein.distance(sentence, short_string)
winner = "four five six"
</code></pre>
<p>Any other ideas or open-source tools that you can think of?</p>
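<p>For reference, a stdlib-only version of the sliding idea that I hacked together (<code>difflib</code> ratios instead of Levenshtein; the function name is mine):</p>

```python
from difflib import SequenceMatcher

# Slide a window of as many words as the short string has over the
# long string and return the char [start:end] of the best-scoring one.
def best_window(long_string, short_string):
    words = long_string.split()
    n = len(short_string.split())
    starts, pos = [], 0
    for w in words:
        starts.append(long_string.index(w, pos))
        pos = starts[-1] + len(w)
    best_score, best_span = 0.0, (0, 0)
    for i in range(len(words) - n + 1):
        start = starts[i]
        end = starts[i + n - 1] + len(words[i + n - 1])
        score = SequenceMatcher(
            None, long_string[start:end].lower(), short_string.lower()
        ).ratio()
        if score > best_score:
            best_score, best_span = score, (start, end)
    return best_span

s, e = best_window("one two three four five six seven eight nine", "four five eight")
print((s, e))
```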
| <python><string-comparison><levenshtein-distance> | 2023-09-21 09:18:38 | 1 | 1,404 | cadavre |
77,148,835 | 3,099,733 | python print method will raise encoding error when following | or > in powershell on windows | <p>Given the following python script <code>test.py</code></p>
<pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*-
print("Er, Süleyman")
</code></pre>
<p>Here is the result of running this script in powershell</p>
<pre><code>python test.py # success
Er, Süleyman
python test.py > tmp.txt
print("Er, Süleyman")
UnicodeEncodeError: 'gbk' codec can't encode character '\u0308' in position 6: illegal multibyte sequence
python test.py | Out-File -Encoding utf8 tmp.txt
print("Er, Süleyman")
UnicodeEncodeError: 'gbk' codec can't encode character '\u0308' in position 6: illegal multibyte sequence
</code></pre>
<p>I have no idea how to work around this.</p>
<p>The default language of my laptop is Chinese; by running the following code, I get the output <code>cp936</code>.</p>
<pre class="lang-py prettyprint-override"><code>import locale
print(locale.getpreferredencoding())
</code></pre>
<p>I also tried writing to the buffer directly; the error is gone, but the content is incorrect.</p>
<pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*-
import sys
sys.stdout.buffer.write("Er, Süleyman".encode('utf-8'))
</code></pre>
<pre><code>python test.py > test.txt
cat test.txt
Er, Su虉leyman
</code></pre>
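<p>A minimal self-contained demonstration of what I believe is happening here: the UTF-8 bytes themselves are written correctly, and the mangling only appears when those bytes are read back with the GBK codec:</p>

```python
# The UTF-8 encoding of the text is correct; decoding those same bytes
# as GBK is what mangles the non-ASCII characters.
text = "Er, S\u00fcleyman"
data = text.encode("utf-8")
print(data)  # b'Er, S\xc3\xbcleyman'
print(data.decode("gbk", errors="replace"))  # no longer equal to the original
```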
<h2>Update</h2>
<p>I found a solution to change the default encoding of powershell in this answer: <a href="https://stackoverflow.com/questions/57131654/using-utf-8-encoding-chcp-65001-in-command-prompt-windows-powershell-window/57134096#57134096">Using UTF-8 Encoding (CHCP 65001) in Command Prompt / Windows Powershell (Windows 10)</a></p>
<blockquote>
<p>In PSv5.1 or higher, where > and >> are effectively aliases of Out-File, you can set the default encoding for > / >> / Out-File via the $PSDefaultParameterValues preference variable:</p>
<p><code>$PSDefaultParameterValues['Out-File:Encoding'] = 'utf8'</code></p>
</blockquote>
<p>Though it is not a problem of Python itself, without knowledge of the difference among the default encodings of Python's stdout, Windows and Powershell, it will be hard to find the right solution. For example, <a href="https://stackoverflow.com/questions/492483/setting-the-correct-encoding-when-piping-stdout-in-python">Setting the correct encoding when piping stdout in Python</a> tries to fix it from Python's side, but in fact it will introduce a new problem: forcing stdout to use the <code>gbk</code> encoding will end up with some unsupported chars displayed incorrectly. So I think this issue still has value for those who are not familiar with such a complicated situation.</p>
| <python><powershell><gbk> | 2023-09-21 09:17:13 | 0 | 1,959 | link89 |
77,148,809 | 4,239,879 | Can't get print_control_identifiers() to work in a very basic scenario | <p><code>Neither GUI element (wrapper) nor wrapper method 'print_control_identifiers' were found (typo?)</code> error is returned for Application and DialogWrapper classes. WindowSpecification class returns None.</p>
<p>I was expecting the <code>WindowSpecification</code> to return at least <em>something</em>, because a few milliseconds later I am able to interact with it. For example, I successfully iterate like this: <code>for child in self.main_window_specification.Children():</code></p>
<p>Code:</p>
<pre><code>def run(self, output_mod_name):
""" Runs the ArtManager executable """
logger.info("Starting art manager")
self.app = Application(backend="win32").start(self.path)
logger.debug(f"Type of self.app: {type(self.app)}")
try:
logger.debug("This is expected to fail")
logger.debug(f"{self.app.print_control_identifiers()}")
except Exception as exc:
traceback_formatted = traceback.format_exc()
logger.debug(traceback_formatted)
logger.debug("Launched art manager process")
# wait for the window to come up
expected_window_title = f"S: {output_mod_name} | A: {output_mod_name} | Art Manager"
self.main_window_dialog = self.app.window(best_match=expected_window_title).wait('ready', timeout=10)
logger.debug(f"Type of self.main_window_dialog: {type(self.main_window_dialog)}")
try:
logger.debug("This is expected to fail too")
logger.debug(f"{self.main_window_dialog.print_control_identifiers()}")
except Exception as exc:
traceback_formatted = traceback.format_exc()
logger.debug(traceback_formatted)
logger.info("Art manager started")
self.main_window_specification = self.app.window(best_match=expected_window_title)
logger.debug(f"Type of self.main_window_specification: {type(self.main_window_specification)}")
try:
logger.debug("This is expected to work")
logger.debug(f"{self.main_window_specification.print_control_identifiers()}")
except Exception as exc:
traceback_formatted = traceback.format_exc()
logger.debug(traceback_formatted)
logger.debug("Aquired handle on Art Manager window")
</code></pre>
<p>Output:</p>
<pre><code>2023-10-09 12:21:24,756 - (art_manager.py:76) - tqma - INFO - Starting art manager
2023-10-09 12:21:26,007 - (art_manager.py:78) - tqma - DEBUG - Type of self.app: <class 'pywinauto.application.Application'>
2023-10-09 12:21:26,008 - (art_manager.py:80) - tqma - DEBUG - This is expected to fail
2023-10-09 12:21:26,008 - (art_manager.py:84) - tqma - DEBUG - Traceback (most recent call last):
File "src\binary_automation\art_manager.py", line 81, in run
logger.debug(f"{self.app.print_control_identifiers()}")
File "pywinauto\application.py", line 180, in __call__
AttributeError: Neither GUI element (wrapper) nor wrapper method 'print_control_identifiers' were found (typo?)
2023-10-09 12:21:26,008 - (art_manager.py:85) - tqma - DEBUG - Launched art manager process
2023-10-09 12:21:26,021 - (art_manager.py:90) - tqma - DEBUG - Type of self.main_window_dialog: <class 'pywinauto.controls.hwndwrapper.DialogWrapper'>
2023-10-09 12:21:26,021 - (art_manager.py:92) - tqma - DEBUG - This is expected to fail too
2023-10-09 12:21:26,021 - (art_manager.py:96) - tqma - DEBUG - Traceback (most recent call last):
File "src\binary_automation\art_manager.py", line 93, in run
logger.debug(f"{self.main_window_dialog.print_control_identifiers()}")
AttributeError: 'DialogWrapper' object has no attribute 'print_control_identifiers'
2023-10-09 12:21:26,021 - (art_manager.py:98) - tqma - INFO - Art manager started
2023-10-09 12:21:26,022 - (art_manager.py:100) - tqma - DEBUG - Type of self.main_window_specification: <class 'pywinauto.application.WindowSpecification'>
2023-10-09 12:21:26,022 - (art_manager.py:102) - tqma - DEBUG - This is expected to work
2023-10-09 12:21:26,101 - (art_manager.py:103) - tqma - DEBUG - None
2023-10-09 12:21:26,101 - (art_manager.py:107) - tqma - DEBUG - Aquired handle on Art Manager window
</code></pre>
| <python><pywinauto> | 2023-09-21 09:14:02 | 1 | 2,360 | Ivan |
77,148,664 | 10,694,589 | calculate the RMS intensity image with opencv | <p>I would like to calculate the intensity of I(x,y)/Io(x,y).</p>
<p>First, I read my images using rawpy because I have .nef files (Nikon raw).
Then I use opencv to transform my images into grayscale and I calculate I(x,y)/Io(x,y).
Where I(x,y) is "brut" and Io(x,y) is "init".</p>
<p>But after dividing the two images (cv2.divide) I use cv2.meanStdDev(test) and I have an "Nan" value.</p>
<p>And when I plot "test" using matplotlib I get this :</p>
<p><a href="https://i.sstatic.net/NLFjW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NLFjW.png" alt="enter image description here" /></a></p>
<p>and when I use imshow from cv2 I get what I want :</p>
<p><a href="https://i.sstatic.net/8tfsU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8tfsU.png" alt="enter image description here" /></a></p>
<p>I don't understand why I get NaN from cv2.meanStdDev(test), nor why the two plots differ.</p>
<pre><code>import numpy as np
import cv2
import rawpy
import rawpy.enhance
import matplotlib.pyplot as plt
####################
# Reading a Nikon RAW (NEF) image
init="/media/alexandre/Transcend/Expérience/Ombroscopie/eau/initialisation/2023-09-19_19-02-33.473.nef"
brut="/media/alexandre/Transcend/Expérience/Ombroscopie/eau/DT0.2/2023-09-20_10-34-27.646.nef"
bruit="/media/alexandre/Transcend/Expérience/Ombroscopie/eau/bruit-electronique/2023-09-18_18-59-34.994.nef"
####################
# This uses rawpy library
print("reading init file using rawpy.")
raw_init = rawpy.imread(init)
image_init = raw_init.postprocess(use_camera_wb=True, output_bps=16)
print("Size of init image read:" + str(image_init.shape))
print("reading brut file using rawpy.")
raw_brut = rawpy.imread(brut)
image_brut = raw_brut.postprocess(use_camera_wb=True, output_bps=16)
print("Size of brut image read:" + str(image_brut.shape))
print("reading bruit file using rawpy.")
raw_bruit = rawpy.imread(bruit)
image_bruit = raw_bruit.postprocess(use_camera_wb=True, output_bps=16)
print("Size of bruit image read:" + str(image_bruit.shape))
####################
# (grayscale) OpenCV
print(image_init.dtype)
init_grayscale = cv2.cvtColor(image_init, cv2.COLOR_RGB2GRAY).astype(float)
brut_grayscale = cv2.cvtColor(image_brut, cv2.COLOR_RGB2GRAY).astype(float)
bruit_grayscale = cv2.cvtColor(image_bruit, cv2.COLOR_RGB2GRAY).astype(float)
print(np.max(brut_grayscale))
print(init_grayscale.dtype)
"test = (brut_grayscale)/(init_grayscale)"
init = init_grayscale-bruit_grayscale
test = cv2.divide((brut_grayscale),(init_grayscale))
print(test.shape)
print(test.dtype)
print(type(test))
print(test.max())
print(test.min())
####################
# Irms
mean, std_dev = cv2.meanStdDev(test)
intensite_rms = std_dev[0][0]
print("Intensité RMS de l'image :", intensite_rms)
####################
# Matplotlib
import matplotlib.pyplot as plt
plt.imshow(test, cmap='gray')
plt.show()
# Show using OpenCV
import imutils
image_rawpy = imutils.resize(test, width=1080)
cv2.imshow("image_rawpy read file: ", image_rawpy)
cv2.waitKey(0)
cv2.destroyAllWindows("image_rawpy read file: " , image_rawpy)
</code></pre>
<p>output :</p>
<pre><code>reading init file using rawpy.
Size of init image read:(5520, 8288, 3)
reading brut file using rawpy.
Size of brut image read:(5520, 8288, 3)
reading bruit file using rawpy.
Size of bruit image read:(5520, 8288, 3)
uint16
37977.0
float64
(5520, 8288)
float64
<class 'numpy.ndarray'>
nan
nan
Intensité RMS de l'image : nan
^CTraceback (most recent call last):
File "ombro.py", line 62, in <module>
plt.show()
File "/home/alexandre/.local/lib/python3.8/site-packages/matplotlib/pyplot.py", line 368, in show
return _backend_mod.show(*args, **kwargs)
File "/home/alexandre/.local/lib/python3.8/site-packages/matplotlib/backend_bases.py", line 3544, in show
cls.mainloop()
File "/home/alexandre/.local/lib/python3.8/site-packages/matplotlib/backends/backend_qt.py", line 1023, in mainloop
qt_compat._exec(qApp)
File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
next(self.gen)
File "/home/alexandre/.local/lib/python3.8/site-packages/matplotlib/backends/qt_compat.py", line 262, in _maybe_allow_interrupt
old_sigint_handler(*handler_args)
</code></pre>
| <python><numpy><opencv><matplotlib><rawpy> | 2023-09-21 08:55:23 | 1 | 351 | Suntory |
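The NaN in the question above almost certainly comes from pixels where the denominator is zero (0/0 is NaN in floating point), which would also explain why matplotlib and cv2.imshow render differently: they handle NaN pixels differently. A minimal sketch with synthetic arrays (plain NumPy, no rawpy/OpenCV needed) showing a guarded division plus NaN-aware statistics:

```python
import numpy as np

# Synthetic stand-ins for the float64 grayscale images in the question.
brut = np.array([[1.0, 2.0], [3.0, 4.0]])
init = np.array([[2.0, 0.0], [3.0, 0.0]])  # zero pixels would give inf/NaN

# Divide only where the denominator is non-zero; keep 0 elsewhere.
test = np.divide(brut, init, out=np.zeros_like(brut), where=init != 0)
print(test)

# NaN-aware reductions ignore any NaN pixels that do slip through:
print(np.nanmean(test), np.nanstd(test))
```

cv2.meanStdDev has no NaN-aware variant, so masking before the division (or using np.nanmean/np.nanstd afterwards) is the usual workaround.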
77,148,649 | 4,541,649 | Parametrize schema name when creating DB tables with sqlalchemy | <p>I have a module <code>db_models.py</code> where I define my tables like so:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import Column, String, Date
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class MyTable(Base):
__tablename__ = 'mytable'
__table_args__ = {'comment': 'test comment'}
my_date = Column(Date, primary_key=True)
my_word = Column(String(12))
class MyOtherTable(Base):
__tablename__ = 'myothertable'
__table_args__ = {'comment': 'test comment'}
my_date = Column(Date, primary_key=True)
my_song = Column(String(96))
</code></pre>
<p>Then in another module I import <code>Base</code> and there I need to create the tables in an arbitrary schema. So I'm trying to change schema for each table in <code>Base.metadata</code> like so (<code>my_session</code> is an existing DB session):</p>
<pre class="lang-py prettyprint-override"><code>from db_models import Base
Base.metadata.schema = 'test-schema'
Base.metadata.create_all(my_session.get_bind())
</code></pre>
<p>This is not working and the tables are created in the default schema. How do I make this work?</p>
| <python><database><oop><sqlalchemy><database-schema> | 2023-09-21 08:53:38 | 1 | 1,655 | Sergey Zakharov |
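Setting `Base.metadata.schema` after the model classes are defined has no effect, because each `Table` captured its schema at class-definition time. Two approaches that should work, sketched below assuming SQLAlchemy 1.4+ (where `tometadata` became `to_metadata`): copy the tables into a fresh `MetaData` under the target schema, or leave the models untouched and remap at execution time with `schema_translate_map`.

```python
from sqlalchemy import Column, Date, MetaData, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class MyTable(Base):
    __tablename__ = "mytable"
    my_date = Column(Date, primary_key=True)
    my_word = Column(String(12))

# Option 1: copy each table into a new MetaData under the target schema.
target = MetaData()
for table in Base.metadata.tables.values():
    table.to_metadata(target, schema="test-schema")
print(sorted(t.schema for t in target.tables.values()))  # -> ['test-schema']
# target.create_all(my_session.get_bind())

# Option 2: keep the models schema-less and translate when executing:
# bind = my_session.get_bind().execution_options(
#     schema_translate_map={None: "test-schema"})
# Base.metadata.create_all(bind)
```

Option 2 is often the better fit when the schema name is truly arbitrary per deployment, since the model module never needs to know it.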
77,148,597 | 14,998,459 | How to suppress a pip install error in jupyter notebook? | <p>I am trying to install a pip package, without getting any output of that cell, but I can't suppress this error:</p>
<pre><code>!pip install -qqq llama-cpp-python --force-reinstall --upgrade --no-cache-dir --progress-bar off
</code></pre>
<blockquote>
<p>ERROR: pip's dependency resolver does not currently take into account
all the packages that are installed. This behaviour is the source of
the following dependency conflicts.
cudf 23.8.0 requires cupy-cuda11x>=12.0.0, which is not installed.
cuml 23.8.0 requires cupy-cuda11x>=12.0.0, which is not installed.
dask-cudf 23.8.0 requires cupy-cuda11x>=12.0.0, which is not
installed. apache-beam 2.46.0 requires dill<0.3.2,>=0.3.1.1, but you
have dill 0.3.7 which is incompatible.</p>
</blockquote>
<p>and the list goes on...</p>
<ol>
<li>How could I suppress this error message?</li>
<li>How could I resolve it?</li>
</ol>
| <python><python-3.x><jupyter-notebook><pip> | 2023-09-21 08:46:59 | 0 | 716 | TkrA |
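One way to keep everything out of the cell output is to skip the `!` shell escape and run pip through `subprocess` with both streams discarded (Jupyter's `%%capture` cell magic is another option). A sketch, with a harmless command standing in for the real install:

```python
import subprocess
import sys

def run_silently(cmd):
    """Run a command, discarding stdout and stderr; return its exit code."""
    return subprocess.run(
        cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL
    ).returncode

# Harmless stand-in that writes to both streams; nothing reaches the cell:
code = run_silently(
    [sys.executable, "-c", "import sys; print('out'); print('err', file=sys.stderr)"]
)
print(code)  # -> 0

# The real call would be the same shape:
# run_silently([sys.executable, "-m", "pip", "install", "-q",
#               "--force-reinstall", "--no-cache-dir", "llama-cpp-python"])
```

Note that suppressing the message does not resolve the conflict itself; that takes installing the missing `cupy-cuda11x` and a compatible `dill`, or working in a fresh virtual environment.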
77,148,586 | 17,473,587 | Django exclude() query with violation | <p>I have this model:</p>
<pre><code>class SettingMenus(models.Model):
cat = models.ForeignKey(Setting, on_delete=models.CASCADE, related_name="menus", null=True)
index = models.SmallIntegerField(validators=[MaxValueValidator(32766)], default=0)
name = models.CharField(max_length=100, null=True)
path = models.CharField(max_length=200, null=True, blank=True)
parent = models.ForeignKey('self', null=True, blank=True, related_name='sub_categories', on_delete=models.CASCADE)
role = models.ManyToManyField(UserRole, related_name="settingMenus", blank=True)
tables = models.ManyToManyField(AppModels, related_name="setting_menus", blank = True,)
</code></pre>
<p>And a query in view:</p>
<pre><code>nv1menus = SettingMenus.objects.filter(
cat__company__name=company,
cat__application="common",
index=0,
role__in=request.user.groups.all()
).exclude(sub_categories__role__in=request.user.groups.all()).distinct()
</code></pre>
<p>How can I negate the <code>sub_categories__role__in=request.user.groups.all()</code> condition used in <code>exclude()</code>?</p>
<p>I have tried:</p>
<p><code>sub_categories__role__in!=request.user.groups.all()</code></p>
<p>or</p>
<p><code>not sub_categories__role__in=request.user.groups.all()</code></p>
<p>But neither works.</p>
<h2>Update:</h2>
<p>These models inherit <code>Group</code> and <code>Permission</code> from <code>django.contrib.auth.models</code>:</p>
<pre><code>class UserRole(Group, CommonModel):
description = models.TextField(blank=True, null=True, verbose_name="Group Description")
def __str__(self):
return self.name
class UserAccess(Permission, CommonModel):
description = models.CharField(max_length=255, null=True, blank=True, verbose_name="Permission Description")
def __str__(self):
return self.name
class User(AbstractUser):
id = models.BigAutoField(primary_key=True)
mobile_number = models.CharField(max_length=15, null=True, blank=True)
bio = models.TextField(verbose_name="About Me", null=True, blank=True)
groups = models.ManyToManyField(
UserRole,
verbose_name="User Groups",
blank=True,
related_name="users",
related_query_name="user"
)
user_permissions = models.ManyToManyField(
UserAccess,
verbose_name="User Permissions",
blank=True,
related_name="users",
related_query_name="user"
)
def __str__(self):
return self.username
</code></pre>
<p>Here are the related models for the <code>SettingMenus</code> model.</p>
| <python><django><django-models><django-views> | 2023-09-21 08:46:05 | 1 | 360 | parmer_110 |
77,148,469 | 11,637,422 | How to re-order a dataframe based on the first non-zero value of each row | <p>I have a pandas dataframe of sales of stores opened in different dates, with the columns being months. I want to reorder the dataframe so that the columns go for 1st month since opening, 2nd month since opening... This is so that I can comparer 1st month performance for each store. The dataframe is something like:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[0.0, 0.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0],
[0.0, 0.0, 0.0, 3.0, 1.0, 1.0, 7.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 15.0, 16.0, 17.0, 18.0]
]
, columns=['Jan-19', 'Feb-19', 'Mar-19', 'Apr-19', 'May-19', 'June-19', 'Jul-19', 'Aug-19'])
</code></pre>
<p>I want it to be something like:</p>
<pre><code>      A     B     C    D    E    F  G  H
0   3.0   4.0   5.0  6.0  7.0  8.0
1   3.0   1.0   1.0  3.0  7.0  0.0
2  15.0  16.0  17.0  18.0
</code></pre>
<p>I have tried the following to iterate over each row and find the first non zero value:</p>
<pre><code>
# define a function to find the first non-zero value in a row
def first_non_zero(row):
for val in row:
if val != 0:
return val
return 0
# apply the function to each row of the DataFrame
df['first_non_zero'] = df.apply(first_non_zero, axis=1)
</code></pre>
<p>I am more unsure about how to go about re-ordering the whole dataframe. Any ideas?</p>
| <python><pandas><dataframe> | 2023-09-21 08:31:07 | 4 | 341 | bbbb |
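One sketch of the reordering (assuming pandas): instead of returning just the first non-zero value, build a boolean mask that switches on at the first non-zero entry of each row (`cummax` along the row does this) and rebuild every row left-aligned. Zeros that occur after opening are kept, and shorter rows are padded with NaN on the right:

```python
import pandas as pd

df = pd.DataFrame(
    [[0.0, 0.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0],
     [0.0, 0.0, 0.0, 3.0, 1.0, 1.0, 7.0, 0.0],
     [0.0, 0.0, 0.0, 0.0, 15.0, 16.0, 17.0, 18.0]],
    columns=['Jan-19', 'Feb-19', 'Mar-19', 'Apr-19',
             'May-19', 'June-19', 'Jul-19', 'Aug-19'])

# True from the first non-zero entry of each row onward.
started = df.ne(0).cummax(axis=1)

# Rebuild each row keeping only the "started" values, left-aligned.
aligned = df.apply(lambda r: pd.Series(r[started.loc[r.name]].to_numpy()), axis=1)
aligned.columns = [f"month_{i + 1}" for i in aligned.columns]
print(aligned)
```

The `month_1`, `month_2`, ... column names are an arbitrary choice; the first column then holds every store's opening-month sales, directly comparable across stores.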
77,148,343 | 4,453,737 | How to split query string using Multiple Delimiter (=, !=, >=) using regex | <p>I want to split query string which might have multiple delimiter.</p>
<p>My strings are,
<code>book_id=123&start_date>=2023-09-12&end_date<=2023-09-14&status!=return</code></p>
<p>We have 6 delimiters, <code>=, !=, >, <, >=, <=</code></p>
<p>I want to split string if any one of the above delimiter were found in a string.</p>
<p>First, I split the string on '&' and then I tried:</p>
<p><code>re.findall('[^=!=><>=<=\s]+|[=!=><>=<=]', 'sample!=2023-09-01')</code>
and i got this, <code>['sample', '!', '=', '2023-09-01']</code></p>
<p><code>re.findall('[^=!=><>=<=\s]+|[=!=><>=<=]', 'sample=2023-09-01')</code>
<code>['sample', '=', '2023-09-01']</code></p>
<p>The results are inconsistent.</p>
<p>I want <code>key, delim, val = ['sample', '!=', '2023-09-01']</code></p>
<p>instead of
<code>['sample', '!', '=', '2023-09-01']</code>.</p>
<p>My previous regex expression was, <code>re.split('[=|!=|>|<|>=|<=]', param)</code> this also gave me same result.</p>
<p>I referred: <a href="https://stackoverflow.com/questions/60960298/regex-split-by-multiple-delimiter">regex split by multiple delimiter</a></p>
| <python><regex><delimiter> | 2023-09-21 08:12:43 | 1 | 20,393 | Mohideen bin Mohammed |
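The root problem above is that `[...]` is a character class: it matches single characters, so `[=!=><>=<=]` can never match `!=` as a unit (and `|` inside it is literal too). Use an alternation in a capturing group instead, listing the two-character operators first so they win over a bare `=`. A sketch:

```python
import re

# Two-character operators must come before their one-character prefixes.
pattern = re.compile(r"(!=|>=|<=|=|>|<)")

query = "book_id=123&start_date>=2023-09-12&end_date<=2023-09-14&status!=return"
for param in query.split("&"):
    # With a capturing group, re.split keeps the delimiter in the result.
    key, delim, val = pattern.split(param, maxsplit=1)
    print(key, delim, val)
```

`maxsplit=1` guards against values that themselves contain one of the operator characters.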
77,148,259 | 8,444,568 | Omitting middle characters if string is too long when display a pandas DataFrame | <p>When display a DataFrame in Jupyter Notebook, if the string value is too long, the last characters will be omitted:</p>
<pre><code>df = pd.DataFrame({'A':['ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZ']})
display(df)
</code></pre>
<p>output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>A</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRST...</td>
</tr>
</tbody>
</table>
</div>
<p>I want to change the behavior and only omit the middle characters of the string when it's too long:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>A</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>ABCDEFGHIJKLMNOPQRSTUVW...DEFGHIJKLMNOPQRSTUVWXYZ</td>
</tr>
</tbody>
</table>
</div>
<p>Is it possible?</p>
| <python><pandas> | 2023-09-21 07:59:04 | 2 | 893 | konchy |
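pandas has no built-in middle-ellipsis option, but both `to_string` and the notebook `Styler` accept per-value formatters, so a small helper can do it. A sketch (the 50-character width is an arbitrary choice):

```python
import pandas as pd

def mid_truncate(s, width=50):
    """Keep the ends of a long string and elide the middle with '...'."""
    s = str(s)
    if len(s) <= width:
        return s
    half = (width - 3) // 2
    return s[:half] + "..." + s[-(width - 3 - half):]

df = pd.DataFrame({"A": ["ABCDEFGHIJ" * 10]})

# Plain-text rendering with a per-column formatter:
print(df.to_string(formatters={"A": mid_truncate}))

# In a notebook, the Styler applies the same helper at render time
# without mutating the data:
# display(df.style.format(mid_truncate))
```

Since the formatter runs only at display time, the underlying column keeps its full strings.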
77,148,243 | 16,521,194 | How to check if an object has an attribute, even Optional? | <p>I was wondering if there was a way to list every attribute of a class, discriminating between optional and non-optional.</p>
<p>This method would ideally be able to handle the following example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Dict, Optional, Union
class TestClass:
id: int
name: Optional[str]
@staticmethod
def from_dict(testclass_dict: Dict[str, Union[int, str]]):
test: TestClass = TestClass()
non_opt_attributes, opt_attributes = attributes_lister(TestClass)
for attribute_name in non_opt_attributes:
try:
setattr(test, attribute_name, testclass_dict[attribute_name])
except KeyError as ke:
print(f"{attribute_name} is a non optional attribute of TestClass")
return None
for attribute in opt_attributes:
try:
setattr(test, attribute_name, testclass_dict[attribute_name])
except KeyError as ke:
setattr(test, attribute_name, None)
return test
</code></pre>
<p>This would then have this result:</p>
<pre class="lang-py prettyprint-override"><code>test1: TestClass = TestClass.from_dict({"name": "name"})
test2: TestClass = TestClass.from_dict({"id": 5})
test3: TestClass = TestClass.from_dict({"id": 5, "name": "name"})
test4: TestClass = TestClass.from_dict({"id": 5, "name": "name", "random": "value"})
</code></pre>
<pre class="lang-py prettyprint-override"><code>test1 = None
test2 = TestClass(id=5)
test3 = TestClass(id=5, name="name")
test4 = TestClass(id=5, name="name")
</code></pre>
<h2>EDIT</h2>
<p>I corrected my code following <a href="https://stackoverflow.com/users/2357112/user2357112">@user2357112</a>'s comment that <code>Optional</code> doesn't authorize absence, but simply for it to be <code>None</code>.<br />
That said, I don't see it having too much of an impact on the original question.</p>
| <python><python-typing> | 2023-09-21 07:56:28 | 1 | 1,183 | GregoirePelegrin |
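A sketch of `attributes_lister` using the typing introspection helpers (Python 3.8+ for `get_origin`/`get_args`): `Optional[X]` is just `Union[X, None]`, so an annotation counts as optional when its origin is `Union` and `NoneType` is among its arguments:

```python
from typing import Optional, Union, get_args, get_origin, get_type_hints

def attributes_lister(cls):
    """Split annotated attributes into (required, optional) name lists.

    "Optional" here means the annotation is Optional[X], i.e.
    Union[X, None] -- which only permits None, not absence.
    """
    required, optional = [], []
    for name, hint in get_type_hints(cls).items():
        if get_origin(hint) is Union and type(None) in get_args(hint):
            optional.append(name)
        else:
            required.append(name)
    return required, optional

class TestClass:
    id: int
    name: Optional[str]

print(attributes_lister(TestClass))  # -> (['id'], ['name'])
```

As the edit to the question notes, `Optional` only permits `None`; attributes may still be absent at runtime, so the try/except around the dict lookup is still needed when building the instance.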
77,148,198 | 1,532,338 | Failed to create folder in S3 bucket using boto3 | <p>I am trying to create a folder/directory in S3 bucket using boto3 library. Here is my sample code.</p>
<pre class="lang-py prettyprint-override"><code>import boto3
access_key = "REDACTED"
secret_key = "REDACTED"
endpoint = "http://REDACTED"
o = boto3.client(
's3',
endpoint_url=endpoint,
aws_access_key_id=access_key,
aws_secret_access_key=secret_key)
#MyBucket already exists
o.put_object(Bucket="MyBucket", Key="foo/")
</code></pre>
<p>Folder creation failed with the following error</p>
<pre><code>Traceback (most recent call last):
File "/tmp/tst.py", line 14, in <module>
o.put_object(Bucket="MyBucket", Key="foo/")
File "/home/abhishek.kulkarni/nutest/my_nutest_virtual_env/lib/python2.7/site- packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/abhishek.kulkarni/nutest/my_nutest_virtual_env/lib/python2.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (MissingContentMD5ForObjectLock) when calling the PutObject operation: You must provide a Content-MD5 header with object lock fields.
</code></pre>
| <python><amazon-s3><boto3> | 2023-09-21 07:49:21 | 1 | 3,846 | Abhishek Kulkarni |
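The error message indicates the bucket has S3 Object Lock enabled, and Object Lock requires a Content-MD5 header on PutObject. boto3's `put_object` accepts this as `ContentMD5`: the base64-encoded MD5 digest of the request body. A sketch of the digest helper (the boto3 call itself is shown in a comment, not run here):

```python
import base64
import hashlib

def content_md5(body: bytes) -> str:
    """Base64-encoded MD5 digest, as the Content-MD5 header expects."""
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

body = b""  # a folder marker object has an empty body
print(content_md5(body))  # -> 1B2M2Y8AsgTpgAmY7PhCfg==

# With boto3, pass it alongside the body:
# o.put_object(Bucket="MyBucket", Key="foo/", Body=body,
#              ContentMD5=content_md5(body))
```

Note that "folders" in S3 are only zero-byte marker objects whose keys end in `/`; many tools render the prefix as a folder without any marker at all.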
77,148,138 | 1,609,428 | How to iterate and read one row at at time from multiple parquet files? | <p>I know we can use <code>pyarrow</code> to read batches of rows from a given parquet file</p>
<pre><code>import pyarrow.parquet as pq
parquet_file = pq.ParquetFile('/mybigparquet.pq')
for i in parquet_file.iter_batches(batch_size=100):
do something
</code></pre>
<p>The problem is that I have several parquet files that are stored in the same directory, <code>/mybigparquetfiles/</code> as the files are generated from spark and they are named like <code>part_0001</code>, <code>part_0002</code>, etc.</p>
<p>I need to create an <strong>iterator</strong> that will yield one line at a time scanning through all the files. So by the time the yield iterator has finished <strong>all the files</strong> would have scanned exactly once.</p>
<p>Can I do something like this? Obviously the data is too large to fit into RAM.
Thanks!</p>
| <python><pyarrow> | 2023-09-21 07:41:59 | 1 | 19,485 | ℕʘʘḆḽḘ |
77,147,856 | 3,758,912 | Mixing Http and Websockets in Python Quart | <p>So my application needed to employ both a RESTful backend and a websockets backend. So I switched my API from Flask to Quart. However, even when following the docs to word the issue remains.</p>
<pre class="lang-py prettyprint-override"><code>@app.websocket("/gen-image")
async def gen_image_ws():
while True:
data = await websocket.receive()
if data == "Hello":
await websocket.send("World")
@app.route("/gen-image", methods=["GET", "POST"])
async def gen_image():
...
</code></pre>
<p>But I get a bad handshake error</p>
<pre><code>Starting WebSocket connection to ws://localhost:5500/gen-image
websocket: bad handshake
</code></pre>
<p>What am I doing wrong?</p>
| <python><websocket><python-asyncio><quart> | 2023-09-21 06:59:20 | 0 | 776 | Mudassir |
77,147,718 | 15,142,650 | Google gmail api rate limite exceed error | <p>I am using google gmail api for read otp, but whenever I try to run this function it somethimes gave <code>user rate limit exceed</code>, here what I should change to overcome this error. this error i am getting intermittently</p>
<pre><code>def get_otp():
logger.info("------ get_otp function started --------")
print("get_otp function started")
"""Shows basic usage of the Gmail API.
Lists the user's Gmail labels.
"""
creds = None
# The file token.json stores the user's access and refresh tokens, and is
# created automatically when the authorization flow completes for the first
# time.
if os.path.exists('token.json'):
creds = Credentials.from_authorized_user_file('token.json', SCOPES)
# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(
'credentials.json', SCOPES)
creds = flow.run_local_server(port=0)
# Save the credentials for the next run
with open('token.json', 'w') as token:
token.write(creds.to_json())
try:
# Call the Gmail API
service = build('gmail', 'v1', credentials=creds)
...rest of the code to read otp
</code></pre>
| <python><google-api><gmail><gmail-api> | 2023-09-21 06:36:50 | 1 | 1,658 | sanket kheni |
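`userRateLimitExceeded` is a per-user quota error that Google's API documentation says to handle with retries and exponential backoff rather than code changes elsewhere. A generic sketch (the Gmail call in the comment is hypothetical, and real code should catch `googleapiclient.errors.HttpError` with status 403/429 instead of a bare `Exception`):

```python
import random
import time

def with_backoff(func, retries=5, base_delay=1.0):
    """Call func(), retrying with exponential backoff plus jitter."""
    for attempt in range(retries):
        try:
            return func()
        except Exception:  # in real code: HttpError with status 403/429 only
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Hypothetical usage around the Gmail call:
# messages = with_backoff(
#     lambda: service.users().messages().list(userId="me").execute())
```

The jitter spreads out retries so several clients hitting the quota at once do not all retry in lockstep.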
77,147,449 | 5,844,628 | Transform comma separated string into separate rows in data frame | <p>I have a dataFrame that looks like this:</p>
<pre><code>user | items
--------------------------------------
user1 | apple, bread, cheese, orange
user2 | apple, corn, strawberry, squash
. | .
. | .
. | .
</code></pre>
<p>Final dataFrame should look like:</p>
<pre><code>user | items
--------------------------------------
user1 | apple
user1 | bread
user1 | cheese
user1 | orange
user2 | apple
user2 | corn
user2 | strawberry
user2 | squash
. | .
. | .
. | .
</code></pre>
<p>How would I transform the original dataFrame into the final dataFrame?</p>
| <python><pandas><dataframe> | 2023-09-21 05:43:13 | 3 | 417 | user5844628 |
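A sketch assuming pandas 0.25+: split each comma-separated string into a list, then let `DataFrame.explode` emit one row per list element:

```python
import pandas as pd

df = pd.DataFrame({
    "user": ["user1", "user2"],
    "items": ["apple, bread, cheese, orange", "apple, corn, strawberry, squash"],
})

out = (df.assign(items=df["items"].str.split(", "))
         .explode("items")
         .reset_index(drop=True))
print(out)
```

If the spacing after the commas varies, splitting on `","` and stripping each item afterwards is more robust than splitting on `", "`.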
77,147,430 | 2,759,719 | Calculating sales tax with Stripe tax | <p>I'm trying to calculate Stripe tax using Stripe Python library</p>
<pre><code>resp = stripe.tax.Calculation.create(
currency=USD_ISO_CODE,
line_items=[{"amount": total_base_cost, "reference": "L1"}],
shipping_cost={"amount": shipping_cost, "tax_code": "txcd_92010001"},
customer_details={
"address": {
"line1": "500 Anton Blvd",
"city": "Costa Mesa",
"state": "CA",
"postal_code": "92626",
"country": "US",
},
"address_source": "billing",
},
)
</code></pre>
<p>I've printed the response from Stripe here</p>
<pre><code>{
"amount_total": 1354611,
"currency": "usd",
"customer": null,
"customer_details": {
"address": {
"city": "Costa Mesa",
"country": "US",
"line1": "500 Anton Blvd",
"line2": null,
"postal_code": "92626",
"state": "CA"
},
"address_source": "billing",
"ip_address": null,
"tax_ids": [],
"taxability_override": "none"
},
"expires_at": 1695435437,
"id": "taxcalc_1NscS9GRnWL3YYIB9Tn5yAAG",
"livemode": false,
"object": "tax.calculation",
"shipping_cost": {
"amount": 0,
"amount_tax": 0,
"tax_behavior": "exclusive",
"tax_code": "txcd_92010001"
},
"tax_amount_exclusive": 0,
"tax_amount_inclusive": 0,
"tax_breakdown": [
{
"amount": 0,
"inclusive": false,
"tax_rate_details": {
"country": "HK",
"percentage_decimal": "0.0",
"state": null,
"tax_type": null
},
"taxability_reason": "not_subject_to_tax",
"taxable_amount": 0
}
],
"tax_date": 1695262637
}
</code></pre>
<p>As far as I know, California has a destination-based sales tax of 7.25%, but it is not being calculated. The reason given in the response is <code>"taxability_reason": "not_subject_to_tax"</code>.</p>
<p>In my Stripe tax settings, I've set the goods to <code>Tangible goods</code> and the business origin is HK.</p>
<p>What am I doing wrong here? Why is Stripe not adding in CA's sales tax?</p>
| <python><stripe-payments><stripe-tax> | 2023-09-21 05:38:23 | 0 | 4,199 | mrQWERTY |
77,147,297 | 1,609,428 | use gensim with a pyarrow iterable | <p>Consider this code</p>
<pre><code>import pyarrow.parquet as pq
from gensim.models import Word2Vec
parquet_file = pq.ParquetFile('/mybigparquet.pq')
for i in parquet_file.iter_batches(batch_size=100):
print("training on batch")
batch = i.to_pandas()
model = Word2Vec(sentences= batch.tokens, vector_size=100, window=5, workers=40, min_count = 10,epochs = 10)
</code></pre>
<p>As you can see, I am trying to train a <code>word2vec</code> model using a very large parquet file that does not fit entirely into my RAM. I know that <code>gensim</code> can operate on iterables (not generators, as the data need to be scanned twice in word2vec) and I know that <code>pyarrow</code> allows me to generate batches (of even one row) from the file.</p>
<p>Yet, this code is not working correctly. I think I need to write my <code>pyarrow</code> loop as a proper generator but I do not know how to do this.</p>
<p>What do you think?
Thanks!</p>
| <python><gensim><pyarrow> | 2023-09-21 04:59:43 | 1 | 19,485 | ℕʘʘḆḽḘ |
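The loop in the question above trains a brand-new model on every batch, overwriting the previous one. Since gensim needs to scan the corpus more than once, one common pattern is to wrap the batch-reading logic in a class whose `__iter__` starts a fresh pass each time it is called. A sketch with a stand-in generator (in the real code, `__iter__` would re-open the parquet file with pyarrow and yield one token list per row):

```python
class RestartableCorpus:
    """Iterable that gensim can scan multiple times.

    A generator is exhausted after one pass; wrapping the factory that
    creates it means every __iter__ call starts a fresh pass.
    """

    def __init__(self, make_generator):
        self.make_generator = make_generator

    def __iter__(self):
        return self.make_generator()

def sentences():
    # Stand-in: the real version would re-open the parquet file and
    # yield one token list per row, batch by batch.
    for tokens in [["hello", "world"], ["foo", "bar"]]:
        yield tokens

corpus = RestartableCorpus(sentences)
print([len(list(corpus)) for _ in range(2)])  # -> [2, 2]

# model = Word2Vec(sentences=corpus, vector_size=100, window=5,
#                  workers=40, min_count=10, epochs=10)
```

Passing `corpus` once to `Word2Vec` then lets gensim perform its vocabulary-building pass and its training epochs over the same restartable stream.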