QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName
|---|---|---|---|---|---|---|---|---|
77,781,957
| 7,875,799
|
How to prevent premature deletion of built-in functions by garbage collector on application exit
|
<p>Here is a short script with a class that dumps some data to disk upon destruction:</p>
<pre><code>import pickle
class MyClass(object):
def __init__(self):
self._some_data = "lorem ipsum dolor sit"
def __del__(self):
with open("/some/file/to/dump/data.pickle", 'wb') as ofile:
pickle.dump(self._some_data, ofile)
my_instance = MyClass()
# script ends, causing garbage collector to clean up and delete my_instance, which triggers __del__()
</code></pre>
<p>The issue is that <code>__del__()</code> uses the built-in <code>open()</code> function, which may be deleted by the garbage collector on application exit before <code>my_instance</code> is deleted. This then causes a <em>"NameError: name 'open' is not defined"</em> error. The reason for this problem is described in more detail here (but without a solution): <a href="https://stackoverflow.com/questions/64679139/nameerror-name-open-is-not-defined-when-trying-to-log-to-files">NameError: name 'open' is not defined When trying to log to files</a></p>
<p>However, when playing around I found a workaround: rebinding the open function to itself at module level with the silly-looking line <code>open = open</code>:</p>
<pre><code>import pickle
open = open
class MyClass(object):
def __init__(self):
self._some_data = "lorem ipsum dolor sit"
def __del__(self):
with open("/some/file/to/dump/data.pickle", 'wb') as ofile:
pickle.dump(self._some_data, ofile)
my_instance = MyClass()
</code></pre>
<p>This will now work and dump <code>self._some_data</code> to disk.</p>
<p>I have multiple questions:</p>
<ol>
<li>Why precisely does this actually work? My first instinct was that <code>MyClass</code> depends on the local <code>open()</code>, so the garbage collector will delete <code>open()</code> after <code>my_instance</code>. However, in that case I do not understand why the first version does not work, since it similarly depends on the built-in <code>open()</code>, which should also prevent its premature deletion.</li>
<li>Is this workaround actually reliable? I might just have been lucky, and in another version of Python this would not work.</li>
<li>Is there a better way to solve this problem? While there is some elegance in the simplicity of this workaround, this also looks very hacky.</li>
</ol>
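<p>For what it's worth, a commonly suggested alternative to relying on <code>__del__</code> at shutdown is to register the cleanup with <code>atexit</code>: its callbacks run during normal interpreter shutdown, before module globals and builtins are torn down. A minimal sketch (the pickle path here is a placeholder):</p>

```python
import atexit
import pickle

class MyClass:
    def __init__(self):
        self._some_data = "lorem ipsum dolor sit"
        # atexit callbacks fire during normal interpreter shutdown,
        # while builtins like open() are still available
        atexit.register(self._dump)

    def _dump(self):
        # "data.pickle" is a placeholder path for this sketch
        with open("data.pickle", "wb") as ofile:
            pickle.dump(self._some_data, ofile)

my_instance = MyClass()
```

<p>Note that registering a bound method keeps the instance alive until the callback runs, which is usually what you want for a "dump on exit" pattern.</p>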
|
<python><garbage-collection>
|
2024-01-08 16:39:47
| 1
| 2,462
|
Leander Moesinger
|
77,781,920
| 1,911,091
|
protobuf serializetostring in python ends with \x00, reparsing in C++ fails
|
<p>I have the following proto file :</p>
<pre><code>// [START declaration]
syntax = "proto3";
...
// message from cloud to device
message Downstream
{
// types
message GetValues{}
// fields
uint32 TargetDeviceAddress=1;
oneof sub_message{
Axis target_axis=2; // new axis values to set
GetValues get_light=3; // get light sensor values
}
}
...
// [END messages]
</code></pre>
<p>and use the following code in python to serialize protobuf data (to send it over the wire)</p>
<pre><code> downstream = RobotControlInterface_pb2.Downstream()
downstream.TargetDeviceAddress = 1
downstream.get_light.SetInParent()
if downstream.IsInitialized():
downstream_serialized = downstream.SerializeToString() #despite its name, returns the bytes type, not the str type.
print(downstream_serialized)
print(type(downstream_serialized))
</code></pre>
<p>The result is :</p>
<pre><code>b'\x08\x01\x1a\x00'
<class 'bytes'>
</code></pre>
<p>Trying to <code>ParseFromString</code> in C++ fails.
The Python data is received in C++ like this:</p>
<pre><code>std::string serializedPayload = (char*)message->payload;
control_interface::Downstream downstream_message;
if (downstream_message.ParseFromString(serializedPayload))
//Fails every time
</code></pre>
<p>But now the data changed to
<a href="https://i.sstatic.net/gqz7M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gqz7M.png" alt="enter image description here" /></a></p>
<p>Which is the same as <code>"\x08\x01\x1a"</code>, i.e. the trailing <code>\x00</code> byte is gone!</p>
<p>How can this be solved ?</p>
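<p>For context, the trailing <code>\x00</code> is a significant part of the message (it encodes the empty <code>get_light</code> sub-message), so any copy that treats the buffer as a NUL-terminated C string silently drops it. A small Python sketch of the effect:</p>

```python
# the serialized Downstream message from the question
data = b"\x08\x01\x1a\x00"  # field 1 = 1, field 3 = zero-length sub-message

# what a strlen()-based copy (std::string constructed from a bare char*)
# effectively does: stop at the first NUL byte
truncated = data[:data.index(b"\x00")] if b"\x00" in data else data

assert truncated == b"\x08\x01\x1a"  # the 4th byte is lost
assert len(data) == 4
```

<p>On the C++ side the usual remedy is to construct the string with an explicit length, e.g. <code>std::string(static_cast&lt;const char*&gt;(message-&gt;payload), payload_len)</code>, where <code>payload_len</code> stands in for whatever length field your message struct actually provides.</p>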
|
<python><c++><protocol-buffers>
|
2024-01-08 16:33:26
| 0
| 1,442
|
user1911091
|
77,781,909
| 4,025,749
|
Priority is not integer in clickup api
|
<p>I am using <a href="https://github.com/Imzachjohnson/clickupython" rel="nofollow noreferrer">clickupython</a> to call the ClickUp API. When a task's Priority is set in the ClickUp dashboard (e.g. High), I get this error:</p>
<pre class="lang-py prettyprint-override"><code> # Example request | Creating a task in a list
c = client.ClickUpClient(API_KEY)
tasks = c.get_tasks(self.list_id, include_closed=True)
</code></pre>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "C:\PROJECT\Clickup\main.py", line 29, in main
click_up_client.fetch_data()
File "C:\PROJECT\Clickup\request_manager\clickup\client.py", line 40, in fetch_data
tasks = c.get_tasks(self.list_id, include_closed=True)
File "C:\Users\x\.conda\envs\general\lib\site-packages\clickupython\client.py", line 631, in get_tasks
return models.Tasks.build_tasks(fetched_tasks)
File "C:\Users\x\.conda\envs\general\lib\site-packages\clickupython\models.py", line 703, in build_tasks
return Tasks(**self)
File "pydantic\main.py", line 406, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Tasks
tasks -> 2 -> priority
value is not a valid integer (type=type_error.integer)
</code></pre>
<p>This is a part of the result as JSON from postman and clickup API:</p>
<pre class="lang-json prettyprint-override"><code> "watchers": [],
"checklists": [],
"tags": [],
"parent": null,
"priority": {
"color": "#f8ae00",
"id": "2",
"orderindex": "2",
"priority": "high"
},
</code></pre>
<p>As you can see, the priority is a JSON object, not an int.
According to the <a href="https://clickup.com/api/developer-portal/tasks/#setting-priority" rel="nofollow noreferrer">documentation</a>, it should be:</p>
<blockquote>
<p>Setting priority<br />
priority is a number that corresponds to the Priorities available in the ClickUp UI. As in the ClickUp UI, priorities cannot be customized. 1 is Urgent 2 is High 3 is Normal 4 is Low</p>
</blockquote>
<p>So what is the solution?</p>
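<p>Until the library handles the object form, one workaround sketch is to work with the raw API response and pull the numeric level out of the nested object yourself (the task dict shape below is taken from the JSON in the question):</p>

```python
# excerpt of a task as returned by the ClickUp API (from the question)
task = {
    "priority": {"color": "#f8ae00", "id": "2", "orderindex": "2", "priority": "high"},
}

def priority_level(task):
    """Return the numeric priority (1=Urgent .. 4=Low), or None if unset."""
    prio = task.get("priority")
    # the numeric level lives in the nested object's "id" field, as a string
    return int(prio["id"]) if prio else None

assert priority_level(task) == 2
```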
|
<python><clickup-api>
|
2024-01-08 16:30:10
| 1
| 616
|
Omid Erfanmanesh
|
77,781,835
| 8,280,171
|
Passing Secret Value to DocumentDB
|
<p>I'm creating a secret and passing it as a user password to AWS DocumentDB with the Python CDK, but I keep getting a "Could not parse SecretString JSON" error.</p>
<pre><code> excluded_characters = "!@#$%^&*()`~,}{[]_=+'?\/|<>:;-" + '"'
generated_database_password = secretsmanager.Secret(
self,
"DatabasePassword",
generate_secret_string=secretsmanager.SecretStringGenerator(
password_length=8,
exclude_characters=excluded_characters,
exclude_punctuation=True,
),
)
# Creating secret manager to store database environment variables
cluster_secret_manager = secretsmanager.Secret(
self,
"DbSecrets",
secret_name="db_secret",
secret_object_value={
"db_admin": SecretValue.unsafe_plain_text("admin"),
"db_pw": generated_database_password.secret_value,
},
)
vpc = ec2.Vpc.from_lookup(self, "VPCLookup", vpc_id=props.vpc_id)
application_subnet = vpc.select_subnets(
subnet_group_name="application"
).subnet_ids
cluster = docdbelastic.CfnCluster(
self,
"MongoDbCluster",
admin_user_name=f"{props.customer}",
auth_type="PLAIN_TEXT",
cluster_name=f"{props.customer}-mongodb-cluster",
shard_capacity=2,
shard_count=4,
admin_user_password=generated_database_password.secret_value_from_json("database_password").unsafe_unwrap(),
subnet_ids=application_subnet,
)
</code></pre>
<p>How do I convert the value to a <code>str</code>? Or am I using the wrong method?</p>
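<p>One likely cause (an assumption based on the error text): <code>secret_value_from_json()</code> only works when the stored SecretString is a JSON document, but a secret generated with just <code>password_length</code> and no <code>secret_string_template</code> is a bare password string, so there is no <code>"database_password"</code> key to look up. Using <code>generated_database_password.secret_value</code> directly (with <code>.unsafe_unwrap()</code> if the construct insists on a plain <code>str</code>) would avoid the JSON parse entirely. The mechanics in plain Python:</p>

```python
import json

# hypothetical value of a secret generated with only password_length set:
secret_string = "p4ssw0rd"  # a bare string, not a JSON document

parsed = None
try:
    # this is roughly what secret_value_from_json("database_password") has to do
    parsed = json.loads(secret_string)["database_password"]
except json.JSONDecodeError:
    pass  # -> "Could not parse SecretString JSON"

assert parsed is None
```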
|
<python><amazon-web-services><aws-cdk><aws-secrets-manager>
|
2024-01-08 16:12:34
| 1
| 705
|
Jack Rogers
|
77,781,781
| 6,630,397
|
instantiate a django form in readonly mode in a view function
|
<p>I have a django 4 form having some widgets for the user to select some values:</p>
<pre class="lang-py prettyprint-override"><code>from django import forms
from .app.model import FooForm
from bootstrap_datepicker_plus.widgets import DatePickerInput
class FooForm(forms.ModelForm):
# stuff
class Meta:
model = FooModel
fields = [
"user_id",
"created_at",
"some_other_field",
]
widgets = {
"user_id": forms.NumberInput(),
"created_at": DatePickerInput(),
"some_other_field": forms.NumberInput(),
}
</code></pre>
<p>I instantiate that form in several functions in the <code>views.py</code> file:</p>
<pre class="lang-py prettyprint-override"><code>my_form_instance = forms.FooForm(
data=data,
user=request.user,
)
</code></pre>
<p>One function <code>create_or_edit_foo()</code> for creating a new or editing an existing record. And another function <code>delete_foo()</code> to delete a record. In the deletion function (used by a specific endpoint <code>/foo/12/delete</code>), I want to display the form but in read-only mode.</p>
<p>How could I achieve that?</p>
<p>I know I can add <code>attrs={'readonly': True,}</code> to each of the widgets to make them non-editable.
But this would also apply to the form instances used by <code>create_or_edit_foo()</code> in my views.</p>
<p>I'm using django 4.2 on Ubuntu 22.04 with Python 3.10.6.</p>
|
<python><django><django-forms><readonly><django-widget>
|
2024-01-08 16:01:31
| 2
| 8,371
|
swiss_knight
|
77,781,712
| 15,912,168
|
DataFrame to XML Conversion: Price Value Multiplication Issue in Pandas
|
<p>I'm encountering an issue while working with pandas in Python. I have a script that receives a DataFrame containing product details, prices, and other information. My goal is to convert this DataFrame into an XML format to subsequently send it to a database. There are additional processes involved in this workflow.</p>
<p>The problem arises when handling the 'price' field. The expected price value is 336117.6. However, during the conversion or insertion process, the value seems to be getting multiplied by 10, resulting in 3.361.176.</p>
<p>Here's a snippet of the XML structure:</p>
<pre><code><row>
<Price>336117.6</Price>
</row>
</code></pre>
<p>To address this issue, I tried converting the 'Price' column to a numeric type using the pd.to_numeric function as follows:</p>
<pre><code>df['preço'] = pd.to_numeric(df['Preço'], errors='coerce',)
</code></pre>
<p>Unfortunately, this didn't resolve the issue, and the behavior remains consistent.</p>
<p>I would appreciate any insights or suggestions on how to troubleshoot and resolve this issue. Thank you in advance for your help!</p>
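<p>A guess worth checking (the column name suggests pt-BR data): if 'Preço' arrives as strings with '.' as thousands separator and ',' as decimal separator, a naive numeric conversion silently shifts the value; '336.117,6' read as 3361176 is exactly the "multiplied by 10" effect described. Normalizing the separators before <code>pd.to_numeric</code> avoids it:</p>

```python
import pandas as pd

# hypothetical pt-BR formatted price strings ('.' thousands, ',' decimal)
df = pd.DataFrame({"Preço": ["336.117,6", "1.234,5"]})

cleaned = (df["Preço"]
           .str.replace(".", "", regex=False)    # drop thousands separators
           .str.replace(",", ".", regex=False))  # comma -> decimal point
df["preço"] = pd.to_numeric(cleaned, errors="coerce")

assert df["preço"].tolist() == [336117.6, 1234.5]
```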
|
<python><pandas><dataframe>
|
2024-01-08 15:49:57
| 1
| 305
|
WesleyAlmont
|
77,781,708
| 23,196,983
|
seaborn jointplot axes don't match after adding a colorbar
|
<p>I am using seaborn to create a jointplot with kind='hist'. When I add the colorbar for the histogram to the plot the grid for the jointplot and the single histogram no longer match. I am pretty sure this is because I add the colorbar to the axis, but I have no idea how to do it differently and wasn't able to find anything helpful here or elsewhere.</p>
<pre class="lang-py prettyprint-override"><code>import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.axes_grid1 import make_axes_locatable
#Function I use to generate some sampling weights (my real data is very imbalanced)
def generate_weights(n_bins:int, data:np.ndarray)->np.ndarray:
hist_data, bin_edges = np.histogram(data, bins=n_bins)
indices = np.digitize(data, bin_edges[:-1])-1
weights = 1.0/hist_data[indices]
return weights
#generating some data to plot
dataset_1 = np.random.normal(0, 50, size=1000)
dataset_2 = np.random.normal(9, 55, size=1000)
#generating weights for the data
weights_1 = generate_weights(50, dataset_1)
#weights_2 = generate_weights(50, dataset_2)
#calculate the min and max of both datasets to use them later for calculating the bins and for the limits of the axes
min_val = min([np.min(dataset_1), np.min(dataset_2)])
max_val = max([np.max(dataset_1), np.max(dataset_2)])
bin_width = (max_val-min_val)/100
# calculate bins
bins_2d = (np.linspace(min_val, max_val, 100), np.linspace(min_val, max_val, 100))
sns.set_style('darkgrid')
figsize = (8,6)
jointplot_hist = sns.jointplot(
x = dataset_1,
y = dataset_2,
kind = 'hist',
cmap = 'viridis',
bins = bins_2d,
**{'weights': weights_1}
)
#set axis labels
jointplot_hist.set_axis_labels('dataset 1', 'dataset_2')
#set axis limits
jointplot_hist.ax_joint.set_xlim([min_val, max_val])
jointplot_hist.ax_joint.set_ylim([min_val, max_val])
#plot a diagonal line
plt.plot([min_val, max_val], [min_val, max_val], color='red', linestyle='-', linewidth=0.5)
#here I add the colorbar
divider = make_axes_locatable(jointplot_hist.ax_joint)
cax = divider.append_axes('right', size='5%', pad=0.05)
mappable = jointplot_hist.ax_joint.collections[0]
plt.colorbar(mappable, cax=cax)
</code></pre>
<p>Plot where the grid no longer matches:</p>
<p><img src="https://i.sstatic.net/IY6mS.png" alt="Plot where the grid no longer matches" /></p>
<p>I tried to set the colorbar differently, like this:</p>
<pre class="lang-py prettyprint-override"><code>cbar_ax = jointplot_hist.fig.add_axes([0.85, 0.15, 0.05, 0.7])
mappable = jointplot_hist.ax_joint.collections[0]
jointplot_hist.fig.colorbar(mappable, cax=cbar_ax)
</code></pre>
<p>but then I do not have it placed between the jointplot and the individual plot which is the design I would like to get. Also adjusting the size is tricky.</p>
|
<python><matplotlib><seaborn>
|
2024-01-08 15:48:35
| 1
| 310
|
Frede
|
77,781,377
| 1,396,516
|
How to run a one-off Python unzipping script (19Gb .7z archive) on GC?
|
<p>I need to extract files from the archive on a one-off basis. The Python code that can handle the task is very simple:</p>
<pre><code>from py7zr import unpack_7zarchive
import shutil
shutil.register_unpack_format('7zip', ['.7z'], unpack_7zarchive)
path = 'gs://my_bucket/my_archive.7z'
shutil.unpack_archive(path, '')
</code></pre>
<p>I need it to run on the cloud because the archive is huge (19Gb).</p>
<p>There are many unzipping solutions listed <a href="https://stackoverflow.com/questions/49541026/how-do-i-unzip-a-zip-file-in-google-cloud-storage">here</a>, but all Pythonic solutions are run as Google Functions, and Google Functions have a <a href="https://cloud.google.com/functions/quotas" rel="nofollow noreferrer">16Gb memory limit</a>.</p>
|
<python><google-cloud-storage><7zip>
|
2024-01-08 14:58:00
| 1
| 3,567
|
Yulia V
|
77,781,150
| 730,858
|
Descriptor not being invoked and not getting the desired output
|
<pre><code>class MaxLength:
def __init__(self, max_length):
self.max_length = max_length
def __get__(self, instance, owner):
print("get called")
return self.max_length
def __set__(self, instance, value):
if not isinstance(value, str):
raise ValueError("Value must be a string")
if len(value) > self.max_length:
raise ValueError(f"String length exceeds the maximum length of {self.max_length}")
class LimitedString:
def __init__(self, value, max_length):
self._max_length = MaxLength(max_length)
self.value = value
@property
def max_length(self):
return self._max_length
@max_length.setter
def max_length(self, value):
self.value = MaxLength(self._max_length)
limited_str = LimitedString("Short", 10)
# Access the 'max_length' attribute
print(limited_str.max_length) #Should give Output: 10
limited_str.value = "Too long string" #Should Raises a ValueError
</code></pre>
<p>I am trying to create a descriptor <code>MaxLength</code> such that, when we instantiate <code>LimitedString("Short", 10)</code> with <code>value="Short"</code> and <code>max_length=10</code>, assigning a string longer than the maximum length raises a <code>ValueError</code>, with all validation performed in the descriptor class <code>MaxLength</code>.</p>
<p>I neither get the output value of 10 for <code>limited_str.max_length</code>, nor does it raise a <code>ValueError</code> for <code>limited_str.value = "Too long string"</code>.</p>
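<p>For reference, a descriptor only participates in attribute lookup when it is a class attribute; storing <code>MaxLength(...)</code> on <code>self</code> bypasses <code>__get__</code>/<code>__set__</code> entirely. A reworked sketch of the code above (note that <code>max_length</code> becomes a per-class setting here, which is a deliberate simplification):</p>

```python
class MaxLength:
    def __init__(self, max_length):
        self.max_length = max_length

    def __set_name__(self, owner, name):
        self.name = name  # the attribute name this descriptor manages

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__.get(self.name)

    def __set__(self, instance, value):
        if not isinstance(value, str):
            raise ValueError("Value must be a string")
        if len(value) > self.max_length:
            raise ValueError(
                f"String length exceeds the maximum length of {self.max_length}")
        instance.__dict__[self.name] = value

class LimitedString:
    value = MaxLength(10)  # class attribute: the descriptor protocol now applies

    def __init__(self, value):
        self.value = value  # routed through MaxLength.__set__

limited_str = LimitedString("Short")
```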
|
<python>
|
2024-01-08 14:22:26
| 0
| 4,694
|
munish
|
77,781,132
| 2,971,574
|
Transform pyspark dataframe to code / syntax
|
<p>Let's assume I've got the following pyspark dataframe in Databricks:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>some_other_column</th>
<th>price_history</th>
</tr>
</thead>
<tbody>
<tr>
<td>test1</td>
<td>[{"date":"2021-03-21T01:20:33Z","price_tag":"N","price":"9.23","price_promotion":"1.34","AKT":false,"my_column":null,"supplier":"some_supplier"}]</td>
</tr>
<tr>
<td>test2</td>
<td>[{"date":"2021-03-23T01:20:33Z","price_tag":null,"price":"10.40","price_promotion":null,"AKT":true,"my_column":"something","supplier":null}]</td>
</tr>
</tbody>
</table>
</div>
<p>You can see immediately that there is a nested column structure in the column "price_history". But how do I get the code to construct this table? Is there a way to transform this table into actual code which can then be used to create the table again?</p>
<p>The only way I know to get more information about the table is to use <code>df.schema</code>, but the resulting schema does not tell me how to manually create dataframes that have complicated data structures (like structs, MapType, ArrayType, etc.). If there were a way to get the code, tests could be written much more easily, as example data could be created quickly.</p>
<p>So what I'd like to get is the following code:</p>
<pre><code>from datetime import datetime
from decimal import Decimal
from pyspark.sql.types import (StructType, StructField, StringType,
ArrayType, TimestampType, BooleanType,
DecimalType)
schema = StructType([
StructField("some_other_column", StringType()),
StructField('price_history', ArrayType(StructType([
StructField('date', TimestampType()),
StructField('price_tag', StringType()),
StructField('price', DecimalType(12, 2)),
StructField('price_promotion', DecimalType(12, 2)),
StructField('AKT', BooleanType()),
StructField("my_column", StringType()),
StructField("supplier", StringType()),
]))),
])
data = [
('test1', [(datetime(2021, 3, 21, 1, 20, 33), 'N',
Decimal('9.23'), Decimal('1.34'), False, None, 'some_supplier')]),
('test2', [(datetime(2021, 3, 23, 1, 20, 33), None,
Decimal('10.40'), None, True, 'something', None)]),
]
df = spark.createDataFrame(schema=schema, data=data)
</code></pre>
|
<python><pyspark>
|
2024-01-08 14:19:48
| 2
| 555
|
the_economist
|
77,781,015
| 1,803,648
|
global python library installation where pipx is not an option
|
<p>I understand as of <a href="https://peps.python.org/pep-0668/" rel="nofollow noreferrer">PEP-0668</a> we're supposed to be installing Python executables via <code>pipx</code>. What about packages such as the <a href="https://mopidy.com/" rel="nofollow noreferrer">mopidy</a> plugin <a href="https://mopidy.com/ext/spotify/" rel="nofollow noreferrer">mopidy-spotify</a>? It doesn't install via <code>pipx</code>, and as the mopidy service is run as the <code>mopidy</code> user, it should be installed globally.</p>
<p>Is <code>sudo python3 -m pip install Mopidy-Spotify --break-system-packages</code> appropriate in such situation?</p>
<p>Edit: let's assume mopidy itself was installed via OS package manager, eg from <a href="https://packages.debian.org/testing/mopidy" rel="nofollow noreferrer">apt</a>.</p>
<hr />
<p>See also: <a href="https://stackoverflow.com/questions/75602063/pip-install-r-requirements-txt-is-failing-this-environment-is-externally-mana">pip install -r requirements.txt is failing: "This environment is externally managed"</a></p>
|
<python><linux><pip><debian><pipx>
|
2024-01-08 14:08:07
| 0
| 598
|
laur
|
77,780,873
| 1,413,513
|
Removing certain end-standing values from list in Python
|
<p>Is there an elegant Pythonic way to perform something like <code>rstrip()</code> on a list?</p>
<p>Imagine, I have different lists:</p>
<pre><code>l1 = ['A', 'D', 'D']
l2 = ['A', 'D']
l3 = ['D', 'A', 'D', 'D']
l4 = ['A', 'D', 'B', 'D']
</code></pre>
<p>I need a function that will remove <em>all end-standing</em> <code>'D'</code> elements from a given list (but not those that come before or in between other elements!).</p>
<pre><code>for mylist in [l1, l2, l3, l4]:
print(mylist, ' => ', remove_end_elements(mylist, 'D'))
</code></pre>
<p>So the desired output would be:</p>
<pre><code>['A', 'D', 'D'] => ['A']
['A', 'D'] => ['A']
['D', 'A', 'D', 'D'] => ['D', 'A']
['A', 'D', 'B', 'D'] => ['A','D','B']
</code></pre>
<p>One implementation that does the job is this:</p>
<pre><code>def remove_end_elements(mylist, myelement):
counter = 0
for element in mylist[::-1]:
if element != myelement:
break
counter -= 1
return mylist[:counter]
</code></pre>
<p>Is there a more elegant / efficient way to do it?</p>
<p>To answer comment-questions:</p>
<ul>
<li>Either a new list or modifying the original list is fine (although the above implementation has creating a new list in mind).</li>
<li>The real lists contain multi-character-strings (lines from a text file).</li>
<li>What I'm actually trying to strip away are lines that fulfill certain criteria for "empty" (no characters OR only whitespace OR only whitespace and commas). I have that check implemented elsewhere.</li>
<li>These empty lines can be an arbitrary number at the end of the list, but in most cases will be 1.</li>
</ul>
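<p>For comparison, the reversed-slice scan above can also be written without building a reversed copy: walk an index in from the right and slice once at the end (swap the equality test for your <code>is_empty_line()</code> check as needed):</p>

```python
def remove_end_elements(mylist, myelement):
    # find the first index from the right whose element should be kept
    i = len(mylist)
    while i and mylist[i - 1] == myelement:
        i -= 1
    return mylist[:i]

assert remove_end_elements(['A', 'D', 'D'], 'D') == ['A']
assert remove_end_elements(['D', 'A', 'D', 'D'], 'D') == ['D', 'A']
assert remove_end_elements(['A', 'D', 'B', 'D'], 'D') == ['A', 'D', 'B']
```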
<hr />
<p>I timed the different solutions offered so far, with simulated data close to my actual use case, and the actual is_empty_line() function that I'm using:</p>
<ul>
<li><a href="https://stackoverflow.com/a/77786049/1413513">Kelly Bundy's solution</a>: 0.029670200019609183</li>
<li><a href="https://stackoverflow.com/a/77781070/1413513">Guy's solution</a>: 0.038380099984351546</li>
<li>my original solution: 0.03837349999230355</li>
<li><a href="https://stackoverflow.com/a/77785322/1413513">cards' solution</a>: 0.0408437000005506</li>
<li><a href="https://stackoverflow.com/a/77781188/1413513">Timeless' solution</a>: 0.08083210000768304</li>
</ul>
<p>Which one performs better does seem to depend on the complexity of the <code>is_empty_line()</code> function (except for Timeless' solution, which is consistently slower than everything else, and KellyBundy's solution, which is consistently faster).</p>
|
<python><list>
|
2024-01-08 13:54:46
| 5
| 5,461
|
CodingCat
|
77,780,829
| 22,466,650
|
How to avoid redundant logins when instantiating inter-dependent objects?
|
<p>I have two classes: <code>LogSess</code> (for login) and <code>DataReq</code> (for fetching a json).</p>
<p>The problem I face is that the instantiation of <code>DataReq</code> depends on that of <code>LogSess</code>. So whenever I create <code>DataReq</code> objects, I keep logging in again and again. The desired behaviour is to trigger the login only if it's the first time we instantiate a <code>LogSess</code> object, or if the cookies have expired, or if the website returns a response that contains the text "You must login".</p>
<p>Do you know how to achieve this? Is this even possible? I'm open to any suggestion.</p>
<p>PS: I have three more similar classes like <code>DataReq</code> that also depend on <code>LogSess</code>.</p>
<pre><code>import requests
URL_LOGIN = "https://www/login..."
URL_TDATA = "https://www/services-..."
class LogSess:
def __init__(self, username="foo", password="bar"):
self.username = username
self.password = password
@property
def session(self):
with requests.Session() as session:
session.post(URL_LOGIN, data={"login": self.username, "password": self.password})
return session
class DataReq:
def __init__(self, name, data):
self.name = name
self.data = None
@property
def data(self):
response = LogSess().session.post(URL_TDATA, data={"name": self.name})
try:
return response.json()
        except requests.exceptions.JSONDecodeError:
print("Session expired! Need to re-login.")
</code></pre>
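<p>The usual shape of a fix (sketched generically here, since the real login needs the network) is to create the session once and memoize it, e.g. with <code>functools.lru_cache</code>, and have every <code>DataReq</code>-like class ask for the shared session instead of building a fresh <code>LogSess</code> per call:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def get_session():
    # stand-in for the real login: replace with requests.Session()
    # plus the POST to URL_LOGIN; call get_session.cache_clear()
    # when the site answers "You must login" to force a re-login
    print("logging in...")  # happens only on the first call
    return object()         # placeholder for the requests.Session

s1 = get_session()
s2 = get_session()
assert s1 is s2  # the login ran a single time
```

<p>Note also that the <code>with requests.Session() as session:</code> block in the question closes the session before returning it, so even repeated logins hand back a closed session.</p>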
|
<python><class><python-requests>
|
2024-01-08 13:51:28
| 1
| 1,085
|
VERBOSE
|
77,780,714
| 8,921,867
|
SQLAlchemy why have model classes inherit from class Base(DeclarativeBase) instead of DeclarativeBase directly?
|
<p>The ORM quickstart guide for SQLAlchemy 2.0 defines a practice where you should define a simple child class of <code>DeclarativeBase</code> and let all your model classes inherit from that.</p>
<p>Example straight from the <a href="https://docs.sqlalchemy.org/en/20/orm/quickstart.html#learn-the-above-concepts-in-depth" rel="nofollow noreferrer">docs</a>:</p>
<pre><code>>>> class Base(DeclarativeBase):
... pass
>>> class User(Base):
... __tablename__ = "user_account"
... # more stuff here
>>> class Address(Base):
... __tablename__ = "address"
... # more stuff here
</code></pre>
<p>Now I'm wondering why this is defined as the "standard" approach instead of just letting all my model classes inherit from <code>DeclarativeBase</code> directly, as <code>Base</code> doesn't seem to add any additional functionality. Something like:</p>
<pre><code>>>> class User(DeclarativeBase):
... __tablename__ = "user_account"
... # more stuff here
>>> class Address(DeclarativeBase):
... __tablename__ = "address"
... # more stuff here
</code></pre>
|
<python><sqlalchemy>
|
2024-01-08 13:41:54
| 0
| 2,172
|
emilaz
|
77,780,485
| 11,462,274
|
A template to bring the CMD window to the foreground while executing any Python code
|
<p>When I want to bring VS Code to the foreground while running any Python code in the terminal, I do it this way:</p>
<pre class="lang-python prettyprint-override"><code>import subprocess
import os
VS_CODE_PATH = r'C:\Users\Computador\AppData\Local\Programs\Microsoft VS Code\Code.exe'
def attention() -> None:
subprocess.Popen([VS_CODE_PATH, os.getcwd()])
</code></pre>
<p>I tried directly using the path to CMD:</p>
<pre class="lang-python prettyprint-override"><code>import subprocess
import os
CMD_PATH = r'C:\WINDOWS\system32\cmd.exe'
def attention() -> None:
subprocess.Popen([CMD_PATH, os.getcwd()])
</code></pre>
<p>But it didn't bring what's running to the foreground as I wanted.</p>
<p>How should I proceed so that it performs this desired action for me?</p>
|
<python><cmd>
|
2024-01-08 13:14:53
| 1
| 2,222
|
Digital Farmer
|
77,780,347
| 1,391,441
|
Evaluating Laplacian using sympy does not match Wolfram-Alpha
|
<p>I have a function whose Laplacian needs to be solved and evaluated. Using <a href="https://www.sympy.org/" rel="nofollow noreferrer"><code>sympy</code></a> and <a href="https://www.wolframalpha.com/input?i=laplacian%20A%2Fsqrt%28x%5E2%2B%28B%2Bsqrt%28y%5E2%2BC%29%29%5E2%29&lang=es" rel="nofollow noreferrer">Wolfram-Alpha</a> I can obtain the Laplacian and evaluate it, the problem is that the values do not match.</p>
<pre><code>import numpy as np
import sympy
# Constants
A = -1.3
B = 5.4
C = 7.9
# Evaluation points for variables `x,y`
x_eval = 2.4
y_eval = 6.5
# Solve and evaluate Laplacian of `func` using sympy
x = sympy.symbols('rho', real=True, positive=True)
y = sympy.symbols('z', real=True, positive=True)
func = A / sympy.sqrt(x**2 + (B + sympy.sqrt(y**2 + C**2))**2)
laplace_f = sympy.laplace_transform(func, x, y, noconds=True)
print("sympy:", laplace_f.evalf(subs={x: x_eval, y: y_eval}))
# Evaluate solved Laplacian of `func` by Wolfram-Alpha
WA_laplace_f = (
-(A * (B**3 * C + B**2 * (4 * C - y_eval**2) * np.sqrt(C + y_eval**2) + B * (5 * C**2 + C * (x_eval**2 + 3 * y_eval**2) - 2 * y_eval**4) +
np.sqrt(C + y_eval**2) * (2 * C**2 + C * (y_eval**2 - x_eval**2) - y_eval**2 * (x_eval**2 + y_eval**2)))) /
((C + y_eval**2)**(3/2) * (B**2 + 2 * B * np.sqrt(C + y_eval**2) + C + x_eval**2 + y_eval**2)**(5/2)))
print("Wolfram:", WA_laplace_f)
</code></pre>
<p>The results are:</p>
<pre><code>sympy: -0.0127943826586774
Wolfram: -0.0002685293588919364
</code></pre>
<p>What am I missing here?</p>
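<p>Note that <code>sympy.laplace_transform</code> computes the integral Laplace transform, which is unrelated to the Laplacian operator that the Wolfram-Alpha query evaluates; in sympy the Cartesian Laplacian is simply the sum of the unmixed second partial derivatives. A sketch (also worth checking: the sympy expression above uses <code>C**2</code> where the linked Wolfram query uses <code>C</code>, which changes the numbers by itself):</p>

```python
import sympy

x, y = sympy.symbols("rho z", real=True, positive=True)
A, B, C = -1.3, 5.4, 7.9
func = A / sympy.sqrt(x**2 + (B + sympy.sqrt(y**2 + C**2))**2)

# Laplacian = d2f/dx2 + d2f/dy2, not a Laplace transform
laplacian = sympy.diff(func, x, 2) + sympy.diff(func, y, 2)
print(laplacian.evalf(subs={x: 2.4, y: 6.5}))
```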
|
<python><sympy><laplacian>
|
2024-01-08 12:58:59
| 0
| 42,941
|
Gabriel
|
77,780,228
| 469,294
|
pandas Series replace with backfill alternative
|
<p>The documentation of <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.replace.html" rel="nofollow noreferrer">pandas.Series.replace</a> includes an example:</p>
<pre><code>>> import pandas as pd
>> s = pd.Series([1, 2, 3, 4, 5])
>> s.replace([1, 2], method='bfill')
0 3
1 3
2 3
3 4
4 5
dtype: int64
</code></pre>
<p>but it results now in a warning:</p>
<blockquote>
<p>FutureWarning: The 'method' keyword in Series.replace is deprecated
and will be removed in a future version.</p>
</blockquote>
<p>What would then be the alternative way of getting the same result stated in the example?</p>
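<p>One deprecation-safe equivalent sketch: mask the values to be replaced to NaN, backfill, and restore the original dtype:</p>

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5])

# mask the values to replace, then backfill, instead of replace(method='bfill')
result = s.mask(s.isin([1, 2])).bfill().astype(s.dtype)

assert result.tolist() == [3, 3, 3, 4, 5]
```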
|
<python><pandas>
|
2024-01-08 12:45:35
| 1
| 5,177
|
yucer
|
77,780,165
| 143,336
|
ROS2 colcon build fails with ModuleNotFoundError for Python dependency in virtual environment
|
<p>When trying to build my ROS2 project, I get a <code>ModuleNotFoundError</code> when compiling one of the C modules, due to a missing dependency called <code>em</code>:</p>
<pre><code>% colcon build --cmake-clean-cache
Starting >>> r1_messages
--- stderr: r1_messages
CMake Error at /Users/mryall/src/mawson/ros2-iron-build/install/share/rosidl_adapter/cmake/rosidl_adapt_interfaces.cmake:57 (message):
execute_process(/opt/homebrew/Frameworks/Python.framework/Versions/3.12/bin/python3.12
-m rosidl_adapter --package-name r1_messages --arguments-file
/Users/mryall/src/mawson/r1-ros/build/r1_messages/rosidl_adapter__arguments__r1_messages.json
--output-dir
/Users/mryall/src/mawson/r1-ros/build/r1_messages/rosidl_adapter/r1_messages
--output-file
/Users/mryall/src/mawson/r1-ros/build/r1_messages/rosidl_adapter/r1_messages.idls)
returned error code 1:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/Users/mryall/src/mawson/ros2-iron-build/install/lib/python3.11/site-packages/rosidl_adapter/__main__.py", line 19, in <module>
sys.exit(main())
^^^^^^
File "/Users/mryall/src/mawson/ros2-iron-build/install/lib/python3.11/site-packages/rosidl_adapter/main.py", line 53, in main
abs_idl_file = convert_to_idl(
^^^^^^^^^^^^^^^
File "/Users/mryall/src/mawson/ros2-iron-build/install/lib/python3.11/site-packages/rosidl_adapter/__init__.py", line 18, in convert_to_idl
from rosidl_adapter.msg import convert_msg_to_idl
File "/Users/mryall/src/mawson/ros2-iron-build/install/lib/python3.11/site-packages/rosidl_adapter/msg/__init__.py", line 16, in <module>
from rosidl_adapter.resource import expand_template
File "/Users/mryall/src/mawson/ros2-iron-build/install/lib/python3.11/site-packages/rosidl_adapter/resource/__init__.py", line 19, in <module>
import em
ModuleNotFoundError: No module named 'em'
Call Stack (most recent call first):
/Users/mryall/src/mawson/ros2-iron-build/install/share/rosidl_cmake/cmake/rosidl_generate_interfaces.cmake:132 (rosidl_adapt_interfaces)
CMakeLists.txt:14 (rosidl_generate_interfaces)
---
Failed <<< r1_messages [4.13s, exited with code 1]
</code></pre>
<p>My ROS2 installation was built with a Python virtual environment, which I have activated. But even though the virtual-env is activated and all the required libraries are installed with <code>pip</code>, the ROS modules built with CMake don't seem to be able to find them.</p>
<p>I tried to check that the <code>em</code> library was installed. Running <code>pip install empy</code> confirms that it is already installed:</p>
<pre><code>% pip install empy
Requirement already satisfied: empy in /Users/mryall/src/mawson/ros2-iron/iron_venv/lib/python3.11/site-packages (3.3.4)
</code></pre>
|
<python><cmake><virtualenv><ros2><colcon>
|
2024-01-08 12:40:10
| 1
| 10,735
|
Matt Ryall
|
77,779,927
| 10,896,698
|
Change file custom properties/columns in SharePoint with Python API
|
<p>I'm trying to update a custom created column for files in SharePoint using the Python API.</p>
|
<python><sharepoint><sharepoint-api>
|
2024-01-08 12:15:04
| 1
| 467
|
nklsla
|
77,779,818
| 12,243,638
|
Create a dataframe from permutation of two columns from two different dataframes
|
<p>There are two dataframes which look like the below:</p>
<p>df_1:</p>
<pre><code> A1
0 2023-12-30
1 2023-12-31
</code></pre>
<p>df_2:</p>
<pre><code> B1 B2 B3
501 Sam 159cm 300gm
502 Tam 175cm 400gm
</code></pre>
<p>I want to make another dataframe with the combination of these two. The result should look like below:</p>
<p>df_result:</p>
<pre><code> A1 B1 B2 B3
0 2023-12-30 Sam 159cm 300gm
1 2023-12-31 Sam 159cm 300gm
2 2023-12-30 Tam 175cm 400gm
3 2023-12-31 Tam 175cm 400gm
</code></pre>
<p>That means every row of df_2 should be repeated for each element in df_1['A1'] (the index is not my concern, anything is accepted). I did it successfully using a <code>for loop</code>, but it takes a long time when the dataframe is large. Is there a pythonic/pandas way to do it?</p>
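<p>For illustration, a sketch of the same result without a loop, using <code>merge(..., how='cross')</code> (assumes pandas ≥ 1.2, where the cross join was added):</p>

```python
import pandas as pd

df_1 = pd.DataFrame({'A1': ['2023-12-30', '2023-12-31']})
df_2 = pd.DataFrame({'B1': ['Sam', 'Tam'],
                     'B2': ['159cm', '175cm'],
                     'B3': ['300gm', '400gm']}, index=[501, 502])

# Cartesian product of the two frames; putting df_2 first repeats each
# df_2 row for every date in df_1, matching the expected row order.
df_result = df_2.merge(df_1, how='cross')[['A1', 'B1', 'B2', 'B3']]
print(df_result)
```
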
|
<python><pandas>
|
2024-01-08 12:04:41
| 2
| 500
|
EMT
|
77,779,721
| 11,426,624
|
np.select join all true values together
|
<p>I have a dataframe and would like to check for each row which of my conditions are true. If multiple are true I would like to return all of those choices with np.select. How can I do this?</p>
<pre><code>df = pd.DataFrame({'cond1':[True, True, False, True],
'cond2':[False, False, True, True],
'cond3':[True, False, False, True],
'value': [1, 3, 3, 6]})
conditions = [df['cond1'] & (df['value']>4),
df['cond2'],
df['cond2'] & (df['value']>2),
df['cond3'] & df['cond2']]
choices = [ '1', '2', '3', '4']
df["class"] = np.select(conditions, choices, default=np.nan)
</code></pre>
<p>I get this</p>
<pre><code> cond1 cond2 cond3 value class
0 True False True 1 nan
1 True False False 3 nan
2 False True False 3 2
3 True True True 6 1
</code></pre>
<p>but would like to get this</p>
<pre><code> cond1 cond2 cond3 value class
0 True False True 1 nan
1 True False False 3 nan
2 False True False 3 2 and 3
3 True True True 6 1 and 2 and 3 and 4
</code></pre>
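<p>To make the intent concrete, here is a hypothetical sketch of the desired behavior (not <code>np.select</code> itself, which keeps only the first matching choice): stack the conditions into a boolean matrix and join the matching labels per row:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'cond1': [True, True, False, True],
                   'cond2': [False, False, True, True],
                   'cond3': [True, False, False, True],
                   'value': [1, 3, 3, 6]})
conditions = [df['cond1'] & (df['value'] > 4),
              df['cond2'],
              df['cond2'] & (df['value'] > 2),
              df['cond3'] & df['cond2']]
choices = ['1', '2', '3', '4']

mask = np.column_stack(conditions)   # shape (n_rows, n_conditions), boolean
labels = np.array(choices)
# Join every label whose condition holds in that row; nan when none do.
df['class'] = [' and '.join(labels[row]) if row.any() else np.nan
               for row in mask]
print(df)
```
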
|
<python><pandas><dataframe><numpy>
|
2024-01-08 11:51:24
| 2
| 734
|
corianne1234
|
77,779,183
| 5,479,129
|
Emacs lsp update pylsp server
|
<p>I'm trying to update the lsp server in python-mode based on the current project in order to use the correct virtualenv.</p>
<p>I'm doing something like this</p>
<pre><code>(use-package python-mode
:ensure t
:preface
(defun projectile-set-lsp-venv ()
"Set pyenv version matching project name."
(let ((project (projectile-project-name)))
;; How can I update pylsp server here?
;;(setq lsp-pylsp-server-command (concat project "venv/bin/pylsp"))
))
:init
(add-hook 'projectile-after-switch-project-hook #'projectile-set-lsp-venv)
)
</code></pre>
<p>But I don't find a way to update the pylsp server. Using setq lsp-pylsp-server-command is not working, I suppose because it's not reloaded.
Any suggestion?</p>
|
<python><emacs>
|
2024-01-08 11:03:30
| 0
| 1,398
|
Isky
|
77,779,141
| 1,043,882
|
Data cardinality is ambiguous in shape of input to LSTM layer
|
<p>Suppose I have these 7 time-series samples:</p>
<pre><code>1,2,3,4,5,6,7
</code></pre>
<p>I know there is a relation between each sample and its two earlier ones. It means when you know two earlier samples are <code>1,2</code> then you can predict the next one must be <code>3</code> and for <code>2,3</code> the next one is <code>4</code> and so on.</p>
<p>Now I want to train a <code>RNN</code> with a <code>LSTM</code> layer for above samples. What I did is:</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
X = np.array([[[1]],[[2]],[[3]],[[4]],[[5]],[[6]],[[7]]])
Y = np.array([[3],[4],[5],[6],[7]])
model = keras.Sequential([
layers.LSTM(16, input_shape=(2, 1)),
layers.Dense(1, activation="softmax")
])
model.compile(optimizer="rmsprop",
loss="mse",
metrics=["accuracy"])
model.fit(X, Y, epochs=1, batch_size=1)
</code></pre>
<p>But I have encounter with this error:</p>
<pre><code>ValueError: Data cardinality is ambiguous:
x sizes: 7
y sizes: 5
Make sure all arrays contain the same number of samples.
</code></pre>
<p>How do I have to change the shape of <code>X</code> and <code>Y</code> to solve the problem?</p>
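<p>For reference, a sketch of how the series could be windowed so that <code>X</code> and <code>Y</code> have matching sample counts — 5 windows of 2 timesteps each, matching the <code>input_shape=(2, 1)</code> above:</p>

```python
import numpy as np

series = np.array([1, 2, 3, 4, 5, 6, 7], dtype=np.float32)
window = 2  # two earlier samples predict the next one

# Build overlapping windows: [1,2]->3, [2,3]->4, ..., [5,6]->7
X = np.array([series[i:i + window] for i in range(len(series) - window)])
X = X.reshape(-1, window, 1)        # (samples, timesteps, features) = (5, 2, 1)
Y = series[window:].reshape(-1, 1)  # (5, 1)
print(X.shape, Y.shape)
```

<p>As a side observation (not part of the shape fix): a single-unit <code>softmax</code> output always produces 1.0, so for this regression-style target a linear activation would likely fit better.</p>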
|
<python><tensorflow><keras><neural-network><lstm>
|
2024-01-08 10:59:28
| 1
| 14,064
|
hasanghaforian
|
77,778,901
| 5,457,202
|
Bugged Park transform on three-phase signal
|
<p>I'm supposed to apply Clarke-Park transformation to an electrical signal so other people at my office can study the "quality" (?) of said signal.</p>
<p>I was given a Jupyter Notebook replicating the tutorial from <a href="https://pypi.org/project/ClarkePark/" rel="nofollow noreferrer">here</a>, which is pretty much this:</p>
<pre><code>import ClarkePark
import numpy as np
import matplotlib.pyplot as plt
end_time = 3/float(60)
step_size = end_time/(1000)
delta=0
t = np.arange(0,end_time,step_size)
wt = 2*np.pi*float(60)*t
rad_angA = float(0)*np.pi/180
rad_angB = float(240)*np.pi/180
rad_angC = float(120)*np.pi/180
A = (np.sqrt(2)*float(127))*np.sin(wt+rad_angA)
B = (np.sqrt(2)*float(127))*np.sin(wt+rad_angB)
C = (np.sqrt(2)*float(127))*np.sin(wt+rad_angC)
d, q, z = ClarkePark.abc_to_dq0(A, B, C, wt, delta)
</code></pre>
<p><code>A</code>,<code>B</code> and <code>C</code> are generated using the <code>np.sin()</code> function and the <code>wt</code> array, which is an evenly-spaced time list whose elements have been multiplied by 2 * pi * 60, and they look like this:
<a href="https://i.sstatic.net/fd15Z.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fd15Z.jpg" alt="enter image description here" /></a></p>
<p>The DQ0 transform of said signal looks like this:
<a href="https://i.sstatic.net/8OFjg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8OFjg.png" alt="enter image description here" /></a></p>
<p>However, when I apply that transformation to my signals, read by a PLC connected to an actual electricity generator (a solar panel in this case, I think), the result looks a bit less perfect.</p>
<p>This is my source signal (head of a DataFrame, resulted from reading three txt files holding the values for Time and each component of the signal, R, S and T):</p>
<pre><code> Time IESTR IESTS IESTT
0 0.000167 584.00769 -786.99103 171.411900
1 0.000250 587.18536 -764.24414 150.228470
2 0.000333 590.26617 -742.32513 123.431180
3 0.000417 631.94775 -770.98303 102.068790
4 0.000500 640.10138 -754.37830 84.461655
</code></pre>
<p><a href="https://i.sstatic.net/ngQRI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ngQRI.png" alt="enter image description here" /></a></p>
<p>Notice how the lines are a bit more "jerky" than the generated ones:
<a href="https://i.sstatic.net/PzSow.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PzSow.png" alt="enter image description here" /></a></p>
<p>When I try to transform those signals to the DQ0 space, I get something that looks more like an alpha-beta transform.
<a href="https://i.sstatic.net/irmfq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/irmfq.png" alt="enter image description here" /></a></p>
<p>Any help on why the Clarke-Park transform doesn't look the way it's supposed to?</p>
|
<python><signal-processing>
|
2024-01-08 10:33:14
| 0
| 436
|
J. Maria
|
77,778,798
| 15,098,472
|
Import parameters from config, but create the config first if it does not exist
|
<p>I have some global constants, that are saved in a .py file and that are needed at many different places in the codebase:</p>
<pre class="lang-py prettyprint-override"><code>from important_parameters import ROOT_DIR, ANOTHER_DIR
</code></pre>
<p>This import is needed in many files, and is more convenient than passing it around all the time, I think.
The issue is, that these parameters depend on the user, i.e. these are relative directory paths. That is, the config file must be created first, if another user runs the script. So I was thinking about something like this:</p>
<pre class="lang-py prettyprint-override"><code>if not os.path.exists('important_parameters.py'):
args = parser.parse_args()
create_params_file(args)
</code></pre>
<p>the arguments could be specified at the top, for example:</p>
<pre class="lang-py prettyprint-override"><code>parser = argparse.ArgumentParser()
parser.add_argument(
'--root_data_dir', type=str,required=True
)
parser.add_argument(
'--another_data_dir', type=str, required=True
)
</code></pre>
<p>afterwards, we can create the config file:</p>
<pre class="lang-py prettyprint-override"><code>def create_params_file(args):
with open('important_parameters.py', 'w') as file:
file.write(f'ROOT_DIR = "{os.path.join(args.root_data_dir)}"\n')
file.write(f'ANOTHER_DIR = "{os.path.join(args.another_data_dir)}"\n')
</code></pre>
<p>and use an if condition in the main block:</p>
<pre class="lang-py prettyprint-override"><code>if __name__ == "__main__":
if not os.path.exists('important_parameters.py'):
args = parser.parse_args()
create_params_file(args)
</code></pre>
<p>However, this does not work if the file does not exist, because then I get an import error at the top. Is there an elegant solution for my situation?</p>
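<p>One sketch that avoids the top-level import error: create the file first, then import the module lazily with <code>importlib</code>. The directory and parameter values below are illustrative stand-ins for <code>create_params_file()</code> and the parsed arguments:</p>

```python
import importlib
import os
import sys
import tempfile

# Stand-in for create_params_file(): write the module before importing it.
cfg_dir = tempfile.mkdtemp()
cfg_path = os.path.join(cfg_dir, 'important_parameters.py')
if not os.path.exists(cfg_path):
    with open(cfg_path, 'w') as f:
        f.write('ROOT_DIR = "/data/root"\n')
        f.write('ANOTHER_DIR = "/data/other"\n')

# Import only after the file is guaranteed to exist.
sys.path.insert(0, cfg_dir)
important_parameters = importlib.import_module('important_parameters')
print(important_parameters.ROOT_DIR)
```

<p>In the real codebase, the other files would call a small loader function instead of doing <code>from important_parameters import ...</code> at module top level, so the import only runs after the file has been created.</p>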
|
<python><import>
|
2024-01-08 10:17:16
| 2
| 574
|
kklaw
|
77,778,689
| 4,405,942
|
pytest fake filesystem and unittest mock interfere
|
<p>I have a large pytest suite for a django project which uses plenty of mockers (<a href="https://docs.python.org/3/library/unittest.mock.html" rel="nofollow noreferrer">unittest.mock</a>, <a href="https://pypi.org/project/requests-mock/" rel="nofollow noreferrer">requests-mock</a> and <a href="https://github.com/pnuckowski/aioresponses" rel="nofollow noreferrer">aioresponses</a>) and a <a href="https://pytest-pyfakefs.readthedocs.io/en/latest/usage.html" rel="nofollow noreferrer">fake filesystem</a>. I have set up the fake filesystem to use some directories which are required by django:</p>
<pre><code>PROJECT_BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
fs.add_real_paths(
[
PROJECT_BASE_DIR,
os.path.dirname(django.__file__),
]
)
</code></pre>
<p>At random, mockers stop working. <code>AssertionError: Expected 'some_mocked_method' to be called once. Called 0 times.</code> I receive no other errors or warnings, just the above message. <s>I cannot reproduce this in a minimal example, since it only occurs on some executions, not every time. When it happens, the tests freeze and I cannot even wait for the test suite to finish. I run them in parallel with <code>pytest-xdist</code> and 8 workers, which might also play a part in the problem.</s> (see EDIT)</p>
<p>I do not even know where to start on this problem. Has anyone experienced something similar when using the combination between <code>unittest.mock</code> and <code>pyfakefs</code>?
Is there any alternative to <code>pyfakefs</code>? I could not find anything similar.
Is there any reason why these two would even interfere?</p>
<p>Here are the package versions I'm using</p>
<pre><code>pyfakefs==5.3.2
pytest==7.4.3
pytest-django==4.7.0
pytest-mock==3.12.0
pytest-xdist==3.5.0
</code></pre>
<p><strong>EDIT</strong></p>
<p>Finally, I did produce a minimal example: <a href="https://github.com/konnerthg/minimal_django_pyfakefs_unittest_mock" rel="nofollow noreferrer">https://github.com/konnerthg/minimal_django_pyfakefs_unittest_mock</a></p>
<p>Am I missing something very simple and obvious? Or is this a bug?</p>
<p><strong>EDIT 2</strong></p>
<p>Here's the output after running pytest in the project linked above. After <code>pytest -m without_fakefs</code> (this is a dummy test case which checks that the pytest setup is correct): <code>2 passed</code>, as expected.</p>
<p>But after running <code>pytest -m with_fakefs</code>, i.e. the exact same test case, but with the <code>fs</code> fixture, I receive:</p>
<pre><code>___________________________________________________ test_fs_and_mocker[1] ____________________________________________________
client = <django.test.client.Client object at 0x7f1e94f7a850>
fs = <pyfakefs.fake_filesystem.FakeFilesystem object at 0x7f1e94f7bb90>, counter = 1
@pytest.mark.with_fakefs
@pytest.mark.parametrize("counter", range(2))
def test_fs_and_mocker(
client,
fs,
counter,
):
with unittest.mock.patch("mysite.views.foo") as mocker:
client.get("/foo/")
> mocker.assert_called()
.../minimal_django_pyfakefs_unittest_mock/mysite/tests/mysite/views_test.py:14:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <MagicMock name='foo' id='139769325143440'>
def assert_called(self):
"""assert that the mock was called at least once
"""
if self.call_count == 0:
msg = ("Expected '%s' to have been called." %
(self._mock_name or 'mock'))
> raise AssertionError(msg)
E AssertionError: Expected 'foo' to have been called.
/usr/lib/python3.11/unittest/mock.py:908: AssertionError
================================================== short test summary info ===================================================
FAILED tests/mysite/views_test.py::test_fs_and_mocker[1] - AssertionError: Expected 'foo' to have been called.
=================================== 1 failed, 1 passed, 2 deselected, 2 warnings in 0.25s ====================================
</code></pre>
<p>Note that the first case (i.e. the first execution of the same test) always succeeds. If I run it 10 times rather than 2, the first succeeds and the other 9 fail. Moreover, if I run the cases in parallel with <code>xdist</code> and 8 workers, the first 8 succeed and everything thereafter fails.</p>
<p>I run Ubuntu 23.04, 64-bit, Kernel Version Linux 6.2.0-36-generic on a Lenovo IdeaPad 5 Pro 16ARH7 with an AMD Ryzen™ 7 6800HS Creator Edition × 16 processor. Python 3.11.4</p>
|
<python><pytest><pytest-django><pyfakefs>
|
2024-01-08 10:01:32
| 0
| 2,180
|
Gerry
|
77,778,336
| 11,801,298
|
Expand distance between values in pandas dataframe
|
<p>How to expand distance between values in pandas dataframe?</p>
<pre><code> A
1 3
2 5
3 6
5 5
6 9
</code></pre>
<p>I want to increase the distance between adjacent elements by a factor of x, for example 2.</p>
<p>Expected output:</p>
<pre><code> A B
1 3 3
2 5 7 # (3 + 2 * 2)
3 6 9 # (7 + 1 * 2)
5 5 7 # (9 - 1 * 2)
6 9 15 # (7 + 4 * 2)
</code></pre>
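<p>A vectorized sketch matching the expected output: scale the first differences and re-accumulate them from the first value of <code>A</code> (the factor is 2 here, as in the example):</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [3, 5, 6, 5, 9]}, index=[1, 2, 3, 5, 6])
x = 2  # scaling factor

# diff() gives the step between adjacent values; scale it, then rebuild
# the series cumulatively starting from the first value of A.
df['B'] = df['A'].iloc[0] + df['A'].diff().fillna(0).mul(x).cumsum()
print(df)
```
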
|
<python><pandas>
|
2024-01-08 09:23:24
| 2
| 877
|
Igor K.
|
77,777,699
| 11,630,148
|
Weird bug when creating a user in django
|
<p>I have a weird bug in django where I have 2 different profile models for each user type and one profile model creates two profile models when I create a User.</p>
<p>Here's my code for the user model:</p>
<pre class="lang-py prettyprint-override"><code>class User(TimeStampedModel, AbstractUser):
class Types(models.TextChoices):
"""Default user for Side Project."""
COMPANY = "COMPANY", "Company"
JOB_SEEKER = "JOB SEEKER", "Job Seeker"
base_type = Types.COMPANY
# Type of User
type = models.CharField(_("Type"), max_length=50, choices=Types.choices, default=base_type)
# First and last name do not cover name patterns around the globe
name = models.CharField(_("Name of User"), blank=True, max_length=255)
first_name = None # type: ignore
last_name = None # type: ignore
def get_absolute_url(self) -> str:
"""Get URL for user's detail view.
Returns:
str: URL for user detail.
"""
return reverse("users:detail", kwargs={"username": self.username})
def save(self, *args, **kwargs):
if not self.id:
self.type = self.base_type
return super().save(*args, **kwargs)
</code></pre>
<p>Here's my code for the seeker profile model</p>
<pre class="lang-py prettyprint-override"><code>class SeekerProfile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE, related_name='seeker_profile')
@receiver(post_save, sender=User)
def create_or_update_seeker_profile(sender, instance, created, **kwargs):
if instance.type == 'JOB SEEKER':
SeekerProfile.objects.get_or_create(user=instance)
elif instance.type == 'COMPANY':
CompanyProfile.objects.get_or_create(user=instance)
@receiver(post_save, sender=User)
def save_seeker_profile(sender, instance, **kwargs):
if instance.type == 'JOB SEEKER':
try:
instance.seeker_profile.save()
except SeekerProfile.DoesNotExist:
# Handle the case where the profile doesn't exist yet
SeekerProfile.objects.create(user=instance)
else:
try:
instance.company_profile.save()
except CompanyProfile.DoesNotExist:
# Handle the case where the profile doesn't exist yet
CompanyProfile.objects.create(user=instance)
</code></pre>
<p>Here's my code for the company profile model</p>
<pre class="lang-py prettyprint-override"><code>class CompanyProfile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE, related_name='company_profile')
@receiver(post_save, sender=User)
def create_or_update_company_profile(sender, instance, created, **kwargs):
if instance.type == 'COMPANY':
if created:
CompanyProfile.objects.create(user=instance)
else:
try:
instance.company_profile.save()
except CompanyProfile.DoesNotExist:
CompanyProfile.objects.create(user=instance)
@receiver(post_save, sender=User)
def save_company_profile(sender, instance, **kwargs):
if instance.type == 'COMPANY':
try:
instance.company_profile.save()
except CompanyProfile.DoesNotExist:
# Handle the case where the profile doesn't exist yet
CompanyProfile.objects.create(user=instance)
</code></pre>
<p>What happens is when I create a user with a company user type, the model creates a company profile model. But when I create a seeker user type, the model creates a seeker profile model and a company profile model. What do you think I'm doing wrong? I've been going in circles with ChatGPT on this.</p>
|
<python><django>
|
2024-01-08 07:51:27
| 2
| 664
|
Vicente Antonio G. Reyes
|
77,777,436
| 700,023
|
Why doesn't Spark change the physical plan after caching?
|
<p>I'm trying to understand how caching of some dataframes alters the physical plan.
Its my understanding that if a dataframe is cached then subsequent computation that uses that dataframe should not re-evaluate the operations needed by the dataframe.</p>
<p>Unfortunately there does not seem to be a difference between caching the result and un-persisting it afterwards (which should clear the cache and restore the original physical plan.</p>
<p>example of <code>df.explain(mode='simple')</code> before <code>cache().count()</code>:</p>
<pre><code>== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- InMemoryTableScan [idx#12021L, (REDACTED), ... 32 more fields], StorageLevel(disk, memory, deserialized, 1 replicas)
+- AdaptiveSparkPlan isFinalPlan=false
...
</code></pre>
<p>after <code>df.cache().count()</code>:</p>
<pre><code>== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- InMemoryTableScan [idx#12021L, (REDACTED), ... 32 more fields], StorageLevel(disk, memory, deserialized, 1 replicas)
+- AdaptiveSparkPlan isFinalPlan=false
...
</code></pre>
<p>How would you explain that the physical plan is the same, and that it's not final? It should be final after its execution.</p>
<p>By the way, this is my spark conf:</p>
<pre><code> .config("spark.master", "local[*]")
.config("spark.executor.memory", "16g")
.config("spark.driver.memory", "16g")
.config("spark.sql.execution.arrow.pyspark.enabled", "true")
.config("spark.sql.execution.pythonUDF.arrow.enabled", "true")
.config("spark.sql.execution.arrow.maxRecordsPerBatch", "10000")
.config("spark.sql.execution.arrow.pyspark.selfDestruct.enabled", "true")
.config("spark.sql.execution.arrow.pyspark.fallback.enabled","false")
.config("spark.logConf", "true") # allows us to change the log level later in the code
.config("spark.ui.port", 4042)
.config("spark.eventLog.dir", config["local_log_dir"])
.config("spark.sql.codegen.wholeStage", False)
.config("spark.python.profile","false")
.config("spark.sql.constraintPropagation.enabled","false")
.config("spark.sql.adaptive.enabled", True)
.config("spark.sql.adaptive.skewJoin.enabled", True)
</code></pre>
<p>I'm surprised to see that <code>isFinalPlan</code> is kept as <code>false</code>.
What am I missing here?</p>
|
<python><dataframe><apache-spark><pyspark><caching>
|
2024-01-08 07:11:28
| 1
| 5,069
|
fstab
|
77,777,407
| 5,157,081
|
How can I pass data from API endpoint (request) to middleware in FastAPI?
|
<p>I would like to implement a logic, where every API endpoint will have its API Credits (of <code>int</code> type), which I am going to use in an HTTP middleware to deduct credits from the user's balance.</p>
<p>I have searched a lot, but have been unable to find a solution. Could anyone help me to achieve this? I am providing a pseudo-code to explain what I am trying to achieve.</p>
<pre class="lang-py prettyprint-override"><code>@app.middleware("http")
async def add_process_time_header(request: Request, call_next):
# here I want to read API credits for request
print(request.api_credit)
response = await call_next(request)
return response
@app.post("/myendpoint1")
async def myendpoint1():
# define api credit for this endpoint
api_credit = 2
return {"message": "myendpoint"}
@app.post("/myendpoint2")
async def myendpoint2():
# define api credit for this endpoint
api_credit = 5
return {"message": "myendpoint2"}
</code></pre>
|
<python><fastapi><middleware><starlette><fastapi-middleware>
|
2024-01-08 07:05:42
| 1
| 341
|
Moiz Travadi
|
77,776,619
| 11,053,801
|
Global variable get undefined on re-run of Flask app
|
<p>global variables get undefined on re-run of flask app,
I have 3 files,</p>
<p>api.py</p>
<pre><code>import trainer
app = Flask(__name__)
@app.route('/start', methods=['POST'])
def apifunc(id):
result = trainer.consume(id)
return jsonify(result)
if __name__ == '__main__':
app.run(debug=True, port=LISTEN_PORT, threaded=True, use_reloader=False)
</code></pre>
<p>trainer.py</p>
<pre><code>from utilities import function1
def consume(id):
value = function1(agg1)
... some codes...
return value
</code></pre>
<p>utilities.py</p>
<pre><code>aggregate = 30
def function1(agg1):
global aggregate
... some codes...
print(aggregate)
</code></pre>
<p>When we run the app for the first time and hit the endpoint "/start", it picks up the global variable "aggregate" value and works.</p>
<p>But
on the 2nd hit, it throws an error:</p>
<pre><code>ERROR:api:500 Internal Server Error: name 'aggregate' is not defined
</code></pre>
<p>But when I stop and re-start the app and hit the endpoint, it works again. I'm not sure why that's happening.</p>
|
<python><flask>
|
2024-01-08 05:06:14
| 1
| 1,616
|
hanzgs
|
77,775,828
| 15,455,922
|
Database config for Django and SQL Server - escaping the host
|
<p>I have setup my Django app to use Microsoft SQL Server database. This is my database config.</p>
<pre><code> DATABASES = {
'default': {
'ENGINE': 'mssql',
'NAME': "reporting",
'HOST': '192.168.192.225\SQL2022;',
'PORT': 1433,
'USER': "sa",
'PASSWORD': "Root_1421",
'OPTIONS': {
'driver': 'ODBC Driver 17 for SQL Server',
},
}
}
</code></pre>
<p>The SQL Server database is installed on my desktop machine and <code>DESKTOP-RC52TD0\SQL2022</code> is the host\instance name. When I print my config, I get the following.</p>
<pre><code>{'default': {'ENGINE': 'mssql', 'NAME': 'reporting', 'HOST': 'DESKTOP-RC52TD0\\SQL2022', 'USER': 'sa', 'PASSWORD': 'Root_1421', 'OPTIONS': {'driver': 'ODBC Driver 17 for SQL Server'}}}
</code></pre>
<p>Notice that in HOST, the value has two backslashes in it, which is causing my app to not be able to connect to the database. I believe this is because of Python's string escaping. How can I escape it so that I end up with a single backslash and not a double one? I am getting the following error:</p>
<blockquote>
<p>django.db.utils.OperationalError: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired (0) (SQLDriverConnect)')</p>
</blockquote>
<p>I've tried the following and still same result:</p>
<ol>
<li><code>r'DESKTOP-RC52TD0\SQL2022'</code></li>
<li><code>'DESKTOP-RC52TD0\\SQL2022'</code></li>
<li><code>r'DESKTOP-RC52TD0\SQL2022'</code></li>
</ol>
<p>Any ideas how I can escape it in the context of a dictionary? Adding a double backslash <code>\\</code> works in a plain string, but not when used in a dictionary.</p>
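<p>For what it's worth, the doubled backslash in the printed dict is just Python's <code>repr</code> escaping, not two characters in the string, as this check shows — which would suggest the timeout has a different cause (e.g. the host not being reachable from inside the container, an assumption to verify separately):</p>

```python
host = 'DESKTOP-RC52TD0\\SQL2022'   # same string as r'DESKTOP-RC52TD0\SQL2022'

print(repr(host))        # the repr shows an escaped, doubled backslash
print(host)              # printing shows the single real backslash
print(host.count('\\'))  # the string really contains exactly one
```
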
<p>I am running the app using docker-compose.</p>
<pre><code>version: '3'
services:
# sqlserver:
# image: mcr.microsoft.com/mssql/server
# hostname: 'sqlserver'
# environment:
# ACCEPT_EULA: 'Y'
# MSSQL_SA_PASSWORD: 'P@55w0rd'
# ports:
# - '1433:1433'
# volumes:
# - sqlserver-data:/var/opt/mssql
web:
image: landsoft/reporting-api
network_mode: host
build:
context: ./app
command: >
sh -c "python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
volumes:
- ./app:/code
ports:
- "8000:8000"
environment:
- DB_HOST="DESKTOP-RC52TD0\SQL2022"
- DB_NAME=reporting
- DB_USER=sa
- DB_PASSWORD=Root_1421
- DB_PORT=1433
# depends_on:
# - sqlserver
# volumes:
# sqlserver-data:
# driver: local
</code></pre>
|
<python><sql-server><django><escaping>
|
2024-01-08 02:03:30
| 1
| 746
|
Mess
|
77,775,597
| 8,893,835
|
How to convert these functions to act on the entire dataframe and speed up my python code
|
<p>In an attempt to backtest the much-discussed trading approach known as the Smart Money Concept, I made a Python class with a few functions.</p>
<p>Now, the mistake I made was to have each function operate on the last candle/row and return results only for that row/candle. This has turned out to be costly, because it will take a very long time to loop over the <code>dataframe</code> and feed each row to these functions if I were to backtest with six months' worth of data.</p>
<p>I require assistance with:</p>
<p>Converting the public functions/methods to act on the entire <code>dataframe</code> via <code>vectorization</code> and return the entire <code>dataframe</code>.</p>
<p><code>is_uptrend()</code>, <code>has_bull_choch()</code></p>
<hr />
<p>Below is the source code:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.ndimage import maximum_filter1d, minimum_filter1d
from scipy.signal import find_peaks
from scipy import stats
import numpy as np
import pandas as pd
class SmartMoney:
def get_supports_and_resistances(self, df: pd.DataFrame) -> pd.DataFrame:
df['is_support'] = 0
df['is_resistance'] = 0
df = self._get_resistances(df=df)
df = self._get_supports(df=df)
return df
# Get support zones
def _get_supports(self, df: pd.DataFrame) -> pd.DataFrame:
if len(df) < 1:
return df
smoothed_low = minimum_filter1d(df.low, self.filter_size) if self.filter_size > 0 else df.low
minimas, _ = find_peaks(x=-smoothed_low, prominence=self.look_back(df=df))
if len(minimas) > 0:
df.loc[minimas, 'is_support'] = 1
return df
# Get resistances zones
def _get_resistances(self, df: pd.DataFrame) -> pd.DataFrame:
if len(df) < 1:
return df
smoothed_high = maximum_filter1d(df.high, self.filter_size) if self.filter_size > 0 else df.high
maximas, _ = find_peaks(smoothed_high, prominence=self.look_back(df=df))
if len(maximas) > 0:
df.loc[maximas, 'is_resistance'] = 1
return df
def look_back(self, df: pd.DataFrame) -> int:
return round(np.mean(df['high'] - df['low']))
def is_uptrend(self, df: pd.DataFrame) -> bool:
if self._meets_requirement(df=df) == False:
return False
return (
df.loc[df['is_resistance'] == 1, 'high'].iloc[-1] > df.loc[df['is_resistance'] == 1, 'high'].iloc[-2] and
df.loc[df['is_support'] == 1, 'low'].iloc[-1] > df.loc[df['is_support'] == 1, 'low'].iloc[-2]
)
def is_downtrend(self, df: pd.DataFrame) -> bool:
if self._meets_requirement(df=df) == False:
return False
return (
df.loc[df['is_resistance'] == 1, 'high'].iloc[-1] < df.loc[df['is_resistance'] == 1, 'high'].iloc[-2] and
df.loc[df['is_support'] == 1, 'low'].iloc[-1] < df.loc[df['is_support'] == 1, 'low'].iloc[-2]
)
def _meets_requirement(self, df: pd.DataFrame, minimum_required: int = 2) -> bool:
return len(df.loc[df['is_resistance'] == 1]) >= minimum_required and len(df.loc[df['is_support'] == 1]) >= minimum_required
# Check if there's Change of Character (as per Smart Money Concept)
def has_bull_choch(self, df: pd.DataFrame, in_pullback_phase = False, with_first_impulse = False) -> bool:
if df[df['is_resistance'] == 1].empty:
return False
left, right = self._get_left_and_right(df = df, divide_by_high=False)
if len(left[left['is_resistance'] == 1]) < 1 or right.shape[0] < 1:
return False
# if we only want CHoCH that broke on first impulse move
if with_first_impulse:
if left.loc[left['is_resistance'] == 1, 'high'].iloc[-1] > right.loc[right['is_resistance'] == 1, 'high'].iloc[0] :
return False
# if we want CHoCH in pullback phase
if in_pullback_phase:
if right.iloc[right[right['is_resistance'] == 1].index[-1], right.columns.get_loc('high')] < right['high'].iloc[-1]:
return False
tmp = right[right['high'] > left.loc[left['is_resistance'] == 1, 'high'].iloc[-1]]
if tmp.shape[0] > 0 :
return True
return False
def _get_left_and_right(self, df: pd.DataFrame, divide_by_high = True) -> tuple[pd.DataFrame, pd.DataFrame]:
# Get the lowest/highest support df
off_set = df['low'].idxmin() if divide_by_high == False else df['high'].idxmax()
# Get list of df before lowest support
left = df[:off_set]
# take only resistance and leave out support
# left = left[left['is_resistance'] == 1]
left.reset_index(drop=True, inplace=True)
        # Get the part of the df after the lowest support
right = df[off_set:]
# take only resistance and leave out support
# right = right[right['is_resistance'] == 1]
right.reset_index(drop=True, inplace=True)
return pd.DataFrame(left), pd.DataFrame(right)
</code></pre>
<p>Test Data:</p>
<pre class="lang-py prettyprint-override"><code>import yfinance as yf
ticker_symbol = "BTC-USD"
start_date = "2023-06-01"
end_date = "2023-12-31"
bitcoin_data = yf.download(ticker_symbol, start=start_date, end=end_date)
# Reset the index to make the date a regular column
df = bitcoin_data.reset_index()
df.rename(columns={'Date': 'time', 'Open': 'open', 'High': 'high', 'Low': 'low', 'Close': 'close', 'adj close': 'adj close', 'Volume': 'volume'}, inplace=True)
</code></pre>
<h1>This is how i would like the code to work</h1>
<pre class="lang-py prettyprint-override"><code>from smart_money import SmartMoney
sm = SmartMoney()
# Get minimas and maximas (support and resistance)
df = sm.get_supports_and_resistances(df=df)
df = sm.is_uptrend(df=df)
df = sm.has_bull_choch(df=df)
</code></pre>
<p>Remember, the objective is to have these functions return a <code>DataFrame</code> with a new column (the name of the column should be the function name); the column value can be 1 or 0.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>time</th>
<th>open</th>
<th>high</th>
<th>low</th>
<th>close</th>
<th>volume</th>
</tr>
</thead>
<tbody>
<tr>
<td>324</td>
<td>2023-11-28</td>
<td>37242.70</td>
<td>38377.00</td>
<td>36868.41</td>
<td>37818.87</td>
</tr>
<tr>
<td>325</td>
<td>2023-11-29</td>
<td>37818.88</td>
<td>38450.00</td>
<td>37570.00</td>
<td>37854.64</td>
</tr>
<tr>
<td>326</td>
<td>2023-11-30</td>
<td>37854.65</td>
<td>38145.85</td>
<td>37500.00</td>
<td>37723.96</td>
</tr>
<tr>
<td>327</td>
<td>2023-12-01</td>
<td>37723.97</td>
<td>38999.00</td>
<td>37615.86</td>
<td>38682.52</td>
</tr>
<tr>
<td>328</td>
<td>2023-12-02</td>
<td>38682.51</td>
<td>38821.59</td>
<td>38641.61</td>
<td>38774.95</td>
</tr>
</tbody>
</table>
</div>
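<p>As a starting point, here is a sketch of <code>is_uptrend</code> as a vectorized column, under the assumption that the <code>is_support</code>/<code>is_resistance</code> columns from <code>get_supports_and_resistances</code> already exist; the pivot-shift trick generalizes to <code>is_downtrend</code> by flipping the comparisons:</p>

```python
import pandas as pd

def _last_two_pivots(values: pd.Series, mask: pd.Series):
    """Latest and second-latest pivot value known at each row."""
    piv = values[mask == 1]                            # pivot rows only
    last = piv.reindex(values.index).ffill()           # most recent pivot so far
    prev = piv.shift(1).reindex(values.index).ffill()  # the pivot before it
    return last, prev

def add_is_uptrend(df: pd.DataFrame) -> pd.DataFrame:
    r_last, r_prev = _last_two_pivots(df['high'], df['is_resistance'])
    s_last, s_prev = _last_two_pivots(df['low'], df['is_support'])
    # NaN comparisons evaluate to False, so rows before two pivots exist stay 0.
    df['is_uptrend'] = ((r_last > r_prev) & (s_last > s_prev)).astype(int)
    return df

# Synthetic check: resistances at rows 1 and 4 (10 -> 12, rising),
# supports at rows 2 and 5 (5 -> 7, rising), so rows 5+ are uptrend.
df = pd.DataFrame({
    'high': [9, 10, 9, 11, 12, 8, 9, 10],
    'low':  [6, 7, 5, 6, 7, 7, 8, 8],
    'is_resistance': [0, 1, 0, 0, 1, 0, 0, 0],
    'is_support':    [0, 0, 1, 0, 0, 1, 0, 0],
})
print(add_is_uptrend(df)['is_uptrend'].tolist())  # [0, 0, 0, 0, 0, 1, 1, 1]
```
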
|
<python><algorithmic-trading><pyalgotrade>
|
2024-01-08 00:50:06
| 2
| 468
|
chawila
|
77,775,538
| 11,210,476
|
Python: How to statically refer to the return type of a function
|
<p>If I imported a function from a module, can I make the linter & type checker refer to its return type?</p>
<ul>
<li>I don't want to call the function</li>
<li>I don't want runtime solution, like <code>inpsect</code> module.</li>
<li>For some reason, I can't import the type itself, e.g. it's not meant to be imported.</li>
</ul>
<pre><code>def func(a: T1, b:T2) -> T3: ...
</code></pre>
<pre><code>
from module import func
x: func.return_type = 22 # ideal solution ?
</code></pre>
|
<python><python-typing>
|
2024-01-08 00:32:02
| 1
| 636
|
Alex
|
77,775,267
| 2,144,868
|
Combine argparse.MetavarTypeHelpFormatter, argparse.ArgumentDefaultsHelpFormatter and argparse.HelpFormatter
|
<p>I want to display default values, argument type, and big spacing for <code>--help</code>.</p>
<p>But if I do</p>
<pre class="lang-py prettyprint-override"><code>import argparse
class F(argparse.MetavarTypeHelpFormatter, argparse.ArgumentDefaultsHelpFormatter, lambda prog: argparse.HelpFormatter(prog, max_help_position = 52)): pass
parser = argparse.ArgumentParser(
prog = 'junk',
formatter_class = F)
</code></pre>
<p>It gives the following error</p>
<pre class="lang-bash prettyprint-override"><code>TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
</code></pre>
<p>Does anyone know how to combine these three formatters correctly?</p>
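One direction I'm experimenting with (a sketch, not verified on every Python version): a lambda is not a class, so it cannot appear in the bases list at all; passing <code>max_help_position</code> through <code>__init__</code> instead seems to sidestep the conflict:

```python
import argparse

# Combine the two formatter classes by inheritance only, and inject the
# wide help column via __init__ rather than a lambda base class.
class F(argparse.MetavarTypeHelpFormatter,
        argparse.ArgumentDefaultsHelpFormatter):
    def __init__(self, prog):
        super().__init__(prog, max_help_position=52)

parser = argparse.ArgumentParser(prog='junk', formatter_class=F)
parser.add_argument('--count', type=int, default=3, help='how many')
help_text = parser.format_help()
print(help_text)
```

With this, the help output shows the type name as the metavar (from `MetavarTypeHelpFormatter`) and appends the default (from `ArgumentDefaultsHelpFormatter`).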
|
<python><argparse>
|
2024-01-07 23:01:34
| 2
| 305
|
llodds
|
77,775,036
| 825,924
|
How to disable load_ssh_configs in fabric (used as library) from user code?
|
<p>I'm using <code>fabric</code> as a library and want to avoid loading my local <code>~/.ssh/config</code> file (etc).</p>
<p><a href="https://docs.fabfile.org/en/latest/concepts/configuration.html#disabling-ssh-config" rel="nofollow noreferrer">Fabric's configuration docs</a> declare: "To do so, simply set the top level config option <code>load_ssh_configs</code> to <code>False</code>."</p>
<p>Unfortunately they do not say how to execute this simple act. I want to use code to change this default. I do not want to create more (config) files.</p>
<pre class="lang-python prettyprint-override"><code># not like this
load_ssh_configs = False # does not work in code; works from config file, I guess
# not like this either
import fabric
conf = fabric.Config()
conf.load_ssh_configs = False # too late, conf.base_ssh_config already loaded
# hopeless
global_defs = fabric.Config.global_defaults() # not a singleton, but a copy
global_defs.load_ssh_configs = False # no effect
# this works, but needs parameter for every `fabric.Connection`
conf2 = fabric.Config(lazy=True) # should be called e.g. `load_files`, not `lazy`
c = fabric.Connection('host1', config=conf2) # finally skipped `.ssh/config`
# works as well, but even more complicated
import paramiko
ssh_conf = paramiko.config.SSHConfig()
conf3 = fabric.Config(ssh_config=ssh_conf)
c = fabric.Connection('host1', config=conf3)
</code></pre>
<p>Am I missing something? Is there no simple way of globally disabling <code>ssh_config</code>-loading from within my code?</p>
|
<python><paramiko><fabric>
|
2024-01-07 21:42:44
| 1
| 35,122
|
Robert Siemer
|
77,774,853
| 4,547,189
|
Superset guest token returns login page
|
<p>I am trying to embed the dashboard into an iframe on another site. Both the sites are on HTTP only. Here are the configs in the superset file</p>
<pre><code>PUBLIC_ROLE_LIKE_GAMMA: True
SESSION_COOKIE_SECURE = False
SESSION_COOKIE_SAMESITE = 'LAX'
SESSION_COOKIE_HTTPONLY = True
PUBLIC_ROLE_LIKE: 'Gamma'
HTTP_HEADERS = {'X-Frame-Options': 'ALLOWALL'}
</code></pre>
<p>The back end code is as follows:</p>
<pre><code>import json
import requests

myobj = {
"user": {
"username": "username",
"first_name": "*****",
"last_name": "*****"
},
"resources": [{
"type": "dashboard",
"id": "1"
}],
"rls": []
}
login = requests.post(login_url, json={"password": "Password$","provider": "db","refresh": True, "username": "username"})
login_response = json.loads(login.text)
headers = {'Authorization': f"Bearer {login_response['access_token']}"}
csrf = requests.get(csrf_url, headers = headers)
csrf_token = json.loads(csrf.text)['result']
headers_csrf = {
'accept': 'application/json',
'Authorization': f"Bearer {login_response['access_token']}",
'Referer': csrf_url,
'X-CSRFToken': csrf_token,
}
print(headers_csrf)
guest_token = requests.post(guest_token_url, json = myobj, headers = headers_csrf)
guest_token = json.loads(guest_token.text)['token']
print(guest_token)
final_url = f"http://superset.hookit.korenet.local?token={guest_token}&next=/superset/dashboard/p/jWQx9XBR4Bw/?standalone=true"
</code></pre>
<p>The backend code generates the following URL that is inserted in the iFrame on another site:</p>
<pre><code>http://superset.domain.local?token=*************************.eyJ1c2VyIjp7Imxhc3RfbmFtZSI6IkthbnNhcmEiLCJ1c2VybmFtZSI6InRrYW5zYXJhIiwiZmlyc3RfbmFtZS****************************************Ym9hcmQiLCJpZCI6IjEifV0sInJsc19ydWxlcyI6W10sImlhdCI6MTcwNDY2MDYzNS42ODgwMDYyLCJleHAiOjE3MDQ2NjA5MzUuNjg4MDA2MiwiYXVkIjoiaHR0cDovLzAuMC4wLjA6ODA4MC8iLCJ0eXBlIjoiZ3Vlc3QifQ.t2SZsWg66njR1He20cAcMA9mhkafuCZBqrX58jabqbo&next=/superset/dashboard/p/jWQx9XBR4Bw/?standalone=true
</code></pre>
<p>The issue is that the iFrame always shows the login page. I tried the URL in an incognito window and it also showed the login page.
Any thoughts on what I am doing wrong? We would like external users to see the dashboard as guest users on the other site.</p>
<p>Thanks for your help.
Regards</p>
|
<python><embed><apache-superset>
|
2024-01-07 21:00:06
| 1
| 648
|
tkansara
|
77,774,595
| 14,120,387
|
Reformat CloudWatch logs sent to S3 using Kinesis Firehose for JSON
|
<p>I have a working setup that sends CloudWatch logs to an S3 bucket using Kinesis Firehose. Unfortunately, the files in S3 do not contain properly formatted JSON. A properly formatted array of JSON objects looks like this: [{}, {}, {}]. However, the S3 files created by Firehose look like this: {}{}{}. I am attempting to modify a Kinesis Firehose data-transform Lambda blueprint by adding the square brackets at the beginning and end, and commas between the JSON objects. The Lambda blueprint I am attempting to modify is called "Process CloudWatch logs sent to Kinesis Firehose". Here are the relevant parts:</p>
<pre><code>import base64
import json
import gzip
import boto3
def transformLogEvent(log_event):
"""Transform each log event.
The default implementation below just extracts the message and appends a newline to it.
Args:
log_event (dict): The original log event. Structure is {"id": str, "timestamp": long, "message": str}
Returns:
str: The transformed log event.
"""
return log_event['message'] + ',\n'
def processRecords(records):
for r in records:
data = loadJsonGzipBase64(r['data'])
recId = r['recordId']
# CONTROL_MESSAGE are sent by CWL to check if the subscription is reachable.
# They do not contain actual data.
if data['messageType'] == 'CONTROL_MESSAGE':
yield {
'result': 'Dropped',
'recordId': recId
}
elif data['messageType'] == 'DATA_MESSAGE':
joinedData = ''.join([transformLogEvent(e) for e in data['logEvents']])
dataBytes = joinedData.encode("utf-8")
encodedData = base64.b64encode(dataBytes).decode('utf-8')
yield {
'data': encodedData,
'result': 'Ok',
'recordId': recId
}
else:
yield {
'result': 'ProcessingFailed',
'recordId': recId
}
def loadJsonGzipBase64(base64Data):
return json.loads(gzip.decompress(base64.b64decode(base64Data)))
def lambda_handler(event, context):
isSas = 'sourceKinesisStreamArn' in event
streamARN = event['sourceKinesisStreamArn'] if isSas else event['deliveryStreamArn']
region = streamARN.split(':')[3]
streamName = streamARN.split('/')[1]
records = list(processRecords(event['records']))
projectedSize = 0
recordListsToReingest = []
for idx, rec in enumerate(records):
originalRecord = event['records'][idx]
if rec['result'] != 'Ok':
continue
# If a single record is too large after processing, split the original CWL data into two, each containing half
# the log events, and re-ingest both of them (note that it is the original data that is re-ingested, not the
# processed data). If it's not possible to split because there is only one log event, then mark the record as
# ProcessingFailed, which sends it to error output.
if len(rec['data']) > 6000000:
cwlRecord = loadJsonGzipBase64(originalRecord['data'])
if len(cwlRecord['logEvents']) > 1:
rec['result'] = 'Dropped'
recordListsToReingest.append(
[createReingestionRecord(isSas, originalRecord, data) for data in splitCWLRecord(cwlRecord)])
else:
rec['result'] = 'ProcessingFailed'
print(('Record %s contains only one log event but is still too large after processing (%d bytes), ' +
'marking it as %s') % (rec['recordId'], len(rec['data']), rec['result']))
del rec['data']
else:
projectedSize += len(rec['data']) + len(rec['recordId'])
# 6000000 instead of 6291456 to leave ample headroom for the stuff we didn't account for
if projectedSize > 6000000:
recordListsToReingest.append([createReingestionRecord(isSas, originalRecord)])
del rec['data']
rec['result'] = 'Dropped'
# call putRecordBatch/putRecords for each group of up to 500 records to be re-ingested
if recordListsToReingest:
recordsReingestedSoFar = 0
client = boto3.client('kinesis' if isSas else 'firehose', region_name=region)
maxBatchSize = 500
flattenedList = [r for sublist in recordListsToReingest for r in sublist]
for i in range(0, len(flattenedList), maxBatchSize):
recordBatch = flattenedList[i:i + maxBatchSize]
# last argument is maxAttempts
args = [streamName, recordBatch, client, 0, 20]
if isSas:
putRecordsToKinesisStream(*args)
else:
putRecordsToFirehoseStream(*args)
recordsReingestedSoFar += len(recordBatch)
print('Reingested %d/%d' % (recordsReingestedSoFar, len(flattenedList)))
print('%d input records, %d returned as Ok or ProcessingFailed, %d split and re-ingested, %d re-ingested as-is' % (
len(event['records']),
len([r for r in records if r['result'] != 'Dropped']),
len([l for l in recordListsToReingest if len(l) > 1]),
len([l for l in recordListsToReingest if len(l) == 1])))
# encapsulate in square brackets for proper JSON formatting
last = len(event['records'])-1
start = '['+base64.b64decode(records[0]['data']).decode('utf-8')
end = base64.b64decode(records[last]['data']).decode('utf-8')+']'
records[0]['data'] = base64.b64encode(start.encode('utf-8'))
records[last]['data'] = base64.b64encode(end.encode('utf-8'))
return {'records': records}
</code></pre>
<p>I added the comma before the \n in the transformLogEvent method and these lines at the end of the lambda_handler method to add the square brackets:</p>
<pre><code>last = len(event['records'])-1
start = '['+base64.b64decode(records[0]['data']).decode('utf-8')
end = base64.b64decode(records[last]['data']).decode('utf-8')+']'
records[0]['data'] = base64.b64encode(start.encode('utf-8'))
records[last]['data'] = base64.b64encode(end.encode('utf-8'))
</code></pre>
<p>I am getting this error when testing this Lambda function:</p>
<pre><code>{
"errorMessage": "'data'",
"errorType": "KeyError",
"requestId": "04ac151c-c429-484d-813f-ffd5d65286e2",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 270, in lambda_handler\n start = '['+base64.b64decode(records[0]['data']).decode('utf-8')\n"
]
}
</code></pre>
<p>I think this is an error in how I am indexing the Python list. I would like to know how to fix this error, and/or whether someone has a better solution to the JSON format problem, keeping in mind that CloudWatch Logs events are sent to Kinesis Data Firehose in compressed gzip format.</p>
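For reference, a hypothetical guard I have been sketching (function and variable names are mine) for the bracketing step: it only indexes records that still carry a <code>'data'</code> key, since records marked Dropped or ProcessingFailed had <code>'data'</code> deleted earlier in the handler — which is presumably where the KeyError comes from:

```python
import base64

def bracket_records(records):
    # Only bracket records that still have 'data' -- Dropped and
    # ProcessingFailed records had it deleted earlier in lambda_handler.
    ok = [i for i, r in enumerate(records) if 'data' in r]
    if not ok:
        return records  # nothing left to wrap
    first, last = ok[0], ok[-1]
    if first == last:
        # Single surviving record: wrap it in both brackets at once.
        payload = base64.b64decode(records[first]['data']).decode('utf-8')
        wrapped = '[' + payload + ']'
        records[first]['data'] = base64.b64encode(wrapped.encode('utf-8')).decode('utf-8')
        return records
    start = '[' + base64.b64decode(records[first]['data']).decode('utf-8')
    end = base64.b64decode(records[last]['data']).decode('utf-8') + ']'
    records[first]['data'] = base64.b64encode(start.encode('utf-8')).decode('utf-8')
    records[last]['data'] = base64.b64encode(end.encode('utf-8')).decode('utf-8')
    return records
```

Note this only addresses the KeyError: because `transformLogEvent` appends `,\n` to every event, the last object before `]` still carries a trailing comma, which strict JSON parsers reject — the comma would need to be dropped for the final event as well.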
|
<python><amazon-web-services><amazon-s3><aws-lambda><amazon-cloudwatch>
|
2024-01-07 19:45:11
| 1
| 418
|
Brian G
|
77,774,517
| 7,961,594
|
How to create a Python GTK3 TreeView column which contains BOTH text and images
|
<p>I have a GTK3 Python program with a TreeView, and I would like to have a column that mostly contains text, but can also contain an image. I've been asking ChatGPT, but this is as close as I can get:</p>
<pre><code>import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, GdkPixbuf, GLib
class TreeViewExample(Gtk.Window):
def __init__(self):
super().__init__(title="TreeView Table Example")
self.set_default_size(400, 300)
# Create a ListStore model with two columns
self.model = Gtk.ListStore(object, str) # Using 'object' type for the first column
self.model.append([self.load_image("path/to/your/image.png"), "Value A"]) # First row with an image
self.model.append(["Row 2", "Value B"]) # Second row with text
self.model.append(["Row 3", "Value C"])
# Create a TreeView
self.treeview = Gtk.TreeView(model=self.model)
# Create the first column with custom cell renderers
column1 = Gtk.TreeViewColumn("Column 1")
self.treeview.append_column(column1)
# Create a text renderer for the first column
cell_renderer_text = Gtk.CellRendererText()
column1.pack_start(cell_renderer_text, expand=True)
column1.add_attribute(cell_renderer_text, "text", 1) # Display text
column1.set_cell_data_func(cell_renderer_text, self.render_first_column) # Custom rendering function
# Create an image renderer for the first column
cell_renderer_pixbuf = Gtk.CellRendererPixbuf()
column1.pack_start(cell_renderer_pixbuf, expand=True)
column1.add_attribute(cell_renderer_pixbuf, "pixbuf", 0) # Display image
# Create the second column with a text renderer
cell_renderer2 = Gtk.CellRendererText()
column2 = Gtk.TreeViewColumn("Column 2", cell_renderer2, text=1)
self.treeview.append_column(column2)
# Add the TreeView to the window
self.add(self.treeview)
def load_image(self, image_path):
# Load an image from file and return a GdkPixbuf.Pixbuf
try:
pixbuf = GdkPixbuf.Pixbuf.new_from_file(image_path)
return pixbuf
except GLib.Error:
print(f"Error loading image from {image_path}")
return None
def render_first_column(self, column, cell, model, iter_, data=None):
# Custom rendering function for the first column
value = model.get_value(iter_, 0)
if isinstance(value, GdkPixbuf.Pixbuf):
cell.set_property("pixbuf", value) # Display the image
else:
cell.set_property("text", str(value)) # Display text
if __name__ == "__main__":
app = TreeViewExample()
app.connect("destroy", Gtk.main_quit)
app.show_all()
Gtk.main()
</code></pre>
<p>It seems right that the first model column is an "object" so it can be either a str or a pixbuf. But the call to set_cell_data_func is passed cell_renderer_text, so when self.render_first_column is called, the cell parameter always seems to be a CellRendererText. So when the value is a pixbuf and it runs the command:</p>
<pre><code>cell.set_property("pixbuf", value)
</code></pre>
<p>It fails with:</p>
<pre><code>builtins.TypeError: object of type `GtkCellRendererText' does not have property `pixbuf'
</code></pre>
<p>This post (<a href="https://stackoverflow.com/questions/13648323/how-to-make-a-column-with-a-string-and-pixbuf-in-gtktreeview">How to make a column with a string and pixbuf in GtkTreeview?</a>) seems to put both of the CellRenderers together without using a cell_data_func, but my attempts along those lines haven't been successful either.</p>
<p>Can someone help me turn this into working code?</p>
<p>I would also appreciate some help understanding the column.pack_start and column.add_attribute methods. It seems like the pack_start method is for having multiple CellRenderers in the same cell, but how does it know which one to use?</p>
<p>Thanks,
Jeff</p>
|
<python><treeview><gtk3>
|
2024-01-07 19:25:43
| 1
| 1,095
|
Jeff H
|
77,774,515
| 18,321,042
|
PdfWriter Python how to update field with multiline text
|
<p>I've been trying for hours to insert multiline text into a PDF with Python, but none of the potential solutions worked. I've looked at multiple libraries and other answers, but none of them used PdfWriter or solved my problem.
Here are a few of my tries:</p>
<ul>
<li>\n</li>
<li>\r</li>
<li>\t</li>
<li>unicode characters for tabs and returns</li>
<li>textwrap</li>
<li>\u0009</li>
</ul>
<p>But none of them work: the PDF is produced with the correct text, but it doesn't break onto a new line.</p>
<p>The pdf can be found <a href="https://5.imimg.com/data5/SELLER/Doc/2021/4/QI/LK/YD/7115850/pdf-conversion-services.pdf" rel="nofollow noreferrer">here</a></p>
<p>Here is my code:</p>
<pre><code>import os
from pypdf import PdfReader, PdfWriter
file_path = os.path.join(script_directory, 'folder/', 'sample.pdf')
reader = PdfReader(file_path)
page_0 = reader.pages[0]
writer = PdfWriter()
writer.add_page(page_0)
name = current_user.name
address = session['address']
writer.update_page_form_field_values(writer.pages[0], {"1": f'{name}\r\n{address}'})
with open(os.path.join(script_directory, 'folder/', 'sample_filled_out.pdf'), "wb") as output_stream:
writer.write(output_stream)
</code></pre>
<p>Any help or guidance would be appreciated, thank you!</p>
|
<python><newline><pdf-writer>
|
2024-01-07 19:24:47
| 1
| 575
|
Liam
|
77,774,435
| 5,332,914
|
Reading And Filtering a CSV files Column
|
<p>I am reading a CSV into a dataframe and counting the rows that match a condition, like this:</p>
<pre><code>import pandas as pd
gf = pd.read_csv(raw_github_csv_file_url)
print(len(gf[gf["gender"]=="M"]))
</code></pre>
<p>My CSV has these columns:</p>
<p>id | profile | author | gender | flag
<a href="https://i.sstatic.net/rs60G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rs60G.png" alt="same csv file" /></a></p>
<p>But I am getting this error:</p>
<pre><code>KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   3801 try:
-> 3802     return self._engine.get_loc(casted_key)
   3803 except KeyError as err:

4 frames
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: 'gender'

The above exception was the direct cause of the following exception:

KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   3802     return self._engine.get_loc(casted_key)
   3803 except KeyError as err:
-> 3804     raise KeyError(key) from err
   3805 except TypeError:
   3806     # If we have a listlike key, _check_indexing_error will raise

KeyError: 'gender'
</code></pre>
<p>I read other files in the same way and was successful, but I don't understand why I am getting the error here, or what the root cause is.</p>
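For reference, here is a synthetic reproduction of the kind of mismatch I suspect (a wrong delimiter; stray whitespace in header names would behave similarly). Inspecting <code>gf.columns</code> shows immediately whether <code>gender</code> is really a column name:

```python
import io
import pandas as pd

# If the file is pipe-delimited but read with the default comma separator,
# pandas collapses the header into one merged column, so gf["gender"]
# raises KeyError.
csv_text = "id|profile|author|gender|flag\n1|p1|a1|M|0\n2|p2|a2|F|1\n"

gf_bad = pd.read_csv(io.StringIO(csv_text))          # default sep=','
print(gf_bad.columns.tolist())                       # one merged column name

gf_ok = pd.read_csv(io.StringIO(csv_text), sep="|")  # correct separator
print(gf_ok.columns.tolist())
print(len(gf_ok[gf_ok["gender"] == "M"]))            # → 1
```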
|
<python><pandas><google-colaboratory>
|
2024-01-07 19:00:34
| 1
| 2,046
|
user404
|
77,774,225
| 1,123,733
|
How to batch two select queries in sqlalchemy to avoid multiple requests?
|
<p>Is it possible to batch independent database requests in SQLAlchemy? Say I have a setup like this:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session
Base = declarative_base()
class User(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
name = Column(String)
class Item(Base):
__tablename__ = 'items'
id = Column(Integer, primary_key=True)
name = Column(String)
desc = Column(String)
def get_all_users_and_items(session: Session):
users = session.query(User).all()
items = session.query(Item).all()
return users, items
</code></pre>
<p>As I understand it, each <code>.all()</code> in the <code>get_all_users_and_items</code> function will result in a separate request to the database. Perhaps there is some laziness that the requests are not issued until the items are iterated over, but iterating over users and then items will result in two separate requests, i.e. two round trips to the database. If I know ahead of time that I will be iterating over both users and items, it seems wasteful to have two round trips to the database, and I'm hoping there is a way to have a single round-trip where I tell the database "hey, I need all the users and all the items, but separately".</p>
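To make the round-trip claim concrete, here is a small self-contained check (SQLite in-memory, my own sketch) that counts the SELECT statements reaching the cursor — the two <code>.all()</code> calls do emit two separate statements:

```python
import sqlalchemy as sa
from sqlalchemy import Column, Integer, String, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

selects = []

# Register after create_all so only the query statements are counted.
@event.listens_for(engine, "before_cursor_execute")
def _count(conn, cursor, statement, parameters, context, executemany):
    if statement.lstrip().upper().startswith("SELECT"):
        selects.append(statement)

with Session(engine) as session:
    users = session.query(User).all()
    items = session.query(Item).all()

print(len(selects))  # → 2
```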
<p>Is this possible in SQLAlchemy?</p>
|
<python><sqlalchemy>
|
2024-01-07 18:04:27
| 0
| 1,883
|
nullUser
|
77,774,217
| 16,383,578
|
How to extract several rotated shapes from an image?
|
<p>I took an online visual IQ test in which a lot of questions are like the following:</p>
<p><img src="https://www.idrlabs.com/static/i/eysenck-iq/en/3.png" alt="" /></p>
<p><img src="https://www.idrlabs.com/static/i/eysenck-iq/en/5.png" alt="" /></p>
<p><img src="https://www.idrlabs.com/static/i/eysenck-iq/en/7.png" alt="" /></p>
<p>The addresses of the images are:</p>
<pre><code>[f"https://www.idrlabs.com/static/i/eysenck-iq/en/{i}.png" for i in range(1, 51)]
</code></pre>
<p>In these images there are several shapes that are almost identical and of nearly the same size. Most of these shapes can be obtained from the others by rotation and translation, but there is exactly one shape that can only be obtained from the others with reflection; this shape has a different chirality from the others, and it is "the odd man". The task is to find it.</p>
<p>The answers here are 2, 1, and 4, respectively. I would like to automate it.</p>
<p>And I nearly succeeded.</p>
<p>First, I download the image, and load it using <code>cv2</code>.</p>
<p>Then I threshold the image and invert the values, and then find the contours. I then find the largest contours.</p>
<p>Now I need to extract the shapes associated with the contours and make the shapes stand upright. And this is where I'm stuck: I nearly succeeded, but there are edge cases.</p>
<p>My idea is simple, find the minimal area bounding box of the contour, then rotate the image to make the rectangle upright (all sides are parallel to grid-lines, longest sides are vertical), and then calculate the new coordinates of the rectangle, and finally using array slicing to extract the shape.</p>
<p>I have achieved what I have described:</p>
<pre><code>import cv2
import requests
import numpy as np
img = cv2.imdecode(
np.asarray(
bytearray(
requests.get(
"https://www.idrlabs.com/static/i/eysenck-iq/en/5.png"
).content,
),
dtype=np.uint8,
),
-1,
)
def get_contours(image):
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 128, 255, 0)
thresh = ~thresh
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
return contours
def find_largest_areas(contours):
areas = [cv2.contourArea(contour) for contour in contours]
area_ranks = [(area, i) for i, area in enumerate(areas)]
area_ranks.sort(key=lambda x: -x[0])
for i in range(1, len(area_ranks)):
avg = sum(e[0] for e in area_ranks[:i]) / i
if area_ranks[i][0] < avg * 0.95:
break
return {e[1] for e in area_ranks[:i]}
def find_largest_shapes(image):
contours = get_contours(image)
area_ranks = find_largest_areas(contours)
contours = [e for i, e in enumerate(contours) if i in area_ranks]
rectangles = [cv2.minAreaRect(contour) for contour in contours]
rectangles.sort(key=lambda x: x[0])
return rectangles
def rotate_image(image, angle):
size_reverse = np.array(image.shape[1::-1])
M = cv2.getRotationMatrix2D(tuple(size_reverse / 2.0), angle, 1.0)
MM = np.absolute(M[:, :2])
size_new = MM @ size_reverse
M[:, -1] += (size_new - size_reverse) / 2.0
return cv2.warpAffine(image, M, tuple(size_new.astype(int)))
def int_sort(arr):
return np.sort(np.intp(np.floor(arr + 0.5)))
RADIANS = {}
def rotate(x, y, angle):
if pair := RADIANS.get(angle):
cosa, sina = pair
else:
a = angle / 180 * np.pi
cosa, sina = np.cos(a), np.sin(a)
RADIANS[angle] = (cosa, sina)
return x * cosa - y * sina, y * cosa + x * sina
def new_border(x, y, angle):
nx, ny = rotate(x, y, angle)
nx = int_sort(nx)
ny = int_sort(ny)
return nx[3] - nx[0], ny[3] - ny[0]
def coords_to_pixels(x, y, w, h):
cx, cy = w / 2, h / 2
nx, ny = x + cx, cy - y
nx, ny = int_sort(nx), int_sort(ny)
a, b = nx[0], ny[0]
return a, b, nx[3] - a, ny[3] - b
def new_contour_bounds(pixels, w, h, angle):
cx, cy = w / 2, h / 2
x = np.array([-cx, cx, cx, -cx])
y = np.array([cy, cy, -cy, -cy])
nw, nh = new_border(x, y, angle)
bx, by = pixels[..., 0] - cx, cy - pixels[..., 1]
nx, ny = rotate(bx, by, angle)
return coords_to_pixels(nx, ny, nw, nh)
def extract_shape(rectangle, image):
box = np.intp(np.floor(cv2.boxPoints(rectangle) + 0.5))
h, w = image.shape[:2]
angle = -rectangle[2]
x, y, dx, dy = new_contour_bounds(box, w, h, angle)
image = rotate_image(image, angle)
shape = image[y : y + dy, x : x + dx]
sh, sw = shape.shape[:2]
if sh < sw:
shape = np.rot90(shape)
return shape
rectangles = find_largest_shapes(img)
for rectangle in rectangles:
shape = extract_shape(rectangle, img)
cv2.imshow("", shape)
cv2.waitKeyEx(0)
</code></pre>
<p>But it doesn't work perfectly:</p>
<p><a href="https://i.sstatic.net/UZdSc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UZdSc.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/ttnEk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ttnEk.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/oUErM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oUErM.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/VvOrq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VvOrq.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/ldW3v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ldW3v.png" alt="enter image description here" /></a></p>
<p>As you can see, it includes everything in the bounding rectangle, not just the main shape bounded by the contour; there are some extra bits sticking in. I want the result to contain only areas bounded by the contour.</p>
<p>And then, the more serious problem, somehow the bounding box doesn't always align with the principal axis of the contour, as you can see in the last image it doesn't stand upright and there are black areas.</p>
<p>How to fix these problems?</p>
|
<python><opencv><image-processing><computer-vision>
|
2024-01-07 18:01:50
| 1
| 3,930
|
Ξένη Γήινος
|
77,774,216
| 2,567,544
|
How do Python enums enforce read access only?
|
<p>I understand that in Python constants are implied, but not enforced, by an uppercase variable name. I also understand that by subclassing from Enum, a class can implement an enumerated type which prevents the individual enumerated values from being overwritten.</p>
<p>These two idioms seem to me to be somewhat at odds with each other, and I was wondering how the Enum solution enforces read-only access to the member values.</p>
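A quick check of the surface behavior (stdlib only): Enum's metaclass overrides <code>__setattr__</code> on the class, so rebinding a member raises <code>AttributeError</code> (the exact message wording varies across Python versions):

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2

# Attempting to rebind a member is intercepted by the metaclass.
try:
    Color.RED = 99
except AttributeError as exc:
    print(type(exc).__name__)  # → AttributeError

print(Color.RED.value)  # still 1
```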
|
<python><enums>
|
2024-01-07 18:00:43
| 1
| 607
|
user2567544
|
77,774,153
| 9,931,606
|
Error building MySQLclient wheel on Python 3.12.0: clang error - invalid active developer path
|
<p>I'm encountering an issue while trying to install the <code>mysqlclient</code> package on Python <code>3.12.0</code>. The build process fails with a clang error. I've included the relevant error output below.</p>
<p>Note: I have switched to using <code>PySQL</code>, which worked. But I still want to dig into and fix the issue below in my project.</p>
<ol>
<li>Environment Details:</li>
</ol>
<ul>
<li>Python version: <code>3.12.0</code></li>
<li>Django version: <code>5.0.1</code></li>
</ul>
<ol start="2">
<li>Additional shell output:</li>
</ol>
<pre><code>Collecting mysqlclient
Using cached mysqlclient-2.2.1.tar.gz (89 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: mysqlclient
Building wheel for mysqlclient (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for mysqlclient (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [41 lines of output]
# Options for building extension module:
extra_compile_args: ['-I/opt/homebrew/Cellar/mysql-client/8.2.0/include/mysql', '-std=c99']
extra_link_args: ['-L/opt/homebrew/Cellar/mysql-client/8.2.0/lib', '-lmysqlclient']
define_macros: [('version_info', (2, 2, 1, 'final', 0)), ('__version__', '2.2.1')]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.9-universal2-cpython-312
creating build/lib.macosx-10.9-universal2-cpython-312/MySQLdb
copying src/MySQLdb/release.py -> build/lib.macosx-10.9-universal2-cpython-312/MySQLdb
copying src/MySQLdb/cursors.py -> build/lib.macosx-10.9-universal2-cpython-312/MySQLdb
copying src/MySQLdb/connections.py -> build/lib.macosx-10.9-universal2-cpython-312/MySQLdb
copying src/MySQLdb/__init__.py -> build/lib.macosx-10.9-universal2-cpython-312/MySQLdb
copying src/MySQLdb/times.py -> build/lib.macosx-10.9-universal2-cpython-312/MySQLdb
copying src/MySQLdb/converters.py -> build/lib.macosx-10.9-universal2-cpython-312/MySQLdb
copying src/MySQLdb/_exceptions.py -> build/lib.macosx-10.9-universal2-cpython-312/MySQLdb
creating build/lib.macosx-10.9-universal2-cpython-312/MySQLdb/constants
copying src/MySQLdb/constants/FLAG.py -> build/lib.macosx-10.9-universal2-cpython-312/MySQLdb/constants
copying src/MySQLdb/constants/CLIENT.py -> build/lib.macosx-10.9-universal2-cpython-312/MySQLdb/constants
copying src/MySQLdb/constants/__init__.py -> build/lib.macosx-10.9-universal2-cpython-312/MySQLdb/constants
copying src/MySQLdb/constants/ER.py -> build/lib.macosx-10.9-universal2-cpython-312/MySQLdb/constants
copying src/MySQLdb/constants/CR.py -> build/lib.macosx-10.9-universal2-cpython-312/MySQLdb/constants
copying src/MySQLdb/constants/FIELD_TYPE.py -> build/lib.macosx-10.9-universal2-cpython-312/MySQLdb/constants
running egg_info
writing src/mysqlclient.egg-info/PKG-INFO
writing dependency_links to src/mysqlclient.egg-info/dependency_links.txt
writing top-level names to src/mysqlclient.egg-info/top_level.txt
reading manifest file 'src/mysqlclient.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
adding license file 'LICENSE'
writing manifest file 'src/mysqlclient.egg-info/SOURCES.txt'
copying src/MySQLdb/_mysql.c -> build/lib.macosx-10.9-universal2-cpython-312/MySQLdb
running build_ext
building 'MySQLdb._mysql' extension
creating build/temp.macosx-10.9-universal2-cpython-312
creating build/temp.macosx-10.9-universal2-cpython-312/src
creating build/temp.macosx-10.9-universal2-cpython-312/src/MySQLdb
clang -fno-strict-overflow -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -O3 -Wall -arch arm64 -arch x86_64 -g "-Dversion_info=(2, 2, 1, 'final', 0)" -D__version__=2.2.1 -I/Users/fuwhis/Desktop/cloudsky/django-blog-example/env/include -I/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12 -c src/MySQLdb/_mysql.c -o build/temp.macosx-10.9-universal2-cpython-312/src/MySQLdb/_mysql.o -I/opt/homebrew/Cellar/mysql-client/8.2.0/include/mysql -std=c99
xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for mysqlclient
Failed to build mysqlclient
ERROR: Could not build wheels for mysqlclient, which is required to install pyproject.toml-based projects
</code></pre>
|
<python><macos><pip><libmysqlclient>
|
2024-01-07 17:43:14
| 1
| 323
|
littleAnt
|
77,774,062
| 1,005,334
|
quart-cors no longer working after update
|
<p>I've been using Quart with quart-schema and quart-cors for some time, and since the latest version of quart-schema provides some updates I can use, I upgraded the Quart packages with pip.</p>
<p>It all works fine, except the front-end now throws CORS errors at me:</p>
<blockquote>
<p>Access to fetch at 'http://abc.ai/api/blogs/' (redirected from
'http://localhost:5173/api/blogs') from origin 'http://localhost:5173'
has been blocked by CORS policy: No 'Access-Control-Allow-Origin'
header is present on the requested resource. If an opaque response
serves your needs, set the request's mode to 'no-cors' to fetch the
resource with CORS disabled.</p>
</blockquote>
<p>There are no errors in the back-end.</p>
<p>I configured it in a very basic way (the app in question is still in development), per the readme (<a href="https://pypi.org/project/quart-cors/" rel="nofollow noreferrer">https://pypi.org/project/quart-cors/</a>):</p>
<pre><code>app = Quart(__name__)
app = cors(app, allow_origin="*")
</code></pre>
<p>That worked fine before I updated the packages, and should do the trick, since the entire app is providing the necessary 'Access-Control-Allow-Origin' header in the most permissive way.</p>
<p>I'm not sure why it stopped working. According to the readme, the configuration is exactly the same as it used to be (I only upgraded from version 0.6.0 to 0.7.0). Perhaps it has something to do with the Hypercorn server I'm using and also updated (from 0.14.4 to 0.16.0), but that's just guessing, as I have no error messages from the API/server, only the CORS error from the front-end.</p>
<p>Am I missing something here? What could be going wrong?</p>
|
<python><cors><quart><hypercorn>
|
2024-01-07 17:17:31
| 1
| 1,544
|
kasimir
|
77,773,884
| 8,754,958
|
Python Scrapy Splash Extremely Slow with Single Page
|
<p>I'm new to Scrapy with Splash and would appreciate some advice. I'm trying to scrape the website <a href="https://www.canada.ca/en/revenue-agency/services/forms-publications/forms.html" rel="nofollow noreferrer">https://www.canada.ca/en/revenue-agency/services/forms-publications/forms.html</a>, which contains a list of government forms. My Spider has worked fine with the Scrapy tutorials I followed. Scraping <a href="https://quotes.toscrape.com/" rel="nofollow noreferrer">https://quotes.toscrape.com/</a> only takes a couple of seconds to run. But with the site I'm trying now, the requests keep timing out even with a timeout of 300 seconds, for a single page! I must be doing something wrong, but I don't know what.</p>
<p>Here's the settings for the spider:</p>
<pre><code>BOT_NAME = "quotes_js_scraper"
SPIDER_MODULES = ["quotes_js_scraper.spiders"]
NEWSPIDER_MODULE = "quotes_js_scraper.spiders"
ROBOTSTXT_OBEY = False
SPLASH_URL = "http://localhost:8050"
DOWNLOADER_MIDDLEWARES = {
"scrapy_splash.SplashCookiesMiddleware": 723,
"scrapy_splash.SplashMiddleware": 725,
"scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware": 810,
}
SPIDER_MIDDLEWARES = {
"scrapy_splash.SplashDeduplicateArgsMiddleware": 100,
}
DUPEFILTER_CLASS = "scrapy_splash.SplashAwareDupeFilter"
# added this myself after getting a warning that the default
# '2.6' is deprecated
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
</code></pre>
<p>And then here's the Spider:</p>
<pre><code>from pathlib import Path
import scrapy
from scrapy_splash import SplashRequest
class QuotesSpider(scrapy.Spider):
name = "quotes"
def start_requests(self):
# url = "https://quotes.toscrape.com/"
url = "https://www.canada.ca/en/revenue-agency/services/forms-publications/forms.html"
yield SplashRequest(
url,
callback=self.parse,
args={
"wait": 1,
"proxy": "http://scrapeops:xxxxxxxxxxxxxxxxxxxxxxxxxxxx@proxy.scrapeops.io:5353",
"timeout": 300,
},
)
def parse(self, response):
filename = "test_output.html"
Path(filename).write_bytes(response.body)
</code></pre>
<p>and here is the output in the terminal</p>
<pre><code>2024-01-07 11:39:55 [scrapy.utils.log] INFO: Scrapy 2.11.0 started (bot: quotes_js_scraper)
2024-01-07 11:39:55 [scrapy.utils.log] INFO: Versions: lxml 5.0.1.0, libxml2 2.10.3, cssselect 1.2.0, parsel 1.8.1, w3lib 2.1.2, Twisted 22.10.0, Python 3.12.1 (tags/v3.12.1:2305ca5, Dec 7 2023, 22:03:25) [MSC v.1937 64 bit (AMD64)], pyOpenSSL 23.3.0 (OpenSSL 3.1.4 24 Oct 2023), cryptography 41.0.7, Platform Windows-11-10.0.22621-SP0
2024-01-07 11:39:55 [scrapy.addons] INFO: Enabled addons:
[]
2024-01-07 11:39:55 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2024-01-07 11:39:55 [scrapy.extensions.telnet] INFO: Telnet Password: xxxxxxxxxxxxxxxxxxx
2024-01-07 11:39:55 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2024-01-07 11:39:55 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'quotes_js_scraper',
'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter',
'NEWSPIDER_MODULE': 'quotes_js_scraper.spiders',
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'SPIDER_MODULES': ['quotes_js_scraper.spiders']}
2024-01-07 11:39:56 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy_splash.SplashCookiesMiddleware',
'scrapy_splash.SplashMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2024-01-07 11:39:56 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy_splash.SplashDeduplicateArgsMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2024-01-07 11:39:56 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2024-01-07 11:39:56 [scrapy.core.engine] INFO: Spider opened
2024-01-07 11:39:56 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-01-07 11:39:56 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2024-01-07 11:39:56 [py.warnings] WARNING: C:\Users\gmloo\OneDrive\Documents\Customtech Solutions\Products and Services\Scraping\quotes-js-project\venv\Lib\site-packages\scrapy_splash\dupefilter.py:20: ScrapyDeprecationWarning: Call to deprecated function scrapy.utils.request.request_fingerprint().
If you are using this function in a Scrapy component, and you are OK with users of your component changing the fingerprinting algorithm through settings, use crawler.request_fingerprinter.fingerprint() instead in your Scrapy component (you can get the crawler object from the 'from_crawler' class method).
Otherwise, consider using the scrapy.utils.request.fingerprint() function instead.
Either way, the resulting fingerprints will be returned as bytes, not as a string, and they will also be different from those generated by 'request_fingerprint()'. Before you switch, make sure that you understand the consequences of this (e.g. cache invalidation) and are OK with them; otherwise, consider implementing your own function which returns the same fingerprints as the deprecated 'request_fingerprint()' function.
fp = request_fingerprint(request, include_headers=include_headers)
2024-01-07 11:40:56 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-01-07 11:41:56 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-01-07 11:42:56 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-01-07 11:43:56 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-01-07 11:44:56 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-01-07 11:44:56 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.canada.ca/en/revenue-agency/services/forms-publications/forms.html via http://localhost:8050/render.html> (failed 1 times): 504 Gateway Time-out
2024-01-07 11:45:56 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-01-07 11:46:56 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
...
</code></pre>
<p>I noticed I'm getting very slow responses from trying to scrape other websites as well. My computer is slow but not that slow. Is there a way I can speed this up? Or are my settings wrong? For example, all I want is links and text on the page; I don't need any content of "script" tags returned. I know I can remove that in the parse method, but that only runs after the response has been generated, right?</p>
<p>Thank you.</p>
|
<python><scrapy><splash-screen>
|
2024-01-07 16:28:13
| 0
| 805
|
Geoff L
|
77,773,861
| 2,251,142
|
How to run python source file and display source code and output
|
<p>[Added] <strong>Background</strong>: I am trying to make a script to preprocess markdown files containing examples in various languages (R, Stata, and python for now). For each markdown file, my script extracts the code blocks and save the code blocks to files, say "temp.R", "temp.do", and "temp.py".</p>
<p>When I source temp.R (R) and temp.do (Stata), I get log files containing both inputs and outputs, line by line. The commands are <code>R CMD BATCH -q temp.R</code> and <code>stata-mp -q -b temp.do</code>. I would like similar outputs for python.</p>
<p>When I source temp.py, I see only the outputs. I want to have a "log" file like what I would see if I copy and paste the source into a python interactive session, without worrying about incomplete lines.</p>
<p><strong>Question:</strong> My "temp.py" file contains:</p>
<pre class="lang-py prettyprint-override"><code>1+1
if True:
print('True')
msg = "Hello, world!"
print(msg)
</code></pre>
<p>I would like to get:</p>
<pre><code>>>> 1+1
2
>>> if True:
... print('True')
...
True
>>> msg = "Hello, world!"
>>> print(msg)
Hello, world!
</code></pre>
<p>by <code>some_command temp.py</code>, like what I would see if the source lines were typed in a Python interactive session. Is it possible? I've tried <code>cat temp.py | python</code> and <code>python < temp.py</code> to no avail. The actual Python source files can be large and complicated.</p>
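<p>One standard-library approach (a sketch, not a polished tool: it assumes compound statements are terminated by a blank line, as they would be in a real interactive session) is to feed the file line by line to <code>code.InteractiveConsole</code> and echo the prompts yourself:</p>

```python
import code
import io
import sys

lines = [
    "1+1",
    "if True:",
    "    print('True')",
    "",  # blank line ends the block, as in a real session
    'msg = "Hello, world!"',
    "print(msg)",
]

console = code.InteractiveConsole()
log = io.StringIO()
old_stdout, sys.stdout = sys.stdout, log  # capture whatever the code prints
need_more = False
for line in lines:
    log.write(("... " if need_more else ">>> ") + line + "\n")
    need_more = console.push(line)  # True while a compound statement is open
sys.stdout = old_stdout
print(log.getvalue())
```

<p>Reading <code>lines</code> from <code>temp.py</code> and redirecting the result to a file would give an R-CMD-BATCH-style log; note that tracebacks go to <code>sys.stderr</code>, so redirect that too if you want errors in the log.</p>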
|
<python>
|
2024-01-07 16:20:27
| 1
| 663
|
chan1142
|
77,773,655
| 2,244,093
|
Prisoner's dilemma strange results
|
<p>I tried to implement a prisoner's dilemma in Python, but instead of showing that tit for tat is a better strategy, my results show that defecting gives better results.</p>
<p>Can someone look at my code, and tell me what I have done wrong here?</p>
<pre><code>import random
from colorama import Fore, Style
import numpy as np
# Define the actions
COOPERATE = 'cooperate'
DEFECT = 'defect'
# Define the strategies
def always_cooperate(history):
return COOPERATE
def always_defect(history):
return DEFECT
def random_choice_cooperate(history):
return COOPERATE if random.random() < 0.75 else DEFECT
def random_choice_defect(history):
return COOPERATE if random.random() < 0.25 else DEFECT
def random_choice_neutral(history):
return COOPERATE if random.random() < 0.5 else DEFECT
def tit_for_tat(history):
if not history: # If it's the first round, cooperate
return COOPERATE
opponent_last_move = history[-1][1] # Get the opponent's last move
return opponent_last_move # Mimic the opponent's last move
def tat_for_tit(history):
if not history: # If it's the first round, defect
return DEFECT
opponent_last_move = history[-1][1] # Get the opponent's last move
return opponent_last_move # Mimic the opponent's last move
def tit_for_two_tats(history):
if len(history) < 2: # If it's the first or second round, cooperate
return COOPERATE
opponent_last_two_moves = history[-2:] # Get the opponent's last two moves
if all(move[1] == DEFECT for move in opponent_last_two_moves): # If the opponent defected in the last two rounds
return DEFECT
return COOPERATE
# Define the payoff matrix
payoff_matrix = {
(COOPERATE, COOPERATE): (3, 3),
(COOPERATE, DEFECT): (0, 5),
(DEFECT, COOPERATE): (5, 0),
(DEFECT, DEFECT): (1, 1)
}
# Define the players
players = [always_cooperate, always_defect, random_choice_defect, tit_for_tat, tit_for_two_tats, random_choice_cooperate, tat_for_tit, random_choice_neutral]
# Assign a unique color to each player
player_colors = {
'always_cooperate': Fore.GREEN,
'always_defect': Fore.RED,
'tit_for_tat': Fore.BLUE,
'random_choice_cooperate': Fore.MAGENTA,
'random_choice_defect': Fore.LIGHTRED_EX,
'tat_for_tit': Fore.LIGHTYELLOW_EX,
'random_choice_neutral': Fore.WHITE,
'tit_for_two_tats': Fore.LIGHTBLACK_EX,
}
def tournament(players, rounds=100):
total_scores = {player.__name__: 0 for player in players}
for i in range(len(players)):
for j in range(i+1, len(players)):
player1 = players[i]
player2 = players[j]
history1 = []
history2 = []
match_scores = {player1.__name__: 0, player2.__name__: 0}
# print(f"\n{player1.__name__} vs {player2.__name__}")
for round in range(rounds):
move1 = player1(history1)
move2 = player2(history2)
score1, score2 = payoff_matrix[(move1, move2)]
match_scores[player1.__name__] += score1
match_scores[player2.__name__] += score2
total_scores[player1.__name__] += score1
total_scores[player2.__name__] += score2
history1.append((move1, move2))
history2.append((move2, move1))
# print(f"{player1.__name__} moves: {''.join([Fore.GREEN+'O'+Style.RESET_ALL if move[0]==COOPERATE else Fore.RED+'X'+Style.RESET_ALL for move in history1])}")
# print(f"{player2.__name__} moves: {''.join([Fore.GREEN+'O'+Style.RESET_ALL if move[0]==COOPERATE else Fore.RED+'X'+Style.RESET_ALL for move in history2])}")
# print(f"Match scores: {player1.__name__} {match_scores[player1.__name__]}, {player2.__name__} {match_scores[player2.__name__]}")
sorted_scores = sorted(total_scores.items(), key=lambda item: item[1], reverse=True)
return sorted_scores
# Run the tournament
# for player, score in tournament(players):
# print(f'\nFinal score: {player}: {score}')
num_tournaments = 1000
results = {player.__name__: [] for player in players}
for _ in range(num_tournaments):
for player, score in tournament(players):
results[player].append(score)
# Calculate the median score for each player and store them in a list of tuples
medians = [(player, np.median(scores)) for player, scores in results.items()]
# Sort the list of tuples based on the median score
sorted_medians = sorted(medians, key=lambda x: x[1])
num_players = len(sorted_medians)
# Print the sorted median scores with gradient color
for i, (player, median_score) in enumerate(sorted_medians):
# Calculate the ratio of green and red based on the player's position
green_ratio = i / (num_players - 1)
red_ratio = 1 - green_ratio
# Calculate the green and red components of the color
green = int(green_ratio * 255)
red = int(red_ratio * 255)
# Create the color code
color_code = f'\033[38;2;{red};{green};0m'
player_color = player_colors.get(player, Fore.RESET)
# Print the player name and median score with the color
print(f'{player_color}{player}: {median_score} coins')
</code></pre>
<p>The code itself creates the matchups for 100 rounds each. It then runs 1000 tournaments to get the median score over many iterations.</p>
<p>Here is the output of the results:</p>
<pre><code>always_cooperate: 1347.024 coins
random_choice_cooperate: 1535.651 coins
tit_for_two_tats: 1561.442 coins
tit_for_tat: 1609.444 coins
tat_for_tit: 1619.43 coins
random_choice_neutral: 1663.855 coins
always_defect: 1711.764 coins
random_choice_defect: 1726.992 coins
</code></pre>
<p>In <a href="https://www.youtube.com/watch?v=mScpHTIi-kM" rel="nofollow noreferrer">the latest Veritasium video</a> the dilemma uses the same reward matrix, yet <em>Tit for Tat</em> is presented as the most effective strategy. I cannot replicate that result, hence this question.</p>
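<p>For reference, a minimal head-to-head under the same payoff matrix (just the mechanics, not the full tournament) shows how large the gap is when a defector meets an unconditional cooperator; in a mixed population these lopsided matches inflate the defectors' totals, which may be part of what the tournament is measuring:</p>

```python
# Same payoff matrix as above, reduced to single letters: C = cooperate, D = defect.
payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

rounds = 100
coop_total = defect_total = 0
for _ in range(rounds):
    s1, s2 = payoff[("C", "D")]  # always_cooperate (row) vs always_defect (column)
    coop_total += s1
    defect_total += s2

print(coop_total, defect_total)  # 0 500
```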
|
<python><game-theory>
|
2024-01-07 15:21:00
| 1
| 1,010
|
danielassayag
|
77,773,403
| 6,637,269
|
Find a concave hull of an array of integer points on a grid
|
<p>I have arrays of thousands of points on an integer grid. I'm looking for a fast way of finding a concave hull for the points. Points are adjacent if they are 1 unit away in any cardinal direction. I am ambivalent about whether the boundary should move diagonally (i.e. cutting corners as shown below from <code>[391,4036]</code> -> <code>[392,4037]</code>), and instead prioritise speed of calculation. There are no interior holes. I'm working in python.</p>
<p><a href="https://i.sstatic.net/ELCgs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ELCgs.png" alt="an example case using a smaller set of points" /></a></p>
<p>My initial thought was to loop through each point, look up whether its cardinal neighbours are also in the set of points, and if one of them is not, mark the point as being on the bounds of the shape. I would then need some other algorithm for ordering those points to get the (clockwise or anti-clockwise) boundary.</p>
<p>This would not scale well with the number of points, as for each point I need to check its four cardinal directions against every other point in the set for membership.</p>
<p>Python code for finding boundary points:</p>
<pre><code>boundary_pixels = [
(row_idx, col_idx)
for (row_idx, col_idx) in full_pixels
if not (
((row_idx+1, col_idx) in full_pixels) &
((row_idx-1, col_idx) in full_pixels) &
((row_idx, col_idx+1) in full_pixels) &
((row_idx, col_idx-1) in full_pixels)
)
]
</code></pre>
<p>I know that finding concave hulls is a difficult problem, but is there a solution for when the points are evenly spaced on a grid?</p>
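<p>As a sketch (assuming the points are stored in a <code>set</code>, so each membership test is O(1), and that the boundary forms a single connected ring), the boundary cells can be collected in one pass and then ordered by greedily walking to an unvisited 8-neighbour; the greedy walk can stall on shapes with one-cell-wide bridges, in which case a proper Moore boundary trace would be needed:</p>

```python
def ordered_boundary(pixels):
    """pixels: set of (row, col) grid cells; returns boundary cells in walk order."""
    card = ((1, 0), (-1, 0), (0, 1), (0, -1))
    # A cell is on the boundary if any cardinal neighbour is missing.
    boundary = {p for p in pixels
                if not all((p[0] + dr, p[1] + dc) in pixels for dr, dc in card)}
    neigh8 = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    path = [min(boundary)]  # deterministic starting cell
    seen = {path[0]}
    while True:
        r, c = path[-1]
        step = next((q for q in ((r + dr, c + dc) for dr, dc in neigh8)
                     if q in boundary and q not in seen), None)
        if step is None:
            break
        path.append(step)
        seen.add(step)
    return path

blob = {(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1), (1, 2)}
path = ordered_boundary(blob)
print(path)
```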
|
<python><matplotlib><polygon><shapely><boundary>
|
2024-01-07 14:16:24
| 1
| 870
|
Josh Kidd
|
77,773,293
| 12,178,630
|
Python dictionary sorting weird behaviour
|
<p>I would like to sort a python dictionary by multiple criteria depending on a condition on its values, like:</p>
<pre><code>d = {'27': 'good morning', '14': 'morning', '23': 'good afternoon', '25': 'amazing'}
priority_1 = 'good'
priority_2 = 'morning'
priority_3 = 'afternoon'
new_d = sorted(d.items(), key=lambda c: [(priority_1 and priority_2) in c[1], priority_3 in c[1]])
</code></pre>
<p>this gives:</p>
<pre><code>[('25', 'amazing'),
('23', 'good afternoon'),
('27', 'good morning'),
('14', 'morning')]
</code></pre>
<p>While I expected it to return:</p>
<pre><code> [('25', 'amazing'),
('23', 'good afternoon'),
('14', 'morning'),
('27', 'good morning')]
</code></pre>
<p>More interestingly, I thought that writing <code>priority_1 and priority_2 in c[1]</code> is no different from <code>priority_2 and priority_1 in c[1]</code>, but it turns out I am mistaken, as when I change the order to <code>priority_2 and priority_1 in c[1]</code> I get a different result.</p>
<p>I could not find an answer in the docs regarding the effect of the order of operands when used with logical operators.</p>
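<p>The behaviour reduces to what <code>and</code> evaluates to, which a minimal snippet makes visible:</p>

```python
# `and` returns one of its operands, not a pair of booleans: since "good"
# is truthy, ("good" and "morning") is simply the string "morning".
print("good" and "morning")                        # morning
print(("good" and "morning") in "good afternoon")  # False: only tests "morning"
print(("morning" and "good") in "good afternoon")  # True: only tests "good"

# Requiring both substrings needs two separate membership tests:
s = "good morning"
print(("good" in s) and ("morning" in s))          # True
```

<p>So the key actually being sorted on above is <code>[priority_2 in c[1], priority_3 in c[1]]</code>, which matches the output shown.</p>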
|
<python><sorting>
|
2024-01-07 13:48:06
| 1
| 314
|
Josh
|
77,773,218
| 1,169,091
|
InvalidArgumentError: cannot compute MatMul as input #1(zero-based) was expected to be a double tensor but is a float tensor [Op:MatMul] name:
|
<p>The data type of input_data is an array of numpy.float64, but the code still fails inside the TensorFlow library, complaining that something is not a 'double'. I'm not sure how to remedy this.</p>
<pre><code>import tensorflow as tf
import numpy as np
input_data = np.random.uniform(low=0.0, high=1.0, size=100)
print("type(input_data):", type(input_data), "type(input_data[0]):", type(input_data[0]))
class ArtificialNeuron(tf.Module):
def __init__(self):
self.w = tf.Variable(tf.random.normal(shape=(1, 1)))
self.b = tf.Variable(tf.zeros(shape=(1,)))
def __call__(self, x):
return tf.sigmoid(tf.matmul(x, self.w) + self.b)
neuron = ArtificialNeuron()
# Fails here: InvalidArgumentError: cannot compute MatMul as input #1(zero-based) was expected to be a double tensor but is a float tensor [Op:MatMul] name:
output_data = neuron(input_data)
</code></pre>
|
<python><numpy><tensorflow><types>
|
2024-01-07 13:23:27
| 1
| 4,741
|
nicomp
|
77,772,928
| 4,451,315
|
Julian date of timezone-aware timestamp?
|
<p>Should the Julian date change depending on the original timestamp's time zone?</p>
<p>For example, do <code>2020-01-01 +00:00</code> and <code>2020-01-01 +05:45</code> have the same Julian Date?</p>
<p>pandas suggests they do:</p>
<pre class="lang-py prettyprint-override"><code>In [1]: import pandas as pd
In [2]: pd.Timestamp('2020-01-01', tz='Europe/London').to_julian_date()
Out[2]: 2458849.5
In [3]: pd.Timestamp('2020-01-01', tz='Asia/Kathmandu').to_julian_date()
Out[3]: 2458849.5
</code></pre>
<p>Is this a bug in pandas?</p>
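<p>A quick standard-library cross-check (assuming the Julian date is defined on the UTC instant, and ignoring the UTC/TT distinction) suggests the two timestamps should <em>not</em> share a Julian date:</p>

```python
from datetime import datetime, timedelta, timezone

JD_J2000 = 2451545.0  # Julian date of the J2000 epoch, 2000-01-01 12:00 UTC
J2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)

def to_julian_date(ts):
    # Convert to UTC first, then count days from the reference epoch.
    return JD_J2000 + (ts.astimezone(timezone.utc) - J2000).total_seconds() / 86400

london = datetime(2020, 1, 1, tzinfo=timezone.utc)  # Europe/London is UTC+0 on 1 Jan
kathmandu = datetime(2020, 1, 1, tzinfo=timezone(timedelta(hours=5, minutes=45)))

print(to_julian_date(london))     # 2458849.5
print(to_julian_date(kathmandu))  # about 2458849.26, i.e. 5h45m earlier
```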
|
<python><pandas><julian-date>
|
2024-01-07 11:50:22
| 1
| 11,062
|
ignoring_gravity
|
77,772,670
| 8,110,961
|
How to parse JSON response that includes multiple JSON objects
|
<p><a href="https://stackoverflow.com/questions/55575374/how-to-parse-json-response-that-includes-multiple-objects">How to parse JSON response that includes multiple objects</a>
With reference to the above link, I am also getting multiple JSON objects in a response, but unlike the response in that link (a single array of multiple JSON objects), mine come without the enclosing square brackets or commas between the objects.
<p>The response data is structured as follows:</p>
<pre class="lang-json prettyprint-override"><code>{
"self": "https://example1.com",
"key": "keyOne",
"name": "nameOne",
"emailAddress": "mailOne",
"avatarUrls": {
"48x48": "https://test.com/secure/useravatar?avatarId=1",
"24x24": "https://test.com/secure/useravatar?size=small&avatarId=1",
"16x16": "https://test.com/secure/useravatar?size=xsmall&avatarId=1",
"32x32": "https://test.com/secure/useravatar?size=medium&avatarId=1"
},
"displayName": "displayNameOne",
"active": true,
"timeZone": "Europe",
"locale": "en_UK"
}
{
"self": "https://example2.com",
"key": "keyTwo",
"name": "nameTwo",
"emailAddress": "mailTwo",
"avatarUrls": {
"48x48": "https://test.com/secure/useravatar?avatarId=2",
"24x24": "https://test.com/secure/useravatar?size=small&avatarId=2",
"16x16": "https://test.com/secure/useravatar?size=xsmall&avatarId=2",
"32x32": "https://test.com/secure/useravatar?size=medium&avatarId=2"
},
"displayName": "displayNameTwo",
"active": false,
"timeZone": "Europe",
"locale": "en_US"
}
</code></pre>
<p>I tried looping through the REST API response and thought of enclosing the response in square brackets, but everything failed.</p>
<p><a href="https://stackoverflow.com/questions/77363207/parsing-response-with-multiple-json-objects">Parsing response with multiple json objects</a>
I checked the above link too, but there each JSON element sits on a single line, which is not the case for me. How can I address this in Python?</p>
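<p>For what it's worth, the standard library can walk a stream of concatenated JSON documents without any bracket surgery: <code>json.JSONDecoder.raw_decode</code> parses one document and reports where it ends, so it can be called in a loop (a sketch with a shortened version of the payload above):</p>

```python
import json

payload = '''{"key": "keyOne", "active": true}
{"key": "keyTwo", "active": false}'''

decoder = json.JSONDecoder()
objects, pos = [], 0
while pos < len(payload):
    obj, end = decoder.raw_decode(payload, pos)
    objects.append(obj)
    pos = end
    while pos < len(payload) and payload[pos].isspace():
        pos += 1  # skip the whitespace separating the documents

print([o["key"] for o in objects])  # ['keyOne', 'keyTwo']
```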
|
<python><json><rest>
|
2024-01-07 10:18:33
| 3
| 385
|
Jack
|
77,772,628
| 719,276
|
Indexing a batch of images using PyTorch tensor from an index image
|
<p>Suppose I have a batch of images <code>M</code> in the form of a torch tensor <code>(B, W, H)</code>, and an image <code>I</code> of size <code>(W, H)</code> whose pixels are indices.</p>
<p>I want to get an image <code>(W, H)</code> where each pixel come from the corresponding image in the image batch (following the indexing of <code>I</code>).</p>
<p><strong>Example</strong></p>
<p>Given <code>M</code> of shape <code>(3, 4, 8)</code>:</p>
<pre><code>tensor([[[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.]],
[[-1., -1., -1., -1., -1., -1., -1., -1.],
[-1., -1., -1., -1., -1., -1., -1., -1.],
[-1., -1., -1., -1., -1., -1., -1., -1.],
[-1., -1., -1., -1., -1., -1., -1., -1.]],
[[-2., -2., -2., -2., -2., -2., -2., -2.],
[-2., -2., -2., -2., -2., -2., -2., -2.],
[-2., -2., -2., -2., -2., -2., -2., -2.],
[-2., -2., -2., -2., -2., -2., -2., -2.]]])
</code></pre>
<p>and <code>I</code> of shape <code>(4, 8)</code>:</p>
<pre><code>tensor([[2, 0, 2, 0, 1, 0, 1, 0],
[2, 2, 1, 0, 0, 2, 1, 0],
[2, 0, 0, 2, 1, 1, 0, 0],
[0, 1, 0, 0, 2, 0, 2, 1]], dtype=torch.int32)
</code></pre>
<p>the resulting image would be:</p>
<pre><code>tensor([[-2., 0., -2., 0., -1., 0., -1., 0.],
[-2., -2., -1., 0., 0., -2., -1., 0.],
[-2., 0., 0., -2., -1., -1., 0., 0.],
[ 0., -1., 0., 0., -2., 0., -2., -1.]])
</code></pre>
<p><strong>Note 1</strong></p>
<p>I don't care about the ordering of the <code>M</code> dimensions, it could be <code>(W, H, B)</code> as well if it provides an easier solution.</p>
<p><strong>Note 2</strong></p>
<p>I am also interested in a NumPy solution.</p>
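<p>Spelled out, the wanted operation is <code>out[w][h] = M[I[w][h]][w][h]</code>; a plain-Python rendering of those semantics (smaller shapes than above, same idea):</p>

```python
# out[w][h] = M[I[w][h]][w][h]: pick, per pixel, the image named by I.
B, W, H = 3, 2, 4
M = [[[-b for _ in range(H)] for _ in range(W)] for b in range(B)]  # M[b] is all -b
I = [[2, 0, 1, 0],
     [1, 2, 0, 2]]
out = [[M[I[w][h]][w][h] for h in range(H)] for w in range(W)]
print(out)  # [[-2, 0, -1, 0], [-1, -2, 0, -2]]
```

<p>With NumPy this should be <code>np.take_along_axis(M, I[None, ...], axis=0)[0]</code>, and with torch <code>M.gather(0, I.long().unsqueeze(0))[0]</code>; both are untested here and both assume <code>I</code> gains a leading length-1 batch axis.</p>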
|
<python><numpy><indexing><torch>
|
2024-01-07 10:05:25
| 1
| 11,833
|
arthur.sw
|
77,772,560
| 20,851,944
|
Merge value lists of two dict in python
|
<p>I have several dictionaries with equal keys. The values are lists. Now I would like to merge the value lists.</p>
<p>Input, e.g.:</p>
<pre><code>dict_1 ={"a":["1"], "b":["3"]}
dict_2 = {"a":["2"], "b":["3"]}
</code></pre>
<p>Required output:</p>
<pre><code>new_dict = {'a':["1","2"], 'b':["3","3"]}
</code></pre>
<p>What is the fastest, pythonic way to get this result?</p>
<p>I found this, but this doesn’t fulfill my needs:</p>
<pre><code>merged_dic = {**dict_1, **dict_2}
</code></pre>
<p>and others, but nothing solve my wish.
Is there a built in function without loop over each element, because I have a lot of dictionaries and more complex as my example above?
Thanks for any help!</p>
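<p>For comparison, a <code>collections.defaultdict</code> sketch handles any number of dictionaries in a couple of lines; as far as I know there is no loop-free built-in, since any merge has to visit every key anyway:</p>

```python
from collections import defaultdict

dict_1 = {"a": ["1"], "b": ["3"]}
dict_2 = {"a": ["2"], "b": ["3"]}

merged = defaultdict(list)
for d in (dict_1, dict_2):  # extend the tuple for any number of dicts
    for key, values in d.items():
        merged[key].extend(values)

print(dict(merged))  # {'a': ['1', '2'], 'b': ['3', '3']}
```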
|
<python><python-3.x><list><dictionary>
|
2024-01-07 09:42:13
| 2
| 316
|
Paul-ET
|
77,772,472
| 16,383,578
|
Visual Studio Code doesn't recognize cv2 members
|
<p>I don't know if this is on topic but I have seen some similar questions on this site.</p>
<p>The problem is simple: recently I found out that Visual Studio Code cannot access any members of <code>cv2</code> at all. The attribute names are all white, hovering over them shows "(function) {name}: Any", and it doesn't autocomplete.</p>
<p>There is no red wave signalling that <code>cv2</code> import resolve failure. Hovering over cv2 show: "OpenCV Python binary extension loader".</p>
<p>Screenshot:</p>
<p><a href="https://i.sstatic.net/lPtlS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lPtlS.png" alt="enter image description here" /></a></p>
<p>The code is working correctly and this question isn't about the code.</p>
<p>I have tried:</p>
<p><code>from cv2 import cv2</code>, the second <code>cv2</code> is white.</p>
<p>Add <code>"python.linting.pylintArgs": ["--generate-members"]</code> to <code>settings.json</code>: the texts are darker, and hovering over it shows "Unknown Configuration Setting".</p>
<p>Tried <code>Python: Select Linter</code> in command palette: no such option was found.</p>
<p>I have installed many VS Code extensions. I have already selected the interpreter version to be same as the main interpreter which has all the external libraries installed.</p>
<p>How to fix this bug?</p>
<hr />
<p>Previously Pylint wasn't listed as a standalone extension, but I have installed many Python extension packs that no doubt have included it. I have run <code>pylint --generate-rcfile > ./.pylintrc</code> in the directory where my scripts are kept, and installed Pylint as a standalone extension, and restarted the editor. The problem persists.</p>
<hr />
<p>This bug is truly resilient, I have just resorted to drastic measures.</p>
<p>I have just uninstalled Visual Studio Code entirely via "C:\Program Files\Microsoft VS Code\unins000.exe", then I have deleted everything inside C:\Users\Xeni\.vscode where the extensions are kept, effectively uninstalling all extensions. I have also deleted everything inside C:\Users\Xeni\AppData\Roaming\Code\ except for the C:\Users\Xeni\AppData\Roaming\Code\User\History folder and C:\Users\Xeni\AppData\Roaming\Code\User\Settings.json.</p>
<p>Having done a complete reset, removing all possible files that could have caused the bug. I then downloaded the latest Visual Studio Code installer and installed it.</p>
<p>And then I have just installed 3 extensions: Black Formatter v2023.6.0 from Microsoft, Python v2023.22.1 from Microsoft, Sourcery v1.15.0, Pylance v2023.12.1 seems to be preinstalled.</p>
<p>Anyway these four extensions are the only ones in the current installation, but after setting things up and reloading Visual Studio Code, I opened the same file, and the same bug is still there...</p>
<p>Total file removal and reinstallation didn't fix the problem. And I can't think of another solution.</p>
|
<python><visual-studio-code>
|
2024-01-07 09:08:43
| 2
| 3,930
|
Ξένη Γήινος
|
77,772,327
| 10,618,857
|
TypeError when trying to display transformed images PyTorch
|
<p>I have some trouble defining the transforms for images using PyTorch.
Here are the transforms I need:</p>
<pre><code>mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
train_transform = transforms.Compose([
transforms.RandomHorizontalFlip(p=0.5),
transforms.Resize((256, 256), interpolation=torchvision.transforms.InterpolationMode.BILINEAR),
transforms.CenterCrop(size=[224,224]),
transforms.Normalize(mean, std),
transforms.PILToTensor()
])
test_transform = transforms.Compose([
transforms.Resize((256, 256), interpolation=torchvision.transforms.InterpolationMode.BILINEAR),
transforms.CenterCrop(size=[224,224]),
transforms.Normalize(mean, std),
transforms.PILToTensor(),
])
</code></pre>
<p>Then, I create the loaders for the images of the dataset:</p>
<pre><code>BATCH_SIZE = 32
trainset = torchvision.datasets.ImageFolder(root='CVPR2023_project_2_and_3_data/train/', loader=open_image)
trainset_classes = trainset.classes.copy()
subset_size = int(0.15*len(trainset))
validset = torchvision.datasets.ImageFolder(root='CVPR2023_project_2_and_3_data/train/', loader=open_image)
indices = torch.randperm(len(trainset))
valid_indices = indices[:subset_size]
train_indices = indices[subset_size:]
trainset = Subset(trainset, train_indices)
validset = Subset(validset, valid_indices)
# Apply transformations only to the training set
trainset.dataset.transform = train_transform
# Apply transformations to the validation set
validset.dataset.transform = test_transform
trainloader = torch.utils.data.DataLoader(trainset, batch_size=BATCH_SIZE, shuffle=True, pin_memory=True) # batch size of 1 because we have to crop in order to get all images to same size (64x64), also see pin_memory optin
validloader = torch.utils.data.DataLoader(validset, batch_size=BATCH_SIZE, shuffle=False, pin_memory=True)
testset = torchvision.datasets.ImageFolder(root='CVPR2023_project_2_and_3_data/test/', transform=test_transform, loader=Image.open)
testloader = torch.utils.data.DataLoader(testset, batch_size=BATCH_SIZE, shuffle=False, pin_memory=True)
print(f'entire train folder: {len(trainset)}, entire test folder: {len(testset)}, splitted trainset: {len(trainset)}, splitted validset: {len(validset)}')
</code></pre>
<p>Then I load a pre-trained network and freeze all the layers but the last one:</p>
<pre><code>model = torch.hub.load('pytorch/vision:v0.10.0', 'alexnet', pretrained=True)
model.classifier[6] = torch.nn.Linear(in_features=4096, out_features=15, bias=True) #adapting to 15 classes
for param in model.parameters():
param.requires_grad = False
for param in model.classifier[6].parameters():
param.requires_grad = True
</code></pre>
<p>Then, I define a function for showing an image and I try to print one:</p>
<pre><code>def imshow(img):
npimg = img.numpy()
plt.axis("off")
plt.imshow(np.transpose(npimg, (1, 2, 0)))
images, labels = next(iter(trainloader)) # <-- error
print(images[0])
</code></pre>
<p>This last piece of code does not work and the program crashes with the following error:</p>
<pre><code>img should be Tensor Image. Got <class 'PIL.Image.Image'>
</code></pre>
<p>I have already tried to change the transforms' order but I get the inverse error, i.e.</p>
<pre><code>img should be <class 'PIL.Image.Image'> Image. Got Tensor
</code></pre>
<p>Can anyone explain how I should solve this error?</p>
<p>Thank you in advance for your patience</p>
|
<python><pytorch><pre-trained-model>
|
2024-01-07 07:59:31
| 1
| 945
|
Eminent Emperor Penguin
|
77,772,223
| 6,562,828
|
django allauth adding uuid field
|
<p>I'm trying to add a field for the user UUID for signup, but after adding <code>AUTH_USER_MODEL = "aihuser.UidUser"</code></p>
<p>I'm getting errors both with and without makemigrations. I deleted all migrations, dropped the database, and created a new one:</p>
<p>If I comment out this line:</p>
<pre><code>AUTH_USER_MODEL = "aihuser.UidUser"
</code></pre>
<p>I get this error:</p>
<pre><code>hook : register_rich_text_handlers
Exception in thread django-main-thread:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 134, in inner_run
self.check(display_num_errors=True)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 546, in check
raise SystemCheckError(msg)
django.core.management.base.SystemCheckError: SystemCheckError: System check identified some issues:
ERRORS:
aihuser.AihUser.groups: (fields.E304) Reverse accessor 'Group.user_set' for 'aihuser.AihUser.groups' clashes with reverse accessor for 'auth.User.groups'.
HINT: Add or change a related_name argument to the definition for 'aihuser.AihUser.groups' or 'auth.User.groups'.
aihuser.AihUser.user_permissions: (fields.E304) Reverse accessor 'Permission.user_set' for 'aihuser.AihUser.user_permissions' clashes with reverse accessor for 'auth.User.user_permissions'.
HINT: Add or change a related_name argument to the definition for 'aihuser.AihUser.user_permissions' or 'auth.User.user_permissions'.
auth.User.groups: (fields.E304) Reverse accessor 'Group.user_set' for 'auth.User.groups' clashes with reverse accessor for 'aihuser.AihUser.groups'.
HINT: Add or change a related_name argument to the definition for 'auth.User.groups' or 'aihuser.AihUser.groups'.
auth.User.user_permissions: (fields.E304) Reverse accessor 'Permission.user_set' for 'auth.User.user_permissions' clashes with reverse accessor for 'aihuser.AihUser.user_permissions'.
HINT: Add or change a related_name argument to the definition for 'auth.User.user_permissions' or 'aihuser.AihUser.user_permissions'.
System check identified 4 issues (0 silenced).
</code></pre>
<p>If I uncomment the line <code>AUTH_USER_MODEL = "aihuser.UidUser"</code>, I get this error:</p>
<pre><code>Wagtail version is : 4.1.9
DEV environment
Watching for file changes with StatReloader
Exception in thread django-main-thread:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/django_filters/utils.py", line 174, in get_field_parts
opts = field.remote_field.model._meta
AttributeError: 'str' object has no attribute '_meta'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 125, in inner_run
autoreload.raise_last_exception()
File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception
raise _exception[1]
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 398, in execute
autoreload.check_errors(django.setup)()
File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/local/lib/python3.8/site-packages/django/apps/registry.py", line 124, in populate
app_config.ready()
File "/usr/local/lib/python3.8/site-packages/wagtail/snippets/apps.py", line 16, in ready
register_deferred_snippets()
File "/usr/local/lib/python3.8/site-packages/wagtail/snippets/models.py", line 142, in register_deferred_snippets
_register_snippet_immediately(model, viewset)
File "/usr/local/lib/python3.8/site-packages/wagtail/snippets/models.py", line 74, in _register_snippet_immediately
from wagtail.snippets.views.snippets import SnippetViewSet
File "/usr/local/lib/python3.8/site-packages/wagtail/snippets/views/snippets.py", line 628, in <module>
class SnippetHistoryReportFilterSet(WagtailFilterSet):
File "/usr/local/lib/python3.8/site-packages/django_filters/filterset.py", line 62, in __new__
new_class.base_filters = new_class.get_filters()
File "/usr/local/lib/python3.8/site-packages/django_filters/filterset.py", line 325, in get_filters
field = get_model_field(cls._meta.model, field_name)
File "/usr/local/lib/python3.8/site-packages/django_filters/utils.py", line 144, in get_model_field
fields = get_field_parts(model, field_name)
File "/usr/local/lib/python3.8/site-packages/django_filters/utils.py", line 179, in get_field_parts
raise RuntimeError(
RuntimeError: Unable to resolve relationship `user` for `wagtailcore.ModelLogEntry`. Django is most likely not initialized, and its apps registry not populated. Ensure Django has finished setup before loading `FilterSet`s.
</code></pre>
<p>In my settings, I have aihuser in INSTALLED_APPS:</p>
<h1>wg_aihalbum_04/wg_aihalbum_04/settings/base.py :</h1>
<pre><code>INSTALLED_APPS = [
# default apps
'wagtail_modeltranslation',
'wagtail_modeltranslation.makemigrations',
'wagtail_modeltranslation.migrate',
'colorfield',
'wagtail.contrib.forms',
'wagtail.contrib.redirects',
'wagtail.contrib.settings',
'wagtail.embeds',
'wagtail.sites',
'wagtail.users',
'wagtail.snippets',
'wagtail.documents',
'wagtail.images',
'wagtail.search',
'wagtail.admin',
'wagtail.locales',
'wagtail.contrib.routable_page',
'wagtail.contrib.sitemaps',
'captcha',
'wagtailcaptcha',
'wagtail',
'modelcluster',
'taggit',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sitemaps',
'wagtail.contrib.modeladmin',
'rest_framework',
# default apps
'apps.home',
'apps.search',
'apps.front',
'apps.site_settings',
'apps.menus',
'apps.flex',
'apps.newsletter',
# Authentication
'allauth',
'allauth.account',
'allauth.socialaccount',
# AIH APPS
'apps.aihuser',
# 'apps.aihalbum',
# 'apps.aihprofile',
]
MIDDLEWARE = [
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
'wagtail.contrib.redirects.middleware.RedirectMiddleware',
# AIH APPS
'allauth.account.middleware.AccountMiddleware',
]
# Allauth specific settings
# Authentication Backends
AUTHENTICATION_BACKENDS = (
# Needed to login by username in Django admin, regardless of `allauth`
'django.contrib.auth.backends.ModelBackend',
# `allauth` specific authentication methods, such as login by e-mail
'allauth.account.auth_backends.AuthenticationBackend',
)
# AUTH_USER_MODEL = "aihuser.UidUser"
ACCOUNT_AUTHENTICATION_METHOD = 'username_email'
ACCOUNT_EMAIL_REQUIRED = True
LOGIN_URL = '/login/'
LOGIN_REDIRECT_URL = '/'
LOGOUT_REDIRECT_URL = '/'
ACCOUNT_AUTHENTICATION_METHOD = "username_email"
ACCOUNT_CONFIRM_EMAIL_ON_GET = True
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_EMAIL_VERIFICATION = 'optional' # Can be 'optional' or 'none'
ACCOUNT_LOGIN_ON_EMAIL_CONFIRMATION = True
ACCOUNT_LOGOUT_ON_GET = True
ACCOUNT_LOGIN_ON_PASSWORD_RESET = True
ACCOUNT_LOGOUT_REDIRECT_URL = '/login/'
ACCOUNT_PRESERVE_USERNAME_CASING = False
ACCOUNT_SESSION_REMEMBER = True
ACCOUNT_SIGNUP_PASSWORD_ENTER_TWICE = False
ACCOUNT_USERNAME_BLACKLIST = ["admin", "god"]
ACCOUNT_USERNAME_MIN_LENGTH = 2
ACCOUNT_FORMS = {
'signup': 'apps.forms.AihuserSignupForm',
}
</code></pre>
<p>I deleted everything and started the project from the beginning:</p>
<h1>wg_aihalbum_04/apps/aihuser/forms.py :</h1>
<pre><code>from django import forms
from allauth.account.forms import SignupForm
from .models import *
from collections import OrderedDict
import uuid
class AihuserSignupForm(SignupForm):
def save(self, request):
user = super(AihuserSignupForm, self).save(request)
user.uuid = uuid.uuid4() # Generate a new UUID
user.save()
return user
</code></pre>
<h1>wg_aihalbum_04/apps/aihuser/models.py :</h1>
<pre><code>import uuid
from django.db import models
from django.contrib.auth.models import AbstractUser
class AihUser(AbstractUser):
uuid = models.UUIDField(default=uuid.uuid4, editable=False, unique=True)
</code></pre>
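<p>For reference, <code>AUTH_USER_MODEL</code> must be the <code>app_label.ModelName</code> of a concrete model class. Note that the models.py above defines a class named <code>AihUser</code>, while the setting references <code>aihuser.UidUser</code>. Purely as an illustrative fragment (an assumption about intent, not a confirmed fix), a reference matching the class defined above would look like:</p>

```python
# Illustrative settings fragment (assumes the intended model is the
# AihUser class defined in apps/aihuser/models.py above).
# AUTH_USER_MODEL takes the form "app_label.ModelName".
AUTH_USER_MODEL = "aihuser.AihUser"
```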
<h1>pip list :</h1>
<pre><code>Package Version
------------------------ -----------
anyascii 0.3.1
asgiref 3.5.2
backports.zoneinfo 0.2.1
beautifulsoup4 4.11.1
certifi 2022.12.7
cffi 1.16.0
charset-normalizer 2.1.1
cryptography 41.0.7
defusedxml 0.7.1
Django 4.1.4
django-allauth 0.58.2
django-anymail 9.0
django-autoslug 1.9.8
django-colorfield 0.8.0
django-debug-toolbar 4.1.0
django-extensions 3.2.1
django-filter 22.1
django-modelcluster 6.0
django-modeltranslation 0.18.7
django-permissionedforms 0.1
django-recaptcha 3.0.0
django-robots 6.1
django-taggit 3.1.0
django-treebeard 4.5.1
djangorestframework 3.14.0
djongo 1.3.6
dnspython 2.4.2
draftjs-exporter 2.1.7
et-xmlfile 1.1.0
html5lib 1.1
idna 3.4
l18n 2021.3
mongoengine 0.27.0
oauthlib 3.2.2
openpyxl 3.0.10
Pillow 9.3.0
pip 23.3.2
psycopg2-binary 2.9.5
pycparser 2.21
PyJWT 2.8.0
pymongo 3.12.3
python3-openid 3.2.0
pytz 2022.6
requests 2.28.1
requests-oauthlib 1.3.1
setuptools 45.1.0
six 1.16.0
soupsieve 2.3.2.post1
sqlparse 0.2.4
telepath 0.3
typing_extensions 4.4.0
urllib3 1.26.13
wagtail 4.1.9
wagtail-django-recaptcha 1.0
wagtail-modeltranslation 0.14.0
wagtail-robots 0.4.0
webencodings 0.5.1
wheel 0.34.2
Willow 1.4.1
</code></pre>
|
<python><django><wagtail>
|
2024-01-07 07:17:38
| 1
| 705
|
Bynd
|
77,772,133
| 9,092,563
|
Can't get one pod to talk to another pod (ScrapyRT communication in Kubernetes not working)
|
<p>I'm managing a Kubernetes cluster and want <strong>Pod1</strong> to make API calls to <strong>Pod2</strong> and <strong>Pod3</strong> (but <strong>Pod1</strong> - <strong>Pod3</strong> fails!):</p>
<ol>
<li><strong>Pod1</strong>: A Jupyter Notebook environment to test connections.</li>
<li><strong>Pod2</strong>: An Express.js app running on port 8000, exposed on port 80 via the <code>express-backend-service</code>.</li>
<li><strong>Pod3</strong>: A python <a href="https://docs.scrapy.org/en/latest/" rel="nofollow noreferrer">scrapy</a> application with <a href="https://scrapyrt.readthedocs.io/en/stable/" rel="nofollow noreferrer">ScrapyRT</a> listening on port 14805, exposed on port 80 via <code>getcookie-14805-service</code>.</li>
</ol>
<h2>Pod2 Service and Deployment (express-backend-service):</h2>
<h4>express-deployment.yaml:</h4>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: express-app-deployment
spec:
#...
containers:
- name: express-app
image: privaterepo/myproject-backend:latest
ports:
- containerPort: 8000
#...
</code></pre>
<h4>express-service.yaml:</h4>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: express-backend-service
spec:
selector:
app: express-app
ports:
- protocol: TCP
port: 80
targetPort: 8000
type: ClusterIP
</code></pre>
<h2>Pod3 Service and Deployment (getcookie-14805-service):</h2>
<h4>getcookie-14805-deployment.yaml:</h4>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: getcookie-pod
labels:
app: getcookie-pod
spec:
replicas: 1
selector:
matchLabels:
app: getcookie-pod
template:
metadata:
labels:
app: getcookie-pod
spec:
containers:
- name: getcookie-pod
image: privaterepo/myproject-scrapy:latest
imagePullPolicy: Always # Ensure the latest image is always pulled
ports:
- containerPort: 14805
envFrom:
- secretRef:
name: scrapy-env
env:
- name: SCRAPYRT_PORT
value: "14805"
imagePullSecrets:
- name: docker-credentials
</code></pre>
<h4>getcookie-14805-service.yaml:</h4>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: getcookie-service
spec:
selector:
app: getcookie-pod
ports:
- protocol: TCP
port: 80
targetPort: 14805
type: ClusterIP
</code></pre>
<h3>Kubernetes Logs</h3>
<p><code>kubectl get svc</code>:</p>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
express-backend-service ClusterIP 10.99.145.37 <none> 80/TCP 2d
getcookie-service ClusterIP 10.106.14.183 <none> 80/TCP 29m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d
</code></pre>
<p>Pods (<code>kubectl get pods</code>):</p>
<pre><code>$ kubectl get po
NAME READY STATUS RESTARTS AGE
express-app-deployment-6896ff994c-gl4pd 1/1 Running 3 (4h4m ago) 2d
getcookie-pod-59b8575ffc-8dcqb 1/1 Running 0 28m
jupyter-debug-pod 1/1 Running 3 (7h33m ago) 6d9h
</code></pre>
<p>Pod3 logs (<code>getcookie-pod-59b8575ffc-8dcqb</code>):</p>
<pre><code>$ kubectl logs getcookie-pod-59b8575ffc-8dcqb -f
2024-01-08 06:26:07+0000 [-] Log opened.
2024-01-08 06:26:07+0000 [-] Site starting on 14805
2024-01-08 06:26:07+0000 [-] Starting factory <twisted.web.server.Site object at 0x7fc2139a44f0>
2024-01-08 06:26:07+0000 [-] Running with reactor: AsyncioSelectorReactor.
2024-01-08 06:56:24+0000 [-] "127.0.0.1" - - [08/Jan/2024:06:56:24 +0000] "GET / HTTP/1.1" 404 167 "-" "curl/7.68.0"
</code></pre>
<p>That curl log above showed up only after I did an <code>exec</code> into the pod and ran this:</p>
<pre><code>curl http://localhost:14805
</code></pre>
<h3>Update 1/7/2024</h3>
<p>Trying to directly curl the getcookie-service doesn't work:</p>
<p><code>curl http://getcookie-service</code>
Output:</p>
<pre><code>curl: (7) Failed to connect to getcookie-service port 80: Connection refused
</code></pre>
<h2>Issue:</h2>
<p>I can successfully send requests from <strong>Pod1</strong> to <strong>Pod2</strong> using the service name <strong>http://express-backend-service/api</strong>. However, when attempting to connect to <strong>Pod3</strong> using a similar approach, I get a connection error.</p>
<p>Here's the Python code snippet used in <strong>Pod1</strong> to connect to <strong>Pod3</strong>:</p>
<pre><code>def getCookie(userId):
endpoint = 'http://getcookie-14805-service.default/crawl.json?spider_name=getCookie&url=http://images.google.com/'
post = {
"request": {
"url": "http://images.google.com/",
"meta": {'userId': userId},
"callback": "parse",
"dont_filter": "True"
},
"spider_name": "getCookie"
}
try:
response = requests.post(endpoint, json=post).json()
return response['items'][0]['finalItems']
except Exception as e:
print('getCookie error:', e)
return None
user = '6010dga53294c92c981ef3y576'
getCookie(user)
</code></pre>
<p>Error received:</p>
<pre><code>ConnectionError: HTTPConnectionPool(host='getcookie-14805-service.default', port=80):
Max retries exceeded with url: /crawl.json?spider_name=getCookie&url=http://images.google.com/
(Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fde7a6cc7f0>: Failed
to establish a new connection: [Errno 111] Connection refused'))
</code></pre>
<p>Why can I successfully make calls from <strong>Pod1</strong> to <strong>Pod2</strong> but not from <strong>Pod1</strong> to <strong>Pod3</strong>?</p>
<p>I expected the Kubernetes services to facilitate inter-pod communication. Do I need additional configuration?</p>
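<p>One detail worth recording here (an assumption, since the container's start command is not shown above): ScrapyRT binds to <code>localhost</code> by default, which would match the symptoms — <code>curl http://localhost:14805</code> succeeds inside the pod while traffic arriving through the Service gets connection refused. A hypothetical deployment fragment forcing it to listen on all interfaces might look like:</p>

```yaml
# Hypothetical fragment for getcookie-14805-deployment.yaml -- assumes
# ScrapyRT is launched by the container command; scrapyrt's -i flag sets
# the bind address (it defaults to localhost).
containers:
  - name: getcookie-pod
    image: privaterepo/myproject-scrapy:latest
    command: ["scrapyrt", "-i", "0.0.0.0", "-p", "14805"]
    ports:
      - containerPort: 14805
```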
|
<python><node.js><docker><kubernetes><scrapy>
|
2024-01-07 06:31:59
| 0
| 692
|
rom
|
77,772,039
| 5,811,638
|
How can I set up my container Django app in Azure so it doesn't crash?
|
<p>Sometimes, my containerized Django app in Azure crashes.</p>
<p>The error I get is this:</p>
<blockquote>
<p>upstream connect error or disconnect/reset before headers. retried,
and the latest reset reason: remote connection failure, transport
failure reason: delayed connect error: 111</p>
</blockquote>
<p>I haven't been able to figure out what's causing it.</p>
<p>Any help is appreciated.</p>
<p>Thanks.</p>
|
<python><django><azure>
|
2024-01-07 05:48:41
| 1
| 430
|
Jean Paul Ruiz
|
77,772,014
| 485,330
|
Syncing Redis with MariaDB
|
<p>In my spare time, I'm developing an API where successful queries result in a reduction of the user's available query credits. Currently, I'm utilizing Redis 6 for a temporary cache of the balance data (with a 60-second time-to-live) and MariaDB 10.5 for storing the data persistently.</p>
<p>My concern is about the potential for the cached balance in Redis to become unsynchronized with the data in MariaDB. I'm curious if there's a best practice approach for maintaining consistency between these two data storage systems. If aligning them proves to be too complex, I'm considering forgoing the Redis cache for balance information and relying solely on MariaDB.</p>
|
<python><database><caching><redis><mariadb>
|
2024-01-07 05:36:19
| 0
| 704
|
Andre
|
77,771,938
| 1,492,229
|
How to fix this MemoryError: Unable to allocate in Python
|
<p>Here is the code I have</p>
<pre><code>out = out.T.groupby(level=0, sort=False).sum().T
</code></pre>
<p>and it gives this error</p>
<pre><code>MemoryError: Unable to allocate 13.1 GiB for an array with shape (37281, 47002) and data type int64
</code></pre>
<p>I tried this</p>
<pre><code>out = out.T.groupby(level=0, sort=False).sum().astype(np.int8).T
</code></pre>
<p>but I still get the same error.</p>
<p>Any idea how to fix it?</p>
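<p>For a sketch of the dtype-timing issue on a toy frame (tiny data, not the real 37281 × 47002 array): <code>.astype(np.int8)</code> placed after <code>.sum()</code> runs too late, because the int64 intermediate has already been allocated by then; the cast has to happen before the groupby. Note pandas may still upcast the summed result to int64 internally, so for data this size sparse dtypes or chunked processing may also be needed:</p>

```python
import numpy as np
import pandas as pd

# Toy stand-in for `out`: duplicate column labels that should be summed.
out = pd.DataFrame([[1, 2, 3, 4],
                    [5, 6, 7, 8]],
                   columns=["a", "a", "b", "b"])

# Downcast BEFORE transposing and grouping, not after .sum():
small = out.astype(np.int16)
result = small.T.groupby(level=0, sort=False).sum().T
```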
|
<python><dataframe><memory>
|
2024-01-07 04:48:20
| 2
| 8,150
|
asmgx
|
77,771,936
| 4,246,716
|
aligning multiple labels below xaxis using matplotlib at equi-distances
|
<p>I am trying to add a few labels below the x-axis using matplotlib. I have 5-6 labels which I want to plot below the x-axis. The text of these labels varies, so a single fixed setting for rendering them is not possible. A minimal example is given below.</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# Generate sample data
frame_width = 1920
frame_height =1080
##plotting function begins
fig, ax = plt.subplots(1,1,figsize=(frame_width/100,frame_height/100))
ax = plt.gca()
fig = plt.gcf()
fig_width = fig.get_figwidth() * fig.dpi
txtcolor='White'
text_font_size=12
textfont='Arial'
# Show only the x-axis
ax.set_xlim(1, 20)
txt1="ATTENTION"
a1 = plt.figtext(0.15, 0.070, txt1, ha='left',va='center',weight='bold',c=txtcolor,family=textfont ,fontsize=text_font_size, bbox=dict(facecolor='g', edgecolor='g'))
txt1_width = a1.get_window_extent().width / fig_width
txt2_x = txt1_width + 0.15 + 0.02
txt2="CONVICTION"
a2=plt.figtext(txt2_x, 0.070, txt2, ha='left',va='center',weight='bold',c=txtcolor,family=textfont ,fontsize=text_font_size, bbox=dict(facecolor='c', edgecolor='c'))
txt2_width = a2.get_window_extent().width / fig_width
txt3_x = txt2_width + 0.15 + 0.03 + txt1_width
txt3="INTEREST IN SPORTS"
a3=plt.figtext(txt3_x, 0.070, txt3, ha='left',va='center',weight='bold',c=txtcolor,family=textfont ,fontsize=text_font_size, bbox=dict(facecolor='m', edgecolor='m'))
txt3_width = a3.get_window_extent().width / fig_width
txt4_x = txt2_width + 0.15 + 0.03 + txt1_width + txt3_width
plt.show()
</code></pre>
<p>I am trying to take the width of the text and add padding so that the next text does not overlap. However, as the text varies in length, I have to keep adjusting the settings to avoid overlap or overly large gaps between the labels. How can I render the labels dynamically so that they look uniformly spaced irrespective of the length of the text?</p>
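<p>A sketch of one approach (assuming matplotlib's annotation chaining, where an existing annotation can serve as the coordinate system for the next one via <code>xycoords=</code>): each label is anchored to the lower-right corner of the previous label's box plus a fixed point offset, so the gap stays uniform regardless of text width:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, for illustration only
import matplotlib.pyplot as plt

labels = ["ATTENTION", "CONVICTION", "INTEREST IN SPORTS"]
colors = ["g", "c", "m"]

fig, ax = plt.subplots(figsize=(12, 3))
ax.set_xlim(1, 20)

anns = []
prev = None
for text, color in zip(labels, colors):
    if prev is None:
        # First label: positioned in figure-fraction coordinates.
        ann = ax.annotate(text, xy=(0.15, 0.07), xycoords="figure fraction",
                          weight="bold", color="white",
                          bbox=dict(facecolor=color, edgecolor=color))
    else:
        # Later labels: xycoords=prev makes xy=(1, 0) the lower-right
        # corner of the PREVIOUS label's box; xytext adds a fixed gap
        # in points, so spacing no longer depends on text width.
        ann = ax.annotate(text, xy=(1, 0), xycoords=prev,
                          xytext=(12, 0), textcoords="offset points",
                          weight="bold", color="white",
                          bbox=dict(facecolor=color, edgecolor=color))
    anns.append(ann)
    prev = ann
```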
|
<python><matplotlib>
|
2024-01-07 04:46:27
| 1
| 3,045
|
Apricot
|
77,771,824
| 1,492,229
|
Why does Python ignore the sign in the column name?
|
<p>I have a dataframe of text that looks like this</p>
<pre><code>RepID, Txt
1, +83 -193 -380 +55 +901
2, -94 +44 +2892 -60
3, +7010 -3840 +3993
</code></pre>
<p>Although the Txt field contains values like +282 and -829, these are string values, not numeric.</p>
<p>The problem is that when I use Bag of words function</p>
<pre><code>def BOW(df):
CountVec = CountVectorizer() # to use only bigrams ngram_range=(2,2)
Count_data = CountVec.fit_transform(df)
Count_data = Count_data.astype(np.uint8)
cv_dataframe=pd.DataFrame(Count_data.toarray(), columns=CountVec.get_feature_names_out(), index=df.index) # <- HERE
return cv_dataframe.astype(np.uint8)
</code></pre>
<p>I get the resulting columns without any + or - sign.</p>
<p>the outcome is</p>
<pre><code>RepID 83 193 380 55 ...
1 1 1 1 1
2 0 0 0 0
</code></pre>
<p>it should be</p>
<pre><code>RepID +83 -193 -380 +55 ...
1 1 1 1 1
2 0 0 0 0
</code></pre>
<p>Why is that, and how can I fix it?</p>
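<p>For reference, CountVectorizer's default <code>token_pattern</code> is <code>r"(?u)\b\w\w+\b"</code>, which matches word characters only, so the leading + and - are dropped during tokenization, before any counting happens. A plain-<code>re</code> sketch of the difference (with sklearn, the equivalent change would be passing a custom pattern, e.g. <code>CountVectorizer(token_pattern=r"[+-]\d+")</code>):</p>

```python
import re

text = "+83 -193 -380 +55 +901"

# CountVectorizer's default pattern: word characters only -> signs are lost.
default_tokens = re.findall(r"(?u)\b\w\w+\b", text)

# A signed pattern keeps the leading + or - as part of each token.
signed_tokens = re.findall(r"[+-]\d+", text)
```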
|
<python><dataframe><nlp>
|
2024-01-07 03:34:18
| 3
| 8,150
|
asmgx
|
77,771,673
| 1,492,229
|
Merging DataFrame Columns in Python
|
<p>I have a special dataframe called df</p>
<p>here is what it looks like:</p>
<pre><code>RepID +Col01 +Col02 +Col03 -Col01 +Col04 +Col05 -Col03 -Col04 +Col06 -Col07
1 5 7 9 8 3 8 1 9 4 6
2 1 3 3 3 1 2 2 3 6 0
3 9 8 0 9 4 9 5 1 2 0
4 3 1 0 5 8 7 1 0 9 2
5 0 7 1 2 0 0 2 9 2 1
</code></pre>
<p>All values in the data are positive numbers,</p>
<p>but notice that each column name begins with either <strong>+</strong> or <strong>-</strong>.</p>
<p>Some columns exist only in the <strong>+</strong> form (such as +Col06),</p>
<p>some only in the <strong>-</strong> form (such as -Col07),</p>
<p>and some have both (such as +Col01 and -Col01).</p>
<p>I want to normalise this dataset by subtracting the value in each <strong>-</strong> column from its matching <strong>+</strong> column, and change the column names so they no longer start with + or -, so the end table will look like this:</p>
<pre><code>RepID Col01 Col02 Col03 Col04 Col05 Col06 Col07
1 -3 7 8 -6 8 4 -6
2 -2 3 1 -2 2 6 -0
3 0 8 -5 3 9 2 0
4 -2 1 -1 8 7 9 -2
5 -2 7 -1 -9 0 2 -1
</code></pre>
<p>Is there any way I can do that?</p>
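<p>A sketch on a shrunken version of the frame above (only a few of the columns, RepID as the index): splitting the columns by sign prefix and letting pandas align them by the stripped name handles all three cases, including lone + or lone - columns:</p>

```python
import pandas as pd

# Tiny version of the frame above (RepID as the index).
df = pd.DataFrame(
    {"+Col01": [5, 1], "-Col01": [8, 3], "+Col02": [7, 3], "-Col07": [6, 0]},
    index=pd.Index([1, 2], name="RepID"),
)

plus = df.filter(regex=r"^\+").rename(columns=lambda c: c[1:])
minus = df.filter(regex=r"^-").rename(columns=lambda c: c[1:])

# sub() aligns on the stripped names; fill_value=0 covers names that
# exist on only one side (e.g. a lone +Col02 or -Col07 here).
result = plus.sub(minus, fill_value=0)
```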
|
<python><pandas><dataframe>
|
2024-01-07 02:06:30
| 4
| 8,150
|
asmgx
|
77,771,642
| 3,782,963
|
Labelling a step plot in Matplotlib
|
<p>Below is a code in which it plots a sine wave and adds a step wave on top of it:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
# sine wave parameters
A = 5 # Increased Amplitude
f = 1 # Frequency in Hz
T = 1/f # Period
t = np.linspace(0, 2*T, 1000) # Time from 0 to 2 periods
# Create a sine wave
sine_wave = A * np.sin(2 * np.pi * f * t)
# Digital signal (sampled)
sampling_rate = 10
sampling_interval = 1 / sampling_rate
sample_times = np.arange(0, 2*T, sampling_interval)
sampled_sine_wave = A * np.sin(2 * np.pi * f * sample_times)
# Plot the results with a step function representing the digital signal and label the points
plt.figure(figsize=(12, 8))
plt.plot(t, sine_wave, label='Continuous Sine Wave')
plt.step(sample_times, sampled_sine_wave, 'r-', where='post', linewidth=2, label='Digital Signal (Step Wave)')
# Label each point on the step function
for x, y in zip(sample_times, sampled_sine_wave):
label = f"{y:.2f}"
plt.annotate(label, (x, y), textcoords="offset points", xytext=(0,10), ha='center', fontsize=8, color='blue')
plt.title('Analog Sine Wave and Digital Step Signal with Labels')
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.legend()
plt.grid(True)
plt.show()
</code></pre>
<p>And this results in the following plot:</p>
<p><a href="https://i.sstatic.net/5z9Nn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5z9Nn.png" alt="enter image description here" /></a></p>
<p>Is there a way to label this point, where the arrow is located?</p>
<p><a href="https://i.sstatic.net/TCzzo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TCzzo.png" alt="enter image description here" /></a></p>
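<p>A sketch of one way to do this (assuming the arrow marks one particular sample, taken here as index <code>k</code> — pick whichever index matches the arrow's actual location): <code>annotate</code> can attach a text label to an exact (x, y) point with its own arrow:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, for illustration only
import matplotlib.pyplot as plt
import numpy as np

# Same sampled step signal as above, rebuilt minimally.
A, f = 5, 1
sample_times = np.arange(0, 2, 0.1)
sampled_sine_wave = A * np.sin(2 * np.pi * f * sample_times)

fig, ax = plt.subplots()
ax.step(sample_times, sampled_sine_wave, "r-", where="post")

# Assumed target: the sample the arrow points at (index k is a guess here).
k = 7
ann = ax.annotate(
    f"({sample_times[k]:.2f}, {sampled_sine_wave[k]:.2f})",
    xy=(sample_times[k], sampled_sine_wave[k]),   # arrow tip: the data point
    xytext=(sample_times[k] + 0.15, sampled_sine_wave[k] + 2),
    arrowprops=dict(arrowstyle="->"),
    fontsize=9,
)
```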
|
<python><matplotlib>
|
2024-01-07 01:46:22
| 1
| 2,835
|
Akshay
|
77,771,561
| 2,591,138
|
Use selenium to navigate in joint selection/list elements
|
<p>I'm using beautifulSoup / selenium to do some webscraping and am hitting a wall with a certain dropdown select menu. The rough HTML is as follows:</p>
<pre><code><div class="selection-box" alt="selection" title="selection" role="select" tabindex="0">
<select id="select" style="display: none;">
<option value="1">First</option>
<option value="2">Second</option>
<option value="3" selected="selected">Third</option>
</select>
<div class="current">Third</div>
<ul class="options" style="display: none;">
<li class="search--option" alt="First option" title="First option" aria-label="First option" role="option" tabindex="0">First</li>
<li class="search--option" alt="Second option" title="Second option" aria-label="Second option" role="option" tabindex="0">Second</li>
<li class="search--option selected" alt="Third option" title="Third option" aria-label="Third option" role="option" tabindex="0">Third</li>
</ul>
</code></pre>
<p>When I operate the menu in the browser, it changes as follows:</p>
<ul>
<li>the wrapper div class changes to "selection-box active"</li>
<li>the ul changes to "display: block"</li>
<li>once I pick a different option, those two get reversed again and the middle div and the selected li item change accordingly</li>
</ul>
<p>I want to use selenium to select a certain option. So far, I tried the following:</p>
<pre><code> from selenium.webdriver.support.ui import Select
drpBrand = driver.find_element(By.ID, "select");
css = 'select#select' # css selector of the element
js = """const data_options = Array.from(document.querySelectorAll('{css}'));
data_options.forEach(a=>{{a.style='display:block;';}});""".format(css=css)
driver.execute_script(js)
drpBrand.select_by_visible_text("Third");
</code></pre>
<p>This is a best-of compiled from various threads (<a href="https://stackoverflow.com/questions/25679738/element-not-visible-element-is-not-currently-visible-and-may-not-be-manipulated">element not visible: Element is not currently visible and may not be manipulated - Selenium webdriver</a>, <a href="https://stackoverflow.com/questions/20138761/how-to-select-a-dropdown-value-in-selenium-webdriver-using-java">How to select a dropdown value in Selenium WebDriver using Java</a>), but it still doesn't work. Any ideas? I assume I need to target the list as well (besides the select)?</p>
<p>The error is always</p>
<blockquote>
<p>selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable: Element is not currently visible and may not be manipulated</p>
</blockquote>
<p>Thanks</p>
|
<python><html><selenium-webdriver>
|
2024-01-07 00:48:47
| 1
| 1,083
|
Berbatov
|
77,771,523
| 5,521,564
|
RateLimitingError when communicating with OpenAI API
|
<p>I am trying to get ChatGPT OpenAI API up and running with this little python script:</p>
<pre><code>from openai import OpenAI
client = OpenAI()
completion = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."},
{"role": "user", "content": "Compose a poem that explains the concept of recursion in programming."}
]
)
print(completion.choices[0].message)
</code></pre>
<p>The script however gives me the following error:</p>
<pre><code>RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. 'type': 'insufficient_quota', 'param': None, 'code':'insufficient_quota'}}
</code></pre>
<p>My quota, however, is $5, which is more than enough for this simple prompt. Is this a code problem, or should I simply add credits to my OpenAI account?</p>
|
<python><chatgpt-api>
|
2024-01-07 00:27:58
| 1
| 1,963
|
Matthias
|
77,771,494
| 1,144,251
|
Python Plotly: How to avoid truncated chart
|
<p>I have a chart with a 10-line annotation that gets truncated. I tried increasing the height of the layout, but it only increased the height of the chart; the truncation issue persists.
I've included an image of what I mean below:</p>
<p><a href="https://i.sstatic.net/h4KFl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h4KFl.png" alt="enter image description here" /></a></p>
<p>Any ideas on how to avoid the truncation is appreciated. Thanks!</p>
<p>Edit:
I tried the suggested <code>fig.update_yaxes(domain=[0.25, 1.0])</code>; the result now looks like:
<a href="https://i.sstatic.net/nK2ND.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nK2ND.jpg" alt="enter image description here" /></a></p>
|
<python><plotly><annotations>
|
2024-01-07 00:14:36
| 1
| 357
|
user1144251
|
77,771,119
| 2,664,376
|
Gurobi: unsupported operand type(s) for -: 'int' and 'tupledict'
|
<p>I have this constraint with big-M parameter and an auxiliary binary variable w:</p>
<pre><code>for i in customers:
for j in customers:
if i != j:
mdl.addConstr(y[j] + z[j] <= y[i] + z[i] - df.demand[j]*(x1[i,j] + x2[i,j])
+ 100000 * (1 - w), name= 'C8')
</code></pre>
<p>When I run the code, I get the following error:</p>
<pre><code>TypeError: unsupported operand type(s) for -: 'int' and 'tupledict'
</code></pre>
<p>w is defined as follows:</p>
<pre><code>w = mdl.addVars(0,1,vtype=GRB.BINARY, name='w')
</code></pre>
<p>I couldn't figure out what the problem is. Is it a problem with how w is defined?
Thank you</p>
|
<python><gurobi>
|
2024-01-06 21:38:06
| 1
| 1,335
|
MAYA
|
77,770,346
| 12,432,147
|
Flask hot reload not working in Python 3.10: "operation was attempted on something that is not a socket"
|
<p>I am using Python 3.10.11, Flask 3.0.0 and Werkzeug 3.0.1.
I've worked on a project for a couple of months, took a break, and now, a couple of months later, for some reason hot reload doesn't work. Instead, it throws the error below. Moreover, sometimes the app doesn't update even when I restart it (close and start a new terminal). I tried treating this as an environmental issue and as a version-mismatch issue, but neither solution worked. I don't have any sockets in my app either.</p>
<pre><code>Exception in thread Thread-2 (serve_forever):
Traceback (most recent call last):
File "C:\Users\USER\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\USER\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\USER\AppData\Local\Programs\Python\Python310\lib\site-packages\werkzeug\serving.py", line 806, in serve_forever
super().serve_forever(poll_interval=poll_interval)
File "C:\Users\USER\AppData\Local\Programs\Python\Python310\lib\socketserver.py", line 232, in serve_forever
ready = selector.select(poll_interval)
File "C:\Users\USER\AppData\Local\Programs\Python\Python310\lib\selectors.py", line 324, in select
r, w, _ = self._select(self._readers, self._writers, [], timeout)
File "C:\Users\USER\AppData\Local\Programs\Python\Python310\lib\selectors.py", line 315, in _select
r, w, x = select.select(r, w, w, timeout)
OSError: [WinError 10038] An operation was attempted on something that is not a socket
</code></pre>
<p>I am using pipenv and have tried running the app in multiple ways:
<code>flask --debug run</code>
<code>py ./app.py</code>
<code>pipenv run py ./app.py</code></p>
<p>I just can't understand why it stopped working. I checked for version mismatches and tried multiple Python versions and multiple Flask &amp; Werkzeug versions, unfortunately with no luck.
I tried resetting Winsock (<code>netsh winsock reset</code>) as presented <a href="https://stackoverflow.com/a/73579356/12432147">here</a>, but with no luck either.
I tried deleting the environment and creating a new one. I ran this app globally and locally (in the env) but got the same issue. Sometimes, running in the environment, it catches only the first reload and then stops working (without any reload options).</p>
<p>I am clueless. What can cause this?
My only guess is maybe when it tries to reload the app, it crashes somewhere that I can't "Ctrl + C" out of and this causes a loop which corrupts the whole listening port. Changing ports doesn't work either.</p>
|
<python><sockets><flask><werkzeug><hot-reload>
|
2024-01-06 17:27:52
| 1
| 319
|
Ilan Yashuk
|
77,770,290
| 1,467,552
|
How to drop certain elements from a list typed column in Polars?
|
<p>Suppose I have polars Dataframe with a list column type of strings:</p>
<pre><code>┌─────────────────────────────────────────────────┐
│ words │
│ --- │
│ list[str] │
╞═════════════════════════════════════════════════╡
│ ["i", "like", "the", "pizza"] │
│ ["the", "dog", "is", "runnig"] │
│ ["me", "and", "my", "friend", "are", "playing"] │
└─────────────────────────────────────────────────┘
</code></pre>
<p>And I would like to filter stop words from every list.</p>
<p>I can apply some custom function using <code>map_elements</code>:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
pl.Config(fmt_table_cell_list_len=8, fmt_str_lengths=80)
df = pl.DataFrame({
"words": [["i", "like", "the", "pizza"],
["the", "dog", "is", "runnig"],
["me", "and", "my", "friend", "are", "playing"]]
})
STOP_WORDS = ["the"]
filtered_df = df.with_columns(
pl.col("words").map_elements(lambda words:
[word for word in words if word not in STOP_WORDS]
)
)
</code></pre>
<pre><code>shape: (3, 1)
┌─────────────────────────────────────────────────┐
│ words │
│ --- │
│ list[str] │
╞═════════════════════════════════════════════════╡
│ ["i", "like", "pizza"] │
│ ["dog", "is", "runnig"] │
│ ["me", "and", "my", "friend", "are", "playing"] │
└─────────────────────────────────────────────────┘
</code></pre>
<p>However, it is stated <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.map_elements.html" rel="nofollow noreferrer">in the docs</a> that custom UDFs are much slower, so I would prefer a native-API-based solution.</p>
<p>Is there any builtin function in Polars to achieve my goal?</p>
<p>Thanks.</p>
|
<python><dataframe><python-polars>
|
2024-01-06 17:12:18
| 2
| 1,170
|
barak1412
|
77,770,248
| 9,582,542
|
Scrapy loop to find each element and export to Json
|
<p>The code below locates all the elements I am looking for, but I am struggling to put this in a loop where the data gets put into a dataframe and exported to a JSON file. All the commands work from the command line to bring in the data I need. How can I get this to export JSON?</p>
<pre><code>import scrapy
from scrapy.item import Field, Item
from scrapy.selector import Selector
class WeatherdataSpider(scrapy.Spider):
name = 'weatherdata'
    allowed_domains = ['www.nflweather.com']
    start_urls = ['https://www.nflweather.com/']
def parse(self, response):
#pass
trans_table = {ord(c): None for c in u'\n\t\t'}
Datetime = ' '.join(s.strip().translate(trans_table) for s in response.xpath('//div[@class="fw-bold text-wrap"]/text()').extract())
awayTeam = response.xpath('//span[@class="fw-bold"]/text()').extract()
homeTeam = response.xpath('//span[@class="fw-bold ms-1"]/text()').extract()
TempProb = response.xpath('//div[@class="mx-2"]/span/text()').extract()
windspeed = response.xpath('//div[@class="text-break col-md-4 mb-1 px-1 flex-centered"]/span/text()').extract()
</code></pre>
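<p>For context, the output shape I am ultimately after is just a list of per-game dicts dumped with the stdlib — the field names below are placeholders, not what my XPaths actually return:</p>

```python
import json
import os
import tempfile

# One dict per game; these field names are illustrative placeholders
rows = [
    {"datetime": "Sun 1:00pm", "away": "Chiefs", "home": "Bills",
     "temp": "28f", "wind": "10mph"},
]

# Dump the collected rows to a JSON file
out_path = os.path.join(tempfile.gettempdir(), "weather.json")
with open(out_path, "w") as f:
    json.dump(rows, f, indent=2)
```

<p>I'm aware Scrapy can also write feeds itself (e.g. <code>scrapy crawl weatherdata -O weather.json</code> when the spider yields items); I just can't see how to wire my extracted lists into that.</p>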
|
<python><scrapy>
|
2024-01-06 17:00:21
| 1
| 690
|
Leo Torres
|
77,770,219
| 11,946,045
|
Is there a way to define response models based on a schema in FastAPI websockets?
|
<h1>Question</h1>
<p>Assume you have the following pydantic schema</p>
<pre><code>class Item(BaseModel):
id: int
name: str
</code></pre>
<p>and a fastapi server running websockets</p>
<pre><code>class ConnectionManager:
def __init__(self, db: Session = Depends(get_db)):
self.active_connections: list[WebSocket] = []
self.db = db
async def connect(self, websocket: WebSocket):
await websocket.accept()
self.active_connections.append(websocket)
def disconnect(self, websocket: WebSocket):
self.active_connections.remove(websocket)
async def handler(self, websocket: WebSocket):
        data = await websocket.receive_json()
json = self.get_items()
# should send a list of items
await websocket.send_json(dict(json))
def get_items(self) -> List[Item]:
# apply the schema to the queried items then return
        return self.db.query(Item).all()
</code></pre>
<p>the following is the websocket endpoint</p>
<pre><code>@app.websocket("/ws")
async def ws_endpoint(websocket: WebSocket, manager: ConnectionManager = Depends()):
await manager.connect(websocket)
try:
while True:
            await manager.handler(websocket)
except WebSocketDisconnect:
manager.disconnect(websocket)
</code></pre>
<p>Is there a way to define response models for <code>ConnectionManager</code> methods such as <code>get_items</code>, so that they return JSON data without my manually going through the pain of converting the query results every time I need to return data from the database?</p>
<p>currently the shortest and closest way to response models that I found was list comprehension</p>
<pre><code>return [dict(SingleBeacon(**vars(b))) for b in self.db.query(Beacon).all()]
</code></pre>
<p>but that isn't scalable and honestly isn't pleasant to look at</p>
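<p>The closest I have gotten to something reusable is a pydantic v2 <code>TypeAdapter</code> helper; I am not sure this is the intended pattern, so consider it a sketch:</p>

```python
from typing import List

from pydantic import BaseModel, ConfigDict, TypeAdapter


class Item(BaseModel):
    # from_attributes lets pydantic read plain ORM objects' attributes
    model_config = ConfigDict(from_attributes=True)
    id: int
    name: str


# Built once per schema, reused for every response
items_adapter = TypeAdapter(List[Item])


def rows_to_json(rows) -> list:
    # Validate ORM rows against the schema, then dump to plain dicts
    return items_adapter.dump_python(items_adapter.validate_python(rows))
```

<p><code>from_attributes=True</code> is what lets the adapter consume SQLAlchemy rows directly, replacing the <code>vars(...)</code> comprehension.</p>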
|
<python><websocket><sqlalchemy><fastapi><jsonschema>
|
2024-01-06 16:51:14
| 1
| 814
|
Weed Cookie
|
77,770,191
| 12,178,630
|
YOLOv8 inference with OpenCV Python
|
<p>I have trained a YOLOv8 model for segmentation on a custom dataset. The model can do inference successfully when loaded by ultralytics; however, I would like to run it on an edge device, for which ultralytics would be a bit heavy to install. Therefore I am using OpenCV.
I have exported my model to onnx format using the command:</p>
<pre><code>my_new_model.export(format='onnx', imgsz=[800,800], opset=12)
</code></pre>
<p>and then when I try to load the model with OpenCV and do inference (using this <a href="https://github.com/ultralytics/ultralytics/blob/main/examples/YOLOv8-OpenCV-ONNX-Python/main.py" rel="nofollow noreferrer">code</a>).
While I have only 3 classes, the part of the code below (from the link I mentioned) prints {2, 7, 10, 12, 13, 15, 16, 17, 18, 21, 22, 24}, while it should give only a set of {0, 1, 2} since I have 3 classes. I am just confused about where these numbers come from.
My classes are defined as:</p>
My classes are defined as:</p>
<pre><code>CLASSES = ['lemon', 'tomato', 'watermelon']
</code></pre>
<pre><code>for i in range(rows):
classes_scores = outputs[0][i][4:]
(minScore, maxScore, minClassLoc, (x, maxClassIndex)) = cv2.minMaxLoc(classes_scores)
if maxScore >= 0.25:
box = [
outputs[0][i][0] - (0.5 * outputs[0][i][2]), outputs[0][i][1] - (0.5 * outputs[0][i][3]),
outputs[0][i][2], outputs[0][i][3]]
boxes.append(box)
scores.append(maxScore)
class_ids.append(maxClassIndex)
result_boxes = cv2.dnn.NMSBoxes(boxes, scores, 0.25, 0.45, 0.5)
print(set(class_ids))
</code></pre>
<p>Note that I have changed the number 640 in the code they provided, since my model is trained and exported for images of size 800x800.</p>
|
<python><opencv><deep-learning><yolo><yolov8>
|
2024-01-06 16:42:21
| 0
| 314
|
Josh
|
77,770,173
| 8,512,262
|
Get length of line on tkinter.canvas in inches (Mac OS)
|
<p>I'm currently running Mac OS Sonoma 14.2.1 and I'm trying to set up a tkinter canvas which is scaled to "actual" real-life size. I.e., if I draw a line between any two points on the canvas, I'd like to calculate its length in inches.</p>
<p>I've tried getting the DPI in tkinter like so:</p>
<pre><code>root = tk.Tk()
dpi = round(root.winfo_fpixels('1i'))
</code></pre>
<p>On my MacBook, this returns <code>72</code>, which seems like a sensible DPI</p>
<p>Then I used the distance formula to calculate the length of the line in pixels
(I don't need crazy precision, so that's why <code>round</code> is used here)</p>
<pre><code>x1, y1, x2, y2 = canvas.bbox(line)
length = round(math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2))
</code></pre>
<p>Here's a barebones example of everything together. Note that I'm using <code>bbox()</code> here because in actuality the lines will be drawn between two points set with the mouse, so they can be at any angle, not just a static straight line like in this example...</p>
<pre><code>import math
import tkinter as tk
root = tk.Tk()
# get dpi
dpi = round(root.winfo_fpixels('1i'))
# init canvas
canvas = tk.Canvas(root)
canvas.pack(expand=True, fill='both')
# create a (supposedly) 1 inch line, for example
line = canvas.create_line(50, 50, dpi + 50, 50, width=2)
# get line endpoints
x1, y1, x2, y2 = canvas.bbox(line)
# calculate line length
length_in_px = round(math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2))
length_in_inches = length_in_px / dpi
# result
print(length_in_inches) # => 1.0833333333333333 inches?
if __name__ == '__main__':
root.mainloop()
</code></pre>
<p>So the math (sort of?) checks out. I suspect this floating point weirdness is related to <code>bbox</code>, so if anyone has a better suggestion for getting the endpoints of the line, let me know!* (For completeness' sake, removing <code>round</code> from both <code>dpi</code> and <code>length_in_px</code> results in a slightly different weird float: <code>1.0857189357495975</code>)</p>
<p><sup>*SEE FIRST UPDATE</sup></p>
<p><em>But</em> the real issue is that when I physically measure the line on my laptop's display (literally, with calipers), it's actually 0.555 inches. This, to me, means at least one of two things:</p>
<ol>
<li>There's some UI scaling going on, in which case the question becomes <strong>how do I figure out this scaling factor on Mac OS?</strong> (I have experience doing this on Windows, but I can't figure it out on my Mac)</li>
<li>The DPI value returned by <code>winfo_fpixels()</code> is incorrect; this seems less likely to me, but not impossible. In this case, <strong>how do I find the actual DPI of my display(s)?</strong></li>
</ol>
<p>How can I accurately go about translating pixels to real-world units? Any help is appreciated, and thanks in advance!</p>
<hr>
<p><strong>UPDATE 1</strong></p>
<p>using <code>canvas.coords(line)</code> instead of <code>canvas.bbox(line)</code> to get the endpoints resolves the floating point weirdness, but not the scaling issue - that's one thing down!</p>
<hr>
<p><strong>UPDATE 2</strong></p>
<p>Tech specs for my laptop (2021 M1 MacBook Pro) state that the display DPI is 254...I'm not sure what to do with this info as yet</p>
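<p>Doing the arithmetic on my own numbers (just a sanity check on my part, nothing authoritative): the line is shorter than predicted by almost exactly a factor of two, which looks suspiciously like macOS's 2x Retina backing scale:</p>

```python
# Values measured above
reported_inches = 1.0833333333333333  # what tkinter's 72 dpi math predicts
measured_inches = 0.555               # what the calipers say

# Ratio between the predicted and physical lengths
scale = reported_inches / measured_inches  # ~1.95, close to a 2x backing scale
```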
|
<python><macos><tkinter><tk-toolkit>
|
2024-01-06 16:37:06
| 0
| 7,190
|
JRiggles
|
77,770,168
| 6,423,456
|
How do I create a class with async code in the constructor?
|
<p>I have a fastAPI project that talks to a Cosmos database.
All of my fastAPI routes are async (<code>async def ...</code>).
I need an async class that will perform CRUD operations of the Cosmos DB.
The problem I'm having is figuring out a constructor for the class.</p>
<p>I want the constructor to:</p>
<ul>
<li>Take in the CosmosClient as an arg (azure.cosmos.aio.CosmosClient)</li>
<li>Use the client to make sure the database is created</li>
<li>Use the client to make sure the container is created</li>
</ul>
<p>Something like this:</p>
<pre class="lang-py prettyprint-override"><code>class CosmosCRUD:
def __init__(self, client: CosmosClient):
        self.client = client
self.database = await self.client.create_database_if_not_exists("MY_DATABASE_NAME")
self.container = await self.database.create_container_if_not_exists("MY_CONTAINER_NAME", partition_key=...)
</code></pre>
<p>Unfortunately, you can only <code>await</code> inside of async functions, and <code>__init__</code> can't be async, so the code above doesn't work.</p>
<p>As far as I can tell, there's a few solutions:</p>
<ul>
<li>Create a new event loop inside of <code>__init__</code> and run the async code that gets the db and container within that
<ul>
<li>As far as I can tell, that means that fastAPI will stop processing all requests any time an instance of this class is being created, which is probably going to be on every request. This will absolutely destroy performance of the entire app</li>
</ul>
</li>
<li>Ignore <code>__init__</code> and create a new async constructor function like <code>create</code> that users of this class need to call
<ul>
<li>The IDE (PyCharm) doesn't know that <code>create</code> is the constructor that will always be called first. Any instance variables created in this new constructor that you try to use in other methods will be flagged as non-existing by the IDE, because they weren't created in <code>__init__</code></li>
<li>I tried having both a <code>create</code> async classmethod that is the actual constructor, and an <code>__init__</code> method where I just added declarations for any instance variables created in <code>create</code>, with the types (ex: <code>self.client: CosmosClient</code>) but the IDE still complains they don't exist</li>
</ul>
</li>
<li>Use dependency injection somehow
<ul>
<li>Something like this: <a href="https://stackoverflow.com/a/65247387/6423456">https://stackoverflow.com/a/65247387/6423456</a></li>
<li>Unfortunately, my <code>__init__</code> method is taking the client as an argument, and needs to pass it to the function being injected</li>
<li>In the example referenced above, that would be like needing to pass the <code>a</code> arg to the <code>async_dep</code> being injected.</li>
<li>Is that even possible? I'm guessing not</li>
</ul>
</li>
</ul>
<p>Is there a good way to have async code inside of a constructor, without making the IDE unhappy about missing instance variables?</p>
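<p>To make the second option concrete, this is the async-classmethod shape I have been experimenting with; the client and its methods here are stand-ins for the real Cosmos SDK:</p>

```python
import asyncio


class CosmosCRUD:
    def __init__(self, client):
        # Declare every attribute here so the IDE knows they exist
        self.client = client
        self.database = None
        self.container = None

    @classmethod
    async def create(cls, client) -> "CosmosCRUD":
        # Async factory: do the awaits here, not in __init__
        self = cls(client)
        self.database = await client.create_database_if_not_exists("MY_DATABASE_NAME")
        self.container = await self.database.create_container_if_not_exists("MY_CONTAINER_NAME")
        return self
```

<p>Instances are then built with <code>await CosmosCRUD.create(client)</code>, which also composes with a FastAPI dependency that awaits the factory.</p>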
|
<python><asynchronous><fastapi>
|
2024-01-06 16:34:56
| 2
| 2,774
|
John
|
77,769,925
| 19,130,803
|
dropdown multi with same value but different datatype
|
<p>I am developing a <code>dash</code> app. I have a <code>dropdown</code> with <code>multi</code> that contains values of different datatypes. In this set of values, I have a value 1 of <code>str</code> type and a value 1 of <code>int</code> type. I am able to select both, but the <strong>problem</strong> is that when I remove either of them, both get removed.</p>
<p><strong>My observation is:</strong> since I am able to select both values distinctly, removing one should only remove the value of the respective type.</p>
<pre><code>import dash
from dash import Dash, dcc, html, Input, Output
import dash_bootstrap_components as dbc
dbc_css = (
"https://cdn.jsdelivr.net/gh/AnnMarieW/dash-bootstrap-templates@V1.0.2/dbc.min.css"
)
app = Dash(
__name__,
suppress_callback_exceptions=True,
external_stylesheets=[dbc.themes.BOOTSTRAP, dbc_css],
)
div_one = html.Div(id="div_one")
options = ["Montreal", "Paris", "1", "#99", 1]
value = ["Paris", "Montreal"]
dd = dcc.Dropdown(
id="dd",
options=options,
value=value,
multi=True,
)
@app.callback(
Output("div_one", "children"),
Input("dd", "value"),
)
def process(val):
print(f"{val=} and {type(val)=}")
for v in val:
print(f"{v=} and {type(v)=}")
return val
app.layout = dbc.Container(html.Div([dd, div_one]))
if __name__ == "__main__":
app.run(host="0.0.0.0", port="8001", debug=True)
</code></pre>
<p>What am I missing?</p>
|
<python><plotly><plotly-dash>
|
2024-01-06 15:14:00
| 0
| 962
|
winter
|
77,769,753
| 236,872
|
how do I find out the "package names" in requirements.txt when writing a Google Cloud Function?
|
<p>I am writing a Google Cloud Function in Python. It imports the following packages:</p>
<pre><code>from google.api_core.client_options import ClientOptions
from google.cloud import documentai # type: ignore
from google.cloud import storage
</code></pre>
<p>I need to put these requirements into requirements.txt. By trial and error, I found out that they are called</p>
<pre><code>functions-framework==3.*
google-api-core>=2.3.2
google-cloud-core>=2.2.1
google-cloud-documentai>=1.2.0
google-cloud-storage>=1.36.2
</code></pre>
<p>What is the right way to find out the package names that need to be put into requirements.txt? Is there a reference where I can find them?</p>
|
<python><google-cloud-platform><google-cloud-functions>
|
2024-01-06 14:20:42
| 1
| 1,146
|
Thorsten Staerk
|
77,769,693
| 3,591,044
|
Encode audio data to string (Flask) and decode it (Javascript)
|
<p>I have a Python Flask app with the method shown below. In the method I'm synthesizing voice from text using Azure text to speech.</p>
<pre><code>@app.route("/retrieve_speech", methods=['POST'])
def retrieve_speech():
text= request.form.get('text')
start = time.time()
speech_key = "my key"
speech_region = "my region"
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=speech_region)
speech_config.endpoint_id = "my endpoint"
speech_config.speech_synthesis_voice_name = "voice name"
speech_config.set_speech_synthesis_output_format(
speechsdk.SpeechSynthesisOutputFormat.Audio24Khz160KBitRateMonoMp3)
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
result = synthesizer.speak_text_async(text=text).get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
# Convert to wav
audio = AudioSegment.from_file(io.BytesIO(result.audio_data))
duration = audio.duration_seconds
data = io.BytesIO()
audio.export(data, format='wav')
data.seek(0)
# Convert binary data to base64 string
data = base64.b64encode(data.read()).decode('utf-8')
speech_timing = time.time() - start
elif result.reason == speechsdk.ResultReason.Canceled:
cancellation_details = result.cancellation_details
if cancellation_details.reason == speechsdk.CancellationReason.Error:
logging.error("Azure speech synthesis failed: {}".format(cancellation_details.error_details))
return jsonify(audio_data=data, speech_timing=str(speech_timing), other="other strings")
</code></pre>
<p>I'm using the Flask method in my frontend (webpage) using Javascript as follows:</p>
<pre><code> $.post("/retrieve_speech", { text: "This is a test" }).done(function (data) {
        var audioData = data.audio_data;
var speech_timing = data.speech_timing;
var other = data.other;
// Decode base64 string to binary
var binaryData = atob(audioData);
// Create an array of 8-bit unsigned integers
var byteArray = new Uint8Array(binaryData.length);
for(var i = 0; i < binaryData.length; i++) {
byteArray[i] = binaryData.charCodeAt(i);
}
// Create a blob object from the byte array
var blob = new Blob([byteArray], {type: 'audio/wav'});
// Create a URL for the blob object
var url = URL.createObjectURL(blob);
// Play the audio
var audio = new Audio(url);
audio.play();
});
</code></pre>
<p>The problem now is that the audio is not playing. In addition in the Flask app I'm getting the following message: <code>Numba: Attempted to fork from a non-main thread, the TBB library may be in an invalid state in the child process.</code></p>
<p>Synthesizing the speech works, so the problem must be the conversion to wav or to string in the Flask app and/or the decoding of the string in Javascript.</p>
<p>Is something wrong with my code?</p>
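<p>To rule out the base64 leg on the Flask side, this is the round-trip check I would use (with stand-in bytes instead of real Azure output):</p>

```python
import base64

# Stand-in for the wav bytes produced by audio.export(...)
wav_bytes = b"RIFF\x24\x00\x00\x00WAVEfmt "

encoded = base64.b64encode(wav_bytes).decode("utf-8")  # what Flask sends as JSON
decoded = base64.b64decode(encoded)                    # what atob() should yield byte-for-byte
```

<p>If <code>decoded</code> equals the original bytes, the encoding itself is lossless and the problem is elsewhere in the pipeline.</p>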
|
<javascript><python><flask><base64><azure-cognitive-services>
|
2024-01-06 13:59:53
| 1
| 891
|
BlackHawk
|
77,769,652
| 2,219,819
|
Reading sorted parquet files and merge them with Polars
|
<p>In order to save memory, I have written 49 sorted files to disk:</p>
<pre class="lang-py prettyprint-override"><code>tissue_pq_paths = []
for tissue in tissues:
[...]
aggregated_tissue_df = [...].sort(groupby).collect()
aggregated_tissue_df.write_parquet(tissue_pq_path, compression="snappy", statistics=True, use_pyarrow=True)
tissue_pq_paths.append(tissue_pq_path)
del aggregated_tissue_df
</code></pre>
<p>Now I want to read those 49 sorted files and merge them:</p>
<pre class="lang-py prettyprint-override"><code>(
pl.scan_parquet(tissue_pq_paths, hive_partitioning=False)
.sort(groupby)
.sink_parquet(output_pq_file, compression="snappy", statistics=True)
)
</code></pre>
<p>My assumption was that, since the individual files are already sorted, Polars would just merge the files without re-sorting.
Instead, Polars reads all files into memory and sorts them again globally.</p>
<p>What can I do about this issue?</p>
|
<python><dataframe><memory><python-polars>
|
2024-01-06 13:46:10
| 0
| 716
|
Hoeze
|
77,769,608
| 1,757,321
|
Add Agent with llm-math to a LangChain Expressive Language implementation
|
<p>I have this LCEL solution:</p>
<pre><code>
from langchain.document_loaders.pdf import PyMuPDFLoader
import os
from typing import List, Tuple
from dotenv import load_dotenv
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import AIMessage, HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import (
RunnableParallel,
RunnableLambda
)
from langchain.document_loaders import DirectoryLoader, TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chat_models import ChatOllama
from langchain.embeddings import GPT4AllEmbeddings
from langchain.vectorstores import Qdrant
from langchain.utils.math import cosine_similarity
load_dotenv()
# Define a dictionary to map file extensions to their respective loaders
loaders = {
'.pdf': PyMuPDFLoader,
'.txt': TextLoader
}
# Define a function to create a DirectoryLoader for a specific file type
def create_directory_loader(file_type, directory_path):
return DirectoryLoader(
path=directory_path,
glob=f"**/*{file_type}",
loader_cls=loaders[file_type],
show_progress=True,
use_multithreading=True
)
dirpath = os.environ.get('DOCS_DIRECTORY')
pdf_loader = create_directory_loader('.pdf', dirpath)
txt_loader = create_directory_loader('.txt', dirpath)
pdfs = pdf_loader.load()
texts = txt_loader.load()
full_text = ''
for paper in texts:
full_text = full_text + paper.page_content
for paper in pdfs:
full_text = full_text + paper.page_content
full_text = " ".join(l for l in full_text.splitlines() if l)
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1024,
chunk_overlap=100
)
document_chunks = text_splitter.create_documents([full_text])
embeddings = GPT4AllEmbeddings()
url = 'http://0.0.0.0:6333'
qdrant = Qdrant.from_documents(
documents=document_chunks,
embedding=embeddings,
url=url,
collection_name="v2_local",
prefer_grpc=True,
force_recreate=True
)
# Do two different kinds of searches using
# different algorithms, then submit both combined
# as context.
def _custom_retriever(inputs):
question = inputs["question"]
sim_search = qdrant.similarity_search(query=question, k=2)
marg_search = qdrant.max_marginal_relevance_search(query=question, k=5)
combined_retrieved_context = sim_search[0].page_content + \
marg_search[0].page_content
return combined_retrieved_context
# Ollama
ollama_llm = os.environ.get('MODEL_NAME') or "llama2:chat"
model = ChatOllama(model=ollama_llm, temperature=0.9)
def prepare_prompts(prompts):
prompt_templates = [item.prompt for item in prompts]
prompt_embeddings = embeddings.embed_documents(prompt_templates)
return [prompt_embeddings, prompt_templates]
def prompt_router(input):
prompt_embeddings, prompt_templates = prepare_prompts(input["prompt"])
query_embedding = embeddings.embed_query(input["question"])
similarity = cosine_similarity([query_embedding], prompt_embeddings)[0]
most_similar = prompt_templates[similarity.argmax()]
return PromptTemplate.from_template(most_similar)
def _format_chat_history(chat_history: List[List[str]]) -> List:
buffer = []
for human, ai in chat_history:
buffer.append(HumanMessage(content=f'[INST]{human}[/INST]'))
buffer.append(AIMessage(content=ai))
return buffer
class PromptItem(BaseModel):
name: str
prompt: str
class ChatHistory(BaseModel):
chat_history: List[Tuple[str, str]] = Field(..., extra={
"widget": {"type": "chat"}})
question: str
prompt: List[PromptItem]
_inputs = RunnableParallel(
{
"context": _custom_retriever,
"question": lambda x: x["question"],
"chat_history": lambda x: _format_chat_history(x["chat_history"]),
"prompt": lambda x: x["prompt"]
}
).with_types(input_type=ChatHistory)
chain = _inputs | RunnableLambda(prompt_router) | model | StrOutputParser()
</code></pre>
<p>It currently works fine, but is absolutely terrible at any calculation-related questions. I want to include the <code>llm-math</code> tool in this chain via agents.</p>
<p>Working my way through the <a href="https://python.langchain.com/docs/modules/agents/quick_start" rel="nofollow noreferrer">agents LCEL example</a> even fails with a <a href="https://github.com/langchain-ai/langchain/discussions/15634" rel="nofollow noreferrer">ValueError</a> on their default cookbook example.</p>
<p>The end result I'm trying to achieve is adding the ability to handle mathematical calculations via an agent tool like <code>llm-math</code>, in tandem with the current approach above via LCEL.</p>
|
<python><langchain>
|
2024-01-06 13:34:20
| 0
| 9,577
|
KhoPhi
|
77,769,496
| 2,320,983
|
Selenium unable to send keys to a combobox element
|
<p>I have been trying to create an automated way of adding stocks to Google Finance using Selenium. I am able to log in and get to adding a new investment, but that's where I get stuck.</p>
<p>Steps to reproduce:</p>
<p>Create a new portfolio in Google Finance and populate it with at least one item. Next, when you run the script below, it gets as far as clicking "Investment" but is unable to send any stock name to the combo box.</p>
<p>Code:</p>
<pre><code>class Google:
def __init__(self) -> None:
self.url = "https://accounts.google.com"
self.driver = uc.Chrome()
self.driver.delete_all_cookies()
self.time = 60
def login_and_goto_google_finance(self, email, password):
self.driver.get(self.url)
WebDriverWait(self.driver, 20) \
.until(EC.visibility_of_element_located((By.NAME, 'identifier'))) \
.send_keys(f'{email}' + Keys.ENTER)
WebDriverWait(self.driver, 20) \
.until(EC.visibility_of_element_located((By.NAME, 'Passwd'))) \
.send_keys(f'{password}' + Keys.ENTER)
def navigate_to_site(self, url, wait_enabled=True):
if wait_enabled:
sleep(self.time/20)
self.driver.get(url)
self.driver.find_element(By.XPATH, '//span[text()="Investment"]').click()
self.enter_symbol("BP", 1, 20240201, 100)
sleep(60*self.time)
def enter_symbol(self, symbol_name, qty, date, price):
try:
x = self.driver.find_element(By.XPATH, "//*[contains(@class, 'Ax4B8 ZAGvjd')]")
print(x.aria_role)
sleep(1)
x.click()
sleep(1)
x.send_keys(f'{symbol_name}'+Keys.ENTER)
except:
pass
finally:
sleep(60*self.time)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("-u", "--username", type=str, help="Email Id for logging in to Google", required=True)
parser.add_argument("-p", "--password", type=str, help="Password for logging in to Google", required=True)
args = parser.parse_args()
google = Google()
google.login_and_goto_google_finance(args.username, args.password)
google.navigate_to_site(<link_to_your_portfolio_here>)
</code></pre>
<p>What am I doing wrong here, and what should be the correct way of handling this?</p>
|
<python><selenium-webdriver><google-finance>
|
2024-01-06 13:00:33
| 1
| 326
|
Akshayanti
|
77,769,473
| 8,465,299
|
Adjusting Output Size of Auxiliary Matrix in Python
|
<p>After hidden layer activation, I want to create an auxiliary matrix to better capture the temporal aspect of the data in the below code snippet. The current shape of the return variable is <code>[out_channels, out_channels]</code> but I want the returned shape to be <code>[input_channels, out_channels]</code>. Which part of the code should be modified to achieve the desired output while keeping the idea/logic as it is?</p>
<pre><code>def my_fun( self, H: torch.FloatTensor,) -> torch.FloatTensor:
self.input_channels = 4
self.out_channels = 16
self.forgettingFactor = 0.92
self.lamb = 0.01
self.M = torch.inverse(self.lamb*torch.eye(self.out_channels))
HH = self.calculateHiddenLayerActivation(H) # [4,16]
Ht = HH.t() # [16 , 4]
###### Computation of auxiliary matrix
initial_product = torch.mm((1 / self.forgettingFactor) * self.M, Ht) # [16, 4]
intermediate_matrix = torch.mm(HH, initial_product ) # [4, 4]
sum_inside_pseudoinverse = torch.eye(self.input_channels) + intermediate_matrix # [4, 4]
pseudoinverse_sum = torch.pinverse(sum_inside_pseudoinverse) # [4, 4]
product_inside_expression = torch.mm(HH, (1/self.forgettingFactor) * self.M) # [4, 16]
dot_product_pseudo = torch.mm( pseudoinverse_sum , product_inside_expression) # [4, 16]
dot_product_with_hidden_matrix = torch.mm(Ht, dot_product_pseudo ) # [16, 16]
res = (1/self.forgettingFactor) * self.M - torch.mm((1/self.forgettingFactor) * self.M, dot_product_with_hidden_matrix ) # [16,16]
return res
</code></pre>
|
<python><pytorch><matrix-multiplication>
|
2024-01-06 12:55:49
| 1
| 733
|
Asif
|
77,769,458
| 12,806,025
|
monitoring memory usage and time with Python of the subprocess in Unix
|
<p>I want to track the time elapsed and memory usage of the <code>process</code> (bioinformatic tool) I execute from a Python script. I run the process on the Unix cluster, and save the monitoring parameters in a <code>report_file.txt</code>. To measure the elapsed time, I use the <code>resources</code> library, and to monitor the memory usage I use <code>psutil</code> library.</p>
<p>My main objective is to compare the performance of different tools, so I don't want to restrict memory or time in any way.</p>
<pre><code>import sys
import os
import subprocess, resource
import psutil
import time
def get_memory_info():
return {
"total_memory": psutil.virtual_memory().total / (1024.0 ** 3),
"available_memory": psutil.virtual_memory().available / (1024.0 ** 3),
"used_memory": psutil.virtual_memory().used / (1024.0 ** 3),
"memory_percentage": psutil.virtual_memory().percent
}
# Open file to capture process parameters
outrepfp = open(tbl_rep_file, "w")
### Start measuring the process parameters
SLICE_IN_SECONDS = 1
# Start measuring time
usage_start = resource.getrusage(resource.RUSAGE_CHILDREN)
# Create the line for process execution
cmd = '{0} {1} --tblout {2} {3}'.format(bioinformatics_tool, setups, resultdir, inputs)
# Execute the process
r = subprocess.Popen(cmd.split(), stdout=subprocess.DEVNULL, stderr=subprocess.PIPE, encoding='utf-8')
# End measuring time
usage_end = resource.getrusage(resource.RUSAGE_CHILDREN) # end measuring resources
# Save memory measures
resultTable = []
while r.poll() == None:
resultTable.append(get_memory_info())
time.sleep(SLICE_IN_SECONDS)
# In case the process fails
if r.returncode: sys.exit('FAILED: {}\n{}'.format(cmd, r.stderr))
# Extract used memory
memory = [m['used_memory'] for m in resultTable]
# Count the elapsed time
cpu_time_user = usage_end.ru_utime - usage_start.ru_utime
cpu_time_system = usage_end.ru_stime - usage_start.ru_stime
# Write measurment to report_file.txt
outrepfp.write('{0} {1} {2} {3}\n'.format(bioinformatics_tool, cpu_time_user, cpu_time_system, memory))
</code></pre>
<p>For a given process, I received my <code>report_file.txt</code>:</p>
<blockquote>
<p>bioinformatics_tool 0.0 0.0 [48.16242980957031, 47.76295852661133]</p>
</blockquote>
<p>Could you please help me understand why the elapsed time is showing as 0, even though memory usage was monitored for 2 seconds and two values were captured?</p>
<p>Previously, I had implemented a time-capturing mechanism that reported around 4 seconds of elapsed time for the same process, which seems inconsistent with my current memory usage measurement.</p>
<p>***** <strong>EDIT</strong> *****</p>
<p>When I moved <code>usage_end</code> behind the <code>r.poll()</code> loop, I received a time measurement, but also more memory readings:</p>
<blockquote>
<p>bioinformatics_tool 1.699341 0.063338 [18.01854705810547, 18.022377014160156, 17.966495513916016, 18.160659790039062, 18.281261444091797, 18.44908142
0898438, 18.343822479248047]</p>
</blockquote>
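<p>For reference, my current understanding — which would explain the original zeros — is that <code>getrusage(RUSAGE_CHILDREN)</code> only includes children that have already been waited on, so sampling it right after <code>Popen</code>, before the child exits, reads back unchanged. A minimal Unix-only check:</p>

```python
import resource
import subprocess
import sys

before = resource.getrusage(resource.RUSAGE_CHILDREN)

# A child that burns a little CPU; run() waits for it to finish,
# so its usage is folded into RUSAGE_CHILDREN afterwards
subprocess.run([sys.executable, "-c", "sum(i * i for i in range(10**6))"])

after = resource.getrusage(resource.RUSAGE_CHILDREN)
child_user_time = after.ru_utime - before.ru_utime  # > 0 once the child was reaped
```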
|
<python><unix><subprocess><resources><psutil>
|
2024-01-06 12:50:56
| 1
| 369
|
amk
|
77,769,366
| 14,839,602
|
Exclude page number from text when extracting from a PDF
|
<p>I want to exclude the page number of a PDF from the actual text using the <code>pypdf</code> package.</p>
<pre><code>from pypdf import PdfReader
reader = PdfReader("pdf-examples/kurdish-sample-2.pdf")
full_text = ""
for page in reader.pages:
full_text += page.extract_text() + "\n"
print(full_text)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>5 دوارۆژی ئەم منداڵه بکەنەوە کە چۆن و چی بەسەر دێت و دووچاری
</code></pre>
<p>The number <strong>5</strong> is the page number which should be excluded.</p>
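<p>The closest I have gotten is a heuristic that strips purely numeric tokens and lines — fragile if the body text contains standalone numbers, hence the question about something built in:</p>

```python
import re

def strip_page_numbers(text: str) -> str:
    # Heuristic: drop a leading digit run and any all-digit lines.
    # Breaks if the body text legitimately contains standalone numbers.
    cleaned_lines = []
    for line in text.splitlines():
        line = re.sub(r"^\s*\d+\s+", "", line)  # leading page number
        if line.strip().isdigit():
            continue  # whole line is just a number
        cleaned_lines.append(line)
    return "\n".join(cleaned_lines)
```

<p>pypdf's <code>extract_text</code> also accepts visitor callbacks that expose text positions, which might allow skipping header/footer regions properly, but I have not gotten that working.</p>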
|
<python><pdf><pypdf><text-extraction>
|
2024-01-06 12:26:10
| 1
| 434
|
Hama Sabah
|
77,769,292
| 584,239
|
when using pyserial connection to /dev/ttyS0 second time throws error termios.error: (22, 'Invalid argument')
|
<p>Below is my sample Python script. I am using <code>socat</code> to simulate a <code>/dev/ttyS0</code> serial connection. The first time I run the script I get no errors, but when it finishes and I run it a second time I get the error below. I have set <code>parity=serial.PARITY_NONE</code>. If I stop and restart my socat script, I am able to run the Python script one more time. Even though I am calling <code>ser.close()</code> at the end, I assume there is a specific way to close a serial port connection so that it can be reconnected. I am new to serial port programming, so any help would be great.</p>
<pre><code>Traceback (most recent call last): File
"/home/tk/tkworkspace/billing-system-main/test/WeightMachineReading2.py",
line 13, in <module>
ser = serial.Serial( File "/home/tk/.local/lib/python3.10/site-packages/serial/serialutil.py",
line 244, in __init__
self.open() File "/home/tk/.local/lib/python3.10/site-packages/serial/serialposix.py",
line 332, in open
self._reconfigure_port(force_update=True) File "/home/tk/.local/lib/python3.10/site-packages/serial/serialposix.py",
line 517, in _reconfigure_port
termios.tcsetattr( termios.error: (22, 'Invalid argument')
</code></pre>
<p>Socat script:</p>
<pre><code>socat -d -d pty,link=/dev/ttyS0,raw,group-late=dialout,mode=660,echo=0 pty,link=/dev/ttyS1,raw,group-late=dialout,mode=660,echo=0
</code></pre>
<p>Python Script:</p>
<pre><code>import time
import serial
ser = serial.Serial(
port='/dev/ttyS0',
baudrate=115200,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
bytesize=serial.SEVENBITS,
)
out = ''
for i in range(1):
waitingQueue = ser.inWaiting()
time.sleep(2)
if waitingQueue == 0:
weight = "-- kg"
print(weight.strip())
ser.close()
</code></pre>
|
<python><pyserial>
|
2024-01-06 12:02:21
| 0
| 9,691
|
kumar
|
77,769,243
| 893,254
|
How do I take one column of a Pandas dataframe and make it become part of the index?
|
<p>I have a Pandas dataframe which looks something like this:</p>
<pre><code> data day_of_month days_in_month
timestamp
2022-01-03 09:00:00 12 3 31
</code></pre>
<p>There is currently an index, which has the type of "pandas timestamp".</p>
<p>I want to be able to index this dataframe by the value in the column "days_in_month".</p>
<p>One way to do this might be to split the dataframe into separate dataframes, each one holding data for a different day of the month.</p>
<p>That would be relatively easy to do:</p>
<pre><code># etc ...
df_30_days_in_month = df[df['days_in_month'] == 30]
df_31_days_in_month = df[df['days_in_month'] == 31]
</code></pre>
<p>This isn't the route I want to take.</p>
<p>If I knew the name of the operation required to do this it would be relatively easy to search for a solution - but I don't.</p>
<p>I tried:</p>
<ul>
<li>pivot</li>
<li>melt</li>
</ul>
<p>neither of those appears to be the correct operation.</p>
<p>I want to keep <code>timestamp</code> as an index - but as a second level index. The primary index should be the <code>days_in_month</code> value.</p>
<p>What should I do?</p>
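The operation being searched for here is <code>set_index</code> with <code>append=True</code>, which moves a column into the index alongside the existing one; <code>swaplevel</code> then makes it the outer level. A sketch on data shaped like the example:

```python
import pandas as pd

df = pd.DataFrame(
    {"data": [12], "day_of_month": [3], "days_in_month": [31]},
    index=pd.to_datetime(["2022-01-03 09:00:00"]),
).rename_axis("timestamp")

# Append days_in_month to the index, then make it the first (outer) level.
df = df.set_index("days_in_month", append=True).swaplevel()
print(list(df.index.names))
```

With that MultiIndex in place, <code>df.loc[31]</code> selects all rows for 31-day months without splitting the frame.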
|
<python><pandas><dataframe>
|
2024-01-06 11:42:21
| 2
| 18,579
|
user2138149
|
77,769,088
| 1,234,434
|
Why isn't the python function updating arguments on recursive call
|
<p>I am learning Python.</p>
<p>I have a task to find the numbers in a list that sum up to the target number.</p>
<p>My attempt is this:</p>
<pre><code>def two_sum(numbers, target):
# subtraction
numbers.sort(reverse=True)
if (len(numbers)>=2):
for k,v in enumerate(numbers):
count=difference(v,target)
print(v,list(count))
def difference(number,targetdiff):
valid=1
count=0
while valid!=0:
print(f"targetdiff={targetdiff}--number={number}")
if (targetdiff >=number):
diff=targetdiff-number
yield difference(number,diff)
count+=1
else:
valid=0
yield count
</code></pre>
<p>The loop is infinitely stuck, and somehow the recursive function isn't taking my new arguments.</p>
<p>Input:</p>
<pre><code>two_sums([1, 2, 3], 4)
</code></pre>
<p>Output:</p>
<pre><code>targetdiff=4--number=3
targetdiff=4--number=3
targetdiff=4--number=3
targetdiff=4--number=3
...
</code></pre>
<p>I've ended up confusing myself and don't understand why this behaviour is occurring, any advice?</p>
<p>*** Update from feedback.</p>
<p>The task is to find out which factors in the input list sum up to the target, assuming each factor could fit multiple times in the target number.</p>
<p>My way of trying to solve this was: take a number from the input list, subtract it from the target number, count how many times that number can be subtracted, and return the count to the list.</p>
<p>The next part (which I haven't attempted yet) was to take the remainder (once I return it from the difference function (not yet implemented) and cycle through the remaining numbers in the input list.</p>
<p>Expected output (from highest factor to lowest) could be:
[3,1] if the target is 4</p>
<p>There could be other factors that make up 4, but I'm just seeking the approach of finding the largest factors (why I introduce the sort function). Other numbers can be valid e.g.:
[[1,1,1,1],[2,1,1],[2,2],[3,1]]
In this the largest factor would be [3,1]</p>
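As an aside on the infinite loop: <code>yield difference(number, diff)</code> creates a brand-new generator object but never iterates it, so the original generator resumes with its unchanged <code>targetdiff</code> forever. The "largest factors first" idea itself doesn't need generators or recursion at all; <code>divmod</code> gives the count and remainder in one step. A sketch (the function name is illustrative):

```python
def greedy_factors(numbers, target):
    """Greedy sketch: repeatedly take the largest number that still fits."""
    remainder = target
    result = []
    for n in sorted(numbers, reverse=True):
        count, remainder = divmod(remainder, n)
        result.extend([n] * count)  # n fits `count` times
        if remainder == 0:
            break
    return result

print(greedy_factors([1, 2, 3], 4))  # [3, 1]
```

Note that a greedy pass does not enumerate all decompositions (e.g. [2, 2]); it only produces the largest-factor-first one the question asks for.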
|
<python><for-loop><recursion>
|
2024-01-06 10:50:51
| 1
| 1,033
|
Dan
|
77,769,043
| 1,841,839
|
Gemini returns error: 400 Multiturn chat is not enabled for models/gemini-pro-vision
|
<p>I am trying to send a request to gemini-pro-vision model and it keeps returning this error</p>
<blockquote>
<p>400 Multiturn chat is not enabled for models/gemini-pro-vision</p>
</blockquote>
<p>My code.</p>
<pre><code># Create a client
client = generativelanguage_v1beta.GenerativeServiceAsyncClient()
content1 = build_content("user", image, text)
contents2 = build_content("model", image, "This is a dog")
contents3 = build_content("user", image, "Are you Sure?")
request = generativelanguage_v1beta.GenerateContentRequest(
model="models/gemini-pro-vision",
contents=[content1, contents2, contents3],
generation_config=generation_config,
safety_settings=safety_settings
)
# Make the request
response = await client.generate_content(request=request)
# Handle the response
return response.candidates[0].content.parts[0].text
</code></pre>
|
<python><google-gemini>
|
2024-01-06 10:37:33
| 2
| 118,263
|
Linda Lawton - DaImTo
|
77,769,033
| 21,896,093
|
RandomizedSearchCV independently on models in an ensemble
|
<p>Suppose I construct an ensemble of two estimators, where each estimator runs its own parameter search:</p>
<p>Imports and regression dataset:</p>
<pre><code>from sklearn.ensemble import VotingRegressor, StackingRegressor, RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import RandomizedSearchCV
X, y = make_regression()
</code></pre>
<p>Define two self-tuning estimators, and ensemble them:</p>
<pre><code>rf_param_dist = dict(n_estimators=[1, 2, 3, 4, 5])
rf_searcher = RandomizedSearchCV(RandomForestRegressor(), rf_param_dist, n_iter=5, cv=3)
dt_param_dist = dict(max_depth=[4, 5, 6, 7, 8])
dt_searcher = RandomizedSearchCV(DecisionTreeRegressor(), dt_param_dist, n_iter=5, cv=3)
ensemble = StackingRegressor(
[ ('rf', rf_searcher), ('dt', dt_searcher) ]
).fit(X, y)
</code></pre>
<p>My questions are about how <code>sklearn</code> handles the fitting of <code>ensemble</code>.</p>
<p>Q1) We have two unfitted estimators in parallel, and both need to be fitted before <code>ensemble.predict(...)</code> would work. But we can't fit any of the estimators without first getting a prediction from the ensemble. How does <code>sklearn</code> handle this circular dependency?</p>
<p>Q2) Since we have two estimators running independent tuning, does each estimator make the false assumption that the parameters of the other estimator are fixed? So we end up with a poorly-defined optimisation problem.</p>
<hr />
<p>For reference, I think the correct way to jointly optimise the models of an ensemble would be to define a single CV that searches over all parameters jointly, shown below. But my questions are about how <code>sklearn</code> handles the special case described earlier.</p>
<pre><code>#Joint optimisation
ensemble = VotingRegressor(
[ ('rf', RandomForestRegressor()), ('dt', DecisionTreeRegressor()) ]
)
jointsearch_param_dist = dict(
rf__n_estimators=[1, 2, 3, 4, 5],
dt__max_depth=[4, 5, 6, 7, 8]
)
ensemble_jointsearch = RandomizedSearchCV(ensemble, jointsearch_param_dist)
</code></pre>
|
<python><machine-learning><scikit-learn>
|
2024-01-06 10:35:42
| 1
| 5,252
|
MuhammedYunus
|
77,768,813
| 1,172,907
|
How can I unit test the log file?
|
<p>I must run the script below by hand <code>$ python -m mylog</code> in order to test logging.</p>
<p>Why does <code>$ pytest -s</code> fail to create the logfile?</p>
<pre class="lang-py prettyprint-override"><code>#mylog.py
import logging
def foobar(logfile):
logging.basicConfig(filename=logfile, level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
logging.error('And non-ASCII stuff, too, like Øresund and Malmö')
</code></pre>
<pre class="lang-py prettyprint-override"><code>#test_mylog.py
from . import mylog
from pathlib import Path
def test_mylog():
logfile='./mylog.log'
mylog.foobar(logfile)
assert Path(logfile).exists()
</code></pre>
<pre><code>>>> AssertionError: assert False
</code></pre>
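One likely explanation (an assumption, since the pytest configuration isn't shown): <code>logging.basicConfig</code> is a no-op once the root logger already has handlers, and pytest installs its own logging handlers before the test body runs. Passing <code>force=True</code> (Python 3.8+) makes <code>basicConfig</code> replace the existing handlers. A sketch using a temporary directory instead of the hard-coded path:

```python
import logging
import tempfile
from pathlib import Path

def foobar(logfile):
    # force=True removes any pre-existing root handlers (such as the
    # ones pytest installs), so the FileHandler is actually attached.
    logging.basicConfig(filename=logfile, level=logging.DEBUG, force=True)
    logging.debug("This message should go to the log file")
    logging.shutdown()  # flush and close so the file is fully written

logfile = Path(tempfile.mkdtemp()) / "mylog.log"
foobar(logfile)
print(logfile.exists())  # True
```

For asserting on log output inside tests, pytest's built-in <code>caplog</code> fixture is usually a better fit than writing a real file.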
|
<python><logging><pytest>
|
2024-01-06 09:06:01
| 1
| 605
|
jjk
|
77,768,803
| 485,330
|
AWS Lambda using Python does not recognize the Redis library
|
<p>Using Python 3.12 and AWS Lambda, I'm trying to run Redis, but it gives an error for an unrecognized library.</p>
<p>It's hard to believe that Redis is not a native Lambda library. Do I need to upload the Redis library manually, or is it under another name?</p>
<pre><code>import json
import redis
def lambda_handler(event, context):
return {
'statusCode': 200,
'body': json.dumps('Hello from Lambda!')
}
</code></pre>
<p><strong>Error message:</strong></p>
<pre><code>Response
{
"errorMessage": "Unable to import module 'lambda_function': No module named 'redis'",
"errorType": "Runtime.ImportModuleError",
"requestId": "xxxx-xxx-xxxx-xxxx-xxx",
"stackTrace": []
}
</code></pre>
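For context: the Lambda Python runtime ships only the standard library plus <code>boto3</code>, so <code>redis</code> must be bundled with the function (or provided as a layer). A command sketch of the zip-bundle route (function name and filenames are illustrative; requires network access for <code>pip</code> and AWS credentials for the upload):

```shell
# Vendor the redis client next to the handler and zip everything together.
mkdir -p package
pip install redis --target ./package
cp lambda_function.py package/
(cd package && zip -r ../deployment.zip .)
# Upload via the console, or with the AWS CLI:
#   aws lambda update-function-code --function-name my-fn \
#       --zip-file fileb://deployment.zip
```

A Lambda layer containing the same <code>pip install --target</code> output is the cleaner option when several functions share the dependency.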
|
<python><amazon-web-services><aws-lambda><redis>
|
2024-01-06 08:59:05
| 1
| 704
|
Andre
|
77,768,631
| 5,790,653
|
python how to find duplicated values of yaml file for specific key
|
<p>I have a yaml file like this:</p>
<pre class="lang-yaml prettyprint-override"><code>-
ip: 1.1.1.1
status: Active
type: 'typeA'
-
ip: 1.1.1.1
status: Disabled
type: 'typeA'
-
ip: 2.2.2.2
status: Active
type: 'typeC'
-
ip: 3.3.3.3
status: Active
type: 'typeB'
-
ip: 3.3.3.3
status: Active
type: 'typeC'
-
ip: 2.2.2.2
status: Active
type: 'typeC'
-
</code></pre>
<p>I want to find any duplicate IPs whose <code>type</code> is the same.</p>
<p>For example, IP <code>1.1.1.1</code> has two entries and both types are <code>typeA</code>, so it should be considered. But IP <code>3.3.3.3</code>'s type is not the same so it should not be.</p>
<p>Expected output:</p>
<pre><code>IP 1.1.1.1, typeA duplicate
IP 2.2.2.2, typeC duplicate
</code></pre>
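Once the file is loaded with <code>yaml.safe_load</code>, the records are just a list of dicts, so counting <code>(ip, type)</code> pairs with <code>collections.Counter</code> does the job. A sketch (the records are inlined here in place of the parsed file; note the trailing bare <code>-</code> in the YAML would load as <code>None</code>, hence the <code>if r</code> guard):

```python
from collections import Counter

# As returned by yaml.safe_load(open("hosts.yaml")) for the sample file
records = [
    {"ip": "1.1.1.1", "status": "Active", "type": "typeA"},
    {"ip": "1.1.1.1", "status": "Disabled", "type": "typeA"},
    {"ip": "2.2.2.2", "status": "Active", "type": "typeC"},
    {"ip": "3.3.3.3", "status": "Active", "type": "typeB"},
    {"ip": "3.3.3.3", "status": "Active", "type": "typeC"},
    {"ip": "2.2.2.2", "status": "Active", "type": "typeC"},
    None,  # trailing "-" in the YAML
]

counts = Counter((r["ip"], r["type"]) for r in records if r)
duplicates = [key for key, n in counts.items() if n > 1]
for ip, kind in duplicates:
    print(f"IP {ip}, {kind} duplicate")
```

This keys strictly on the <code>(ip, type)</code> pair, so <code>3.3.3.3</code> (two different types) is correctly left out.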
|
<python><yaml>
|
2024-01-06 07:41:12
| 2
| 4,175
|
Saeed
|
77,768,473
| 2,989,330
|
Python: Protocol requiring class property
|
<p>My method requires an object with the attribute <code>pattern</code> and the function <code>handle</code>. To this end, I wrote a <code>Protocol</code> instance with these two members. Consider the following code:</p>
<pre><code>class Handler(typing.Protocol):
"""Handler protocol"""
pattern: re.Pattern
@staticmethod
def handle(match: re.Match) -> str:
pass
def f(obj: Handler, text: str):
output = []
for match in re.finditer(obj.pattern, text):
output.append(obj.handle(match))
return output
class MyHandler(Handler):
pattern = re.compile("hello world")
@staticmethod
def handle(match: re.Match) -> str:
return "hi there"
f(MyHandler, 'hello world, what a nice morning!')
</code></pre>
<p>However, when I call this function with <code>MyHandler</code>, my IDE (PyCharm) issues the following warning:</p>
<blockquote>
<p>Expected type 'ReplacerProtocol', got 'Type[CitationReplacer]' instead</p>
</blockquote>
<p>This warning goes away when I remove the <code>pattern</code> attribute from the protocol class. When I change the protocol to require a <code>@property</code> or a function with the name <code>pattern</code>, the same warning is issued.</p>
<p>What is the correct way to define my interface in Python?</p>
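One observation (an assumption about the checker's reasoning, but consistent with the warning): <code>f(MyHandler, ...)</code> passes the class object, typed <code>Type[MyHandler]</code>, where the annotation asks for an <em>instance</em> of the protocol. Passing an instance, and not inheriting from the protocol at all (structural typing makes the inheritance unnecessary), satisfies both the runtime and the checker. A sketch:

```python
import re
import typing

class Handler(typing.Protocol):
    """Structural interface: anything with a pattern and a handle()."""
    pattern: re.Pattern

    @staticmethod
    def handle(match: re.Match) -> str: ...

def f(obj: Handler, text: str) -> list[str]:
    return [obj.handle(m) for m in re.finditer(obj.pattern, text)]

class MyHandler:  # no explicit inheritance needed for a Protocol
    pattern = re.compile("hello world")

    @staticmethod
    def handle(match: re.Match) -> str:
        return "hi there"

print(f(MyHandler(), "hello world, what a nice morning!"))
```

If the intent really is to pass the class itself rather than instances, declaring the data member as <code>typing.ClassVar[re.Pattern]</code> in the protocol is the usual route, though checker support for matching class objects against protocols varies.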
|
<python><python-3.x><python-typing>
|
2024-01-06 06:36:07
| 0
| 3,203
|
Green 绿色
|
77,768,127
| 2,530,674
|
Accessing nested element using beautifulsoup
|
<p>I want to find all the <code>li</code> elements nested within <code><ol class="messageList" id="messageList"></code>. I have tried the following solutions and they all return 0 messages:</p>
<pre class="lang-py prettyprint-override"><code>messages = soup.find_all("ol")
messages = soup.find_all('div', class_='messageContent')
messages = soup.find_all("li")
messages = soup.select('ol > li')
messages = soup.select('.messageList > li')
</code></pre>
<p>The full html can be seen here in this <a href="https://gist.github.com/sachinruk/9b0aaf5c134aa398f7f201c2b298210a" rel="nofollow noreferrer">gist</a>.</p>
<ol>
<li>Just wondering what is the correct way of grabbing these list items.</li>
<li>In beautiful soup do you have to know the nested path to get the element you are after. Or would doing something like <code>soup.find_all("li")</code> supposed to return all elements, whether it's nested or not?</li>
</ol>
<p>Happy for non-bs4 answers too.</p>
<h2>Update</h2>
<p>This is how I got the code.</p>
<pre class="lang-py prettyprint-override"><code>from bs4 import BeautifulSoup
# Load the HTML content
with open('/tmp/property.html', 'r', encoding='utf-8') as file:
html_content = file.read()
# Create a BeautifulSoup object and specify the parser
soup = BeautifulSoup(html_content, 'html.parser')
</code></pre>
<p>The file is in the gist link above.</p>
<h2>Update 2</h2>
<p>I got it working using <code>requests</code> library. Looks like manually downloading the file might have caused some of the html to break?</p>
<pre class="lang-py prettyprint-override"><code>import requests
from bs4 import BeautifulSoup
url = "https://www.propertychat.com.au/community/threads/melbourne-property-market-2024.75213/"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")
messages = soup.select('.messageList > li')
</code></pre>
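Regarding question 2: yes, <code>soup.find_all("li")</code> returns matches at any nesting depth, so zero results usually means the parser never built those nodes, not that the selector missed them. A manually saved page can contain markup that <code>html.parser</code> silently truncates on; the more lenient <code>html5lib</code> (<code>pip install html5lib</code>) often recovers such trees. A self-contained sketch of the selector on well-formed input (the snippet is illustrative, not the real page):

```python
from bs4 import BeautifulSoup

html = """
<ol class="messageList" id="messageList">
  <li><div class="messageContent">first post</div></li>
  <li><div class="messageContent">second post</div></li>
</ol>
"""

# On the real, hand-saved file, try BeautifulSoup(html, "html5lib")
# if "html.parser" yields an empty result set.
soup = BeautifulSoup(html, "html.parser")
messages = soup.select("ol#messageList > li")
print(len(messages))  # 2
```

This is consistent with Update 2: the HTML fetched by <code>requests</code> parsed fine, which points at the saved copy being damaged rather than at the selectors.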
|
<python><html><beautifulsoup>
|
2024-01-06 03:12:49
| 2
| 10,037
|
sachinruk
|
77,768,011
| 10,200,497
|
Change first row of a column if it meets a condition
|
<p>This is a very basic problem. I have searched SO, but I didn't find the answer.</p>
<p>My DataFrame:</p>
<pre><code>df = pd.DataFrame(
{
'a': [10, 50, 3],
'b': [5, 4, 5],
}
)
</code></pre>
<p>Now I want to change <code>df.b.iloc[0]</code> to 1 if <code>df.a.iloc[0]</code> is greater than 5.</p>
<p>Expected output:</p>
<pre><code> a b
0 10 1
1 50 4
2 3 5
</code></pre>
<p>Attempts:</p>
<pre><code># attempt 1
df[df.a > 5].iloc[0]
# attempt 2
df.loc[df.a.iloc[0] > 5, 'b'] = 1
</code></pre>
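Since only a single scalar cell is involved, a plain <code>if</code> plus a single <code>.loc</code> assignment is enough; the boolean-mask forms in the attempts apply the condition to whole columns rather than just the first row. A sketch:

```python
import pandas as pd

df = pd.DataFrame({"a": [10, 50, 3], "b": [5, 4, 5]})

# Test the first row's value in Python, then assign by index label and
# column name; this avoids chained-assignment pitfalls.
if df["a"].iloc[0] > 5:
    df.loc[df.index[0], "b"] = 1

print(df)
```

Attempt 2 fails because <code>df.a.iloc[0] > 5</code> is a single boolean, so <code>df.loc[True, 'b']</code> is a label lookup, not a row filter.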
|
<python><pandas><dataframe>
|
2024-01-06 01:55:43
| 1
| 2,679
|
AmirX
|
77,767,855
| 1,084,875
|
Use Sphinx doctest with docstrings that contain Matplotlib examples
|
<p>The function shown below displays a line plot using Matplotlib. The docstring of the function contains an example of using the function with a list of numbers. I'm using the Sphinx <code>make html</code> command to create documentation for this function. I also use Sphinx to test the docstring using <code>make doctest</code>. However, doctest pauses when it tests this function because it displays a window containing the plot figure. I have to manually close the plot figure window for doctest to continue running. Is there a better way to use doctest with docstrings that contain Matplotlib examples?</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
def plot_numbers(x):
"""
Show a line plot of numbers.
Parameters
----------
x : list
Numbers to plot.
Example
-------
>>> import calc
>>> x = [1, 2, 5, 6, 8.1, 7, 10.5, 12]
>>> calc.plot_numbers(x)
"""
_, ax = plt.subplots()
ax.plot(x, marker="o", mfc="red", mec="red")
ax.set_xlabel("Label for x-axis")
ax.set_ylabel("Label for y-axis")
ax.set_title("Title of the plot")
plt.show()
</code></pre>
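One approach is to force a non-interactive Matplotlib backend for the doctest run only, via the <code>doctest_global_setup</code> option of <code>sphinx.ext.doctest</code>, which is executed before each doctest group. A <code>conf.py</code> sketch (the backend switch must happen before <code>pyplot</code> is imported):

```python
# conf.py fragment (sketch)
extensions = ["sphinx.ext.doctest"]

# Run doctests headless: with the Agg backend, plt.show() no longer
# opens a blocking window, so `make doctest` runs unattended.
doctest_global_setup = """
import matplotlib
matplotlib.use("Agg")
"""
```

This keeps <code>make html</code> and interactive use unaffected, since the backend override applies only inside the doctest builder's setup code.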
|
<python><matplotlib>
|
2024-01-06 00:37:36
| 1
| 9,246
|
wigging
|
77,767,715
| 401,226
|
Celery chain, how to wait for dynamically created subtasks (seems to rule out chord)
|
<p>I have a series of tasks which should be executed sequentially for data integrity purposes. I don't need any kind of celery orchestration for this as such; I could just have one task do the top level steps sequentially.</p>
<p>This is in django, python 3.11 and latest celery. Redis is the broker and result store.</p>
<p>Some of these tasks use slow APIs to fetch pages. Each page triggers processing and many database writes. These writes themselves are slow due to their number, so to be faster in the wall clock sense, I want to offload each page of API processing to a subtask as soon as a page arrives from the API. For that, I can create a subtask. I create them "on the fly".</p>
<p>Much of celery orchestration shows that you prepare task signatures in advance, and then dispatch them. This pattern doesn't save me any wall clock time.</p>
<p>I want to make sure all the background database write subtasks finish before proceeding with the next top-level task.</p>
<p>I hoped that if I used chain (with immutable signatures) for the top-level task, I could use the "add_to_parent" argument of .apply_async() to somehow associate the dynamic subtasks with their parent task. But it does not work. Documentation says it defaults to True, but I am setting it anyway.</p>
<p>Possibly I misunderstand what "add_to_parent" means, or possibly I am not implementing it correctly.</p>
<p>If the top level tasks is <code>@task api_1()</code>, the code structure is
<code>@task api_1() -> intermediate_function() -> @task write_database_page()</code> via <code>apply_async()</code></p>
<p>I wonder how the sub task <code>write_database_page()</code> knows what its parent is, so I suspect and hope that this is my problem, and that I miss some way of making the parent task context known to the subtask.</p>
<p>There are many questions which seem similar, and the solution is to use chord().
However, I don't think there is any way of adding tasks to a chord() after the fact. It seems to be that chord() requires me to gather all the database write tasks before starting it.</p>
<p>An alternative is to somehow collect all the task IDs from the subtasks, and pass them to the next top level function, which would then somehow wait with a non blocking approach for them to complete. It surprises me that the pattern I want is not handled by Celery, and it seems more likely that I am doing it wrong.</p>
|
<python><celery>
|
2024-01-05 23:42:16
| 1
| 7,351
|
Tim Richardson
|