QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (stringdate, 2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars)
|---|---|---|---|---|---|---|---|---|
76,059,509
| 1,231,940
|
How to get an element index in a list column, if element is specified in a different column
|
<p>I have a dataframe where one column <code>a</code> is a list, and another column <code>b</code> contains a value that's in <code>a</code>. I need to create a column <code>c</code> which contains the index at which the value from <code>b</code> occurs in list <code>a</code>.</p>
<pre><code>df = pl.DataFrame({'a': [[1, 2, 3], [4, 5, 2], [6, 2, 7]], 'b': [3, 4, 2]})
print(df)
shape: (3, 2)
┌───────────┬─────┐
│ a         ┆ b   │
│ ---       ┆ --- │
│ list[i64] ┆ i64 │
╞═══════════╪═════╡
│ [1, 2, 3] ┆ 3   │
│ [4, 5, 2] ┆ 4   │
│ [6, 2, 7] ┆ 2   │
└───────────┴─────┘
</code></pre>
<p>so the resulting dataframe looks like the following:</p>
<pre><code>shape: (3, 3)
┌───────────┬─────┬────────────┐
│ a         ┆ b   ┆ a.index(b) │
│ ---       ┆ --- ┆ ---        │
│ list[i64] ┆ i64 ┆ i64        │
╞═══════════╪═════╪════════════╡
│ [1, 2, 3] ┆ 3   ┆ 2          │
│ [4, 5, 2] ┆ 4   ┆ 0          │
│ [6, 2, 7] ┆ 2   ┆ 1          │
└───────────┴─────┴────────────┘
</code></pre>
<p>All elements of <code>a</code> are unique within the row, and <code>b</code> is guaranteed to be in <code>a</code>.</p>
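The row-wise operation being asked for, sketched in plain Python to pin down the expected output:

```python
# For each row, find the position of the value from b inside the list from a
a = [[1, 2, 3], [4, 5, 2], [6, 2, 7]]
b = [3, 4, 2]
c = [row.index(val) for row, val in zip(a, b)]
assert c == [2, 0, 1]
```

In polars itself something like `pl.struct('a', 'b').map_elements(lambda r: r['a'].index(r['b']))` may express the same logic, but that is an untested assumption about the polars version in use.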
|
<python><dataframe><python-polars>
|
2023-04-19 23:26:27
| 3
| 437
|
Kaster
|
76,059,444
| 6,724,526
|
Python: Importing pandas into main and modules?
|
<p>I have a python script with a structure that looks like the following:</p>
<pre><code>/
__init__.py
main.py
/modules
fbm.py
</code></pre>
<p>I am attempting to split some functions out from main.py into fbm.py and then import fbm.py as a module using:</p>
<pre><code>sys.path.extend([f'{item[0]}' for item in os.walk("./app/modules/") if os.path.isdir(item[0])])
import fbm
</code></pre>
<p>This is working.</p>
<p>What I am having trouble with is where I need to call pandas (or another import) from a module.
I'm currently importing pandas in main.py with <code>import pandas as pd</code>.
When I call the function from <code>fbm</code> an error is thrown as soon as it hits a reference to <code>pd</code> stating that <code>NameError: name 'pd' is not defined</code>. <code>pd</code> is defined in main.py, but not fbm.py. It thus works in main, but not in fbm.</p>
<p>So, my question is: <strong>Is it good and appropriate to <code>import pandas as pd</code> in both the main.py and each module which requires pandas?</strong></p>
<ul>
<li>Will this have an impact on memory usage, ie loading multiple copies of pandas</li>
<li>is there a better way to handle this?</li>
</ul>
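On the memory question: a second import of the same module never loads a second copy, because Python caches modules in <code>sys.modules</code>; every import after the first just re-binds a name to the cached module object. A sketch, with a stdlib module standing in for pandas (assumption: pandas behaves the same way, as all modules do):

```python
import sys

import json                  # first import: module is loaded and cached
import json as second_name   # second import: only a name re-binding, no reload

# Both names point at the one cached module object
assert second_name is json
assert sys.modules["json"] is json
```

So importing pandas in both main.py and each module that needs it is the normal, appropriate pattern and costs essentially nothing extra in memory.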
|
<python><python-3.x><pandas>
|
2023-04-19 23:07:57
| 1
| 1,258
|
anakaine
|
76,059,429
| 2,893,712
|
APScheduler Set misfire_grace_time globally
|
<p>I am trying to apply <code>misfire_grace_time</code> globally for all of my jobs. According to the <a href="https://apscheduler.readthedocs.io/en/3.x/userguide.html#missed-job-executions-and-coalescing" rel="nofollow noreferrer">Doc Page</a> it says:</p>
<blockquote>
<p>The scheduler will then check each missed execution time against the jobβs <code>misfire_grace_time</code> option (which can be set on per-job basis or globally in the scheduler) to see if the execution should still be triggered.</p>
</blockquote>
<p>but how do I set it globally?</p>
<p>I tried:</p>
<pre><code>sched = BackgroundScheduler(misfire_grace_time=30)
# a few sched.add_job()
for job in sched.get_jobs():
print(job.misfire_grace_time) # Throws exception
</code></pre>
<p>but doing <code>sched.add_job(...., misfire_grace_time=30)</code> or <code>job.misfire_grace_time = 30</code> in the for loop works.</p>
<p>What is the correct way of setting <code>misfire_grace_time</code> globally without adding it to each individual job?</p>
|
<python><apscheduler>
|
2023-04-19 23:05:39
| 1
| 8,806
|
Bijan
|
76,059,423
| 1,546,990
|
Django REST framework: passing additional contextual data from view to serializer
|
<p>I have a Django REST framework view (<code>generics.GenericAPIView</code>). In it, I want to pass contextual data (the original request received by the view) to a serializer.</p>
<p>As per <a href="https://stackoverflow.com/questions/52188500/serializer-including-extra-context-in-django-rest-framework-3-not-working">Serializer - Including extra context in Django Rest Framework 3 not working</a>, I've overridden the <code>get_serializer_context()</code> method inherited from Rest Framework's <code>GenericAPIView</code>.</p>
<p>My view looks like this:</p>
<pre><code>class MyView(GenericAPIView):
def get_serializer_context(self):
context = super().get_serializer_context()
context.update({"original_request": self.request})
return context
def put(self, request, *args, **kwargs):
my_object = ... get object from database ...
serializer = MySerializer(my_instance=my_object, validated_data=request.data)
if serializer.is_valid():
logger.warning("MyView/put() - request: " + str(request))
.... other stuff ...
</code></pre>
<p>My serializer:</p>
<pre><code>class MySerializer():
def update(self, my_instance, validated_data):
logger.warning("MySerializer/update() - context: " + str(self.context))
... other stuff ...
</code></pre>
<p>My view's logs shows that the request details are present:</p>
<pre><code>MyView/put() - request: <rest_framework.request.Request: PUT '<URL>'>
</code></pre>
<p>... but they are not passed to the serializer:</p>
<pre><code>MySerializer/update() - context: {}
</code></pre>
<p>I'm suspecting that it's because the <code>self.request</code> value in <code>get_serializer_context()</code> is not populated or is not the same as the one supplied in the view's <code>put()</code> method.</p>
<p>I've also tried including the context data directly during the instantiation of the serializer, as per <a href="https://www.django-rest-framework.org/api-guide/serializers/#including-extra-context" rel="nofollow noreferrer">https://www.django-rest-framework.org/api-guide/serializers/#including-extra-context</a> but that didn't work either - again, the contextual data was not passed from the view to the serializer.</p>
<p>How can I supply the additional contextual data to the serializer for individual requests via <code>generics.GenericAPIView</code>, please?</p>
<p>Many thanks for any information.</p>
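For what it's worth, <code>get_serializer_context()</code> is only consulted by <code>GenericAPIView.get_serializer()</code>; constructing <code>MySerializer(...)</code> directly bypasses it. A framework-free mock (my own sketch, not real DRF code) of that mechanism:

```python
# Minimal stand-ins for the DRF classes, to show why context stays empty
class MySerializer:
    def __init__(self, instance=None, data=None, context=None):
        self.context = context or {}

class MyView:
    def get_serializer_context(self):
        return {"original_request": "<request object>"}

    def get_serializer(self, *args, **kwargs):
        # this is the step that injects the context
        kwargs.setdefault("context", self.get_serializer_context())
        return MySerializer(*args, **kwargs)

view = MyView()
direct = MySerializer(instance="obj", data={})        # what the question's put() does
via_factory = view.get_serializer(instance="obj", data={})

assert direct.context == {}                           # context never arrives
assert via_factory.context == {"original_request": "<request object>"}
```

In actual DRF the equivalent change would be instantiating via <code>self.get_serializer(my_object, data=request.data)</code> instead of calling <code>MySerializer(...)</code> directly.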
|
<python><django><django-rest-framework>
|
2023-04-19 23:03:55
| 1
| 2,069
|
GarlicBread
|
76,059,401
| 15,378,398
|
How to achieve desired well-log visualization in Power BI using Python visuals
|
<p>I'm using python visuals in Power BI for plotting 2 columns (DEPTH and GR) of a Well-Log <a href="https://github.com/santiagortiiz/Well-Logs-and-Petrophysics/blob/main/Data/15-9-19_SR_COMP.LAS" rel="nofollow noreferrer">dataset</a> in a "vertical line chart".</p>
<p>The target chart looks like this:
<a href="https://i.sstatic.net/agxaj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/agxaj.png" alt="enter image description here" /></a></p>
<p>The Python script in jupyter notebook is pretty simple:</p>
<pre><code>import lasio
import matplotlib.pyplot as plt
file_path = "Well-Logs-and-Petrophysics/Data/15-9-19_SR_COMP.LAS"
las = lasio.read(file_path)
df = las.df()
df.reset_index(drop=False, inplace=True)
df.rename(columns={'index': 'Index', 'DEPT':'DEPTH'}, inplace=True)
df.dropna(how='any', axis=0, inplace=True)
x = df['GR']
y = df['DEPTH']
plt.plot(x, y)
</code></pre>
<p>For loading the data into Power BI, I'm using the same first lines to read (and clean) the LAS file as a dataframe.</p>
<p>Finally, I'm using the following code for the python visual:</p>
<pre><code>import matplotlib.pyplot as plt
dataset.plot(kind='line', x='GR', y='DEPTH')
plt.show()
</code></pre>
<p>But the image plotted is the following:</p>
<p><a href="https://i.sstatic.net/xbeD7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xbeD7.png" alt="enter image description here" /></a></p>
<p>Summary:</p>
<p>The Python script is the same in the Jupyter notebook and in Power BI, but Power BI is not plotting it the same way.</p>
<p>Question:</p>
<p>What is happening, and how to achieve the desired results?</p>
|
<python><matplotlib><plot><powerbi><powerbi-python-visual>
|
2023-04-19 23:01:50
| 0
| 336
|
Santiago Ortiz Ceballos
|
76,059,371
| 6,440,589
|
"python: can't open file" error when running docker-compose up
|
<p>I wrote a simple Python script called <strong>myscript.py</strong>, calling a custom module in <strong>mycustommodules</strong>.
Below is my folder structure:</p>
<pre><code>-docker-compose.yml
-Dockerfile
-mycustommodules
|
-config.py
-__init__.py
-mysubfolder
|
-acquisition
|
-myscript.py
</code></pre>
<p>When I call the script from the terminal it runs correctly.
However, running <code>docker-compose up</code> or <code>docker-compose up --build</code> installs the specified Python packages but then returns the following error:</p>
<pre><code>rta-app2_1 | python: can't open file 'mysubfolder/acquisition/myscript.py': [Errno 2] No such file or directory
</code></pre>
<p>Here is my <strong>Dockerfile</strong>:</p>
<pre><code>FROM python:3.8.3
ADD ./mysubfolder/acquisition/myscript.py .
ADD mycustommodules .
RUN pip install h5py datetime argparse
CMD ["python", "mysubfolder/acquisition/myscript.py"]
</code></pre>
<p>I tried replacing the last line with <code>CMD ["python", "./mysubfolder/acquisition/myscript.py"] </code>, to no avail.</p>
<p><strong>What am I doing wrong?</strong></p>
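One likely mismatch (an assumption from the paths shown): <code>ADD ./mysubfolder/acquisition/myscript.py .</code> copies the file to <code>/myscript.py</code> at the image root, while the <code>CMD</code> looks for <code>mysubfolder/acquisition/myscript.py</code>, which was never created inside the image. Likewise, <code>ADD mycustommodules .</code> spills that directory's <em>contents</em> into the root instead of creating a <code>mycustommodules/</code> package directory, which would break <code>from mycustommodules import config</code>. A sketch that preserves the layout the script expects:

```dockerfile
FROM python:3.8.3
WORKDIR /app
# copy files preserving the directory structure the CMD path expects
COPY mysubfolder/acquisition/myscript.py mysubfolder/acquisition/myscript.py
COPY mycustommodules mycustommodules
# datetime and argparse are stdlib, so only h5py needs installing
RUN pip install h5py
CMD ["python", "mysubfolder/acquisition/myscript.py"]
```

Alternatively, keep the original <code>ADD</code> lines and change the last line to <code>CMD ["python", "myscript.py"]</code> so the command matches where the file actually landed.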
<hr />
<p>Here is <strong>myscript.py</strong>:</p>
<pre><code>import serial
import threading
import queue
import struct
import sys
import os
import h5py
from datetime import datetime, timedelta
import subprocess
import re
import argparse
from shutil import move
sys.path.append("/media/sheldon/My Passport/docker_rta_experiment")
from mycustommodules import config
def main_run():
print("OK MAIN_RUN WAS CALLED SUCCESSFULLY")
print(config.example_config_parameter)
if __name__ == "__main__":
print("AHOY!")
try:
main_run()
except:
print(sys.exc_info())
</code></pre>
<p>The <strong>config.py</strong> simply contains:</p>
<pre><code>example_config_parameter = 42
</code></pre>
<p>Here is my <strong>docker-compose.yml</strong>:</p>
<pre><code>version: "3"
services:
rta-app2:
build:
context: .
</code></pre>
<p>The script should output:</p>
<blockquote>
<pre><code>AHOY!
OK MAIN_RUN WAS CALLED SUCCESSFULLY
42
</code></pre>
</blockquote>
|
<python><docker><docker-compose><import>
|
2023-04-19 22:53:04
| 0
| 4,770
|
Sheldon
|
76,059,187
| 497,934
|
How do I indicate that the .value of an enum is an unstable implementation detail?
|
<p>The official Enum HOWTO has <a href="https://docs.python.org/3.11/howto/enum.html#planet" rel="nofollow noreferrer">this example</a>:</p>
<pre class="lang-py prettyprint-override"><code>class Planet(Enum):
MERCURY = (3.303e+23, 2.4397e6)
VENUS = (4.869e+24, 6.0518e6)
EARTH = (5.976e+24, 6.37814e6)
MARS = (6.421e+23, 3.3972e6)
JUPITER = (1.9e+27, 7.1492e7)
SATURN = (5.688e+26, 6.0268e7)
URANUS = (8.686e+25, 2.5559e7)
NEPTUNE = (1.024e+26, 2.4746e7)
def __init__(self, mass, radius):
self.mass = mass # in kilograms
self.radius = radius # in meters
@property
def surface_gravity(self):
# universal gravitational constant (m3 kg-1 s-2)
G = 6.67300E-11
return G * self.mass / (self.radius * self.radius)
</code></pre>
<pre><code>>>> Planet.EARTH.value
(5.976e+24, 6378140.0)
>>> Planet.EARTH.surface_gravity
9.802652743337129
</code></pre>
<p>Suppose I'm doing something like this, and I want to treat the <code>.value</code>sβthe tuples like <code>(3.303e+23, 2.4397e6)</code>βas unstable implementation details of the <code>Planet</code> API. I don't want my API consumers to ever rely on them. Instead, I want them to use the properties that I explicitly expose myself, like <code>.surface_gravity</code>.</p>
<p>Is there a conventional way to indicate this?</p>
<p>I'm currently just adding a note in the docstring like this:</p>
<pre class="lang-py prettyprint-override"><code>class Planet(Enum):
""".value is an implementation detail. Use .surface_gravity instead."""
</code></pre>
<p>But that seems too easy to miss.</p>
<p>If it were a normal class, I would just make it <code>._value</code> instead of <code>.value</code>. But here, <code>.value</code> is added automatically because I subclassed from <code>Enum</code>, and I don't see a way to override that.</p>
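One workaround, sketched under the assumption that hiding the tuple entirely is acceptable: the same Enum HOWTO documents a custom <code>__new__</code> pattern where you set <code>_value_</code> yourself, so <code>.value</code> can be an opaque ordinal while mass and radius live only as plain attributes. The ordinal counter here is my own choice, not a convention:

```python
from enum import Enum

class Planet(Enum):
    # Custom __new__ assigns an opaque ordinal as the public .value,
    # so the (mass, radius) tuple is never exposed through .value.
    def __new__(cls, mass, radius):
        obj = object.__new__(cls)
        obj._value_ = len(cls.__members__) + 1  # opaque, not the data tuple
        obj.mass = mass      # in kilograms
        obj.radius = radius  # in meters
        return obj

    MERCURY = (3.303e+23, 2.4397e6)
    EARTH = (5.976e+24, 6.37814e6)

    @property
    def surface_gravity(self):
        G = 6.67300E-11  # universal gravitational constant (m3 kg-1 s-2)
        return G * self.mass / (self.radius * self.radius)

assert Planet.EARTH.value == 2           # opaque ordinal, no tuple
assert Planet.EARTH.mass == 5.976e+24    # data still reachable as attributes
assert round(Planet.EARTH.surface_gravity, 2) == 9.8
```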
|
<python><enums>
|
2023-04-19 22:14:33
| 2
| 26,053
|
Maxpm
|
76,059,134
| 6,539,635
|
fix_final works for x_f=[0,0,0,0,0,0] but for absolutely no other final state - 'Solution Not Found'
|
<p>'x_f' is defined in a separate file but here's the code for the GEKKO implementation:</p>
<pre><code>m = GEKKO(remote = solverParams["remote"])
m.time = timeSeq
x = [m.Var(value = x_0[i], fixed_initial = True) for i in range(len(x_0))]
u = [m.Var(value = u_0[i], fixed_initial = False) for i in range(len(u_0))]
w = m.Param(value = w)
for i in range(len(x)):
m.fix_final(x[i], val = x_f[i])
if stateBounds[i]["lower"] != "-Inf":
x[i].lower = stateBounds[i]["lower"]
if stateBounds[i]["upper"] != "+Inf":
x[i].upper = stateBounds[i]["upper"]
for i in range(len(u)):
if forceBounds[i]["lower"] != "-Inf":
u[i].lower = forceBounds[i]["lower"]
if forceBounds[i]["upper"] != "+Inf":
u[i].upper = forceBounds[i]["upper"]
eqs = [x[i].dt() == np.matmul(A[i], x) + np.matmul(np.atleast_1d(B[i]), u)
for i in range(len(x))]
eqs = m.Equations(eqs)
startTime = time.time()
m.Minimize(np.add(np.matmul(np.subtract(x, x_f), np.atleast_1d(np.matmul(np.atleast_1d(Q), np.subtract(x, x_f)))),
np.matmul( u, np.atleast_1d(np.matmul(np.atleast_1d(R), u))))
*w)
m.options.IMODE = 6
m.options.SOLVER = 3
m.solve(disp = solverParams["disp"])
stopTime = time.time()
x_p = [x[i].value for i in range(len(x))]
u_p = [u[i].value for i in range(len(u))]
x_p = np.transpose(x_p)
u_p = np.transpose(u_p)
</code></pre>
<p>My test case is for:</p>
<pre><code>x_0 = np.array([50.00, -25.00, 80.00, 0.00, 0.00, 0.00])
</code></pre>
<p>If I set x_f to be the exact zero vector, it gives me a solution but if I set it to, say,</p>
<pre><code>x_f = np.array([ 0.04, 0.00, 0.00, 0.00, 0.00, 0.00])
</code></pre>
<p>it says that it hits a point of local infeasibility. I doubt that this is an actual scenario of infeasibility since this has happened for several cases of x_0 and x_f.</p>
<p>Here, the Q matrix is 0_{6x6} and R is 10*I_{3x3} and the system is linear time-invariant.</p>
|
<python><optimization><gekko>
|
2023-04-19 22:03:35
| 1
| 349
|
Aaron John Sabu
|
76,059,118
| 7,091,646
|
conditionally append value to list of lists in pandas
|
<p>I'm trying to conditionally append a list of lists in pandas:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data={'A': [1, 2, 3]})
df['B'] = [[[1],[1],[1]]] * df.shape[0]
df
A B
0 1 [[1], [1], [1]]
1 2 [[1], [1], [1]]
2 3 [[1], [1], [1]]
# attempting to append 1st list of lists in B column with 2
df['B'] = df['B'].mask(df.A == 2, df['B'].apply(lambda x: x[0].append(2)))
df
A B
0 1 [[1, 2, 2, 2], [1], [1]]
1 2 None
2 3 [[1, 2, 2, 2], [1], [1]]
#expected result I'm hoping for is:
df['B'] = [[[1],[1],[1]],[[1,2],[1],[1]],[[1],[1],[1]]]
df
A B
0 1 [[1], [1], [1]]
1 2 [[1, 2], [1], [1]]
2 3 [[1], [1], [1]]
</code></pre>
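Two plain-Python behaviors explain both the shared mutation and the <code>None</code> row, sketched without pandas:

```python
# (1) "*" repeats references, not copies: every row shares the SAME inner lists
rows = [[[1], [1], [1]]] * 3
rows[1][0].append(2)                   # mutates the one shared inner list
assert rows[0] == [[1, 2], [1], [1]]   # "other" rows changed too

# (2) list.append mutates in place and returns None,
#     which is why mask() filled the matched row with None
assert ([1].append(2)) is None

# Independent rows: build fresh lists per row instead of multiplying
rows = [[[1], [1], [1]] for _ in range(3)]
rows[1][0].append(2)
assert rows[0] == [[1], [1], [1]]
assert rows[1] == [[1, 2], [1], [1]]
```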
|
<python><pandas>
|
2023-04-19 22:00:51
| 4
| 1,399
|
Eric
|
76,059,076
| 2,079,306
|
Update image using flask and Ajax
|
<p>I'm running a webpage, and when I click a button I want an existing image to be replaced without reloading the page, once a server-side function has finished creating the new image. I am getting a 400 error, though, before I can even print the data in routes.py to see what it's passing, meaning something is wrong with my Ajax call to my Flask routes.py function.</p>
<p>The image:</p>
<pre><code> <div class="col-6">
<div class="card">
<img src="{{ user_image }}" alt="User Image" id="myImage">
</div>
</div>
</code></pre>
<p>At the end of my html file I have this script as an attempt to replace the existing image:</p>
<pre><code></body>
<script>
$(document).ready(function() {
$('#makeNewImage').click(function(){
$.ajax({
type: 'POST',
url: "{{ url_for('getData') }}",
headers: {"Content-Type": "application/json"},
success: function(resp){
$('myImage').attr('src', "static/img/" + resp);
}
});
});
});
</script>
</html>
</code></pre>
<p>In my flask routes.py page:</p>
<pre><code> @app.route('/getData', methods=['POST'])
def getData():
    # generate image, save image in /static/img/filename
return filename
</code></pre>
<p>Passing data and everything works when I use my original xhr function, but it isn't auto updating as it's pure JS. I receive the 200 message with this function.</p>
<pre><code>function getResults() {
let xhr = new XMLHttpRequest();
let url = "/getData";
let error = false;
if (!Array.isArray(myArray) || !myArray.length) {
alert("No items selected.")
error = true;
} else {
xhr.open("POST", url, true);
xhr.setRequestHeader("Content-Type", "application/json");
xhr.onreadystatechange = function () {
if (xhr.readyState === 4 && xhr.status === 200) {
console.log("I see this as an absolute win")
}
};
xhr.onerror = function () {
console.log("** An error occurred during the transaction");
};
param1 = document.getElementById('param1').value
param2 = document.getElementById('param2').value
param3 = document.getElementById('param3').value
let data = JSON.stringify({"myArray": myArray, "param1 ": param1 , "param2 ": param2, "param3": param3});
xhr.send(data);
}
}
</code></pre>
<p>Anyone know the reason for the 400 error on my new Ajax code attempt?</p>
|
<javascript><python><html><ajax><flask>
|
2023-04-19 21:52:22
| 0
| 1,123
|
john stamos
|
76,059,063
| 2,386,605
|
How to compile ML model for AWS Inferentia with flexible input size?
|
<p>I have an ML model from Huggingface, which essentially looks as follows:</p>
<pre><code>import torch
from transformers import BloomTokenizerFast, BloomForCausalLM
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-560m")
model = BloomForCausalLM.from_pretrained("bigscience/bloom-560m").to(device)
text = tokenizer.encode(seed)
inputs, past_key_values = torch.tensor([text[0]]), None
with torch.no_grad():
while #condition met:
model_out = model(input_ids=inputs.to(device), past_key_values=past_key_values)
...
# Generate new inputs and go back to the start
</code></pre>
<p>Now I would like to deploy this model to Inf1 on AWS Sagemaker see <a href="https://sagemaker-examples.readthedocs.io/en/latest/sagemaker_neo_compilation_jobs/deploy_pytorch_model_on_Inf1_instance/pytorch_torchvision_neo_on_Inf1.html#Deploying-pre-trained-PyTorch-vision-models-with-Amazon-SageMaker-Neo-On-Inf1-Instance" rel="nofollow noreferrer">here</a>:</p>
<pre><code>from sagemaker.pytorch.model import PyTorchModel
pytorch_model = PyTorchModel(
model_data=model_path,
role=role,
entry_point="my_entry_point_file.py",
framework_version="1.5.1",
py_version="py3",
)
neo_model = pytorch_model.compile(
target_instance_family="ml_inf1",
input_shape={"input0": [1, 3, 224, 224]},
output_path=compiled_model_path,
framework="pytorch",
framework_version="1.5.1",
role=role,
job_name=compilation_job_name,
)
</code></pre>
<p>However, in my case I get</p>
<pre><code>UnexpectedStatusException: Error for Compilation job bloom-compiled-inf-inf1-202304-1921-4203: Failed. Reason: ClientError: CompilationError: Unable to compile model for ml_inf1:', 'No operations were successfully partitioned and compiled to neuron for this model - aborting trace!') For further troubleshooting common failures please visit: https://docs.aws.amazon.com/sagemaker/latest/dg/neo-troubleshooting-compilation.html
</code></pre>
<p>I believe the main problem is the following: whereas <code>inputs</code> can be considered to have shape <code>[1, 1]</code>, the variable <code>past_key_values</code> is much more complex. In this case, it is</p>
<ul>
<li>a tuple of length 24</li>
<li>each entry is a tuple itself of length 2</li>
<li>the two entries are torch tensors of size <code>[16, 64, 6]</code> and <code>[16, 6, 64]</code></li>
</ul>
<p>My question is now, what can I do such that it works on Inf1?</p>
<p>I could imagine that either</p>
<ul>
<li>there is a way to enter the right <code>input_shape</code>, which can be something like <code>{"var1": [1,1,28,28], "var2": [1,1,28,28]}</code> (I do not know how to display the more complex tuple-tensor structure as outlined above)</li>
<li>or can we split <code>past_key_values</code> such that we can build <code>input_shape</code> easily?</li>
</ul>
<p>Any suggestions to solving that problem would be very appreciated.</p>
|
<python><amazon-web-services><tensorflow><pytorch><amazon-sagemaker>
|
2023-04-19 21:50:41
| 0
| 879
|
tobias
|
76,059,038
| 19,130,803
|
Postgres update query: only last letter from entire string get inserted
|
<p>I am working on a python web app using docker. At startup I need to create and initialize the tables with some default values; for that I have created a <code>queries.sql</code> script.</p>
<pre><code>-- queries.sql
CREATE TABLE IF NOT EXISTS person (
id serial primary key,
address varchar(2000)
);
INSERT INTO person (address)
VALUES ('some_default_value');
</code></pre>
<p>The table gets created and row gets inserted <code>successfully</code>.</p>
<p>Now, from the <code>web UI</code>, I have an address field using a <code>textarea</code> where the user can enter the address,
and I am updating that in the above table.</p>
<p>I am using <code>psycopg2</code>.</p>
<pre><code>
def update(query: str, vars_list: list[Any]):
    # psycopg2 connection engine code
cursor.executemany(query=query, vars_list=vars_list)
conn.commit()
def save(data: str):
query: str = """
UPDATE person
SET address = %s
WHERE id = 1;
"""
vars_list: list[Any] = list(tuple(data,))
update(query=query, vars_list=vars_list)
</code></pre>
<p>While debugging, the data from the textarea comes through fully. Say, e.g., the user types the <code>address</code> as</p>
<pre><code>aaa
bbb
</code></pre>
<p>When the user clicks the save button and the <code>update query</code> gets executed, only the <code>last letter</code> of the entire address gets <code>inserted/updated</code>:
<code>b</code></p>
<p>What am I missing?</p>
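A plain-Python sketch of the likely culprit: <code>tuple(data,)</code> is just <code>tuple(data)</code>, which iterates the string character by character, so <code>executemany</code> runs the UPDATE once per character and the last character wins:

```python
data = "aaa\nbbb"

# tuple(data,) == tuple(data): the string is iterated character by character
vars_list = list(tuple(data,))
assert vars_list == ['a', 'a', 'a', '\n', 'b', 'b', 'b']
# executemany() then executes the UPDATE once per character,
# so the final value in the row is the last character: 'b'

# Intended shape: one parameter tuple per statement
fixed = [(data,)]
assert fixed == [("aaa\nbbb",)]
```

With a single row to update, <code>cursor.execute(query, (data,))</code> would be the simpler call.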
|
<python><postgresql>
|
2023-04-19 21:46:02
| 0
| 962
|
winter
|
76,059,001
| 19,157,137
|
Viewing all the File Handling Methods and all the types of Errors in Python
|
<p>How would I be able to write code that lets me see all of the file methods like <code>close(), detach(), readline(), readlines() ... </code> in Python 3? I would also like another piece of code that shows all the possible <code>errors</code> like <code>ArithmeticError, AssertionError, MemoryError ...</code>. I have been trying to use <code>dir()</code> to list the intended outputs, but have been unsuccessful so far. How would I go about coding these?</p>
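A sketch of how <code>dir()</code> can produce both lists, using <code>io.TextIOWrapper</code> (the concrete type of a text file opened with <code>open()</code>) and filtering <code>builtins</code> for exception classes:

```python
import builtins
import io

# File methods: every public attribute of the text-file type
file_methods = [name for name in dir(io.TextIOWrapper)
                if not name.startswith('_')]
assert 'close' in file_methods
assert 'readline' in file_methods

# Built-in errors: every name in builtins that is an exception class
errors = sorted(
    name for name in dir(builtins)
    if isinstance(getattr(builtins, name), type)
    and issubclass(getattr(builtins, name), BaseException)
)
assert 'ArithmeticError' in errors
assert 'MemoryError' in errors
```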
|
<python><python-3.x><file><methods><error-handling>
|
2023-04-19 21:39:32
| 1
| 363
|
Bosser445
|
76,058,975
| 7,160,815
|
Can we do the distance transform from one boundary to another?
|
<p>I have ring-like binary images with varying thicknesses. I want to calculate its thickness by calculating the distance between the inner and outer boundaries.</p>
<p><a href="https://i.sstatic.net/7DRYq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7DRYq.png" alt="enter image description here" /></a></p>
<p>In many places, people use the distance transform to do this. Ideally, if I take the distance values along the medial axis, they will give me the thickness as distance*2.</p>
<p>However, in my case, the shape is very uneven; hence the medial axis can branch. But since the 3D structure I am analyzing is tube-like, I need a thickness value that I can project to the outer boundary to visualize. This means the thickness values should be strictly on a ring-like medial axis.</p>
<p>I am wondering if we can do the distance transform from one boundary to another instead of starting from boundaries and going toward the center.</p>
|
<python><image-processing><euclidean-distance><thickness>
|
2023-04-19 21:34:55
| 0
| 344
|
Savindi
|
76,058,957
| 5,082,048
|
How can I share large numpy arrays between python processes, e.g. jupyter notebooks, without duplicating them in memory?
|
<p>I have large numpy arrays that I want to share with other python processes on the same machine without holding copies in memory. Specifically, my usecase is to share the array between jupyter notebooks on linux. How can I do this?</p>
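Since Python 3.8, the standard library's <code>multiprocessing.shared_memory</code> module supports exactly this: a named block of memory that several processes (including separate Jupyter kernels) map without copying. A minimal sketch with raw bytes; wrapping the buffer in a numpy array is noted below:

```python
from multiprocessing import shared_memory

# "Notebook A": create a named block and write into it
shm = shared_memory.SharedMemory(create=True, size=4)
shm.buf[:4] = bytes([1, 2, 3, 4])

# "Notebook B": attach to the same block by name; no copy is made
other = shared_memory.SharedMemory(name=shm.name)
captured = bytes(other.buf[:4])

other.close()
shm.close()
shm.unlink()   # free the block once every process is done with it

assert captured == bytes([1, 2, 3, 4])
```

A numpy array can wrap the same buffer without copying, e.g. <code>np.ndarray(shape, dtype=np.float64, buffer=shm.buf)</code>; the second notebook only needs the block's name, shape, and dtype to reconstruct the array view.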
|
<python><multiprocessing><shared-memory>
|
2023-04-19 21:31:03
| 1
| 3,950
|
Arco Bast
|
76,058,926
| 4,572,274
|
Azure ML experiment run failing with 'HttpLoggingPolicy' has no attribute 'DEFAULT_HEADERS_ALLOWLIST'
|
<p>This is related to the azure experiment run in the Microsoft Azure Machine Learning Studio.
The experiment run fails with this stack trace:</p>
<pre><code>Warning: Failed to setup Azure Machine Learning system code due to `type object 'HttpLoggingPolicy' has no attribute 'DEFAULT_HEADERS_ALLOWLIST'`. Your job will proceed but if you notice any issues, please contact Azure Support with this exception message.
/azureml-envs/pytorch-1.10/lib/python3.8/site-packages/mlflow/tracking/_tracking_service/utils.py:213: UserWarning: Failure attempting to register store for scheme "adbazureml": type object 'HttpLoggingPolicy' has no attribute 'DEFAULT_HEADERS_ALLOWLIST'
_tracking_store_registry.register_entrypoints()
/azureml-envs/pytorch-1.10/lib/python3.8/site-packages/mlflow/tracking/_tracking_service/utils.py:213: UserWarning: Failure attempting to register store for scheme "azureml": type object 'HttpLoggingPolicy' has no attribute 'DEFAULT_HEADERS_ALLOWLIST'
_tracking_store_registry.register_entrypoints()
/azureml-envs/pytorch-1.10/lib/python3.8/site-packages/mlflow/store/artifact/artifact_repository_registry.py:93: UserWarning: Failure attempting to register artifact repository for scheme "adbazureml": type object 'HttpLoggingPolicy' has no attribute 'DEFAULT_HEADERS_ALLOWLIST'
_artifact_repository_registry.register_entrypoints()
/azureml-envs/pytorch-1.10/lib/python3.8/site-packages/mlflow/store/artifact/artifact_repository_registry.py:93: UserWarning: Failure attempting to register artifact repository for scheme "azureml": type object 'HttpLoggingPolicy' has no attribute 'DEFAULT_HEADERS_ALLOWLIST'
_artifact_repository_registry.register_entrypoints()
Traceback (most recent call last):
File "azure_ml_pipeline.py", line 7, in <module>
from interface import step_factory
File "/mnt/azureml/cr/j/e54eb5aef8a648e7bfb46dc654434eca/exe/wd/interface.py", line 3, in <module>
from steps.evaluation import Evaluation
File "/mnt/azureml/cr/j/e54eb5aef8a648e7bfb46dc654434eca/exe/wd/steps/__init__.py", line 1, in <module>
from utils.data_container import DataContainer
File "/mnt/azureml/cr/j/e54eb5aef8a648e7bfb46dc654434eca/exe/wd/utils/data_container.py", line 1, in <module>
from azureml.core import Dataset
File "/azureml-envs/pytorch-1.10/lib/python3.8/site-packages/azureml/core/__init__.py", line 13, in <module>
from .workspace import Workspace
File "/azureml-envs/pytorch-1.10/lib/python3.8/site-packages/azureml/core/workspace.py", line 22, in <module>
from azureml._project import _commands
File "/azureml-envs/pytorch-1.10/lib/python3.8/site-packages/azureml/_project/_commands.py", line 24, in <module>
from azureml._workspace._utils import (
File "/azureml-envs/pytorch-1.10/lib/python3.8/site-packages/azureml/_workspace/_utils.py", line 10, in <module>
from azure.mgmt.storage import StorageManagementClient
File "/azureml-envs/pytorch-1.10/lib/python3.8/site-packages/azure/mgmt/storage/__init__.py", line 9, in <module>
from ._storage_management_client import StorageManagementClient
File "/azureml-envs/pytorch-1.10/lib/python3.8/site-packages/azure/mgmt/storage/_storage_management_client.py", line 14, in <module>
from azure.mgmt.core import ARMPipelineClient
File "/azureml-envs/pytorch-1.10/lib/python3.8/site-packages/azure/mgmt/core/__init__.py", line 29, in <module>
from ._pipeline_client import ARMPipelineClient
File "/azureml-envs/pytorch-1.10/lib/python3.8/site-packages/azure/mgmt/core/_pipeline_client.py", line 28, in <module>
from .policies import ARMAutoResourceProviderRegistrationPolicy, ARMHttpLoggingPolicy
File "/azureml-envs/pytorch-1.10/lib/python3.8/site-packages/azure/mgmt/core/policies/__init__.py", line 40, in <module>
class ARMHttpLoggingPolicy(HttpLoggingPolicy):
File "/azureml-envs/pytorch-1.10/lib/python3.8/site-packages/azure/mgmt/core/policies/__init__.py", line 43, in ARMHttpLoggingPolicy
DEFAULT_HEADERS_ALLOWLIST = HttpLoggingPolicy.DEFAULT_HEADERS_ALLOWLIST | set(
AttributeError: type object 'HttpLoggingPolicy' has no attribute 'DEFAULT_HEADERS_ALLOWLIST'
</code></pre>
<p>Has anyone who faced this issue recently found the fix for it?</p>
|
<python><azure><azure-machine-learning-service><azuremlsdk>
|
2023-04-19 21:26:06
| 1
| 4,632
|
yardstick17
|
76,058,816
| 8,229,534
|
How to build docker image with Facebook prophet library
|
<p>I am trying to build a docker image with Prophet 1.0.1</p>
<p>Here is my requirements.txt file</p>
<pre><code>google-cloud-bigquery
google-cloud-storage
numpy==1.21.0
pandas==1.2.0
db-dtypes
plotly==5.10.0
hampel==0.0.5
prophet==1.0.1
click
joblib
scikit-learn
</code></pre>
<p>and here is my docker file -</p>
<pre><code>FROM python:3.8-slim-buster as base
RUN mkdir my_dir
COPY requirements.txt /my_dir/requirements.txt
COPY src /my_dir/src
WORKDIR /my_dir
RUN apt update
RUN apt-get install -y --no-install-recommends \
ca-certificates \
cmake \
build-essential \
gcc \
g++ \
git \
curl \
apt-transport-https \
ca-certificates \
gnupg
RUN python -m pip install --no-cache-dir --upgrade setuptools pip wheel pipenv
RUN pip install -r requirements.txt
WORKDIR /my_dir/src
RUN python setup.py bdist_wheel
RUN pip install dist/*
# kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin
ENTRYPOINT [ "sh", "run.sh" ]
</code></pre>
<p>When I try to run <code>docker-compose up -d</code> command, the build fails locally with below error -</p>
<pre><code>13 25.10 Building wheels for collected packages: prophet, pymeeus
#13 25.10 Building wheel for prophet (setup.py): started
#13 25.40 Building wheel for prophet (setup.py): finished with status 'error'
#13 25.41 error: subprocess-exited-with-error
#13 25.41
#13 25.41 Γ python setup.py bdist_wheel did not run successfully.
#13 25.41 β exit code: 1
#13 25.41 β°β> [49 lines of output]
#13 25.41 running bdist_wheel
#13 25.41 running build
#13 25.41 running build_py
#13 25.41 creating build
#13 25.41 creating build/lib
#13 25.41 creating build/lib/prophet
#13 25.41 creating build/lib/prophet/stan_model
#13 25.41 Traceback (most recent call last):
#13 25.41 File "<string>", line 36, in <module>
#13 25.41 File "<pip-setuptools-caller>", line 34, in <module>
#13 25.41 File "/tmp/pip-install-l_l1ynvx/prophet_2f0894fc2ee74edfa49399ff3fb20863/setup.py", line 150, in <module>
#13 25.41 long_description_content_type='text/markdown',
#13 25.41 File "/usr/local/lib/python3.7/site-packages/setuptools/__init__.py", line 108, in setup
#13 25.41 return distutils.core.setup(**attrs)
#13 25.41 File "/usr/local/lib/python3.7/site-packages/setuptools/_distutils/core.py", line 185, in setup
#13 25.41 return run_commands(dist)
#13 25.41 File "/usr/local/lib/python3.7/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
#13 25.41 dist.run_commands()
#13 25.41 File "/usr/local/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
#13 25.41 self.run_command(cmd)
#13 25.41 File "/usr/local/lib/python3.7/site-packages/setuptools/dist.py", line 1221, in run_command
#13 25.41 super().run_command(command)
#13 25.41 File "/usr/local/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
#13 25.41 cmd_obj.run()
#13 25.41 File "/usr/local/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 343, in run
#13 25.41 self.run_command("build")
#13 25.41 File "/usr/local/lib/python3.7/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
#13 25.41 self.distribution.run_command(command)
#13 25.41 File "/usr/local/lib/python3.7/site-packages/setuptools/dist.py", line 1221, in run_command
#13 25.41 super().run_command(command)
#13 25.41 File "/usr/local/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
#13 25.41 cmd_obj.run()
#13 25.41 File "/usr/local/lib/python3.7/site-packages/setuptools/_distutils/command/build.py", line 131, in run
#13 25.41 self.run_command(cmd_name)
#13 25.41 File "/usr/local/lib/python3.7/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
#13 25.41 self.distribution.run_command(command)
#13 25.41 File "/usr/local/lib/python3.7/site-packages/setuptools/dist.py", line 1221, in run_command
#13 25.41 super().run_command(command)
#13 25.41 File "/usr/local/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
#13 25.41 cmd_obj.run()
#13 25.41 File "/tmp/pip-install-l_l1ynvx/prophet_2f0894fc2ee74edfa49399ff3fb20863/setup.py", line 48, in run
#13 25.41 build_models(target_dir)
#13 25.41 File "/tmp/pip-install-l_l1ynvx/prophet_2f0894fc2ee74edfa49399ff3fb20863/setup.py", line 36, in build_models
#13 25.41 from prophet.models import StanBackendEnum
#13 25.41 File "/tmp/pip-install-l_l1ynvx/prophet_2f0894fc2ee74edfa49399ff3fb20863/prophet/__init__.py", line 8, in <module>
#13 25.41 from prophet.forecaster import Prophet
#13 25.41 File "/tmp/pip-install-l_l1ynvx/prophet_2f0894fc2ee74edfa49399ff3fb20863/prophet/forecaster.py", line 14, in <module>
#13 25.41 import numpy as np
#13 25.41 ModuleNotFoundError: No module named 'numpy'
#13 25.41 [end of output]
</code></pre>
<p>I do have the <code>numpy</code> package included in <code>requirements.txt</code>, but it does not seem to help. How can I successfully install prophet 1.0.1 using Docker?</p>
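A likely cause is that prophet 1.0.1's <code>setup.py</code> imports <code>numpy</code> at build time, before pip has finished installing the rest of <code>requirements.txt</code>. One common workaround (a sketch — the base image and pinned versions here are illustrative assumptions, not taken from your Dockerfile) is to install the build-time dependencies in a separate layer first:

```dockerfile
FROM python:3.7-slim

COPY requirements.txt .

# Install prophet's build-time dependencies first so its setup.py can
# import numpy while building the wheel, then install everything else.
RUN pip install --no-cache-dir numpy pystan==2.19.1.1 \
 && pip install --no-cache-dir -r requirements.txt
```

If prophet is pinned inside <code>requirements.txt</code>, this ordering lets its wheel build find numpy already present.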
|
<python><docker><facebook-prophet>
|
2023-04-19 21:08:06
| 1
| 1,973
|
Regressor
|
76,058,782
| 17,041,240
|
Unable to automate the individual key actions after providing 'path' as user input
|
<p>Initially my code worked fine when the path/to/file was assigned to a variable directly and this variable was passed as an argument to a function. My working code:</p>
<pre><code>def read_installation_package(filePath):
...
...
dlg = app.window(class_name="#32770")
dlg.Edit.type_keys(filePath, with_spaces = True)
dlg.Open.click()
app.MyTestApplication.type_keys("{TAB 3}")
...
...
chkwin = app.window(title="Check installation package")
...
...
def main():
file_path = "C:\\Development Projects\\installation-files.zip"
read_installation_package(file_path)
</code></pre>
<p>Then I modified my code (all the necessary imports are made correctly) so that the path is provided as user input by opening a dialogue box:</p>
<pre><code>import tkinter as tk
from tkinter import filedialog
def read_installation_package(filePath):
...
...
dlg = app.window(class_name="#32770")
dlg.Edit.type_keys(filePath, with_spaces = True)
dlg.Open.click()
app.MyTestApplication.type_keys("{TAB 3}")
...
...
chkwin = app.window(title="Check installation package")
...
...
def main():
root = tk.Tk()
root.withdraw()
file_path = filedialog.askopenfilename()
file_path = file_path.replace("/", "\\")
# file_path = "C:\\Development Projects\\installation-files.zip"
read_installation_package(file_path)
</code></pre>
<p>Here, I got an error <code>pywinauto.findwindows.ElementNotFoundError: {'title': 'Check installation package', 'backend': 'win32', 'process': 22936}</code>. However, it was possible to run the code without error if the {TAB 3} was split into 3.
For example, when the code was modified to:</p>
<pre><code>app.MyTestApplication.type_keys("{TAB}")
time.sleep(1)
app.MyTestApplication.type_keys("{TAB}")
time.sleep(1)
app.MyTestApplication.type_keys("{TAB}")
time.sleep(1)
</code></pre>
<p>then it was working. I could not figure out why this is happening and how to fix this issue, since having to include delay between steps is not an ideal solution. Could someone please help? Many thanks in advance!</p>
|
<python><automation><pywinauto>
|
2023-04-19 21:01:28
| 1
| 349
|
Pramesh
|
76,058,513
| 5,036,928
|
Regex (Python): Matching Integers not Preceded by Character
|
<p>Based on some string of numbers:</p>
<pre><code>(30123:424302) 123 #4324:#34123
</code></pre>
<p>How can I obtain only the numbers that are NOT immediately preceded by "#"? I have found how to get those numbers preceded by "#" (<code>\#+\d+</code>) but I need the opposite. Can I group all <code>\d+</code> and then inverse match based on the pattern I have somehow?</p>
<p>To clarify, I need <code>30123</code>, <code>424302</code>, and <code>123</code> in the above example.</p>
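One way to express this (a sketch) is a negative lookbehind combined with word boundaries, so a run of digits only matches when its first character is not directly preceded by <code>#</code>:

```python
import re

text = "(30123:424302) 123 #4324:#34123"

# (?<!#) rejects a match whose first digit directly follows '#';
# the \b anchors stop the engine from restarting inside a rejected number.
numbers = re.findall(r"(?<!#)\b\d+\b", text)
print(numbers)  # ['30123', '424302', '123']
```

Without the leading <code>\b</code>, the engine would retry one character into <code>#4324</code> and match <code>324</code>, which is why both boundaries matter here.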
|
<python><regex><regex-negation>
|
2023-04-19 20:19:53
| 2
| 1,195
|
Sterling Butters
|
76,058,487
| 11,331,843
|
regex code to find email address within HTML script webscraping
|
<p>I am trying to extract phone, address and email from couple of corporate websites through webscraping</p>
<p>My code for that is as follows</p>
<pre><code>l = 'https://www.zimmermanfinancialgroup.com/about'
address_t = []
phone_num_t = []
# make a request to the link
response = requests.get(l)
soup = BeautifulSoup(response.content, "html.parser")
#soup = BeautifulSoup(response.content, 'html.parser')
phone_regex = "(\+\d{1,2}\s)?\(?\d{3}\)?[\s.-]\d{3}[\s.-]\d{4}"
# extract the phone number information
match = soup.findAll(string=re.compile(phone_regex))
if match:
print("Found the matching string:", match)
else:
print("Matching string not found")
# extract email address information
mail = "\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,7}\b"
match_a = soup.findAll(string=re.compile(mail))
match_a
</code></pre>
<p>The above code works fine and extracts the phone number correctly, but it is not able to detect the email address; I have the same issue with another website (<a href="https://www.benefitexperts.com/about-us/" rel="nofollow noreferrer">https://www.benefitexperts.com/about-us/</a>)</p>
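One likely culprit: the email pattern is a plain (non-raw) string, so <code>"\b"</code> is interpreted as a backspace character instead of a word boundary, and the pattern can never match. A raw string fixes that (and <code>[A-Z|a-z]</code> should be <code>[A-Za-z]</code> — the <code>|</code> inside a character class is a literal pipe). A sketch against plain text; note the live page may still yield nothing if the address is rendered by JavaScript:

```python
import re

# raw string so \b stays a word boundary; [A-Za-z] instead of [A-Z|a-z]
mail = r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,7}\b"

text = "Reach us at info@example.com or sales@example.org for details."
found = re.findall(mail, text)
print(found)  # ['info@example.com', 'sales@example.org']
```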
|
<python><regex><web-scraping><beautifulsoup>
|
2023-04-19 20:15:36
| 1
| 631
|
anonymous13
|
76,058,384
| 9,940,188
|
Why doesn't this custom logger use the root formatter?
|
<p>The way I understand the logging documentation is that all calls to loggers that don't have handlers configured on them will eventually "fall through" to the root logger (and implicitly also use its formatter).</p>
<p>In the example below, neither "log" nor "mylog" are the root logger. Yet the call to <code>log.error()</code> uses the root logger's format, but <code>mylog.error()</code> doesn't. Why is that?</p>
<pre><code>import logging
class MyLogger(logging.getLoggerClass()):
pass
mylog = MyLogger("mytest")
log = logging.getLogger("test")
root = logging.getLogger()
h = logging.StreamHandler()
f = logging.Formatter(fmt="[%(levelname)s] %(message)s")
h.setFormatter(f)
root.addHandler(h)
log.error("Normal log")
mylog.error("Test log")
</code></pre>
<p>Output:</p>
<pre><code>[ERROR] Normal log
Test log
</code></pre>
<p>(Background: I've written a custom logger that takes an extra object argument and augments the message with info about that object. Calls to that custom logger are sprinkled throughout several modules, alongside loggers acquired with <code>logging.getLogger()</code>. Of course I want those custom logs to use the same handlers and formatters I've set up centrally.)</p>
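A likely explanation: a logger constructed directly, as in <code>MyLogger("mytest")</code>, is never registered with the logging manager, so its <code>parent</code> attribute stays <code>None</code> and its records cannot propagate up to root's handlers (the bare "Test log" line comes from the last-resort handler). Only loggers fetched through <code>logging.getLogger()</code> are wired into the hierarchy. A sketch of the usual pattern — register the class, then fetch instances via the manager:

```python
import logging

class MyLogger(logging.Logger):
    pass

logging.setLoggerClass(MyLogger)      # future getLogger() calls build MyLogger
mylog = logging.getLogger("mytest")   # created by the manager, parent wired up

print(type(mylog).__name__)                  # MyLogger
print(mylog.parent is logging.getLogger())   # True -> records reach root's handlers
```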
|
<python>
|
2023-04-19 19:59:18
| 2
| 679
|
musbur
|
76,058,371
| 504,717
|
Pythonic way to map enum to api values
|
<p>I have a proto generated code which has 3 values (values i am interested in)</p>
<pre class="lang-py prettyprint-override"><code>proto_type.VALUE1
proto_type.VALUE2
proto_type.VALUE3
</code></pre>
<p>I have created an enum in my python project.</p>
<pre class="lang-py prettyprint-override"><code>class MyEnum(Enum):
VALUE1 = 0
VALUE2 = 1
VALUE3 = 2
</code></pre>
<p>The sole reason i have done this so that i can restrict callee to use specific values. Now I want to map <code>MyEnum</code> to <code>proto_type</code> values.</p>
<p>I was thinking to create a dictionary i.e.,</p>
<pre class="lang-py prettyprint-override"><code>Mapping = {
    MyEnum.VALUE1: imported_proto_type.VALUE1,
    MyEnum.VALUE2: imported_proto_type.VALUE2,
    MyEnum.VALUE3: imported_proto_type.VALUE3,
}
</code></pre>
<p>What I wanted to ask: is this the best pythonic way to do so? Is there a better approach?</p>
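A dictionary works, but one common alternative is to let the enum resolve its own counterpart by name, which avoids a second mapping that can drift out of sync. A sketch, using a stand-in object for the generated proto module (an assumption, since the real module isn't shown):

```python
from enum import Enum
from types import SimpleNamespace

# stand-in for the generated proto constants (assumption for this demo)
proto_type = SimpleNamespace(VALUE1=0, VALUE2=1, VALUE3=2)

class MyEnum(Enum):
    VALUE1 = 0
    VALUE2 = 1
    VALUE3 = 2

    def to_proto(self):
        # names match one-to-one, so look the proto constant up by name
        return getattr(proto_type, self.name)

print(MyEnum.VALUE2.to_proto())  # 1
```

If the names ever diverge between the two sides, the explicit dictionary becomes the safer choice again.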
|
<python><python-3.x><enums><python-3.7>
|
2023-04-19 19:58:16
| 2
| 8,834
|
Em Ae
|
76,058,342
| 8,512,262
|
What does socket.setdefaulttimeout have to do with Windows services written in Python?
|
<p>I'm familiarizing myself with creating Windows services in Python, and all of the examples I've found here or elsewhere call <code>socket.setdefaulttimeout(60.0)</code> (or something similar). But none of these examples explain <em>why</em> this is being done...and I can't find any info regarding whether or not this has to do with Windows services or with the way they're implemented in Python.</p>
<p>I'm just trying to get a better handle of what's going on in my code. If anyone can explain why this is done / if it's needed, I'd appreciate the help.</p>
<p>For reference, here's a (partial) boilerplate implementation of a service class in Python</p>
<pre><code>import socket
import win32event
from win32serviceutil import ServiceFramework
# some of the usual imports have been left out for the sake of brevity
class ExampleService(ServiceFramework):
_svc_name_ = 'example_service'
_svc_display_name_ = 'Example Service'
_svc_description_ = 'Lorem ipsum dolor sit amet'
def __init__(self, *args):
super().__init__(*args)
socket.setdefaulttimeout(60.0) # <- this is the line in question
self.svc_stop_event = win32event.CreateEvent(None, 0, 0, None)
def SvcStop(self):
...
def SvcDoRun(self):
...
</code></pre>
<hr>
<p><em><strong>Update:</strong></em>
I've removed the call to <code>socket.setdefaulttimeout()</code> from my code, and unsurprisingly nothing was broken. Good times!</p>
|
<python><windows><sockets><service>
|
2023-04-19 19:53:49
| 1
| 7,190
|
JRiggles
|
76,058,279
| 12,144,502
|
The Travelling Salesman Problem Using Genetic Algorithm
|
<p>I was looking to learn about AI and found the traveling salesman problem very interesting. I also wanted to learn about genetic algorithms, so it was a fantastic combo. The task is to find the shortest route that starts at <code>id 1</code>, visits each location in the list once, and returns to the starting location <code>id 1</code></p>
<p><em>Restriction for the problem :</em></p>
<blockquote>
<p>The location <code>id 1</code> must be the starting and the ending point</p>
<p>The maximum distance allowed is <code>distance <= 9000</code></p>
<p>Only max of <code>250000</code> fitness calculation is allowed</p>
</blockquote>
<hr />
<p><strong>Code</strong> :</p>
<pre><code>import numpy as np
import random
import operator
import pandas as pd
val10 = 0
val9 = 0
class Locations:
def __init__(self, x, y):
self.x = x
self.y = y
def dist(self, location):
x_dist = abs(float(self.x) - float(location.x))
y_dist = abs(float(self.y) - float(location.y))
# β( (x2 β x1)^2 + (π¦2 β π¦1)^2 )
dist = np.sqrt((x_dist ** 2) + (y_dist ** 2))
return dist
def __repr__(self):
return "(" + str(self.x) + "," + str(self.y) + ")"
class Fitness:
def __init__(self, route):
self.r = route
self.dist = 0
self.fit = 0.0
def route_dist(self):
if self.dist == 0:
path_dist = 0
for i in range(0, len(self.r)):
from_location = self.r[i]
to_location = None
if i + 1 < len(self.r):
to_location = self.r[i+1]
else:
to_location = self.r[0]
path_dist += from_location.dist(to_location)
self.dist = path_dist
return self.dist
def route_fittness(self):
if self.fit == 0:
self.fit = 1 / float(self.route_dist())
global val9
val9 = val9 + 1
return self.fit
def generate_route(location_list):
route = random.sample(location_list, len(location_list))
return route
def gen_zero_population(size, location_list):
population = []
for i in range(0, size):
population.append(generate_route(location_list))
return population
def determine_fit(population):
result = {}
for i in range(0, len(population)):
result[i] = Fitness(population[i]).route_fittness()
global val10
val10 = val10 + 1
return sorted(result.items(), key=operator.itemgetter(1), reverse=True)
def fit_proportionate_selection(top_pop, elite_size):
result = []
df = pd.DataFrame(np.array(top_pop), columns=["index", "Fitness"])
df['cumulative_sum'] = df.Fitness.cumsum()
df['Sum'] = 100*df.cumulative_sum/df.Fitness.sum()
for i in range(0, elite_size):
result.append(top_pop[i][0])
for i in range(0, len(top_pop) - elite_size):
select = 100*random.random()
for i in range(0, len(top_pop)):
if select <= df.iat[i, 3]:
result.append(top_pop[i][0])
break
return result
def select_mating_pool(populatoin, f_p_s_result):
mating_pool = []
for i in range(0, len(f_p_s_result)):
index = f_p_s_result[i]
mating_pool.append(populatoin[index])
return mating_pool
def ordered_crossover(p1, p2):
child, child_p1, child_p2 = ([] for i in range(3))
first_gene = int(random.random() * len(p1))
sec_gene = int(random.random() * len(p2))
start_gene = min(first_gene, sec_gene)
end_gene = max(first_gene, sec_gene)
for i in range(start_gene, end_gene):
child_p1.append(p1[i])
child_p2 = [item for item in p2 if item not in child_p1]
child = child_p1 + child_p2
return child
def ordered_crossover_pop(mating_pool, elite_size):
children = []
leng = (len(mating_pool) - (elite_size))
pool = random.sample(mating_pool, len(mating_pool))
for i in range(0, elite_size):
children.append(mating_pool[i])
for i in range(0, leng):
var = len(mating_pool)-i - 1
child = ordered_crossover(pool[i], pool[var])
children.append(child)
return children
def swap_mutation(one_location, mutation_rate):
for i in range(len(one_location)):
if (random.random() < mutation_rate):
swap = int(random.random() * len(one_location))
location1 = one_location[i]
location2 = one_location[swap]
one_location[i] = location2
one_location[swap] = location1
return one_location
def pop_mutation(population, mutation_rate):
result = []
for i in range(0, len(population)):
mutaded_res = swap_mutation(population[i], mutation_rate)
result.append(mutaded_res)
return result
def next_gen(latest_gen, elite_size, mutation_rate):
route_rank = determine_fit(latest_gen)
selection = fit_proportionate_selection(route_rank, elite_size)
mating_selection = select_mating_pool(latest_gen, selection)
children = ordered_crossover_pop(mating_selection, elite_size)
next_generation = pop_mutation(children, mutation_rate)
return next_generation
def generic_algor(population, pop_size, elite_size, mutation_rate, gen):
pop = gen_zero_population(pop_size, population)
print("Initial distance: " + str(1 / determine_fit(pop)[0][1]))
for i in range(0, gen):
pop = next_gen(pop, elite_size, mutation_rate)
print("Final distance: " + str(1 / determine_fit(pop)[0][1]))
best_route_index = determine_fit(pop)[0][0]
best_route = pop[best_route_index]
print(best_route)
return best_route
def read_file(fn):
a = []
with open(fn) as f:
[next(f) for _ in range(6)]
for line in f:
line = line.rstrip()
if line == 'EOF':
break
ID, x, y = line.split()
a.append(Locations(x=x, y=y))
return a
location_list = read_file(r'path_of_the_file')
population = location_list
pop_size = 100
elite_size = 40
mutation_rate = 0.001
gen = 500
generic_algor(population, pop_size, elite_size, mutation_rate, gen)
print(val10)
print(val9)
</code></pre>
<p><strong>Location file with <code>x</code> and <code>y</code> coordinates</strong> :</p>
<pre><code>|Locations
|
|52 Locations
|
|Coordinates
|
1 565.0 575.0
2 25.0 185.0
3 345.0 750.0
4 945.0 685.0
5 845.0 655.0
6 880.0 660.0
7 25.0 230.0
8 525.0 1000.0
9 580.0 1175.0
10 650.0 1130.0
11 1605.0 620.0
12 1220.0 580.0
13 1465.0 200.0
14 1530.0 5.0
15 845.0 680.0
16 725.0 370.0
17 145.0 665.0
18 415.0 635.0
19 510.0 875.0
20 560.0 365.0
21 300.0 465.0
22 520.0 585.0
23 480.0 415.0
24 835.0 625.0
25 975.0 580.0
26 1215.0 245.0
27 1320.0 315.0
28 1250.0 400.0
29 660.0 180.0
30 410.0 250.0
31 420.0 555.0
32 575.0 665.0
33 1150.0 1160.0
34 700.0 580.0
35 685.0 595.0
36 685.0 610.0
37 770.0 610.0
38 795.0 645.0
39 720.0 635.0
40 760.0 650.0
41 475.0 960.0
42 95.0 260.0
43 875.0 920.0
44 700.0 500.0
45 555.0 815.0
46 830.0 485.0
47 1170.0 65.0
48 830.0 610.0
49 605.0 625.0
50 595.0 360.0
51 1340.0 725.0
52 1740.0 245.0
EOF
</code></pre>
<p>I have tried to tweak the values of the parameters, but the distance has never gone to or below <code>9000</code>; it is always around <code>9500</code> or higher. What can I improve to get it to work for my location file?</p>
<p><a href="https://i.sstatic.net/lEsKX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lEsKX.png" alt="Graph" /></a></p>
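Beyond parameter tuning, one improvement that often helps GAs on TSP is a 2-opt local search applied to the best route of each generation (a "memetic" GA) — it untangles route crossings that crossover and swap mutation rarely fix on their own. A minimal sketch, independent of the classes above:

```python
import math

def tour_length(tour, pts):
    # closed tour: the last city connects back to the first
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    # keep reversing segments while any reversal shortens the tour
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate, pts) < tour_length(tour, pts):
                    tour, improved = candidate, True
    return tour

# toy example: a unit square visited in a crossing order gets untangled
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
best = two_opt([0, 2, 1, 3], pts)
print(tour_length(best, pts))  # 4.0 (the square's perimeter)
```

Note that each improvement check here calls <code>tour_length</code>, so if the 250000-fitness-calculation budget applies to these too, the budget would need to account for them.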
|
<python><python-3.x><performance><genetic-algorithm><traveling-salesman>
|
2023-04-19 19:45:50
| 1
| 400
|
zellez11
|
76,058,225
| 4,025,583
|
Numpy's `meshgrid` is discontinuously slow
|
<p>Over repeated calls to <code>meshgrid</code> I realized it can become slow at some sample sizes for no apparent reason. For example, in this snippet:</p>
<pre class="lang-py prettyprint-override"><code>from time import time
import numpy as np
np.random.seed(1729)
maxn = 760
times = np.empty(maxn+1)
times[0] = time()
for n in range(1, maxn+1):
x = np.random.uniform(size=(n,1))
y = np.random.uniform(size=(n,1))
x_grid, y_grid = np.meshgrid(x, y)
times[n] = time()
last = 40
[f'{i+maxn-last+1}: {x:.3f}' for i, x in enumerate(np.diff(times)[-last:])]
</code></pre>
<p>I get</p>
<pre class="lang-py prettyprint-override"><code>['721: 0.003',
'722: 0.003',
'723: 0.003',
'724: 0.004',
'725: 0.010',
'726: 0.007',
'727: 0.007',
'728: 0.005',
'729: 0.005',
'730: 0.006',
'731: 37.243', # Suddenly slow
'732: 0.009',
'733: 0.007',
'734: 0.007',
'735: 0.009',
'736: 0.007',
'737: 0.009',
'738: 0.009',
'739: 0.009',
'740: 11.602', # Suddenly slow
'741: 0.008',
'742: 0.010',
'743: 0.012',
'744: 0.012',
'745: 0.016',
'746: 0.009',
'747: 0.015',
'748: 0.015',
'749: 61.460', # Suddenly slow
'750: 0.008',
'751: 0.007',
'752: 0.007',
'753: 0.007',
'754: 0.007',
'755: 0.007',
'756: 0.007',
'757: 0.007',
'758: 1.167', # Suddenly slow
'759: 0.007',
'760: 0.010']
</code></pre>
<p>You can see most runs are instantaneous, but suddenly one particular run has a very slow runtime (weirdly in this case it's every 9th run, starting in 731). Not sure what's a maxn that will yield this phenomenon in other machines, though. Any thoughts on what might be happening?</p>
<p>Python version 3.10.10 and numpy version 1.24.2 on Arch Linux.</p>
<p><strong>EDIT2:</strong> I decided to use a workaround in place of <code>meshgrid</code>, which seems to work for my use case:</p>
<pre class="lang-py prettyprint-override"><code>x_grid, y_grid = np.broadcast_arrays(x[:, np.newaxis], y[np.newaxis, :])
x_grid = x_grid.reshape((x.shape[0], y.shape[0])).T
y_grid = y_grid.reshape((x.shape[0], y.shape[0])).T
</code></pre>
<p><strong>EDIT:</strong> I ran the snippet with <code>perf stat</code> and am posting the results here. The slowdowns happened at different places, interestingly.</p>
<pre><code>721: 0.003
722: 0.003
723: 0.003
724: 0.004
725: 0.008
726: 0.009
727: 0.008
728: 0.009
729: 0.014
730: 0.015
731: 0.013
732: 0.009
733: 0.009
734: 16.755
735: 0.012
736: 0.011
737: 0.011
738: 0.011
739: 0.013
740: 0.010
741: 0.010
742: 0.013
743: 0.011
744: 22.704
745: 0.016
746: 0.012
747: 0.017
748: 0.016
749: 0.016
750: 0.012
751: 0.011
752: 0.016
753: 4.244
754: 0.018
755: 0.014
756: 0.018
757: 0.015
758: 0.017
759: 0.021
760: 0.018
Performance counter stats for 'python test.py':
45,125.79 msec task-clock:u # 0.998 CPUs utilized
0 context-switches:u # 0.000 /sec
0 cpu-migrations:u # 0.000 /sec
568,187 page-faults:u # 12.591 K/sec
1,150,132,500 cycles:u # 0.025 GHz
1,124,532,475 instructions:u # 0.98 insn per cycle
188,859,225 branches:u # 4.185 M/sec
3,959,138 branch-misses:u # 2.10% of all branches
45.194501699 seconds time elapsed
0.393526000 seconds user
44.537625000 seconds sys
</code></pre>
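The mostly-sys elapsed time and high page-fault count in the <code>perf</code> output suggest the stalls come from the OS servicing memory for the two freshly allocated dense n×n grids, not from NumPy's own computation. One way to sidestep the large allocations entirely (a sketch of an alternative, not a guaranteed fix for the stalls) is <code>sparse=True</code>, which returns broadcastable views instead of dense grids:

```python
import numpy as np

x = np.random.uniform(size=731)
y = np.random.uniform(size=731)

# sparse=True returns (1, n) and (n, 1) arrays instead of two dense n x n grids
x_grid, y_grid = np.meshgrid(x, y, sparse=True)
print(x_grid.shape, y_grid.shape)   # (1, 731) (731, 1)

# arithmetic broadcasts to the full grid only when actually needed
print((x_grid + y_grid).shape)      # (731, 731)
```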
|
<python><numpy><performance>
|
2023-04-19 19:36:36
| 0
| 454
|
Mauricio
|
76,058,004
| 2,725,742
|
Efficiently Parsing XSD with xmlschema, without XML Available
|
<pre><code>import xmlschema
data_schema = xmlschema.XMLSchema("myXSD.xsd")
data = data_schema.to_dict("myXSD.xsd")
</code></pre>
<p>I am not sure what the normal use case for this is, but I only have a ton of XSD files to parse, so I ended up passing the XSD file into both of these calls to get a dictionary. Now that I am basically done, I am focusing on how slow it is, requiring nearly a 10 second delay between launching the script and the GUI being usable because all of this information is required to populate it.</p>
<p>Profiling the script, I saw that most of the time is from the initial read, and the second largest part was populating the dictionary. Since I am not actually passing an .xml file like the documentation says the second call expects, I assume it is re-reading and re-parsing the .xsd a second time, while still giving me the desired dictionary.</p>
<p>Is there any way to get the dictionary without it reading the same file two times? I have tried passing other things, including <code>None</code>, but that does not work.</p>
|
<python><xsd>
|
2023-04-19 19:00:52
| 0
| 448
|
fm_user8
|
76,057,621
| 13,055,818
|
Any fancy way to unwind Enum classes in python?
|
<p>I am currently coding a package that contains several <code>Enum</code> classes and would like to use the symbolic names directly from the package, not from the classes themselves, as OpenCV does. Is there any way to define the classes but still access their symbolic names directly from the package, or shall I hardcode them in the package's <code>__init__.py</code>?</p>
<p>For now I am currently in this state:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
class enum_class1(Enum):
class1_sym1: int = 0
class1_sym2: int = 1
class1_sym3: int = 2
class enum_class2(Enum):
class2_sym1: int = 0
class2_sym2: int = 1
class2_sym3: int = 2
</code></pre>
<p>What I am currently doing :</p>
<pre class="lang-py prettyprint-override"><code>import mypackage as p
state = p.enum_class2.class2_sym1
</code></pre>
<p>What I would like to rather do :</p>
<pre class="lang-py prettyprint-override"><code>import mypackage as p
state = p.class2_sym1
</code></pre>
|
<python><enums>
|
2023-04-19 18:08:09
| 4
| 519
|
Maxime Debarbat
|
76,057,616
| 6,529,926
|
Can't load properties from .ini into custom logger
|
<p>I have a class called <code>CustomLogger</code> that is a subclass of <code>BaseLogger</code>, which is a subclass of <code>logging.Logger</code>. <code>BaseLogger</code> loads properties from a <code>logger.ini</code> file. So, <code>CustomLogger</code> is just a facade for <code>BaseLogger</code>.</p>
<p>Here is my code:</p>
<p><strong>logger.ini</strong></p>
<pre><code>[loggers]
keys=root
[handlers]
keys=consoleHandler,fileHandler
[formatters]
keys=fileFormatter, consoleFormatter
[logger_root]
level=INFO
handlers=consoleHandler, fileHandler
[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=consoleFormatter
args=(sys.stdout,)
[handler_fileHandler]
class=FileHandler
level=INFO
formatter=fileFormatter
args=('app.log',)
[formatter_fileFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
datefmt=%Y-%m-%d %H:%M:%S
[formatter_consoleFormatter]
format=%(levelname)s - %(message)s
datefmt=%Y-%m-%d %H:%M:%S
</code></pre>
<p><strong>base_logger.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from os import path
import logging
import logging.config
class BaseLogger(logging.Logger):
def __init__(self, name):
super().__init__(name)
config_file_path = path.join(path.dirname(__file__), 'logger.ini')
logging.config.fileConfig(config_file_path)
print("Loaded config level:", self.getEffectiveLevel())
</code></pre>
<p><strong>custom_logger.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from .base_logger import BaseLogger
class CustomLogger(BaseLogger):
def __init__(self, name):
super().__init__(name)
</code></pre>
<p><strong>main.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from custom_logger import CustomLogger
logger = CustomLogger('root')
logger.info("hello") #doesn't work
</code></pre>
<p>It looks OK, but when I use <code>CustomLogger</code> in my code, it doesn't seem to load the properties from the .ini file. <code>getEffectiveLevel()</code> returns 0, which indicates that the logger level is not being set when instantiated.</p>
<p>I tried using <code>logging.setLoggerClass</code>, but that didn't work either.</p>
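Two things in this setup are worth checking: <code>fileConfig()</code> configures the loggers named in the .ini (here only root), not the instance it happens to be called from, and by default it also disables already-existing loggers; and a <code>Logger</code> subclass instantiated from its own <code>__init__</code> is never attached to root, so it inherits nothing. A sketch of the usual arrangement — configure once at startup, register the class, then fetch loggers through <code>getLogger()</code> (an inline <code>dictConfig</code> stands in for <code>logger.ini</code> just to keep the example self-contained):

```python
import logging
import logging.config

class CustomLogger(logging.Logger):
    pass

# configure root once at program startup (stand-in for logger.ini)
logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,   # keep loggers created before this call
    "root": {"level": "INFO"},
})

logging.setLoggerClass(CustomLogger)
log = logging.getLogger("myapp")         # built by the manager, inherits root's level

print(type(log).__name__, log.getEffectiveLevel() == logging.INFO)
```

With <code>fileConfig</code> the same idea applies — pass <code>disable_existing_loggers=False</code> and call it once at startup rather than inside each logger's constructor.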
|
<python><oop><debugging><inheritance><logging>
|
2023-04-19 18:07:34
| 1
| 729
|
heresthebuzz
|
76,057,604
| 6,546,694
|
Method not picking up default argument values and the local argument values persist between separate calls to the method
|
<p>What should the following code print?</p>
<pre><code>class C:
def b(self,d = {'k':0}):
print(d)
d['k'] += 1
def a(self):
self.b()
c1 = C()
for i in range(3):
c1.a()
</code></pre>
<p>I could have sworn it should be</p>
<pre><code>{'k': 0}
{'k': 0}
{'k': 0}
</code></pre>
<p>since I am calling <code>self.b()</code> without any arguments, so it should pick up the default value of <code>d</code> each time, which is <code>{'k':0}</code>. I just don't know what I am overlooking here, but it prints</p>
<pre><code>{'k': 0}
{'k': 1}
{'k': 2}
</code></pre>
<p>What is wrong with the following two statements?</p>
<ol>
<li>It should pick the default value of <code>d</code> when <code>b</code> is called without arguments</li>
<li><code>d</code> should be local to the method <code>b</code> and should not persist between different calls to the method <code>b</code></li>
</ol>
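The second statement is the one that fails: default argument values are evaluated once, when <code>def</code> executes, and the resulting object is stored on the function itself, so every call without an argument mutates the same dict. A minimal sketch of both the pitfall and the standard <code>None</code>-sentinel fix:

```python
def buggy(d={'k': 0}):        # one dict, created at def time, shared by all calls
    d['k'] += 1
    return d['k']

def fixed(d=None):            # sentinel: build a fresh dict on every call
    if d is None:
        d = {'k': 0}
    d['k'] += 1
    return d['k']

print(buggy(), buggy(), buggy())  # 1 2 3
print(fixed(), fixed(), fixed())  # 1 1 1
```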
|
<python><python-3.x>
|
2023-04-19 18:05:46
| 0
| 5,871
|
figs_and_nuts
|
76,057,485
| 15,412,256
|
Pandas 2.0 pyarrow backend datetime operation
|
<p>I have the following pandas dataframe object using the pyarrow back end:</p>
<pre class="lang-py prettyprint-override"><code>crsp_m.info(verbose = True)
out:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4921811 entries, 0 to 4921810
Data columns (total 87 columns):
# Column Dtype
--- ------ -----
0 permno int64[pyarrow]
1 secinfostartdt date32[day][pyarrow]
2 secinfoenddt date32[day][pyarrow]
3 securitybegdt date32[day][pyarrow]
4 securityenddt date32[day][pyarrow]
</code></pre>
<p>I want to push these dates forward to the month end, similar to what I did using the pandas datetime:</p>
<pre class="lang-py prettyprint-override"><code>crsp_m["date"] = pd.to_datetime(crsp_m.date)
crsp_m["date"] = crsp_m.date + pd.tseries.offsets.MonthEnd(0)
</code></pre>
<p>What would be the equivalent operation for a <code>date32[day][pyarrow]</code> column?</p>
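Date offsets operate on pandas' own datetime dtype, so one workable pattern (a sketch) is to round-trip: cast the Arrow column to <code>datetime64</code>, apply <code>MonthEnd</code>, then cast back with <code>.astype("date32[day][pyarrow]")</code> — that last cast assumes pandas 2.0's ArrowDtype string aliases. The offset step itself looks like this:

```python
import pandas as pd

# dates as they'd look after casting the Arrow column via .astype("datetime64[ns]")
s = pd.to_datetime(pd.Series(["2023-04-19", "2023-02-03"]))

# MonthEnd(0) rolls forward to the month end (and leaves month ends in place)
month_end = s + pd.tseries.offsets.MonthEnd(0)
print(month_end.dt.strftime("%Y-%m-%d").tolist())  # ['2023-04-30', '2023-02-28']
```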
|
<python><pandas><jupyter-notebook><pyarrow>
|
2023-04-19 17:50:46
| 1
| 649
|
Kevin Li
|
76,057,458
| 15,706,665
|
Reproduce color palette from an image containing that color palette (IVIS machine)
|
<p>I would like to know if that is possible to reproduce a color palette from an image.</p>
<p>Here is an <a href="https://resources.perkinelmer.com/lab-solutions/resources/docs/tch_010887_01_subject_rois.pdf" rel="nofollow noreferrer">example</a>. Pages 1 and 3 show a rainbow palette in the figure legend.</p>
<p>The software is proprietary and has very little room for customization. I would like to regenerate the legend by R (or any other programming language, e.g. python) but retain the scale/ range of the colors.</p>
<p>I recognize that some color picker <a href="https://imagecolorpicker.com/" rel="nofollow noreferrer">tools</a> are available. However, the resolution of the palette is quite low, and often 3-4 colors are displayed in the same row. I am not sure which color I should pick in the raster image.</p>
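In principle, yes: crop a clean screenshot of the color bar, then average each row of pixels across the bar's width — the averaging smooths out the compression noise that makes single-pixel picks unreliable. A sketch with a synthesized gradient standing in for the cropped legend (with a real file you would load it, e.g. via PIL's <code>Image.open</code> and <code>np.asarray</code>, then do the same row-wise mean):

```python
import numpy as np

# stand-in for a cropped legend: 100-row blue -> red vertical gradient, 10 px wide
rows = 100
legend = np.zeros((rows, 10, 3))
legend[..., 0] = np.linspace(0, 255, rows)[:, None]    # red channel ramps up
legend[..., 2] = np.linspace(255, 0, rows)[:, None]    # blue channel ramps down

# one representative RGB per row: average across the bar's width
palette = legend.mean(axis=1).round().astype(np.uint8)
print(palette[0], palette[-1])  # first row is pure blue, last row pure red
```

The resulting list of row colors can then be interpolated into a colormap of any resolution in R or matplotlib.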
|
<python><r><colors><palette>
|
2023-04-19 17:47:16
| 2
| 485
|
William Wong
|
76,057,410
| 12,300,981
|
How to have approx_fprime return an array (multiple values) instead of scalar?
|
<p>I'm trying to figure out how I can get approx_fprime to return multiple values, instead of just a scalar.</p>
<p>Let's say you're doing error propagation through a system of equations:</p>
<pre><code>from scipy.optimize import fsolve, approx_fprime
def pipeline2(inp):
sol1,sol1A,sol1B,sol1AB=inp
def equations(initial):
x,y,k1,k=initial
sol1_equation=((k*k1)/(1+k1))-sol1
sol1A_equaiton=((k*k1*x**2)/(1+k1*x))-sol1A
sol1B_equation=((k*k1*y**2)/(1+k1*y))-sol1B
sol1AB_equation=((k*k1*x**2*y**2)/(1+k1*x*y))-sol1AB
return(sol1_equation,sol1A_equaiton,sol1B_equation,sol1AB_equation)
val1,val2,val3,val4=fsolve(equations,(7,30,0.02,500))
return np.array([val1,val2,val3,val4])
</code></pre>
<p>Now normally I return an array, because I can easily calculate the partial derivatives of all values all at once as such</p>
<pre><code>h=1e-10
differential=np.array([(pipeline2(sol1_value+h,sol1A_value,sol1B_value,sol1AB_value)-pipeline2(sol1_value,sol1A_value,sol1B_value,sol1AB_value))/h,(pipeline2(sol1_value,sol1A_value+h,sol1B_value,sol1AB_value)-pipeline2(sol1_value,sol1A_value,sol1B_value,sol1AB_value))/h,(pipeline2(sol1_value,sol1A_value,sol1B_value+h,sol1AB_value)-pipeline2(sol1_value,sol1A_value,sol1B_value,sol1AB_value))/h,(pipeline2(sol1_value,sol1A_value,sol1B_value,sol1AB_value+h)-pipeline2(sol1_value,sol1A_value,sol1B_value,sol1AB_value))/h])
</code></pre>
<p>This provides a matrix of partial derivatives for all 4 variables, for all 4 sol1s (4x4 matrix). Naturally, while the output is nice and clean, the required code to generate it is ugly (this long line). Hence why I'd prefer to use approx_fprime (no point in reinventing the wheel here if it does the same thing I want).</p>
<p>However, approx_fprime returns a scalar only.</p>
<p>Anyone have any suggestions how I can calculate the partial derivatives for val1,val2,val3,val4 with respect to sol1,sol1A,sol1B,sol1AB in an elegant fashion using approx_fprime, say in one line like I have setup?</p>
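If your SciPy version's <code>approx_fprime</code> only supports scalar-valued functions, a small forward-difference Jacobian helper keeps the one-call feel (a sketch; note rows here are outputs and columns are inputs, i.e. the transpose of the matrix built by hand above):

```python
import numpy as np

def jacobian(f, x, h=1e-7):
    """Forward-difference Jacobian: J[i, j] = d f_i / d x_j."""
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(f(x))
    J = np.empty((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h                      # perturb one input at a time
        J[:, j] = (np.atleast_1d(f(xp)) - f0) / h
    return J

# toy system to verify; for the question it would be jacobian(pipeline2, inp)
f = lambda v: np.array([v[0] ** 2, v[0] * v[1]])
J = jacobian(f, [2.0, 3.0])
print(J.round(3))  # approximately [[4, 0], [3, 2]]
```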
|
<python><scipy>
|
2023-04-19 17:41:00
| 1
| 623
|
samman
|
76,057,350
| 63,898
|
Python 3.8 Flask log doesn't print to console, only to file
|
<p>I have this simple code running in Flask. I want to make it print to console and to file. Currently I can only print to file.</p>
<p>The code:</p>
<pre><code>import logging
from flask import Flask, make_response
from flask_cors import CORS
from Handlers import Handlers
app = Flask(__name__)
# logging.basicConfig(filename='record.log',
# level=logging.DEBUG, format='%(asctime)s %(levelname)s %(name)s %(threadName)s : %(message)s')
CORS(app, supports_credentials=True,resources=r'/*'
,methods=['GET', 'HEAD', 'POST', 'OPTIONS', 'PUT', 'PATCH', 'DELETE']
,allow_headers=["Accept","Authorization","Content-Type","X-CSRF-Token"])
app.logger.info("Start broker-service port %s",71)
@app.route('/',methods=['POST'])
def handleBroker():
handlers = Handlers()
return handlers.broker()
@app.route('/handle',methods=['POST'])
def handleSubmission():
handlers = Handlers()
return handlers.handleSubmission()
if __name__ == '__main__':
logger = logging.getLogger('dev')
logger.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s %(levelname)s %(name)s %(threadName)s : %(message)s')
fileHandler = logging.FileHandler('test.log')
fileHandler.setLevel(logging.INFO)
fileHandler.setFormatter(formatter)
consoleHandler = logging.StreamHandler()
consoleHandler.setLevel(logging.INFO)
consoleHandler.setFormatter(formatter)
app.logger.addHandler(fileHandler)
app.logger.addHandler(consoleHandler)
app.run(host="localhost", port=71, debug=True)
</code></pre>
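One way to sanity-check the handler wiring independently of Flask is to drive the same setup against an in-memory stream — if this pattern produces output, the logging configuration itself is fine and the missing console lines are more likely about where the Flask process's stdout/stderr end up (the logger name <code>"dev"</code> below is arbitrary, matching the snippet):

```python
import io
import logging

stream = io.StringIO()                      # stand-in for sys.stdout
fmt = logging.Formatter("%(levelname)s %(name)s : %(message)s")

console = logging.StreamHandler(stream)
console.setLevel(logging.INFO)
console.setFormatter(fmt)

log = logging.getLogger("dev")
log.setLevel(logging.INFO)
log.addHandler(console)
log.propagate = False                       # keep this demo self-contained

log.info("Start broker-service port %s", 71)
print(stream.getvalue().strip())  # INFO dev : Start broker-service port 71
```

Also note the snippet attaches the handlers to <code>app.logger</code> but the early <code>app.logger.info("Start broker-service ...")</code> call runs before any handler exists, so that particular line can never appear anywhere.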
|
<python><python-3.x><flask><logging><console>
|
2023-04-19 17:34:16
| 0
| 31,153
|
user63898
|
76,057,335
| 16,578,438
|
add character at character count in pyspark
|
<p>I'm looking for a way to insert a special character at specific character positions in a string in <code>pyspark</code>:</p>
<pre><code>"M202876QC0581AADMM01"
to
"M-202876-QC0581-AA-DMM01"
(1-6-6-2-)
insertion after 1char then after 6char then after 6char then after 2char
</code></pre>
<p>Tried something like below but no luck yet :</p>
<pre><code>df = spark.createDataFrame([('M202876QC0581AADMM01',)], ['str'])
(df.withColumn("str", F.regexp_replace(F.col("str") , r"(\d{0})(\d{3})(\d{3})" , "$1-$2-$3"))).collect()
Out[121]: [Row(str='M-202-876QC0581AADMM01')]
</code></pre>
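One anchored pattern with four fixed-width groups can express the 1-6-6-2 split in a single replacement. The sketch below verifies the pattern with Python's `re`; the assumption (untested on the Spark side) is that the same pattern drops into `F.regexp_replace` with `$1-$2-$3-$4-` as the replacement:

```python
import re

# Four fixed-width groups for the 1-6-6-2 prefix; the trailing dash in the
# replacement separates the untouched tail ("DMM01").
pattern = r"^(.{1})(.{6})(.{6})(.{2})"

result = re.sub(pattern, r"\1-\2-\3-\4-", "M202876QC0581AADMM01")

# PySpark equivalent (sketch):
# df.withColumn("str", F.regexp_replace("str",
#     r"^(.{1})(.{6})(.{6})(.{2})", "$1-$2-$3-$4-"))
```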
|
<python><dataframe><apache-spark><pyspark><apache-spark-sql>
|
2023-04-19 17:32:30
| 1
| 428
|
NNM
|
76,057,312
| 20,646,427
|
Why python interpreter not using venv
|
<p>I set up a virtual environment in PyCharm, and the console shows that I am using the venv's Python interpreter, but for some reason I still have the packages of my main interpreter.</p>
<p>How can I solve that?</p>
<p>This is my venv interpreter and I am using it:
<a href="https://i.sstatic.net/nlQnJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nlQnJ.png" alt="enter image description here" /></a></p>
<p>But for some reason the terminal still uses the previous interpreter from another project: <a href="https://i.sstatic.net/x4Gz3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x4Gz3.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/2Sc7y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2Sc7y.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/kwad9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kwad9.png" alt="enter image description here" /></a></p>
|
<python><windows><pycharm>
|
2023-04-19 17:29:51
| 1
| 524
|
Zesshi
|
76,057,301
| 13,762,083
|
Fit a shape enclosing data points
|
<p>I have some data points as shown in the image:
<a href="https://i.sstatic.net/ErpML.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ErpML.png" alt="enter image description here" /></a></p>
<p>I want to fit a curve that encloses all these data points, e.g. an ellipse or a circle. How do I do this?</p>
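A simple (though not minimal) enclosing circle can be obtained by centering on the centroid and taking the distance to the farthest point as the radius; a sketch on synthetic data (the array `pts` is a made-up stand-in for the scatter data in the image):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 2))          # synthetic stand-in for the data points

center = pts.mean(axis=0)                # centroid of the cloud
radius = np.linalg.norm(pts - center, axis=1).max()  # reach the farthest point

# Every point now lies inside (or on) the circle; drawing it with matplotlib
# would be e.g. plt.gca().add_patch(plt.Circle(center, radius, fill=False)).
```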
|
<python><matplotlib><curve-fitting>
|
2023-04-19 17:28:04
| 1
| 409
|
ranky123
|
76,057,261
| 21,346,793
|
Why does React to Flask call fail with CORS despite flask_cors being included
|
<p>I need to read data from Flask into React.
React:</p>
<pre><code>const data = {
name: 'John',
email: 'john@example.com'
};
axios.post('http://127.0.0.1:5000/api/data', data)
.then(response => {
console.log(response);
})
.catch(error => {
console.log(error);
});
</code></pre>
<p>Flask:</p>
<pre><code>from flask import Flask, request, jsonify
from flask_cors import CORS
app = Flask(__name__)
CORS(app, origins='http://localhost:3000')
@app.route('/api/data', methods=['POST'])
def process_data():
    data = request.get_json()
    # process the data
    response_data = {'message': 'Success'}
    return jsonify(response_data), 200

if __name__ == '__main__':
    app.run(debug=True)
</code></pre>
<p>And in the browser I get an error: Access to XMLHttpRequest at 'http://127.0.0.1:5000/api/data' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
ShortPage.jsx:24 AxiosError {message: 'Network Error', name: 'AxiosError', code: 'ERR_NETWORK', config: {…}, request: XMLHttpRequest, …}</p>
<p>How can I fix this?</p>
|
<python><reactjs><flask>
|
2023-04-19 17:22:28
| 1
| 400
|
Ubuty_programmist_7
|
76,057,129
| 3,286,861
|
Use seaborn object interface to plot overlapping density plots, added inside a for loop, each having its own color/label shown in a legend
|
<p>Using seaborn python library, I am trying to make several density plots overlapping each other in the same figure and I want to color/label each of the lines. Using seaborn objects interface I am able to make the density plots within a for loop. But I cannot add color/label to each density plot.</p>
<p>I understand that there are other ways e.g., I create a dataframe with all the data and corresponding labels first and then pass it to seaborn plot(). But I was just wondering if below code (using seaborn objects interface) could work with some modifications. Please advise.</p>
<ul>
<li><strong>Plot using Seaborn objects</strong></li>
</ul>
<p>Code:</p>
<p>Here I am setting color=s_n, which is the number of samples that I drew from the normal distribution. I want to label each density plot with the number of samples (please also see the desired plot towards the end of the post).</p>
<pre><code>import scipy.stats as st
import seaborn.objects as so
num_samples = 2000
normal_distr = st.norm(1,1)
sp = so.Plot()
for s_n in range(10, num_samples, 400):
    sample_normal = normal_distr.rvs(s_n)
    sp = sp.add(so.Line(), so.KDE(), x=sample_normal, color=s_n)
sp.show()
</code></pre>
<p>The plots looks like this and it does not color/label each density line separately.</p>
<p><a href="https://i.sstatic.net/hj11z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hj11z.png" alt="KDE plot without individual color for each density" /></a></p>
<ul>
<li><strong>Desired Plot</strong></li>
</ul>
<p>If I directly use seaborn kdeplot, I can get the desired plot (below). But I was just wondering if I can use seaborn objects instead of direct kdeplot</p>
<p><strong>Code using kdeplot:</strong></p>
<pre><code>import scipy.stats as st
import seaborn as sns
import matplotlib.pyplot as plt
num_samples = 2000
normal_distr = st.norm(1,1)
for s_n in range(10, num_samples, 400):
    sample_normal = normal_distr.rvs(s_n)
    sns.kdeplot(x=sample_normal, label=s_n)
plt.legend()
</code></pre>
<p>The (desired) plot:</p>
<p><a href="https://i.sstatic.net/K6Qe1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K6Qe1.png" alt="KDE plot with individual color for each density" /></a></p>
|
<python><seaborn><kdeplot><seaborn-objects>
|
2023-04-19 17:05:39
| 1
| 508
|
rkmalaiya
|
76,057,076
| 4,222,261
|
How to stream Agent's response in Langchain?
|
<p>I am using Langchain with Gradio interface in Python. I have made a conversational agent and am trying to stream its responses to the Gradio chatbot interface. I have had a look at the Langchain docs and could not find an example that implements streaming with Agents.
Here are some parts of my code:</p>
<pre><code># Loading the LLM
def load_llm():
return AzureChatOpenAI(
temperature=hparams["temperature"],
top_p=hparams["top_p"],
max_tokens=hparams["max_tokens"],
presence_penalty=hparams["presence_penalty"],
frequency_penalty=hparams["freq_penaulty"],
streaming=True,
callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
verbose=True,
model_name=hparams["model"],
deployment_name = models_dict[hparams["model"]],
)
# Loading the agent
def load_chain(memory, sys_msg, llm):
"""Logic for loading the chain you want to use should go here."""
agent_chain = initialize_agent(tools,
llm,
agent="conversational-react-description",
verbose=True,
memory=memory,
agent_kwargs = {"added_prompt": sys_msg},
streaming=True,
)
return agent_chain
# Creating the chatbot to be used in Gradio.
class ChatWrapper:
def __init__(self, sys_msg):
self.lock = Lock()
self.memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True,)
self.chain = load_chain(self.memory, sys_msg, load_llm())
self.sysmsg = sys_msg
def __call__(
self, api_key: str, inp: str, history: Optional[Tuple[str, str]], chain: Optional[ConversationChain]
):
"""Execute the chat functionality."""
self.lock.acquire()
try:
history = history or []
# Run chain and append input.
output = self.chain.run(input=inp)
history.append((inp, output))
except Exception as e:
raise e
finally:
self.lock.release()
return history, history
</code></pre>
<p>I currently can stream into the terminal output but what I am looking for is streaming in my Gradio interface.</p>
<p>Can you please help me with that?</p>
|
<python><chatgpt-api><gradio><langchain>
|
2023-04-19 16:58:22
| 4
| 485
|
MRF
|
76,057,051
| 7,339,624
|
How to create a leptokurtic/platykurtic distribution in scipy having four moments?
|
<p>As <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html" rel="nofollow noreferrer">scipy documentation</a> says, we can easily calculate the first four moments of a normal distribution by this:</p>
<pre><code>mean, var, skew, kurt = norm.stats(moments='mvsk')
</code></pre>
<p>But I can't find it on their documentation, that by having all four moments <code>mean, var, skew, kurt</code>(skewness and kurtosis) how can you create a semi-normal distribution object? (for example, how can I make a leptokurtic or a platykurtic distribution in scipy?)</p>
<p>P.S. I know I can use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.skewnorm.html" rel="nofollow noreferrer">scipy.stats.skewnorm</a> for skewed normal. But that doesn't accept the 4th-moment kurtosis either.</p>
|
<python><scipy><statistics><normal-distribution>
|
2023-04-19 16:54:35
| 1
| 4,337
|
Peyman
|
76,057,034
| 2,961,927
|
Python Matplotlib produces larger and blurrier PDF than R
|
<p>Consider the following:</p>
<ol>
<li><p>Plot a histogram using R and save it in PDF:</p>
<pre><code> set.seed(42)
x = c(rnorm(1000, 1, 1), rnorm(1000, 8, 3))
pdf("Rplot.pdf", width = 10, height = 3.33)
par(mar = c(4, 5, 0, 0), family = "serif")
hist(x, breaks = 100, border = NA, col = "gray",
xlab = "x", ylab = "Frequency", cex.lab = 2.75, cex.axis = 2,
main = "", las = 1, xaxt = "n")
axis(side = 1, at = seq(-2.5, by = 2.5, len = 30), cex.axis = 2)
dev.off()
</code></pre>
</li>
<li><p>Plot a histogram using Python and save it in PDF:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
x = np.concatenate((np.random.normal(1, 1, size = 1000),
np.random.normal(8, 3, size = 1000)))
plt.close()
plt.rcParams["figure.figsize"] = (10, 3.33)
plt.rcParams["font.family"] = "Times New Roman"
plt.rcParams["axes.spines.bottom"] = True
plt.rcParams["axes.spines.left"] = True
plt.rcParams["axes.spines.top"] = False
plt.rcParams["axes.spines.right"] = False
tmp = plt.hist(x, bins = 100, color = 'lightgray')
plt.xlabel('x', fontsize = 30)
plt.ylabel('Frequency', fontsize = 30)
tmp = plt.xticks(fontsize = 25)
tmp = plt.yticks(fontsize = 25)
plt.tight_layout()
plt.savefig("pyPlot.pdf", bbox_inches='tight')
</code></pre>
</li>
</ol>
<p>Not only is <code>pyPlot.pdf</code> (13KB) 2.6x the size of <code>Rplot.pdf</code> (5KB), but if we compare them in Adobe Reader, <code>pyPlot.pdf</code> is also noticeably blurrier than <code>Rplot.pdf</code>.</p>
<p>Some further investigation shows that, if we save both plots in <code>.svg</code>, then they are totally comparable. <code>pyPlot.pdf</code> also appears to be a direct clone of <code>pyPlot.svg</code> in terms of visual quality.</p>
<p>Is it possible to generate the level of visual quality and file size of <code>Rplot.pdf</code> using Matplotlib?</p>
<p>PS: I uploaded the two <code>.pdf</code>s here: <a href="https://github.com/WhateverLiu/twoImages" rel="nofollow noreferrer">https://github.com/WhateverLiu/twoImages</a> . Please check the file size and visual quality. Even in Chrome, if you look closely, <code>Rplot.pdf</code> prints smoother labels. But the major problem is that <code>pyPlot.pdf</code> is 2.5x larger, which really frustrates my work. Is it simply because R performed extra optimization in its graphics device? I don't want to give up on Python yet.</p>
|
<python><r><matplotlib><pdf><graphics>
|
2023-04-19 16:51:48
| 0
| 1,790
|
user2961927
|
76,057,033
| 181,783
|
Getting version of uninstalled Python module
|
<p>I have just joined a project that defined the version string in <code>src/__init__.py</code> like so</p>
<pre><code>#src/__init__.py
version = '0.0.7'
</code></pre>
<p>and the application in the same directory</p>
<pre><code>#src/acme.py
...
# get version here
</code></pre>
<p>I'd like to get the version string in the application, but everything I've tried has failed, I suspect because much of the advice out there is geared towards installed modules, whereas the acme module is not installed.</p>
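Since `version` is defined in the package's `__init__.py`, a relative import inside `acme.py` should be able to pick it up without the package being installed. A sketch that rebuilds the described layout in a scratch directory to demonstrate (the file contents come from the question; the scratch-directory setup is only for illustration):

```python
import os
import sys
import tempfile

# Recreate the described layout in a scratch directory.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "src")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as fh:
    fh.write("version = '0.0.7'\n")
with open(os.path.join(pkg, "acme.py"), "w") as fh:
    # Inside the package, a relative import reads the sibling __init__.py
    fh.write("from . import version\n")

sys.path.insert(0, root)
import src.acme  # src.acme.version is now '0.0.7'
```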
|
<python><python-3.x><version>
|
2023-04-19 16:51:17
| 0
| 5,905
|
Olumide
|
76,057,015
| 15,226,448
|
Python visualization in Power BI with slicers
|
<p>I am trying to create a Python visualization in Power BI from the data in my df and the plot pizza comparison code from mplsoccer: <a href="https://mplsoccer.readthedocs.io/en/latest/gallery/pizza_plots/plot_pizza_comparison.html" rel="nofollow noreferrer">https://mplsoccer.readthedocs.io/en/latest/gallery/pizza_plots/plot_pizza_comparison.html</a></p>
<p>This is my df:</p>
<p><a href="https://i.sstatic.net/vp3Km.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vp3Km.png" alt="df" /></a></p>
<p>It has a total of 14 columns, the first 2 being the names of the players I want to compare in the visualization. I thought that making two columns with the same players might be beneficial when working with slicers (one for Player 1 and one for Player 2), but I don't know if this is correct. If there is a way to avoid it, I would appreciate some advice.</p>
<p>The next step I do is to create the Python object with this code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from highlight_text import fig_text
from mplsoccer import PyPizza, FontManager
import numpy as np
from scipy import stats
import math
import matplotlib.pyplot as plt
from highlight_text import fig_text
df = dataset
df = df.fillna(0)
params = list(df.columns)
params = params[2:]
jugador1 = df['Jugador 1'].reset_index()
#we pass player's 1 Values :
jugador1 = list(jugador1.loc[0])
jugador2 = df['Jugador 2'].reset_index()
#we pass player's 2 Values :
jugador2 = list(jugador2.loc[0])
#we slice the list to remove the index-index-name columns :
jugador1 = jugador1[3:]
jugador2 = jugador2[3:]
#calculate percentile score of the values compared to the original dataframe :
values1 = []
values2 = []
for x in range(len(params)):
    values1.append(math.floor(stats.percentileofscore(df[params[x]], jugador1[x])))

for x in range(len(params)):
    values2.append(math.floor(stats.percentileofscore(df[params[x]], jugador2[x])))
font_normal = FontManager('https://raw.githubusercontent.com/google/fonts/main/apache/roboto/'
'Roboto%5Bwdth,wght%5D.ttf')
font_italic = FontManager('https://raw.githubusercontent.com/google/fonts/main/apache/roboto/'
'Roboto-Italic%5Bwdth,wght%5D.ttf')
font_bold = FontManager('https://raw.githubusercontent.com/google/fonts/main/apache/robotoslab/'
'RobotoSlab%5Bwght%5D.ttf')
# instantiate PyPizza class
baker = PyPizza(
params=params, # list of parameters
background_color="#EBEBE9", # background color
straight_line_color="#222222", # color for straight lines
straight_line_lw=1, # linewidth for straight lines
last_circle_lw=1, # linewidth of last circle
last_circle_color="#222222", # color of last circle
other_circle_ls="-.", # linestyle for other circles
other_circle_lw=1 # linewidth for other circles
)
# plot pizza
fig, ax = baker.make_pizza(
values1, # list of values
compare_values=values2, # comparison values
figsize=(13,13), # adjust figsize according to your need
kwargs_slices=dict(
facecolor="#1A78CF", edgecolor="#222222",
zorder=2, linewidth=1
), # values to be used when plotting slices
kwargs_compare=dict(
facecolor="#FF9300", edgecolor="#222222",
zorder=2, linewidth=1,
),
kwargs_params=dict(
color="#000000", fontsize=12,
fontproperties=font_normal.prop, va="center"
), # values to be used when adding parameter
kwargs_values=dict(
color="#000000", fontsize=12,
fontproperties=font_normal.prop, zorder=3,
bbox=dict(
edgecolor="#000000", facecolor="cornflowerblue",
boxstyle="round,pad=0.2", lw=1
)
), # values to be used when adding parameter-values labels
kwargs_compare_values=dict(
color="#000000", fontsize=12, fontproperties=font_normal.prop, zorder=3,
bbox=dict(edgecolor="#000000", facecolor="#FF9300", boxstyle="round,pad=0.2", lw=1)
), # values to be used when adding parameter-values labels
)
# add title
fig_text(
0.515, 0.99, "Comparison between Player 1 and Player 2", size=17, fig=fig,
highlight_textprops=[{"color": '#1A78CF'}, {"color": '#EE8900'}],
ha="center", fontproperties=font_bold.prop, color="#000000"
)
plt.show()
</code></pre>
<p>I also create 2 independent slicers for Player 1 and another one for Player 2. That is, when choosing an option in Player 1, the slicer of Player 2 is not filtered and so I can choose another different player to compare their metrics. This is how is shown:</p>
<p><a href="https://i.sstatic.net/NjeCb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NjeCb.png" alt="powerbi" /></a></p>
<p>But as you can see, there is an error:</p>
<p><a href="https://i.sstatic.net/V6Wyw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V6Wyw.png" alt="error" /></a></p>
<p>I think the key is in these lines of code:</p>
<pre><code>jugador1 = df['Jugador 1'].reset_index()
#we pass player's 1 Values :
jugador1 = list(jugador1.loc[0])
jugador2 = df['Jugador 2'].reset_index()
#we pass player's 2 Values :
jugador2 = list(jugador2.loc[0])
#we slice the list to remove the index-index-name columns :
jugador1 = jugador1[3:]
jugador2 = jugador2[3:]
</code></pre>
<p>However, I don't know exactly what might be going wrong. I also tried removing <code>.reset_index()</code> but it didn't work. It's something related to lists, ranges and index but I don't have the knowledge to find it.</p>
<p>How can I solve it? I need exactly that radar chart and I'm very interested in working with slicers in Power BI + Python. Thanks in advance!</p>
|
<python><powerbi>
|
2023-04-19 16:48:39
| 1
| 353
|
nokvk
|
76,056,940
| 5,343,362
|
HTTP call working with python but not with Android okhhtp
|
<p>I am facing a strange issue with Android.</p>
<p>I want to make an HTTP call to <a href="https://api.moffi.io/api/orders/add" rel="nofollow noreferrer">https://api.moffi.io/api/orders/add</a>.</p>
<p>I am using retrofit and defined this method:</p>
<pre><code>@Headers("Content-Type: application/json")
@POST("orders/add")
fun order(@Header("Authorization") token: String, @Body order: Order): Call<JsonNode>
</code></pre>
<p>When calling it with an auth token, I get:</p>
<p>Error: USER_MUST_BE_CONNECTED.</p>
<p>Now when I do the same thing with python:</p>
<pre><code>order = query(method="POST", url="/orders/add", data=body_order, auth_token=auth_token)
def query(  # pylint: disable=too-many-arguments
    method: str,
    url: str,
    auth_token: str,
    params: Dict[str, str] = None,
    headers: Dict[str, str] = None,
    data: Dict[str, Any] = None,
) -> Dict[str, Any]:
    """
    Query Moffi API
    :param method: Used method (GET, POST, OPTIONS…)
    :param url: Moffi endpoint URL
    :param auth_token: Authentication token
    :param headers: custom headers
    :param data: body data
    :return: Json response
    :raise: RequestException
    """
    if not url.startswith(MOFFI_API):
        if not url.startswith("/"):
            url = f"/{url}"
        url = f"{MOFFI_API}{url}"
    if params:
        url = f"{url}?{urlencode(params)}"

    ciheaders = CaseInsensitiveDict()
    if headers is not None:
        for key, value in headers.items():
            ciheaders[key] = value
    ciheaders["Accept"] = "application/json"
    ciheaders["Authorization"] = f"Bearer {auth_token}"

    if method.lower() not in requests.__dict__:
        raise RecursionError(f"Unknown method {method}")
    method = requests.__dict__[method.lower()]

    try:
        result = method(url=url, headers=ciheaders, json=data)
    except requests.exceptions.RequestException as ex:
        raise RequestException from ex
    if result.status_code > 399:
        raise RequestException(f"Request error {result.status_code} {result.text}")
    return result.json()
</code></pre>
<p>It is working fine.</p>
<p>I am guessing there is a difference, but I do not understand what it is.</p>
<p>For more details, here is the Android code: <a href="https://github.com/moutyque/MoffiSimple" rel="nofollow noreferrer">https://github.com/moutyque/MoffiSimple</a></p>
<p>And the python: <a href="https://github.com/tduboys/moffi" rel="nofollow noreferrer">https://github.com/tduboys/moffi</a></p>
<p>Thanks for any help.</p>
|
<python><android><http><okhttp>
|
2023-04-19 16:39:49
| 0
| 301
|
Quentin M
|
76,056,936
| 1,422,096
|
MySQL "data" folder keeps growing, even after DROP TABLE
|
<p>I'm using MySQL (8.0.33-winx64, for Windows) with Python 3 and <code>mysql.connector</code> package.</p>
<p>Initially my <code>mysql-8.0.33-winx64\data</code> folder was rather small: < 100 MB.</p>
<p>Then after a few tests of <code>CREATE TABLE...</code>, <code>INSERT...</code> and <code>DROP TABLE...</code>, I notice that even after I totally drop the tables, the <code>data</code> folder keeps growing:</p>
<ul>
<li><code>#innodb_redo</code> folder seems to stay at max 100 MB</li>
<li><code>#innodb_temp</code> seems to be small</li>
<li><code>binlog.000001</code>: this one seems to be the culprit: it keeps growing even if I drop tables!</li>
</ul>
<p><strong>How to clean this data store after I drop tables, to avoid unused disk space with MySQL?</strong></p>
<p>Is it possible directly from the Python 3 <code>mysql.connector</code> API? Or from a SQL command to be executed (I already tried "OPTIMIZE" without success)? Or do I need to use an OS function manually (<code>os.remove(...)</code>)?</p>
<hr />
<p><strike>Note: the config file seems to be in <code>mysql-8.0.33-winx64\data\auto.cnf</code> in the portable Windows version (not run as a service, but started with <code>mysqld --console</code>)</strike> (no default config file is created after a first run of the server; we can create it in <code>mysql-8.0.33-winx64\my.cnf</code>)</p>
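For what it's worth, the binary log is expected to grow on every write (it records changes for replication and point-in-time recovery) and is not reclaimed by DROP TABLE. Assuming MySQL 8, retention can be shortened in the config, or binary logging disabled if replication is not needed (the values below are illustrative, not recommendations):

```ini
# my.cnf (illustrative values)
[mysqld]
# expire binary logs after one day instead of the 30-day default
binlog_expire_logs_seconds = 86400
# or disable binary logging entirely if replication/PITR is not needed
# skip-log-bin
```

Existing files should then be reclaimable from any client session (including `cursor.execute` in `mysql.connector`) with `PURGE BINARY LOGS BEFORE NOW();`.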
|
<python><mysql><windows>
|
2023-04-19 16:39:24
| 2
| 47,388
|
Basj
|
76,056,867
| 12,319,746
|
CFFI Backend not found Azure functions
|
<p>Trying to deploy an Azure Python function from an Azure DevOps pipeline. The function gives the error</p>
<blockquote>
<p>ModuleNotFoundError: No module named _cffi_backend</p>
</blockquote>
<p>The interpreter is correct.
This is my <code>requirements.txt</code></p>
<pre><code>azure-functions
requests==2.26.0
cffi==1.14.5
azure-storage-blob==12.9.0
</code></pre>
<p>After trying numerous variations, this is the final bash script I am using:</p>
<pre><code>sudo apt-get install libffi-dev
python -m venv funcenv
source funcenv/bin/activate
pip install --upgrade pip
pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt
pip install --target="./.python_packages/lib/site-packages" --no-cache-dir cffi
</code></pre>
|
<python><azure-functions>
|
2023-04-19 16:30:50
| 1
| 2,247
|
Abhishek Rai
|
76,056,810
| 12,175,820
|
Making a class with empty name using type
|
<p>According to Python's documentation for the built-in function <a href="https://docs.python.org/3/library/functions.html#type" rel="nofollow noreferrer"><code>type</code></a>:</p>
<blockquote>
<p><code>class type(name, bases, dict, **kwds)</code></p>
<p>[...]</p>
<p>With three arguments, return a new type object. This is essentially a dynamic form of the <a href="https://docs.python.org/3/reference/compound_stmts.html#class" rel="nofollow noreferrer"><code>class</code></a> statement. The <em>name</em> string is the class name and becomes the <a href="https://docs.python.org/3/library/stdtypes.html#definition.__name__" rel="nofollow noreferrer"><code>__name__</code></a> attribute.</p>
</blockquote>
<p>I have realized that the <em>name</em> string might as well be empty, and the function invocation would still work:</p>
<pre class="lang-py prettyprint-override"><code>>>> type('', (), {})
<class '__main__.'>
</code></pre>
<p>Is there any issue that might arise from doing it?</p>
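One concrete issue is anything that later has to look the class up by its name, e.g. pickling instances; a quick check (assuming the class is created in an ordinary script or module):

```python
import pickle

Anon = type('', (), {})      # class whose __name__ is the empty string

try:
    pickle.dumps(Anon())     # pickle locates the class via getattr(module, '')
    pickled_ok = True
except Exception:            # the empty-name attribute lookup fails
    pickled_ok = False
```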
|
<python>
|
2023-04-19 16:22:14
| 1
| 712
|
Gabriele Buondonno
|
76,056,789
| 16,243,418
|
Dict in Django TemplateView throws Server Error 500, Suggested to use ListView that helps for DetailView
|
<p>I'm attempting to use the dictionary <strong>prices</strong> to retrieve the stock prices from Yahoo Finance, and I get my stock tickers from the Django database. In the HTML template, I am unable to use the dict. Please review the code samples I've included below.</p>
<p>If the datatype for the variable "prices" is likewise a dict, Django produces a 500 server error. I need to pull stock prices from Yahoo Finance and show stock tickers from the Django database in HTML. How can I fix it?</p>
<p>Please also give me some advice on how to show my coins using ListView, since that would help me with DetailView, which is crucial to my web application.</p>
<p>I want to say thank you.</p>
<p><strong>Model:</strong></p>
<pre><code>class Coin(models.Model):
ticker = models.CharField(max_length=10)
def __str__(self):
return self.ticker
</code></pre>
<p><strong>View:</strong></p>
<pre><code>class BucketView(LoginRequiredMixin, TemplateView):
template_name = 'coinbucket/bucket.html'
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
symbols = [symbol.ticker for symbol in Coin.objects.all()]
prices = {}
for symbol in symbols:
ticker = yf.Ticker(symbol)
price = ticker.history(period='1d')['Close'][0]
prices[symbol] = f'$ {price:.6f}'
context["prices"] = prices
return context
</code></pre>
<p><strong>Template:</strong></p>
<pre><code>{% for coin in coins %}
<div class="col-12 col-md-8 my-2">
<div class="card">
<div class="card-body">
<div class="row">
<div class="col">{{ coin }}</div>
<div class="col">{{ prices[coin] }}</div>
</div>
</div>
</div>
</div>
{% endfor %}
</code></pre>
<p>Values of dict <strong>prices</strong>:</p>
<pre><code>{'BTC-USD': '$ 29335.927734', 'ETH-USD': '$ 1987.428223', 'XRP-USD': '$ 0.498745', 'DOGE-USD': '$ 0.090749', 'LTC-USD': '$ 94.516197'}
</code></pre>
<p>Following code snippets are tried, but no results:</p>
<pre><code><div class="col">{{ prices['coin'] }}</div>
</code></pre>
<pre><code><div class="col">{{ prices.coin }}</div>
</code></pre>
<pre><code><div class="col">{{ prices.get(coin, 'N/A') }}</div>
</code></pre>
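Django's template language does not support subscripting a dict with a variable key, which is why all three attempts fail. One possible workaround is to hand the template (ticker, price) pairs instead of a dict and unpack them in the for tag; the view-side idea in plain Python (sample values taken from the question):

```python
# In get_context_data, build pairs instead of a dict keyed by ticker.
prices = {'BTC-USD': '$ 29335.927734', 'ETH-USD': '$ 1987.428223'}
coin_prices = sorted(prices.items())   # [(ticker, price), ...]

# Template side (illustrative):
#   {% for coin, price in coin_prices %}
#     <div class="col">{{ coin }}</div><div class="col">{{ price }}</div>
#   {% endfor %}
```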
|
<python><django><dictionary><django-views><django-templates>
|
2023-04-19 16:20:04
| 1
| 352
|
Akshay Saambram
|
76,056,447
| 10,311,377
|
BrokenPipeError inside gevent library when it is used with Celery. How to overcome the issue?
|
<p>We use <code>Celery</code> with a <code>gevent</code> worker in our application. Workers run as separate Docker containers.</p>
<p>How we run Celery:
<code>celery -A app.worker worker --pool=gevent --loglevel=info --concurrency=50 -E -n woker-1 -Q some_q</code></p>
<p>Once per day (sometimes more often) we get the following error:</p>
<pre class="lang-py prettyprint-override"><code>2023-04-14 05:06:11,023 ERROR celery.worker.consumer.consumer perform_pending_operations() L227 Pending callback raised: BrokenPipeError(32, 'Broken pipe')
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/celery/worker/consumer/consumer.py", line 225, in perform_pending_operations
self._pending_operations.pop()()
File "/usr/local/lib/python3.8/site-packages/vine/promises.py", line 160, in call
return self.throw()
File "/usr/local/lib/python3.8/site-packages/vine/promises.py", line 157, in call
retval = fun(*final_args, **final_kwargs)
File "/usr/local/lib/python3.8/site-packages/kombu/message.py", line 128, in ack_log_error
self.ack(multiple=multiple)
File "/usr/local/lib/python3.8/site-packages/kombu/message.py", line 123, in ack
self.channel.basic_ack(self.delivery_tag, multiple=multiple)
File "/usr/local/lib/python3.8/site-packages/amqp/channel.py", line 1407, in basic_ack
return self.send_method(
File "/usr/local/lib/python3.8/site-packages/amqp/abstract_channel.py", line 70, in send_method
conn.frame_writer(1, self.channel_id, sig, args, content)
File "/usr/local/lib/python3.8/site-packages/amqp/method_framing.py", line 186, in write_frame
write(buffer_store.view[:offset])
File "/usr/local/lib/python3.8/site-packages/amqp/transport.py", line 347, in write
self._write(s)
File "/usr/local/lib/python3.8/site-packages/gevent/_socketcommon.py", line 699, in sendall
return _sendall(self, data_memory, flags)
File "/usr/local/lib/python3.8/site-packages/gevent/_socketcommon.py", line 409, in _sendall
timeleft = __send_chunk(socket, chunk, flags, timeleft, end)
File "/usr/local/lib/python3.8/site-packages/gevent/_socketcommon.py", line 338, in __send_chunk
data_sent += socket.send(chunk, flags)
File "/usr/local/lib/python3.8/site-packages/gevent/_socketcommon.py", line 722, in send
return self._sock.send(data, flags)
BrokenPipeError: [Errno 32] Broken pipe
</code></pre>
<p>From traceback I see that the error comes from <code>"/usr/local/lib/python3.8/site-packages/gevent/_socketcommon.py"</code></p>
<p>I found this issue in <a href="https://github.com/celery/celery/issues/3377" rel="nofollow noreferrer">Github Issue#3377</a>, but it is related to another part of the code (an older version of Celery), though it is also related to the gevent library.</p>
<p>I also found a more recent version of the problem: <a href="https://github.com/celery/celery/issues/7888" rel="nofollow noreferrer">Github Issue#7888</a>. The issue states that the problem was not actually solved in <a href="https://github.com/celery/celery/issues/3377" rel="nofollow noreferrer">Github Issue#3377</a>.</p>
<p>The container (with the Celery worker inside) does not stop, but it stops doing its work; the connection with RabbitMQ is broken and the worker does nothing.</p>
<p>How to overcome the issue?</p>
<p><strong>Possible workaround:</strong></p>
<p>I guess that we can restart the container every hour with the help of some bash scripting + orchestrator logic. But I would like a more robust solution, since the issue can presumably happen more often than once per day, and the restart would be done "manually" on a timer rather than in response to the error.</p>
<p><strong>My (not the best) solution:</strong></p>
<p>Some bash script (you can add commands <code>python ...</code> or <code>celery ...</code> inside the block to run your python application)</p>
<pre class="lang-bash prettyprint-override"><code># some pseudo python app code
declare -i n=0
while [ $n -lt 15 ]
do
d=$(date)
n=n+1
echo $d Step: $n
sleep 2
done
# some sleeping after it, then exit
exit 0 # the most important is to exit after n minutes of sleeping
</code></pre>
<p>Docker-compose file:</p>
<pre class="lang-yaml prettyprint-override"><code>version: '3.8'
services:
xyz:
image: xyz:latest
container_name: xyz
hostname: xyz
restart: always # always restart app despite no errors - code 0
</code></pre>
<p>Just some random Dockerfile:</p>
<pre><code>FROM python:3.8
RUN mkdir -p /my_super_app
COPY ./app /my_super_app/app
WORKDIR /my_super_app
# celery-worker starts inside the bash script
CMD ["bash", "/bgate-smev/app/worker-start.sh"]
</code></pre>
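An alternative to a timed restart would be a container healthcheck that probes the worker, so the restart happens in response to the failure rather than on a schedule; an untested sketch (service and node names are placeholders, and note that plain docker-compose does not restart on failed healthchecks by itself — that still needs an orchestrator restart policy or an autoheal-style sidecar):

```yaml
services:
  worker:
    image: xyz:latest
    restart: always
    healthcheck:
      # `celery inspect ping` answers only if the worker is responsive
      test: ["CMD-SHELL", "celery -A app.worker inspect ping -d celery@woker-1 || exit 1"]
      interval: 60s
      timeout: 30s
      retries: 3
```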
|
<python><docker><rabbitmq><celery><gevent>
|
2023-04-19 15:43:12
| 0
| 3,906
|
Artiom Kozyrev
|
76,056,354
| 4,817,370
|
Python : access to a variable from an other module is only given as a reference
|
<p>I am working on writing the tests for a project that has massive technical debt, and I am running into the limits of my understanding of how Python references variables.</p>
<p>Here is a very simplified example:</p>
<pre><code>main.py
src/
__init__.py
foo.py
variables.py
tests/
__init__.py
unit_test.py
main.py
</code></pre>
<p>the <code>__init__.py</code> and <code>main.py</code> files are empty</p>
<p>foo.py</p>
<pre><code>from src.variables import GLOBAL_VARIABLE
class Foo:
    def bar(self):
        print(f"foo :{id(GLOBAL_VARIABLE)}")
        if GLOBAL_VARIABLE:
            return 1
        return 2
</code></pre>
<p>variables.py</p>
<pre><code>GLOBAL_VARIABLE = False
print(f"\norig:{id(GLOBAL_VARIABLE)}")
</code></pre>
<p>unit_test.py</p>
<pre><code>import src.variables
from src.foo import Foo
def testFoo():
    src.variables.GLOBAL_VARIABLE = True
    print(f"\ntest:{id(src.variables.GLOBAL_VARIABLE)}")
    foo = Foo()
    assert foo.bar() == 1
</code></pre>
<p>I am attempting to change the value of a variable contained in the src/variables.py file in order to run some tests but I can only get a copy rather than the variable itself</p>
<p>The output is as follows ( simply running pytest with no configuration nor arguments ) :</p>
<pre><code>orig:140734094465928
collected 1 item
tests\unit_test.py
test:140734094465896
foo :140734094465928
</code></pre>
<p>I cannot seem to get access to the variable itself from the test module, only a copy. I have attempted different imports for the tests, but I am consistently getting a variable that does not have the same id, and the test simply fails</p>
<p>How can I change the value of GLOBAL_VARIABLE?</p>
<p>There are solutions that I have attempted such as <a href="https://stackoverflow.com/questions/34913078/importing-and-changing-variables-from-another-file">this one</a> but was not able to change the value of my boolean ( the difference seems to be that I am importing from an other module ? )</p>
<p>As a bit of context I cannot change the tested code ( would like to ), only the tests, so I cannot implement better coding practices on the tested code</p>
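<p>For what it's worth, here is a minimal, self-contained reproduction of what I believe is happening (the in-memory module stands in for <code>src/variables.py</code>):</p>

```python
import types

# Build a throwaway module so the snippet is self-contained
variables = types.ModuleType("variables")
variables.GLOBAL_VARIABLE = False

# `from src.variables import GLOBAL_VARIABLE` copies the *current* binding:
GLOBAL_VARIABLE = variables.GLOBAL_VARIABLE

# later, the test rebinds the module attribute...
variables.GLOBAL_VARIABLE = True

print(GLOBAL_VARIABLE)            # False - the imported name kept the old object
print(variables.GLOBAL_VARIABLE)  # True  - attribute access sees the rebinding
```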
|
<python><reference><pytest>
|
2023-04-19 15:32:24
| 1
| 2,559
|
Matthieu Raynaud de Fitte
|
76,056,223
| 2,071,807
|
Type hint a SQLAlchemy 2 declarative model
|
<p>I create my SQLAlchemy models in <a href="https://docs.sqlalchemy.org/en/20/orm/quickstart.html" rel="nofollow noreferrer">the 2.0 way</a>:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
class Base(DeclarativeBase):
pass
class User(Base):
__tablename__ = "user_account"
id: Mapped[int] = mapped_column(primary_key=True)
</code></pre>
<p>I'm writing a function which does something to one of these models:</p>
<pre class="lang-py prettyprint-override"><code>def drop_table(model):
model.__table__.drop()
</code></pre>
<p>How can I type hint <code>model</code> here, such that my type hinter will:</p>
<ol>
<li>Know which methods <code>model.__table__</code> has got?</li>
<li>Accept <code>drop_table(User)</code>?</li>
</ol>
<p>I naively assumed I could do:</p>
<pre class="lang-py prettyprint-override"><code>def drop_table(model: Base):
...
</code></pre>
<p>but that gives me:</p>
<blockquote>
<p>"Type[DeclarativeAttributeIntercept]" is incompatible with "Type[Base]"</p>
</blockquote>
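<p>For context, here is the plain-class shape of what I am after, with <code>Type[...]</code> as the hint I have been experimenting with (SQLAlchemy swapped out so the snippet runs on its own):</p>

```python
from typing import Type

class Base:
    __tablename__: str = ""

class User(Base):
    __tablename__ = "user_account"

def drop_table(model: Type[Base]) -> str:
    # the function receives the class itself, not an instance
    return model.__tablename__

print(drop_table(User))  # user_account
```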
|
<python><sqlalchemy>
|
2023-04-19 15:19:51
| 3
| 79,775
|
LondonRob
|
76,056,179
| 16,223,413
|
tuple false indexing function that takes into consideration old false indices
|
<p>Imagine a list of booleans, which are all True:</p>
<pre><code>bools = [True] * 100
</code></pre>
<p>I then have a tuple that corresponds to the indices of the bools that I want to set to False:</p>
<pre><code>false_index = (0,2,4)
for element in false_index:
bools[element] = False
</code></pre>
<p>The next time I want to set False elements with the same tuple, the indices need to take into account the <code>False</code> values of the list that have already been flipped.</p>
<p>I would like a function that takes the indices of the values that are already flipped and the new values to be flipped, and returns a tuple of the actual indices to be flipped, like:</p>
<pre><code>old_false_index = (0,2,4)
new_false_index = (0,2,4)
check_index(old_false_index, new_false_index) # output (1,5,7)
</code></pre>
<p>A more visual example, for a list of 8 True values:</p>
<pre><code>[
1
1
1
1
1
1
1
1
]
</code></pre>
<p>The first tuple changes the elements to False at the given indices. So for (0,2,4), the list has now been changed to:</p>
<pre><code>[
0 # index 0
1
0 # index 2
1
0 # index 4
1
1
1
]
</code></pre>
<p>The next time it is changed with the same tuple, the index has changed:</p>
<pre><code>[
0
1 # index 1 but the first element that can be changed
0
1
0
1 # index 5 but the third element that can be changed
1
1 # index 7 but the fourth element that can be changed
]
</code></pre>
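<p>As a naive sketch of what I have in mind (the list length is hard-coded to 8 here, and the helper name is mine):</p>

```python
def check_index(old_false_index, new_false_index, length=8):
    # positions that are still True after the first pass
    available = [i for i in range(length) if i not in set(old_false_index)]
    # the new tuple indexes into the remaining True positions
    return tuple(available[i] for i in new_false_index)

print(check_index((0, 2, 4), (0, 2, 4)))  # (1, 5, 7)
```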
|
<python>
|
2023-04-19 15:16:13
| 1
| 631
|
DHJ
|
76,056,098
| 1,028,270
|
How do I programmatically check if a secondary ip service range is in use by a GKE cluster?
|
<p>I pre-create secondary ranges for my GKE clusters to use.</p>
<p>How do I programmatically check if a secondary range is currently in use by a cluster or is free?</p>
<p>I use the python API.</p>
<p>I can't even see how to list secondary ranges in a VPC. Looking at the docs the only thing I see remotely related to VPC is for VPC access connectors:</p>
<p><a href="https://i.sstatic.net/Ktylo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ktylo.png" alt="enter image description here" /></a></p>
|
<python><google-cloud-platform><google-kubernetes-engine><google-vpc>
|
2023-04-19 15:07:29
| 1
| 32,280
|
red888
|
76,056,089
| 7,087,604
|
Python code patterns: elegant way of trying different methods until one suceeds?
|
<p>I'm trying to extract informations from an HTML page, like its last modification date, in a context where there are more than one way of declaring it, and those ways use non-uniform data (meaning a simple loop over fetched data is not possible).</p>
<p>The ugly task is as follow:</p>
<pre class="lang-py prettyprint-override"><code>def get_date(html):
date = None
# Approach 1
time_tag = html.find("time", {"datetime": True})
if time_tag:
date = time_tag["datetime"]
if date:
return date
# Approach 2
mod_tag = html.find("meta", {"property": "article:modified_time", "content": True})
if mod_tag:
date = mod_tag["content"]
if date:
return date
# Approach 3
div_tag = html.find("div", {"class": "dateline"})
if div_tag:
date = div_tag.get_text()
if date:
return date
# Approach n
# ...
return date
</code></pre>
<p>I wonder if Python doesn't have some concise and elegant way of achieving this through a "while" loop, in order to run fast, be legible and maintenance-friendly:</p>
<pre class="lang-py prettyprint-override"><code>def method_1(html):
test = html.find("time", {"datetime": True})
return test["datetime"] if test else None
def method_2(html):
test = html.find("meta", {"property": "article:modified_time", "content": True})
return test["content"] if test else None
def method_3(html):
test = html.find("div", {"class": "dateline"})
return test.get_text() if test else None
...
def get_date(html):
date = None
bag_of_methods = [method_1, method_2, method_3, ...]
i = 0
while not date and i < len(bag_of_methods):
date = bag_of_methods[i](html)
i += 1
return date
</code></pre>
<p>I can make that work right now by turning each approach from the first snippet in a function, append all functions to the <code>bag_of_methods</code> iterable and run them all until one works.</p>
<p>However, those functions would be 2 lines each and will not be reused later in the program, so it just seems like it's adding more lines of code and polluting the namespace for nothing.</p>
<p>Is there a better way of doing this?</p>
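<p>The closest I have come is a generic "first non-None result" helper; the lambdas below merely stand in for the real BeautifulSoup lookups:</p>

```python
def first_result(funcs, arg):
    # evaluate each candidate lazily; stop at the first non-None result
    return next((r for f in funcs if (r := f(arg)) is not None), None)

methods = [
    lambda page: None,             # approach that finds nothing
    lambda page: f"date@{page}",   # approach that succeeds
    lambda page: "never reached",  # short-circuited, never called
]

print(first_result(methods, "html"))  # date@html
```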
|
<python><design-patterns>
|
2023-04-19 15:06:47
| 2
| 713
|
AurΓ©lien Pierre
|
76,055,967
| 9,314,961
|
How to resolve 'TypeError: waveshow() takes 1 positional argument but 2 were given' error?
|
<p>I tried the below code in pycharm. Then it gave the error, 'AttributeError: module 'librosa.display' has no attribute 'waveplot'. Did you mean: 'waveshow'?'. Then I used, 'waveshow', instead of the waveplot method. But now I am getting the error, '<strong>TypeError: waveshow() takes 1 positional argument but 2 were given</strong>'. Though I searched for the error I did not find a way to resolve it. Does anyone have a proper solution for it?</p>
<pre><code>e= "happy"
plt.figure(figsize=(14,4))
librosa.display.waveplot(data, sampling_rate)
create_spectrogram(data, sampling_rate, e)
Audio(path)
</code></pre>
|
<python><deep-learning><librosa>
|
2023-04-19 14:55:12
| 1
| 311
|
RMD
|
76,055,891
| 2,134,072
|
fastAPI background task takes up to 100 times longer to execute than calling function directly
|
<p>I have a simple FastAPI endpoint deployed on Google Cloud Run. I wrote the <code>Workflow</code> class myself. When the <code>Workflow</code> instance is executed, some steps happen, e.g., the files are processed and the results are put in a vectorstore database.</p>
<p>Usually, this takes a few seconds per file like <a href="https://fastapi.tiangolo.com/tutorial/background-tasks/" rel="nofollow noreferrer">this</a>:</p>
<pre><code>from .workflow import Workflow
...
@app.post('/execute_workflow_directly')
async def execute_workflow_directly(request: Request)
... # get files from request object
workflow = Workflow.get_simple_workflow(files=files)
workflow.execute()
return JSONResponse(status_code=200, content={'message': 'Successfully processed files'})
</code></pre>
<p>Now, if many files are involved, this might take a while, and I don't want to let the caller of the endpoint wait, so I want to run the workflow execution in the background like this:</p>
<pre><code>from .workflow import Workflow
from fastapi import BackgroundTasks
...
def run_workflow_in_background(workflow: Workflow):
workflow.execute()
@app.post('/execute_workflow_in_background')
async def execute_workflow_in_background(request: Request, background_tasks: BackgroundTasks):
... # get files from request object
workflow = Workflow.get_simple_workflow(files=files)
background_tasks.add_task(run_workflow_in_background, workflow)
return JSONResponse(status_code=202, content={'message': 'File processing started'})
</code></pre>
<p>Testing this with just one file, I already run into a problem: locally it works fine, but when I deploy it to my Google Cloud Run service, execution time goes through the roof. In one example, background execution took roughly 500s until I saw the result in the database, compared to ~5s when executing the workflow directly.</p>
<p>I already tried to increase the number of CPU cores to 4 and subsequently the number of gunicorn workers to 4 as well. Not sure if that makes much sense, but it did not decrease the execution times.</p>
<p><strong>Can I solve this problem by allocating more resources to Google Cloud Run somehow, or is my approach flawed and I'm doing something wrong, or should I already switch to something more sophisticated like <a href="https://fastapi.tiangolo.com/tutorial/background-tasks/#caveat" rel="nofollow noreferrer">Celery</a>?</strong></p>
<hr />
<p>Edit (not really relevant to the problem I had, see accepted answer):</p>
<p>I read the accepted answer to <a href="https://stackoverflow.com/questions/71516140/fastapi-runs-api-calls-in-serial-instead-of-parallel-fashion">this question</a> and it helped clarify some things, but doesn't really answer my question why there is such a big difference in execution time between running directly vs. as a background task. Both versions call the CPU-intensive <code>workflow.execute()</code> asynchronously if I'm not mistaken.</p>
<p>I can't really change the endpoint's definition to <code>def</code>, because I am awaiting other code inside.</p>
<p>I tried changing the background function to</p>
<pre><code>async def run_workflow_in_background(workflow: Workflow):
await run_in_threadpool(workflow.execute)
</code></pre>
<p>and</p>
<pre><code>async def run_workflow_in_background(workflow: Workflow):
loop = asyncio.get_running_loop()
with concurrent.futures.ThreadPoolExecutor() as pool:
res = await loop.run_in_executor(pool, workflow.execute)
</code></pre>
<p>and</p>
<pre><code>async def run_workflow_in_background(workflow: Workflow):
res = await asyncio.to_thread(workflow.execute)
</code></pre>
<p>and</p>
<pre><code>async def run_workflow_in_background(workflow: Workflow):
loop = asyncio.get_running_loop()
with concurrent.futures.ProcessPoolExecutor() as pool:
res = await loop.run_in_executor(pool, workflow.execute)
</code></pre>
<p>as suggested and it didn't help.</p>
<p>I tried increasing the number of workers as suggested and it didn't help.</p>
<p>I guess I will look into switching to Celery, but I am still eager to understand why it runs so slowly with FastAPI background tasks.</p>
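<p>For reference, here is the offloading pattern from the linked answer reduced to a runnable toy (the busy computation stands in for <code>workflow.execute()</code>):</p>

```python
import asyncio

def cpu_work():
    # stand-in for workflow.execute(): pure CPU, no awaits
    return sum(range(10**6))

async def main():
    # a direct call would block the event loop for its whole duration;
    # to_thread keeps the loop free to serve other requests meanwhile
    return await asyncio.to_thread(cpu_work)

print(asyncio.run(main()))  # 499999500000
```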
|
<python><performance><google-cloud-platform><fastapi>
|
2023-04-19 14:47:08
| 1
| 389
|
Clang
|
76,055,832
| 1,919,581
|
OpenSSL internal error, assertion failed: FATAL FIPS SELFTEST FAILURE
|
<p>I have an application written in Python 3.9, packaged as an executable file using PyInstaller in a CentOS 7 docker image. I am able to install the app successfully on Linux machines where FIPS is disabled.</p>
<p>If I try install it in FIPS enabled RHEL8.7 machine it gives the below error</p>
<pre><code>fips.c(145): OpenSSL internal error, assertion failed: FATAL FIPS SELFTEST FAILURE
</code></pre>
<p>Then, after a bit of research, I thought the issue could be due to building the application in a FIPS-disabled docker image, so I enabled FIPS in the docker image and then built the application.</p>
<p>With OpenSSL v1.0.2t and openssl-fips v2.0.16, the app executable file works fine in both FIPS enabled as well as FIPS disabled RHEL8.7 linux machine.</p>
<p>But as OpenSSL v1.0.2 reached end of life, I tried to use OpenSSL 3.1.0 with FIPS enabled and built the application. It works fine if FIPS is disabled, but gives the same error if FIPS is enabled.</p>
<pre><code>fips.c(145): OpenSSL internal error, assertion failed: FATAL FIPS SELFTEST FAILURE
</code></pre>
<p>The RHEL8.7 machine which I am using to test the application has Openssl <code>OpenSSL 1.1.1k FIPS 25 Mar 2021</code> version.</p>
<p>If I build the app in the same RHEL machine it works fine, but not if built with Openssl 3.1.0 FIPS enabled in a CentOs7 image.</p>
<p>Any ideas, suggestions on how to resolve this issue, thanks.</p>
|
<python><linux><openssl><fips>
|
2023-04-19 14:41:54
| 0
| 511
|
user1919581
|
76,055,828
| 3,487,001
|
Python Generic[T] - typehint inherited nested classes
|
<p>After a few rewrites due to cluttered code, I now have a setup that involves a set of BaseClasses and multiple inherited classes thereof such that</p>
<pre class="lang-py prettyprint-override"><code>class Base:
class Nested: ...
class Nested2: ...
...
class NestedN: ...
class A(Base):
class Nested(Base.Nested): ...
class Nested2(Base.Nested2): ...
...
class NestedN(Base.NestedN): ...
class B(Base):
class Nested(Base.Nested): ...
class Nested2(Base.Nested2): ...
...
class NestedN(Base.NestedN): ...
class C(Base):
...
</code></pre>
<p>This suits my needs perfectly in terms of structure, maintainability and readability however it screws up my typehinting - especially when it comes to <code>Generic</code>s. More explicitly I would like to create a generic class for all of these inherited classes with proper typing like this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Generic
T = TypeVar("T", bound=Base)
class TypeHintedGenericClass(Generic[T]):
def function_1(obj: T) -> T.Nested2: # E: I can only typehint Base.Nested2 not T.Nested2
...
def function_2(nst: T.Nested) -> T: # E: I can only typehint Base.Nested not T.Nested
...
</code></pre>
<p>My goal is to be able to create generic classes like this:</p>
<pre class="lang-py prettyprint-override"><code>generic_A = TypeHintedGenericClass[A]
generic_A.function_1(...) # returns A.Nested2
generic_B = TypeHintedGenericClass[B]
generic_B.function_1(...) # returns B.Nested2
</code></pre>
<p>I understand that this structure might not be completely by the book, but it seems to be the most readable and understandable for my (rather large but very repetitive) use case.</p>
<p>I am open to any suggestions.... Thanks!</p>
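<p>One direction I have tried, for completeness: a second <code>TypeVar</code> for the nested type, since <code>T.Nested2</code> is not valid typing syntax (all names here are illustrative):</p>

```python
from typing import Generic, TypeVar

class Base:
    class Nested: ...
    class Nested2: ...

class A(Base):
    class Nested(Base.Nested): ...
    class Nested2(Base.Nested2): ...

T = TypeVar("T", bound=Base)
N2 = TypeVar("N2", bound=Base.Nested2)

class TypeHintedGenericClass(Generic[T, N2]):
    def function_1(self, obj: T) -> N2:
        # body elided; only the signature matters for the hinting idea
        ...

# both parameters now have to be spelled out at use sites:
generic_a = TypeHintedGenericClass[A, A.Nested2]()
print(isinstance(generic_a, TypeHintedGenericClass))  # True
```

<p>The obvious downside is that every nested class needs its own <code>TypeVar</code> and must be repeated at each use site.</p>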
|
<python><generics><inheritance><python-typing>
|
2023-04-19 14:41:27
| 0
| 551
|
Schorsch
|
76,055,800
| 16,363,897
|
Change dataframe values based on other dataframes
|
<p>Let's say we have the following "df1" dataframe with cities as column names:</p>
<pre><code> NY LA Rome London Milan
date
2023-01-01 1 81 26 55 95
2023-01-02 92 42 96 98 7
2023-01-03 14 4 60 88 73
</code></pre>
<p>In another "df2" dataframe I have cities and their countries:</p>
<pre><code> City Country
0 NY US
1 LA US
2 London UK
3 Rome Italy
4 Milan Italy
</code></pre>
<p>In a third "df3" dataframe I have some values for each country and each date:</p>
<pre><code> US UK Italy
date
2023-01-01 70 41 32
2023-01-02 98 46 45
2023-01-03 83 50 17
</code></pre>
<p>My output dataframe has the same structure as the first dataframe. This is the expected output:</p>
<pre><code> NY LA Rome London Milan
date
2023-01-01 -69 11 -6 14 63
2023-01-02 -6 -56 51 52 -38
2023-01-03 -69 -79 43 38 56
</code></pre>
<p>For example, the 51 value for "Rome" on 2023-01-02 is the difference between the value of the same cell from df1 (96) and the value of the country where Rome is located (Italy) on 2023-01-02 (45).</p>
<p>Any help? Thanks.</p>
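<p>For reference, this is as far as I got with a mapping-based approach on a cut-down version of the data (the column subset and variable names are mine):</p>

```python
import pandas as pd

dates = pd.to_datetime(["2023-01-01", "2023-01-02"])
df1 = pd.DataFrame({"NY": [1, 92], "Rome": [26, 96]}, index=dates)
df2 = pd.DataFrame({"City": ["NY", "Rome"], "Country": ["US", "Italy"]})
df3 = pd.DataFrame({"US": [70, 98], "Italy": [32, 45]}, index=dates)

# look up each city's country, align df3's columns to df1's cities, subtract
mapping = dict(zip(df2["City"], df2["Country"]))
aligned = df3[[mapping[c] for c in df1.columns]].set_axis(df1.columns, axis=1)
result = df1 - aligned
print(result.loc["2023-01-02", "Rome"])  # 51
```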
|
<python><pandas><dataframe>
|
2023-04-19 14:39:11
| 1
| 842
|
younggotti
|
76,055,764
| 7,614,968
|
Running random forest for production
|
<p>I am trying to productionize a random forest model using AWS Lambda Layers. I have used sklearn random forest to create a model. Following are the dependencies:</p>
<pre><code>joblib==1.1.0
numpy==1.23.1
pandas==1.4.2
python-dateutil==2.8.2
pytz==2022.1
scikit-learn==1.1.1
scipy==1.9.0
six==1.16.0
threadpoolctl==3.1.0
tokenizers==0.12.1
</code></pre>
<p>I am using Windows 10 OS. The size of the installation of these packages on disk is ~290 MB. However, Lambda Layers require less than 250 MB package size.</p>
<p>What are some ways I can reduce this installation size? Is there a way to install something lightweight for inference, or to somehow install only parts of sklearn (sklearn depends on scipy, which is eating up most of the space)? Is there any other way/strategy?</p>
|
<python><amazon-web-services><aws-lambda><random-forest><aws-lambda-layers>
|
2023-04-19 14:35:23
| 1
| 635
|
palash
|
76,055,688
| 12,967,353
|
Generate aligned requirements.txt and dev-requirements.txt with pip-compile
|
<p>I have a Python project that depends on two packages <code>moduleA</code> and <code>moduleB</code>. I have the following <code>pyproject.toml</code>:</p>
<pre><code>[project]
name = "meta-motor"
version = "3.1.0.dev"
dependencies = [
"moduleA==1.0.0"
]
[project.optional-dependencies]
dev = [
"moduleB==1.0.0"
]
</code></pre>
<p>with <code>moduleA</code> depending on <code>moduleC>=1.0.0</code> and <code>moduleB</code> depending on <code>moduleC==1.1.0</code>.</p>
<p>I compile my requirements.txt and dev-requirements.txt like this:</p>
<pre><code>$ pip-compile -o requirements.txt pyproject.toml
$ pip-compile --extra dev -o dev-requirements.txt pyproject.toml
</code></pre>
<p>With this, I get</p>
<p><em>requirements.txt</em></p>
<pre><code>moduleA==1.0.0
# via pyproject.toml
moduleC==1.2.0
# via moduleA
</code></pre>
<p><em>dev-requirements.txt</em></p>
<pre><code>moduleB==1.0.0
# via pyproject.toml
moduleA==1.0.0
# via pyproject.toml
moduleC==1.1.0
# via
# moduleB
# moduleA
</code></pre>
<p>As you can see, the <code>moduleC</code> version is different in the two <code>requirements.txt</code> files.
How can I solve this so that I have <code>moduleC==1.1.0</code> in both?</p>
<p>I could pin <code>moduleC==1.1.0</code> in my <code>pyproject.toml</code>, but this is not practical for larger projects with lots of dependencies like this.</p>
|
<python><requirements.txt><pyproject.toml><pip-tools><pip-compile>
|
2023-04-19 14:26:28
| 1
| 809
|
Kins
|
76,055,331
| 7,987,455
|
How to scrape data that appear when click on button?
|
<p>I am trying to scrape phone numbers from a website, but the numbers appear only after I click on the first number. In other words, the phone number is hidden in the HTML code and only appears when clicked. Can you help please?
I used the following code:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = "https://hipages.com.au/connect/makermanservices"
req = requests.get(url).text
soup = BeautifulSoup(req,"html.parser")
phone = soup.find('a', class_='PhoneNumber__MobileOnly-sc-4ewwun-1 izNnbI phone-number__mobile')
print(phone)
</code></pre>
|
<python><web-scraping><beautifulsoup><python-requests>
|
2023-04-19 13:50:00
| 1
| 315
|
Ahmad Abdelbaset
|
76,055,244
| 194,305
|
swig python derived and base in different modules
|
<p>I am trying to reproduce a python example from the swig 4.0 documentation <a href="https://www.swig.org/Doc4.0/Modules.html#Modules_nn1" rel="nofollow noreferrer">Modules Basics</a>. I got properly generated <code>_base_module.so</code> and <code>_derived_module.so</code>. However when I try to use the derived module from my test program</p>
<pre><code>#!/usr/bin/env python3
import sys
import derived_module
...
</code></pre>
<p>I get an import error: <code>ImportError: /homes/../_derived_module.so: undefined symbol: _ZTI4Base</code></p>
<p>The reason for the error is quite understandable but it is not clear how to avoid it. Below are my files.</p>
<p>base.h:</p>
<pre><code>#ifndef _BASE_H_INCLUDED_
#define _BASE_H_INCLUDED_
class base {
public:
virtual int foo();
};
#endif // _BASE_H_INCLUDED_
</code></pre>
<p>base.cpp:</p>
<pre><code>#include "base.h"
int base::foo() { return 10; }
</code></pre>
<p>base_module.i:</p>
<pre><code>%module base_module
%{
#include "base.h"
%}
%include base.h
</code></pre>
<p>derived_module.i:</p>
<pre><code>%module derived_module
%{
#include "base.h"
%}
%import(module="base_module") "base.h"
%inline %{
class derived : public base {
public:
int foo() override { return 20; }
};
%}
</code></pre>
<p>test.py:</p>
<pre><code>#!/usr/bin/env python3
import sys
import derived_module
if __name__ == "__main__":
derived = derived_module.derived();
res = derived.foo();
print("res={}".format(res));
sys.exit()
</code></pre>
<p>Makefile:</p>
<pre><code>SWIG = swig4
CCX = g++
LD = g++
all: derived_module.py
derived_module.py: base.h base.cpp base_module.i derived_module.i
${SWIG} -python -py3 -c++ -cppext cpp base_module.i
${SWIG} -python -py3 -c++ -cppext cpp derived_module.i
${CCX} -O2 -fPIC -c base.cpp
${CCX} -O2 -fPIC -c base_module_wrap.cpp -I/usr/include/python3.8
${CCX} -O2 -fPIC -c derived_module_wrap.cpp -I/usr/include/python3.8
${LD} -shared base.o base_module_wrap.o -o _base_module.so
${LD} -shared derived_module_wrap.o -o _derived_module.so
run: derived_module.py
python3 ./test.py
clean:
rm -rf *~ derived_module.py base_module.py *_wrap.* *.o *.so __pycache__
</code></pre>
|
<python><inheritance><swig>
|
2023-04-19 13:40:24
| 1
| 891
|
uuu777
|
76,055,206
| 3,323,526
|
CJK full-width characters as Python names: how does Python deal with it and is it common in other programming languages?
|
<p>Not only ASCII, but also other Unicode characters can be used as names in Python. For example:</p>
<pre><code>my_variable = 'var 1' # Normal ASCII characters as name
ζηει = 'var 2' # Chinese characters as name
print(my_variable)
print(ζηει)
</code></pre>
<p>The code above generates output normally:</p>
<pre><code>var 1
var 2
</code></pre>
<p>Among CJK (Chinese, Japanese, Korean) characters, there is a set of special full-width characters which look like ASCII characters but have totally different codes. For example:</p>
<ul>
<li>Character A is UTF-8 \x41 and Unicode \u0041,</li>
<li>Character οΌ‘ is UTF-8 \xEF\xBC\xA1 and Unicode \uFF21.</li>
</ul>
<p>From a human's point of view, A and οΌ‘ are similar, but from the computer's, they are totally different characters.</p>
<p>Based on this understanding, I thought the following code:</p>
<pre><code>my_var = 'var 1' # Name with normal ASCII characters
ο½ο½οΌΏο½ο½ο½ = 'var 2' # Name with CJK full-width characters
print(my_var)
print(ο½ο½οΌΏο½ο½ο½)
</code></pre>
<p>would print 'var 1' and 'var 2', but the actual result is:</p>
<pre><code>var 2
var 2
</code></pre>
<p>By printing "locals()", it seems that Python automatically converted the CJK full-width characters to the corresponding ASCII characters.</p>
<p>My questions are:</p>
<ul>
<li>Why does Python convert them automatically? Is there any PEP or issue discussion about that? I searched it but didn't get an answer.</li>
<li>Does Python automatically convert such characters in other areas? I've tested that in dict, 'my_var' and 'ο½ο½οΌΏο½ο½ο½' are different keys, but what else?</li>
<li>Is it a normal behavior in programming language design? For example C, Java, JavaScript, PHP, etc.?</li>
</ul>
<p>Although I've never used CJK full-width characters as variable names in my daily programming, I want to know how Python deals with such characters: in what circumstances 'my_var' and 'ο½ο½οΌΏο½ο½ο½' are considered the same, and in what circumstances they are not.</p>
|
<python><unicode><cjk>
|
2023-04-19 13:35:53
| 0
| 3,990
|
Vespene Gas
|
76,055,198
| 10,906,063
|
Using Request mixed Forms and Files with Annotation and optional fields
|
<p>I'd like to post mixed form fields and upload files to a FastAPI endpoint. The FastAPI documentation <a href="https://fastapi.tiangolo.com/tutorial/request-forms-and-files/#__tabbed_2_2" rel="nofollow noreferrer">here</a> states that you can mix params using <code>Annotated</code> and the <code>python-multipart</code> library e.g.</p>
<pre class="lang-py prettyprint-override"><code>@router.post("/upload")
async def upload_contents(
an_int: Annotated[int, Form()],
a_string: Annotated[str, Form()],
some_files: Annotated[list[UploadFile], File()]
):
</code></pre>
<p>However if I want to make posting of <code>a_string</code> optional, I have not found a way to make this work.</p>
<p>I have tried things like:</p>
<pre class="lang-py prettyprint-override"><code>@router.post("/upload")
async def upload_contents(
an_int: Annotated[int, Form()],
a_string: Annotated[Union[str, None], Form()],
some_files: Annotated[list[UploadFile], File()]
):
</code></pre>
<p>And also on @flakes suggestion:</p>
<pre class="lang-py prettyprint-override"><code>@router.post("/upload")
async def upload_contents(
an_int: Annotated[int, Form()],
    a_string: Optional[Annotated[str, Form()]],
some_files: Annotated[list[UploadFile], File()]
):
</code></pre>
<p>While these run, the server is still requesting I provide the <code>a_string</code> parameter. Any guidance would be gratefully received.</p>
|
<python><fastapi><python-typing>
|
2023-04-19 13:34:57
| 2
| 579
|
Chris
|
76,055,072
| 20,220,485
|
How do you sort a dataframe with the integer in a column with strings and integers on every row?
|
<p>How would you sort the following dataframe:</p>
<pre><code>df = pd.DataFrame({'a':['abc_1.2.6','abc_1.2.60','abc_1.2.7','abc_1.2.9','abc_1.3.0','abc_1.3.10','abc_1.3.100','abc_1.3.11'], 'b':[1,2,3,4,5,6,7,8]})
>>>
a b
0 abc_1.2.6 1
1 abc_1.2.60 2
2 abc_1.2.7 3
3 abc_1.2.9 4
4 abc_1.3.0 5
5 abc_1.3.10 6
6 abc_1.3.100 7
7 abc_1.3.11 8
</code></pre>
<p>to achieve this output?</p>
<pre><code>>>>
a b
0 abc_1.2.6 1
1 abc_1.2.7 3
2 abc_1.2.9 4
3 abc_1.2.60 2
4 abc_1.3.0 5
5 abc_1.3.10 6
6 abc_1.3.11 8
7 abc_1.3.100 7
</code></pre>
<p>I understand that integers in strings can be accessed through string transformations, however I'm unsure how to handle this in a dataframe. Obviously <code>df.sort_values(by=['a'],ignore_index=True)</code> is unhelpful in this case.</p>
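<p>For what it's worth, my current attempt extracts the numeric parts into a sort key (the regex assumes the <code>abc_X.Y.Z</code> shape shown above):</p>

```python
import pandas as pd

df = pd.DataFrame({"a": ["abc_1.2.6", "abc_1.2.60", "abc_1.2.7",
                         "abc_1.2.9", "abc_1.3.0"],
                   "b": [1, 2, 3, 4, 5]})

# pull the three version components out as integers and sort on them
key = df["a"].str.extract(r"_(\d+)\.(\d+)\.(\d+)$").astype(int)
out = df.loc[key.sort_values(by=[0, 1, 2]).index].reset_index(drop=True)
print(out["a"].tolist())
```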
|
<python><pandas><string><sorting><integer>
|
2023-04-19 13:24:27
| 3
| 344
|
doine
|
76,055,001
| 386,861
|
Using pandas to read HTML
|
<p>This should be easy but I've got errors that I can't work out. I've got some air pollution stats for the UK that I want to parse.</p>
<p><a href="https://uk-air.defra.gov.uk/data/DAQI-regional-data?regionIds%5B%5D=999&aggRegionId%5B%5D=999&datePreset=6&startDay=01&startMonth=01&startYear=2022&endDay=01&endMonth=01&endYear=2023&queryId=&action=step2&go=Next+" rel="nofollow noreferrer">https://uk-air.defra.gov.uk/data/DAQI-regional-data</a></p>
<p>But using read_html results in the error:</p>
<pre><code>ParserError: Error tokenizing data. C error: Expected 1 fields in line 7, saw 2
</code></pre>
<p>from this code:</p>
<pre><code>df = pd.read_html("https://uk-air.defra.gov.uk/data/DAQI-regional-data?regionIds%5B%5D=999&aggRegionId%5B%5D=999&datePreset=6&startDay=01&startMonth=01&startYear=2022&endDay=01&endMonth=01&endYear=2023&queryId=&action=step2&go=Next+")
df
</code></pre>
<p>This returns the data as a list. But I want to turn that list into a dataframe.</p>
<p>Which is the best way to solve the problem?</p>
|
<python><pandas>
|
2023-04-19 13:18:01
| 3
| 7,882
|
elksie5000
|
76,054,972
| 8,618,380
|
Groupby and get value in N-days in Pandas
|
<p>I have the following dataframe, representing daily stock values:</p>
<pre><code>print(df)
date ticker price
19/04/22 AAPL 10
19/04/22 TSLA 15
20/04/22 TSLA 15
20/04/22 AAPL 10
(...)
</code></pre>
<p>For each date and ticker, I would like to retrieve the (future) <strong>actual</strong> price value in N days (with N = 30), if present within the DataFrame.</p>
<p>How can I achieve that using Pandas?</p>
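<p>What I tried so far is a self-merge on a shifted date column (N hard-coded to 30 here, with dummy data of my own):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime(["2022-04-19", "2022-04-19",
                            "2022-05-19", "2022-05-19"]),
    "ticker": ["AAPL", "TSLA", "AAPL", "TSLA"],
    "price": [10, 15, 12, 18],
})

# shift dates back by N days so a plain self-merge pairs each row with
# the row N days in the future (rows with no future price get NaN)
future = df.assign(date=df["date"] - pd.Timedelta(days=30))
out = df.merge(future, on=["date", "ticker"], how="left",
               suffixes=("", "_in_30d"))
print(out["price_in_30d"].tolist())
```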
|
<python><pandas>
|
2023-04-19 13:15:03
| 1
| 1,975
|
Alessandro Ceccarelli
|
76,054,903
| 5,834,316
|
Is there a way to add columns to the data outputted to a headless Locust session?
|
<p>I would like to add a column with a custom metric to the headless output of Locust. Is this possible? I am struggling to see anything in the documentation that points to modifying the output.</p>
<p>Just to be specific, I want to modify this output:</p>
<pre><code>Type Name # reqs # fails | Avg Min Max Med | req/s failures/s
--------|----------------------------------------------------------------------------|-------|-------------|-------|-------|-------|-------|--------|-----------
GET /version 3 0(0.00%) | 17 14 22 15 | 0.33 0.00
--------|----------------------------------------------------------------------------|-------|-------------|-------|-------|-------|-------|--------|-----------
Aggregated 3 0(0.00%) | 17 14 22 15 | 0.33 0.00
</code></pre>
|
<python><output><load-testing><locust>
|
2023-04-19 13:08:08
| 1
| 1,177
|
David Ross
|
76,054,890
| 2,708,714
|
A python script does not work from command line in contrast to running it from ipython
|
<p>The following code reports an error when executed from the command line, whereas it executes fine from ipython (line by line). The script uses a locally-stored HuggingFace model, reads a scientific paper from a pdf file and answers questions about the paper.</p>
<pre><code>import os
os.environ["HUGGINGFACEHUB_API_TOKEN"] = 'hf-xxxxxx'
from langchain.embeddings import HuggingFaceEmbeddings
from langchain import HuggingFaceHub
from langchain.llms import HuggingFacePipeline
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, AutoModelForSeq2SeqLM
model_id = 'google/flan-t5-large'
tokenizer = AutoTokenizer.from_pretrained(model_id,max_length=1500)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, max_length=1500)
llm = HuggingFacePipeline(pipeline=pipe)
from langchain.document_loaders import UnstructuredPDFLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.text_splitter import CharacterTextSplitter
pdf_folder_path = "/mnt/d/test/langchain/pdfs"
loaders = [UnstructuredPDFLoader(os.path.join(pdf_folder_path, fn)) for fn in os.listdir(pdf_folder_path)]
index = VectorstoreIndexCreator(
embedding=HuggingFaceEmbeddings(),
text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)).from_loaders(loaders)
from langchain.chains import RetrievalQA
chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=index.vectorstore.as_retriever(),
input_key="question")
chain.run('What method was used in calculations?')
</code></pre>
<p>Here is the error when running from the command line (using <code>python script.py</code>):</p>
<pre><code>Token indices sequence length is longer than the specified maximum sequence length for this model (1142 > 512). Running this sequence through the model will result in indexing errors
Exception ignored in: <function DuckDB.__del__ at 0x7f0d0b1b1fc0>
Traceback (most recent call last):
File "/home/popsi/.local/lib/python3.10/site-packages/chromadb/db/duckdb.py", line 355, in __del__
AttributeError: 'NoneType' object has no attribute 'info'
</code></pre>
<p>When executing from ipython I get the warning similar to the command-line run but without the error in DuckDB:</p>
<pre><code>Token indices sequence length is longer than the specified maximum sequence length for this model (1142 > 512). Running this sequence through the model will result in indexing errors
</code></pre>
<p>and the program finishes correctly with correct answers.</p>
<p>I believe python (command line) and ipython may depend on different environment variables or something similar, but I cannot find a solution to this problem on the Internet. Any idea?</p>
<p><strong>EDIT:</strong> When I use <code>print(chain.run('What method was used in calculations?'))</code> instead of the last line stated above, then even the command-line-run prints the answer. However the error message about <code>__del__</code> of duckdb.py is still reported at the end. I am not a Python expert but I suppose that <code>__del__</code> is a destructor of the DuckDB object which is called at the end of the script execution. From ipython the destructor is never called, since the program never ends while the ipython session is running. Is that correct reasoning? If so, there might be a bug in DuckDB.</p>
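The reasoning about <code>__del__</code> can be checked with a self-contained sketch (no DuckDB involved): in CPython a destructor runs as soon as the last reference to an object disappears, and for module-level objects that moment is interpreter shutdown, when other globals may already have been torn down.

```python
destroyed = []

class Demo:
    def __del__(self):
        # Called when the last reference to the instance goes away;
        # for a module-level object that happens at interpreter shutdown,
        # when other module globals may already have been cleared.
        destroyed.append("Demo destroyed")

d = Demo()
del d  # refcount hits zero, so __del__ runs immediately in CPython
```

In an interactive ipython session the module-level objects stay referenced until the session ends, which is consistent with the destructor never firing during the run.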
|
<python><ipython><langchain>
|
2023-04-19 13:06:57
| 0
| 2,630
|
Igor Popov
|
76,054,847
| 1,432,980
|
exclude class names from breadcrumbs and leave only package names
|
<p>I am trying to imitate the behaviour that is shown in this documentation</p>
<p><a href="https://faker.readthedocs.io/en/master/providers/baseprovider.html" rel="nofollow noreferrer">https://faker.readthedocs.io/en/master/providers/baseprovider.html</a></p>
<p>I have installed ReadTheDocs theme for Sphinx and also use autodoc extension.</p>
<p>My <code>index.rst</code> looks like this</p>
<pre><code> Contents
--------
.. toctree::
:maxdepth: 4
a_faker.faker.providers
</code></pre>
<p><code>a_faker.faker.providers.rst</code> page looks like this</p>
<pre><code> Custom providers
======================================================
.. toctree::
:maxdepth: 4
a_faker.faker.providers.person
a_faker.faker.providers.transaction
</code></pre>
<p>and specific provider page looks like this</p>
<pre><code> faker.providers.person
=================================================
.. automodule:: a_faker.faker.providers.person
:members:
:undoc-members:
:show-inheritance:
</code></pre>
<p>While the content of the page is fine, the end result is that it contains class names and methods on the left side of the page</p>
<p><a href="https://i.sstatic.net/DVmMQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DVmMQ.png" alt="enter image description here" /></a></p>
<p>How to exclude them? So that I would get only plain</p>
<pre><code>Custom Providers
faker.providers.person
</code></pre>
|
<python><python-sphinx><faker><autodoc>
|
2023-04-19 13:00:57
| 1
| 13,485
|
lapots
|
76,054,718
| 12,040,751
|
Type hint for class with __dict__ method
|
<p>I have a class that can be instantiated from a dataclass.</p>
<pre><code>class Asd:
@classmethod
def from_dataclass(cls, dataclass):
return cls(**dataclass.__dict__)
def __init__(self, **kwargs):
...
</code></pre>
<p>In principle I could pass to <code>from_dataclass</code> any class with a <code>__dict__</code> method, and I would like the type hint to reflect this.</p>
<p>What I have done is to create my protocol.</p>
<pre><code>from typing import Protocol
class HasDict(Protocol):
def __dict__(self):
...
class Asd:
@classmethod
def from_dataclass(cls, dataclass: HasDict):
return cls(**dataclass.__dict__)
def __init__(self, **kwargs):
...
</code></pre>
<p>I wonder if I'm reinventing the wheel or if there is a better way to do this.</p>
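As a side note, <code>__dict__</code> is an attribute rather than a method, so a <code>Protocol</code> declaring <code>def __dict__(self)</code> describes something different from what instances actually provide. If the inputs are always dataclasses, the stdlib already has dedicated helpers; a minimal sketch:

```python
import dataclasses

@dataclasses.dataclass
class Point:
    x: int
    y: int

p = Point(1, 2)

# vars(obj) is the documented way to read an instance's __dict__
assert vars(p) == {"x": 1, "y": 2}

# dataclasses.asdict also recurses into nested dataclasses
assert dataclasses.asdict(p) == {"x": 1, "y": 2}
```

With that, the classmethod could simply be guarded at runtime with `dataclasses.is_dataclass` instead of a custom structural type.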
|
<python><type-hinting>
|
2023-04-19 12:48:17
| 1
| 1,569
|
edd313
|
76,054,671
| 1,630,244
|
Is it Pythonic to import an external symbol imported into another module?
|
<p>Please guide me on the pythonic way for a minor code-standards question. Searching SO re imports gets me many discussions comparing <code>import foo</code> vs <code>from foo import bar</code> which is not my concern here. Still I apologize if I am creating a dupe Q.</p>
<p>My team sometimes puts up code for review with <code>import</code> statements like below and I'm struggling to give advice:</p>
<p>Module <code>our_a.py</code> has a classic import statement, this is fine:</p>
<pre><code>from famous_package import some_function
# .. call the famous package's some_function()
</code></pre>
<p>Module <code>our_b.py</code> imports the same symbol, but from <code>our_a</code> not from <code>famous_package</code>:</p>
<pre><code>from our_a import some_function
# .. call the famous package's some_function()
</code></pre>
<p>Is there a name for this practice? Indirect import? Re-import??
Anyhow it works fine, and flake8 does not complain about import statements in module <code>our_b</code>. Is there any benefit here?
My instinct says, <code>our_b</code> should just import <code>some_function</code> from <code>famous_package</code> directly. Thanks in adv for your insights.</p>
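For what it's worth, the mechanics can be demonstrated with stdlib modules only (using <code>math.sqrt</code> as a stand-in for the famous package): an imported name becomes a plain attribute of the importing module, so the indirect import hands out exactly the same object.

```python
import math
import types

# Simulate our_a.py: `from math import sqrt` binds sqrt as a module attribute
our_a = types.ModuleType("our_a")
exec("from math import sqrt", our_a.__dict__)

# Simulate our_b.py: `from our_a import sqrt` just reads that attribute
sqrt_via_a = our_a.sqrt

assert sqrt_via_a is math.sqrt  # the very same function object, no copy made
```

So the indirection works and costs nothing at runtime; the question is purely one of readability and coupling between the two modules.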
|
<python><code-standards>
|
2023-04-19 12:43:44
| 0
| 4,482
|
chrisinmtown
|
76,054,571
| 7,056,765
|
How can I visualize the epsilon-delta criterion in Python?
|
<p>I'm a math student and I'm trying to understand the epsilon-delta criterion for limits. I know that the criterion says that for every epsilon > 0, there exists a delta > 0 such that if 0 < |x - c| < delta, then |f(x) - L| < epsilon, where L is the limit of f(x) as x approaches c.</p>
<p>I want to visualize this concept in Python using Matplotlib, so that I can explore different values of epsilon and delta and see how they affect the function f(x). Ideally, I would like to create an interactive plot where I can change the values of epsilon and delta using sliders.</p>
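Before wiring up sliders, the criterion itself can be checked numerically; the sketch below (the names are my own) samples points inside the punctured delta-neighbourhood and verifies the epsilon bound, which is also the quantity an interactive plot would shade.

```python
def check_epsilon_delta(f, c, L, epsilon, delta, n=1000):
    """Return True if |f(x) - L| < epsilon holds for sampled x
    with 0 < |x - c| < delta (a numeric check, not a proof)."""
    for i in range(1, n + 1):
        h = delta * i / (n + 1)  # strictly between 0 and delta
        for x in (c - h, c + h):
            if abs(f(x) - L) >= epsilon:
                return False
    return True

def f(x):
    return 2 * x + 1  # limit at c=1 is L=3

# delta = epsilon/2 works for this f, since |f(x) - 3| = 2|x - 1| < 2*delta
assert check_epsilon_delta(f, c=1, L=3, epsilon=0.1, delta=0.05)
# a delta that is too large fails the check
assert not check_epsilon_delta(f, c=1, L=3, epsilon=0.1, delta=0.1)
```

A Matplotlib version would draw f, the horizontal band L ± epsilon and the vertical band c ± delta, and re-run this check as the epsilon/delta sliders move.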
|
<python><function><math><visualization><limit>
|
2023-04-19 12:33:14
| 1
| 1,065
|
Createdd
|
76,054,552
| 15,095,104
|
Selecting a database to use per specific APIView in the Django application
|
<p>Recently I added a second database to my Django application. I have prepared API a long time ago with the use of djangorestframework and now I would like to be able to choose which view should use which database.</p>
<p>I tried using a <a href="https://docs.djangoproject.com/en/4.2/topics/db/multi-db/#database-routers" rel="nofollow noreferrer">Database Router</a>; however, I do not think it suits my needs because many of my views use the same models. I also hope to find a more elegant solution than just adding the <em>.using()</em> method to every single query.</p>
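One pattern sometimes suggested for the per-view case (rather than a model-based router) is a small mixin that pins <code>get_queryset</code> to a database alias. The sketch below is deliberately framework-agnostic, with stand-in base classes so it runs without Django; treat the names as assumptions.

```python
class UseDatabaseMixin:
    """Route every queryset of the view to the database named in `database`."""
    database = "default"

    def get_queryset(self):
        # In Django/DRF this would wrap e.g. ListAPIView.get_queryset()
        return super().get_queryset().using(self.database)


# --- stand-in objects so the sketch is runnable without Django ---
class FakeQuerySet:
    def using(self, alias):
        self.db = alias
        return self

class FakeView:
    def get_queryset(self):
        return FakeQuerySet()

class ReplicaView(UseDatabaseMixin, FakeView):
    database = "replica"
```

With DRF, `ReplicaView` would inherit from the relevant `APIView`/generic view instead of `FakeView`, and each view class declares its database once instead of repeating `.using()` per query.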
|
<python><django><django-rest-framework>
|
2023-04-19 12:31:11
| 0
| 954
|
TymoteuszLao
|
76,054,532
| 1,021,888
|
Update Time Series date range in Colab
|
<p>As you can see in the last cell, when you change the date slider and click on Update everything works, but I duplicate the figure every time underneath the panel. I am new to Python and Colab, so I am stuck and need some help, thanks :)
<a href="https://colab.research.google.com/drive/11SWIbkbOVonXWtEQnA6C_ns1Ij0RZh43?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/11SWIbkbOVonXWtEQnA6C_ns1Ij0RZh43?usp=sharing</a></p>
|
<python><google-colaboratory><panel>
|
2023-04-19 12:28:43
| 1
| 1,781
|
Brett
|
76,054,444
| 16,733,101
|
Is there an option in Pycaret to stop looking for best model if the current models achieve a desired performance
|
<p>I have a data frame with 860k rows, and I am using a GPU, but training is still very slow. I decided to stop the training, and I am satisfied with the performance of the last model.</p>
<p>Is there an option in Pycaret to stop looking for the best model if the current models achieve the desired performance?</p>
|
<python><classification><pycaret>
|
2023-04-19 12:21:09
| 1
| 9,984
|
Hamzah Al-Qadasi
|
76,054,373
| 6,477,678
|
Multiple p-values calculator for Multiple Chi-Squared Values
|
<p>This is not a theory question, just one of efficiency. Let's say I have a list (long list) of chi-squared random variables and I want to calculate a p-value for each. For ease let's say they all have the same degrees of freedom. Something like:</p>
<pre><code>Chi Square [44,2.3, 33.4,........] -> p_value [.11,.95,.65,..........]
</code></pre>
<p>Is there a numpy function or online calculator that can do this? Losing my mind doing single entries atm.</p>
<p>Cheers</p>
|
<python><numpy><statistics><p-value>
|
2023-04-19 12:13:54
| 1
| 687
|
Canuck
|
76,054,304
| 6,583,606
|
How does `KSComp` behave when using the options `lower_flag`, `ref`, `ref0` and `units` and `upper` at the same time?
|
<p>Reading the <a href="https://openmdao.org/newdocs/versions/latest/features/core_features/adding_desvars_cons_objs/adding_constraint.html" rel="nofollow noreferrer">Adding Constraints</a> page, I have understood that the scaling of a constraint value happens in the following order:</p>
<ul>
<li>the constraint value is converted from the connected outputβs units to the desired units;</li>
<li><code>ref0</code> and <code>ref</code> scale the constraint value to 0 and 1, respectively, and they have to be defined in the desired units.</li>
</ul>
<p>My understanding is that <code>lower</code> and <code>upper</code> also have to be defined in the desired units and that they are scaled according to the values of <code>ref0</code> and <code>ref</code>. Is this correct?</p>
<p>Now, I'm finding a little bit trickier to understand what happens with <code>KSComp</code>. Say I have the following situation:</p>
<ul>
<li>the connected output feeds an array of stiffnesses defined in N/mm;</li>
<li>the <code>KSComp</code> component is defined with the following options: <code>lower_flag=True</code>, <code>units=N/m</code> and <code>upper=1</code>, meaning that the stiffnesses will be defined in N/m and will be constrained to be larger than 1.</li>
</ul>
<p>My understanding is that the array of stiffnesses will be converted to N/m and then the component will transform the input array and calculate the KS function such that the constraint is satisfied if it is less than zero.</p>
<p>How should I use <code>ref0</code> and <code>ref</code> in this case? In theory they should represent the physical values in the desired units that map to 0 and 1, respectively. However, looking at the <a href="https://openmdao.org/newdocs/versions/latest/_modules/openmdao/components/ks_comp.html" rel="nofollow noreferrer">source code of <code>KSComp</code></a> it seems that the scaling is applied to the computed KS function. In this case, if I only use the <code>ref</code> option and I scale a feasible value to 1, this will never satisfy the constraint, because with the option <code>add_constraint=True</code> the constraint is set to be satisfied when the KS function is smaller than 0 (0 scales to 0). Am I getting any of this wrong?</p>
|
<python><constraints><openmdao>
|
2023-04-19 12:06:16
| 1
| 319
|
fma
|
76,054,207
| 276,052
|
Exploding a large json-file using ijson
|
<p>I have a large json-file (too large to fit in memory) with the following structure:</p>
<pre><code>{
"commonVal": "foo",
"entries": [
{ "bar": "baz1" },
{ "bar": "baz2" },
...,
]
}
</code></pre>
<p>I would like to process this json file as objects like this:</p>
<pre><code>commonVal: foo, bar: baz1
commonVal: foo, bar: baz2
...
</code></pre>
<p>I.e.</p>
<ol>
<li>Read <code>commonVal</code> (outside of the list) and store that in a variable,</li>
<li>Iterate over <code>entries</code> one by one</li>
</ol>
<p>To my help I have the <a href="https://pypi.org/project/ijson/" rel="nofollow noreferrer"><code>ijson</code></a> library.</p>
<p>I can perform step 1 using <code>kvitems</code>, and step 2 using <code>items</code>, but I can't figure out how to "mix" the two. (I would very much like to avoid dropping to the events-api because the entries in the list are more complex than in the example.)</p>
|
<python><json><ijson>
|
2023-04-19 11:55:03
| 1
| 422,550
|
aioobe
|
76,054,162
| 6,758,862
|
How to use TypeVar with dictionaries
|
<p>Following is a minimal example of a class dictionary defined with keys tuples of classes, and values callables accepting instances of those classes:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, Dict, Tuple, TypeVar
from typing import Type
T = TypeVar("T")
A: Dict[
Tuple[Type[T],...], Callable[[T], str]
] = {
(bool, str, int, float): lambda d: str(d)
}
</code></pre>
<p>This example brings a MyPy error: <code>Type variable T is unbound</code>.</p>
<p>I also tried with a <code>Generic</code> container class.</p>
<pre class="lang-py prettyprint-override"><code>
from typing import Callable, Dict, Tuple, TypeVar
from typing import Generic, Type
T = TypeVar("T")
class Test(Generic[T]):
A: Dict[
Tuple[Type[T],...], Callable[[T], str]
] = {
(bool, str, int, float): lambda d: str(d)
}
</code></pre>
<p>From there, I am getting the error:
<code>error: Dict entry 0 has incompatible type "Tuple[Type[bool], Type[str], Type[int], Type[float]]": "Callable[[T], str]"; expected "Tuple[Type[T], ...]"</code></p>
<p>Then, I tried without tuples:</p>
<pre><code>class Test(Generic[T]):
A: Dict[Type[T], Callable[[T], str]] = {
bool: lambda d: str(d)
}
</code></pre>
<p>From there, I am getting the error:
<code>error: Dict entry 0 has incompatible type "Type[bool]": "Callable[[T], str]"; expected "Type[T]": "Callable[[T], str]</code></p>
<p>I proceeded by even more simplifying the case:</p>
<pre><code>T = TypeVar("T")
class Test(Generic[T]):
A: Dict[T, T] = {bool: bool}
</code></pre>
<p>This returns an error: <code>Dict entry 0 has incompatible type "Type[bool]": "Type[bool]"; expected "T": "T"</code></p>
<p>What is the right way to define a dictionary with values types bound by the types of the keys?</p>
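MyPy cannot bind `T` per dictionary entry, so the key/value dependency above is not expressible with a plain `Dict` annotation. One commonly suggested alternative that keeps a type-to-callable mapping while staying type-checkable is `functools.singledispatch`; a minimal sketch:

```python
from functools import singledispatch

@singledispatch
def to_str(d: object) -> str:
    raise NotImplementedError(f"no handler for {type(d).__name__}")

# each registered handler is checked against its own annotated type
@to_str.register
def _(d: bool) -> str:
    return str(d)

@to_str.register
def _(d: float) -> str:
    return f"{d:.2f}"
```

Dispatch happens on the runtime type of the first argument, and each handler's parameter type is checked individually, which is exactly the per-entry binding the dict annotation could not express.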
|
<python><mypy><python-typing>
|
2023-04-19 11:49:27
| 0
| 723
|
Vasilis Lemonidis
|
76,054,004
| 1,389,394
|
set edges colour based on node colour in networkx graph
|
<p>For a directed networkx graph, the data is read from a csv file.
The nodes' colours are set by the degree of the nodes.
I want to set each edge line colour to take the node's colour.
There has to be an efficient way to access the indices of nodes and the respective edges of each node.</p>
<p>Any ideas to achieve this?</p>
<p>current sample data and code as follows:</p>
<pre><code>import networkx as nx
import matplotlib.pylab as plt
import pandas as pd
G = nx.Graph()
edges = [(1, 2, 10), (1, 6, 15), (2, 3, 20), (2, 4, 10),
(2, 6, 20), (3, 4, 30), (3, 5, 15), (4, 8, 20),
(4, 9, 10), (6, 7, 30)]
G.add_weighted_edges_from(edges)
def nodecoldeg(g):
nodecol = [g.degree(u) for u in g]
return nodecol
#edge_colour can be a sequence of colors with the same length as edgelist
def edgecoldeg(g):
nodecol = nodecoldeg(g)
i = 0
for index1, u in enumerate(g.nodes()):
for index2, v in enumerate(nx.dfs_edges(g,u)):
edgecol[i] = nodecol[index1]
i = i+1
#??? edgecol gives out of bound error...
return edgecoldeg
nx.draw(G, node_color=nodecoldeg(G), edge_color=edgecoldeg(G), with_labels=True)
</code></pre>
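If the intended rule is "each edge takes the colour of its first endpoint", there is no need for nested index bookkeeping: build one colour value per edge directly from the degrees. The sketch below computes the degrees with a plain `Counter` so it runs stand-alone; with networkx the equivalent would be `[G.degree(u) for u, v in G.edges()]`.

```python
from collections import Counter

edges = [(1, 2), (1, 6), (2, 3), (2, 4), (2, 6),
         (3, 4), (3, 5), (4, 8), (4, 9), (6, 7)]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

node_order = sorted(degree)                      # one colour value per node
node_colors = [degree[n] for n in node_order]
edge_colors = [degree[u] for u, v in edges]      # colour of the first endpoint
```

Both lists line up with `nx.draw(..., node_color=..., edge_color=...)` as long as they are built from the same iteration order as `G.nodes()` and `G.edges()`.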
|
<python><import><colors><networkx><edges>
|
2023-04-19 11:32:56
| 0
| 14,411
|
bonCodigo
|
76,053,926
| 5,574,107
|
Python more efficient method than .apply()
|
<p>I have a large dataframe with projected data 60 months into the future, and I need to drop the projections for months that haven't happened yet. I have a functioning way to do this but it's throwing memory errors for a 16 million row dataframe (I have removed all unnecessary columns):</p>
<pre><code>from dateutil.relativedelta import relativedelta
from tqdm import tqdm
tqdm.pandas()
def add_months(start_date, delta_period):
end_date = start_date + relativedelta(months=delta_period)
return end_date
# Apply function on the dataframe using lambda operation.
snapshots["End_Date"] = snapshots.progress_apply(lambda row: add_months(row["startDate"], row["projectedMonth"]), axis = 1)
</code></pre>
<p>Then I would drop rows where <code>End_Date</code> > today. I tried to import 'swifter' but my organisation's settings won't allow that. Is there a more efficient way to deal with this? I wondered about doing</p>
<pre><code>snapshots['End_Date']=snapshots['startDate']+relativedelta(months=snapshots['projectedMonth'])
</code></pre>
<p>But get the error about relativedelta needing int not series. Thanks!</p>
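One vectorised alternative (hedged: it snaps dates to the start of the month, unlike `relativedelta`, which preserves the day) is period arithmetic, where a month-period series plus an integer series is computed column-at-a-time with no row-wise `apply`.

```python
import pandas as pd

snapshots = pd.DataFrame({
    "startDate": pd.to_datetime(["2020-01-15", "2021-06-30"]),
    "projectedMonth": [2, 13],
})

# convert to month resolution, add the per-row month counts, convert back
periods = snapshots["startDate"].dt.to_period("M")
snapshots["End_Date"] = (periods + snapshots["projectedMonth"]).dt.to_timestamp()
```

Filtering then becomes a plain boolean mask, e.g. `snapshots[snapshots["End_Date"] <= pd.Timestamp.today()]`. If day-of-month precision matters for the cutoff, the day offset of `startDate` would have to be added back after the period arithmetic.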
|
<python><pandas><apply>
|
2023-04-19 11:25:40
| 1
| 453
|
user13948
|
76,053,688
| 1,096,660
|
How about `__name__ != "__main__"` in Python?
|
<p>I'm writing a Python script that is supposed to be imported. It has some plumbing to do and uses a function from itself. If I put this at the top of the script I have to put the function definition first and it gets ugly.</p>
<p>Is <code>__name__ != "__main__"</code> a proper solution to execute code on import? Or are there any other conventions for that?</p>
<p>Basically what I want to do is: Get definitions first and then run this code.</p>
<p>Here is an example of how it basically looks:</p>
<pre class="lang-py prettyprint-override"><code>import logging
def setup():
    global logger
    logger = get_logger(__name__)
def get_logger(name: str):
    # do magic
    # packed in a function to be reusable elsewhere
    return logging.getLogger(name)
if __name__ != "__main__":
setup()
</code></pre>
<p>To clarify why: This is a <code>utils.py</code> with stuff being used in other scripts. One function it has is <code>get_logger(name: str)</code>. That function sets up a logger with all bells and whistles. In other scripts I call <code>get_logger(__name__)</code> to do it in one line. Now I want to call this within my <code>utils.py</code> as well. I don't want to call this at the bottom of the script, but at the top where it's nice and visible. However I can't just call the function before its definition.</p>
|
<python><python-import><python-importlib>
|
2023-04-19 10:57:45
| 1
| 2,629
|
JasonTS
|
76,053,620
| 3,490,424
|
How to stop the learning process with PPO in stablelines?
|
<p>So, I created a custom environment based on gymnasium and I want to train it with PPO from <code>stable_baselines3</code>. I'm using version 2.0.0a5 of the latter, in order to use gymnasium. I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>env = MyEnv()
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1, progress_bar=True)
</code></pre>
<p>This code does not stop: the progress bar goes past the total number of time steps and just keeps going... I may be doing something wrong with the environment, but I am not sure what, or why it would make the learning process run for more iterations than the <code>total_timesteps</code> set by the user.</p>
<p>So, what could go wrong with the environment? What should I check that could make the learning process infinite?</p>
<p>Edit: the plot thickens. I tried the same thing with an SAC agent and it does not go into an infinite loop during learning. But it does one during evaluation!</p>
|
<python><openai-gym><stable-baselines>
|
2023-04-19 10:47:52
| 1
| 1,288
|
Benares
|
76,053,605
| 8,219,760
|
Subclassing `Process` to set process level constant
|
<p>I am trying to subclass <code>mp.Process</code> to create a process-level constant to decide between GPUs on my desktop. To achieve this, I'd like to have a device id inside each process object and later pass it to a function in the <code>run</code> method. The example code here does not actually use <code>self._gpu_id</code> yet and omits error handling, but it attempts to create a subclassed process with a <code>gpu_id</code>.</p>
<p>Based on <a href="https://stackoverflow.com/questions/14755754/pass-keyword-argument-only-to-new-and-never-further-it-to-init">this question</a> I attempted</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing as mp
from typing import Type
# Attempt to override default behaviour of the `Process`
# to add a gpu_id parameter during construction.
class GPUProcessMeta(type):
def __call__(cls, *args, **kwargs):
obj = cls.__new__(cls, *args, **kwargs)
gpu_id = cls.next_id
if gpu_id in cls.used_ids:
raise RuntimeError(
f"Attempt to reserve reserved processor {gpu_id} {cls.used_ids=}"
)
cls.next_id += 1
cls.used_ids.append(gpu_id)
kwargs["gpu_id"] = gpu_id
obj.__init__(*args, **kwargs)
return obj
class GPUProcess(mp.Process, metaclass=GPUProcessMeta):
used_ids: list[int] = []
next_id: int = 0
def __init__(
self,
group=None,
target=None,
name=None,
args=(),
kwargs={},
*,
daemon=None,
gpu_id=None,
):
super(GPUProcess, self).__init__(
group,
target,
name,
args,
kwargs,
daemon=daemon,
)
self._gpu_id = gpu_id
@property
def gpu_id(self):
return self._gpu_id
def __del__(self):
GPUProcess.used_ids.remove(self.gpu_id)
def __repr__(self) -> str:
return f"<{type(self)} gpu_id={self.gpu_id} hash={hash(self)}>"
@classmethod
def create_gpu_context(cls) -> Type[mp.context.DefaultContext]:
context = mp.get_context()
context.Process = cls
return context
def test_gpu_pool():
ctx = GPUProcess.create_gpu_context()
with ctx.Pool(2) as pool:
payload = (tuple(range(3)) for _ in range(10))
response = pool.starmap(
_dummy_func,
payload,
)
assert response == ((0, 1, 2),) * 10
def _dummy_func(*args, **kwargs):
return args, kwargs
if __name__ == "__main__":
test_gpu_pool()
</code></pre>
<p>However, this crashes with <code>RecursionError: maximum recursion depth exceeded</code>. I do not understand what could cause this. How can I implement a <code>Process</code> subclass with a unique process-level variable?</p>
<p>EDIT: After some digging into the Python <a href="https://github.com/python/cpython/blob/bd2ed066c855dadbc739e89f9bc32e218dfc904e/Lib/multiprocessing/context.py#L220" rel="nofollow noreferrer">source</a>:</p>
<pre><code>#
# Type of default context -- underlying context can be set at most once
#
class Process(process.BaseProcess):
_start_method = None
@staticmethod
def _Popen(process_obj):
return _default_context.get_context().Process._Popen(process_obj)
@staticmethod
def _after_fork():
return _default_context.get_context().Process._after_fork()
</code></pre>
<p>Here <code>_Popen</code> causes a call to <code>_Popen</code> with infinite recursion and crashes my code. I am not sure how <code>GPUProcess</code> can differ from the standard <code>Process</code>, but the root cause of the crash is here.</p>
|
<python><multiprocessing>
|
2023-04-19 10:45:36
| 2
| 673
|
vahvero
|
76,053,482
| 11,963,167
|
pd.Timestamp is assimilated to dtype('O')?
|
<p>I developed a simple function which returns an empty dataframe with correct column names and dtypes from a dictionary:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
def return_empty_dataframe(schema: dict[str, np.dtype]):
    return pd.DataFrame(columns=schema.keys()).astype(schema)
</code></pre>
<p>which is used like that:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
schema = {'time': np.datetime64, 'power': float, 'profile': str}
empty_df = return_empty_dataframe(schema)
</code></pre>
<p>I wanted to add the possibility to define a column to be of type <code>pd.Timestamp</code>. As pandas does not understand its own type and requires a timestamp column to be of type <code>np.datetime64</code>, I added the next code snippet to my function (to convert pd.Timestamp to np.datetime64 in the schema used to build the dataframe):</p>
<pre class="lang-py prettyprint-override"><code>def return_empty_dataframe(schema: dict[str, np.dtype]):
dict_col_types_no_timestamp = {key: val for key, val in schema.items() if val != pd.Timestamp}
dict_col_types_just_timestamp = {key: np.datetime64 for key, val in schema.items() if val == pd.Timestamp}
dict_col_types = dict_col_types_no_timestamp | dict_col_types_just_timestamp
return pd.DataFrame(columns=dict_col_types.keys()).astype(dict_col_types)
</code></pre>
<p>and so far so good, I can define my columns to be of type pd.Timestamp</p>
<pre class="lang-py prettyprint-override"><code>schema = {'time': pd.Timestamp, 'power': float, 'profile': str}
empty_df = return_empty_dataframe(schema) # works
</code></pre>
<p>However, I have a problem when I use this function with some automatic column type detection, as it seems columns of dtype <code>object</code> (<code>dtype('O')</code>) are interpreted as pd.Timestamp.</p>
<p>To check that:</p>
<pre class="lang-py prettyprint-override"><code>pd.Timestamp == np.dtype('O') # usually I have dtype('O') for string, or mixed types
> True
</code></pre>
<p>Is that the regular behaviour?</p>
<p>It is a problem for me, as for instance</p>
<pre class="lang-py prettyprint-override"><code>schema = {'string_data': np.dtype('O'), 'power': float, 'profile': str}
empty_df = return_empty_dataframe(schema) # works
</code></pre>
<p>and the column <code>string_data</code> is turned into a <code>np.datetime64</code> column.</p>
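The comparison behaves this way because <code>np.dtype.__eq__</code> tries to coerce the other operand into a dtype, and <code>pd.Timestamp</code> (like other plain Python classes) coerces to the generic object dtype. A minimal demonstration:

```python
import numpy as np
import pandas as pd

# np.dtype.__eq__ attempts to coerce the other operand to a dtype;
# pd.Timestamp coerces to dtype('O'), hence the equality in both directions
assert np.dtype("O") == pd.Timestamp
assert pd.Timestamp == np.dtype("O")   # type.__eq__ defers, dtype answers

# an identity check sidesteps the coercion entirely
assert (np.dtype("O") is pd.Timestamp) is False
```

This suggests comparing schema values with `is` / `is not` (e.g. `val is pd.Timestamp`) instead of `==` / `!=` inside `return_empty_dataframe`, so that `dtype('O')` entries stop matching the Timestamp branch.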
|
<python><pandas><numpy><timestamp>
|
2023-04-19 10:33:36
| 1
| 496
|
Clej
|
76,053,434
| 2,954,256
|
Python mutiprocessing within the same AWS Glue 4.0 job hangs
|
<p>I am trying to use Python Multiprocessing to process data in parallel within the same AWS Glue 4.0 job. I know that I could use Glue Workflows with multiple jobs to achieve parallel data processing, but for reasons that are irrelevant here, it is something that I don't want to do.</p>
<p>This is my Python code:</p>
<pre><code>from multiprocessing import Pool
import sys
import time
import random
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
args = getResolvedOptions(sys.argv, ['JOB_NAME', 'TempDir'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
print(f"{args['JOB_NAME']} STARTED")
def worker(table_name, tmp_dir):
print(f"STARTED WORKER: {table_name}")
data = load_data(table_name, tmp_dir)
process_data(table_name, data)
print(f"FINISHED WORKER: {table_name}")
def load_data(table_name, tmp_dir):
print(f"LOADING: {table_name}")
data = glueContext.create_dynamic_frame.from_catalog(database="my_database",
table_name=table_name,
redshift_tmp_dir=f"{tmp_dir}/{table_name}",
transformation_ctx=f"data_source_{table_name}")
time.sleep(random.randint(1, 5)) # added here to simulate different loading times
print(f"LOADED: {table_name} has {data.count()} rows")
return data
def process_data(table_name, data):
print(f"PROCESSING: {table_name}")
# do something
time.sleep(random.randint(1, 5)) # added here to simulate different processing times
print(f"DONE: {table_name}")
pool = Pool(4)
tables = ['TABLE1', 'TABLE2', 'TABLE3', 'TABLE4', 'TABLE5', 'TABLE6', 'TABLE7', 'TABLE8', 'TABLE9']
for table in tables:
pool.apply_async(worker, args=(table, args['TempDir']))
pool.close()
pool.join()
print(f"{args['JOB_NAME']} COMPLETED")
job.commit()
</code></pre>
<p>Unfortunately, while it seems to start multiple workers correctly, it hangs and never completes until the Glue job finally times out.</p>
<p>This is what I see in the CloudWatch output log. There are no errors in the error log.</p>
<pre><code>2023-04-19T12:01:49.566+02:00 STARTED WORKER: TABLE1 LOADING: TABLE1
2023-04-19T12:01:49.566+02:00 STARTED WORKER: TABLE2 LOADING: TABLE2
2023-04-19T12:01:49.566+02:00 STARTED WORKER: TABLE3 LOADING: TABLE3
2023-04-19T12:01:49.566+02:00 STARTED WORKER: TABLE4 LOADING: TABLE4
2023-04-19T12:01:49.603+02:00 STARTED WORKER: TABLE5 LOADING: TABLE5
2023-04-19T12:01:49.604+02:00 STARTED WORKER: TABLE6 LOADING: TABLE6
2023-04-19T12:01:49.607+02:00 STARTED WORKER: TABLE7 LOADING: TABLE7
2023-04-19T12:01:49.608+02:00 STARTED WORKER: TABLE8 LOADING: TABLE8
2023-04-19T12:01:49.609+02:00 STARTED WORKER: TABLE9 LOADING: TABLE9
</code></pre>
<p>I have tried several things, but I cannot understand exactly what the problem is, except that it seems to be hanging on <code>create_dynamic_frame.from_catalog()</code>.</p>
<p>Has anybody attempted to do the same and solved it?
Why doesn't it work?</p>
<p>Thank you in advance!</p>
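For what it's worth, a frequently cited explanation is that the Spark/Glue driver objects do not survive being forked into `multiprocessing` workers, and that `apply_async` also swallows worker exceptions unless the returned `AsyncResult` is inspected. A commonly suggested workaround is a thread pool, since all threads share the one driver `SparkContext`; the worker below is a stand-in so the sketch runs anywhere.

```python
from multiprocessing.pool import ThreadPool

def worker(table_name):
    # In the real job this is where create_dynamic_frame.from_catalog()
    # and the processing would go; threads share the single SparkContext.
    return f"FINISHED WORKER: {table_name}"

tables = ["TABLE1", "TABLE2", "TABLE3", "TABLE4"]
with ThreadPool(4) as pool:
    results = pool.map(worker, tables)  # map() re-raises worker exceptions
```

Using `map` (or collecting and `.get()`-ing the `AsyncResult`s) also surfaces any error raised inside the workers instead of hanging silently.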
|
<python><multiprocessing><aws-glue>
|
2023-04-19 10:28:40
| 1
| 412
|
Roberto A.
|
76,053,313
| 13,078,279
|
Avoiding ZeroDivisionError for Runge-Kutta 4(5) solver
|
<p>I am trying to create a Runge-Kutta 4(5) solver to solve the differential equation <code>y' = 2t</code> with the initial condition <code>y(0) = 0.5</code>. This is what I have so far:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def rk45(f, u0, t0, tf=100000, epsilon=0.00001, debug=False):
h = 0.002
u = u0
t = t0
# solution array
u_array = [u0]
t_array = [t0]
if debug:
print(f"t0 = {t}, u0 = {u}, h = {h}")
while t < tf:
h = min(h, tf-t)
k1 = h * f(u, t)
k2 = h * f(u+k1/4, t+h/4)
k3 = h * f(u+3*k1/32+9*k2/32, t+3*h/8)
k4 = h * f(u+1932*k1/2197-7200*k2/2197+7296*k3/2197, t+12*h/13)
k5 = h * f(u+439*k1/216-8*k2+3680*k3/513-845*k4/4104, t+h)
k6 = h * f(u-8*k1/27+2*k2-3544*k3/2565+1859*k4/4104-11*k5/40, t+h/2)
u1 = u + 25*k1/216+1408*k3/2565+2197*k4/4104-k5/5
u2 = u + 16*k1/135+6656*k3/12825+28561*k4/56430-9*k5/50+2*k6/55
R = abs(u1-u2) / h
print(f"R = {R}")
delta = 0.84*(epsilon/R) ** (1/4)
if R <= epsilon:
u_array.append(u1)
t_array.append(t)
u = u1
t += h
h = delta * h
if debug:
print(f"t = {t}, u = {u1}, h = {h}")
return np.array(u_array), np.array(t_array)
def test_dydx(y, t):
return 2 * t
initial = 0.5
sol_rk45 = rk45(test_dydx, initial, t0=0, tf=2, debug=True)
</code></pre>
<p>When I run it, I get this:</p>
<pre><code>t0 = 0, u0 = 0.5, h = 0.002
R = 5.551115123125783e-14
t = 0.002, u = 0.5000039999999999, h = 0.19463199004973464
R = 0.0
---------------------------------------------------------------------------
ZeroDivisionError
</code></pre>
<p>This is because the 4th order solution <code>u1</code> and 5th order solution <code>u2</code> are so close together that their difference is essentially zero, and when I calculate <code>delta</code> I get <code>1/0</code> which obviously results in a ZeroDivisionError.</p>
<p>One way to solve this is to not calculate <code>delta</code> and use a much simpler version of RK45:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def rk45(f, u0, t0, tf=100000, epsilon=0.00001, debug=False):
h = 0.002
u = u0
t = t0
# solution array
u_array = [u0]
t_array = [t0]
if debug:
print(f"t0 = {t}, u0 = {u}, h = {h}")
while t < tf:
h = min(h, tf-t)
k1 = h * f(u, t)
k2 = h * f(u+k1/4, t+h/4)
k3 = h * f(u+3*k1/32+9*k2/32, t+3*h/8)
k4 = h * f(u+1932*k1/2197-7200*k2/2197+7296*k3/2197, t+12*h/13)
k5 = h * f(u+439*k1/216-8*k2+3680*k3/513-845*k4/4104, t+h)
k6 = h * f(u-8*k1/27+2*k2-3544*k3/2565+1859*k4/4104-11*k5/40, t+h/2)
u1 = u + 25*k1/216+1408*k3/2565+2197*k4/4104-k5/5
u2 = u + 16*k1/135+6656*k3/12825+28561*k4/56430-9*k5/50+2*k6/55
R = abs(u1-u2) / h
if R <= epsilon:
u_array.append(u1)
t_array.append(t)
u = u1
t += h
else:
h = h / 2
if debug:
print(f"t = {t}, u = {u1}, h = {h}")
return np.array(u_array), np.array(t_array)
</code></pre>
<p>But this, while it works, seems incredibly pointless to me because it negates the adaptive-step-size advantage of an RK45 method compared to an RK4 method.</p>
<p>Is there any way to preserve an adaptive step size while not running into ZeroDivisionErrors?</p>
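A common way to keep the adaptive step while avoiding the division (hedged as one option, not the canonical fix) is to floor `R` at a tiny positive value and cap how much the step may grow in a single iteration:

```python
def step_scale(u1, u2, h, epsilon, r_min=1e-12, max_growth=4.0):
    """Return the factor by which to scale h, never dividing by zero."""
    R = max(abs(u1 - u2) / h, r_min)      # floor the error estimate
    delta = 0.84 * (epsilon / R) ** 0.25
    return min(delta, max_growth)         # cap the per-step growth

# When u1 == u2 the floored R takes over and the growth cap applies
assert step_scale(0.5, 0.5, h=0.002, epsilon=1e-5) == 4.0
```

Inside `rk45` this would replace the bare `delta = 0.84*(epsilon/R) ** (1/4)` line, so the step still adapts (shrinking when the error estimate is large) but can no longer blow up or divide by zero.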
|
<python><differential-equations><runge-kutta>
|
2023-04-19 10:15:22
| 2
| 416
|
JS4137
|
76,053,220
| 5,930,047
|
How to check GET-query-parameters with requests-mock?
|
<p>I have a function which makes a GET-request with a dict of params. In my unit-tests I want to make sure that the parameters are set correctly. However when I try to use <code>requests-mock</code> to mock the request, I can only check for the URL without the parameters. It seems like the params are not picked up by requests-mock.</p>
<p>Example Code:</p>
<pre class="lang-py prettyprint-override"><code>import requests as req
def make_request():
url = "mock://some.url/media"
headers = {"Authorization": "Bearer ..."}
params = {"since": 0, "until": 1}
    return req.get(url, params=params, headers=headers, timeout=30)
</code></pre>
<p>Example Test:</p>
<pre class="lang-py prettyprint-override"><code>import requests_mock
def my_test():
auth_headers = {"Authorization": "Bearer ..."}
query_params = "?since=0&until=1"
expected_url = f"mock://some.url/media{query_params}" # no mock address
with requests_mock.Mocker() as mocker:
mocker.get(
expected_url,
headers=auth_headers,
text="media_found",
)
response = make_request()
assert response.text == "media_found"
# assert mocker.last_request.params == query_params
</code></pre>
<p>This is what would've made sense to me. However, as long as I use the <code>query_params</code> in <code>expected_url</code> I get a <code>No Mock Address</code> error.</p>
<p>When I'm using the URL without the <code>query_params</code> the test runs fine, but doesn't check that the query params are set correctly. I found the <a href="https://requests-mock.readthedocs.io/en/latest/matching.html#query-strings" rel="nofollow noreferrer">query-string-matcher</a> in the documentation, but didn't get it to work. The <code>No Mock Address</code> error kept coming up.</p>
<p>I also tried debugging the <code>request_history</code>-object provided by requests mock, but there was no params/query field set in there.</p>
<pre><code>_url_parts: ParseResult(scheme='mock', netloc='some.url', path='/media', params='', query='', fragment='')
_url_parts_: ParseResult(scheme='mock', netloc='some.url', path='/media', params='', query='', fragment='')
</code></pre>
<p>This was surprising, because when doing the actual request outside of the code, the params are added correctly.</p>
<p>Can someone point me in the right direction on how I can check the GET-query-parameters with <code>requests-mock</code> without creating a spy on the <code>req.get()</code> function and checking the parameters the function was called with?</p>
<p>Thanks!</p>
|
<python><unit-testing><python-requests><requests-mock>
|
2023-04-19 10:04:10
| 1
| 367
|
Philip Koch
|
76,053,201
| 5,539,674
|
Building a new pd.dataframe with statistics from own functions
|
<p>I am trying to create some summary statistics for text data that I am working with, namely the average length of the text columns in my DataFrame.</p>
<p>I am working with two columns: <code>short</code> and <code>long</code></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = {
'short': ['Fruit', 'Vehicle', 'Animal', 'City'],
'long': ['An edible object that is usually sweet and grows on trees or plants.',
'A mode of transportation that is used to move people or goods from one place to another.',
'A living organism that typically feeds on organic matter and has the ability to move.',
'A large and populous settlement, usually the seat of government or important cultural institutions.']
}
df = pd.DataFrame(data)
</code></pre>
<p>For that I have written a function that calculates that information for a single column:</p>
<pre class="lang-py prettyprint-override"><code>def mean_chars_in_col(data: pd.DataFrame,
col: str):
return data[col].str.len().mean().round(2)
</code></pre>
<p>This gives me the needed information for a single column:</p>
<pre><code>mean_chars_in_col(df, "short")
## 5.5
</code></pre>
<p>How can I extend this so that I get this information for every column in the DataFrame, ideally with a new DataFrame as output that has two columns, <code>colname</code> and <code>mean_chars</code>: one row per observed column in the original DataFrame, with the column name in <code>colname</code> and the result in <code>mean_chars</code>?</p>
<p>Would it also be possible to extend this with new functions that each write their output to their own column, one value per row?</p>
<pre class="lang-py prettyprint-override"><code>#expected output, ideally a new dataframe
colname, mean_chars, mean_length
short, 5.5, 1
long, 85, 15.25
</code></pre>
<p>Thank you.</p>
|
<python><pandas><dataframe>
|
2023-04-19 10:01:55
| 2
| 315
|
O RenΓ©
|
76,053,164
| 572,575
|
How to delete oldest data in table if more than 5 row using django?
|
<p>If the table contains more than 5 rows, I want to delete the oldest row(s), for example:</p>
<pre><code>id value
1 a1 <= delete
2 a2
3 a3
4 a4
5 a5
6 a6
</code></pre>
<p>I want to keep the newest 5 rows (ids 2-6) and delete the oldest row (id 1). I use this code:</p>
<pre><code> objects_to_keep = Data.objects.filter(key=key).order_by('id')[:5]
Data.objects.exclude(pk__in=objects_to_keep).delete()
</code></pre>
<p>It shows this error:</p>
<pre><code>NotSupportedError at /device/
(1235, "This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'")
</code></pre>
<p>How to fix it?</p>
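<p>A sketch that sidesteps the restriction (untested against this exact schema): materialise the ids to keep into a Python list first, so the generated SQL contains a literal <code>IN (...)</code> instead of a <code>LIMIT</code> subquery. Note the <code>order_by('-id')</code>: the original <code>order_by('id')[:5]</code> would keep the <em>oldest</em> five rows, not the newest.</p>

```python
# Evaluate the subquery in Python; the extra round trip is cheap for 5 rows.
keep_ids = list(
    Data.objects.filter(key=key)
        .order_by('-id')                       # newest first
        .values_list('pk', flat=True)[:5]
)
Data.objects.filter(key=key).exclude(pk__in=keep_ids).delete()
```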
|
<python><django>
|
2023-04-19 09:58:14
| 2
| 1,049
|
user572575
|
76,053,085
| 4,751,700
|
How to use csv.reader in an async context?
|
<p>I query multiple services asynchronously using <code>httpx</code>. These services return CSV data that could be very large, so I'm using <a href="https://www.python-httpx.org/async/#streaming-responses" rel="nofollow noreferrer">streams</a>.</p>
<p>So far so good.</p>
<p>The problem I'm having is that the Python standard <code>csv.reader</code> doesn't work with <code>AsyncIterator</code>'s and the async stream won't let you call the sync functions like <code>iter_lines()</code>. Doing that gets you the following error:</p>
<blockquote>
<p>RuntimeError: Attempted to call a sync iterator on an async stream.</p>
</blockquote>
<p>So you're stuck with the async functions like <code>aiter_lines()</code> which return an <code>AsyncIterator</code>.</p>
<p>I've seen <code>aiocsv</code>, but that can't be used with <code>AsyncIterator</code>s since it expects an object with a <code>read(size: int)</code> method.</p>
<p>Example code below:</p>
<pre class="lang-py prettyprint-override"><code>from httpx import AsyncClient
from csv import reader
async with AsyncClient() as client:
async with client.stream("POST", url, content=query, headers=HEADER, timeout=25) as resp:
lines = resp.aiter_lines()
await anext(lines) # skip headers
for line in reader(lines):
yield QueryResult(*line) # create namedtuple for ease of use
</code></pre>
<blockquote>
<p>TypeError: 'async_generator' object is not iterable</p>
</blockquote>
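<p>One sketch (assuming each yielded line is a complete CSV record, i.e. no quoted embedded newlines): feed each line to <code>csv.reader</code> one at a time, since the reader accepts any iterable of strings. The <code>fake_lines</code> generator below is a stand-in for <code>resp.aiter_lines()</code>:</p>

```python
import asyncio
import csv

async def parse_csv(lines):
    """Parse an async iterator of CSV lines, skipping the header row."""
    first = True
    async for line in lines:
        if first:                       # skip headers
            first = False
            continue
        for row in csv.reader([line]):  # one complete record per line assumed
            yield row

async def demo():
    async def fake_lines():             # stands in for resp.aiter_lines()
        for line in ['a,b', '1,"x,y"', '2,z']:
            yield line
    return [row async for row in parse_csv(fake_lines())]

rows = asyncio.run(demo())
print(rows)   # [['1', 'x,y'], ['2', 'z']]
```

<p>Quoted fields containing newlines would break this line-at-a-time approach; for that case the stream would need to be buffered into a file-like object first.</p>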
|
<python><csv><python-asyncio><httpx>
|
2023-04-19 09:49:39
| 0
| 391
|
fanta fles
|
76,053,007
| 13,868,186
|
Use of widget.pack() does not recover forgotten pack widget in Tkinter
|
<pre><code>import tkinter
top = tkinter.Tk()
top.geometry("800x800")
top.title("Text Editor")
edit = False
name = tkinter.Entry(top)
def editTrue():
edit = True
name.pack_forget()
def editFalse():
edit = False
name.pack()
menubar = tkinter.Menu(top)
filemenu = tkinter.Menu(menubar, tearoff=0)
filemenu.add_command(label="New", command=editFalse)
filemenu.add_command(label="Edit", command=editTrue)
menubar.add_cascade(label="File", menu=filemenu)
top.config(menu=menubar)
text = tkinter.Text(top, height=800, width=800)
def saveNotes():
notes = text.get(1.0, "end")
f = open(name.get() + ".txt", "a")
f.write(notes)
f.close()
button = tkinter.Button(top, text="Save", command=saveNotes)
button.pack()
name.pack()
text.pack()
top.mainloop()
</code></pre>
<p>This is my code so far. When I call <code>widget.pack_forget()</code>, it works as expected: the widget is hidden. But when I call <code>widget.pack()</code> again to restore it, it doesn't reappear. I don't understand why, and I'd appreciate some help.</p>
|
<python><python-3.x><tkinter><python-3.8><tkinter-entry>
|
2023-04-19 09:41:41
| 2
| 926
|
Daniel Tam
|
76,052,993
| 3,259,222
|
How to split DataFrame/Array given a set of masks and perform calculations for each split
|
<p>You have 2 DataFrames/Arrays:</p>
<ul>
<li><code>mat</code>: size (N,T), type bool or float, nullable</li>
<li><code>masks</code>: size (N,T), type bool, non-nullable
<ul>
<li>can be split to T masks, each of size N</li>
</ul>
</li>
</ul>
<p>The goal is to split <code>mat</code> to T slices by applying each mask, perform calculations and store a set of stats for each slice in a quick and efficient way.</p>
<p>Simplest implementation is via loop over each mask, however:</p>
<ol>
<li>This scales linearly with T, which can make it slow for large T</li>
<li>Would it be faster to split and calculate in parallel? If so, how would you do this?</li>
<li>Would it be faster to store the slices for all masks in an (N,T,T) tensor and compute the stats with matrix operations? If so, how would you do this?</li>
<li>Would you implement with DataFrame or Array? given that we need to store results for multiple stats, DataFrame has some advantages.</li>
</ol>
<pre><code>import pandas as pd
import numpy as np
import datetime
N = 1000
T = 100
t0 = datetime.date(2022, 1, 1)
t_last = t0 + datetime.timedelta(days=T-1)
mat_np = np.random.choice(a=[False, True, np.nan], size=(N, T), p=[0.5, 0.4, 0.1])
mat_df = pd.DataFrame(mat_np, columns=pd.date_range(t0, t_last, freq='D'))
# This is just a simple mask for the example, but in general it could be any bool array of shape (N, T)
masks_np = (mat_np == False)
masks_df = (mat_df == False)
results = {}
# Take first 5 masks only for example simplicity but in general need to apply for all masks
for date in masks_df.columns[:5]:
mat_tmp = mat_df.loc[masks_df.loc[:,date]].loc[:,date:]
# Example with 2 stats, could be more
results[date] = pd.DataFrame({
'stat_1': mat_tmp.notna().sum(),
'stat_2': mat_tmp.cummax(axis=1).ffill(axis=1).sum(),
# ... stat_X
})
results_df = pd.concat(results.values(), keys=results.keys(), axis=1)
</code></pre>
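<p>For point 3, at least <code>stat_1</code> can be computed for <em>all</em> masks in a single matrix product instead of a loop. A NumPy sketch follows; note that only reduction-style stats like the non-NaN count vectorise this easily, while order-dependent stats such as <code>cummax</code> do not:</p>

```python
import numpy as np

N, T = 1000, 100
rng = np.random.default_rng(0)
mat = rng.choice([0.0, 1.0, np.nan], size=(N, T), p=[0.5, 0.4, 0.1])
masks = (mat == 0.0)                        # analogous to (mat == False)

# counts[j, t] = non-NaN entries at column t among rows selected by mask j,
# i.e. stat_1 of slice j; one (T,N)@(N,T) product replaces the T-iteration loop
notna = (~np.isnan(mat)).astype(np.int64)
counts = masks.T.astype(np.int64) @ notna   # shape (T, T)
counts = np.triu(counts)                    # keep columns >= mask date (.loc[:, date:])

# spot-check against the loop formulation for one mask
j = 5
expected = (~np.isnan(mat[masks[:, j]][:, j:])).sum(axis=0)
assert np.array_equal(counts[j, j:], expected)
```

<p>This trades memory for speed (a dense (T,T) result) and hands the work to BLAS, which is typically much faster than T separate boolean-index slices.</p>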
|
<python><arrays><pandas><dataframe><numpy>
|
2023-04-19 09:39:53
| 1
| 431
|
Konstantin
|
76,052,775
| 6,632,138
|
Customize "return by value" in IntEnum
|
<p>I have a class that extends <code>IntEnum</code> class that defines positions in a bit-encoded variable:</p>
<pre><code>from enum import IntEnum
class Bits(IntEnum):
@classmethod
def data(cls, value: int):
return [ e for e in cls if (1 << e) & value ]
class Status(Bits):
READY = 0
ERROR = 1
WARNING = 2
x = Status.data(3) # x <- [<Status.READY: 0>, <Status.ERROR: 1>]
y = Status.data(4) # y <- [<Status.WARNING: 2>]
z = Status.data(8) # z <- []
</code></pre>
<p>Would it be possible to customize "return by value" of <code>IntEnum</code> without breaking anything? Since there will be multiple classes that will extend the <code>Bits</code> class, ideally this functionality should be implemented in the <code>Bits</code> class.</p>
<p>What I want to achieve is this:</p>
<pre><code>x = Status(3) # x <- [<Status.READY: 0>, <Status.ERROR: 1>]
y = Status(4) # y <- [<Status.WARNING: 2>]
z = Status(8) # z <- []
</code></pre>
<p>I tried overriding the <code>__call__</code> method (see <a href="https://docs.python.org/3/library/enum.html#" rel="nofollow noreferrer">enum β Support for enumerations</a>), but it does not seem to be called in this case.</p>
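<p>For reference, a sketch of why the override wasn't picked up: <code>SomeEnum(value)</code> is handled by the <em>metaclass</em>, so <code>__call__</code> has to live on a subclass of <code>EnumMeta</code>, not on the enum class itself. The caveat is that this discards normal by-value lookup entirely:</p>

```python
from enum import IntEnum, EnumMeta

class BitsMeta(EnumMeta):
    def __call__(cls, value, *args, **kwargs):
        # replace by-value member lookup with bit decomposition
        return [e for e in cls if (1 << e) & value]

class Bits(IntEnum, metaclass=BitsMeta):
    @classmethod
    def data(cls, value):
        return [e for e in cls if (1 << e) & value]

class Status(Bits):
    READY = 0
    ERROR = 1
    WARNING = 2

print(Status(3))   # [<Status.READY: 0>, <Status.ERROR: 1>]
print(Status(8))   # []
```

<p><code>_missing_</code> is not a fit here, because it only fires for values that are not already members, and 0-2 are.</p>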
|
<python><enums><integer><overriding>
|
2023-04-19 09:17:59
| 2
| 577
|
Marko Gulin
|
76,052,758
| 246,754
|
How to only show related objects in Django Admin screens
|
<p>I have an app where I want administrators to be able to create events via the admin view. I have members, who will have multiple cars and attend multiple meets. I want to record which vehicle they used at a meet.</p>
<p>In the Django admin view for a member, I can manage their cars successfully: only cars belonging to that member show up in the <code>inline</code> on the Member admin page. But when managing meets, all cars are listed, not just the cars belonging to that member.</p>
<p>I want the same list of cars from the CarInline to appear in the MeetInline - i.e. cars registered to the member. Any ideas how to achieve this?</p>
<p>I <em>think</em> that the issue is that the <code>Meet</code> model has a ManyToMany relationship with the <code>Car</code> model but wasn't sure how else to model this.</p>
<p><em>models.py</em></p>
<pre><code>class Member(models.Model):
    member_number = models.CharField(primary_key=True, max_length=30)
    is_active = models.BooleanField(default=True)
class Car(models.Model):
    member = models.ForeignKey(Member, on_delete=models.CASCADE)
    registration = models.CharField(max_length=80)
class Meet(models.Model):
    meet_date = models.DateTimeField()
    member = models.ForeignKey(Member, on_delete=models.CASCADE)
    car = models.ManyToManyField(Car)
</code></pre>
<p><em>admin.py</em></p>
<pre><code>class CarInline(admin.TabularInline):
model = Car
extra = 0
fields = ["registration"]
class MeetInline(admin.TabularInline):
model = Meet
extra = 0
fields =["meet_date", "car"]
class MemberAdmin(admin.ModelAdmin):
list_filter = [
"is_active"
]
search_fields = ["member_number"]
fieldsets = [
(None, {"fields": ["member_number", "is_active"]}),
]
inlines = [CarInline, MeetInline]
</code></pre>
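<p>A sketch of one common pattern (untested here): stash the parent <code>Member</code> on the request in <code>get_form</code>, then filter the M2M queryset in the inline's <code>formfield_for_manytomany</code>. The <code>_member_obj_</code> attribute name is an ad-hoc assumption:</p>

```python
class MeetInline(admin.TabularInline):
    model = Meet
    extra = 0
    fields = ["meet_date", "car"]

    def formfield_for_manytomany(self, db_field, request, **kwargs):
        if db_field.name == "car" and getattr(request, "_member_obj_", None):
            kwargs["queryset"] = Car.objects.filter(member=request._member_obj_)
        return super().formfield_for_manytomany(db_field, request, **kwargs)

class MemberAdmin(admin.ModelAdmin):
    def get_form(self, request, obj=None, **kwargs):
        request._member_obj_ = obj        # the Member being edited (None on add)
        return super().get_form(request, obj, **kwargs)
```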
|
<python><django><django-admin>
|
2023-04-19 09:15:59
| 1
| 526
|
Anonymouslemming
|
76,052,704
| 3,935,797
|
brokenaxes AttributeError: 'SubplotSpec' object has no attribute 'is_last_row'
|
<p>I am trying to use the python brokenaxes to break both x and y axis. I run the code suggested here <a href="https://www.codespeedy.com/create-a-plot-with-broken-axis-in-python-using-matplotlib/" rel="nofollow noreferrer">https://www.codespeedy.com/create-a-plot-with-broken-axis-in-python-using-matplotlib/</a></p>
<pre><code>import matplotlib.pyplot as plt
from brokenaxes import brokenaxes
import numpy as np
fig = plt.figure(figsize=(6,4))
baxes = brokenaxes(xlims=((-2,3),(5,8)), ylims=((0,8),(9.5,21)), hspace=.1)
X = np.array([3,-1,0,4,5,-2,7])
Y = x**2
Z = x**3
baxes.plot(X,Y,label="squared")
baxes.plot(X,Z,label="cubed")
baxes.legend(loc="best")
plt.plot()
plt.show()
</code></pre>
<p>However I get error:</p>
<pre><code>AttributeError: 'SubplotSpec' object has no attribute 'is_last_row'
</code></pre>
|
<python><matplotlib><axis>
|
2023-04-19 09:09:36
| 1
| 1,028
|
RM-
|
76,052,646
| 1,096,660
|
How to implement logging in reusable packages?
|
<p>I'm increasingly offloading frequently used tasks into small libraries of mine. However, I haven't figured out what the best practice for logging is. There are lots of resources explaining something like:</p>
<pre><code>import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
logging.debug('This is a log message.')
</code></pre>
<p>But how do I deal with that in packages intended for reuse? Should each package just import <code>logging</code> and emit messages, with the configuration done once in the main program that imports them? Obviously I want to set the logging style and level in my main app, not in every single package.</p>
<p>So how to make packages "ready for logging"?</p>
|
<python><python-logging>
|
2023-04-19 09:03:46
| 1
| 2,629
|
JasonTS
|
76,052,527
| 13,438,859
|
Can't retrieve environment variable in Python
|
<h3>Problem</h3>
<p>After I do</p>
<pre><code>export key=value
</code></pre>
<p>or</p>
<pre><code>echo "export key=value" >> ~/.zshrc && source ~/.zshrc
</code></pre>
<p>I can retrieve it in terminal using <code>echo $key</code></p>
<p>But when I try to retrieve <code>key</code> in Python using <code>os.environ['key']</code>,</p>
<pre><code>>>> import os
>>> os.environ["key"]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/os.py", line 673, in __getitem__
raise KeyError(key) from None
KeyError: 'key'
</code></pre>
<p>There's always an error.</p>
<h3>My Questions</h3>
<ol>
<li>Where does <code>export key=value</code> store the variable? It doesn't store it in <code>~/.zshrc</code>, because I can't see it when I look inside <code>.zshrc</code>.</li>
<li>Where does Python look when it processes <code>os.environ['key']</code>?</li>
</ol>
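<p>A short sketch of the mechanics (the child-process spawn is illustrative): <code>export</code> writes into the <em>shell process's</em> environment, not into any file; <code>.zshrc</code> only contains it if you appended the line yourself. <code>os.environ</code> is a snapshot of the environment the Python process <em>inherited</em> at startup, so it only contains <code>key</code> if Python was launched from a shell (or a descendant) where the export had already happened:</p>

```python
import os
import subprocess
import sys

# os.environ is the inherited environment; setting it here affects this
# process and any children it spawns, nothing else (and no file on disk).
os.environ["key"] = "value"

out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['key'])"],
    capture_output=True, text=True,
)
print(out.stdout.strip())                    # the child inherited it: value

# safe lookup for variables that may be absent
print(os.environ.get("missing", "default"))  # default
```

<p>This also explains the symptom: exporting in one terminal and starting Python from an IDE or another terminal gives the interpreter a different inherited environment, so the <code>KeyError</code> is expected.</p>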
|
<python><environment-variables>
|
2023-04-19 08:52:27
| 0
| 325
|
Ian Hsiao
|