QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
78,499,954
| 13,848,874
|
Why did they opt for the message "most recent call last"?
|
<p>If <em>"most recent call last"</em> means the most recent call is at the end of the stack, so what? What does this imply? How is this piece of information useful for me?</p>
|
<python>
|
2024-05-18 13:58:49
| 1
| 473
|
Malihe Mahdavi sefat
|
78,499,926
| 17,867,413
|
How to get outer key in a protobuf
|
<p>I am reading data from 2 proto files:</p>
<p>file.proto: this is a wrapper</p>
<p>file2.proto: this has all the columns</p>
<p>file.proto:</p>
<pre><code>syntax = "proto3";
package com.oracle;
import "file2.proto";
option go_package = "github.com/cle/sdk/go_sdk";
// This is the inbound message intended to inform the Oracle of new answers to be persisted
message AnswerUpdateRequest {
Entity entity = 1;
repeated Answer answers = 2;
}
// This is the outbound message informing Oracle subscribers of new answers
message AnswersUpdated {
Entity entity = 1;
repeated Answer answers = 2;
}
</code></pre>
<p>file2.proto:</p>
<pre><code>syntax = "proto3";
package com.oracle;
import "google/protobuf/timestamp.proto";
option go_package = "github.com/embroker/oracle/sdk/go_sdk";
message Entity {
Type type = 1;
string id = 2;
enum Type {
ORGANIZATION = 0;
USER = 1;
APPLICATION = 2;
}
}
message AnswerSource {
Type type = 1;
string id = 2;
enum Type {
UNKNOWN = 0;
USER = 1;
DOCUMENT = 2;
EXTERNAL = 3;
}
}
message Answer {
string key = 1;
AnswerSource source = 2;
google.protobuf.Timestamp provided_at = 3;
google.protobuf.Timestamp received_at = 4;
AnswerFieldType type = 5;
Value value = 6;
message Value {
oneof value {
string text = 1;
float decimal = 2;
// ...
}
}
}
enum AnswerFieldType {
ANSWER_FIELD_TYPE_UNSTRUCTURED = 0; // Can be useful for LLM purposes
ANSWER_FIELD_TYPE_TEXT = 1;
ANSWER_FIELD_TYPE_INTEGER = 2;
ANSWER_FIELD_TYPE_BOOLEAN = 3;
ANSWER_FIELD_TYPE_DECIMAL = 4;
ANSWER_FIELD_TYPE_DATE = 5;
ANSWER_FIELD_TYPE_ADDRESS = 6;
}
</code></pre>
<p>My python function to map to proto</p>
<pre><code>import file.proto
import file2.proto
def create_answer_update_request(json_data):
data = json_data
answer_update_request = events_pb2.AnswerUpdateRequest()
entity = answer_update_request.entity
entity.type = model_pb2.Entity.Type.Value(data["answerUpdateRequest"]["entity"]["type"])
entity.id = data["answerUpdateRequest"]["entity"]["id"]
for answer_data in data["answerUpdateRequest"]["answers"]:
answer = Answer()
answer.key = answer_data['key']
source = AnswerSource()
source.type = AnswerSource.Type.Value(answer_data['source']['type'])
source.id = answer_data['source']['id']
answer.source.CopyFrom(source)
provided_at_datetime = datetime.fromisoformat(answer_data['provided_at'])
answer.provided_at.FromDatetime(provided_at_datetime)
received_at_datetime = datetime.fromisoformat(answer_data['received_at'])
answer.received_at.FromDatetime(received_at_datetime)
answer.type = AnswerFieldType.Value(f"ANSWER_FIELD_TYPE_{answer_data['type']}")
value = Answer.Value()
value.text = answer_data['value']['text']
answer.value.CopyFrom(value)
answer_update_request.answers.append(answer)
return answer_update_request.SerializeToString()
</code></pre>
<p>While deserializing data I am not getting wrapper:</p>
<p>Expected output:</p>
<pre><code>{
"answerUpdateRequest": {
"entity": {
"type": "ORGANIZATION",
"id": "UU12334ID"
},
"answers": [
{
"key": "legal_company_name",
"source": {
"type": "DOCUMENT",
"id": "3ea20f68e73ec | DocumentType.application"
},
"provided_at": "2024-05-02T15:54:15.941988",
"received_at": "2024-05-02T15:54:15.945350",
"type": "TEXT",
"value": {
"text": "Cicne Law, LLC"
}
},
{
"key": "company_website_ind",
"source": {
"type": "DOCUMENT",
"id": "3ea20440-83fb-43c0-b409-1dd8f68e73ec | DocumentType.application"
},
"provided_at": "2024-05-02T15:54:15.941988",
"received_at": "2024-05-02T15:54:15.945365",
"type": "BOOLEAN",
"value": {
"text": "Yes"
}
]
}
}
</code></pre>
<p>Error:
I am not getting "answerUpdateRequest" " in the final output, rest everthing is working for me as expected how to get this?</p>
|
<python><protocol-buffers>
|
2024-05-18 13:46:26
| 1
| 1,253
|
Xi12
|
78,499,881
| 72,911
|
Testing python code with different byteorder / endianness
|
<p>I'm writing some python code that is sensitive to the byteorder / endianness. I want to ensure my unit tests are run in both byteorders.</p>
<p>I'm currently running my tests on x86_64 Linux, which has little byteorder. What is the easiest way to run my python unit tests with big byteorder?</p>
|
<python><linux><unit-testing><endianness>
|
2024-05-18 13:28:43
| 0
| 9,642
|
Gary van der Merwe
|
78,499,781
| 1,121,892
|
Why is pyright issuing a type incompatibility error here?
|
<pre><code>from pandas import DataFrame, Series
# This function prunes the dataframe rows to those meeting specified criteria
def prune_to_wanted_rows(st_df: DataFrame, recency_date: str) -> DataFrame:
st_df = st_df[ # pyright error message (see below) here
(st_df["assetType"] == "ore")
& (st_df["priceCurrency"] == "USD")
& (st_df["endDate"] >= recency_date)
# other boolean expressions omitted for brevity
]
return st_df
</code></pre>
<p>The error message, all on the one line, with hyphens here replacing some spaces, is:</p>
<blockquote>
<p>Expression of type "Series | Unknown | DataFrame" is incompatible with declared type "DataFrame"------Type "Series | Unknown | DataFrame" is incompatible with type "DataFrame"------Type "Series" is incompatible with "DataFrame"</p>
</blockquote>
<p><em>I think</em> this is only a problem because the error message is ugly. I can suppress it with <code># pyright: ignore</code> and the code works as intended. I don't (yet) see a type incompatibility, but is there a sense in which pyright is "right"?</p>
|
<python><pandas><dataframe><pyright>
|
2024-05-18 12:52:57
| 2
| 319
|
brec
|
78,499,364
| 893,254
|
What is the default `datetime.now()` timezone?
|
<pre><code>from datetime import datetime
from datetime import timezone
print(f'datetime.now(): {datetime.now()}')
print(f'current timezone: {datetime.now().tzinfo}') # prints `None`
</code></pre>
<p>As seen in the above code snippet, the default timezone for a <code>datetime</code> object created with <code>datetime.now()</code> is <code>None</code>.</p>
<p>My question is how to interpret this?</p>
<p>No explicit timezone is set, however the value produced by <code>datetime.now()</code> matches the value of the system clock of the server on which this code was run.</p>
<p>This implies the timezone is actually the same as the system timezone, in this case, at the time of writing, <code>BST</code>.</p>
<p>Why does <code>datetime.now()</code> not set a timezone? I would like to know more detail about the internal working of <code>datetime</code> which might explain the behaviour. It seems surprising.</p>
|
<python><datetime><timezone>
|
2024-05-18 10:10:02
| 0
| 18,579
|
user2138149
|
78,499,295
| 5,837,992
|
Using SKLearn KMeans With Externally Generated Correlation Matrix
|
<p>I receive a correlation file from an external source. It is a fairly straightforward file and looks like the following.</p>
<p><a href="https://i.sstatic.net/pYHLvrfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pYHLvrfg.png" alt="enter image description here" /></a></p>
<p>A sample csv can be found here</p>
<p><a href="https://www.dropbox.com/scl/fi/1ytmnk23zb70twns2owsi/corrmatrix.csv?rlkey=ev6ya520bc0n94yfqswasi3o6&st=p4vntit1&dl=0" rel="nofollow noreferrer">https://www.dropbox.com/scl/fi/1ytmnk23zb70twns2owsi/corrmatrix.csv?rlkey=ev6ya520bc0n94yfqswasi3o6&st=p4vntit1&dl=0</a></p>
<p>I want to use this file to do some kmeans clustering and I am using the code that follows:</p>
<pre><code>import pandas as pd
correlation_mat=pd.read_csv("C:/temp/corrmatrix.csv",index_col=False)
from sklearn.cluster import KMeans
# Utility function to print the name of companies with their assigned cluster
def print_clusters(df_combined,cluster_labels):
cluster_dict = {}
for i, label in enumerate(cluster_labels):
if label not in cluster_dict:
cluster_dict[label] = []
cluster_dict[label].append(df_combined.columns[i])
# Print out the companies in each cluster
for cluster, companies in cluster_dict.items():
print(f"Cluster {cluster}: {', '.join(companies)}")
# Perform k-means clustering with four clusters
clustering = KMeans(n_clusters=4, random_state=0).fit(correlation_mat)
# Print the cluster labels
cluster_labels=clustering.labels_
print_clusters(correlation_mat,cluster_labels)
</code></pre>
<p>Even though this file looks like a correlation file as generated by Pandas, I cannot get it to work.</p>
<p>I keep getting the following error</p>
<pre><code>ValueError: could not convert string to float: 'ABBV'
</code></pre>
<p>How can I get this file to work with SKLearn? I merely receive the data from a third party, so regenerating the correlations myself is not an option</p>
<p>Is there a way to have SKLearn see this as it would see a Pandas generated correlation file?</p>
<p>Would very much appreciate any help that can be provided</p>
|
<python><pandas><import><k-means><sklearn-pandas>
|
2024-05-18 09:40:55
| 2
| 1,980
|
Stumbling Through Data Science
|
78,499,234
| 1,115,833
|
huggingface optimum circular dependency issue
|
<p>I have a fresh virtual env where I am trying to exec an onnx model like so:</p>
<pre><code># Load Locally Saved ONNX Model and use for inference
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCustomTasks
sentence = "This is a test sentence."
local_onnx_model = ORTModelForCustomTasks.from_pretrained("./model_onnx")
tokenizer = AutoTokenizer.from_pretrained("./model_onnx")
inputs = tokenizer(
sentence,
padding="longest",
return_tensors="np",
)
outputs = local_onnx_model.forward(**inputs)
print(outputs)
</code></pre>
<p>but it ouputs this annoying error:</p>
<pre><code>The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
2024-05-18 03:36:29.312177: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-05-18 03:36:29.888001: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Traceback (most recent call last):
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1510, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/exporters/onnx/__main__.py", line 26, in <module>
from ...commands.export.onnx import parse_args_onnx
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/commands/__init__.py", line 17, in <module>
from .export import ExportCommand, ONNXExportCommand, TFLiteExportCommand
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/commands/export/__init__.py", line 16, in <module>
from .base import ExportCommand
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/commands/export/base.py", line 18, in <module>
from .onnx import ONNXExportCommand
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/commands/export/onnx.py", line 23, in <module>
from ...exporters import TasksManager
ImportError: cannot import name 'TasksManager' from partially initialized module 'optimum.exporters' (most likely due to a circular import) (/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/exporters/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1510, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/onnx/graph_transformations.py", line 19, in <module>
import onnx
File "/home/foo/new_virtual_env/rezatec_cpy/src/onnx.py", line 1, in <module>
from optimum.exporters.onnx import main_export
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1500, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1512, in _get_module
raise RuntimeError(
RuntimeError: Failed to import optimum.exporters.onnx.__main__ because of the following error (look up to see its traceback):
cannot import name 'TasksManager' from partially initialized module 'optimum.exporters' (most likely due to a circular import) (/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/exporters/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1510, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/onnxruntime/modeling_ort.py", line 61, in <module>
from ..exporters import TasksManager
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/exporters/__init__.py", line 16, in <module>
from .tasks import TasksManager # noqa
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/exporters/tasks.py", line 139, in <module>
class TasksManager:
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/exporters/tasks.py", line 297, in TasksManager
"clip-text-model": supported_tasks_mapping(
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/exporters/tasks.py", line 111, in supported_tasks_mapping
importlib.import_module(f"optimum.exporters.{backend}.model_configs"), config_cls_name
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/exporters/onnx/model_configs.py", line 23, in <module>
from ...onnx import merge_decoders
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1500, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1512, in _get_module
raise RuntimeError(
RuntimeError: Failed to import optimum.onnx.graph_transformations because of the following error (look up to see its traceback):
Failed to import optimum.exporters.onnx.__main__ because of the following error (look up to see its traceback):
cannot import name 'TasksManager' from partially initialized module 'optimum.exporters' (most likely due to a circular import) (/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/exporters/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/foo/new_virtual_env/rezatec_cpy/src/onnx2.py", line 1, in <module>
from optimum.onnxruntime import ORTModelForSequenceClassification
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1500, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1512, in _get_module
raise RuntimeError(
RuntimeError: Failed to import optimum.onnxruntime.modeling_ort because of the following error (look up to see its traceback):
Failed to import optimum.onnx.graph_transformations because of the following error (look up to see its traceback):
Failed to import optimum.exporters.onnx.__main__ because of the following error (look up to see its traceback):
cannot import name 'TasksManager' from partially initialized module 'optimum.exporters' (most likely due to a circular import) (/home/foo/miniconda3/envs/new_virtual_env/lib/python3.10/site-packages/optimum/exporters/__init__.py)
</code></pre>
<p>Its a new env and I have run:</p>
<p><code>pip install --upgrade --upgrade-strategy --no-cache-dir eager optimum[exporters,onnxruntime]</code></p>
<p>as per the docs to install the pkg.</p>
<p>Totally lost!</p>
|
<python><huggingface><onnx><onnxruntime>
|
2024-05-18 09:22:25
| 1
| 7,096
|
JohnJ
|
78,499,193
| 1,469,465
|
How to reverse a URL in format namespace:view:endpoint?
|
<p>I have a Django project with the following urlpattern defined on project level:</p>
<pre><code>urlpatterns = [
path('', include(('ws_shopify.urls', 'ws_shopify'), namespace='shopify')),
]
</code></pre>
<p>The file <code>ws_shopify.urls</code> defines the following urlpatterns:</p>
<pre><code>urlpatterns = [
path('api/shopify/', ShopifyWebhookAPIView.as_view(), name='webhook'),
]
</code></pre>
<p>The class <code>ShopifyWebhookAPIView</code> defines the following endpoint:</p>
<pre><code>from rest_framework.views import APIView
class ShopifyWebhookAPIView(APIView):
@action(detail=False, methods=['post'], url_path='(?P<webshop_name>.+)/order_created', name='order-created')
def order_created(self, request: Request, webshop_name=None):
return Response({'message': f'Order created for {webshop_name}'})
</code></pre>
<p>My question is:</p>
<p><strong>What is the correct name of this endpoint to be used in a test case using <code>reverse()</code>?</strong></p>
<p>I am trying this now:</p>
<pre><code>class WebhookTestCase(TestCase):
def test_that_url_resolves(self):
url = reverse('shopify:webhook:order-created', kwargs={'webshop_name': 'my-shop'})
self.assertEqual(url, '/api/shopify/my-shop/order_created')
</code></pre>
<p>But this fails with the following error message:</p>
<pre><code>django.urls.exceptions.NoReverseMatch: 'webhook' is not a registered namespace inside 'shopify'
</code></pre>
<p>Not sure if it's relevant or not, this is how <code>ws_shopify.apps</code> looks like:</p>
<pre><code>from django.apps import AppConfig
class ShopifyConfig(AppConfig):
name = 'ws_shopify'
</code></pre>
|
<python><django><django-rest-framework><url-pattern>
|
2024-05-18 09:10:41
| 1
| 6,938
|
physicalattraction
|
78,499,036
| 7,695,845
|
How to draw a large number of circles/spheres in matplotlib efficiently?
|
<p>I want to make a simulation of a large number of particles colliding with each other in matplotlib (on the order of ~10,000 particles). I want to make both 1D, 2D, and 3D simulations and I am struggling with how to draw a large number of particles with a given radius efficiently. In 1D and 2D, I figured out that my best bet is to use a <code>CircleCollection</code> or a <code>PatchCollection</code> to draw the particles efficiently. However, these <code>Collection</code> classes only support 2D shapes as much as I understand, so I don't know what to do in 3D. I think the only reliable way to draw many points in 3D is by using <code>scatter</code>, but this introduces a new problem: <code>scatter</code> accepts its marker size in <code>points**2</code> and not data coordinates. Since my particles are going to collide, it's important that their visual radius will match their actual collision radius. I couldn't find out how to convert a radius in data coordinates to a marker size in <code>points**2</code>, and I couldn't find any other tools to draw a large number of 3D balls in matplotlib.</p>
<p>Can somebody show me an example of how to draw a large number of 3D particles with a given radius in data coordinates? Or alternatively, explain how I can convert such a radius to a marker size for <code>scatter</code>.</p>
<p>For example, I have the following 3D scatter, and I want the marker size to be <code>points_size = 0.3</code> in data coordinates. In other words, the balls should be <code>0.3</code> of the side of the box containing them:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
rng = np.random.default_rng()
points = rng.random((5, 3))
points_size = 0.3
fig, ax = plt.subplots(subplot_kw={"projection": "3d"})
ax.scatter(
points[:, 0],
points[:, 1],
points[:, 2],
s=points_size, # Wrong since I want `points_size` radius in data coordinates
)
fig.tight_layout()
plt.show()
</code></pre>
<p>I can't figure out how to convert <code>points_size</code> to a marker size for <code>scatter</code>. Alternatively, if someone knows another way to draw many 3D balls that allows me to control their radius in data coordinates, it would also be good.</p>
|
<python><matplotlib>
|
2024-05-18 07:59:15
| 0
| 1,420
|
Shai Avr
|
78,499,028
| 5,378,816
|
What does the tempfile.mkstemp(text=...) parameter actually do?
|
<p>Is the <code>text=True|False</code> parameter in <code>mkstemp</code> something Windows specific? I'm sorry that I have to ask, but I'm a UNIX/Linux person.</p>
<p>At the low level of file descriptors - where the <code>mkstemp</code> operates - are all files just bytes. I was surprised to see the <code>text=</code> parameter. The only hint I found is a comment in <code>os.open</code> docs:</p>
<blockquote>
<p>In particular, on Windows adding O_BINARY is needed to open files in binary mode.</p>
</blockquote>
<hr />
<p>For completness the <code>tempfile.mkstemp</code> docs:</p>
<blockquote>
<p>If text is specified and true, the file is opened in text mode.
Otherwise, (the default) the file is opened in binary mode.</p>
</blockquote>
<blockquote>
<p>mkstemp() returns a tuple containing an OS-level handle to an open
file (as would be returned by os.open()) and the absolute pathname of
that file, in that order.</p>
</blockquote>
<p>And an example. It indeed returns a file descriptor and a filename:</p>
<pre><code>>>> import tempfile
>>> tempfile.mkstemp(text=False)
(3, '/tmp/tmp9z8rp2_2')
>>> tempfile.mkstemp(text=True)
(4, '/tmp/tmpc6z9j2yu')
</code></pre>
|
<python>
|
2024-05-18 07:55:17
| 2
| 17,998
|
VPfB
|
78,499,014
| 6,696,746
|
How to collapse or better organize long Jupyter notebook cells in Pycharm IDE?
|
<p>I'm working with long Jupyter notebooks (.ipynb files) that contain numerous cells and outputs, and it's becoming cumbersome to navigate through them in PyCharm. The IDE doesn't seem to natively support cell collapsing, which makes managing the notebook quite difficult.</p>
<p>Are there any plugins, settings, or workarounds to better organize or collapse cells in PyCharm? How can I make my notebook file more manageable within this IDE?</p>
|
<python><jupyter-notebook><pycharm><jupyter>
|
2024-05-18 07:50:11
| 1
| 474
|
icaine
|
78,498,783
| 14,472,762
|
Can we use post_save signals in apps.py?
|
<p>There.
I'm using django python.
I've custom user model. I've created a separate django signals file. there is my method.
I'm importing them in my apps.py and wanting to use there.
cause, post_migrate signals are working that way. but post_save is different cause that takes model. whereas postmigrate are working with app name.
You better understand with following code.</p>
<p><strong>signals.py</strong></p>
<pre><code>import secrets
from django.core.mail import send_mail
def create_first_account(sender, **kwargs):
if sender.name == 'accounts':
user = User.objects.get_user_by_username("meidan")
if user:
return None
password = secrets.token_hex(4)
user = User.objects.create(
username="meidan",
email="meidanpk@gmail.com",
password=password
)
user.is_active= True
User.objects.save_me(user)
subject = 'Meidan Account'
message = password
from_email = settings.EMAIL_HOST_USER
recipient_list = ['ahmedyasin1947@gmail.com']
send_mail(subject, message, from_email, recipient_list, fail_silently=False)
def create_first_account(sender,created, **kwargs):
if sender.name == 'accounts':
print(created)
</code></pre>
<p><strong>apps.py</strong></p>
<pre><code>from django.apps import AppConfig
from django.db.models.signals import post_migrate,post_save
from django.contrib.auth import get_user_model
User = get_user_model()
class AccountsConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'accounts'
def ready(self):
from accounts.signals import generate_banks,create_first_account,create_first_account
post_migrate.connect(generate_banks, sender=self)
post_migrate.connect(create_first_account, sender=self)
post_save.connect(create_first_account,sender=User)
</code></pre>
<p>right now getting error.</p>
<pre><code> File "/home/aa/playapp-backend/envMy/lib/python3.10/site-packages/django/apps/registry.py", line 201, in get_model
self.check_apps_ready()
File "/home/aa/playapp-backend/envMy/lib/python3.10/site-packages/django/apps/registry.py", line 136, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
</code></pre>
|
<python><python-3.x><django><django-models><django-rest-framework>
|
2024-05-18 06:02:43
| 1
| 958
|
Ahmed Yasin
|
78,498,781
| 3,623,537
|
check if assessing an address will cause a segfault without crashing python
|
<p>What I've tried:</p>
<ol>
<li><code>faulthandler</code> is really useful to get a traceback where segfault occurred but it doesn't allow handling it properly.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import faulthandler
faulthandler.enable()
import ctypes
try:
# Windows fatal exception: access violation
# Current thread 0x00001334 (most recent call first):
# File "c:\Users\Andrej\Desktop\pomoika\test.py", line 6 in <module>
ctypes.c_byte.from_address(0).value
print('never printed')
except:
print('never printed')
</code></pre>
<ol start="2">
<li>setting up a handler using <code>singal.signal</code> - it worked, as mentioned in the <a href="https://docs.python.org/3/library/signal.html" rel="nofollow noreferrer">docs</a> and in <a href="https://stackoverflow.com/questions/57651775/python-process-not-responding-with-custom-signal-handler-dealing-with-segmentati">this question</a>, handler's python code is never executed as handling segfault also causes a segfault recursively.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import ctypes
import signal
class AccessError(Exception):
pass
def handler(signum, frame):
raise AccessError("Memory access violation")
def is_safe(address: int) -> bool:
try:
# Set signal handler for segmentation faults
signal.signal(signal.SIGSEGV, handler)
v = ctypes.c_uint.from_address(address)
a = v.value
return True
except AccessError:
return False
finally:
# Reset signal handler to default behavior
signal.signal(signal.SIGSEGV, signal.SIG_DFL)
# Test the function
print('before')
print(is_safe(0)) # Should print False or True depending on the address
print('after')
</code></pre>
<ol start="3">
<li>Checking it in a separate thread. It kind of works but on Windows it returns False, True, False, so I guess it's not cross-platform.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import ctypes
import multiprocessing
def check_address(queue, address):
try:
ctypes.c_byte.from_address(address).value
queue.put(True)
except Exception:
queue.put(False)
def is_safe(address: int, timeout: float = 1.0) -> bool:
queue = multiprocessing.Queue()
process = multiprocessing.Process(target=check_address, args=(queue, address))
process.start()
process.join(timeout)
if process.exitcode is None:
process.terminate()
raise Exception(f"Process is stuck (it took longer than {timeout}).")
elif process.exitcode == 0:
process.terminate()
process.join()
v = queue.get()
return v
return False
if __name__ == "__main__":
print(0, is_safe(0)) # False
print("id(object)", is_safe(id(object))) # True
a = object()
print("a", is_safe(id(a))) # True
</code></pre>
|
<python><segmentation-fault><ctypes><python-c-api>
|
2024-05-18 06:01:54
| 1
| 469
|
FamousSnake
|
78,498,727
| 1,232,087
|
Can we use python f-string placeholder with index number?
|
<p>How can we achieve the following in Python 3.6 using <a href="https://www.w3schools.com/python/python_string_formatting.asp" rel="nofollow noreferrer">f-string</a> (<strong>instead</strong> of using <code>format()</code>) method?</p>
<pre><code>quantity = 3
itemno = 567
price = 49
myorder = "I want {0} pieces of item number {1} for {2:.2f} dollars."
print(myorder.format(quantity, itemno, price))
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>I want 3 pieces of item number 567 for 49.00 dollars.
</code></pre>
|
<python><python-3.x><python-3.6>
|
2024-05-18 05:23:12
| 2
| 24,239
|
nam
|
78,498,541
| 6,676,101
|
How do I replace all numbers with something other than a number, using Python and regular expressions?
|
<p>Suppose that we want to replace all integers (<code>0</code>, <code>1</code>, <code>2</code>, <code>3</code>, ...) with a new-line character <code>\n</code>.</p>
<p>How would we do that?</p>
<p>Input: <code>"1 apple 2 orange 3 kiwi"</code>
Output: <code>"\n apple \n orange \n kiwi"</code></p>
|
<python><regex>
|
2024-05-18 03:06:03
| 1
| 4,700
|
Toothpick Anemone
|
78,498,510
| 16,687,283
|
Python generic protocol with method taking instance of the protocol (e.g. Functor)
|
<p>With the current typing implementation, how far can we make this work as expected?
Is there any innate problem in this approach?</p>
<p>The problematic point is the method of generic protocol, taking an instance of self, where the type variable could be different from self.</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from typing import Any, Callable, Generic, Protocol, TypeVar
from dataclasses import dataclass
A = TypeVar("A")
B = TypeVar("B")
class Functor(Protocol[A]):
@staticmethod
def fmap(f: Callable[[A], B], x: Functor[A]) -> Functor[B]: ...
class Applicative(Functor[A], Protocol[A]):
@staticmethod
def pure(x: Any) -> Applicative[A]: ...
@staticmethod
def ap(f: Applicative[Callable[[A], B]], x: Applicative[A]) -> Applicative[B]: ...
class Monad(Applicative[A], Protocol[A]):
@staticmethod
def bind(x: Monad[A], f: Callable[[A], Monad[B]]) -> Monad[B]: ...
@dataclass
class Io(Generic[A]):
action: Callable[[], A]
@staticmethod
def fmap(f: Callable[[A], B], x: Io[A]) -> Io[B]:
return Io(lambda: f(x.action()))
@staticmethod
def pure(x: A) -> Io[A]:
return Io(lambda: x)
@staticmethod
def ap(f: Io[Callable[[A], B]], x: Io[A]) -> Io[B]:
return Io(lambda: f.action()(x.action()))
@staticmethod
def bind(x: Io[A], f: Callable[[A], Io[B]]) -> Io[B]:
return Io(lambda: f(x.action()).action())
def taking_monad(x: Monad[int]) -> bool:
return True
taking_monad(Io(lambda: 1))
</code></pre>
<p>This results error like this .</p>
<pre><code>Argument 1 to "taking_monad" has incompatible type "Io[int]"; expected "Monad[int]"Mypy[arg-type](https://mypy.readthedocs.io/en/latest/_refs.html#code-arg-type)
Following member(s) of "Io[int]" have conflicts:
Expected:
def [B] ap(f: Applicative[Callable[[int], B]], x: Applicative[int]) -> Applicative[B]
Got:
def [B] ap(f: Io[Callable[[int], B]], x: Io[int]) -> Io[B]
Expected:
def [B] bind(x: Monad[int], f: Callable[[int], Monad[B]]) -> Monad[B]
Got:
def [B] bind(x: Io[int], f: Callable[[int], Io[B]]) -> Io[B]
<2 more conflict(s) not shown>Mypy
Argument 1 to "taking_monad" has incompatible type "Io[int]"; expected "Monad[int]"Mypy[arg-type](https://mypy.readthedocs.io/en/latest/_refs.html#code-arg-type)
Following member(s) of "Io[int]" have conflicts:
</code></pre>
<p>The same question is also on the discussion of <a href="https://github.com/python/typing/discussions/1739" rel="nofollow noreferrer">python/typing</a></p>
|
<python><mypy><python-typing>
|
2024-05-18 02:38:39
| 0
| 553
|
lighthouse
|
78,498,481
| 23,805,311
|
UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR
|
<p>I'm trying to train a model with Yolov8. Everything was good but today I suddenly notice getting this warning apparently related to <code>PyTorch</code> and <code>cuDNN</code>. In spite the warning, the training seems to be progressing though. I'm not sure if it has any negative effects on the training progress.</p>
<pre><code>site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
</code></pre>
<p><strong>What is the problem and how to address this?</strong></p>
<p>Here is the output of <code>collect_env</code>:</p>
<pre><code>Collecting environment information...
PyTorch version: 2.3.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.3
Libc version: glibc-2.31
Python version: 3.9.7 | packaged by conda-forge | (default, Sep 2 2021, 17:58:34) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 515.105.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.0
[pip3] onnxruntime==1.17.3
[pip3] onnxruntime-gpu==1.17.1
[pip3] onnxsim==0.4.36
[pip3] optree==0.11.0
[pip3] torch==2.3.0+cu118
[pip3] torchaudio==2.3.0+cu118
[pip3] torchvision==0.18.0+cu118
[pip3] triton==2.3.0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] pytorch-quantization 2.2.1 pypi_0 pypi
[conda] torch 2.1.1+cu118 pypi_0 pypi
[conda] torchaudio 2.1.1+cu118 pypi_0 pypi
[conda] torchmetrics 0.8.0 pypi_0 pypi
[conda] torchvision 0.16.1+cu118 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
</code></pre>
|
<python><pytorch><nvidia><torchvision><cudnn>
|
2024-05-18 02:13:18
| 2
| 409
|
Mary H
|
78,498,394
| 1,613,983
|
How do I receive a non primitive object as query parameter?
|
<p>I'm trying to receive a nontrivial query parameter object. At the moment I'm doing this via a json-encoded string since objects seem to be assumed to arrive in the body in <code>fastapi</code>, so I've had to do something like this on the server:</p>
<pre><code>class MyParam(pydantic.BaseClass):
param1: list[str]
param2: typing.Dict[str, SomeOtherPydanticType]
@router.get("/my/route")
def get_vhist_snap_pnl(
param: pydantic.types.Json = fastapi.Query(
...,
description="json-encoded dictionary",
)
) -> str:
obj = MyParam(param)
...
return 'hello world'
</code></pre>
<p>I can use <code>param</code> to construct an instance of <code>MyParam</code> as above, but it would be nice for <code>fastapi</code> to do this for me directly in the function declaration because I'd get better swagger docs and it would be less code to write. How can I do this?</p>
|
<python><fastapi>
|
2024-05-18 01:03:39
| 1
| 23,470
|
quant
|
78,498,374
| 2,345,484
|
How to add stacked bar plot in a subplot in Plotly?
|
<p>I have a data frame <code>df</code>, it has over 100 columns, I want to plot a stacked bar plot for these 100 columns, <code>plotly.express</code> is very nice, I can just do this</p>
<pre><code>import plotly.express as px
# df.columns = ['date', 'val1', 'val2', ..., 'val100', 'cost', 'numSales']
cols_to_be_stacked = df.columns[1:-2]
px.bar(df, x='time', y=cols_to_be_stacked)
</code></pre>
<p>But I want to have a subplots with (numRows=2, numCols=1), where the two rows share the <code>x</code> axis,</p>
<pre><code>fig = make_subplots(rows=2, cols=1)
## Q1: how should I do my_stacked_bar_plot for 100 columns?
fig.add_trace(my_stacked_bar_plot, row=1, col=1)
# Q2: I want to add time-cost plot to the y-axis on the right side, how
# should I write the line below?
# In matplotlib, i can just use ax.twinx().
fig.add_trace(go.Scatter(x=df['date'].values, y=df['cost'].values), row=1, col=1)
## now plot the time-numSales on 2nd row.
fig.add_trace(go.Scatter(x=df['date'].values, y=df['numSales'].values), row=2, col=1)
</code></pre>
<p>Can someone help me with <code>Q1</code> and <code>Q2</code> in the comments above? Or is there another way to achieve this other than using <code>add_trace</code>?
Thanks!</p>
<p><strong>Edit(shorter description as below)</strong></p>
<p>As a simple illustration, if I have a <code>df</code> like this (here I have only 6 of <code>val</code> columns instead of 100 <code>val</code> columns</p>
<pre><code>df = pd.DataFrame([[20240502, -2, 3, -3, 7, -9, 6, 6, 8],
[20240503, 4, -6, -5, 7, -3, -2, 12, 9]],
columns=["date", 'val1', 'val2', 'val3', 'val4',
'val5', 'val6', 'cost', 'numSales'])
date val1 val2 val3 val4 val5 val6 cost numSales
0 20240502 2 3 -3 -7 9 6 6 8
1 20240503 4 -6 5 7 -3 8 12 9
</code></pre>
<p>I want to a plot like below, (x-axis is shared for the top and bottom subplot)</p>
<p><a href="https://i.sstatic.net/OlIxtqX1.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OlIxtqX1.jpg" alt="I want the plot to be like below" /></a></p>
|
<python><plotly>
|
2024-05-18 00:49:22
| 1
| 3,280
|
Allanqunzi
|
78,498,284
| 12,956,240
|
Bokeh Bar Chart - how to remove separator lines (x axis with nested categories)
|
<p>I've successfully created a Bokeh bar chart with nested categories by following the examples at the official bokeh site (<a href="https://stackoverflow.com">https://docs.bokeh.org/en/latest/docs/user_guide/basic/bars.html</a>)</p>
<p>However, despite a thorough review of all the styling attributes and how they work, I simply cannot find the attribute name or location for a specific part of the visual-- the light grey lines appearing between the nested categories on the x-axis. I'm familiar with how to modify the visuals in my python code that generates the bar chart: the issue is that I can't figure out what attribute, if any, pertains to these specific lines!</p>
<p>Please see the included image: I have circled in red one of the (multiple) separation lines that I do not want to display. Why? Because in my bar chart, the labels (e.g. '2015' in the example image) are much longer words, which results in them overlapping with the vertical separators, resulting in a messy and cluttered display.</p>
<p><a href="https://i.sstatic.net/M62GDTlp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M62GDTlp.png" alt="unwanted vertical separators" /></a></p>
|
<python><bokeh>
|
2024-05-17 23:41:11
| 1
| 310
|
m.arthur
|
78,498,023
| 2,355,903
|
Removing rsid information from word XML in VBA
|
<p>I am trying to translate a bunch of .doc documents to .docx. I am using the script below, which works fine on its own.</p>
<pre><code>Sub TranslateDocIntoDocx()
Dim objWordApplication As New Word.Application
Dim objWordDocument As Word.Document
Dim strFile As String
Dim strFolder As String
strFolder = ""
strFile = Dir(strFolder & "*.doc", vbNormal)
destFolder = ""
While strFile <> ""
With objWordApplication
Set objWordDocument = .Documents.Open(FileName:=strFolder & strFile, AddToRecentFiles:=False, ReadOnly:=True, Visible:=False)
With objWordDocument
.SaveAs FileName:=destFolder & Replace(strFile, "doc", "docx"), FileFormat:=16
.Close
End With
End With
strFile = Dir()
Wend
Set objWordDocument = Nothing
Set objWordApplication = Nothing
End Sub
</code></pre>
<p>Except when I do this and then parse the xml, I have a bunch of extraneous rsid tags breaking up words, making the parsing a little more complex. I'm trying to understand how to remove those, but so far haven't had any success. Below is a modified version of the above code I tried, but didn't remove the tags.</p>
<pre><code>Sub TranslateDocIntoDocx()
Dim objWordApplication As New Word.Application
Dim objWordDocument As Word.Document
Dim strFile As String
Dim strFolder As String
strFolder = ""
strFile = Dir(strFolder & "*.doc", vbNormal)
destFolder = ""
While strFile <> ""
With objWordApplication
Set objWordDocument = .Documents.Open(FileName:=strFolder & strFile, AddToRecentFiles:=False, ReadOnly:=True, Visible:=False)
With objWordDocument
'Remove revisions
.TrackRevisions = False
'.AcceptAllRevisionsShown
.RemoveDocumentInformation wdRDIDocumentProperties
.RemoveDocumentInformation wdRDIRevisions
.RemoveDocumentInformation wdRDIComments
.RemoveDocumentInformation wdRDIRemovePersonalInformation
'Turn off grammar and spell check options
.ShowGrammaticalErrors = False
.ShowSpellingErrors = False
.GrammarChecked = False
.SpellingChecked = False
.SaveAs FileName:=destFolder & Replace(strFile, "doc", "docx"), FileFormat:=16
.Close
End With
End With
strFile = Dir()
Wend
Set objWordDocument = Nothing
Set objWordApplication = Nothing
End Sub
</code></pre>
<p>In addition, I tried adding in</p>
<pre><code>.StoreRSIDOnSave = False
</code></pre>
<p>But depending on where I placed it either errored out or didn't remove the rsid tags. Also tried accepting all revisions with either .AcceptAllRevisions or .AcceptAllRevisionsShown but would get error when including them. Does anyone know if there's a VBA solution for doing this? My environment is pretty limited so I would be looking for a solution in either VBA or base python.</p>
|
<python><xml><vba><ms-word>
|
2024-05-17 21:35:56
| 1
| 663
|
user2355903
|
78,497,891
| 10,161,315
|
How to incorporate data cleansing into trained model
|
<p>If I cleanse the data and impute median value into NaN values, am I supposed to somehow incorporate this into my model that will be used on the test data? In other words, doesn't my test data need to be cleansed and imputed as well, or will the training take care of this?</p>
<p>I want to say it needs to be incorporated, because otherwise the NaN values break the model, plus any skewness wouldn't have been addressed.</p>
<p>In particular:</p>
<p>Replace NaN with median:</p>
<pre><code>data = data.fillna(data.median())
</code></pre>
<p>Deal with skewness using Quantile Transformation to follow a normal distribution for each feature (the following is just for one).</p>
<pre><code>qualtile_transformer = QuantileTransformer(output_distribution='normal', random_state=0')
data['feat_0'] = quantile_transformer.fit_transform(data['feat_0'].values.reshape(-1,1)).flatten()
</code></pre>
<p>Model:</p>
<pre><code>from sklearn.linear_model import LinearRegression
linear_regr = LinearRegression()
linear_regr.fit(Xtrain,Ytrain)
</code></pre>
<p>Prediction:</p>
<pre><code># make prediction using the testing set
Ypred = linear_regr.predict(Xtest)
</code></pre>
<p>So, ultimately, if I were to take my model and use it on a similar but different data, how can I be sure it doesn't fail and the NaNs and Quantile transformation will be taken care of with the new data before the prediction is implemented so it won't fail?</p>
|
<python><machine-learning><data-cleaning><mlmodel>
|
2024-05-17 20:47:39
| 1
| 323
|
Jennifer Crosby
|
78,497,859
| 20,022,511
|
Failed to install module `PyStruct` using pip
|
<p>I wanted to install the module <code>PyStray</code> for a project.</p>
<p>I used the generic command ->
<code>pip install Pystruct</code></p>
<p>Here is the error :</p>
<pre><code>Collecting Pystruct
Using cached pystruct-0.3.2.tar.gz (5.6 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [17 lines of output]
Traceback (most recent call last):
File "C:\Users\Meit Sant\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\Meit Sant\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\Users\Meit Sant\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\Meit Sant\AppData\Local\Temp\pip-build-env-3y5ayshq\overlay\Lib\site-packages\setuptools\build_meta.py", line 325, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "C:\Users\Meit Sant\AppData\Local\Temp\pip-build-env-3y5ayshq\overlay\Lib\site-packages\setuptools\build_meta.py", line 295, in _get_build_requires
self.run_setup()
File "C:\Users\Meit Sant\AppData\Local\Temp\pip-build-env-3y5ayshq\overlay\Lib\site-packages\setuptools\build_meta.py", line 487, in run_setup
super().run_setup(setup_script=setup_script)
File "C:\Users\Meit Sant\AppData\Local\Temp\pip-build-env-3y5ayshq\overlay\Lib\site-packages\setuptools\build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 3, in <module>
ModuleNotFoundError: No module named 'numpy'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>As it says <code>ModuleNotFoundError: No module named 'numpy'</code>, I installed <code>Numpy</code> using <code>pip install numpy</code>, which completed successfully.</p>
<pre><code>Collecting numpy
Downloading numpy-1.26.4-cp310-cp310-win_amd64.whl.metadata (61 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.0/61.0 kB 819.4 kB/s eta 0:00:00
Downloading numpy-1.26.4-cp310-cp310-win_amd64.whl (15.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 15.8/15.8 MB 2.6 MB/s eta 0:00:00
Installing collected packages: numpy
Successfully installed numpy-1.26.4
</code></pre>
<p>However on re-running the command <code>pip install Pystruct</code>. It threw the same error. Even though <code>numpy</code> <strong>is</strong> installed.</p>
|
<python><pip>
|
2024-05-17 20:38:27
| 1
| 1,373
|
MT_276
|
78,497,824
| 2,893,712
|
Pandas Split By First Delimiter and Add Excess to Different Column
|
<p>I have a column in my dataframe for email addresses. Sometimes the field contains multiple emails separated by <code>; </code></p>
<pre><code>EMAIL COMMENT
email1@example.com Example Comment
email2@example.com; email3@example.com
email4@example.com; email5@example.com; e6@ex.com Another comment
</code></pre>
<p>My goal is to have the <code>EMAIL</code> field only have 1 email and append all the extra emails to the <code>COMMENT</code> column. Here is my ideal output:</p>
<pre><code>EMAIL COMMENT
email1@example.com Example Comment
email2@example.com Emails=email3@example.com
email4@example.com Another comment|Emails=email5@example.com; e6@ex.com
</code></pre>
<p>Here is my code thus far:</p>
<pre><code>df['EMAIL2'] = df['EMAIL'].str.split('; ', 1).str[0] # Split on `; ` if exist and only keep first email
df['COMMENT'] += "|Emails=" + df['EMAIL'].str.split('; ', 1).str[1]
df['EMAIL'] = df['EMAIL2'] # Set Email = Email2
</code></pre>
|
<python><pandas><dataframe><split>
|
2024-05-17 20:26:12
| 2
| 8,806
|
Bijan
|
78,497,582
| 46,503
|
How to add spaces between sentences but ignore links?
|
<p>I need to put a space between sentences like:</p>
<pre><code>"This is one.This is two"
</code></pre>
<p>should be:</p>
<pre><code>"This is one. This is two"
</code></pre>
<p>In Python, I used the following regular expression:</p>
<pre><code>text = re.sub(r'\.([A-Z])', r'. \1', text)
</code></pre>
<p>It worked pretty well until I have links in the text with dots inside, like:</p>
<pre><code>"This is a text with link https://www.website.com/Article.Name.pdf"
</code></pre>
<p>turned out to be:</p>
<pre><code>"This is a text with link https://www.website.com/Article. Name.pd"
</code></pre>
<p>The case is important, it can't be changed. That is I need the regex to recognize it's a link and ignore it. Not sure how it can be done.</p>
|
<python><regex>
|
2024-05-17 19:23:16
| 2
| 5,287
|
mimic
|
78,497,389
| 5,284,054
|
Python tkinter close first window while opening second window
|
<p>I'm trying to close the first window as the second window opens. Both windows close, or the first window closes and the second window never opens.</p>
<p>This question has a similar problem but was solved by addressing the imported libraries: <a href="https://stackoverflow.com/questions/74816578/tkinter-is-opening-a-second-windows-when-the-first-one-is-closing">Tkinter is opening a second windows when the first one is closing</a></p>
<p>This question also has a similar problem, but the solution keeps all windows active/open all the time, but withdraws the windows from view/user interaction when they are not being used. I want a solution without keeping 23 windows open at the same time. <a href="https://stackoverflow.com/questions/78492310/python-tkinter-cloase-first-window-while-opening-second-window">Python tkinter cloase first window while opening second window</a></p>
<p>I don't want to keep multiple windows open at the same time. What I am asking in this question is a MWE. So I've included only two windows in this question. My actual application has 23 windows in a tree-like structure.</p>
<p><code>quit</code> and <code>destroy</code> both close the <code>mainloop</code>. Ideally, I'd use one <code>mainloop</code> and open and close windows as I need them rather than close one <code>mainloop</code> and sequentially open 22 other <code>mainloop</code>s.</p>
<p>Here's my code, which was taken from here <a href="https://www.pythontutorial.net/tkinter/tkinter-toplevel/" rel="nofollow noreferrer">https://www.pythontutorial.net/tkinter/tkinter-toplevel/</a></p>
<pre><code>from tkinter import *
from tkinter import ttk
class Window(Toplevel):
def __init__(self, parent):
super().__init__(parent)
self.geometry('300x100')
self.title('Toplevel Window')
ttk.Button(self,
text='Close',
command=self.destroy).pack(expand=True)
class App():
def __init__(self, root):
super().__init__()
self.root = root
root.geometry('300x200')
root.title('Main Window')
# place a button on the root window
ttk.Button(root,
text='Open a window',
command=self.open_window).pack(expand=True)
def open_window(self):
window = Window(self.root)
window.grab_set()
self.root.quit()
if __name__ == "__main__":
root = Tk()
App(root)
root.mainloop()
</code></pre>
<p>I added in <code>self.destroy()</code> in the function <code>def open_window(self)</code>. I also tried <code>root.quit()</code>. Both will close both windows.</p>
<p>I adapted this script so that <code>class App</code> takes <code>root</code> as an argument, rather than being a subclass of <code>tk.Tk</code>.</p>
|
<python><tkinter>
|
2024-05-17 18:32:09
| 2
| 900
|
David Collins
|
78,497,266
| 11,441,069
|
View Not Called in Django: Possible URLs.py Configuration Problem
|
<p>I'm facing a strange issue with Django. I have set up a simple form in a template that submits to a view, but I'm not getting the expected 404 response. Instead, I'm being redirected to the URL http://localhost:8000/clear/ without seeing the 404 error.</p>
<p>Here is my setup:</p>
<p>Template:</p>
<pre><code><form method="post" action="{% url 'clear_chat' %}">
{% csrf_token %}
<button type="submit" class="btn btn-danger clear-button me-2">
Clear Chat
</button>
</form>
</code></pre>
<p>View:</p>
<pre><code>from django.http import HttpResponse
from django.contrib.auth.decorators import login_required
@login_required
def clear_chat(request):
return HttpResponse("Invalid request method.", status=404)
@login_required
def chat_view(request, chat_code=None):
if request.method == "POST":
form = MessageForm(request.POST)
if form.is_valid():
user_message = form.save(commit=False)
user_message.chat_code = chat_code
user_message.save()
response_text = 'lorem ipsum'
Message.objects.create(chat_code=chat_code, text=response_text)
return redirect('chat', chat_code=chat_code)
else:
form = MessageForm()
messages = Message.objects.filter(chat_code=chat_code).order_by('created_at')
return render(request, 'chatapp/chat.html', {'form': form, 'messages': messages, 'chat_code': chat_code})
</code></pre>
<p>Urls:</p>
<pre><code>from django.urls import path
from .views import clear_chat
urlpatterns = [
path('<str:chat_code>/', chat_view, name='chat'),
path('clear/', clear_chat, name='clear_chat'),
]
</code></pre>
<p>Instead of seeing the 404 response, I'm redirected to the URL http://localhost:8000/clear/ without any error message. I have no clue what is wrong with that code.</p>
|
<python><python-3.x><django>
|
2024-05-17 18:03:46
| 0
| 509
|
Krzysztof Krysztofczyk
|
78,497,151
| 1,460,910
|
Parsing dict like structure to dict
|
<p>I want to parse this input to output.</p>
<pre><code>input = {
"a.b.c[0][0].d": "i",
"a.b.c[0][1].e-f": "j",
"a.b.c[0][2].g-h[0]": "x",
"a.b.c[0][3].g-h[1]": "y",
"a.b.c[0][4].g-h[2]": "z",
"x": [
{
"b": {
"x.y.z": "a"
},
},
{
"b": {
"d.f.a": "b"
},
},
{
"b": {
"g.h.q": "c"
},
},
],
}
output = {
"a": {
"b" : {
"c": [
[
{
{"d": "i"},
{"e": "j"},
{"g": ["x", "y", "z"]},
}
]
]
}
},
"x": [
{
"b": {
"x": { "y": { "z": "a" } }
}
},
{
"b": {
"d": { "f": { "a": "b" } }
}
},
{
"b": {
"g": { "h": { "q": "c" } }
}
}
]
}
</code></pre>
<p>I have done for simple keys however I am stuck at the list indices keys e.g. a[0][0] , which should create a list of values.
I am missing the parse for this type of keys. It should take the key and return a list depending how many level deep it needs to go based on the indices.</p>
<pre><code>def convert(self, data: Dict):
if isinstance(data, dict):
new_dict = {}
for key, value in data.items():
if not key or key is None or key == '':
continue
parts = key.split(".") if "." in key else key.split("_")
current_dict = new_dict
for part in parts[:-1]:
#print("Current Dict", current_dict)
current_dict.setdefault(part, {})
current_dict = current_dict[part]
if isinstance(value, dict):
current_dict[parts[-1]] = self.convert(value)
elif isinstance(value, list):
# Handle lists containing nested dictionaries
new_list = []
for item in value:
if isinstance(item, dict):
new_list.append(self.convert(item))
elif isinstance(item, str) and "," in item:
# Split comma-separated string and convert to list of strings
new_list.extend(item.split(","))
else:
new_list.append(item)
current_dict[parts[-1]] = new_list
else:
current_dict[parts[-1]] = value
return new_dict
else:
return data
print(convert(input))
</code></pre>
<p>This results in:</p>
<pre><code>{'a': {'b': {'c[0][0]': {'d': 'i'}, 'c[0][1]': {'e-f': 'j'}, 'c[0][2]': {'g-h[0]': 'x'}, 'c[0][3]': {'g-h[1]': 'y'}, 'c[0][4]': {'g-h[2]': 'z'}}}, 'x': [{'b': {'x': {'y': {'z': 'a'}}}}, {'b': {'d': {'f': {'a': 'b'}}}}, {'b': {'g': {'h': {'q': 'c'}}}}]}
</code></pre>
|
<python><dictionary>
|
2024-05-17 17:39:56
| 0
| 555
|
Pranaya Behera
|
78,497,111
| 14,517,452
|
pydantic model incorrectly applies validation
|
<p>I have a model defined like this;</p>
<pre><code>class SomeModel(BaseModel):
name: str
class SomeOtherModel(BaseModel):
name: int
class MyModel(BaseModel):
items: List[Union[SomeModel, SomeOtherModel]]
@validator("items", always=True)
def validate(cls, value):
by_type = list(filter(lambda v: isinstance(v, SomeModel), value))
if len(by_type) < 1:
raise ValueError("we need at least one SomeModel")
</code></pre>
<p>The submodels are unimportant, essentially I need a list of different sub-models, and the list must contain at least one of the first type. All well and good.</p>
<p>Elsewhere in my code I am referring to this model (context: for the purposes of saving user settings, should be irrelevant for this question).</p>
<pre><code>class ComposerModels(BaseModel):
user: List[MyModel] = []
system: List[MyModel] = []
class ComposerSettings(BaseModel):
models: ComposerModels
class UserSettings(BaseModel):
composer: ComposerSettings
</code></pre>
<p>My program needs to be able to save new models into the UserSettings model, something like this;</p>
<pre><code>my_model = MyModel(items=[SomeModel(name="a"), SomeOtherModel(name=1)])
user_settings = UserSettings(
composer=ComposerSettings(
models=ComposerModels(
user=[my_model]
)
)
)
</code></pre>
<p>However, this throws an error;</p>
<pre><code>Traceback (most recent call last):
File ".../pydantic/shenanigans.py", line 38, in <module>
user=[my_model]
File "pydantic/main.py", line 342, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ComposerModels
user -> 0
we need at least one SomeModel (type=value_error)
</code></pre>
<p>This is where it gets weird.</p>
<p>For some reason, when trying to instantiate <code>UserSettings</code>, it goes through validation of the <code>items</code> field inside <code>MyModel</code>, but instead of having been passed a list of sub-models, as expected, somehow it's getting a list of MyModel instances instead. This obviously fails validation, and raises the error above. However, if I comment out the validation code, it works. No error, and the user settings model contains the <code>MyModel</code> we'd expect.</p>
<p>I can't just disable that validation, I need it elsewhere in my program... Any ideas on what's going on here? I'm stumped...</p>
<p>I'm running python 3.7 with pydantic 1.10 on centos 7. I can't easily upgrade any versions, because my company doesn't believe in devops, so if this is a known bug with pydantic at that version I'll have to think of something else.</p>
|
<python><pydantic>
|
2024-05-17 17:26:31
| 2
| 748
|
Edward Spencer
|
78,497,084
| 2,192,423
|
Why the setter for the child class is called when calling super()
|
<p>I have been struggling with this for few hours now. I thought to seek the help of this community.</p>
<p>I am creating a class to inherit from numpy.ndarray. I also need to create a getter and a setter for the dtype property.</p>
<pre><code>import numpy as np
class myarray(np.ndarray):
__dtype = None
def __new__(cls, in_data, dtype):
obj = np.array(in_data, dtype).view(cls)
return obj
def view(self, dtype = None):
if dtype is None:
return self
else:
return super().view(np.uint8)
@property
def dtype(self):
return self.__dtype
@dtype.setter
def dtype(self, value):
print("Setter for dtype in the child class is called!!")
a = myarray([1, 2, 3, 4], dtype = np.uint16)
b = a.view(dtype = np.uint8)
print(b)
</code></pre>
<p>When I run this code, I get the following:</p>
<blockquote>
<p>Setter for dtype in the child class is called!!</p>
<p>[1 2 3 4]</p>
</blockquote>
<p>Why when I call super().view(np.uint8), the setter for the child class is getting called and is there a way to avoid it?</p>
<p>Thanks in advance</p>
|
<python><numpy><numpy-ndarray>
|
2024-05-17 17:21:57
| 0
| 849
|
MrAliB
|
78,496,873
| 3,015,186
|
How to write CSV data directly from string (or bytes) to a duckdb database file in Python?
|
<p>I would like to write CSV data directly from a bytes (or string) object in memory to duckdb database file (i.e. I want to avoid having to write and read the temporary .csv files). This is what I've got so far:</p>
<pre class="lang-py prettyprint-override"><code>import io
import duckdb
data = b'a,b,c\n0,1,2\n3,4,5'
rawtbl = duckdb.read_csv(
io.BytesIO(data), header=True, sep=","
)
con = duckdb.connect('some.db')
con.sql('CREATE TABLE foo AS SELECT * FROM rawtbl')
</code></pre>
<p>which throws following exception:</p>
<pre class="lang-py prettyprint-override"><code>---------------------------------------------------------------------------
IOException Traceback (most recent call last)
Cell In[1], line 10
5 rawtbl = duckdb.read_csv(
6 io.BytesIO(data), header=True, sep=","
7 )
9 con = duckdb.connect('some.db')
---> 10 con.sql('CREATE TABLE foo AS SELECT * FROM rawtbl')
IOException: IO Error: No files found that match the pattern "DUCKDB_INTERNAL_OBJECTSTORE://2843be5a66472f9c"
</code></pre>
<p>However, it is possible to do:</p>
<pre class="lang-py prettyprint-override"><code>>>> duckdb.sql('CREATE TABLE foo AS SELECT * FROM rawtbl')
>>> duckdb.sql('show tables')
┌─────────┐
│ name │
│ varchar │
├─────────┤
│ foo │
└─────────┘
>>> duckdb.sql('SELECT * from foo')
┌───────┬───────┬───────┐
│ a │ b │ c │
│ int64 │ int64 │ int64 │
├───────┼───────┼───────┤
│ 0 │ 1 │ 2 │
│ 3 │ 4 │ 5 │
└───────┴───────┴───────┘
</code></pre>
<p>since <code>rawtbl</code> is a <code>duckdb.duckdb.DuckDBPyRelation</code> object. But that is the in-memory duckdb database, not the 'some.db' file.</p>
<h4>Question</h4>
<p>How to read csv data directly from bytes (or a string) to duckdb database file, without using intermediate CSV files?</p>
<h5>Versions</h5>
<p>duckdb 0.10.2 on Python 3.12.2 on Ubuntu</p>
|
<python><duckdb>
|
2024-05-17 16:29:13
| 2
| 35,267
|
Niko Fohr
|
78,496,800
| 7,773,783
|
Error when calling cursor.execute with psycopg2 sql.SQL object
|
<p>I am trying to execute a raw update query using psycopg2 in django. The code is as below:</p>
<pre><code>from django.db import connection
from psycopg2 import sql
model_instances_to_update = []
for model_instance in models_queryset:
model_instances_to_update.append(
sql.Identifier(
f"({id},{col_1_value},{col_2_value},{col_3_value},{col_4_value})"
)
)
model_instance_query_values_string = sql.SQL(", ").join(model_instances_to_update)
model_instance_sql_params = {"updated_model_instance_values": model_instance_query_values_string}
model_instance_update_query = sql.SQL(
"""
UPDATE {table} AS model_table SET
col_1 = model_table_new.col_1,
col_2 = model_table_new.col_2,
col_3 = model_table_new.col_3,
col_4 = model_table_new.col_4
FROM (VALUES (%(updated_model_instance_values)s)) AS model_table_new(id, col_1, col_2, col_3, col_4)
WHERE model_table.id = model_table_new.id;
"""
).format(
table=sql.Identifier("table_name"),
)
with connection.cursor() as cursor:
cursor.execute(
model_instance_update_query,
params=model_instance_sql_params,
)
</code></pre>
<p>But when I try to execute this query I am getting the following error:</p>
<pre><code>TypeError: "object of type 'Composed' has no len()"
</code></pre>
<p>What am I doing wrong here and how to correct it?</p>
<p><strong>UPDATE</strong>
I am now passing the list of tuples directly in cursor.execute. I have added placeholders in the SQL object string but I am still getting the same error.</p>
<pre><code>for model_instance in models_queryset:
model_instances_to_update.append(
(
id,
col_1_value,
col_2_value,
col_3_value,
col_4_value
)
)
model_instance_update_query = sql.SQL(
"""
UPDATE {table} AS model_table SET
col_1 = model_table_new.col_1,
col_2 = model_table_new.col_2,
col_3 = model_table_new.col_3,
col_4 = model_table_new.col_4
FROM (VALUES ({records_list_template})) AS model_table_new(id, col_1, col_2, col_3, col_4)
WHERE model_table.id = model_table_new.id;
"""
).format(
table=sql.Identifier("table_name"),
records_list_template=sql.SQL(",").join(
[sql.Placeholder()] * len(model_instances_to_update)
),
)
with connection.cursor() as cursor:
cursor.execute(
model_instance_update_query,
model_instances_to_update,
)
</code></pre>
<p><strong>UPDATE 2</strong></p>
<p>Below is the output of printing sql from inside <code>execute</code> function.</p>
<pre><code>Composed([SQL('\n UPDATE '),
Identifier('table_name'),
SQL(' AS model_table SET\n
col_2 = model_table_new.col_2,\n
col_3 = model_table_new.col_3,\n
col_4 = model_table_new.col_4,\n
col_5 = model_table_new.col_5\n
FROM (VALUES ('),
Composed([Placeholder(), SQL(', '), Placeholder(), SQL(', '),
Placeholder(), SQL(', '), Placeholder(), SQL(', '),
Placeholder()]),
SQL(')) AS
model_table_new(col_1, col_2, col_3, col_4, col_5)\n
WHERE model_table.id = model_table_new.id;\n ')])
</code></pre>
<p>I tried executing <code>print(model_instance_update_query.as_string(connection))</code> but I got this error - <code>argument 2 must be connection or cursor</code>. This is because I am using connection object from django.db which is a proxy and not the connection object from psycopg2. So tried as above.</p>
|
<python><postgresql><psycopg2><django-orm><database-cursor>
|
2024-05-17 16:08:48
| 1
| 1,139
|
Lax_Sam
|
78,496,761
| 5,565,275
|
Selecting coefficient rows for statsmodels.iolib.summary.Summary
|
<p>I am building a real estate model and one-hot encoded the first three digits of zip code. I don't want to see all those zip codes. How can I print(model.summary()) for all the other features?</p>
|
<python><statsmodels>
|
2024-05-17 16:00:09
| 1
| 443
|
Duy Đặng
|
78,496,560
| 1,123,336
|
Is there a way to shut down the Python multiprocessing resource tracker process?
|
<p>I submitted a <a href="https://stackoverflow.com/questions/78460988/how-to-shut-down-the-resource-tracker-after-running-pythons-processpoolexecutor">question</a> a week ago about persistent processes after terminating the ProcessPoolExecutor, but there have been no replies. I think this might be because not enough people are familiar with how ProcessPoolExecutor is coded, so I thought it would be helpful to ask a more general question to those who use the multiprocessing module.</p>
<p>In the Python documentation, it states that</p>
<blockquote>
<p>On POSIX using the spawn or forkserver start methods will also start a resource tracker process which tracks the unlinked named system resources (such as named semaphores or SharedMemory objects) created by processes of the program. When all processes have exited the resource tracker unlinks any remaining tracked object.</p>
</blockquote>
<p>However, there is nothing in the documentation stating how to shut down this resource tracker when it is no longer needed. As far as I can tell, the tracker PID is not available to the ProcessPoolExecutor, but I did read somewhere that it might be accessible using a Pool instead. Can anyone confirm if this is true before I refactor my code?</p>
|
<python><python-multiprocessing>
|
2024-05-17 15:20:20
| 1
| 582
|
Ray Osborn
|
78,496,523
| 5,429,320
|
Passing "shopify_access_token" in the header of my request to API endpoint from Shopify web app
|
<p>I have an embedded Shopify app that my store uses. The app is just the front end and I have a few Azure Function App to host my different APIs for the app.</p>
<p>In these function app I have a helper file which will take headers from the request and verify the Shopify token. This is to restrict the endpoints from being called from anywhere other than the Shopify app.</p>
<p><strong>authentication_helper.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import os
import requests
import json
from azure.functions import HttpRequest, HttpResponse
from shared.log_helper import logger
INTERNAL_SECRET_KEY = os.getenv('INTERNAL_SECRET')
def verify_shopify_token(domain: str, token: str) -> bool:
try:
response = requests.get(f'https://{domain}/admin/shop.json', headers={'X-Shopify-Access-Token': token})
return response.status_code == 200
except requests.RequestException:
return False
def shopify_auth_required(func):
def wrapper(req: HttpRequest):
logger.info("Incoming headers: %s", req.headers)
logger.info("Incoming params: %s", req.params)
# Check for internal secret key
internal_key = req.headers.get('X-Internal-Secret')
if internal_key == INTERNAL_SECRET_KEY:
return func(req)
# Perform Shopify authentication
domain = req.headers.get('X-Shopify-Domain') or req.params.get('domain')
token = req.headers.get('X-Shopify-Access-Token') or req.params.get('token')
if not domain or not token or not verify_shopify_token(domain, token):
response_content = json.dumps({'error': 'Unauthorized', 'message': 'You are not authorized to access this resource.'})
return HttpResponse(response_content, status_code=401, mimetype="application/json")
return func(req)
return wrapper
</code></pre>
<p>In my frontend web app, I call the request using a ajax call and passing in the necessary headers. This code is below:</p>
<pre class="lang-js prettyprint-override"><code> // --- Utility Functions --- //
const fetchAPI = (url, method, data, successHandler, errorHandler) => {
$.ajax({
url,
method,
contentType: "application/json",
dataType: "json",
headers: {
"X-Shopify-Domain": "{{ shopify_domain }}",
"X-Shopify-Access-Token": "{{ shopify_access_token }}"
},
data: JSON.stringify(data),
success: successHandler,
error: errorHandler,
});
};
</code></pre>
<p>I feel like this is more secure from an API perspective as only the frontend app can successfully call the endpoints.</p>
<p>However, I feel like this opens me up to other issues if anyone is able to get the <code>X-Shopify-Access-Token</code> from the request header.</p>
<p>Is this a suitable solution to adding authentication to my APIs?</p>
<p>I have tried to stay away from having to add another login process for my Shopify app as I am already logged into my Shopify admin portal, and this would require quiet a significate amount of work to implement.</p>
|
<python><python-3.x><azure-functions><shopify><shopify-app>
|
2024-05-17 15:12:56
| 1
| 2,467
|
Ross
|
78,496,475
| 6,751,456
|
django bulk update with batch in multiple transactions
|
<p>I've a certain code that updates bulk rows:</p>
<pre><code>from simple_history.utils import bulk_update_with_history
from django.utils.timezone import now
bulk_update_list = []
for chart in updated_chart_records:
chart.coder_assignment_sequence = count
chart.queue_id = work_queue_pk
chart.level_id = role_id
updated_num = bulk_update_with_history(bulk_update_list, Chart, ["coder_assignment_sequence", "l1_auditor_assignment_sequence", "l2_auditor_assignment_sequence", "l3_auditor_assignment_sequence","queue_id", "level_id", "comments", "reason_id"],
batch_size=10000, default_change_reason="work_queue")
</code></pre>
<p>Now <code>data</code> here can be very huge and sometimes thousands of rows to update.</p>
<p>This caused the database in <code>lock:relation</code> state for about 2-3 minutes.</p>
<p>On dissecting the <code>bulk_update_with_history</code> method which belonged to a <code>simple-history</code> package, the doc string states that:</p>
<pre><code>def bulk_update_with_history(
objs,
model,
fields,
batch_size=None,
default_user=None,
default_change_reason=None,
default_date=None,
manager=None,
):
"""
Bulk update the objects specified by objs while also bulk creating
their history (all in one transaction).
:param objs: List of objs of type model to be updated
:param model: Model class that should be updated
:param fields: The fields that are updated
:param batch_size: Number of objects that should be updated in each batch
</code></pre>
<p>So the theory is that this particular function wraps all batch updates in a single transaction which is what caused the db to lock state.</p>
<p>I want to know how <code>batch_size</code> works in <code>bulk_update</code>. Django doc states that:</p>
<pre><code>The batch_size parameter controls how many objects are saved in a single query. The default is to update all objects in one batch.
</code></pre>
<p>Also one suggestion was that to chunk list of records to be updated and call <code>bulk_update_with_history</code> for each chunk. This will ensure a separate transaction for each chunk and thus relieve the lock.</p>
<p>Any suggestions or idea on this please?</p>
|
<python><django><bulkupdate><database-locking><django-simple-history>
|
2024-05-17 15:02:37
| 0
| 4,161
|
Azima
|
78,496,335
| 19,363,912
|
Get XOR between 2 dataframes
|
<p>How to get difference between 2 pandas dataframes (symmetric difference)?</p>
<pre><code>import pandas as pd
a = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y']})
b = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'z', '']})
result = pd.DataFrame({'a': [2, 2, 3], 'b': ['y', 'z', ''], 'source': ['a', 'b', 'b']})
</code></pre>
<p>Visual</p>
<pre><code> a b
0 1 x
1 2 y
a b
0 1 x
1 2 z
2 3
Out[103]:
a b source
0 2 y a
1 2 z b
2 3 b
</code></pre>
<hr />
<p>Attempted solution seems too complicated</p>
<pre><code>diff_a = pd.concat([a, b, b]).drop_duplicates(keep=False)
diff_a['source'] = 'a'
diff_b = pd.concat([b, a, a]).drop_duplicates(keep=False)
diff_b['source'] = 'b'
out = pd.concat([diff_a, diff_b]).reset_index(drop=True)
</code></pre>
|
<python><pandas><xor>
|
2024-05-17 14:37:18
| 1
| 447
|
aeiou
|
78,496,328
| 12,415,855
|
Permisson Error when writing openpy-workbook?
|
<p>i try to open and save an openpyxl workbook using the following code:</p>
<pre><code>import openpyxl as ox
if __name__ == '__main__':
wb = ox.load_workbook("colSort.xlsx", rich_text=True)
wb.save("TEST.xlsx")
</code></pre>
<p>But i get this Permission Error -</p>
<pre><code>$ python test.py
Traceback (most recent call last):
File "C:\DEV\Fiverr2024\ORDER\robalf\SOLcolSorting\test.py", line 5, in <module>
wb.save("TEST.xlsx")
File "C:\DEV\.venv\xlwings\Lib\site-packages\openpyxl\workbook\workbook.py", line 386, in save
save_workbook(self, filename)
File "C:\DEV\.venv\xlwings\Lib\site-packages\openpyxl\writer\excel.py", line 294, in save_workbook
writer.save()
File "C:\DEV\.venv\xlwings\Lib\site-packages\openpyxl\writer\excel.py", line 275, in save
self.write_data()
File "C:\DEV\.venv\xlwings\Lib\site-packages\openpyxl\writer\excel.py", line 77, in write_data
self._write_worksheets()
File "C:\DEV\.venv\xlwings\Lib\site-packages\openpyxl\writer\excel.py", line 215, in _write_worksheets
self.write_worksheet(ws)
File "C:\DEV\.venv\xlwings\Lib\site-packages\openpyxl\writer\excel.py", line 200, in write_worksheet
writer.write()
File "C:\DEV\.venv\xlwings\Lib\site-packages\openpyxl\worksheet\_writer.py", line 359, in write
self.write_rows()
File "C:\DEV\.venv\xlwings\Lib\site-packages\openpyxl\worksheet\_writer.py", line 125, in write_rows
self.write_row(xf, row, row_idx)
File "C:\DEV\.venv\xlwings\Lib\site-packages\openpyxl\worksheet\_writer.py", line 147, in write_row
write_cell(xf, self.ws, cell, cell.has_style)
File "C:\DEV\.venv\xlwings\Lib\site-packages\openpyxl\cell\_writer.py", line 87, in etree_write_cell
whitespace(text)
File "C:\DEV\.venv\xlwings\Lib\site-packages\openpyxl\xml\functions.py", line 85, in whitespace
stripped = node.text.strip()
^^^^^^^^^^^^^^^
AttributeError: 'xml.etree.ElementTree.Element' object has no attribute 'strip'
Exception ignored in atexit callback: <function _openpyxl_shutdown at 0x0000018250C27880>
Traceback (most recent call last):
File "C:\DEV\.venv\xlwings\Lib\site-packages\openpyxl\worksheet\_writer.py", line 32, in _openpyxl_shutdown
os.remove(path)
PermissionError: [WinError 32] Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen Prozess verwendet wird: 'C:\\Users\\WRSPOL\\AppData\\Local\\Temp\\1\\openpyxl.er0fcti4'
</code></pre>
<p>When im opening the workbook without richtext with</p>
<pre><code>wb = ox.load_workbook("colSort.xlsx")
</code></pre>
<p>it seems to work fine.</p>
<p>Why is the saving not possible when i am opening the excel with richtext?</p>
|
<python><openpyxl>
|
2024-05-17 14:36:24
| 1
| 1,515
|
Rapid1898
|
78,496,267
| 2,955,541
|
Adding a Private Attribute to a Subclassed NumPy Array
|
<p>In the <a href="https://numpy.org/devdocs/user/basics.subclassing.html#slightly-more-realistic-example-attribute-added-to-existing-array" rel="nofollow noreferrer">NumPy documentation</a>, it is demonstrated that you can add a custom attribute by subclassing the ndarray:</p>
<pre><code>import numpy as np
class RealisticInfoArray(np.ndarray):
def __new__(cls, input_array, info=None):
obj = np.asarray(input_array).view(cls)
obj.info = info
return obj
def __array_finalize__(self, obj):
if obj is None: return
self.info = getattr(obj, 'info', None)
arr = np.arange(5)
obj = RealisticInfoArray(arr, info='information')
print(obj.info)
</code></pre>
<p>This example works perfectly fine. However, if I want to make the new attribute "private" (i.e., by calling the attribute <code>self._info</code>):</p>
<pre><code>import numpy as np
class RealisticInfoArray(np.ndarray):
def __new__(cls, input_array, info=None):
obj = np.asarray(input_array).view(cls)
obj.info = info
return obj
def __array_finalize__(self, obj):
if obj is None: return
self._info = getattr(obj, 'info', None)
arr = np.arange(5)
obj = RealisticInfoArray(arr, info='information')
print(obj._info)
</code></pre>
<p>Then <code>None</code> is printed. Can somebody help me understand why this is and how to fix this so that <code>obj._info</code> is not <code>None</code> and behaves the same as the first example?</p>
|
<python><arrays><numpy>
|
2024-05-17 14:25:23
| 1
| 6,989
|
slaw
|
78,496,200
| 2,082,026
|
How to pass on the chopped details from URL in Django?
|
<p>I have a project with the following <code>urls.py</code>.</p>
<pre class="lang-py prettyprint-override"><code>urlpatterns = [
path('category*/', include('student.urls')) // * could be replaced by a number
]
</code></pre>
<p>In that project I then have an application student whose <code>urls.py</code> looks like this:</p>
<pre class="lang-py prettyprint-override"><code>urlpatterns = [
path('all/', views.all, name='all')
]
</code></pre>
<p>So lets say when I type in the URL <code>category1/all/</code> I get the list of all Category 1 students, but when I do <code>category2/all/</code> I should be able to get the list of all Category 2 students. So by the time I reach <code>all/</code> I have lost for which category I want to retrieve the list of all students. How can I still know in my students application which category students data should be retrieved?</p>
|
<python><django><django-urls>
|
2024-05-17 14:10:15
| 1
| 498
|
Rishik Mani
|
78,496,083
| 3,258,600
|
Create an SQLAlchemy relationship between schemas without circular imports
|
<p>I have models in two different schemas with a foreign key relationship like this.</p>
<p>File 1:</p>
<pre><code>Base = declarative_base(metadata=MetaData(schema="a"))
class Advertiser(Base):
__tablename__ = 'advertisers'
id = Column(Integer, primary_key=True)
campaigns = relationship("Campaign", back_populates="advertiser")
</code></pre>
<p>File 2:</p>
<pre><code>import file1
Base = declarative_base(metadata=MetaData(schema="b"))
class Campaign(Base):
__tablename__ = 'campaigns'
id = Column(Integer, primary_key=True)
advertiser_id = Column(Integer, ForeignKey(file1.Advertiser.id))
advertiser = relationship(file1.Advertiser, back_populates="campaigns")
</code></pre>
<p>I want to be able to call <code>Advertiser.campaigns</code> to get a list of campaigns associated with this advertiser, but this call fails since schema a is not aware of the table <code>Campaigns</code> in schema b. I can't import <code>Campaigns</code> because that would create a circular relationship between the files. I can set <code>Advertiser.campaigns</code> in file2, but <code>Advertiser.campaigns</code> would fail unless I import file 2. How do I set up this relationship?</p>
|
<python><sqlalchemy>
|
2024-05-17 13:48:24
| 1
| 12,963
|
kellanburket
|
78,496,068
| 6,419,513
|
Passing Sample Weights to Sklearn Pipeline object with XGBoost
|
<p>There are some good questions on this <a href="https://stackoverflow.com/questions/36205850/sklearn-pipeline-applying-sample-weights-after-applying-a-polynomial-feature-t">topic</a>, however, I haven't found any solution to this error involving using XGBoost models with <code>sample_weight</code> in sklearn's <code>Pipeline</code> framework.</p>
<p>Here is my example and the subsequent error:</p>
<pre><code>from xgboost_distribution import XGBDistribution
from scipy.stats import nbinom
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# ... split dataset into X_train, X_test, y_train, y_test, and sample_wgt
model = XGBDistribution(
distribution="negative-binomial"
)
pipe = Pipeline(
[
('scaler', StandardScaler()),
('model', model)
]
)
pipe.fit(X_train, y_train, model__sample_weight=sample_wgt)
</code></pre>
<p>But I get the following warning message:</p>
<pre><code>WARNING: /workspace/src/learner.cc:742:
Parameters: { "sample_weight" } are not used.
</code></pre>
<p>I know the <code>Pipeline</code> object is clunky with accepting parameters, but how can I ensure the XGB model accepts the sample weights?</p>
|
<python><scikit-learn><pipeline><xgboost>
|
2024-05-17 13:45:26
| 0
| 1,732
|
a.powell
|
78,496,010
| 6,714,667
|
How can i compare each list in a list of lists and compare one of the last elements of the preceding list with first element of next list?
|
<pre><code>lists = [['1. what is your name','alice','what is your age','98'],
['2. how old are you','24','city of birth','washington 3. None what is your fav subject?'],
['3. what is your fav subject? please choose from maths, english, science','maths','school','elemetary oak']
]
</code></pre>
<p>i have the above list of lists, i would like to remove '3. None what is your fav subject?' from the the second list by seeing whats in common with the first element from next list.
i have tried:</p>
<pre><code>import itertools
for x,y in zip(lists, itertools.islice(lists, 1, None)):
if x[2] != y[0]:
print("CHECKING")
new_str = ''.join([i for i in x[2] if i in y[0]])
print("RESULT===============",new_str)
</code></pre>
<p>but this return: <code>washington 3. one what is your fav subject?</code> i need it to just be washington</p>
<p>how can i iterate through a list of lists and check if last element of last list contains anything in common with first sentence in next list and then remove whats in common from the last element?</p>
|
<python><list><python-itertools>
|
2024-05-17 13:33:14
| 1
| 999
|
Maths12
|
78,495,832
| 4,211,297
|
why is Pydantic saying email is missing when I'm declaring it
|
<p>In the code below, I'm creating an api that would accept a notification that should send an email. There are fields I want to add to the notification like an ID and TS after its been submitted. I have modeled it like below.</p>
<pre><code>class NotificationPriority(Enum):
high = "high"
medium = "medium"
low = "low"
class Notification(BaseModel):
notification: str
priority: NotificationPriority
notification_from: str
class EmailNotification(Notification):
email_to: str
email_from: str | None = None
class EmailNotificationSystem(EmailNotification):
id: uuid.UUID = uuid.uuid4()
ts: datetime.datetime = datetime.datetime.now(datetime.UTC).isoformat()
email: EmailNotification
@app.post("/notifications/email")
async def create_notification(email_notification: EmailNotification):
print(email_notification.model_dump())
system = EmailNotificationSystem(
email=email_notification,
)
return system
</code></pre>
<p>Pydantic fails saying all the fields inside the email_notification that I'm setting with <code>email=email_notification</code> are missing.</p>
<p><code>Field required [type=missing, input_value={'email': EmailNotificati...rom='namec@ccsmed.com')}, input_type=dict] For further information visit https://errors.pydantic.dev/2.7/v/missing priority Field required [type=missing, input_value={'email': EmailNotificati...rom='namec@ccsmed.com')}, input_type=dict] For further information visit https://errors.pydantic.dev/2.7/v/missing notification_from Field required [type=missing, input_value={'email': EmailNotificati...rom='namec@ccsmed.com')}, input_type=dict] For further information visit https://errors.pydantic.dev/2.7/v/missing email_to Field required [type=missing, input_value={'email': EmailNotificati...rom='namec@ccsmed.com')}, input_type=dict] For further information visit https://errors.pydantic.dev/2.7/v/missing</code></p>
<p>when I <code>print(email_notification.model_dump())</code> I can see the fields are in there and satisfied. The request body looks like this:</p>
<pre><code> {
"notification": "new notification",
"priority": "low",
"notification_from": "sampleapp",
"email_to":"nameb@ccsmed.com;nameb@ccsmed.com",
"email_from":"namec@ccsmed.com"
}
</code></pre>
<p>Primary Q: Why is it saying I'm missing fields?</p>
<p>Secondary Q: Is there a better practice to doing what I'm doing?</p>
|
<python><fastapi><pydantic>
|
2024-05-17 12:59:48
| 1
| 2,351
|
Pompey Magnus
|
78,495,627
| 1,070,092
|
XML data created by pytesseract does not show elements
|
<p>I created an xml file with pytesseract in the following way:</p>
<pre><code>from xml.etree import ElementTree as ET
import pytesseract
xml = pytesseract.image_to_alto_xml("test.png")
root = ET.fromstring(xml)
for string_element in root.iter("String"):
print(string_element.attrib)
</code></pre>
<p>The created xml content should find some elements, but does not.</p>
<p>According link <a href="https://python.readthedocs.io/en/stable/library/xml.etree.elementtree.html" rel="nofollow noreferrer">ElementTree Doc</a> I modified my code like this:</p>
<pre><code>data_context = """<?xml version="1.0" encoding="UTF-8"?>
<alto xmlns="http://www.loc.gov/standards/alto/ns-v3#" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.loc.gov/standards/alto/ns-v3# http://www.loc.gov/alto/v3/alto-3-0.xsd">
<Description>
<MeasurementUnit>pixel</MeasurementUnit>
<sourceImageInformation>
<fileName></fileName>
</sourceImageInformation>
<OCRProcessing ID="OCR_0">
<ocrProcessingStep>
<processingSoftware>
<softwareName>tesseract 4.1.1</softwareName>
</processingSoftware>
</ocrProcessingStep>
</OCRProcessing>
</Description>
<Layout>
<Page WIDTH="1016" HEIGHT="1020" PHYSICAL_IMG_NR="0" ID="page_0">
<PrintSpace HPOS="0" VPOS="0" WIDTH="1016" HEIGHT="1020">
<ComposedBlock ID="cblock_0" HPOS="982" VPOS="46" WIDTH="13" HEIGHT="950">
<TextBlock ID="block_0" HPOS="982" VPOS="46" WIDTH="13" HEIGHT="950">
<TextLine ID="line_0" HPOS="982" VPOS="46" WIDTH="13" HEIGHT="950">
<String ID="string_0" HPOS="982" VPOS="46" WIDTH="13" HEIGHT="950" WC="0.95" CONTENT=" "/>
</TextLine>
</TextBlock>
</ComposedBlock>
<ComposedBlock ID="cblock_35" HPOS="26" VPOS="985" WIDTH="966" HEIGHT="34">
<TextBlock ID="block_35" HPOS="26" VPOS="985" WIDTH="966" HEIGHT="34">
<TextLine ID="line_40" HPOS="26" VPOS="985" WIDTH="966" HEIGHT="34">
<String ID="string_40" HPOS="26" VPOS="985" WIDTH="966" HEIGHT="34" WC="0.95" CONTENT=" "/>
</TextLine>
</TextBlock>
</ComposedBlock>
</PrintSpace>
</Page>
</Layout>
</alto>
"""
root = ET.fromstring(data_context)
for string_elem in root.iter("String"):
print(string_elem.attrib)
print("What's wrong here?")
</code></pre>
<p>Still the same. No elements found. What's wrong here?
Any help is appreciated.
Thanks</p>
<blockquote>
<p>Blockquote</p>
</blockquote>
|
<python><xml><python-tesseract>
|
2024-05-17 12:19:32
| 1
| 345
|
Vik
|
78,495,571
| 1,169,091
|
How to get the response from the AI Model
|
<p>I adapted this code from <a href="https://www.datacamp.com/tutorial/llama-cpp-tutorial" rel="nofollow noreferrer">https://www.datacamp.com/tutorial/llama-cpp-tutorial</a></p>
<pre><code>from llama_cpp import Llama
# GLOBAL VARIABLES
my_model_path = "./model/zephyr-7b-beta.Q4_0.gguf"
CONTEXT_SIZE = 512
# LOAD THE MODEL
zephyr_model = Llama(model_path=my_model_path,
n_ctx=CONTEXT_SIZE)
def generate_text_from_prompt(user_prompt,
max_tokens = 100,
temperature = 0.3,
top_p = 0.1,
echo = True,
stop = ["Q", "\n"]):
# Define the parameters
model_output = zephyr_model(
user_prompt,
max_tokens=max_tokens,
temperature=temperature,
top_p=top_p,
echo=echo,
stop=stop,
)
return model_output
if __name__ == "__main__":
my_prompt = "What do you think about fishing for catfish?"
zephyr_model_response = generate_text_from_prompt(my_prompt)
final_result = zephyr_model_response["choices"][0]["text"].strip()
print(final_result)
</code></pre>
<p>How do I print the response to the prompt? It prints a huge amount of information but I don't see the response.</p>
<p>When I print model_output I get</p>
<pre><code>{'id': 'cmpl-f13d4d89-5f25-4c01-b274-97df4ebaba84', 'object': 'text_completion', 'created': 1715958626, 'model': './model/zephyr-7b-beta.Q4_0.gguf', 'choices': [{'text': 'What do you think about fishing for catfish?', 'index': 0, 'logprobs': None, 'finish_reason': 'stop'}], 'usage': {'prompt_tokens': 11, 'completion_tokens': 1, 'total_tokens': 12}}
</code></pre>
|
<python><llamacpp>
|
2024-05-17 12:09:09
| 1
| 4,741
|
nicomp
|
78,495,548
| 1,726,805
|
How can I convert a json file to a pandas dataframe
|
<p>I have a json file with this structure:</p>
<pre><code>[
{
"name": "myName",
"type": {
"x": {
"id": [
"x1",
"x2"
]
},
"y": {
"id": "y1"
},
"z": {
"id": "z1"
}
}
}
]
</code></pre>
<p><code>Type</code> can only take values x, y, and z. Any of those can be missing, but there is always at least one x, y, or z. There is no limit to the number of id's for each type, although there are never more than ten id-values in the data.</p>
<p>I would like to transform that into a Pandas dataframe, with structure:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>type</th>
<th>id</th>
</tr>
</thead>
<tbody>
<tr>
<td>myName</td>
<td>x</td>
<td>x1</td>
</tr>
<tr>
<td>myName</td>
<td>x</td>
<td>x2</td>
</tr>
<tr>
<td>myName</td>
<td>y</td>
<td>y1</td>
</tr>
<tr>
<td>myName</td>
<td>z</td>
<td>z1</td>
</tr>
</tbody>
</table></div>
<p>I've experimented with <code>to_json{orient="..."}</code> and <code>json_normalize()</code>, but I'm not able to flatten the lists in the json.</p>
|
<python><json><pandas>
|
2024-05-17 12:04:46
| 2
| 609
|
Matthijs
|
78,495,482
| 78,903
|
How do I await a coroutine within a non-async function in Python?
|
<p>Consider this contrived example:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
async def main():
print(non_async_function())
def non_async_function():
# Syntax error due to "await".
return await async_function()
async def async_function():
return "foo"
if __name__ == '__main__':
asyncio.run(main())
</code></pre>
<p>Since the <code>await</code> keyword cannot be used in a non-async function, how do I await an awaitable from a non-async function? <code>asyncio.run</code> and <code>asyncio.get_running_loop().run_until_complete</code> don't work: <strong>RuntimeError:</strong> <em>This event loop is already running</em>.</p>
<p><code>asgiref.async_to_sync</code> also doesn't work: <strong>RuntimeError:</strong> <em>You cannot use AsyncToSync in the same thread as an async event loop - just await the async function directly.</em></p>
<p>This problem comes up with chained decorators where a decorator can provide both async and sync variants using <code>inspect.iscoroutinefunction</code> -- but if one of the decorators doesn't do this, subsequent decorators will mistakenly assume they're wrapping a sync function. A sync decorator can guard against this by checking the result (assuming it needs to operate on the result before returning it):</p>
<pre class="lang-py prettyprint-override"><code>result = func(*args, **kwargs)
if inspect.isawaitable(result):
return sync_await(result)
</code></pre>
<p>But how do I implement this <code>sync_await</code> in a way that uses the current async context with current context vars?</p>
|
<python><asynchronous><async-await>
|
2024-05-17 11:52:42
| 3
| 2,866
|
Kiran Jonnalagadda
|
78,495,333
| 10,806,496
|
Celery executes tasks sequentially, one after another
|
<p>I have a Django application that has large I/O-bound tasks.</p>
<p>I use Celery to run these tasks in threads and manage the progress in the UI with a progress bar.</p>
<p>Here's my configuration :</p>
<p><strong>Django version</strong> : 5.0.2</p>
<p><strong>Celery version</strong> : 5.3.6</p>
<p><strong>Redis version</strong> : Redis for Windows 5.0.14.1 (<a href="https://github.com/tporadowski/redis/releases" rel="nofollow noreferrer">https://github.com/tporadowski/redis/releases</a>)</p>
<p><strong>SERVER</strong></p>
<p>Windows Server 2016 (<strong>can't change that; I have data stored in an Access Database</strong>)</p>
<p>Hosting app in IIS default AppPool</p>
<p>Processor : 4 core</p>
<p>RAM : 4 GB</p>
<p><strong>web.config configuration :</strong></p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<configuration>
<system.webServer>
<handlers>
<add name="Python FastCGI" path="*" verb="*" modules="FastCgiModule" scriptProcessor="C:\Python311\python.exe|C:\Python311\Lib\site-packages\wfastcgi.py" resourceType="Unspecified" requireAccess="Script" />
</handlers>
<directoryBrowse enabled="true" />
</system.webServer>
<appSettings>
<add key="PYTHONPATH" value="C:\inetpub\Django-LIAL\WEBAPPLIAL" />
<add key="WSGI_HANDLER" value="WEBAPPLIAL.wsgi.application" />
<add key="DJANGO_SETTINGS_MODULE" value="WEBAPPLIAL.settings" />
</appSettings>
</configuration>
</code></pre>
<p><strong>Django wsgi configuration :</strong></p>
<pre><code>from gevent import monkey
monkey.patch_all()
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'WEBAPPLIAL.settings')
application = get_wsgi_application()
</code></pre>
<p><strong>Django celery configuration :</strong></p>
<pre><code>#Celery setting
CELERY_BROKER_URL = 'redis://127.0.0.1:6379/0'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = 'django-db'
CELERY_CACHE_BACKEND = 'django-cache'
CELERY_TASK_ALWAYS_EAGER = False
CELERY_TASK_TRACK_STARTED = True
</code></pre>
<p><strong>celery command line launched in git :</strong></p>
<pre><code>$ celery -A WEBAPPLIAL worker -l info -P gevent
</code></pre>
<p>*** what the celery command line do : ***</p>
<pre><code>-------------- celery@WIN-RHK2AHPNGJ1 v5.3.6 (emerald-rush)
--- ***** -----
-- ******* ---- Windows-10-10.0.14393-SP0 2024-05-17 12:05:49
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: WEBAPPLIAL:0x17207492650
- ** ---------- .> transport: redis://127.0.0.1:6379/0
- ** ---------- .> results:
- *** --- * --- .> concurrency: 4 (gevent)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. APPLICATION.A13.A13_LOG_0002.model.task.extract_data
. APPLICATION.A13.A13_LOG_0005.tasks.launch_app
. WEBAPPLIAL.celery.debug_task
[2024-05-17 12:05:49,995: WARNING/MainProcess] C:\Python311\Lib\site-packages\celery\worker\consumer\consumer.py:507: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
warnings.warn(
[2024-05-17 12:05:50,010: INFO/MainProcess] Connected to redis://127.0.0.1:6379/0
[2024-05-17 12:05:50,010: WARNING/MainProcess] C:\Python311\Lib\site-packages\celery\worker\consumer\consumer.py:507: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
warnings.warn(
[2024-05-17 12:05:50,026: INFO/MainProcess] mingle: searching for neighbors
[2024-05-17 12:05:51,048: INFO/MainProcess] mingle: all alone
[2024-05-17 12:05:51,048: WARNING/MainProcess] C:\Python311\Lib\site-packages\celery\worker\consumer\consumer.py:507: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
warnings.warn(
[2024-05-17 12:05:51,048: INFO/MainProcess] pidbox: Connected to redis://127.0.0.1:6379/0.
[2024-05-17 12:05:51,063: INFO/MainProcess] celery@WIN-RHK2AHPNGJ1 ready.
</code></pre>
<p><strong>Quick look at my function:</strong></p>
<pre><code>@shared_task(bind=True)
def launch_app(self, laiteries, formated_date):
@shared_task(bind=True)
def extract_data(self, date_start, date_end):
</code></pre>
<p>They are both called with <code>.delay()</code>
Each function interacts with the Django ORM but on a different model.</p>
<hr />
<p><strong>Actual behaviour</strong></p>
<p>Then, when I launch my first function (by interacting with the web app) and immediately launch the second function, this is what happens:</p>
<pre><code>[2024-05-17 12:06:28,464: INFO/MainProcess] Task APPLICATION.A13.A13_LOG_0002.model.task.extract_data[baf19fc9-dd9c-4574-af8d-c7ed9a522c0e] received
[2024-05-17 12:06:56,144: INFO/MainProcess] Task APPLICATION.A13.A13_LOG_0002.model.task.extract_data[baf19fc9-dd9c-4574-af8d-c7ed9a522c0e] succeeded in 27.60899999999998s: 'Proc▒dure termin▒e !'
[2024-05-17 12:06:56,159: INFO/MainProcess] Task APPLICATION.A13.A13_LOG_0005.tasks.launch_app[435df153-9879-47a4-93ba-5ba9ed90cf76] received
[2024-05-17 12:07:01,662: INFO/MainProcess] Task APPLICATION.A13.A13_LOG_0005.tasks.launch_app[435df153-9879-47a4-93ba-5ba9ed90cf76] succeeded in 5.5s: 'Tout les emails ont bien ▒t▒ envoyer !'
</code></pre>
<hr />
<p><strong>Problems :</strong>
The problem is that Celery performs tasks sequentially and not in parallel.</p>
<p>My expected behavior would be something like this:</p>
<pre><code>[2024-05-17 12:06:28,464: INFO/MainProcess] Task APPLICATION.A13.A13_LOG_0002.model.task.extract_data[baf19fc9-dd9c-4574-af8d-c7ed9a522c0e] received
[2024-05-17 12:06:29,159: INFO/MainProcess] Task APPLICATION.A13.A13_LOG_0005.tasks.launch_app[435df153-9879-47a4-93ba-5ba9ed90cf76] received
[2024-05-17 12:07:34,662: INFO/MainProcess] Task APPLICATION.A13.A13_LOG_0005.tasks.launch_app[435df153-9879-47a4-93ba-5ba9ed90cf76] succeeded in 5.5s: 'Tout les emails ont bien ▒t▒ envoyer !'
[2024-05-17 12:06:56,144: INFO/MainProcess] Task APPLICATION.A13.A13_LOG_0002.model.task.extract_data[baf19fc9-dd9c-4574-af8d-c7ed9a522c0e] succeeded in 27.60899999999998s: 'Proc▒dure termin▒e !'
</code></pre>
<p>If you need any more details, please ask!</p>
|
<python><python-3.x><django><celery><gevent>
|
2024-05-17 11:26:53
| 1
| 324
|
Mougnou
|
78,495,301
| 5,457,202
|
How to plot large dataset of Shapely LineString with Plotly?
|
<p>I've been given a DXF file at work to analyse it, and they want me to plot it as well. This file contains a layout of a private area. The data looks like this:</p>
<pre><code>Layer PaperSpace SubClasses Linetype EntityHandle Text geometry
860 0 None AcDbEntity:AcDbPolyline None 59E None LINESTRING (441.981 186.290, 441.981 189.994, 441.599 189.993, 441.599 186.289, 441.980 186.289)
861 0 None AcDbEntity:AcDbPolyline None 59F None LINESTRING (441.981 193.740, 441.981 190.036, 441.599 190.036, 441.599 193.740, 441.980 193.740)
862 0 None AcDbEntity:AcDbPolyline None 5A0 None LINESTRING (441.790 189.994, 441.790 190.036)
863 0 None AcDbEntity:AcDbPolyline None 5A1 None LINESTRING (441.790 190.015, 441.790 190.015)
864 0 None AcDbEntity:AcDbPolyline None 5A2 None LINESTRING (441.790 186.544, 441.790 186.544)
</code></pre>
<p>The whole dataset has <strong>397.535</strong> records. The column I'm interested is the geometry one, which is full of <a href="https://shapely.readthedocs.io/en/stable/reference/shapely.LineString.html" rel="nofollow noreferrer">Shapely</a> <code>LineString</code> objects. Even if I can plot it without issues with GeoPandas plot, I want to be able to plot the geometry column with <code>Plotly</code> (since this allow me to zoom in, out, see values when hovering over the plot, etc.). However, I cannot plot the whole dataset without having VSCode or the browser (I tried Jupyter Notebook on web and also tried to open an html file generated with <code>fig.write_html</code>, with no success) to crash.</p>
<p>This is the whole code (partially adapted to be able to run it):</p>
<pre><code>import plotly.graph_objects as go
plot_data = []
sample_coords = [[[441.9805811878579, 441.9805811878579, 441.5995787878578, 441.5995787878578, 441.9805811878579], [186.2897580001509, 189.9939480001509, 189.9939480001509, 186.2897580001509, 186.2897580001509]],
[[441.9805811878579, 441.9805811878579, 441.5995787878578, 441.5995787878578, 441.9805811878579], [193.7404716001509, 190.0362816001509, 190.0362816001509, 193.7404716001509, 193.7404716001509]],
[[441.7900799878579, 441.7900799878579], [189.9939480001509, 190.0362816001509]],
[[441.7900799878579, 441.7900799878579], [190.0151148001509, 190.0151148001509]],
[[441.7900799878579, 441.7900799878579], [186.5437596001509, 186.5437596001509]]]
for x, y in sample_coords:
#x, y = linestring.xy
#print(y)
plot_data.append(
go.Scatter(
x=x,
y=y,
fill="toself", # Fills the linestring
mode="none", # Hides markers
showlegend=False
)
)
layout = go.Layout(
autosize=False,
width=1100,
height=800,
margin=go.layout.Margin(l=50, r=50, b=100, t=100, pad=4),
)
# Create the figure and display
fig = go.Figure(data=plot_data, layout=layout)
fig.show()
</code></pre>
<p>For instance, plotting the head of said DataFrame produces this plot:</p>
<p><a href="https://i.sstatic.net/26cFx3TM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26cFx3TM.png" alt="enter image description here" /></a></p>
|
<python><pandas><plot><plotly><shapely>
|
2024-05-17 11:20:51
| 0
| 436
|
J. Maria
|
78,495,189
| 395,069
|
How is Dataframe normalization being done?
|
<p>I am trying to understand the Normalization of the Dataframe values. Here is the scenario from the famous disaster i.e. Titanic and here is the code and result from a query:</p>
<pre class="lang-py prettyprint-override"><code>dftitanic.groupby('Fsize')['Survived'].value_counts(normalize=False).reset_index(name='perc')
</code></pre>
<p>Result:</p>
<pre><code> Fsize Survived perc
0 1 0 374
1 1 1 163
2 2 1 89
3 2 0 72
4 3 1 59
5 3 0 43
6 4 1 21
7 4 0 8
8 5 0 12
9 5 1 3
10 6 0 19
11 6 1 3
12 7 0 8
13 7 1 4
14 8 0 6
15 11 0 7
</code></pre>
<p>And if I use <code>.value_counts(normalize=True)</code>, the result would be:</p>
<pre class="lang-py prettyprint-override"><code>dftitanic.groupby('Fsize')['Survived'].value_counts(normalize=True).reset_index(name='perc')
</code></pre>
<pre><code> Fsize Survived perc
0 1 0 0.696462
1 1 1 0.303538
2 2 1 0.552795
3 2 0 0.447205
4 3 1 0.578431
5 3 0 0.421569
6 4 1 0.724138
7 4 0 0.275862
8 5 0 0.800000
9 5 1 0.200000
10 6 0 0.863636
11 6 1 0.136364
12 7 0 0.666667
13 7 1 0.333333
14 8 0 1.000000
15 11 0 1.000000
</code></pre>
<p>And the data from <code>describe()</code>:</p>
<pre><code> Fsize Survived Perc
count 16.0000 16.000000 16.000000
mean 4.6875 0.437500 55.687500
std 2.7500 0.512348 95.378347
min 1.0000 0.000000 3.000000
25% 2.7500 0.000000 6.750000
50% 4.5000 0.000000 15.500000
75% 6.2500 1.000000 62.250000
max 11.0000 1.000000 374.000000
</code></pre>
<h3>My effort:</h3>
<p>From <a href="https://stackoverflow.com/a/41532180">https://stackoverflow.com/a/41532180</a>, I got the following methods:</p>
<ol>
<li><p>normalized_df=(df-df.mean())/df.std()</p>
</li>
<li><p>normalized_df=(df-df.min())/(df.max()-df.min()</p>
</li>
</ol>
<p>However, from the results of <code>describe()</code>, the above two methods not matching the results of <code>.values_counts(normalize=True)</code>.</p>
<p>A similar formula and description is present <a href="https://www.javatpoint.com/normalization-in-machine-learning" rel="nofollow noreferrer">here</a>: but didn't get understandable results.</p>
<h3>Question:</h3>
<p>How this Normalization being done? i.e. <code>.value_counts(normalize=True)</code></p>
|
<python><dataframe><normalization>
|
2024-05-17 10:58:59
| 1
| 2,213
|
Farrukh Waheed
|
78,495,188
| 5,722,359
|
Is there a ready solution to quickly extract the contents of pipenv `Pipfile.lock` to a Python dictionary?
|
<p>The contents of the <code>pipenv</code> <code>Pipfile.lock</code> file looks like a nested Python dictionary.</p>
<p>Is there a ready solution (function/class/module) to quickly extract the contents of <code>Pipfile.lock</code> to a Python dictionary?</p>
<p>I tried:</p>
<pre><code>content = []
with open(Pipfilelock) as f:
for row in f:
content.append(row.strip())
parsed_content = ' '.join(content).replace('true', 'True')
print(f"{parsed_content=}")
</code></pre>
<p>and obtained :</p>
<pre><code>parsed_content='{ "_meta": { "hash": { "sha256": "0ed7c1ed593ef7fb644e2faa43e07f28a3c7d297b3b7da70b3a08807a4638fae" }, "pipfile-spec": 6, "requires": { "python_full_version": "3.10.12", "python_version": "3.10" }, "sources": [ { "name": "pypi", "url": "https://pypi.org/simple", "verify_ssl": True } ] }, "default": { "numpy": { "hashes": [ "sha256:03a8c78d01d9781b28a6989f6fa1bb2c4f2d51201cf99d3dd875df6fbd96b23b", "sha256:08beddf13648eb95f8d867350f6a018a4be2e5ad54c8d8caed89ebca558b2818", "sha256:1af303d6b2210eb850fcf03064d364652b7120803a0b872f5211f5234b399f20", "sha256:1dda2e7b4ec9dd512f84935c5f126c8bd8b9f2fc001e9f54af255e8c5f16b0e0", "sha256:2a02aba9ed12e4ac4eb3ea9421c420301a0c6460d9830d74a9df87efa4912010", "sha256:2e4ee3380d6de9c9ec04745830fd9e2eccb3e6cf790d39d7b98ffd19b0dd754a", "sha256:3373d5d70a5fe74a2c1bb6d2cfd9609ecf686d47a2d7b1d37a8f3b6bf6003aea", "sha256:47711010ad8555514b434df65f7d7b076bb8261df1ca9bb78f53d3b2db02e95c", "sha256:4c66707fabe114439db9068ee468c26bbdf909cac0fb58686a42a24de1760c71", "sha256:50193e430acfc1346175fcbdaa28ffec49947a06918b7b92130744e81e640110", "sha256:52b8b60467cd7dd1e9ed082188b4e6bb35aa5cdd01777621a1658910745b90be", "sha256:60dedbb91afcbfdc9bc0b1f3f402804070deed7392c23eb7a7f07fa857868e8a", "sha256:62b8e4b1e28009ef2846b4c7852046736bab361f7aeadeb6a5b89ebec3c7055a", "sha256:666dbfb6ec68962c033a450943ded891bed2d54e6755e35e5835d63f4f6931d5", "sha256:675d61ffbfa78604709862923189bad94014bef562cc35cf61d3a07bba02a7ed", "sha256:679b0076f67ecc0138fd2ede3a8fd196dddc2ad3254069bcb9faf9a79b1cebcd", "sha256:7349ab0fa0c429c82442a27a9673fc802ffdb7c7775fad780226cb234965e53c", "sha256:7ab55401287bfec946ced39700c053796e7cc0e3acbef09993a9ad2adba6ca6e", "sha256:7e50d0a0cc3189f9cb0aeb3a6a6af18c16f59f004b866cd2be1c14b36134a4a0", "sha256:95a7476c59002f2f6c590b9b7b998306fba6a5aa646b1e22ddfeaf8f78c3a29c", "sha256:96ff0b2ad353d8f990b63294c8986f1ec3cb19d749234014f4e7eb0112ceba5a", "sha256:9fad7dcb1aac3c7f0584a5a8133e3a43eeb2fe127f47e3632d43d677c66c102b", "sha256:9ff0f4f29c51e2803569d7a51c2304de5554655a60c5d776e35b4a41413830d0", "sha256:a354325ee03388678242a4d7ebcd08b5c727033fcff3b2f536aea978e15ee9e6", "sha256:a4abb4f9001ad2858e7ac189089c42178fcce737e4169dc61321660f1a96c7d2", "sha256:ab47dbe5cc8210f55aa58e4805fe224dac469cde56b9f731a4c098b91917159a", "sha256:afedb719a9dcfc7eaf2287b839d8198e06dcd4cb5d276a3df279231138e83d30", "sha256:b3ce300f3644fb06443ee2222c2201dd3a89ea6040541412b8fa189341847218", "sha256:b97fe8060236edf3662adfc2c633f56a08ae30560c56310562cb4f95500022d5", "sha256:bfe25acf8b437eb2a8b2d49d443800a5f18508cd811fea3181723922a8a82b07", "sha256:cd25bcecc4974d09257ffcd1f098ee778f7834c3ad767fe5db785be9a4aa9cb2", "sha256:d209d8969599b27ad20994c8e41936ee0964e6da07478d6c35016bc386b66ad4", "sha256:d5241e0a80d808d70546c697135da2c613f30e28251ff8307eb72ba696945764", "sha256:edd8b5fe47dab091176d21bb6de568acdd906d1887a4584a15a9a96a1dca06ef", "sha256:f870204a840a60da0b12273ef34f7051e98c3b5961b61b0c2c1be6dfd64fbcd3", "sha256:ffa75af20b44f8dba823498024771d5ac50620e6915abac414251bd971b4529f" ], "index": "pypi", "markers": "python_version >= \'3.9\'", "version": "==1.26.4" }, "pillow": { "hashes": [ "sha256:0304004f8067386b477d20a518b50f3fa658a28d44e4116970abfcd94fac34a8", "sha256:0689b5a8c5288bc0504d9fcee48f61a6a586b9b98514d7d29b840143d6734f39", "sha256:0eae2073305f451d8ecacb5474997c08569fb4eb4ac231ffa4ad7d342fdc25ac", "sha256:0fb3e7fc88a14eacd303e90481ad983fd5b69c761e9e6ef94c983f91025da869", "sha256:11fa2e5984b949b0dd6d7a94d967743d87c577ff0b83392f17cb3990d0d2fd6e", "sha256:127cee571038f252a552760076407f9cff79761c3d436a12af6000cd182a9d04", "sha256:154e939c5f0053a383de4fd3d3da48d9427a7e985f58af8e94d0b3c9fcfcf4f9", "sha256:15587643b9e5eb26c48e49a7b33659790d28f190fc514a322d55da2fb5c2950e", "sha256:170aeb00224ab3dc54230c797f8404507240dd868cf52066f66a41b33169bdbe", "sha256:1b5e1b74d1bd1b78bc3477528919414874748dd363e6272efd5abf7654e68bef", "sha256:1da3b2703afd040cf65ec97efea81cfba59cdbed9c11d8efc5ab09df9509fc56", "sha256:1e23412b5c41e58cec602f1135c57dfcf15482013ce6e5f093a86db69646a5aa", "sha256:2247178effb34a77c11c0e8ac355c7a741ceca0a732b27bf11e747bbc950722f", "sha256:257d8788df5ca62c980314053197f4d46eefedf4e6175bc9412f14412ec4ea2f", "sha256:3031709084b6e7852d00479fd1d310b07d0ba82765f973b543c8af5061cf990e", "sha256:322209c642aabdd6207517e9739c704dc9f9db943015535783239022002f054a", "sha256:322bdf3c9b556e9ffb18f93462e5f749d3444ce081290352c6070d014c93feb2", "sha256:33870dc4653c5017bf4c8873e5488d8f8d5f8935e2f1fb9a2208c47cdd66efd2", "sha256:35bb52c37f256f662abdfa49d2dfa6ce5d93281d323a9af377a120e89a9eafb5", "sha256:3c31822339516fb3c82d03f30e22b1d038da87ef27b6a78c9549888f8ceda39a", "sha256:3eedd52442c0a5ff4f887fab0c1c0bb164d8635b32c894bc1faf4c618dd89df2", "sha256:3ff074fc97dd4e80543a3e91f69d58889baf2002b6be64347ea8cf5533188213", "sha256:47c0995fc4e7f79b5cfcab1fc437ff2890b770440f7696a3ba065ee0fd496563", "sha256:49d9ba1ed0ef3e061088cd1e7538a0759aab559e2e0a80a36f9fd9d8c0c21591", "sha256:51f1a1bffc50e2e9492e87d8e09a17c5eea8409cda8d3f277eb6edc82813c17c", "sha256:52a50aa3fb3acb9cf7213573ef55d31d6eca37f5709c69e6858fe3bc04a5c2a2", "sha256:54f1852cd531aa981bc0965b7d609f5f6cc8ce8c41b1139f6ed6b3c54ab82bfb", "sha256:609448742444d9290fd687940ac0b57fb35e6fd92bdb65386e08e99af60bf757", "sha256:69ffdd6120a4737710a9eee73e1d2e37db89b620f702754b8f6e62594471dee0", "sha256:6fad5ff2f13d69b7e74ce5b4ecd12cc0ec530fcee76356cac6742785ff71c452", "sha256:7049e301399273a0136ff39b84c3678e314f2158f50f517bc50285fb5ec847ad", "sha256:70c61d4c475835a19b3a5aa42492409878bbca7438554a1f89d20d58a7c75c01", "sha256:716d30ed977be8b37d3ef185fecb9e5a1d62d110dfbdcd1e2a122ab46fddb03f", "sha256:753cd8f2086b2b80180d9b3010dd4ed147efc167c90d3bf593fe2af21265e5a5", "sha256:773efe0603db30c281521a7c0214cad7836c03b8ccff897beae9b47c0b657d61", "sha256:7823bdd049099efa16e4246bdf15e5a13dbb18a51b68fa06d6c1d4d8b99a796e", "sha256:7c8f97e8e7a9009bcacbe3766a36175056c12f9a44e6e6f2d5caad06dcfbf03b", "sha256:823ef7a27cf86df6597fa0671066c1b596f69eba53efa3d1e1cb8b30f3533068", "sha256:8373c6c251f7ef8bda6675dd6d2b3a0fcc31edf1201266b5cf608b62a37407f9", "sha256:83b2021f2ade7d1ed556bc50a399127d7fb245e725aa0113ebd05cfe88aaf588", "sha256:870ea1ada0899fd0b79643990809323b389d4d1d46c192f97342eeb6ee0b8483", "sha256:8d12251f02d69d8310b046e82572ed486685c38f02176bd08baf216746eb947f", "sha256:9c23f307202661071d94b5e384e1e1dc7dfb972a28a2310e4ee16103e66ddb67", "sha256:9d189550615b4948f45252d7f005e53c2040cea1af5b60d6f79491a6e147eef7", "sha256:a086c2af425c5f62a65e12fbf385f7c9fcb8f107d0849dba5839461a129cf311", "sha256:a2b56ba36e05f973d450582fb015594aaa78834fefe8dfb8fcd79b93e64ba4c6", "sha256:aebb6044806f2e16ecc07b2a2637ee1ef67a11840a66752751714a0d924adf72", "sha256:b1b3020d90c2d8e1dae29cf3ce54f8094f7938460fb5ce8bc5c01450b01fbaf6", "sha256:b4b6b1e20608493548b1f32bce8cca185bf0480983890403d3b8753e44077129", "sha256:b6f491cdf80ae540738859d9766783e3b3c8e5bd37f5dfa0b76abdecc5081f13", "sha256:b792a349405fbc0163190fde0dc7b3fef3c9268292586cf5645598b48e63dc67", "sha256:b7c2286c23cd350b80d2fc9d424fc797575fb16f854b831d16fd47ceec078f2c", "sha256:babf5acfede515f176833ed6028754cbcd0d206f7f614ea3447d67c33be12516", "sha256:c365fd1703040de1ec284b176d6af5abe21b427cb3a5ff68e0759e1e313a5e7e", "sha256:c4225f5220f46b2fde568c74fca27ae9771536c2e29d7c04f4fb62c83275ac4e", "sha256:c570f24be1e468e3f0ce7ef56a89a60f0e05b30a3669a459e419c6eac2c35364", "sha256:c6dafac9e0f2b3c78df97e79af707cdc5ef8e88208d686a4847bab8266870023", "sha256:c8de2789052ed501dd829e9cae8d3dcce7acb4777ea4a479c14521c942d395b1", "sha256:cb28c753fd5eb3dd859b4ee95de66cc62af91bcff5db5f2571d32a520baf1f04", "sha256:cb4c38abeef13c61d6916f264d4845fab99d7b711be96c326b84df9e3e0ff62d", "sha256:d1b35bcd6c5543b9cb547dee3150c93008f8dd0f1fef78fc0cd2b141c5baf58a", "sha256:d8e6aeb9201e655354b3ad049cb77d19813ad4ece0df1249d3c793de3774f8c7", "sha256:d8ecd059fdaf60c1963c58ceb8997b32e9dc1b911f5da5307aab614f1ce5c2fb", "sha256:da2b52b37dad6d9ec64e653637a096905b258d2fc2b984c41ae7d08b938a67e4", "sha256:e87f0b2c78157e12d7686b27d63c070fd65d994e8ddae6f328e0dcf4a0cd007e", "sha256:edca80cbfb2b68d7b56930b84a0e45ae1694aeba0541f798e908a49d66b837f1", "sha256:f379abd2f1e3dddb2b61bc67977a6b5a0a3f7485538bcc6f39ec76163891ee48", "sha256:fe4c15f6c9285dc54ce6553a3ce908ed37c8f3825b5a51a15c91442bb955b868" ], "index": "pypi", "markers": "python_version >= \'3.8\'", "version": "==10.2.0" } }, "develop": {} }'
</code></pre>
<p>How do I unstring <code>parsed_content</code> to convert it to a nested dictionary? Manually, I can cut and paste the section within the ' ' symbols into a Python idle and it will immediately recognize it as a nested dictionary. Yet, how can this procedure be done via programming.</p>
|
<python><pipenv>
|
2024-05-17 10:58:44
| 1
| 8,499
|
Sun Bear
|
78,495,170
| 2,695,082
|
Replace an expression containing '/' with div function call in Python
|
<p>I am working on Python code to replace an expression containing '/' with actual function call. For eg: '(n/7-7) +(n/3+3)' should become '(div(n,7)-7 + ( div(n,3)+3)'. Please note only '/' operand needs to be replaced.</p>
<p>I am using ast.NodeVisitor for the same.</p>
<pre class="lang-py prettyprint-override"><code>class visitor(ast.NodeVisitor):
def visit_Expression(self, node): return self.visit(node.body)
def visit_Name(self, node): return node.id
def visit_Constant(self, node): return str(node.value)
def visit_UnaryOp(self, node): return f'{self.visit(node.op)}({self.visit(node.operand)})'
def visit_UAdd(self, node): return '+'
def visit_USub(self, node): return '-'
def visit_BinOp(self, node):
if isinstance(node.op,ast.Div):
return f'{self.visit(node.op)}({self.visit(node.left)},{self.visit(node.right)})'
else:
return f'{self.visit(node.left)} {self.visit(node.op)} {self.visit(node.right)}'
def visit_Add(self, node): return '+'
def visit_Sub(self, node): return '-'
def visit_Mult(self, node): return '*'
def visit_Div(self, node): return 'div'
def generic_visit(self, node): raise ValueError('Invalid Token')
def tokenize(source):
return visitor().visit(ast.parse(source, mode='eval'))
</code></pre>
<p>I am calling this function as and get the output div(n,5)-1:</p>
<pre><code> expression = 'n/5-1'
expression = tokenize(expression)
</code></pre>
<p>This however does not work for trigonometric functions like tan/radians
For eg:</p>
<pre><code>expression = 'tan(radians(top))'
</code></pre>
<p>Do I need to add visit_tan and visit_radians as well?</p>
|
<python><abstract-syntax-tree>
|
2024-05-17 10:54:30
| 1
| 329
|
user2695082
|
78,494,817
| 13,942,929
|
How to adjust my external custom library import in __init__.py?
|
<p>So I have a big folder called Geometry and I divided in CPP folder and Cython folder as follows.</p>
<pre><code>- Geometry
- CPP (Where all the cpp codes are)
- Cython
- src folder
- Geometry Package
- Point Package
- __init__.py
- libcustom.so
- Circle Package
- __init__.py
- libcustom.so
- __init__.py (Ignore this file)
- libcustom.so
- test folder
- setup.py
</code></pre>
<p>In <code>Point.__init__.py</code> and <code>Circle.__init__.py</code>, I have to add the following</p>
<pre><code>import ctypes
import os
libcustom = ctypes.CDLL(os.path.join(os.path.dirname(__file__), "libcustom.so"))
</code></pre>
<p>Now as you can see, in order for my libcustom to work on all packages including Point and Circle, I have to add the libcustom.so in both Point and Circle, which doesn't look so good.
Now I want to remove all libcustom.so from Point and Circle and only keep the one outside.</p>
<pre><code>- Geometry
- CPP (Where all the cpp codes are)
- Cython
- src folder
- Geometry Package
- Point Package
- __init__.py
- Circle Package
- __init__.py
- libcustom.so
- __init__.py (Ignore this file)
- test folder
- setup.py
</code></pre>
<p>How should I change my <code>Point.__init__.py</code> and <code>Circle.__init__.py</code> code so that it will point a reference to the <code>libcustom.so</code> outside?</p>
|
<python><cython><ctypes>
|
2024-05-17 09:43:12
| 1
| 3,779
|
Punreach Rany
|
78,494,790
| 76,701
|
Jax: Passing a constant argument into a scanned function
|
<p>I'm refactoring a project written in Jax.</p>
<p>There's a function, let's call it <code>foo</code>, that gets fed into <code>jax.lax.scan</code>. It has an argument <code>bar</code> that is currently part of the carry (i.e. the first argument, which is a tuple of different variables that gets passed ahead to the next call.) I noticed that the <code>bar</code> argument doesn't change throughout a single scan, i.e. the function unpacks it from the received carry and packs it into the returned carry without modification. I figured I better remove it from the carry, but I couldn't figure out how to do that.</p>
<p>At first I tried removing it from the carry, adding it as a keyword argument to the function and changing the <code>scan</code> call to use <code>partial(foo, bar=bar)</code>. However, I noticed it was slow. The <code>foo</code> function is jitted and I'm guessing that the way I added the argument makes it be jitted every single time instead of just once.</p>
<p>I then tried to feed <code>bar</code> into the <code>xs</code> argument, but I got <code>builtins.ValueError: scan got value with no leading axis to scan over: 0, 0, 0, 0, 0, 0, 0, 0.</code></p>
<p><strong>Is there a recommended way to do this?</strong></p>
|
<python><jax>
|
2024-05-17 09:38:19
| 0
| 89,497
|
Ram Rachum
|
78,494,758
| 874,380
|
Gunicorn with muiltple works but shared memory
|
<p>My Falcon + Gunicorn backend accesses a relational database holding data that forms a graph – a table with nodes and a table with edges. Since the graph is not large (a couple of hundred nodes and edges) I load the data once a startup into a Networkx <code>DiGraph</code> and keep it in memory. Thus, when I request the graph data</p>
<ul>
<li>no constant access to the database is required which sounds good w.r.t. performance</li>
<li>easy use of built-in methods provided by NetworkX to extract different types of subgraphs</li>
</ul>
<p><strong>Question 1:</strong> Is this good practice or are there any (principle) downsides to this approach (beyond my problem below)?</p>
<p>While the graph is not highly dynamic – no "normal" API endpoint edits the graph – it might grow over time in the database. I therefore created an additional endpoint <code>/graph/reload</code> that when called simply reloads the graph into memory (i.e., the NetworkX <code>DiGraph</code> object). So anytime there is indeed a change to the database, I call this endpoint.</p>
<p>In principle, this seems to work. However, the problem now is when I run the server using Gunicorn and multiple workers. When I call <code>/graph/reload</code> then this is done only for one worker instance; the others still use the old graph.</p>
<p><strong>Question 2:</strong> What is the best way to handle this?</p>
<p>Of course, I could always access the database for each request. I still would be curious if this can be done with keeping the graph in memory</p>
|
<python><networkx><gunicorn><falconframework>
|
2024-05-17 09:30:33
| 1
| 3,423
|
Christian
|
78,494,317
| 11,304,830
|
Tours and activities *prices* using Amadeus API
|
<p>I am trying to collect the price of tours and activities using the Python API with Amadeus. I read a couple of papers collecting the price of German package holidays using Amadeus Germany GmbH (which I assume to have the same API and data availability). However, despite searching for many examples, I wasn't able to retrieve the price of tours and activities. While for airfares (or hotels), I run the following script:</p>
<pre><code>import pandas as pd
from amadeus import Client
amadeus_api = 'MY_API'
amadeus_secret = 'SECRET_CODE'
amadeus = Client(client_id = amadeus_api,
client_secret = amadeus_secret)
flights = amadeus.shopping.flight_offers_search.get(originLocationCode = 'airport_departure', destinationLocationCode = 'airport_arrival',departureDate = '2024-07-01',
adults = 1).data
</code></pre>
<p>However, when it comes to tours and activities the only code I was able to find is the following (or similar variations):</p>
<pre><code>amadeus.shopping.activity('56777').get().result
</code></pre>
<p>However, the above code retrieves only some info on the activity. I would like to extract the price and be able to set the dates. Ideally, I would like to know whether the tour is bundled with air tickets.</p>
<p>Do you know whether it is possible to retrieve such a piece of info on Amadeus? If so, does anyone know how to do this?</p>
|
<python><amadeus>
|
2024-05-17 08:09:17
| 1
| 1,623
|
Rollo99
|
78,493,571
| 2,360,477
|
Pandas df to Apache iceberg table
|
<p>I'm trying to insert into a Iceberg table, but I'm getting issues due to the data types mismatching. I've pasted a part of my code. The error message I'm getting is :
"errorMessage": "Schema change detected: {'new_columns': {}, 'modified_columns': {'low': 'decimal(13,10)', 'close': 'decimal(13,10)', 'open': 'decimal(13,10).'.....</p>
<pre><code>import json
import awswrangler as wr
import pandas as pd
from decimal import Decimal, getcontext
getcontext().prec = 30
df['open'] = df['open'].apply(lambda x: Decimal(x).quantize(Decimal('0.0000000000')))
df['high'] = df['high'].apply(lambda x: Decimal(x).quantize(Decimal('0.0000000000')))
df['low'] = df['low'].apply(lambda x: Decimal(x).quantize(Decimal('0.0000000000')))
df['close'] = df['close'].apply(lambda x: Decimal(x).quantize(Decimal('0.0000000000')))
df['adjustedclose'] = df['adjustedclose'].apply(lambda x: Decimal(x).quantize(Decimal('0.0000000000')))
#print(df.columns)
print(df.dtypes)
#print(df.head(10))
wr.athena.to_iceberg(
df=df,
database='market_test',
table='price',
table_location='s3://blah',
temp_path='s3://blah-temp',
data_source = 'AwsDataCatalog'
)
</code></pre>
<p>Error message</p>
<pre><code>{
"errorMessage": "Schema change detected: {'new_columns': {}, 'modified_columns': {'open': 'decimal(13,10)', 'high': 'decimal(13,10)', 'adjustedclose': 'decimal(13,10)', 'close': 'decimal(13,10)', 'low': 'decimal(13,10)', 'volume': 'bigint'}, 'missing_columns': {}}",
"errorType": "InvalidArgumentValue",
"requestId": "[]",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 40, in lambda_handler\n wr.athena.to_iceberg(\n",
" File \"/opt/python/awswrangler/_config.py\", line 715, in wrapper\n return function(**args)\n",
" File \"/opt/python/awswrangler/_utils.py\", line 178, in inner\n return func(*args, **kwargs)\n",
" File \"/opt/python/awswrangler/athena/_write_iceberg.py\", line 434, in to_iceberg\n raise exceptions.InvalidArgumentValue(f\"Schema change detected: {schema_differences}\")\n"
]
}```
</code></pre>
|
<python><pandas><apache-iceberg>
|
2024-05-17 05:08:21
| 1
| 1,075
|
user172839
|
78,493,562
| 11,922,765
|
Python data filtering to remove outliers around a density plot
|
<p>Referring to the below plot, I would like to remove all the outliers outside the density region marked in black color oval shape. I can use simple horizontal filters, like, -4 < data < 4. But outliers still remain. I am looking for any technique that precisely captures the density samples but drops the outliers.</p>
<p><a href="https://i.sstatic.net/DEva024E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DEva024E.png" alt="enter image description here" /></a></p>
<p><strong>Sample data:</strong></p>
<pre><code>x = array([1243. , 1261. , 973. , 842. , 592. , 499. , 1088. , 739.5,
567.5, 536.5, 854. , 763. , 671. , 574. , 498.5, 510.5,
541.5, 544. , 565.5, 482. , 416. , 412.5, 440. , 540. ,
652. , 735. , 878. , 1030. , 1022. , 1105. , 1034. , 1064. ,
1089. , 1115. , 1145. , 1146. , 1111. , 1117. , 1140. , 1168. ,
845. , 1173. , 898. , 1091. , 591. , 570.5, 506. , 592.5,
682.5, 619.5, 663. , 593. , 470. , 810. , 694.5, 900. ,
965. , 954. , 771. , 608.5, 631. , 593. , 652. , 428. ,
486. , 445. , 395.5, 387.5, 383. , 390. , 408. , 420. ,
470. , 543.5, 686. , 550. , 588. , 556.5, 475.5, 606. ,
617. , 674. , 571. , 810. , 913. , 868. , 621.5, 417. ,
388. , 428. , 501. , 586.5, 668. , 739. , 914. , 829. ,
966. , 995. , 1008. , 961. ])
y = array([[-10.6, 0.4, 0.1, -0.1, -0.5, 0. ],
[-12.5, 1.5, 1.4, 0.9, 0.7, 0.7],
[ 4.5, 0.3, 0.2, 0. , 0.6, 0.2],
[ 4.6, -0.7, -0.8, -0.9, -0.7, -0.8],
[ 1.8, -1.3, -1.6, -1.8, -1.4, -1.5],
[ 10.4, -1.4, -1.5, -1.1, -1.2, -1.1],
[ 1. , -0.6, -0.5, -0.3, -0.2, -0.2],
[ 0. , 0.2, -0.1, 0.1, -0.1, -0.1],
[ -1.7, -1.1, -1. , -0.9, -0.8, -0.7],
[ 1.6, -1. , -1.3, -0.7, -1. , -0.8],
[ 0.5, 0. , 0. , 0.3, 0.1, 0.3],
[ -0.1, -0.3, -0.5, -0.2, -0.1, -0.1],
[ 0.8, -0.4, -0.3, -0.4, -0.5, -0.5],
[ -1.3, -0.8, -1. , -1. , -1.3, -1.1],
[ -0.1, -1.9, -2.2, -1.6, -1.7, -1.5],
[ -0.9, -1.3, -1.5, -1.9, -1.7, -2.1],
[ -0.5, -0.8, -0.9, -1.3, -1.4, -1.3],
[ -0.2, -0.6, -0.5, -0.8, -1.6, -0.9],
[ -0.8, -1.2, -1. , -0.6, -0.8, -0.9],
[ -1.2, -0.6, -1. , -0.4, -1.3, -0.4],
[ -1.1, -1. , -1.1, -1.2, -1. , -1.3],
[ -0.8, -0.9, -1. , -1. , -2.7, -1. ],
[ -1.2, -1.4, -1.4, -1.1, -1.6, -1.1],
[ -0.4, -0.6, -0.7, -0.5, 3.5, -0.6],
[ 0.4, 0.1, 0. , 0.1, 7.3, 0.1],
[ 0.2, -0.1, 0. , 0.5, 3.2, 0.6],
[ 0.3, 0.4, 0.2, 0.1, -16.7, 0.1],
[ 1.3, 1.1, 1.1, 1.4, -2.1, 1.3],
[ 1.2, 1.4, 1.3, 1.3, -1.7, 1.4],
[ 1.6, 1.2, 1.3, 1.5, 1.6, 1.6],
[ 0.8, 1.3, 1.3, 1.1, 1.1, 1.2],
[ 0.4, 1. , 1.1, 0.6, 0.8, 0.7],
[ 1. , 1.1, 1.3, 0.9, 1. , 1.1],
[ 0. , 0.3, 0.3, -0.2, -0.4, -0.2],
[ 0.4, 0.6, 0.7, 0.1, -0.1, 0.2],
[ 1.6, 1. , 0.9, 0.6, 0.8, 0.6],
[ 0.3, 0.6, 0.6, 0.3, 0.4, 0.5],
[ 0.2, -0.6, 0. , 0.2, 0.1, 0.2],
[ -0.3, 0.6, 0.2, -0.1, -0.2, -0.2],
[ 0.4, 0.5, 0.6, 0.2, 0.2, 0.3],
[ -0.1, 0.1, 0.1, -0.2, 0. , -0.2],
[ -0.3, -0.6, -0.5, -0.3, -0.4, -0.2],
[ 0.2, 0.1, 0.3, 0.1, 0.1, 0. ],
[ -0.3, -0.5, -0.5, -0.7, -0.7, -0.6],
[ -1.1, -0.8, -0.9, -0.8, -1. , -0.9],
[ -2.9, -1.9, -2.2, -2.3, -2.3, -2.4],
[ -3. , -2.4, -2.5, -2.2, -1.9, -2.3],
[ -0.4, -1.5, -1.4, -0.8, -0.6, -0.9],
[ 0.4, 0.1, 0. , 0.4, 0. , 0.4],
[ -0.1, -0.8, -0.7, 0. , -0.1, -0.1],
[ -0.3, -0.6, -0.3, -0.2, -0.2, -0.2],
[ 0.4, 0.4, 0.2, -0.1, -0.1, -0.1],
[ -1.9, -1.6, -1.8, -1.7, -1.8, -1.8],
[ -0.5, -0.8, -0.8, -0.6, -0.1, -0.6],
[ 0.8, 0.4, 0.5, 0.8, 0.7, 0.7],
[ 1.1, 1. , 1. , 0.7, 0.9, 0.8],
[ 0.7, 0.8, 0.9, 0.7, 0.6, 0.7],
[ 1. , 1.1, 1. , 0.8, 0.8, 0.8],
[ 0.2, 0.5, 0.4, 0.3, 0.1, 0.3],
[ -0.3, -1.2, -1. , -0.7, -0.5, -0.8],
[ -0.4, -0.5, -0.4, -0.2, -0.4, -0.2],
[ 0. , -0.5, -0.2, 0.3, 0.1, 0.2],
[ 0.2, 0. , 0.1, 0.1, -0.1, 0. ],
[ -1.1, -0.6, -0.8, -0.7, -0.6, -0.7],
[ -0.8, -0.9, -0.9, -0.6, -0.7, -0.6],
[ -0.7, -0.4, -0.6, -0.5, -0.6, -0.4],
[ -1.6, -1.2, -1.4, -1.1, -1.2, -1.3],
[ -0.5, -1.6, -1.5, -0.7, -0.7, -0.7],
[ -1. , -1.2, -1.3, -0.6, -0.9, -0.8],
[ -0.7, -0.4, -0.4, -0.5, -0.7, -0.5],
[ -0.1, -0.2, -0.3, 0. , -0.2, -0.1],
[ -0.5, -0.4, -0.4, -0.3, -0.3, -0.2],
[ -0.5, -0.3, -0.5, -0.3, -0.4, -0.4],
[ 0.2, 0. , 0. , 0.1, 0. , 0.1],
[ 0.9, 0.7, 0.8, 0.5, 0.6, 0.6],
[ 0.5, 0.6, 0.5, 0.6, 0.5, 0.5],
[ -0.1, 0.2, 0.2, 0.4, 0.4, 0.4],
[ 0. , 0.2, 0.1, 0.2, 0.2, 0.2],
[ -0.4, -0.2, -0.4, -0.2, -0.3, -0.2],
[ -0.1, -0.1, -0.1, -0.3, -0.2, -0.2],
[ 0.1, 0.4, 0.3, 0.1, 0.1, 0.1],
[ 0. , 0. , -0.1, 0.2, 0.2, 0.3],
[ 0.7, 0.8, 0.9, 0.6, 0.6, 0.5],
[ 0.4, 0.2, 0.4, -0.1, 0. , 0.1],
[ 1.7, 1.4, 1.4, 1.2, 1.3, 1.2],
[ 0.9, 1. , 1. , 0.8, 1. , 0.8],
[ 0.3, 0.5, 0.6, 0.4, 0.3, 0.3],
[ -1.4, -1. , -1.2, -0.9, -0.7, -0.8],
[ -1. , -1. , -1. , -1. , -1.2, -1.1],
[ -0.6, -0.7, -0.8, -0.9, -0.9, -0.8],
[ -0.5, -0.8, -0.7, -0.3, -0.4, -0.4],
[ 0. , -0.2, -0.1, -0.3, -0.5, -0.3],
[ -0.3, 0.2, 0. , 0.1, 0. , 0. ],
[ 0.8, 0.3, 0.4, 0.4, 0.5, 0.5],
[ 1.2, 1. , 1.2, 0.8, 0.8, 0.6],
[ 1.7, 1.3, 1.4, 1.8, 1.8, 1.7],
[ 1.2, 1.1, 1.2, 1.1, 1.3, 1.3],
[ 1.5, 1.6, 1.6, 1.4, 1.7, 1.4],
[ 1.7, 1.8, 2. , 1.5, 1.8, 1.5],
[ 0.6, 0.8, 1. , 0.8, 1.3, 1. ]])
</code></pre>
|
<python><dataframe><scikit-learn><cluster-analysis><outliers>
|
2024-05-17 05:05:26
| 1
| 4,702
|
Mainland
|
78,493,544
| 6,123,824
|
How to get single colorbar with shared x- and y-axis for seaborn heatmaps in subplot?
|
<p>I want to plot multiple confusion matrices in a single plot with a single colorbar and with a shared x- and y-axis. Here is my code I have tried so far</p>
<pre><code>#Calculate the onfusion matrices
predicted_mod1 = df_binary["Model1"]
actual_class = df_binary["Observed"]
out_df_mod1 = pd.DataFrame(np.vstack([predicted_mod1, actual_class]).T,columns=['predicted_class','actual_class'])
CF_mod1 = pd.crosstab(out_df_mod1['actual_class'], out_df_mod1['predicted_class'], rownames=['Actual'], colnames=['Predicted'])
predicted_mod2 = df_binary["Model2"]
out_df_mod2 = pd.DataFrame(np.vstack([predicted_mod2, actual_class]).T,columns=['predicted_class','actual_class'])
CF_mod2 = pd.crosstab(out_df_mod2['actual_class'], out_df_mod2['predicted_class'], rownames=['Actual'], colnames=['Predicted'])
predicted_mod4 = df_binary["Model4"]
out_df_mod4 = pd.DataFrame(np.vstack([predicted_mod4, actual_class]).T,columns=['predicted_class','actual_class'])
CF_mod4 = pd.crosstab(out_df_mod4['actual_class'], out_df_mod4['predicted_class'], rownames=['Actual'], colnames=['Predicted'])
predicted_mod5 = df_binary["Model5"]
out_df_mod5 = pd.DataFrame(np.vstack([predicted_mod5, actual_class]).T,columns=['predicted_class','actual_class'])
CF_mod5 = pd.crosstab(out_df_mod5['actual_class'], out_df_mod5['predicted_class'], rownames=['Actual'], colnames=['Predicted'])
predicted_mod6 = df_binary["Model6"]
out_df_mod6 = pd.DataFrame(np.vstack([predicted_mod6, actual_class]).T,columns=['predicted_class','actual_class'])
CF_mod6 = pd.crosstab(out_df_mod6['actual_class'], out_df_mod6['predicted_class'], rownames=['Actual'], colnames=['Predicted'])
</code></pre>
<p>Now I have plotted these matrices using the following code</p>
<pre><code>fig = plt.figure(figsize=(6, 3), dpi=300)
fig.subplots_adjust(hspace=0.8, wspace=0.6)
ax = fig.add_subplot(2, 3, 1)
sns.heatmap(CF_mod1, cmap='Blues', annot=True, fmt='d')
ax = fig.add_subplot(2, 3, 2)
sns.heatmap(CF_mod2, cmap='Blues', annot=True, fmt='d')
ax = fig.add_subplot(2, 3, 3)
sns.heatmap(CF_mod3, cmap='Blues', annot=True, fmt='d')
ax = fig.add_subplot(2, 3, 4)
sns.heatmap(CF_mod4, cmap='Blues', annot=True, fmt='d')
ax = fig.add_subplot(2, 3, 5)
sns.heatmap(CF_mod5, cmap='Blues', annot=True, fmt='d')
ax = fig.add_subplot(2, 3, 6)
sns.heatmap(CF_mod6, cmap='Blues', annot=True, fmt='d')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/0kzHjYEC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0kzHjYEC.png" alt="enter image description here" /></a>
My expected output is something like the following
<a href="https://i.sstatic.net/o6lwVaA4.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o6lwVaA4.jpg" alt="enter image description here" /></a>
Now how can I have only one single colorbar with a shared x- and y-axis?</p>
<p><strong>Data</strong></p>
<pre><code>Model1,Model2,Model3,Model4,Model5,Model6,Observed
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
No,No,No,No,No,No,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,No,Yes,No,Yes,Yes
No,Yes,No,No,No,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,No,No,No,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,No,Yes,Yes,Yes,No,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,No,Yes,Yes,Yes,No,Yes
Yes,No,Yes,Yes,Yes,No,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
Yes,Yes,Yes,Yes,Yes,Yes,Yes
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
Yes,Yes,Yes,Yes,Yes,Yes,No
No,No,No,No,No,No,No
No,Yes,No,No,No,Yes,No
No,Yes,No,No,No,Yes,No
Yes,Yes,Yes,Yes,Yes,Yes,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,Yes,No,Yes,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
Yes,Yes,Yes,Yes,Yes,Yes,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
No,No,No,No,No,No,No
</code></pre>
|
<python><matplotlib><seaborn><pivot-table><heatmap>
|
2024-05-17 04:56:59
| 1
| 8,294
|
UseR10085
|
78,493,405
| 1,779,091
|
Is running python inside an activated venv similar to using venv\python.exe?
|
<p>I'm trying to schedule a python script to run in an venv via windows task scheduler.</p>
<p>In a normal command prompt I would have done:</p>
<pre><code>Cd\foldername
Venv\scripts\activate
Python filename.py
</code></pre>
<p>However in task scheduler is the following doing the same?</p>
<pre><code>C:\foldername\venv\python.exe filename.py
</code></pre>
|
<python>
|
2024-05-17 03:58:07
| 0
| 9,866
|
variable
|
78,493,334
| 17,867,413
|
Time data '2024-05' does not match format '%Y-%m-%dT%H:%M:%S' (Protobuff)
|
<p>I am creating a protobuffer from a json and json strcture loooks like this:</p>
<pre class="lang-json prettyprint-override"><code> {
"answerUpdateRequest": {
"entity": {
"type": "ORGANIZATION",
"id": "UU1234321234ID"
},
"answers": [
{
"key": "legal_company_name",
"source": {
"type": "DOCUMENT",
"id": "3ea20440-83fb-43c0-b409-1dd8f68e73ec | DocumentType.application"
},
"provided_at": "2024-05-02T15:54:15.941988",
"received_at": "2024-05-02T15:54:15.945350",
"type": "TEXT",
"value": {
"text": "Ciccone Law, LLC"
}
},
{
"key": "company_website_ind",
"source": {
"type": "DOCUMENT",
"id": "3ea20440-83fb-43c0-b409-1dd8f68e73ec | DocumentType.application"
},
"provided_at": "2024-05-02T15:54:15.941988",
"received_at": "2024-05-02T15:54:15.945365",
"type": "BOOLEAN",
"value": {
"text": "Yes"
}
},
{
"key": "company_webiste",
"source": {
"type": "DOCUMENT",
"id": "3ea20440-83fb-43c0-b409-1dd8f68e73ec | DocumentType.application"
},
"provided_at": "2024-05-02T15:54:15.941988",
"received_at": "2024-05-02T15:54:15.945388",
"type": "TEXT",
"value": {
"text": "www.Justice-Insight.com"
}
}
]
},
"documentKey": "3ea20440-83fb-43c0-b409-1dd8f68e73ec",
"applicationId": "1343245432",
"activityId": "1111"
}
</code></pre>
<pre class="lang-py prettyprint-override"><code> def create_answer_update_request(json_data):
data = json_data
print("Data is "+str(data))
answer_update_request = events_pb2.AnswerUpdateRequest()
answer_update_request.entity.type = model_pb2.Entity.Type.Value(data["answerUpdateRequest"]["entity"]["type"])
answer_update_request.entity.id = data["answerUpdateRequest"]["entity"]["id"]
# Convert answers to protobuf format
for answer_data in data["answerUpdateRequest"]["answers"]:
answer = answer_update_request.answers.add()
answer.key = answer_data["key"]
answer.source.type = answer_data["source"]["type"]
answer.source.id = answer_data["source"]["id"]
# Handle provided_at and received_at fields
provided_at_str = answer_data["provided_at"].split('.')[0]
print(provided_at_str)
provided_at_datetime = datetime.strptime(provided_at_str, "%Y-%m-%dT%H:%M:%S")
provided_at = Timestamp()
provided_at.FromJsonString(provided_at_datetime.isoformat())
print("##########provided_at############")
answer.provided_at.CopyFrom(provided_at_datetime.isoformat())
received_at_str = answer_data["received_at"]
if len(received_at_str) == 7: # Only contains 'YYYY-MM'
received_at_str += "-01T00:00:00" # Append default day and time
received_at = Timestamp()
received_at.FromJsonString(received_at_str)
answer.received_at.CopyFrom(received_at)
# Ensure the type value is valid and convert it
if answer_data["type"] not in events_pb2.Answer.Type.keys():
raise ValueError(f"Invalid answer type: {answer_data['type']}")
answer.type = events_pb2.Answer.Type.Value(answer_data["type"])
# Handle different value types based on your schema
if answer_data["type"] == "TEXT":
answer.value.text = answer_data["value"]["text"]
elif answer_data["type"] == "BOOLEAN":
answer.value.boolean = answer_data["value"]["boolean"]
# Add handling for other types as needed
# Serialize message
serialized_data = answer_update_request.SerializeToString()
return serialized_data
</code></pre>
<p>I have tried to print <code>provided_at_str</code> and it is printing <code>"2024-05-16T22:20:28"</code></p>
<pre class="lang-none prettyprint-override"><code>Error:
tt, fraction, gmtoff_fraction = _strptime(data_string, format)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/_strptime.py", line 349, in _strptime
raise ValueError("time data %r does not match format %r" %
ValueError: time data '2024-05' does not match format '%Y-%m-%dT%H:%M:%S'
</code></pre>
|
<python><python-3.x><protocol-buffers><protobuf-net>
|
2024-05-17 03:24:37
| 2
| 1,253
|
Xi12
|
78,493,313
| 3,179,698
|
When install jupyterhub, many other files installed
|
<p>Hi I just installed jupyterhub using <code>conda install jupyterhub</code></p>
<p>However It not work until I installed notebook.</p>
<p>I noticed when I install jupyterhub, it has some other component installed, like jupyterhub-singleuser, jupyter-kernel,jupyter-lab,jupyter-notebook,jupyter-server,etc.</p>
<p>My question is:
1 what are those components used for? I just want a jupyterhub service.
2 since it has already installed jupyter-lab, jupyter-notebook, why I still need to install notebook package to complete the pieces.</p>
<p>I mean it provide a lot of service but didn't provide the essential needed package.</p>
|
<python><jupyter-notebook><jupyterhub>
|
2024-05-17 03:17:11
| 1
| 1,504
|
cloudscomputes
|
78,493,186
| 2,153,235
|
Robust way to fix Errno 13 from reopening files on Windows?
|
<p>I am following a <a href="https://www.geeksforgeeks.org/working-with-wav-files-in-python-using-pydub" rel="nofollow noreferrer">tutorial on PyDub</a>. The very first exercise is to play a WAV file. Here is my adapted code. I'm not too worried about the warning at this point, as I am trying to resolve the <code>[Errno 13]</code> resulting from the final command <code>play(wav_file)</code>:</p>
<pre><code>>>> import os
>>> os.chdir('C:/cygwin64/home/User.Name/Some.Path')
>>> from pydub import AudioSegment
>>> from pydub.playback import play
>>> wav_file = AudioSegment.from_file(file = "SomeFile.wav", format = "wav")
C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
>>> play(wav_file)
C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pydub\utils.py:184: RuntimeWarning: Couldn't find ffplay or avplay - defaulting to ffplay, but may not work
warn("Couldn't find ffplay or avplay - defaulting to ffplay, but may not work", RuntimeWarning)
Traceback (most recent call last):
Cell In[5], line 1
play(wav_file)
File ~\anaconda3\envs\py39\lib\site-packages\pydub\playback.py:71 in play
_play_with_ffplay(audio_segment)
File ~\anaconda3\envs\py39\lib\site-packages\pydub\playback.py:15 in _play_with_ffplay
seg.export(f.name, "wav")
File ~\anaconda3\envs\py39\lib\site-packages\pydub\audio_segment.py:867 in export
out_f, _ = _fd_or_path_or_tempfile(out_f, 'wb+')
File ~\anaconda3\envs\py39\lib\site-packages\pydub\utils.py:60 in _fd_or_path_or_tempfile
fd = open(fd, mode=mode)
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\User.Name\\AppData\\Local\\Temp\\tmpryjpscsv.wav'
</code></pre>
<p>I confirmed that my user account can write to files in <code>Temp</code> folder. Much googling reveals the likely cause of the error to be that on Windows, you cannot repeatedly open a file without closing it. The vary last frame in the stack trace is line 60 of <code>utils.py</code>, where the risk of multiple openings seems clear:</p>
<pre><code>/c/Users/User.Name/anaconda3/envs/py39/lib/site-packages/pydub/utils.py
-----------------------------------------------------------------------
53 def _fd_or_path_or_tempfile(fd, mode='w+b', tempfile=True):
54 close_fd = False
55 if fd is None and tempfile:
56 fd = TemporaryFile(mode=mode)
57 close_fd = True
58
59 if isinstance(fd, basestring):
60 fd = open(fd, mode=mode)
61 close_fd = True
62
63 try:
64 if isinstance(fd, os.PathLike):
65 fd = open(fd, mode=mode)
66 close_fd = True
67 except AttributeError:
68 # module os has no attribute PathLike, so we're on python < 3.6.
69 # The protocol we're trying to support doesn't exist, so just pass.
70 pass
71
72 return fd, close_fd
</code></pre>
<p>I noticed that whenever <code>fd</code> is assigned an open file, the <code>close_fd</code> flag is set to <code>True</code>. I can use this to avoid multiple reopenings. Lines 60 and 65 represent potential sites of <code>[Errno 13]</code>. Would it be robust to AND the immediately preceding <code>if</code> conditions as follows?</p>
<pre><code>59 if not close_fd and isinstance(fd, basestring):
64 if not close_fd and isinstance(fd, os.PathLike):
</code></pre>
<p>The 2nd of the two conditional openings refers to <code>os.PathLike</code>. While I can find webpages on that, I can't find a simple explanation. So I'm not sure what the enclosing <code>try</code> block is meant to do. I hope that understanding it is not necessary if the logic at the microscopic level is robust enough for general situations, i.e., <code>if not close_fd and ...</code>.</p>
<p><strong>Afternote:</strong> I took the plunge and modified lines 59 and 64 as per above. No luck, no change in behaviour, and I'm not sure why. The logic should have prevented repeated openings. I don't want to use the solution of closing the file before each opening it because temp files created by <code>TemporaryFile()</code> apparently disappears when closed. Furthermore, <code>fd</code> is a mere string before opening on line 60 and it is <code>None</code> before opening on line 56, so I can't close it anyway. Finally, the use of the <code>close_fd</code> flag clearly indicates that the author wants to stave off file closing until later.</p>
|
<python><pydub>
|
2024-05-17 02:14:17
| 1
| 1,265
|
user2153235
|
78,493,059
| 5,284,054
|
Python tkinter change OptionMenu during runtime
|
<p>My second dropdown list is dependent on my first dropdown list. I can make it work by creating a second OptionMenu that overwrites the first OptionMenu. The following code works.</p>
<p>However, I'm looking for way that will use the same OptionMenu, but will substitute in a new list for the <code>second_choice_proxylist</code> in the <code>second_list</code>.</p>
<pre><code>from tkinter import *
from tkinter import ttk
root = Tk()
root.geometry('400x275')
def option_selected(event):
if event == option_list[0]:
option_list2 = first_sublist
if event == option_list[1]:
option_list2 = second_sublist
second_list = ttk.OptionMenu(root, second_choice, 'foo or bar', *option_list2)
second_list.grid(column=2, row=2, sticky=(W, E))
choice_var = StringVar(root)
choice_var.set("Option 1")
option_list = ["Option 1", "Option 2"]
first_sublist = ['foo1', 'foo2']
second_sublist = ['bar1', 'bar2']
first_list = ttk.OptionMenu(root, choice_var, 'Make a Choice', *option_list, command=option_selected)
first_list.grid(column=2, row=1)
second_choice = StringVar()
second_choice_proxylist = ['Make First Selection First']
second_list = ttk.OptionMenu(root, second_choice, 'Make a Choice', *second_choice_proxylist)
second_list.grid(column=2, row=2)
first_label = ttk.Label(root, text = 'Choose Here First')
first_label.grid(column=1, row=1)
second_label = ttk.Label(root, text = 'Choose Here Second')
second_label.grid(column=1, row=2)
root.mainloop()
</code></pre>
|
<python><tkinter><tkinter.optionmenu>
|
2024-05-17 01:15:36
| 1
| 900
|
David Collins
|
78,492,801
| 8,474,432
|
How to implement MC dropout in keras?
|
<p>I'm very new to ML and especially more sophisticated techniques like dropout. I have a simple 1D CNN (regression problem), and I would like to capture uncertainties in the predictions for each output pixel. I would like to take advantage of MC dropout for this. This is the model where I've now added a dropout layer after each convolutional layer (not present earlier):</p>
<pre><code># Define the model architecture
model = keras.Sequential()
callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, start_from_epoch=20, min_delta=0.00001,
verbose=True, restore_best_weights=True)
# Convolutional layers
model.add(layers.Conv1D(filters=64, kernel_size=6, activation='relu', input_shape=(input_size,1), name='conv1d_1'))
model.add(layers.MaxPooling1D(pool_size=2, name='max_pooling1d_1'))
model.add(layers.Dropout(0.2))
model.add(layers.Conv1D(filters=48, kernel_size=6, activation='relu', name='conv1d_3'))
model.add(layers.MaxPooling1D(pool_size=2, name='max_pooling1d_3'))
model.add(layers.Dropout(0.2))
model.add(layers.Conv1D(filters=16, kernel_size=6, activation='relu', name='conv1d_4'))
model.add(layers.MaxPooling1D(pool_size=2, name='max_pooling1d_4'))
model.add(layers.Dropout(0.2))
# Flatten the output for dense layers
model.add(layers.Flatten(name='flatten_1'))
# Dense layers
model.add(layers.Dense(64, activation='relu', name='dense_2'))
model.add(layers.Dense(128, activation='relu', name='dense_3'))
# Output layer
model.add(layers.Dense(output_size, activation='linear', name='dense_4'))
# Compile the model
model.compile(optimizer='adam', loss='mse', metrics=['mean_absolute_percentage_error'])
</code></pre>
<p>To train the model and quantify uncertainties:</p>
<pre><code># Train (shape of train and test data are (N_samples, 500))
history = model.fit(train_data[..., np.newaxis], train_truth, batch_size=32, epochs=50, validation_data=(val_data[..., np.newaxis], val_truth), callbacks=[callback])
# Get predictions and uncertainties (only checking for first 10 samples)
y = np.stack([model(test_data[:10, ..., np.newaxis], training=True) for sample in range(100)])
mu = np.mean(y, axis=0)
sigma = np.std(y, axis=0)
</code></pre>
<p>However, <code>sigma</code> is basically zero for each prediction at each pixel, even though the error on the testing set predictions are over 10%. I also noticed that if I comment out the dropout layers and re-train and re-evaluate, the resulting <code>sigma</code> values are also zero (but I think this would be expected in this case).</p>
<p>I would think that including the dropout layers should result in non-negligible uncertainties, especially since the performance is fairly poor already! What am I doing wrong?</p>
|
<python><tensorflow><keras><dropout>
|
2024-05-16 23:12:05
| 0
| 1,216
|
curious_cosmo
|
78,492,749
| 7,547,047
|
Update python dictionary value as a list of strings enclosed within double quotes
|
<p>I am trying to call a post request using python where the body is a dictionary in which one parameter has to change based on the values of a list.</p>
<p>The list is as below:</p>
<pre><code>my_list = ['a14mnas','ty6798h']
</code></pre>
<p>The body for the api call is as below:</p>
<pre><code>api_body = {
"Page": 1,
"Pages": 3,
"Ids": []
}
</code></pre>
<p>I am trying to update the body as:</p>
<pre><code>api_body["Ids"] = my_list[0]
</code></pre>
<p>which will update the body to</p>
<pre><code>"Ids": ['a14mnas']
</code></pre>
<p>but I need the value in the list to be enclosed with double quotes as below:</p>
<pre><code>"Ids": ["a14mnas"]
</code></pre>
<p>The api call takes only list with value enclosed with double quotes. How do I make the value to be in double quotes.</p>
<p>Tried adding <code>''' +</code> to the string but it gives a single and double quote</p>
<pre><code>''' + my_list[0] + '''
</code></pre>
<p>If I use this below code, I am able to get the post request to work just fine:</p>
<pre><code>api_body = {
"Page": 1,
"Pages": 3,
"Ids": ["a14mnas"]
}
myrequest = requests.post(api_url, headers = headers, data = json.dumps(api_body))
</code></pre>
<p>Where headers have the authentication token.While looping through the list to update the dict (key: Ids), how do I ensure that it takes the double quote?</p>
|
<python><json><list><dictionary><python-requests>
|
2024-05-16 22:52:28
| 0
| 397
|
msksantosh
|
78,492,715
| 9,669,142
|
Call function in Python script in a Gitlab repositry from another script in another Gitlab repository
|
<p>I have two Gitlab repositories that are both private and on an internal server. The first repo (let's call this Repo A) contains a Python script with some general functions. The second repo (Repo B) contains multiple Python scripts that are developed by the users. Calling a function in another Python script can easily be done by importing the script and then call the function, but I don't know if this is possible when the scripts are in two different repo's.</p>
<p>Is there a way to call a function in the Python script in Repo A, using a Python script in Repo B?</p>
|
<python><gitlab>
|
2024-05-16 22:34:46
| 1
| 567
|
Fish1996
|
78,492,708
| 7,662,164
|
JAX custom_jvp with 'None' output leads to TypeError
|
<p>I try to define a function whose jvp is only defined for selected output(s). Below is a simple example:</p>
<pre><code>from jax import custom_jvp, jacobian
@custom_jvp
def func(x, y):
return x+y, x*y
@func.defjvp
def func_jvp(primals, tangents):
x, y = primals
t0, t1 = tangents
primals_out = func(x, y)
tangents_out = (t0+t1, None)
return primals_out, tangents_out
if __name__ == "__main__":
x = 1.
y = 2.
print(jacobian(func)(x, y))
</code></pre>
<p>The error says:</p>
<pre><code>TypeError: Custom JVP rule func_jvp for function func must produce primal and tangent outputs with equal container (pytree) structures, but got PyTreeDef((*, *)) and PyTreeDef((*, None)) respectively.
</code></pre>
<p>Is there a workaround for this case?</p>
|
<python><typeerror><jax><automatic-differentiation>
|
2024-05-16 22:30:43
| 1
| 335
|
Jingyang Wang
|
78,492,549
| 1,030,287
|
Pandas resample bi-monthly on even months
|
<p>I've got the following daily data:</p>
<pre><code>1983-03-30 0.001224
1983-03-31 -0.003741
1983-04-04 0.005121
1983-04-05 0.009171
1983-04-06 0.006395
1983-04-07 0.009030
1983-04-08 0.006961
1983-04-11 -0.003950
1983-04-12 0.018837
1983-04-13 -0.000324
...
</code></pre>
<p>I'd like to resample this using bimonthly frequency. So I do:</p>
<pre><code>s.resample('2ME').agg('last') # Use '2M' for older versions of Pandas
</code></pre>
<p>This produces the following:</p>
<pre><code>1983-03-31 -0.003741
1983-05-31 0.001987
1983-07-31 0.005657
1983-09-30 -0.007843
1983-11-30 -0.005444
1984-01-31 0.003011
1984-03-31 -0.000324
1984-05-31 0.000649
1984-07-31 -0.001447
1984-09-30 0.002705
</code></pre>
<p>However, I'd like to group (3,4), (5,6), (7,8) etc. so I'd like a series where the timestamps are:</p>
<pre><code>1983-04-30
1983-06-31
1983-08-31
</code></pre>
<p>To reproduce this:</p>
<pre><code>index = pd.date_range(start='1983-03-28', end='1984-01-01', freq='B')
s = pd.Series(data=np.random.randn(len(index)), index=index)
</code></pre>
<p>How can I tell resample to group on even months? Or even better, to always take the first two when grouping rather than the first 1 as in the case above - it has grouped March by itself then April and May together rather than grouping March and April together.</p>
|
<python><pandas>
|
2024-05-16 21:33:02
| 2
| 12,343
|
s5s
|
78,492,471
| 15,163,656
|
Openvpn WARNING: Failed running command (--auth-user-pass-verify): could not execute external program with python/bash auth script
|
<p>I am trying to execute auth flow with openvpn auth-pass-verify options.
sample config from openvpn.conf:</p>
<pre><code>auth-user-pass-verify /etc/openvpn/auth2.sh via-file
verify-client-cert require
script-security 3
</code></pre>
<p>And the scripts I am trying to use</p>
<pre class="lang-bash prettyprint-override"><code>#!/usr/bin/bash
PATH=$PATH:/usr/local/bin
set -e
env
auth_usr=$(head -1 $1)
auth_passwd=$(tail -1 $1)
if [ $common_name = $auth_usr ]; then
result=curl -v -X GET -H "Content-type: application/json" -d '{"username"="${auth_usr}"&"password"="${auth_passwd}"}' http://openvpn-ui:8080/auth
echo $result
if [ $result = "Authorized" ]; then
echo "Authorization succeeded"
exit 0
else
echo "Authorization failed"
exit 1
fi
else
echo "Authorization failed"
exit 1
fi
</code></pre>
<p>py:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/python3
import sys
import sqlite3
DB_FILE = '/etc/openvpn/db/openvpn-ui.db'
def main():
with open(sys.argv[1], 'r') as tmpfile:
username = tmpfile.readline().rstrip('\n')
password = tmpfile.readline().rstrip('\n')
creds = get_password(username)
if not creds:
print(f'>> user {username} not defined.')
sys.exit(1)
if password != creds[0][1]:
print(f'>> Incorrect password provided by user {username}.')
sys.exit(1)
sys.exit(0)
def get_password(username):
db = sqlite3.connect(DB_FILE)
cursor = db.cursor()
cursor.execute('''select username, password from user where username=?''', (username, ))
creds = cursor.fetchall()
db.close()
return creds
if __name__ == '__main__':
main()
</code></pre>
<p>In each one I have the same error:
<code>WARNING: Failed running command (--auth-user-pass-verify): could not execute external program</code></p>
<p>Permissions on script <code>-rwxr-xr-x 1 root root 493 May 16 20:52 auth2.sh</code></p>
<p>I have tried changing permissions already. openvpn running from user nobody.
Any suggestions are welcome.</p>
|
<python><linux><bash><openvpn>
|
2024-05-16 21:04:27
| 0
| 357
|
DISCO
|
78,492,373
| 9,095,840
|
Are there any modern image formats that use plain Huffman coding?
|
<p>I have some images that are very noisy, but the values are all in a narrow range of near-zero uint8 values. I think Huffman coding might be optimal for compressing them, since there is a lot of bit-wise redundancy but not a lot of sequence redundancy. However I can't find any image formats that use the plain bit-wise algorithm without encoding sequences (like PNG or Deflate do). The closest I found was an obsolete Unix command line tool called "pack". Is there an image format that uses plain old Huffman compression?</p>
<p>Bonus points if there's a Python encoder for it.</p>
<p><a href="https://i.sstatic.net/i9sf2yj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i9sf2yj8.png" alt="noise image and histogram" /></a></p>
|
<python><image><compression><huffman-code><image-compression>
|
2024-05-16 20:39:12
| 1
| 1,824
|
markemus
|
78,492,310
| 5,284,054
|
Python tkinter cloase first window while opening second window
|
<p>I'm trying to close the first window as the second window opens. Both windows close, or the first window closes and the second window never opens.</p>
<p>This question has a similar problem but was solved by addressing the imported libraries: <a href="https://stackoverflow.com/questions/74816578/tkinter-is-opening-a-second-windows-when-the-first-one-is-closing">Tkinter is opening a second windows when the first one is closing</a></p>
<p>Here's my code, which was taken from here <a href="https://www.pythontutorial.net/tkinter/tkinter-toplevel/" rel="nofollow noreferrer">https://www.pythontutorial.net/tkinter/tkinter-toplevel/</a></p>
<pre><code>import tkinter as tk
from tkinter import ttk
class Window(tk.Toplevel):
def __init__(self, parent):
super().__init__(parent)
self.geometry('300x100')
self.title('Toplevel Window')
ttk.Button(self,
text='Close',
command=self.destroy).pack(expand=True)
class App(tk.Tk):
def __init__(self):
super().__init__()
self.geometry('300x200')
self.title('Main Window')
# place a button on the root window
ttk.Button(self,
text='Open a window',
command=self.open_window).pack(expand=True)
def open_window(self):
window = Window(self)
window.grab_set()
self.destroy()
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
<p>I added in <code>self.destroy()</code> in the function <code>def open_window(self)</code>. But it doesn't close the window created by this class: <code>class App(tk.Tk)</code>.</p>
|
<python><class><tkinter>
|
2024-05-16 20:22:06
| 1
| 900
|
David Collins
|
78,492,179
| 5,284,054
|
Python tkinter class multiple windows
|
<p>Using tkinter, I'm trying to open one window from another window and doing so by creating the windows in a class.</p>
<p>This question talks about tkinter and class, but not multiple windows: <a href="https://stackoverflow.com/questions/51867579/python-tkinter-with-classes">Python Tkinter with classes</a></p>
<p>This question addresses multiple windows but it's creating an instance of the class: <a href="https://stackoverflow.com/questions/61022533/python-tkinter-multiple-windows">Python tkinter multiple windows</a>. I don't want to create an instance of the class because tkdoc.com has the <code>root = Tk()</code> being passed into the class rather than creating an instance.</p>
<p>So, I have this example from <a href="https://www.pythontutorial.net/tkinter/tkinter-toplevel/" rel="nofollow noreferrer">https://www.pythontutorial.net/tkinter/tkinter-toplevel/</a> which does what I want but it creates a subclass of <code>tk.TK</code> rather than passing in <code>root</code>. I'm trying to adapt this example to pass in <code>root</code> because that's what the official docs do.</p>
<p>Here's the example of one window opening up another window that works, but it creates a subclass of <code>tk.Tk</code>:</p>
<pre><code>import tkinter as tk
from tkinter import ttk
class Window(tk.Toplevel):
def __init__(self, parent):
super().__init__(parent)
self.geometry('300x100')
self.title('Toplevel Window')
ttk.Button(self,
text='Close',
command=self.destroy).pack(expand=True)
class App(tk.Tk):
def __init__(self):
super().__init__()
self.geometry('300x200')
self.title('Main Window')
# place a button on the root window
ttk.Button(self,
text='Open a window',
command=self.open_window).pack(expand=True)
def open_window(self):
window = Window(self)
window.grab_set()
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
<p>Here's my adaptation:</p>
<pre><code>from tkinter import *
from tkinter import ttk
class Window(Toplevel):
def __init__(self, parent):
super().__init__(parent)
self.geometry('300x100')
self.title('Toplevel Window')
ttk.Button(self,
text='Close',
command=self.destroy).pack(expand=True)
class App():
def __init__(self, root):
super().__init__()
root.geometry('300x200')
root.title('Main Window')
# place a button on the root window
ttk.Button(root,
text='Open a window',
command=self.open_window).pack(expand=True)
def open_window(self):
window = Window(self)
window.grab_set()
if __name__ == "__main__":
root = Tk()
App(root)
root.mainloop()
</code></pre>
<p>The first window opens fine (<code>class App()</code>). I'm getting an <code>AttributeError</code> that occurs when attempting to open the second window (<code>class Window(Toplevel)</code>). Yet, the <code>AttributeError</code> is on the first window.</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\tkinter\__init__.py", line 1948, in __call__
return self.func(*args)
^^^^^^^^^^^^^^^^
File "c:\Users\User\Documents\Python\Python_Tutorial_net\Multiple_Windows_pass-in-Class.py", line 30, in open_window
window = Window(self)
^^^^^^^^^^^^
File "c:\Users\User\Documents\Python\Python_Tutorial_net\Multiple_Windows_pass-in-Class.py", line 7, in __init__
super().__init__(parent)
File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\tkinter\__init__.py", line 2678, in __init__
BaseWidget.__init__(self, master, 'toplevel', cnf, {}, extra)
File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\tkinter\__init__.py", line 2623, in __init__
self._setup(master, cnf)
File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\tkinter\__init__.py", line 2592, in _setup
self.tk = master.tk
^^^^^^^^^
AttributeError: 'App' object has no attribute 'tk'
</code></pre>
|
<python><class><tkinter>
|
2024-05-16 19:49:16
| 1
| 900
|
David Collins
|
78,492,177
| 11,394,520
|
Slowly Updating Side Inputs & Session Windows - Transform node AppliedPTransform was not replaced as expected
|
<p>In my apache beam streaming pipeline, I have an unbounded pub/sub source which I use with <a href="https://beam.apache.org/documentation/programming-guide/#session-windows" rel="nofollow noreferrer">session windows</a>.</p>
<p>There is some bounded configuration data which I need to pass into some of the DoFns of the pipeline as side input. This data resides in BigQuery. It's slowly changing, I expect a few changes per month at variable points in time. In order to combine the bounded and the unbounded data, I applied <a href="https://beam.apache.org/documentation/patterns/side-inputs/#slowly-updating-global-window-side-inputs" rel="nofollow noreferrer">this pattern</a> which creates a <a href="https://beam.apache.org/releases/pydoc/2.30.0/apache_beam.transforms.periodicsequence.html#apache_beam.transforms.periodicsequence.PeriodicImpulse" rel="nofollow noreferrer">PeriodicImpulse</a> each hour. Subsequently, the DoFn reads in the config data from BQ, transforms it into a dict and returns it.</p>
<p>Later on, the result of the above mentioned is passed to one of the DoFns of the main pipeline as sideinput.</p>
<p>When executing the pipeline with the LocalRunner, I get a pretty unspecific
<code>RuntimeError: Transform node AppliedPTransform(PeriodicImpulse/GenSequence/ProcessKeyedElements/GroupByKey/GroupByKey, _GroupByKeyOnly) was not replaced as expected. </code></p>
<p>However, if I replace the PeriodicPulse step by a simple Create(["DummyValue"]), the pipeline is working fine (of course except for the fact that it will ignore all changes to the config data that happened after the initial read from BQ).</p>
<p>What do I need to change in order to get it working?</p>
<pre><code>
n = 1
SESSION_GAP_SIZE = 3600 * 24
p_opt = PipelineOptions(
pipeline_args, streaming=True, save_main_session=True,allow_unsafe_triggers=True,runner="DirectRunner"
, ...)
with Pipeline(options=p_opt) as p:
cfg_data = (p
| 'PeriodicImpulse' >> PeriodicImpulse(fire_interval=3600,apply_windowing=True)
| "Retrieve Segment Config from BQ" >> ParDo(get_segment_config_from_bq)
)
main_p = (
p
| "Read Stream from Pub/Sub" >> io.ReadFromPubSub(subscription=SUBSCRIPTION,with_attributes=True)
| "Filter 1" >> Filter(Filter1())
| "Filter 2" >> Filter(Filter2())
| "Decode Pub/Sub Messages" >> ParDo(ReadPubSubMessage())
| "Extract Composite Key" >> ParDo(ExtractKey())
| "Build Session Windows" >> WindowInto(window.Sessions(SESSION_GAP_SIZE ), trigger=AfterCount(n),accumulation_mode=AccumulationMode.ACCUMULATING)
| "Another GroupByKey" >> GroupByKey()
| "Enrich Stream Data by Config" >> ParDo(EnrichWithConfig(),segment_cfg=pvalue.AsSingleton(cfg_data))
| "Output to PubSub" >> WriteToPubSub(topic=TARGET_TOPIC)
)
</code></pre>
|
<python><stream><google-cloud-dataflow><apache-beam><stream-processing>
|
2024-05-16 19:48:51
| 2
| 560
|
Thomas W.
|
78,491,897
| 358,980
|
Heroku Golang api with some python callouts
|
<p>I have a heroku golang 'app' thats deploying fine. The issue is that it calls some python scripts as the result of some REST requests. I have a <code>requirements.txt</code> in my <code>/bin</code> (python) dir but I'm wondering if it's possible to have the golang deploy process also process pip3 python3 requirements when I <code>git push heroku master</code>? Is this possible in a Procfile? Thanks!</p>
|
<python><heroku>
|
2024-05-16 18:43:43
| 1
| 4,112
|
Mike S
|
78,491,825
| 12,560,539
|
check python code location after pip installl
|
<p>I am using setuptools to manage my python project,</p>
<pre><code>from setuptools import setup, find_packages
setup(
name="falcon",
version="1.0.0",
packages=find_packages(),
install_requires=open("./requirements.txt").readlines(),
entry_points={
"console_scripts": [
"falcon-extract = xxxx.yyyy.main:main",
]
},
)
</code></pre>
<p>when I run the following command to install packages</p>
<pre><code>pip install -e .
</code></pre>
<p>I am wondering where the package is installed, and where the console command "falcon-extract" is installed.</p>
<p>(1) Suppose I have a virtual environment, for example, ~/venv/my-project, where is the package installed? and where is console command "falcon-extract" installed?</p>
<p>(2) Suppose I am using MacOS, I don't create virtual environment, when I run "pip install -e .", where is the package installed? and where is console command "falcon-extract" installed?</p>
<p>(3) In the installed folder, does it install source code ? Could I view these source code?</p>
|
<python>
|
2024-05-16 18:25:53
| 0
| 405
|
Joe
|
78,491,788
| 825,227
|
Trim trailing NaN values in Python dataframe
|
<p>Is there a way to trim trailing NaNs for each column in a dataframe?</p>
<p>Acquainted with <code>dropna()</code> and its parameters (eg, axis, how) for dealing with stuff like this but doesn't seem to address this case.</p>
<p>Sample data looks like this:</p>
<pre><code> 1 2 3 4 5 6
2023-02-10 NaN NaN NaN 0.00 0.00 NaN
2023-02-13 NaN NaN NaN 0.02 0.02 NaN
2023-02-14 NaN NaN NaN 0.00 0.00 NaN
2023-02-15 NaN NaN NaN 0.01 0.01 NaN
2023-02-16 NaN NaN NaN -0.01 -0.01 NaN
2023-02-17 NaN NaN NaN -0.01 -0.01 NaN
2023-02-21 NaN NaN NaN -0.03 -0.03 NaN
2023-02-22 NaN NaN NaN 0.00 0.00 NaN
2023-02-23 NaN NaN NaN 0.00 0.00 NaN
2023-02-24 NaN -0.02 NaN -0.02 -0.02 NaN
2023-02-27 NaN 0.01 NaN 0.01 0.01 NaN
2023-02-28 NaN 0.03 0.03 0.00 0.00 NaN
2023-03-01 NaN -0.04 -0.04 -0.01 -0.01 NaN
2023-03-02 NaN 0.00 0.00 0.00 0.00 NaN
2023-03-03 NaN -0.01 -0.01 0.04 0.04 NaN
2023-03-06 NaN -0.02 -0.02 0.02 0.02 NaN
2023-03-07 -0.01 -0.01 -0.01 -0.01 -0.01 NaN
2023-03-08 -0.01 -0.01 -0.01 NaN 0.01 NaN
2023-03-09 0.00 -0.02 -0.02 NaN -0.01 NaN
2023-03-10 -0.03 -0.01 -0.01 NaN -0.01 NaN
2023-03-13 0.02 -0.03 -0.03 NaN 0.01 NaN
2023-03-14 -0.02 -0.02 -0.02 NaN 0.01 NaN
2023-03-15 -0.04 0.00 0.00 NaN 0.00 NaN
2023-03-16 -0.03 0.00 0.00 NaN 0.02 NaN
2023-03-17 0.01 -0.02 -0.02 NaN -0.01 -0.01
2023-03-20 -0.01 -0.01 -0.01 NaN 0.02 0.02
2023-03-21 0.03 0.01 0.01 NaN 0.01 0.01
2023-03-22 0.03 -0.05 -0.05 NaN -0.01 -0.01
2023-03-23 -0.01 -0.02 -0.02 NaN 0.01 0.01
2023-03-24 0.01 0.00 0.00 NaN 0.01 0.01
</code></pre>
<p>I'd like a result that looks like this:</p>
<pre><code> 1 2 3 4 5 6
2023-02-10 NaN NaN NaN NaN 0.00 NaN
2023-02-13 NaN NaN NaN NaN 0.02 NaN
2023-02-14 NaN NaN NaN NaN 0.00 NaN
2023-02-15 NaN NaN NaN NaN 0.01 NaN
2023-02-16 NaN NaN NaN NaN -0.01 NaN
2023-02-17 NaN NaN NaN NaN -0.01 NaN
2023-02-21 NaN NaN NaN NaN -0.03 NaN
2023-02-22 NaN NaN NaN NaN 0.00 NaN
2023-02-23 NaN NaN NaN NaN 0.00 NaN
2023-02-24 NaN -0.02 NaN NaN -0.02 NaN
2023-02-27 NaN 0.01 NaN NaN 0.01 NaN
2023-02-28 NaN 0.03 0.03 NaN 0.00 NaN
2023-03-01 NaN -0.04 -0.04 NaN -0.01 NaN
2023-03-02 NaN 0.00 0.00 0.00 0.00 NaN
2023-03-03 NaN -0.01 -0.01 0.02 0.04 NaN
2023-03-06 NaN -0.02 -0.02 0.00 0.02 NaN
2023-03-07 -0.01 -0.01 -0.01 0.01 -0.01 NaN
2023-03-08 -0.01 -0.01 -0.01 -0.01 0.01 NaN
2023-03-09 0.00 -0.02 -0.02 -0.01 -0.01 NaN
2023-03-10 -0.03 -0.01 -0.01 -0.03 -0.01 NaN
2023-03-13 0.02 -0.03 -0.03 0.00 0.01 NaN
2023-03-14 -0.02 -0.02 -0.02 0.00 0.01 NaN
2023-03-15 -0.04 0.00 0.00 -0.02 0.00 NaN
2023-03-16 -0.03 0.00 0.00 0.01 0.02 NaN
2023-03-17 0.01 -0.02 -0.02 0.00 -0.01 -0.01
2023-03-20 -0.01 -0.01 -0.01 -0.01 0.02 0.02
2023-03-21 0.03 0.01 0.01 0.00 0.01 0.01
2023-03-22 0.03 -0.05 -0.05 0.04 -0.01 -0.01
2023-03-23 -0.01 -0.02 -0.02 0.02 0.01 0.01
2023-03-24 0.01 0.00 0.00 -0.01 0.01 0.01
</code></pre>
|
<python><pandas><dataframe>
|
2024-05-16 18:17:52
| 1
| 1,702
|
Chris
|
78,491,778
| 3,100,515
|
Pytest dependency doesn't work when BOTH across files AND parametrized
|
<p>I'm running into a problem wherein pytest_dependency works as expected when</p>
<p>EITHER</p>
<ul>
<li>Doing parametrization, and dependent tests are in the same file</li>
</ul>
<p>OR</p>
<ul>
<li>Not doing parametrization, and dependent tests are in a separate file</li>
</ul>
<p><strong>But</strong>, I can't get the dependency to work properly when doing BOTH - parametrized dependent tests in a different file. It skips all the tests even when the dependencies have succeeded.</p>
<p>I have a directory structure like so:</p>
<pre><code>tests/
- common.py
- test_0.py
- test_1.py
</code></pre>
<p>common.py:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
ints = [1, 2, 3]
strs = ['a', 'b']
pars = list(zip(np.repeat(ints, 2), np.tile(strs, 3)))
</code></pre>
<p>test_0.py:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pytest
from pytest_dependency import depends
from common import pars
def idfr0(val):
if isinstance(val, (int, np.int32, np.int64)):
return f"n{val}"
def idfr1(val):
return "n{}-{}".format(*val)
# I use a marker here because I have a lot of code parametrized this way
perm_mk = pytest.mark.parametrize('num, lbl', pars, ids=idfr0)
# 2 of these parametrized tests should fail
@perm_mk
@pytest.mark.dependency(scope="session")
def test_a(num, lbl):
if num == 2:
assert False
else:
assert True
# I set up a dependent parametrized fixture just like the in the documentation
@pytest.fixture(params=pars, ids=idfr1)
def perm_fixt(request):
return request.param
@pytest.fixture()
def dep_perms(request, perm_fixt):
depends(request, ["test_a[n{}-{}]".format(*perm_fixt)])
return perm_fixt
# This one works
@pytest.mark.dependency(scope="session")
def test_b(dep_perms):
pass
# These are non-parametrized independent tests
@pytest.mark.dependency(scope="session")
def test_1():
pass
@pytest.mark.xfail()
@pytest.mark.dependency(scope="session")
def test_2():
assert False
</code></pre>
<p>test_1.py:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
from pytest_dependency import depends
from common import pars
def idfr2(val):
return "n{}-{}".format(*val)
@pytest.fixture(params=pars, ids=idfr2)
def perm_fixt(request):
return request.param
@pytest.fixture()
def dep_perms(request, perm_fixt):
depends(request, ["test_0.py::test_a[n{}-{}]".format(*perm_fixt)])
return perm_fixt
# Same use of a parametrized fixture, but this one doesn't work
@pytest.mark.dependency(scope="session")
def test_c(dep_perms):
num, lbl = dep_perms
assert True
# These are non-parametrized dependent tests that work as expected
@pytest.mark.dependency(scope="session", depends=["test_0.py::test_1"])
def test_3():
pass
@pytest.mark.dependency(scope="session", depends=["test_0.py::test_2"])
def test_4():
pass
</code></pre>
<p>I expect to see <code>test_a</code> pass for 4 of its 6 parametrized runs and fail 2, <code>test_b</code> pass 4 and skip 2, and <code>test_c</code> likewise pass 4 and skip 2. I expect <code>test_1</code> to pass, <code>test_2</code> to xfail, <code>test_3</code> to pass, and <code>test_4</code> to be skipped. All of the above happens perfectly except for <code>test_c</code> - all of it gets skipped.</p>
<p>I've confirmed that the test names look like they are right. I run pytest from the tests directory like so:</p>
<pre><code>pytest --tb=no -rpfxs ./test_0.py ./test_1.py
</code></pre>
<p>The output is:</p>
<pre><code>collected 22 items
test_0.py ..FF....ss...x [ 63%]
test_1.py ssssss.s [100%]
=================================================================================== short test summary info ===================================================================================
PASSED test_0.py::test_a[n1-a]
PASSED test_0.py::test_a[n1-b]
PASSED test_0.py::test_a[n3-a]
PASSED test_0.py::test_a[n3-b]
PASSED test_0.py::test_b[n1-a]
PASSED test_0.py::test_b[n1-b]
PASSED test_0.py::test_b[n3-a]
PASSED test_0.py::test_b[n3-b]
PASSED test_0.py::test_1
PASSED test_1.py::test_3
FAILED test_0.py::test_a[n2-a] - assert False
FAILED test_0.py::test_a[n2-b] - assert False
XFAIL test_0.py::test_2
SKIPPED [1] test_0.py:36: test_b[n2-a] depends on test_a[n2-a]
SKIPPED [1] test_0.py:36: test_b[n2-b] depends on test_a[n2-b]
SKIPPED [1] test_1.py:20: test_c[n1-a] depends on test_0.py::test_a[n1-a]
SKIPPED [1] test_1.py:20: test_c[n1-b] depends on test_0.py::test_a[n1-b]
SKIPPED [1] test_1.py:20: test_c[n2-a] depends on test_0.py::test_a[n2-a]
SKIPPED [1] test_1.py:20: test_c[n2-b] depends on test_0.py::test_a[n2-b]
SKIPPED [1] test_1.py:20: test_c[n3-a] depends on test_0.py::test_a[n3-a]
SKIPPED [1] test_1.py:20: test_c[n3-b] depends on test_0.py::test_a[n3-b]
SKIPPED [1] ..\..\Miniconda3\envs\python_utils\Lib\site-packages\pytest_dependency.py:101: test_4 depends on test_0.py::test_2
===================================================================== 2 failed, 10 passed, 9 skipped, 1 xfailed in 0.43s ======================================================================
</code></pre>
<p>Notice that it explicitly states that (for example) <code>test_0.py::test_a[n1-a]</code> has passed, but later it skips <code>test_c[n1-a]</code> because it depends on <code>test_0.py::test_a[n1-a]</code>. Yet <code>test_3</code> passes because <code>test_1</code> passed, and <code>test_4</code> is skipped because <code>test_2</code> xfailed, so I know my root node name is right.</p>
<p>I've scoured the other issues here but the vast majority of them are from naming or scope issues, both of which don't appear to be a problem here. <strong>Can anybody tell me why test_c doesn't work?</strong></p>
|
<python><pytest><pytest-dependency>
|
2024-05-16 18:15:07
| 1
| 5,678
|
Ajean
|
78,491,673
| 1,513,388
|
NATS Python example equivalent to 'nats reply' and 'nats request`
|
<p>I'm trying to learn more about NATS and I'm working through the python examples <a href="https://github.com/ConnectEverything/nats-by-example/tree/main" rel="nofollow noreferrer">here</a> and <a href="https://github.com/nats-io/nats.py" rel="nofollow noreferrer">here</a> I started with the command line and tested the Request-Response pattern using these commands.</p>
<pre><code>#Terminal 1
nats reply greet.sue 'OK, I CAN HELP!!!'
18:42:54 Listening on "greet.sue" in group "NATS-RPLY-22"
18:42:56 [#0] Received on subject "greet.sue":
I need help!
#Terminal 2
nats reply greet.sue 'OK, I CAN HELP!!!'
18:42:54 Listening on "greet.sue" in group "NATS-RPLY-22"
18:42:56 [#0] Received on subject "greet.sue":
I need help!
#Terminal 3
nats request greet.sue 'I need help!'
18:42:56 Sending request on "greet.sue"
18:42:56 Received with rtt 834.292µs
OK, I CAN HELP!!!
</code></pre>
<p>This worked as per the documentation and the two <code>reply</code> services respond in a kind of <code>round-robin</code> way.</p>
<p>I now need the Python equivalents to continue my testing, but all of the examples are trying to be way to clever - i.e. they include both the <code>request</code> and <code>reply</code> logic in the same example (e.g. <a href="https://github.com/ConnectEverything/nats-by-example/tree/main/examples/messaging/request-reply/python" rel="nofollow noreferrer">request-reply</a>) which makes it much harder and confusing to understand what's happening.</p>
<p>I can't seem to find any simple examples that break out the Python logic into separate components that behave in the same way as the CLI commands. Can anybody help me out here please.</p>
|
<python><nats.io>
|
2024-05-16 17:48:29
| 0
| 7,523
|
user1513388
|
78,491,610
| 5,837,992
|
Comparing One Record in Pandas Dataframe To All Other Records in Dataframe
|
<p>I have a situation where I want to compare every value in one column of a dataframe against every other value in the same column. In this case, for every product, I want to see for Hyundais the comparison to Kias in each warehouse</p>
<p>There are ~10,000 products (500,000 records total) that I want to compare - every products against every other products in the dataset.</p>
<p>Please note that the concat of warehouses in the sample code below is to handle situations where a cross join is needed (warehouse for one product doesn't carry the other)</p>
<p>I'm trying to see if there is an easier/quicker way to do this rather than do the double "for" as per the below. The output is correct but the code takes over a day to run.</p>
<p>How is the most efficient way to make this work?</p>
<pre><code>import pandas as pd
data = {'productid' : ['hyundai', 'hyundai', 'hyundai', 'kia','kia', 'kia'],
'warehouse' : ['New Jersey', 'New York', 'California', 'New Jersey', 'New York', 'California'],
'pct_total' : [35, 45, 20,65,55,80]}
df = pd.DataFrame(data)
dfoutput2 = pd.DataFrame()
for productid1 in df.productid.unique():
for productid2 in df.productid.unique():
if productid1 != productid2:
df1=df[df.productid==productid1]
df2=df[df.productid==productid2]
allwarehouses=pd.concat( [df1.warehouse, df2.warehouse])
allwarehouses=allwarehouses.drop_duplicates()
merged1=pd.merge(allwarehouses,df1, how="left", on=["warehouse"])
mergedfinal=pd.merge(merged1,df2, how="left", on=["warehouse"])
mergedfinal['lowervalue'] = mergedfinal[['pct_total_x','pct_total_y']].min(axis=1)
dfoutput2=pd.concat( [dfoutput2, mergedfinal])
print("done")
</code></pre>
|
<python><pandas><loops><compare>
|
2024-05-16 17:33:06
| 1
| 1,980
|
Stumbling Through Data Science
|
78,491,587
| 7,687,981
|
Multiclass UNet with n-dimensional satellite images
|
<p>I'm trying to use a UNet in Pytorch to extract prediction masks from multidimensional (8 band) satellite images. I'm having trouble getting the prediction masks to look somewhat expected/coherent. I'm not sure if the issue is the way my training data is formatted, my training code, or the code I'm using to make predictions. My suspicion is that it is the way my training data is being fed to the model. I have 8 band satellite images and single band masks with values ranging 0-n number of classes with 0 being background and 1-n being target labels like this:</p>
<p><a href="https://i.sstatic.net/BHbMgmvz.png?" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHbMgmvz.png?" alt="enter image description here" /></a></p>
<p>With the image shape being (8, 512, 512) and the mask shape being (512, 512) in the case of the single channel example, (512, 512, 8) in the OHE case, and (512, 512, 3) in the stacked case.</p>
<p>Some masks may contain all class labels, some may only have a couple or be background labels only. I've tried using these single channel masks, I've also converted them into 3 channel masks with the first channel being all the labels for a given image, and I've also tried one hot encoding them such that each mask is 0-n dimensions and each channel a different label with binary 0-1 for background/target.</p>
<p><strong>EDIT</strong>
After changing the softmax <code>dim=2</code>, the outputs started looking a little better. However, it appears the model is not learning at all after the first few warmup epochs as the training loss decreases initially but then immediately plateaus or increases and the prediction masks stop making sense (either all black or random blobs). I suspect there is an issue with my training pipeline (below) or possibly due to the class imbalance with class 0 (background).</p>
<pre class="lang-py prettyprint-override"><code>import os
import torch
import numpy as np
from skimage import io
from tqdm import tqdm
import torch.nn as nn
import torch.optim as optim
import segmentation_models_pytorch as smp
image_dir = r'test_segmentation\images'
mask_dir = r'test_segmentation\masks'
data_dir=r'unet_training'
os.makedirs(data_dir, exist_ok=True)
model_dir = os.path.join(data_dir, 'models')
os.makedirs(model_dir, exist_ok=True)
pred_dir = os.path.join(data_dir, 'predictions')
os.makedirs(pred_dir, exist_ok=True)
num_bands = 8
num_classes = 9
epochs = 10
learning_rate = 0.001
weight_decay = 0
encoder = 'resnet50'
encoder_weights = 'imagenet'
model = smp.Unet(in_channels=num_bands, encoder_name=encoder, encoder_weights=encoder_weights, classes=num_classes).to(device)
optimizer = optim.Adam(model.parameters(), lr=learning_rate, weight_decay=weight_decay)
loss_function = nn.CrossEntropyLoss() if num_classes > 1 else nn.BCEWithLogitsLoss()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for epoch in range(1, epochs + 1):
train_loss = 0
val_loss = 0
train_loop = tqdm(enumerate(train_loader), total=len(train_loader), desc=f"Epoch {epoch} Training")
model.train()
for batch_idx, (data, targets) in train_loop:
optimizer.zero_grad()
data = data.float().to(device)
targets = targets.long().to(device)
predictions = model(data)
loss = loss_function(predictions, targets)
train_loss += loss.item()
loss.backward()
optimizer.step()
train_loop.set_postfix(loss=train_loss)
val_loop = tqdm(enumerate(val_loader), total=len(val_loader), desc=f"Epoch {epoch} Validation")
model.eval()
for batch_idx, (data, targets) in val_loop:
data, targets = data.to(device).float(), targets.to(device).long()
preds = model(data)
val_loss = loss_function(preds, targets).item()
softmax = torch.nn.Softmax(dim=2)
preds = torch.argmax(softmax(preds), dim=1).cpu().numpy()
preds = np.array(preds[0, :, :], dtype=np.uint8)
labels = np.array(targets.cpu().numpy()[0, :, :], dtype=np.uint8)
#save prediction and label mask
pred_path = os.path.join(pred_dir, f"{epoch}_{batch_idx}_pred.png")
label_path = os.path.join(pred_dir, f"{epoch}_{batch_idx}_label.png")
io.imsave(pred_path, preds)
io.imsave(label_path, labels)
val_loop.set_postfix(loss=val_loss)
avg_train_loss = train_loss / (batch_idx + 1)
avg_val_loss = val_loss/ (batch_idx + 1)
print(f"\nEpoch {epoch} Train Loss: {avg_train_loss}, Val Loss: {avg_val_loss}")
checkpoint_name = os.path.join(model_dir, f"{modeltype}_bands{num_bands}_classes{num_classes}_{encoder}_{learning_rate}_{epoch}.pt")
if epoch == 1:
torch.save(model.state_dict(), checkpoint_name)
elif epoch % 10 == 0:
torch.save(model.state_dict(), checkpoint_name)
elif epoch == epochs:
torch.save(model.state_dict(), checkpoint_name)
else:
pass
</code></pre>
|
<python><machine-learning><deep-learning><pytorch><semantic-segmentation>
|
2024-05-16 17:28:46
| 1
| 815
|
andrewr
|
78,491,565
| 4,209,368
|
Why first criteria only executed when filtering the rows by matching two columns in Python Pandas Dataframe
|
<p>I have below code to filter out rows of earliest year and month in the data. I used the logical operator '&' as in the Method 1 segment of this below code. But Method 1 segment of below code filter out rows if first criteria condition met no matter second criteria met or not as it seems. Method 2 gives the correct output.</p>
<p>For your info the columns "Year" and "Month" in excel file is in "General" format. And I tried <code>Original_File_df.query</code> method also.</p>
<pre><code>import pandas as pd
Original_File_df = pd.read_excel(open("6 Months.xlsx", "rb"), sheet_name = 0)
New_Data_File_df = pd.read_excel(open("1 Month.xlsx", "rb"), sheet_name = 0)
#Method 1 Start
Original_File_YMMin = pd.to_datetime(Original_File_df["Year"].astype(str) + "-" + Original_File_df["Month"].astype(str), format='%Y-%m').min()
Original_File_df = Original_File_df[(Original_File_df["Year"] != Original_File_YMMin.year) & (Original_File_df["Year"] != Original_File_YMMin.month)]
#Method 1 End
#Method 2 Start
Original_File_df["YearMonth"] = pd.to_datetime(Original_File_df["Year"].astype(str) + "-" + Original_File_df["Month"].astype(str), format='%Y-%m')
Original_File_YMMin = Original_File_df["YearMonth"].min()
Original_File_df = Original_File_df[Original_File_df["YearMonth"] != Original_File_YMMin]
#Method 2 End
Original_File_df = Original_File_df._append(New_Data_File_df, ignore_index=True)
print(Original_File_df)
</code></pre>
<p><a href="https://i.sstatic.net/Tnv3LiJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tnv3LiJj.png" alt="Screenshot of Expected & Output Results" /></a></p>
|
<python><pandas>
|
2024-05-16 17:22:23
| 1
| 619
|
Dhay
|
78,491,423
| 2,153,235
|
pandas read_csv silently not parsing separate date & time fields despite provision of date_format
|
<p>I'm am reading a CSV file <a href="http://archive.ics.uci.edu/static/public/360/air+quality.zip" rel="nofollow noreferrer"><code>AirQualityUCI.csv</code></a> containing date data [1]. Here are a sample of 10 lines, including the 1st line for field names:</p>
<pre><code>Date;Time;CO(GT);PT08.S1(CO);NMHC(GT);C6H6(GT);PT08.S2(NMHC);NOx(GT);PT08.S3(NOx);NO2(GT);PT08.S4(NO2);PT08.S5(O3);T;RH;AH;;
10/03/2004;18.00.00;2,6;1360;150;11,9;1046;166;1056;113;1692;1268;13,6;48,9;0,7578;;
10/03/2004;19.00.00;2;1292;112;9,4;955;103;1174;92;1559;972;13,3;47,7;0,7255;;
10/03/2004;20.00.00;2,2;1402;88;9,0;939;131;1140;114;1555;1074;11,9;54,0;0,7502;;
10/03/2004;21.00.00;2,2;1376;80;9,2;948;172;1092;122;1584;1203;11,0;60,0;0,7867;;
31/03/2005;06.00.00;0,9;1068;-200;6,1;816;191;681;80;1264;1011;12,6;71,5;1,0380;;
31/03/2005;07.00.00;4,0;1531;-200;23,6;1394;673;407;133;1860;1683;12,7;69,6;1,0206;;
31/03/2005;08.00.00;4,1;1184;-200;10,9;1012;547;567;170;1397;1341;16,8;51,6;0,9818;;
31/03/2005;09.00.00;1,9;1064;-200;6,8;848;275;692;139;1212;1014;20,3;40,1;0,9439;;
31/03/2005;10.00.00;1,3;996;-200;4,9;759;200;773;124;1119;751;21,0;37,3;0,9139;;
</code></pre>
<p>The ingestion code is adapted from <a href="https://realpython.com/pandas-groupby/#example-2-air-quality-dataset" rel="nofollow noreferrer">here</a>, which does not have the <code>date_format</code> specification below:</p>
<pre><code>import pandas as pd
df = pd.read_csv(
"AirQualityUCI.csv", sep=';', decimal=',',
date_format='%d/%m/%Y',
parse_dates=[["Date", "Time"]],
na_values=[-200],
usecols=["Date", "Time", "CO(GT)", "T", "RH", "AH"]
).rename(
columns={
"CO(GT)": "co",
"Date_Time": "tstamp",
"T": "temp_c",
"RH": "rel_hum",
"AH": "abs_hum",
}
).set_index("tstamp")
</code></pre>
<p>The use of the <code>date_format</code> option eliminated a previous warning:</p>
<pre><code>c:\cygwin64\home\User.Name\Some.Path\trygroupby.py:28:
UserWarning: Could not infer format, so each element will be
parsed individually, falling back to `dateutil`. To ensure parsing
is consistent and as-expected, please specify a format.
df = pd.read_csv(
</code></pre>
<p>Despite this, date/time index is still merely a string, as it was before the use of <code>date_format</code>:</p>
<pre><code>>>> df.index.day_name()
AttributeError: 'Index' object has no attribute 'day_name'
>>> type(df.index.min())
str
</code></pre>
<p>What am I doing wrong with <code>date_format</code>?</p>
<p><strong>Notes</strong></p>
<p><strong>[1]</strong> <a href="http://archive.ics.uci.edu/dataset/360/air+quality" rel="nofollow noreferrer">Here</a> is the host webpage. The last few lines of <code>AirQualityUCI.csv</code> contain nothing but empty fields, which I removed.</p>
|
<python><pandas><datetime><read-csv>
|
2024-05-16 16:46:01
| 1
| 1,265
|
user2153235
|
78,491,419
| 4,076,764
|
sentence-transformers progress bar can't disable
|
<p>We're using <code>sentence transformers</code> to batch encode large text</p>
<pre><code>model = SentenceTransformer('/app/resources/all-MiniLM-L6-v2')
embeddings = []
for obj in data:
text = obj['text']
row = {'id': obj['id'],'embedding' : np.ndarray.tolist(model.encode(text)}
embeddings.append(row)
</code></pre>
<p>Which under the covers is using <code>tqdm</code> to log and produces these spammy batch logs</p>
<p><a href="https://i.sstatic.net/TpNxRSoJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TpNxRSoJ.png" alt="enter image description here" /></a></p>
<p>Since we are not interacting with tqdm directly, we're looking for a global way to disable these. From <a href="https://stackoverflow.com/questions/37091673/silence-tqdms-output-while-running-tests-or-running-the-code-via-cron">related thread</a> it was suggested that it can be done via an env. variable</p>
<pre><code>export TQDM_DISABLE=1/False/None
</code></pre>
<p><a href="https://pypi.org/project/tqdm/" rel="nofollow noreferrer">This page</a> confirms that this should be valid. In fact, I was able to change the color on the progress bar via: <code>export TQDM_COLOUR='red'</code></p>
<p><a href="https://i.sstatic.net/cWWNSIhg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cWWNSIhg.png" alt="enter image description here" /></a></p>
<p>Yet the <code>TQDM_DISABLE</code> doesn't work. I've tried many hacks from other answers - but the only thing that works is completely suppressing logging to <code>warning</code> level, which I don't want to do. Is there any way to just turn off <code>tqdm</code> logs entirely reliably?</p>
|
<python><torch><tqdm>
|
2024-05-16 16:45:48
| 1
| 16,527
|
Adam Hughes
|
78,491,362
| 5,403,987
|
How to combine non-contiguous numpy slices
|
<p>I have a numpy array with shape (M, N, N) - it's effectively a bunch (M) of (N,N) covariance matrices. I want to be able to extract submatrices out of this with shape (M, P, P). But I'm trying to access non-contiguous indices. Here's an example:</p>
<pre><code>import numpy as np
# Display all the columns
np.set_printoptions(threshold=False, edgeitems=50, linewidth=200)
# Create a 6 x 6 matrix.
x = np.arange(36).reshape(6,6)
# Now make multiple copies to practice with.
y = np.array([x, x])
print(f"{y.shape=}\n")
print(f"{y=}\n")
# We want to extract the submatrices containting the first 2 indices
# and the last 2 indices. There are an "unknown" number of intermediate
# indices - in this example 2. Thus I'm using negative indices to get the
# last two indicies.
# Extraction using advanced indexing
s = np.array([[0, 1] + [-2, -1]])
subset = y[:, s.T, s]
print(f"{subset=}\n")
# Now try it with numpy slices. This approach doesn't work
first_slice = np.s_[0:2]
second_slice = np.s_[-2:]
combined_slice = np.r_[first_slice, second_slice]
subset = y[:, combined_slice, combined_slice]
print(subset)
</code></pre>
<p>That produces the following output:</p>
<pre><code>y.shape=(2, 6, 6)
y=array([[[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35]],
[[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35]]])
subset=array([[[ 0, 1, 4, 5],
[ 6, 7, 10, 11],
[24, 25, 28, 29],
[30, 31, 34, 35]],
[[ 0, 1, 4, 5],
[ 6, 7, 10, 11],
[24, 25, 28, 29],
[30, 31, 34, 35]]])
[[0 7]
[0 7]]
</code></pre>
<p>Using array based indexing works, but it seems like there should be a way to do this with slice objects too. Any tips would be appreciated. Thanks!</p>
|
<python><numpy><numpy-slicing>
|
2024-05-16 16:34:47
| 0
| 2,224
|
Tom Johnson
|
78,491,233
| 4,343,563
|
How to list all packages that are being used in python script?
|
<p>I want to check all the packages that are being used by my python script. I have a list of imports at the top of the script, but I'm not sure if those packages are actually getting used. So I want to check, of all the commands/functions I am using within by script, which of my imports are being used.</p>
<p>For example in the sample below I am only using the pandas package but I am importing numpy as well:</p>
<pre><code>import pandas as pd
import numpy as np
data = pd.read_csv("/path/to/file.csv")
</code></pre>
<p>I want to be able to get back a result of just <code>pandas</code> since that is the only package that is getting used.</p>
<p>All the solutions I've seen will just get all the packages that are imported.</p>
|
<python><import>
|
2024-05-16 16:09:12
| 1
| 700
|
mjoy
|
78,491,230
| 2,036,035
|
Why is my simple vec3 pyo3 pyclass so much slower than py glm's equivalent class at construction and multiplication?
|
<p>I'm trying to create a Python extension in Pyo3 that creates a type that is similar to a vec3/glm vec3, but instead from rust.</p>
<p>I've created the following directory structure:</p>
<pre class="lang-none prettyprint-override"><code>.
├── Cargo.toml
├── pyproject.toml
├── python
│ └── main.py
└── src
└── lib.rs
</code></pre>
<p>Where only <code>lib.rs</code> and <code>main.py</code> are custom made, the rest was generated by <code>maturin init</code></p>
<p><code>lib.rs</code> is :</p>
<pre class="lang-rust prettyprint-override"><code>use pyo3::prelude::*;
#[pyclass]
#[derive(Clone)]
struct Float3{
#[pyo3(get, set)]
x : f64,
#[pyo3(get, set)]
y : f64,
#[pyo3(get, set)]
z : f64,
}
#[pymethods]
impl Float3 {
#[new]
fn py_new(x : f64, y : f64, z : f64) -> Self {
Float3 { x, y, z}
}
fn __rmul__ (&self, lhs : f64) -> Self{
return Float3{ x: self.x * lhs, y : self.y * lhs, z : self.z * lhs};
}
fn __mul__ (&self, lhs : f64) -> Self{
return Float3{ x: self.x * lhs, y : self.y * lhs, z : self.z * lhs};
}
fn __add__ (&self, lhs : &Self) -> Self{
return Float3{x : self.x + lhs.x, y: self.y + lhs.y, z: self.z + lhs.z};
}
fn __sub__ (&self, lhs : &Self) -> Self{
return Float3{x : self.x - lhs.x, y: self.y - lhs.y, z: self.z - lhs.z};
}
fn __iadd__ (&mut self, lhs : &Self) -> (){
*self = Float3{x : self.x + lhs.x, y: self.y + lhs.y, z: self.z + lhs.z};
}
fn __isub__ (&mut self, lhs : &Self) -> (){
*self = Float3{x : self.x - lhs.x, y: self.y - lhs.y, z: self.z - lhs.z};
}
}
/// A Python module implemented in Rust.
#[pymodule]
fn test_pyo3(_py: Python, m: &PyModule) -> PyResult<()> {
m.add_class::<Float3>()?;
Ok(())
}
</code></pre>
<p>And <code>main.py</code> is:</p>
<pre class="lang-py prettyprint-override"><code>import test_pyo3
import glm
import time
def main():
samples = 100000
tic = time.time()
for i in range(samples):
temp = glm.dvec3(0.0, 0.0, 9.81)
print("average time per construction glm {}".format((time.time() - tic) / samples))
tic = time.time()
for i in range(samples):
temp = test_pyo3.Float3(0.0, 0.0, 9.81)
print("average time per construction test_pyo3 {}".format((time.time() - tic) / samples))
tic = time.time()
for i in range(samples):
temp = 1.5 * glm.dvec3(0.0, 0.0, 9.81)
print("average time per multiply operation glm {}".format((time.time() - tic) / samples))
tic = time.time()
for i in range(samples):
temp = 1.5 * test_pyo3.Float3(0.0, 0.0, 9.81)
print("average time per multiply operation test_pyo3 {}".format((time.time() - tic) / samples))
pass
if __name__ == '__main__':
main()
</code></pre>
<p>main.py runs a benchmark on constructing glm.dvec3, multiplying with a scalar, and the equivalent for my class.</p>
<p>The output is :</p>
<pre class="lang-none prettyprint-override"><code>average time per construction glm 2.125263214111328e-07
average time per construction test_pyo3 2.834796905517578e-07
average time per construction glm 2.486872673034668e-07
average time per construction test_pyo3 4.966330528259277e-07
</code></pre>
<p>I compile the rust side with "maturin develop -r" which should build in <a href="https://www.maturin.rs/local_development" rel="nofollow noreferrer">release</a>, and I set pyo3 to be pyo3 = "0.21.2" in my Cargo.toml</p>
<p>There's a 40 ish percentage between construction, and nearly 2x performance reduction in simple scalar multiplication. When looking at the call graph using py-spy, all I see is "trampoline" everywhere with a bunch of random numbers, and it's not clear what is actually going on. Regardless, this should be an apples to apples comparison, PyGLM is just using C bindings directly, and not using rust to do so. I wouldn't expect there to be a massive difference in performance with such a simple test case, both should be doing the same kind of work.</p>
<p>Is there a way to bring the performance of my simple class more inline with PyGLM?</p>
|
<python><python-3.x><rust><ffi><pyo3>
|
2024-05-16 16:09:04
| 1
| 5,356
|
Krupip
|
78,491,105
| 903,188
|
How do I get pydantic to report a violation if a YAML mapping to an optional attribute omits the space after the colon?
|
<p>The assertion at the end of the following code fails because there is no space between <code>walk</code> and <code>True</code> in the yaml:</p>
<pre><code>import yaml
from pydantic import BaseModel, parse_obj_as
from typing import List
class path_cls (BaseModel):
path : str
walk : bool = False
other : bool = False
class myschema (BaseModel):
paths: List[path_cls]
ydata = yaml.safe_load("""\
---
paths:
- {path: .., walk:True }
- {path: ../../somefolder }
""")
data = parse_obj_as(myschema, ydata)
assert data.paths[0].walk
</code></pre>
<p>Since <code>walk:True</code> represents a valid YAML scalar, and because <code>walk</code> is an optional attribute in <code>path_cls</code>, neither the parser nor the validator see a problem.</p>
<p>In my schema, there will never be a case where <code>{key:value, scalar}</code> makes sense.</p>
<p>How do I get pydantic to recognize <code>walk:True</code> as an error? And is there a general solution that would flag a bad mapping of any optional attribute?</p>
|
<python><yaml><pydantic>
|
2024-05-16 15:45:33
| 1
| 940
|
Craig
|
78,490,549
| 279,097
|
Polars: read_parquet with how='diagonal'
|
<p>Is there a way to do the same as the "how" parameter from pl.concat but with read_parquet?</p>
|
<python><dataframe><python-polars>
|
2024-05-16 14:16:40
| 2
| 415
|
Mac Fly
|
78,490,272
| 7,179,546
|
How to debug when a process receives a SIGABORT signal in Python
|
<p>I'm running an application in Python that uses asyncio and it's failing when I'm calling the function <code>await asyncio.gather()</code></p>
<p>The error I'm getting is just <code>Process 90788 failed to create lock file</code> and the process just exits, without showing any traces of the error. I can see that the process received a <code>SIGABRT</code> signal, but nothing else.</p>
<p>How can I debug this?</p>
<p>I'm using Virtual Studio Code for my debugging, so if it's possible to debug it there, I would prefer it over other solutions, but anything with other tools will be useful as well.</p>
|
<python><python-asyncio><sigabrt>
|
2024-05-16 13:34:18
| 0
| 737
|
Carabes
|
78,490,267
| 2,057,516
|
How to type hint an abstractmethod property and make mypy happy?
|
<p>Rewriting this question, because it turns out what I thought the problem was, wasn't the problem. In fact, I have a seemingly equivalent case that mypy doesn't complain about.</p>
<p>Here is an example where there has no mypy complaints that seems equivalent:</p>
<pre><code>class TableLoader(ABC):
@property
@abstractmethod
def FieldToDataHeaderKey(self):
# dict of model dicts of field names and header keys
pass
def set_headers(self, custom_headers=None):
for mdl in self.FieldToDataHeaderKey.keys():
class StudyTableLoader(TableLoader):
FieldToDataHeaderKey = {
Study.__name__: {
"code": CODE_KEY,
"name": NAME_KEY,
"description": DESC_KEY,
},
}
</code></pre>
<p>However, this seemingly equivalent code produces a mypy error:</p>
<pre><code>class PeakAnnotationsLoader(ABC):
@property
@abstractmethod
def add_columns_dict(self):
pass
@classmethod
def add_df_columns(cls, df_dict: dict):
for sheet, column_dict in cls.add_columns_dict.items():
# ^^^ This is the line that produces the error
...
class IsocorrLoader(PeakAnnotationsLoader):
add_columns_dict = {}
</code></pre>
<p>produces the following errors from <code>mypy</code>:</p>
<pre><code>DataRepo/loaders/peak_annotations_loader.py:213: error: "Callable[[PeakAnnotationsLoader], Any]" has no attribute "items" [attr-defined]
DataRepo/loaders/peak_annotations_loader.py:321: error: Need type annotation for "add_columns_dict" (hint: "add_columns_dict: Dict[<type>, <type>] = ...") [var-annotated]
</code></pre>
<p>I can't figure out how to make mypy happy in the second case. All of this code works, BTW. How do I satisfy mypy here?</p>
<p>Is there a different way that I should be creating abstract class attributes? I have multiple classes that inherit from <code>TableLoader</code> and none of them produce mypy errors WRT <code>FieldToDataHeaderKey</code>. Is it just that mypy doesn't handle <code>.items()</code> the way it handled <code>.keys()</code>?</p>
|
<python><mypy><python-typing>
|
2024-05-16 13:33:24
| 1
| 1,225
|
hepcat72
|
78,490,201
| 904,910
|
Langchain GenericLoader Unsupported mime type when loading java files
|
<p>I am trying to take a bunch of java files and create an embedding to be used by an LLM.</p>
<p>So want to read java files to create the embeddings in a chroma db.</p>
<p>I have previously managed to do this successfully with pdf files but now I want to use my java code in the embeddings.</p>
<p>I have a method</p>
<pre class="lang-py prettyprint-override"><code>
from langchain_community.document_loaders.generic import GenericLoader
def load_documents():
loader = GenericLoader.from_filesystem(
"./path/to/java/files",
glob="**/*",
suffixes=[".java"],
parser=LanguageParser()
)
</code></pre>
<p>I am getting the following error</p>
<pre><code> main()
File "/Volumes/SamsungT5/langchain/populate_database.py", line 33, in main
documents = load_documents()
^^^^^^^^^^^^^^^^
File "/path/populate_database.py", line 50, in load_documents
return loader.load()
^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/langchain_core/document_loaders/base.py", line 29, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/langchain_community/document_loaders/generic.py", line 116, in lazy_load
yield from self.blob_parser.lazy_parse(blob)
File "/opt/anaconda3/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/generic.py", line 70, in lazy_parse
raise ValueError(f"Unsupported mime type: {mimetype}")
ValueError: Unsupported mime type: text/x-java-source
</code></pre>
<p>I have no clue why its telling me <code>Unsupported mime type: text/x-java-source</code>. Its my understanding that java file loading is supported.</p>
<hr />
<p>Edit</p>
<p>I used the DirectoryLoader instead and got done with what I was trying to do.</p>
<pre><code>loader = loader = DirectoryLoader(DATA_PATH, glob="**/*.java",loader_cls=TextLoader,use_multithreading=True)
return loader. Load()
</code></pre>
|
<python><langchain>
|
2024-05-16 13:22:14
| 0
| 460
|
chhil
|
78,490,151
| 188,331
|
AutoTokenizer.from_pretrained took forever to load
|
<p>I used the following code to load my custom-trained tokenizer:</p>
<pre><code>from transformers import AutoTokenizer
test_tokenizer = AutoTokenizer.from_pretrained('raptorkwok/cantonese-tokenizer-test')
</code></pre>
<p>It took forever to load. Even if I replace the <code>AutoTokenizer</code> with <code>PreTrainedTokenizerFast</code>, it still loads forever.</p>
<p>How to debug or fix this issue?</p>
|
<python><huggingface-transformers><huggingface-tokenizers>
|
2024-05-16 13:14:02
| 3
| 54,395
|
Raptor
|
78,490,140
| 1,498,018
|
Python pandas read_excel fills blank cells with 0
|
<p>I am at a complete loss over this. I'm reading in a horrible Excel file with pandas using the following line:</p>
<pre><code>pd_df = pd.read_excel(file_path, sheet_name=sheet, header=10, skipfooter=7, dtype='string', na_filter=False)
</code></pre>
<p>For one sheet, pandas apparently replaces/fills all empty cells with 0 - where "empty" means either really empty or containing two spaces. For another sheet, where blank cells are filled with at least five spaces, no replacement happens. Does anyone have any experience with that? It seems so random that this happens with exactly the same code.</p>
|
<python><pandas><excel>
|
2024-05-16 13:12:53
| 1
| 1,673
|
Lilith-Elina
|
78,490,018
| 4,271,491
|
How to best handle badly concatenated json
|
<p>I receive a single json file from a client which is not correct. <br>The client concatenates multiple json resposnses into one file:</p>
<pre><code>{
object1
{
...
}
}
{
object2
{
...
}
}
...
</code></pre>
<p>When I parse it with dataframe in pyspark, I always get a count of only one root object which is correct, because it reads only the first object and doesn't care about the rest. <br>
I need to somehow handle this and I'm trying to figure out what is the best way performance wise?
<br>Can dataframe handle bad jsons or can I easily fix this with python?</p>
|
<python><json><pyspark>
|
2024-05-16 12:52:32
| 1
| 528
|
Aleksander Lipka
|
78,489,976
| 8,035,076
|
Google Cloud Job logs splitting single line log into multiple lines
|
<p>I am running a python script in GCP cloud jobs and the logs are getting weirdly split into multiple lines. Pretty much unreadable. Is there a way to avoid this</p>
<pre><code>log = getLogger()
log_formatter = Formatter(
"[%(process)d] [%(threadName)s] %(asctime)s %(name)s: %(message)s"
)
for handler in log.handlers:
handler.setFormatter(log_formatter)
log.setLevel(INFO)
</code></pre>
<p>The below log prints as a single line in my local console</p>
<p><a href="https://i.sstatic.net/2fuPJ42M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fuPJ42M.png" alt="enter image description here" /></a></p>
|
<python><google-cloud-functions><google-cloud-run>
|
2024-05-16 12:46:25
| 0
| 917
|
gd vigneshwar
|
78,489,962
| 7,227,627
|
PyVista ray_trace for structured grids?
|
<p><strong>What I want to do</strong></p>
<p>For a boundary meteorological application in complex terrain I would like to check whether a sun ray hits the topography so that the cell under consideration is shaded (see photo)</p>
<p><strong>What I have done and what the problem is</strong></p>
<p>PyVista has in form of a lightweight ray tracing procedure the perfect template for my need. It can be found <a href="https://docs.pyvista.org/version/stable/examples/01-filter/poly-ray-trace" rel="nofollow noreferrer">here</a> and <a href="https://docs.pyvista.org/version/stable/api/core/_autosummary/pyvista.PolyDataFilters.ray_trace.html" rel="nofollow noreferrer">here</a>. These examples are based on <code>sphere = pv.Sphere()</code>.</p>
<p>But I use a structured grid <code>myMesh = pv.StructuredGrid()</code> and this rises the error message : <code>AttributeError: 'StructuredGrid' object has no attribute 'ray_trace'</code></p>
<p><strong>My question</strong></p>
<p>Is there a work-around so that I can use <code>PyVista.ray_trace</code> with a structured grid?</p>
<p><a href="https://i.sstatic.net/WxZEGPaw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WxZEGPaw.png" alt="enter image description here" /></a></p>
|
<python><pyvista>
|
2024-05-16 12:44:49
| 1
| 1,998
|
pyano
|
78,489,951
| 4,434,140
|
how do I log on my azure cosmosdb for mongodb from aks through my workload identity?
|
<p>I have a private Azure Cosmos DB for MongoDB account (RU) together with a private AKS cluster. I want to access the MongoDB server from the AKS cluster through a workload identity. I followed the tutorial</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/learn/tutorial-kubernetes-workload-identity" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/learn/tutorial-kubernetes-workload-identity</a></p>
<p>My AKS Cluster has the OIDC stuff enabled. Also, I have created a User-Assigned Managed Identity (called <code>test-mongo</code> below) with the relevant role assignments (<code>DocumentDB Account Contributor</code>). The goal is that the <code>test-mongo</code> managed identity be able to perform data operations on the MongoDB server. In this particular post, I want to list the database names. Later on, I will not want that identity to be able to do that anymore, instead I will want it to read / write documents in some collections of some database.</p>
<p>First, I created the service account</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
azure.workload.identity/client-id: <my-user-assigned-managed-identity-app-id>
azure.workload.identity/tenant-id: <my-tenant-id>
labels:
azure.workload.identity/use: "true"
name: test-mongo
namespace: test-mongo
</code></pre>
<p>Then, I established the federated identity credential like this:</p>
<pre class="lang-yaml prettyprint-override"><code>az identity federated-credential create --name test-mongo --identity-name test-mongo \
--resource-group my-resource-group \
--issuer https://eastus.oic.prod-aks.azure.com/<my-tenant-id>/<my-subscription-id>/ \
--subject system:serviceaccount:test-mongo:test-mongo \
--subscription <my-subscription-id> \
--audience api://AzureADTokenExchange
</code></pre>
<p>Finally, I deployed the following pod:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: test-mongo
namespace: test-mongo
labels:
azure.workload.identity/use: "true"
spec:
serviceAccountName: test-mongo
containers:
- image: my-docker-image
name: container-0
</code></pre>
<p>When I <code>kubectl describe</code> my pod, I can see that the <code>AZURE_CLIENT_ID</code>, <code>AZURE_AUTHORITY_HOST</code>, <code>AZURE_FEDERATED_TOKEN_FILE</code>, and <code>AZURE_TENANT_ID</code> are provided to it.</p>
<p>My docker image is a python application which I'm not sure how to write. Currently, it looks like this:</p>
<pre class="lang-py prettyprint-override"><code>from azure.identity import DefaultAzureCredential
from pymongo import MongoClient
def main():
credential = DefaultAzureCredential()
connection_string = f"mongodb://<cosmosdb-account-name>:{credential.get_token()}@<cosmosdb-account-name>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<cosmosdb-account-name>@"
client = MongoClient(connection_string)
print(client.is_primary)
if __name__ == "__main__":
main()
</code></pre>
<p>Unfortunately, that doesn't work, because I am not passing the right scope to the <code>credential.get_token()</code> method:</p>
<pre><code>WorkloadIdentityCredential: "get_token" requires at least one scope
</code></pre>
<p>In the case of the connection with a postgres server, I believe (but I have never been in a situation to test that yet), that the following code would be correct:</p>
<pre class="lang-py prettyprint-override"><code>credential = DefaultAzureCredential()
# here we pass the scope for postgres servers
token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")
postgresUser = '<aad-user-name>@<server-name>'
host = "<server-name>.postgres.database.azure.com"
dbname = "<database-name>"
conn_string = "host={0} user={1} dbname={2} password={3}".format(host, postgresUser, dbname, token.token)
conn = psycopg2.connect(conn_string)
</code></pre>
<p>How should I proceed in the MongoDB case?</p>
<p>Here's the documentation I checked so far on the topic:</p>
<ul>
<li><a href="https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/migrate-passwordless?tabs=sign-in-azure-cli%2Cdotnet%2Cazure-portal-create%2Cazure-portal-associate%2Capp-service-identity" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/migrate-passwordless?tabs=sign-in-azure-cli%2Cdotnet%2Cazure-portal-create%2Cazure-portal-associate%2Capp-service-identity</a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/learn/tutorial-kubernetes-workload-identity" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/learn/tutorial-kubernetes-workload-identity</a></li>
<li><a href="https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos?tabs=azure-cli#access-data" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos?tabs=azure-cli#access-data</a></li>
</ul>
<h2>EDIT 1</h2>
<p>I followed along <a href="https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos?tabs=azure-cli#access-data" rel="nofollow noreferrer">this tutorial</a> and tried to adapt it to python. Apparently, with the <code>CosmosClient</code>, we necessarily need to access a CosmosDB server with SQL API, which is not my case, hence my adapted code fails:</p>
<pre class="lang-py prettyprint-override"><code>credential = DefaultAzureCredential(
managed_identity_client_id="<my-user-assigned-managed-identity-app-id>"
)
data_client = CosmosClient(
url="https://<cosmosdb-account-name>.documents.azure.com:443/",
credential=credential,
)
</code></pre>
<p>with error</p>
<pre><code>Message: Request blocked by Auth <cosmosdb-account-name>: Request is blocked because principal [<principal-id>] does not have required RBAC permissions to perform action [Microsoft.DocumentDB/databaseAccounts/readMetadata] on resource [/]. Learn more: https://aka.ms/cosmos-native-rbac.
ActivityId: c1e9f553-3cac-481b-a450-d099feebfb72, Microsoft.Azure.Documents.Common/2.14.0
</code></pre>
<p>and trying to add that role definition to my user-assigned managed identity doesn't work because</p>
<pre><code>The Database Account [<cosmosdb-account-name>] has an API type which is invalid for processing SQL Role Definitions: [MongoDB]
</code></pre>
<h2>EDIT 2</h2>
<p>I tried to do this lately:</p>
<pre class="lang-py prettyprint-override"><code>from azure.identity import DefaultAzureCredential
from pymongo import MongoClient
def main():
credential = DefaultAzureCredential(
managed_identity_client_id="<my-user-assigned-managed-identity-app-id>"
)
token = credential.get_token("https://<cosmosdb-account-name>.documents.azure.com/.default").token
connection_string = f"mongodb://<cosmosdb-account-name>:{token}@<cosmosdb-account-name>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<cosmosdb-account-name>@"
client = MongoClient(connection_string)
client.list_database_names()
</code></pre>
<p>this throws</p>
<pre><code>raise OperationFailure(errmsg, code, response, max_wire_version)
pymongo.errors.OperationFailure: Invalid key, full error: {'ok': 0.0, 'errmsg': 'Invalid key', 'code': 18, 'codeName': 'AuthenticationFailed'}
</code></pre>
|
<python><mongodb><azure-aks><workload-identity>
|
2024-05-16 12:42:03
| 1
| 1,331
|
Laurent Michel
|
78,489,892
| 9,295,873
|
Python - Azure App Service for Containers running multiple Celery Beat instances and duplicating tasks
|
<p>We have a Python project and we are handling scheduled tasks using Celery infrastructure. We have deployed our Celery beat and worker components in 2 separate Azure App Services for Containers.</p>
<p>The issue that we are facing is that tasks are being duplicated. After checking the app service logs, I could see that beat has scheduled the tasks 5 times that at the exact same time (08:10 AM).</p>
<p>After checking logs more, I could see that beat has scheduled the tasks with different internal IPs.</p>
<p>The following logs are repeated:</p>
<pre><code>2024-05-16T12:20:00.349516119Z [2024-05-16 12:20:00,349: DEBUG/MainProcess] beat: Synchronizing schedule...
2024-05-16T12:20:00.351217950Z [2024-05-16 12:20:00,351: DEBUG/MainProcess] beat: Waking up in 5.00 minutes.
2024-05-16T12:20:00.324680768Z [2024-05-16 12:20:00,324: DEBUG/MainProcess] beat: Synchronizing schedule...
2024-05-16T12:20:00.326966684Z [2024-05-16 12:20:00,326: DEBUG/MainProcess] beat: Waking up in 5.00 minutes.
2024-05-16T12:20:00.333486516Z [2024-05-16 12:20:00,333: DEBUG/MainProcess] beat: Synchronizing schedule...
2024-05-16T12:20:00.336071158Z [2024-05-16 12:20:00,335: DEBUG/MainProcess] beat: Waking up in 5.00 minutes.
2024-05-16T12:20:00.372055059Z [2024-05-16 12:20:00,371: DEBUG/MainProcess] beat: Synchronizing schedule...
2024-05-16T12:20:00.373849705Z [2024-05-16 12:20:00,373: DEBUG/MainProcess] beat: Waking up in 5.00 minutes.
2024-05-16T12:20:00.361135357Z [2024-05-16 12:20:00,360: DEBUG/MainProcess] beat: Synchronizing schedule...
2024-05-16T12:20:00.363359490Z [2024-05-16 12:20:00,363: DEBUG/MainProcess] beat: Waking up in 5.00 minutes.
2024-05-16T12:21:40 No new trace in the past 1 min(s).
2024-05-16T12:22:09.826617525Z 169.254.134.1 - - [16/May/2024 12:22:09] "GET / HTTP/1.1" 200 -
2024-05-16T12:22:09.828852131Z 169.254.141.1 - - [16/May/2024 12:22:09] "GET / HTTP/1.1" 200 -
2024-05-16T12:22:09.828742024Z 169.254.142.1 - - [16/May/2024 12:22:09] "GET / HTTP/1.1" 200 -
2024-05-16T12:22:09.829842011Z 169.254.136.1 - - [16/May/2024 12:22:09] "GET / HTTP/1.1" 200 -
2024-05-16T12:22:09.829044130Z 169.254.143.1 - - [16/May/2024 12:22:09] "GET / HTTP/1.1" 200 -
</code></pre>
<p>I am not sure if Azure App Service is creating multiple containers in the same App Service or it's beat creating multiple instances.</p>
<p>Thanks.</p>
|
<python><azure><containers><azure-appservice><celerybeat>
|
2024-05-16 12:31:23
| 1
| 631
|
Shubh Rocks Goel
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.