QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (stringdate, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars)
|---|---|---|---|---|---|---|---|---|
78,320,052
| 1,770,604
|
Providing dynamically created pydantic model as type annotation to function without mypy errors
|
<p>I am using a pluggable architecture which registers pydantic models with a central registry on startup.</p>
<p>These models are then added to an existing model as a <code>Union</code> field dynamically.</p>
<p>This resulting class is then used by a framework we use (dagster) to provide yaml configuration in a UI.</p>
<p>The issue I am having is that dagster uses the type annotation of an argument called <code>config</code> on decorated functions to register and use these config classes. I am using a factory function to create these functions in the following way:</p>
<pre class="lang-py prettyprint-override"><code># assets.py
from typing import Type

from pydantic import BaseModel
from dagster import asset, AssetsDefinition

# Factory function to dynamically create an asset
def create_asset(identifier: str, config_class: Type[BaseModel]) -> AssetsDefinition:
    # Library code wrapped in framework stuff
    @asset(key=identifier)
    def _inner(config: config_class) -> None:
        ...
    return _inner
</code></pre>
<pre class="lang-py prettyprint-override"><code># Library entrypoint code
from typing import Any, List

# Register all plugins
plugin_configs: List[BaseModel] = register_plugins()

class MyBase(BaseModel):
    configs: Any  # will be replaced with a Union[<registered config classes>] field

config_class: MyBase = create_updated_model(MyBase, plugin_configs)

# Load a list of ids to create assets for in config
my_list_of_ids: List[str] = load_asset_ids()
assets = [create_asset(id, config_class) for id in my_list_of_ids]
</code></pre>
<p>When I run mypy on this I will get:</p>
<pre><code>error: Variable "assets.config_class" is not valid as a type [valid-type]
</code></pre>
<p>If the framework wasn't reliant on the type annotation for actually using the config class I could just create a protocol in the asset module for type hinting / IDE help to avoid this.</p>
<p>My fundamental understanding of the problem here is shaky at best after reading this: <a href="https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases" rel="nofollow noreferrer">https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases</a></p>
<p>For now I am ignoring this error via <code># type: ignore</code> but I want to know if this is solvable in a way that keeps the framework happy (that needs the actual dynamically generated class as the annotation) and mypy happy?</p>
<p>Ran mypy; expected no errors but got <code>Variable "assets.config_class" is not valid as a type [valid-type]</code>.</p>
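One workaround that sometimes keeps both sides happy is to leave the parameter unannotated for mypy and attach the dynamically created class to the function's runtime annotations, which is what signature-inspecting frameworks typically read. A minimal sketch of the pattern (names are illustrative, and whether dagster accepts this depends on how it introspects the function):

```python
from typing import Any, Callable

def create_handler(config_class: type) -> Callable[..., None]:
    # No static annotation on `config`, so mypy has nothing to reject.
    def _inner(config: Any) -> None:
        ...
    # Attach the dynamic class at runtime; frameworks that inspect the
    # function's annotations will see the generated class.
    _inner.__annotations__["config"] = config_class
    return _inner

# Stand-in for the dynamically generated pydantic model.
DynamicConfig = type("DynamicConfig", (), {})
handler = create_handler(DynamicConfig)
print(handler.__annotations__["config"] is DynamicConfig)  # True
```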
|
<python><mypy><dagster>
|
2024-04-13 08:52:24
| 0
| 691
|
Martin O Leary
|
78,320,047
| 3,383,640
|
Stalling of Multiprocessing in Python
|
<p>I experience a strange behavior of the multiprocessing module. Can anyone explain what is going on here?</p>
<p>The following MWE stalls (runs forever without error):</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import multiprocessing

import numpy as np
from skimage import io
from sklearn.cluster import KMeans

def create_model():
    sampled_pixels = np.random.randint(0, 255, (800, 3))
    kmeans_model = KMeans(n_clusters=8, random_state=0).fit(sampled_pixels)

def process_image(test, test2):
    image = np.random.randint(0, 255, (800, 3))
    kmeans_model = KMeans(n_clusters=8, random_state=0).fit(image)
    image = kmeans_model.predict(image)

def main():
    create_model()
    with multiprocessing.Pool(1) as pool:
        pool.apply_async(process_image, args=('test', 'test'))
        pool.close()
        pool.join()

if __name__ == "__main__":
    main()
</code></pre>
<p>However, if I either remove the line <code>create_model()</code> OR change</p>
<pre class="lang-py prettyprint-override"><code>def process_image(test, test2)
# as well as
pool.apply_async(process_image, args=('test', 'test'))
</code></pre>
<p>to</p>
<pre class="lang-py prettyprint-override"><code>def process_image(test)
# and
pool.apply_async(process_image, args=('test'))
</code></pre>
<p>the code runs successfully, as it should, since the arguments as well as the function call <code>create_model()</code> are completely redundant.</p>
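One likely culprit (a hypothesis, not something the snippet alone proves) is that the first KMeans fit starts OpenMP/BLAS worker threads in the parent process, and the default fork start method then clones a process whose thread state is inconsistent, which can deadlock. Switching to the spawn start method is a common workaround; a stdlib-only sketch of the pattern:

```python
import multiprocessing as mp

def work(x):
    return x * 2

def main():
    # "spawn" starts a fresh interpreter for each worker instead of
    # fork()ing a parent that may already hold OpenMP/BLAS thread state.
    ctx = mp.get_context("spawn")
    with ctx.Pool(1) as pool:
        result = pool.apply_async(work, args=(21,))
        print(result.get(timeout=60))

if __name__ == "__main__":
    main()
```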
<hr />
<p>Appendix</p>
<pre><code>> pip list
Package Version
------------- ---------
imageio 2.34.0
joblib 1.4.0
lazy_loader 0.4
networkx 3.3
numpy 1.26.4
packaging 24.0
pillow 10.3.0
pip 23.2.1
scikit-image 0.23.1
scikit-learn 1.4.2
scipy 1.13.0
threadpoolctl 3.4.0
tifffile 2024.2.12
> python --version
Python 3.12.2
</code></pre>
|
<python><scikit-learn><multiprocessing><python-multiprocessing>
|
2024-04-13 08:50:05
| 1
| 5,078
|
Suuuehgi
|
78,319,882
| 11,770,390
|
How to resample a dataframe to stretch from start to end date in intervals (containing 0 for missing values)
|
<p>I have the following setup:</p>
<p>I have sparse information about queries hitting my endpoint at certain timepoints in a csv file. I parse this csv file with dates according to <code>date_format='ISO8601'</code> in the index column. Now what I want to do is this: I want to count the queries in certain intervals and put them into a dataframe that represents from start to enddate how many queries in said distinct intervals have hit the endpoint.</p>
<p>The problem is this: Using resample() I can aggregate and count the queries in the time intervals that contain information. But I can't find a way to extend this interval to always stretch from start to end date (with intervals filled with '0' by default).</p>
<p>I tried a combination of reindexing and resampling:</p>
<p><strong>csv:</strong></p>
<pre><code>datetime,user,query
2024-03-02T00:00:00Z,user1,query1
2024-03-18T03:45:00Z,user1,query2
2024-03-31T12:01:00Z,user1,query3
</code></pre>
<p><strong>myscript.py:</strong></p>
<pre><code>df = pd.read_csv(infile, sep=',', index_col='datetime', date_format='ISO8601', parse_dates=True)
df_timerange = df[start_date:end_date]
df_period = pd.date_range(start=start_date, end=end_date, freq='1M')
df_sampled = df_timerange['query'].resample('1M').count().fillna(0)
df_sampled = df_timerange.reindex(df_period)
</code></pre>
<p>However, this just produces a dataframe whose index dates range from <code>2023-04-30T07:37:39.750Z</code> to <code>2024-03-31T07:37:39.750Z</code> in frequencies of 1 month, while the original data from the csv (<code>df_timerange</code>) is somehow not represented (all values are NaN)... Also, I wonder why the dates start at this odd time: <code>07:37:39.750</code>. My guess is that the reindexing didn't hit the timepoints where <code>df_timerange</code> contains values, so they are just skipped? Or the timestamps generated by <code>pd.date_range()</code> don't match the ISO8601-parsed index, causing a mismatch... Again, I'm not experienced enough with pandas DataFrames to make sense of it.</p>
<p><strong>Minimal reproducible example:</strong></p>
<p>Run this with python 3.11:</p>
<pre><code>from datetime import datetime, timezone
import pandas as pd
start_date = datetime(2023, 4, 15, 4, 1, 40, tzinfo=timezone.utc)
end_date = datetime(2024, 4, 15, 0, 0, 0, tzinfo=timezone.utc)
df = pd.read_csv('test.csv', sep=',', index_col='datetime', date_format='ISO8601', parse_dates=True)
df_timerange = df[start_date:end_date]
df_period = pd.date_range(start=start_date, end=end_date, freq='1M')
df_sampled = df_timerange['query'].resample('1M').count().fillna(0)
df_sampled = df_timerange.reindex(df_period)
print(df_sampled)
</code></pre>
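For comparison, a sketch of one way to get a regular grid with zero-filled gaps: resample the counts first, then reindex the <em>resampled</em> series (not the raw frame) onto a grid built with the same timezone. The dates and the month-start frequency here are illustrative:

```python
import pandas as pd

# Same three sample rows as the csv above, parsed as tz-aware UTC.
idx = pd.to_datetime([
    "2024-03-02T00:00:00Z", "2024-03-18T03:45:00Z", "2024-03-31T12:01:00Z",
])
df = pd.DataFrame({"query": ["query1", "query2", "query3"]}, index=idx)

# Count per month ("MS" = month start), then stretch the result onto the
# full range, filling months without data with 0.
counts = df["query"].resample("MS").count()
full_range = pd.date_range("2023-04-01", "2024-04-01", freq="MS", tz="UTC")
counts = counts.reindex(full_range, fill_value=0)
print(int(counts.sum()))  # 3
```

The key point is that reindexing happens after aggregation, on labels that come from the same frequency and timezone, so the existing counts line up instead of turning into NaN.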
|
<python><pandas><reindex><pandas-resample>
|
2024-04-13 07:44:26
| 1
| 5,344
|
glades
|
78,319,798
| 8,748,098
|
error: 'tf.TensorListSetItem' op is neither a custom op nor a flex op while trying to quantize a model
|
<p>I am trying to learn about quantization, so I was playing with a GitHub repo, trying to quantize its model into int8 format. I used the following code to quantize the model:</p>
<pre><code>modelClass = DTLN_model()
modelClass.build_DTLN_model(norm_stft=False)
modelClass.model.load_weights(model_path)
# tf.saved_model.save(modelClass.model, output_dir)

converter = tf.lite.TFLiteConverter.from_keras_model(modelClass.model)
converter.experimental_new_converter = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS
]
converter._experimental_lower_tensor_list_ops = False
converter.target_spec.supported_types = [tf.int8]
converter.representative_dataset = lambda: generate_representative_data(num_samples)

tflite_model = converter.convert()
with open('saved_model.tflite', 'wb') as f:
    f.write(tflite_model)
</code></pre>
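For context, the converter calls the representative-dataset function to obtain calibration samples. A hypothetical sketch of such a generator (the shape <code>(1, 512)</code>, the default frame length, and the function name are assumptions for illustration, not taken from the DTLN repo):

```python
import numpy as np

def generate_representative_data(num_samples, frame_len=512):
    # Yield one calibration sample per iteration; each sample is a list
    # with one array per model input, matching the input signature.
    for _ in range(num_samples):
        audio = np.random.uniform(-1.0, 1.0, (1, frame_len)).astype(np.float32)
        yield [audio]
```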
<p>For the representative dataset, I converted the data to NumPy arrays, saved them as <code>.npy</code> files, and load them inside <code>generate_representative_data</code> (referenced in the code above).</p>
<p>But after I run the code I get the following error:</p>
<pre><code>error: 'tf.TensorListSetItem' op is neither a custom op nor a flex op
error: failed while converting: 'main':
Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: https://www.tensorflow.org/lite/guide/ops_select
TF Select ops: Angle, Exp, IRFFT, TensorListReserve, TensorListSetItem, TensorListStack
Details:
tf.Angle(tensor<?x?x257xcomplex<f32>>) -> (tensor<?x?x257xf32>) : {device = ""}
tf.Exp(tensor<?x?x257xcomplex<f32>>) -> (tensor<?x?x257xcomplex<f32>>) : {device = ""}
tf.IRFFT(tensor<?x?x257xcomplex<f32>>, tensor<1xi32>) -> (tensor<?x?x512xf32>) : {device = ""}
tf.TensorListReserve(tensor<2xi32>, tensor<i32>) -> (tensor<!tf_type.variant<tensor<?x128xf32>>>) : {device = ""}
tf.TensorListSetItem(tensor<!tf_type.variant<tensor<?x128xf32>>>, tensor<i32>, tensor<?x128xf32>) -> (tensor<!tf_type.variant<tensor<?x128xf32>>>) : {device = "", resize_if_index_out_of_bounds = false}
tf.TensorListStack(tensor<!tf_type.variant<tensor<?x128xf32>>>, tensor<2xi32>) -> (tensor<?x?x128xf32>) : {device = "", num_elements = -1 : i64}
Traceback (most recent call last):
File "/home/palak/Projects/Personal_Projects/integration/main.py", line 50, in <module>
tflite_model = converter.convert()
File "/home/palak/Projects/Personal_Projects/integration/integration_test/lib/python3.10/site-packages/tensorflow/lite/python/lite.py", line 1139, in wrapper
return self._convert_and_export_metrics(convert_func, *args, **kwargs)
File "/home/palak/Projects/Personal_Projects/integration/integration_test/lib/python3.10/site-packages/tensorflow/lite/python/lite.py", line 1093, in _convert_and_export_metrics
result = convert_func(self, *args, **kwargs)
File "/home/palak/Projects/Personal_Projects/integration/integration_test/lib/python3.10/site-packages/tensorflow/lite/python/lite.py", line 1601, in convert
saved_model_convert_result = self._convert_as_saved_model()
File "/home/palak/Projects/Personal_Projects/integration/integration_test/lib/python3.10/site-packages/tensorflow/lite/python/lite.py", line 1582, in _convert_as_saved_model
return super(TFLiteKerasModelConverterV2, self).convert(
File "/home/palak/Projects/Personal_Projects/integration/integration_test/lib/python3.10/site-packages/tensorflow/lite/python/lite.py", line 1371, in convert
result = _convert_graphdef(
File "/home/palak/Projects/Personal_Projects/integration/integration_test/lib/python3.10/site-packages/tensorflow/lite/python/convert_phase.py", line 212, in wrapper
raise converter_error from None # Re-throws the exception.
File "/home/palak/Projects/Personal_Projects/integration/integration_test/lib/python3.10/site-packages/tensorflow/lite/python/convert_phase.py", line 205, in wrapper
return func(*args, **kwargs)
File "/home/palak/Projects/Personal_Projects/integration/integration_test/lib/python3.10/site-packages/tensorflow/lite/python/convert.py", line 984, in convert_graphdef
data = convert(
File "/home/palak/Projects/Personal_Projects/integration/integration_test/lib/python3.10/site-packages/tensorflow/lite/python/convert.py", line 366, in convert
raise converter_error
tensorflow.lite.python.convert_phase.ConverterError: Could not translate MLIR to FlatBuffer. UNKNOWN: /home/palak/Projects/Personal_Projects/integration/main.py:50:1: error: 'tf.Angle' op is neither a custom op nor a flex op
tflite_model = converter.convert()
^
<unknown>:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
/home/palak/Projects/Personal_Projects/integration/main.py:50:1: note: Error code: ERROR_NEEDS_FLEX_OPS
tflite_model = converter.convert()
^
[... the same "'tf.&lt;Op&gt;' op is neither a custom op nor a flex op" / "Error code: ERROR_NEEDS_FLEX_OPS" block repeats for tf.Exp, tf.TensorListReserve, tf.TensorListStack, tf.IRFFT and tf.TensorListSetItem ...]
<unknown>:0: error: failed while converting: 'main':
Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: https://www.tensorflow.org/lite/guide/ops_select
TF Select ops: Angle, Exp, IRFFT, TensorListReserve, TensorListSetItem, TensorListStack
Details:
tf.Angle(tensor<?x?x257xcomplex<f32>>) -> (tensor<?x?x257xf32>) : {device = ""}
tf.Exp(tensor<?x?x257xcomplex<f32>>) -> (tensor<?x?x257xcomplex<f32>>) : {device = ""}
tf.IRFFT(tensor<?x?x257xcomplex<f32>>, tensor<1xi32>) -> (tensor<?x?x512xf32>) : {device = ""}
tf.TensorListReserve(tensor<2xi32>, tensor<i32>) -> (tensor<!tf_type.variant<tensor<?x128xf32>>>) : {device = ""}
tf.TensorListSetItem(tensor<!tf_type.variant<tensor<?x128xf32>>>, tensor<i32>, tensor<?x128xf32>) -> (tensor<!tf_type.variant<tensor<?x128xf32>>>) : {device = "", resize_if_index_out_of_bounds = false}
tf.TensorListStack(tensor<!tf_type.variant<tensor<?x128xf32>>>, tensor<2xi32>) -> (tensor<?x?x128xf32>) : {device = "", num_elements = -1 : i64}
</code></pre>
<p>I think the main issue is this error: <code>error: 'tf.TensorListSetItem' op is neither a custom op nor a flex op</code></p>
<p>I have tried to follow the doc <a href="https://www.tensorflow.org/lite/guide/ops_select" rel="nofollow noreferrer">https://www.tensorflow.org/lite/guide/ops_select</a> and some github issues like <a href="https://github.com/tensorflow/tensorflow/issues/34350#issuecomment-579027135" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/34350#issuecomment-579027135</a> and also went through a similar question <a href="https://stackoverflow.com/q/66157766/8748098">Issue with tf.ParseExampleV2 when converting to Tensorflow Lite : "op is neither a custom op nor a flex op"</a></p>
<p>But none of those seemed to be helpful in my case.</p>
<p>Can anyone help me figure out what I am doing wrong?
Thanks in advance.</p>
|
<python><tensorflow><machine-learning><quantization><quantization-aware-training>
|
2024-04-13 07:02:24
| 0
| 327
|
Niaz Palak
|
78,319,719
| 2,962,794
|
How can I make datetime objects for the year 10,000 and beyond?
|
<p>In Python 3.10.12 (or 3.x), how do I make datetime objects for dates in the year 10,000 and beyond? I get a ValueError when I try:</p>
<pre><code>ValueError: year 10000 is out of range
</code></pre>
<p>Is there some reason this error exists? I mean, does our actual calendar not support the year 10,000, or something?</p>
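The limit is built into the module: <code>datetime.MINYEAR</code> and <code>datetime.MAXYEAR</code> are 1 and 9999, and every constructor validates against them. A quick check:

```python
import datetime

print(datetime.MINYEAR, datetime.MAXYEAR)  # 1 9999

try:
    datetime.date(10000, 1, 1)
except ValueError as exc:
    print(exc)  # year 10000 is out of range
```

So plain <code>datetime</code> objects simply cannot represent years past 9999; anything beyond that needs a different representation (e.g., storing the year as an integer alongside a month/day, or an astronomy-oriented time library).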
|
<python><python-3.x><datetime>
|
2024-04-13 06:24:24
| 1
| 4,809
|
BrΕtsyorfuzthrΔx
|
78,319,661
| 1,413,856
|
Python Module Help DocStrings - Excluding extra documentation
|
<p>I have written a simple module (<code>testmodule</code>) which imports some other modules. When I import it and type:</p>
<pre class="lang-py prettyprint-override"><code>help(testmodule)
</code></pre>
<p>I get all of the help from the imported modules as well.</p>
<p>Is there a way of excluding the extra help?</p>
<p>Here is an example of the problem:</p>
<pre class="lang-py prettyprint-override"><code># file: testmodule.py
'''
Test Module etc
'''
import tkinter

#class TestTK(tkinter.Tk):
#    '''Test Class'''
#    pass
</code></pre>
<p>When testing the code in a python shell:</p>
<pre class="lang-py prettyprint-override"><code>import testmodule
help(testmodule)
# a few lines as expected
</code></pre>
<p>When I uncomment the code for the dummy <code>TestTK</code> class:</p>
<pre class="lang-py prettyprint-override"><code># reload module:
import importlib, sys
importlib.reload(sys.modules['testmodule'])
# try again
help(testmodule)
# hundreds of addition lines from tkinter
</code></pre>
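For what it's worth, a small stdlib-only sketch of where the extra lines come from: <code>help()</code> documents inherited members of every class defined in the module, so a class deriving from <code>tkinter.Tk</code> pulls in tkinter's documentation. The same effect shows up with any base class:

```python
import pydoc

class Base:
    def base_method(self):
        """Documentation that lives on the base class."""

class Child(Base):
    """Only this docstring is defined in 'our' module."""

# The rendered help for Child still includes Base's methods, under a
# "Methods inherited from Base" section.
text = pydoc.render_doc(Child, renderer=pydoc.plaintext)
print("base_method" in text)  # True
```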
|
<python><module><docstring>
|
2024-04-13 05:54:00
| 0
| 16,921
|
Manngo
|
78,319,625
| 11,922,237
|
Snowflake `Table`s vs Snowflake `DataFrame`s in Snowpark
|
<p>Two things I've come across while trying to learn Snowflake Snowpark are <code>Table</code>s and <code>DataFrame</code>s, and the docs mention both.</p>
<p>The docs say that you can load an existing table from Snowflake database after creating a session:</p>
<pre class="lang-py prettyprint-override"><code>connection_parameters = {
"account": credentials["account"],
"user": credentials["username"],
"password": credentials["password"],
}
new_session = Session.builder.configs(connection_parameters).create()
new_session.sql("USE DATABASE test_db;").collect()
my_table = new_session.table("my_table")
</code></pre>
<p>When I print the type of <code>my_table</code>, it says it is of type <code>Table</code>, yet the docs refer to it as a DataFrame.</p>
<p>But then, there is this confusing example in the API reference for the <code>Table</code> class:</p>
<pre class="lang-py prettyprint-override"><code>df1 = session.create_dataframe([[1, 2], [3, 4]], schema=["a", "b"])
df1.write.save_as_table("my_table", mode="overwrite", table_type="temporary")
session.table("my_table").collect()
</code></pre>
<p>So, I have no idea which is which and why we need both if they are different things.</p>
|
<python><database><snowflake-cloud-data-platform>
|
2024-04-13 05:31:27
| 1
| 1,966
|
Bex T.
|
78,319,326
| 2,007,222
|
How numpy.linalg.norm works
|
<p>I am trying to understand how these numbers calculated:</p>
<pre><code>>>> np.linalg.norm([1,2,3,4,5],0)
5.0
>>> np.linalg.norm([1,2,3,4,5],1)
15.0
>>> np.linalg.norm([1,2,3,4,5],2)
7.416198487095663
>>> np.linalg.norm([1,2,3,4,5],3)
6.082201995573399
>>> np.linalg.norm([1,2,3,4,5],4)
5.593654949523078
</code></pre>
<p>How does np.linalg.norm work?</p>
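For a 1-D array and integer <code>ord</code> p &gt;= 1, the result is the p-norm, <code>(sum(|x_i|**p))**(1/p)</code>; <code>ord=0</code> is special-cased as the number of non-zero entries (not a true norm mathematically). The values above can be reproduced by hand:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)

# p=1 gives 15.0, p=2 gives sqrt(55) = 7.416..., p=3 gives 225**(1/3) = 6.082...
for p in (1, 2, 3, 4):
    manual = np.sum(np.abs(x) ** p) ** (1.0 / p)
    print(p, manual, np.isclose(manual, np.linalg.norm(x, p)))

# ord=0 counts the non-zero entries, hence 5.0 for this vector.
print(np.linalg.norm(x, 0) == np.count_nonzero(x))  # True
```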
|
<python><numpy>
|
2024-04-13 01:40:26
| 1
| 1,949
|
Digerkam
|
78,319,072
| 842,476
|
Best way to go through two pandas dfs and using the data in an external function in python
|
<p>I have two pandas dfs: one contains a list of workers and the other a list of tasks, and I plan to assign the tasks to the workers. Each task requires a certain number of workers. The way I am doing it right now is by going through the tasks df row by row, and for each task going through the workers df and assigning workers until the task has enough; then I move to the next task, and so on. I go through the workers one by one and decide whether a worker is assigned based on an external function that returns 1 to assign or 0 to pass.</p>
<p>I am not sure if the way I am approaching this is right or if I should be doing it in a different way. Let me know if you have a suggestion for a cleaner way to assign each task the required number of workers based on the decision of the external function.</p>
<p>The tasks df looks like this:</p>
<pre><code>task_id day shift workers_needed
0 1 4 1 3
1 2 4 2 5
2 3 7 1 4
3 4 5 2 4
</code></pre>
<p>and the workers df looks like this:</p>
<pre><code> worker_id name
0 101 Worker_101
1 102 Worker_102
2 103 Worker_103
3 104 Worker_104
</code></pre>
<p>My current method is in the code below:</p>
<p>You can see the while loop runs while <code>i < len(tasks_df)</code>, but I had to add an if statement to stop the loop once <code>i > len(tasks_df) - 1</code>. I tried several variations of the while condition, but it either misses the last row (if I subtract 1 from the first condition) or goes one row too far (if I remove the second condition).</p>
<pre><code> i = 0
cur_task = tasks_df.iloc[i]['task_id']
needed_workers = tasks_df.iloc[i]['workers_needed']
print(f"Current Task is = {cur_task}")
print(f"needed workers = {needed_workers}")
while i < len(tasks_df):
    # print(f" i = {i}")
    random_int = random.randint(0, 1)
    print(f"random_int = {random_int}")
    if random_int == 1:
        needed_workers -= 1
        print(f"updated needed workers = {needed_workers}")
        if needed_workers == 0:
            i += 1
            if i > len(tasks_df) - 1:
                break
            cur_task = tasks_df.iloc[i]['task_id']
            needed_workers = tasks_df.iloc[i]['workers_needed']
            print(f"Current Task is = {cur_task} ----------------")
            print(f"needed workers = {needed_workers}")
</code></pre>
<p>I am making the decision to assign for now using a random variable that gives 1 or 0. The decision itself is not important now just the way to go through the dfs and use the decision to assign the needed workers to the tasks.</p>
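One arguably cleaner structure (a sketch, where <code>decide</code> stands in for the external function and always assigns here) is to iterate the tasks with <code>itertuples()</code> and draw workers from an iterator, so the manual index bookkeeping and the extra break condition disappear:

```python
import itertools

import pandas as pd

tasks_df = pd.DataFrame({"task_id": [1, 2], "workers_needed": [2, 1]})
workers_df = pd.DataFrame({"worker_id": [101, 102, 103, 104]})

def decide(worker_id, task_id):
    # Stand-in for the external decision function: 1 = assign, 0 = pass.
    return 1

assignments = {}
# cycle() re-offers workers in order; in real code, guard against an
# infinite loop if decide() can reject every worker forever.
worker_pool = itertools.cycle(workers_df["worker_id"])
for task in tasks_df.itertuples():
    assigned = []
    while len(assigned) < task.workers_needed:
        worker = next(worker_pool)
        if decide(worker, task.task_id):
            assigned.append(worker)
    assignments[task.task_id] = assigned

print(assignments)  # {1: [101, 102], 2: [103]}
```

The outer loop ends exactly when the last task row has been processed, so there is no off-by-one case to special-case.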
|
<python><pandas><dataframe>
|
2024-04-12 23:02:57
| 1
| 302
|
Ayham
|
78,319,019
| 20,302,906
|
How to test external fetch API function with connectionerror exception Django
|
<p>My code snippet fetches data from an external API. It works fine, and I have tested the results when there are no connection issues. However, I'm interested in testing my <code>try</code>/<code>except</code> block that handles any connection problem during the function's execution. I'm using the Python 'requests' library and have no complaints about it, but I feel that testing a connection failure will reassure me that my code does raise the exception.</p>
<p>This is the closest <a href="https://stackoverflow.com/questions/20885841/how-do-i-simulate-connection-errors-and-request-timeouts-in-python-unit-tests">StackOverflow answer</a> for Python only, though I couldn't implement it in my code. I also found another answer that suggested using <a href="https://httpretty.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">Httpretty</a>, but I don't think that's the right approach to this question.</p>
<p>Does anyone know how to simulate failures for external API data fetching?</p>
<p><em>views.py</em></p>
<pre><code>def fetch_question(request):
    handler = APIHandler()
    match request.POST["difficulty"]:
        case "easy":
            url = (
                "https://opentdb.com/api.php?amount=1&category=9&difficulty=easy&type=multiple&token="
                + handler.token
            )
        case "medium":
            url = (
                "https://opentdb.com/api.php?amount=1&category=9&difficulty=medium&type=multiple&token="
                + handler.token
            )
        case "hard":
            url = (
                "https://opentdb.com/api.php?amount=1&category=9&difficulty=hard&type=multiple&token="
                + handler.token
            )
    try:
        response = requests.get(url)
    except ConnectionError:  # Want to test this exception
        response.raise_for_status()
    else:
        return render_question(request, response.json())
</code></pre>
<p><em>test_views.py</em></p>
<pre><code>from django.test import TestCase, Client

client = Client()

class QuestionTest(TestCase):
    def test_page_load(self):
        response = self.client.get("/log/question")
        self.assertEqual(response["content-type"], "text/html; charset=utf-8")
        self.assertTemplateUsed(response, "log/question.html")
        self.assertContains(response, "Choose your question level", status_code=200)

    def test_fetch_question(self):  # Test for successful data fetch
        response = self.client.post(
            "/log/question", {"level": "level", "difficulty": "hard"}, follow=True
        )
        self.assertEqual(len(response.context), 2)
        self.assertIn("question", response.context)
        self.assertIn("answers", response.context)
        self.assertEqual(len(response.context["answers"]), 4)
        self.assertTemplateUsed(response, "log/question.html")

    # how to test connection error?
</code></pre>
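One common way to simulate the failure is to patch <code>requests.get</code> with <code>unittest.mock</code> and a <code>side_effect</code>, so the exception is raised without ever touching the network. A minimal sketch of the pattern (the <code>fetch</code> helper stands in for the view above; in a Django test you would patch the name where the view looks it up, e.g. a hypothetical <code>"myapp.views.requests.get"</code>):

```python
from unittest import mock

import requests

def fetch(url):
    # Minimal stand-in for the view's requests.get call.
    return requests.get(url)

# side_effect makes the patched call raise instead of returning a response.
with mock.patch("requests.get",
                side_effect=requests.exceptions.ConnectionError("no route")):
    try:
        fetch("https://opentdb.com/api.php")
        raised = False
    except requests.exceptions.ConnectionError:
        raised = True

print(raised)  # True
```

Note that <code>requests</code> raises <code>requests.exceptions.ConnectionError</code>, which is not the built-in <code>ConnectionError</code>, so the view's <code>except</code> clause may need to catch the requests exception explicitly.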
|
<python><django><exception>
|
2024-04-12 22:41:41
| 0
| 367
|
wavesinaroom
|
78,318,955
| 4,307,142
|
Corrected Cramer's V results in division by zero when n = r
|
<p>I recently found <a href="https://stackoverflow.com/a/39266194">this answer</a> which provides the code of an unbiased version of Cramer's V for computing the correlation of two categorical variables:</p>
<pre><code>import numpy as np
import scipy.stats as ss

def cramers_corrected_stat(confusion_matrix):
    """ calculate Cramers V statistic for categorial-categorial association.
        uses correction from Bergsma and Wicher,
        Journal of the Korean Statistical Society 42 (2013): 323-328
    """
    chi2 = ss.chi2_contingency(confusion_matrix)[0]
    n = confusion_matrix.sum()
    phi2 = chi2 / n
    r, k = confusion_matrix.shape
    phi2corr = max(0, phi2 - ((k-1)*(r-1))/(n-1))
    rcorr = r - ((r-1)**2)/(n-1)
    kcorr = k - ((k-1)**2)/(n-1)
    return np.sqrt(phi2corr / min((kcorr-1), (rcorr-1)))
</code></pre>
<p>However, if the number of samples, <code>n</code>, is equal to the number of categories of the first feature, <code>r</code>, then <code>rcorr = n - (n-1) = 1</code>, which yields a division by zero in <code>np.sqrt(phi2corr / min((kcorr-1), (rcorr-1)))</code> if <code>(kcorr-1)</code> is non-negative. I confirmed this with a simple example:</p>
<pre><code>import pandas as pd

data = [
    {'name': 'Alice', 'occupation': 'therapist', 'favorite_color': 'red'},
    {'name': 'Bob', 'occupation': 'fisherman', 'favorite_color': 'blue'},
    {'name': 'Carol', 'occupation': 'scientist', 'favorite_color': 'orange'},
    {'name': 'Doug', 'occupation': 'scientist', 'favorite_color': 'red'},
]
df = pd.DataFrame(data)

# n = 4 (number of samples), r = 4 (number of unique names), k = 3 (number of unique occupations)
confusion_matrix = pd.crosstab(df['name'], df['occupation'])
print(cramers_corrected_stat(confusion_matrix))
</code></pre>
<p>Output:</p>
<pre><code>/tmp/ipykernel_227998/749514942.py:45: RuntimeWarning: invalid value encountered in scalar divide
return np.sqrt(phi2corr / min( (kcorr-1), (rcorr-1)))
nan
</code></pre>
<p>Is this expected behavior?</p>
<p>If so, how should I use the corrected Cramer's V in cases where <code>n = r</code>, e.g., when all samples have a unique value for some feature?</p>
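One pragmatic option (a judgment call, not something prescribed by Bergsma and Wicher) is to guard the denominator and return NaN explicitly whenever a corrected dimension collapses to 1, so the degenerate case is handled deliberately instead of via a runtime warning:

```python
import math

def safe_sqrt_ratio(phi2corr, kcorr, rcorr):
    # Denominator used by the corrected statistic; it degenerates when a
    # corrected dimension collapses to 1 (e.g., n == r).
    denom = min(kcorr - 1, rcorr - 1)
    if denom <= 0:
        return float("nan")
    return math.sqrt(phi2corr / denom)

print(safe_sqrt_ratio(0.25, 2.0, 2.0))  # 0.5
print(safe_sqrt_ratio(0.5, 3.0, 1.0))   # nan
```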
|
<python><pandas><correlation><categorical-data>
|
2024-04-12 22:20:57
| 1
| 1,167
|
Gabriel Rebello
|
78,318,744
| 984,621
|
How to properly set access to uploaded files to DigitalOcean Spaces cloud storage?
|
<p>I am uploading files through Python scripts to DigitalOcean Spaces (AWS S3-Compatible Object Storage).</p>
<p>Uploading files is working well; the part I am not so sure about is how to properly access those files. I had to set
<code>IMAGES_STORE_S3_ACL = 'public-read'</code></p>
<p>so that I am able to display the image that I uploaded to the storage. Otherwise, I would see a <code>Permission Denied</code> error. The way I display the file is that I concatenate the path to the file, something like <code>file_path = f'{storage_domain}/my_files/{file_name}'</code>.</p>
<p>When I put this path to the browser, it displays the file. But basically, anyone can concatenate this URL and display the files from the storage.</p>
<p>When uploading the files to the storage, there were these outputs in the console:</p>
<pre><code>Making request for OperationModel(name=PutObject) with params: {'url_path': '/path/to/file.jpg', 'query_string': {}, 'method': 'PUT', 'headers': {'x-amz-meta-width': '749', 'x-amz-meta-height': '562', 'x-amz-acl': 'public-read', 'Cache-Control': 'max-age=172800', 'Content-Type': 'image/jpeg', 'User-Agent': 'Botocore/1.34.83 ua/2.0 os/macos#23.4.0 md/arch#arm64 lang/python#3.11.8 md/pyimpl#CPython cfg/retry-mode#legacy', 'Content-MD5': 'something here==', 'Expect': '100-continue'}, 'body': <_io.BytesIO object at 0x10778fbf0>, 'auth_path': '/path/to/file.jpg', 'url': 'https://location.digitaloceanspaces.com/path/to/file.jpg', 'context': {'client_region': 'region', 'client_config': <botocore.config.Config object at 0x107189210>, 'has_streaming_input': True, 'auth_type': 'v4', 's3_redirect': {'redirected': False, 'bucket': 'bucket_name', 'params': {'Bucket': 'bucket_name', 'Key': 'path/to/file.jpg', 'Body': <_io.BytesIO object at 0x10778fbf0>, 'Metadata': {'width': '749', 'height': '562'}, 'ACL': 'public-read', 'CacheControl': 'max-age=172800', 'ContentType': 'image/jpeg'}}, 'input_params': {'Bucket': 'bucket_name', 'Key': 'path/to/file.jpg'}, 'signing': {'region': 'location', 'signing_name': 's3', 'disableDoubleEncoding': True}, 'endpoint_properties': {'authSchemes': [{'disableDoubleEncoding': True, 'name': 'sigv4', 'signingName': 's3', 'signingRegion': 'fra1'}]}}}
Event request-created.s3.PutObject: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x1071891d0>>
Event choose-signer.s3.PutObject: calling handler <function set_operation_specific_signer at 0x106070fe0>
Event before-sign.s3.PutObject: calling handler <function remove_arn_from_signing_path at 0x106073560>
Event before-sign.s3.PutObject: calling handler <bound method S3ExpressIdentityResolver.resolve_s3express_identity of <botocore.utils.S3ExpressIdentityResolver object at 0x1071a8c10>>
Calculating signature using v4 auth.
CanonicalRequest:
PUT
/path/to/file.jpg
cache-control:max-age=172800
content-md5:something==
content-type:image/jpeg
host:location.digitaloceanspaces.com
x-amz-acl:public-read
x-amz-content-sha256:UNSIGNED-PAYLOAD
x-amz-date:20240412T204214Z
x-amz-meta-height:562
x-amz-meta-width:749
cache-control;content-md5;content-type;host;x-amz-acl;x-amz-content-sha256;x-amz-date;x-amz-meta-height;x-amz-meta-width
UNSIGNED-PAYLOAD
StringToSign:
AWS4-HMAC-SHA256
20240412T204214Z
20240412/location/s3/aws4_request
something_here
Signature:
something_here
</code></pre>
<p>This output is coming out of the <code>boto3</code> module (I assume) - I am not sure if I can access this data from my Python script.</p>
<p>However, regarding the proper and safe way of accessing the uploaded files - I think my way of concatenating the URL to the file is not very safe. What's the right way to do it?</p>
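<p>For context, my current understanding is that object stores solve this with expiring, signed URLs rather than guessable public paths. A toy sketch of the idea (plain <code>hmac</code>, not the real AWS SigV4 algorithm; the secret key and paths are made up):</p>

```python
import hashlib
import hmac
import time

SECRET = b"my-secret-key"  # hypothetical signing key; never leaves the server

def presign(path, expires_in, now=None):
    """Toy illustration of an expiring signed URL (NOT real AWS SigV4)."""
    if now is None:
        now = int(time.time())
    expires = now + expires_in
    msg = f"{path}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def verify(url, now):
    """Server-side check: signature must match and must not be expired."""
    path, query = url.split("?", 1)
    params = dict(p.split("=", 1) for p in query.split("&"))
    expires = int(params["expires"])
    msg = f"{path}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return now < expires and hmac.compare_digest(expected, params["sig"])

url = presign("/my_files/photo.jpg", expires_in=3600, now=1000)
print(verify(url, now=2000))   # True: inside the expiry window
print(verify(url, now=10000))  # False: expired
```

<p>I believe S3-compatible clients expose this as "presigned URLs" (boto3 has a <code>generate_presigned_url</code> helper), but I'd like to confirm that this is the recommended approach for Spaces.</p>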
|
<python><amazon-web-services><amazon-s3><digital-ocean><digital-ocean-spaces>
|
2024-04-12 21:13:07
| 1
| 48,763
|
user984621
|
78,318,626
| 1,275,942
|
Python logging: check if a log will print before doing f-string?
|
<p>Suppose I have code I want to be able to run in production or verbose mode. In verbose, I want to print debug information that will slow the program down:</p>
<pre class="lang-py prettyprint-override"><code>import logging
import time
import sys

logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
root = logging.getLogger()

def expensive_verbose_info():
    time.sleep(1)
    return "Useful (but expensive) logging information!"

def my_function():
    logging.info("Entering my_function...")
    logging.debug("debug info: %s", expensive_verbose_info())
    logging.info("Exiting my_function...")

my_function()
</code></pre>
<p>The program will still hit the <code>time.sleep</code> even when the level is INFO, because the argument is evaluated before <code>logging.debug</code> decides to discard the record.</p>
<p>We can make various wrappers around this:</p>
<pre class="lang-py prettyprint-override"><code>class DeferredCall():
    def __init__(self, func, *args, **kwargs):
        self.func = func
        self.args = args
        self.kwargs = kwargs

    def __str__(self):
        return str(self.func(*self.args, **self.kwargs))

logging.debug("debug info: %s", DeferredCall(expensive_verbose_info))
</code></pre>
<p>(This correctly defers the call: at DEBUG level it sleeps and logs the message; at INFO level the expensive call is skipped entirely.)</p>
<p>Or we can do a manual check and some variety of short-circuit/inline if, but that isn't much better than just having <code>if VERBOSE:</code> scattered everywhere around the code</p>
<pre class="lang-py prettyprint-override"><code>root.level <= logging.DEBUG and logging.debug("debug info: %s", expensive_verbose_info())
</code></pre>
<p>It seems like ideally I'd want something like</p>
<pre><code>logging.debug("debug info: %s", lambda: expensive_verbose_info(), execute_callable=True)
</code></pre>
<p>or some way to make a lazy f-string literal? E.g. <code>lf"this will only evaluate when __str__() is called on it"</code></p>
<p>Does anything like this exist in <code>logging</code> (or in the broader python ecosystem?) Are there reasons to not implement this in my own code?</p>
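<p>For completeness, the explicit guard I know about is <code>logger.isEnabledFor()</code>, which at least avoids poking at <code>root.level</code> directly; a small self-contained check:</p>

```python
import io
import logging

logger = logging.getLogger("demo")
logger.addHandler(logging.StreamHandler(io.StringIO()))
logger.setLevel(logging.INFO)

calls = []
def expensive_verbose_info():
    calls.append(1)  # record that the expensive work actually ran
    return "expensive info"

# The argument is only computed when DEBUG is actually enabled:
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("debug info: %s", expensive_verbose_info())

print(len(calls))  # 0: level is INFO, so the expensive call never happened
```

<p>But this still scatters if-checks around the code, which is exactly what I'm hoping to avoid.</p>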
|
<python><lazy-evaluation><python-logging><f-string>
|
2024-04-12 20:41:02
| 0
| 899
|
Kaia
|
78,318,586
| 2,301,970
|
Propagating true entries along axis in an array
|
<p>I have to perform the operation below many times. Using numpy functions instead of loops I usually get a very good performance but I have not been able to replicate this for higher dimensional arrays. Any suggestion or alternative would be most welcome:</p>
<p>I have a boolean array and I would like to propagate the true indeces to the next 2 positions for example:</p>
<p>If this 1 dimensional array (A) is:</p>
<pre><code>import numpy as np
# Number of positions to propagate the array
propagation = 2
# Original array
A = np.array([False, True, False, False, False, True, False, False, False, False, False, True, False])
</code></pre>
<p>I can create an "empty" array, find the indices of the True values with <code>argwhere</code>, propagate them, flatten the result, and assign:</p>
<pre><code>B = np.zeros(A.shape, dtype=bool)

# Compute the indices of the True values and also mark the two positions after each one
idcs_true = np.argwhere(A) + np.arange(propagation + 1)
idcs_true = idcs_true.flatten()
idcs_true = idcs_true[idcs_true < A.size]  # in case the propagation runs past the end of the array
B[idcs_true] = True
# Array
print(f'Original array A = {A}')
print(f'New array (2 true) B = {B}')
</code></pre>
<p>which gives:</p>
<pre><code>Original array A = [False True False False False True False False False False False True
False]
New array (2 true) B = [False True True True False True True True False False False True
True]
</code></pre>
<p>However, this becomes much more complex and fails if for example:</p>
<pre><code>AA = np.array([[False, True, False, False, False, True, False, False, False, False, False, True, False],
[False, True, False, False, False, True, False, False, False, False, False, True, False]])
</code></pre>
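<p>For checking candidate answers, here is the pure-Python reference implementation I'm validating against (row-by-row, so the 2-D case is just a loop over rows):</p>

```python
def propagate(row, k):
    """Pure-Python reference: mark out[j] = True for every j within
    k positions after a True in row (no wrap-around)."""
    out = [False] * len(row)
    for i, v in enumerate(row):
        if v:
            for j in range(i, min(i + k + 1, len(row))):
                out[j] = True
    return out

A = [False, True, False, False, False, True, False,
     False, False, False, False, True, False]
print(propagate(A, 2))
```

<p>For the 2-D case this is just <code>[propagate(row, propagation) for row in AA]</code>, but I'm hoping for a vectorized numpy equivalent.</p>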
<p>Thanks for any advice.</p>
|
<python><numpy><convolution><array-broadcasting>
|
2024-04-12 20:31:06
| 3
| 693
|
Delosari
|
78,318,544
| 8,849,755
|
plotly animation disable auto play
|
<p>Is it possible to disable the auto play when saving an animated plot created with plotly?</p>
<pre class="lang-py prettyprint-override"><code>import plotly.express as px
df = px.data.gapminder()
px.scatter(df, x="gdpPercap", y="lifeExp", animation_frame="year", animation_group="country",
size="pop", color="continent", hover_name="country",
log_x=True, size_max=55, range_x=[100,100000], range_y=[25,90]).write_html('anim.html')
</code></pre>
<p>When I open <code>anim.html</code>, the animation starts automatically. I can stop it by pressing stop, but would like to change the default behavior.</p>
|
<python><plotly>
|
2024-04-12 20:18:28
| 1
| 3,245
|
user171780
|
78,318,498
| 1,909,378
|
gdal: build from source with python bindings in docker container
|
<p>I'm struggling with building a GDAL feature branch from source. My environment is Docker with an Ubuntu base image.</p>
<p>It builds fine, but invoking <code>gdal2tiles.py</code> throws the notorious</p>
<p><code>ModuleNotFoundError: No module named 'osgeo'</code></p>
<p>These are my build steps in my docker container:</p>
<pre><code>RUN cmake -DBUILD_PYTHON_BINDINGS=ON -DCMAKE_BUILD_TYPE=Release ..
RUN cmake --build .
RUN cmake --build . --target install
</code></pre>
<p>Clearly I'm missing something for being able to import <code>osgeo</code> in python. But what is it?</p>
|
<python><docker><ubuntu><cmake><gdal>
|
2024-04-12 20:01:09
| 1
| 5,347
|
creimers
|
78,318,497
| 7,941,944
|
Incorrect Placement visited node set in Graph traversal DFS
|
<p>I am having trouble understanding why placing <code>visited.add((i,j))</code> inside the else branch gives me the right output, while keeping it as shown below gives an incorrect output.
I tried to debug with a simpler example:</p>
<pre><code>[[0,1],
 [1,1]]
</code></pre>
<p>But I am not able to figure out why keeping it outside the else gives the wrong result.
Code:</p>
<pre><code>from typing import List

class Solution:
    def islandPerimeter(self, grid: List[List[int]]) -> int:
        ni, nj = len(grid), len(grid[0])
        visited = set()

        def dfs(i, j):
            nonlocal visited
            if (i, j) in visited:
                return 0
            visited.add((i, j))  # Placing this inside else gives me the right result
            if i < 0 or j < 0 or i >= ni or j >= nj or grid[i][j] == 0:
                return 1
            else:
                #      U            D            L            R
                return dfs(i-1, j) + dfs(i+1, j) + dfs(i, j-1) + dfs(i, j+1)

        for i in range(len(grid)):
            for j in range(len(grid[i])):
                if grid[i][j] == 1:
                    return dfs(i, j)

print(Solution().islandPerimeter([[0, 1],
                                  [1, 1]]))
</code></pre>
|
<python><data-structures><depth-first-search>
|
2024-04-12 20:01:07
| 1
| 333
|
Vijeth Kashyap
|
78,318,447
| 2,054,629
|
Can I know synchronously if an AsyncIterator is at its end?
|
<p>I have an async iterator. <em>While</em> iterating over its elements, I would like to know when I am reading the last element. Is this possible?</p>
<pre class="lang-py prettyprint-override"><code>import asyncio

async def start_stream():
    idx = 0
    while idx < 100:
        yield idx
        idx += 1
        await asyncio.sleep(0.01)

stream = start_stream()
async for i in stream:
    if stream.has_next():  # hypothetical API
        handle(i, is_over=False)
    else:
        handle(i, is_over=True)
</code></pre>
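<p>The closest I've managed so far is a wrapper that reads one element ahead, which may help clarify the behaviour I'm after (<code>with_last_flag</code> is my own sketch, not a stdlib API):</p>

```python
import asyncio

async def with_last_flag(ait):
    """Wrap an async iterator to yield (item, is_last) pairs by
    reading one element ahead."""
    it = ait.__aiter__()
    try:
        prev = await it.__anext__()
    except StopAsyncIteration:
        return
    async for item in it:
        yield prev, False
        prev = item
    yield prev, True

async def demo():
    async def gen():
        for i in range(3):
            yield i
    return [pair async for pair in with_last_flag(gen())]

print(asyncio.run(demo()))  # [(0, False), (1, False), (2, True)]
```

<p>But this delays each item until the next one is produced, which isn't ideal for a live stream, hence the question about a synchronous check.</p>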
|
<python><iterator><python-asyncio>
|
2024-04-12 19:46:22
| 0
| 10,562
|
Guig
|
78,318,223
| 4,398,966
|
change KeyboardInterrupt to 'enter'
|
<p>I have the following code:</p>
<pre><code>import time

def run_indefinitely():
    while True:
        # code to run indefinitely goes here
        print("Running indefinitely...")
        time.sleep(1)

try:
    # Run the code indefinitely until Enter key is pressed
    run_indefinitely()
except KeyboardInterrupt:
    print("Loop interrupted by user")
</code></pre>
<p>Is there a way to break out of the while loop by hitting 'enter' instead of ctrl+C ?</p>
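<p>To clarify the structure I have in mind: a background thread blocks on <code>input()</code> and sets an <code>Event</code> when Enter (or EOF) arrives, while the main loop polls that event. A sketch (the <code>max_seconds</code> cap is only there so the demo always terminates):</p>

```python
import threading
import time

def run_until_enter(max_seconds=None):
    """Loop until the user presses Enter; max_seconds is only a safety
    cap so the demo terminates even without a keypress."""
    stop = threading.Event()

    def wait_for_enter():
        try:
            input()      # blocks until Enter is pressed
        except EOFError:
            pass         # stdin closed, e.g. when the script is piped
        finally:
            stop.set()

    threading.Thread(target=wait_for_enter, daemon=True).start()

    start = time.monotonic()
    iterations = 0
    while True:
        print("Running indefinitely...")
        iterations += 1
        if stop.is_set():
            break
        if max_seconds is not None and time.monotonic() - start > max_seconds:
            break
        stop.wait(0.2)   # like time.sleep(0.2), but wakes early on Enter
    print("Loop interrupted by user")
    return iterations

run_until_enter(max_seconds=2)
```

<p>I'm unsure whether this thread-per-input approach is the idiomatic way, or if there is a cleaner solution.</p>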
|
<python><keyboard-events>
|
2024-04-12 18:49:37
| 3
| 15,782
|
DCR
|
78,318,186
| 193,128
|
How can I make the background of this Python/Tk Treeview table all dark?
|
<p>I'm preparing to write a user interface for a REST interface and I've never used Python/Tk before. I've written a program which shows files in the current folder for now while I figure out the look of the UI. Unfortunately I've got stuck trying to make the whole window Dark. I have white patches showing around the table like so and I wondered if anyone knew how to make them dark?</p>
<p><a href="https://i.sstatic.net/zAYKw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zAYKw.png" alt="enter image description here" /></a></p>
<p>The code looks like this:</p>
<pre><code>#!/usr/bin/python3
import os
import tkinter as tk
from tkinter import ttk
from tkinter import messagebox

def get_files_and_sizes():
    files = os.listdir('.')
    files_and_sizes = {}
    for file in files:
        if os.path.isfile(file):
            size = os.path.getsize(file)
            files_and_sizes[file] = size
    return files_and_sizes

def display_details(file_name):
    details_window = tk.Toplevel()
    details_window.title("File Details")
    details_window.configure(background="#333333")
    label = ttk.Label(details_window, text="File Name: " + file_name, background="#333333", foreground="#CCCCCC")
    label.pack(padx=10, pady=10)
    close_button = ttk.Button(details_window, text="Close", command=details_window.destroy, style="Dark.TButton")
    close_button.pack(pady=5)

def display_files_and_sizes():
    def refresh():
        # Clear the treeview
        for row in tree.get_children():
            tree.delete(row)
        # Refresh files and sizes
        files_and_sizes = get_files_and_sizes()
        index = 1
        for file, size in files_and_sizes.items():
            tree.insert("", index, text=str(index), values=(file, size))
            index += 1

    def close():
        root.destroy()

    def show_details():
        item = tree.selection()[0]
        file_name = tree.item(item, "values")[0]
        display_details(file_name)

    def sort_column(col, reverse):
        data = [(tree.set(child, col), child) for child in tree.get_children('')]
        data.sort(reverse=reverse)
        for index, (val, child) in enumerate(data):
            tree.move(child, '', index)
        tree.heading(col, command=lambda: sort_column(col, not reverse))

    files_and_sizes = get_files_and_sizes()

    root = tk.Tk()
    root.title("Files and Sizes")
    root.configure(background="#333333")
    root.geometry("1024x768")

    # Define style for columns
    style = ttk.Style()
    style.configure("Dark.Treeview.Heading", background="#666666", foreground="#CCCCCC", font=('Helvetica', 10, 'bold'))
    style.configure("Dark.Treeview", background="#333333", foreground="#CCCCCC", font=('Helvetica', 10))
    style.configure("Blue.Treeview.Cell", foreground="#6FA1D3")

    # Define style for buttons
    style.configure("Dark.TButton", background="#444444", foreground="#CCCCCC")

    tree = ttk.Treeview(root, columns=("File", "Size"), style="Dark.Treeview")
    tree.heading("#0", text="Index", anchor=tk.W, command=lambda: sort_column("#0", False))
    tree.heading("File", text="File", anchor=tk.W, command=lambda: sort_column("File", False))
    tree.heading("Size", text="Size (bytes)", anchor=tk.W, command=lambda: sort_column("Size", False))
    tree["show"] = "headings"  # Hide the default empty column at the beginning

    index = 1
    for file, size in files_and_sizes.items():
        tree.insert("", index, text=str(index), values=(file, size))
        index += 1

    # Set style for the first column cells
    tree.tag_configure("blue", foreground="#6FA1D3")
    for i in range(1, len(files_and_sizes) + 1):
        tree.tag_configure(str(i), foreground="#6FA1D3")

    tree.pack(expand=True, fill=tk.BOTH)

    refresh_button = ttk.Button(root, text="Refresh", command=refresh, style="Dark.TButton")
    refresh_button.pack(side=tk.LEFT, padx=5, pady=5)

    close_button = ttk.Button(root, text="Close", command=close, style="Dark.TButton")
    close_button.pack(side=tk.RIGHT, padx=5, pady=5)

    # Right-click menu
    menu = tk.Menu(root, tearoff=0, bg="#555555", fg="#CCCCCC")
    menu.add_command(label="Details", command=show_details)

    def popup(event):
        if tree.identify_region(event.x, event.y) == "cell":
            menu.post(event.x_root, event.y_root)

    tree.bind("<Button-3>", popup)

    root.mainloop()

if __name__ == "__main__":
    display_files_and_sizes()
</code></pre>
|
<python><tkinter><tk-toolkit>
|
2024-04-12 18:42:45
| 0
| 32,596
|
Benj
|
78,318,135
| 610,569
|
How to save the LLM2Vec model as a HuggingFace PreTrainedModel object?
|
<p>Typically, we should be able to save a merged base + PEFT model, like this:</p>
<pre class="lang-py prettyprint-override"><code>import torch
from transformers import AutoTokenizer, AutoModel, AutoConfig
from peft import PeftModel
# Loading base MNTP model, along with custom code that enables bidirectional connections in decoder-only LLMs
tokenizer = AutoTokenizer.from_pretrained(
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp"
)
config = AutoConfig.from_pretrained(
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp", trust_remote_code=True
)
model = AutoModel.from_pretrained(
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp",
trust_remote_code=True,
config=config,
torch_dtype=torch.bfloat16,
device_map="cuda" if torch.cuda.is_available() else "cpu",
)
model = PeftModel.from_pretrained(
model,
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp",
)
model = model.merge_and_unload() # This can take several minutes on cpu
model.save_pretrained("LLM2Vec-Mistral-7B-Instruct-v2-mnt-merged")
</code></pre>
<p>but it's throwing an error (that looks similar to <a href="https://github.com/huggingface/transformers/issues/26972" rel="nofollow noreferrer">https://github.com/huggingface/transformers/issues/26972</a>)</p>
<p>[out]:</p>
<pre class="lang-py prettyprint-override"><code>/usr/local/lib/python3.10/dist-packages/transformers/integrations/peft.py:391: FutureWarning: The `active_adapter` method is deprecated and will be removed in a future version.
warnings.warn(
---------------------------------------------------------------------------
UnboundLocalError Traceback (most recent call last)
[<ipython-input-3-db27a2801af8>](https://localhost:8080/#) in <cell line: 1>()
----> 1 model.save_pretrained("LLM2Vec-Mistral-7B-Instruct-v2-mnt-merged")
3 frames
[/usr/local/lib/python3.10/dist-packages/transformers/integrations/peft.py](https://localhost:8080/#) in active_adapters(self)
383
384 # For previous PEFT versions
--> 385 if isinstance(active_adapters, str):
386 active_adapters = [active_adapters]
387
UnboundLocalError: local variable 'active_adapters' referenced before assignment
</code></pre>
<hr />
<p>Tested on:</p>
<pre><code>transformers==4.38.2
peft==0.10.0
accelerate==0.29.2
</code></pre>
<h3>How to save the LLM2Vec model as a HuggingFace PreTrainedModel object?</h3>
|
<python><huggingface-transformers><large-language-model><mistral-7b>
|
2024-04-12 18:32:25
| 1
| 123,325
|
alvas
|
78,317,989
| 14,501,168
|
Is it possible to fine-tune a pretrained word embedding model like vec2word?
|
<p>I'm working on semantic matching in my search engine system. I saw that word embedding can be used for this task. However, my dataset is very limited and small, so I don't think that training a word embedding model such as word2vec from scratch will yield good results. As such, I decided to fine-tune a pre-trained model with my data.</p>
<p>However, I can't find a lot of information, such as articles or documentation, about fine-tuning. Some people even say that it's impossible to fine-tune a word embedding model.</p>
<p>This raises my question: is fine-tuning a pre-trained word embedding model possible and has anyone tried this before? Currently, I'm stuck and looking for more information. Should I try to train a word embedding model from scratch or are there other approaches?</p>
|
<python><nlp><artificial-intelligence><word2vec><word-embedding>
|
2024-04-12 17:57:56
| 2
| 323
|
th3plus
|
78,317,726
| 10,410,934
|
Issue with calling parent method of Python subclass?
|
<pre><code>import abc

class Parent(abc.ABC):
    def is_valid(self, thing):
        return thing is not None

class Child(Parent):
    def is_valid(self, thing):
        return super(Parent, self).is_valid(thing) and thing > 5

a = Child()
print(a.is_valid(6))
</code></pre>
<p>This gives <code>AttributeError: 'super' object has no attribute 'is_valid'</code>.</p>
<p>What am I doing wrong?</p>
|
<python><inheritance>
|
2024-04-12 16:50:27
| 1
| 396
|
Stevie
|
78,317,566
| 3,981,447
|
How to insert a specific variable into the log formatter, avoiding the usual extra parameter?
|
<p>How can I get an arbitrary parameter into <code>logging.Formatter</code>?
Imagine that you have several threads executing some code, each of them writing logs. This includes calls into important third-party libraries, whose files contain nothing more than a plain <code>import logging</code>.
I'd like to have some way to say "all log messages in this function call (including child calls) will contain a <code>task_id</code> which I also pass to the function". So something like this:</p>
<pre class="lang-py prettyprint-override"><code>*** boo.py ***
import logging

def fn():
    logging.debug('Some module debug message')
    logging.info('Some module info message')

*** main.py ***
import logging
import boo

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

formatter = logging.Formatter('%(asctime)s - %(fooid)s - %(levelname)s - %(message)s')

file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(formatter)

logger.addHandler(file_handler)

def foo(foo_id: str):
    logging.trace_magic(foo_id)  # imaginary API
    logger.debug('Debug message')
    logger.info('Info message')
    logger.warning('Warning message')
    logger.error('Error message')
    logger.critical('Critical message')
    boo.fn()

foo('test run')
</code></pre>
<p>To have something like this as an output</p>
<pre><code>2024-04-12 19:05:49,792 - test run - DEBUG - Debug message
2024-04-12 19:05:49,792 - test run - INFO - Info message
2024-04-12 19:05:49,792 - test run - WARNING - Warning message
2024-04-12 19:05:49,792 - test run - ERROR - Error message
2024-04-12 19:05:49,792 - test run - CRITICAL - Critical message
2024-04-12 19:05:49,792 - test run - DEBUG - Some module debug message
2024-04-12 19:05:49,792 - test run - INFO - Some module info message
</code></pre>
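<p>The closest mechanism I've found so far combines <code>contextvars</code> with <code>logging.setLogRecordFactory</code>, which stamps every record, including ones from third-party modules that only call <code>logging.*</code>. A self-contained sketch (field name <code>fooid</code> as in my formatter above), though I'm not sure it's the idiomatic approach:</p>

```python
import contextvars
import io
import logging

task_id = contextvars.ContextVar("task_id", default="-")

# Attach the current task_id to every LogRecord, including records
# emitted by third-party modules that just call logging.* directly.
old_factory = logging.getLogRecordFactory()
def record_factory(*args, **kwargs):
    record = old_factory(*args, **kwargs)
    record.fooid = task_id.get()
    return record
logging.setLogRecordFactory(record_factory)

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(fooid)s - %(levelname)s - %(message)s"))
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.DEBUG)

def foo(foo_id):
    token = task_id.set(foo_id)   # scoped to this call (and child calls)
    try:
        logging.info("Info message")  # no explicit extra= needed
    finally:
        task_id.reset(token)

foo("test run")
print(buf.getvalue().strip())  # test run - INFO - Info message
```

<p>Since <code>contextvars</code> values are per-thread (and per-async-task), this should keep the IDs of concurrent runs separate, but I'd welcome corrections.</p>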
|
<python><logging><python-logging>
|
2024-04-12 16:18:43
| 0
| 370
|
Vlad Kisly
|
78,317,267
| 451,460
|
unknown file type '.pxd' when installing my package
|
<p>I'm trying to install a package that I'm maintaining (pygtftk). I'm using the latest version and would like to go on with the development, but I'm not able to install it under Python 3.10.14 due to some error with a pxd file (I'm clearly no expert in Cython but I will work on it...). When running:</p>
<pre><code> git clone git@github.com:dputhier/pygtftk.git
cd pygtftk
python setup.py install --user
</code></pre>
<p>It complains about "unknown file type '.pxd'".</p>
<pre><code> building 'pygtftk.stats.multiprocessing.multiproc' extension
error: unknown file type '.pxd' (from 'pygtftk/stats/multiprocessing/multiproc.pxd')
</code></pre>
<p>Any hint would be greatly appreciated.</p>
<p>Best</p>
|
<python><cython><setup.py>
|
2024-04-12 15:25:03
| 1
| 804
|
dputhier
|
78,317,220
| 1,324,631
|
How to apply type annotations to variables in a tuple assignment statement?
|
<p>How should one annotate the types of variables assigned to in a tuple assignment?</p>
<p>As a minimal example:</p>
<pre><code>def foo() -> Tuple[Any, Any]:
    return "x", 2

a, b = foo()
</code></pre>
<p>In context, I'm able to provide tighter type bounds on <code>a</code> and <code>b</code> than are automatically derived, and it's desirable to be able to express this in the program in order to better communicate intent and better leverage static analysis and autocompletion tools.</p>
<p><strong>How should one annotate or hint the types of variables assigned to in this type of tuple assignment statement?</strong></p>
<hr />
<p>Two things that <em>don't</em> work are individual <a href="https://peps.python.org/pep-0526/" rel="nofollow noreferrer">PEP 526</a> annotations, or one PEP 526 annotation on the tuple being assigned to:</p>
<pre><code>def bar():
    a: str, b: int = foo()
    #    ^
    # SyntaxError: invalid syntax
</code></pre>
<pre><code>def bar():
    a, b: Tuple[str, int] = foo()
    #   ^
    # SyntaxError: only single target (not tuple) can be annotated
</code></pre>
<p>Using a <a href="https://peps.python.org/pep-0484/" rel="nofollow noreferrer">PEP 484</a> annotation on the statement also doesn't work: At the very least, it's <em>tolerated</em> at runtime, but Pylance doesn't pick up any useful information from it.</p>
<pre><code>def bar():
    a, b = foo()  # type: Tuple[str, int]
    # ~~~~~~~~~~~~~~~
    # Pylance: Warning: Type annotation not supported for this statement
</code></pre>
<p>One thing that <em>does</em> work is to apply individual PEP 526 annotations to <code>a</code> and <code>b</code> on their own lines:</p>
<pre><code>def bar():
    a: str
    b: int
    a, b = foo()
</code></pre>
<p>But that's now three times as many lines.
<strong>Is there a way to do this in just one line?</strong></p>
<p>Additionally, as an extra bonus, you can't simply make a 1:1 translation from that to PEP 484 (e.g. for supporting python versions <= 3.5):</p>
<pre><code>def bar():
    a  # type: str
    # ^
    # UnboundLocalError: cannot access local variable 'a' where it is not associated with a value
    b  # type: int
    a, b = foo()
</code></pre>
<p>As a workaround you can explicitly assign <code>None</code> to make them valid statements:</p>
<pre><code>def bar():
    a = None  # type: str
    b = None  # type: int
    a, b = foo()
</code></pre>
<p>But that introduces a behavioral difference, in that if <code>foo()</code> raises or returns something that can't be unpacked into <code>a</code> and <code>b</code>, then the code ends up leaving them as <code>None</code> in violation of the non-None invariant; if you wanted to leave them unbound you'd need to explicitly delete them if there's an exception:</p>
<pre><code>def bar():
    try:
        a = None  # type: str
        b = None  # type: int
        a, b = foo()
    except Exception as e:
        del a, b
        raise e
</code></pre>
<p>And now that's officially going too far. <strong>How can one write a PEP 484 hint for a variable's type without binding that variable?</strong></p>
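<p>For comparison, the closest one-liner I've found is <code>typing.cast</code>, though it asserts the type of the tuple expression rather than annotating <code>a</code> and <code>b</code> themselves, and it performs no runtime check:</p>

```python
from typing import Any, Tuple, cast

def foo() -> Tuple[Any, Any]:
    return "x", 2

# One line: the checker treats the result as Tuple[str, int],
# so a is inferred as str and b as int. Purely a static assertion.
a, b = cast(Tuple[str, int], foo())
print(a, b)  # x 2
```

<p>I'd still prefer a form that annotates the targets directly, if one exists.</p>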
|
<python><python-3.x><python-typing>
|
2024-04-12 15:16:03
| 0
| 4,212
|
AJMansfield
|
78,316,928
| 3,585,575
|
gh has different output when captured with Python subprocess.run
|
<p>Within Python, I am trying to capture the full <code>stderr</code> output produced from the shell command:</p>
<pre><code>$ gh repo fork --remote
! knoepfel/larvecutils already exists
✓ Using existing remote origin
</code></pre>
<p>The python code looks like:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
fork = subprocess.run(["gh", "repo", "fork", "--remote"], capture_output=True, text=True)
print(fork)
</code></pre>
<p>But what is printed to the screen is:</p>
<pre><code>CompletedProcess(args=['gh', 'repo', 'fork', '--remote'],
returncode=0, stdout='',
stderr='knoepfel/larvecutils already exists\n')
</code></pre>
<p>In other words, it looks like the output after the newline has been truncated, <em>and</em> the leading <code>!</code> is not included. I'm wondering if this is because the <code>!</code> and the <code>✓</code> are "special" (you don't see it above, but those characters are specially colored by <code>gh</code>).</p>
<p>Any ideas?</p>
|
<python><github-cli>
|
2024-04-12 14:28:17
| 2
| 1,710
|
Kyle Knoepfel
|
78,316,919
| 6,213,343
|
Polars: Replace parts of dataframe with other parts of dataframe
|
<p>I'm looking for an efficient way to copy / replace parts of a dataframe with other parts of the same dataframe in Polars.</p>
<p>For instance, in the following minimal example dataframe</p>
<pre class="lang-py prettyprint-override"><code>pl.DataFrame({
"year": [2020,2021,2020,2021],
"district_id": [1,2,1,2],
"distribution_id": [1, 1, 2, 2],
"var_1": [1,2,0.1,0.3],
"var_N": [1,2,0.3,0.5],
"unrelated_var": [0.2,0.5,0.3,0.7],
})
</code></pre>
<p>I'd like to replace all column values of "var_1" & "var_N" where the "distribution_id" = 2 with the corresponding values where the "distribution_id" = 1.</p>
<p>This is the desired result:</p>
<pre class="lang-py prettyprint-override"><code>pl.DataFrame({
"year": [2020,2021,2020,2021],
"district_id": [1,2,1,2],
"distribution_id": [1, 1, 2, 2],
"var_1": [1,2,1,2],
"var_N": [1,2,1,2],
"unrelated_var": [0.2,0.5,0.3,0.7],
})
</code></pre>
<p>I tried to use a "when" expression, but it fails with "polars.exceptions.ShapeError: shapes of <code>self</code>, <code>mask</code> and <code>other</code> are not suitable for <code>zip_with</code> operation"</p>
<pre class="lang-py prettyprint-override"><code>columns_to_copy = ["var_1", "var_N"]

df = df.with_columns([
    pl.when(pl.col("distribution_id") == 2)
    .then(df.filter(pl.col("distribution_id") == 1)[col])
    .otherwise(pl.col(col))
    .alias(col)
    for col in columns_to_copy
])
</code></pre>
<p>Here's what I used to do with SQLAlchemy:</p>
<pre class="lang-py prettyprint-override"><code>table_alias = table.alias("table_alias")
stmt = table.update().\
where(table.c.year == table_alias.c.year).\
where(table.c.d_id == table_alias.c.d_id).\
where(table_alias.c.distribution_id == 1).\
where(table.c.distribution_id == 2).\
values(var_1=table_alias.c.var_1,
var_n=table_alias.c.var_n)
</code></pre>
<p>Thanks a lot for your help!</p>
|
<python><dataframe><python-polars>
|
2024-04-12 14:27:17
| 4
| 626
|
Christoph Pahmeyer
|
78,316,856
| 3,611,164
|
Dynamically create and display ipywidgets, failing in databricks notebook
|
<p>Goal: An array of ipywidgets, that can be extended through a button-click on the UI.</p>
<pre class="lang-py prettyprint-override"><code>import ipywidgets as widgets
from IPython.display import display

# Keeps track of default and dynamically added widgets
widgets_list = [widgets.Text(value='Give me')]

w_out_widgets_list = widgets.Output()
# display defaults
w_out_widgets_list.append_display_data(widgets.HBox(widgets_list))

def add_new_widget(b):
    with w_out_widgets_list:
        widgets_list.append(widgets.Text(value='more and '))
        w_out_widgets_list.clear_output()
        display(widgets.HBox(widgets_list))

w_new_widget = widgets.Button(description='Add Widget')
w_new_widget.on_click(add_new_widget)

display(widgets.VBox([w_out_widgets_list, w_new_widget]))
</code></pre>
<p>This is working as expected in my locally running jupyter notebook.</p>
<p>Within the databricks notebook, a strange behaviour is observable:</p>
<ul>
<li>Default widget creation works fine</li>
<li>Button click appends a widget to the <code>widget_list</code>, but does not update the displayed widgets.</li>
<li>Upon a second click, the view updates by displaying the now three widgets below the default one.</li>
</ul>
<p><a href="https://i.sstatic.net/E5Fm9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E5Fm9.png" alt="Not understood behavior within databricks" /></a></p>
<p>ipywidgets are on version 7.7.2 there.</p>
<p>Any ideas on what the cause of this behavior is?</p>
|
<python><jupyter-notebook><databricks><ipywidgets><databricks-notebook>
|
2024-04-12 14:14:21
| 1
| 366
|
Fabitosh
|
78,316,845
| 1,726,805
|
Counting number of unique values in groups
|
<p>I have data where for multiple years, observations <em>i</em> are categorized in <em>cat</em>. An observation <em>i</em> can be in multiple categories in any year, but is unique across years. I am trying to count unique values for <em>i</em> by <em>year</em>, by <em>cat</em>, and by <em>year</em> and <em>cat</em>.</p>
<p>I'm learning Python (v3.12) & Pandas (v2.2.1). I can make this work, but only by creating separate tables for the counts, and merging them back in with the main data. See the example below. I suspect there is a better way to do this. Is there, and, if so, how?</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{'year': [2020,2020,2020,2021,2021,2022,2023,2023,2023,2023],
'cat': [1,1,2,2,3,3,1,2,3,4],
'i': ['a','a','b','c','d','e','f','f','g','g']
})
df
df_cat = df.groupby('cat')['i'].nunique()
df_year = df.groupby('year')['i'].nunique()
df_catyear = df.groupby(['cat', 'year'])['i'].nunique()
df_merged = df.merge(df_cat, how='left', on='cat').rename(columns={'i_x': 'i', 'i_y': 'n_by_cat'})
df_merged = df_merged.merge(df_year, how='left', on='year').rename(columns={'i_x': 'i', 'i_y': 'n_by_year'})
df_merged = df_merged.merge(df_catyear, how='left', on=['cat', 'year']).rename(columns={'i_x': 'i', 'i_y': 'n_by_catyear'})
</code></pre>
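<p>For comparison, a sketch of the same counts using <code>groupby().transform('nunique')</code>, which aligns each group's count back onto the original rows, so no merges are needed (same example data):</p>

```python
import pandas as pd

df = pd.DataFrame(
    {'year': [2020, 2020, 2020, 2021, 2021, 2022, 2023, 2023, 2023, 2023],
     'cat': [1, 1, 2, 2, 3, 3, 1, 2, 3, 4],
     'i': ['a', 'a', 'b', 'c', 'd', 'e', 'f', 'f', 'g', 'g']})

# transform returns a result aligned with df's index, so the counts
# land directly in new columns without any merge/rename step
df['n_by_cat'] = df.groupby('cat')['i'].transform('nunique')
df['n_by_year'] = df.groupby('year')['i'].transform('nunique')
df['n_by_catyear'] = df.groupby(['cat', 'year'])['i'].transform('nunique')
```

<p>This produces the same columns as the merge approach above; whether it fits depends on wanting the counts attached to the original frame, which both approaches do.</p>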
|
<python><pandas><group-by>
|
2024-04-12 14:11:33
| 1
| 609
|
Matthijs
|
78,316,719
| 1,029,902
|
Algolia search function returns only first 20 results
|
<p>I have this Python function to return all verified venues using Algolia:</p>
<pre><code>def get_all_venues():
try:
page = 0
hits_per_page = 1000 # Increase this value
while True:
result = index.search("", {
"filters": "is_verified=1",
"page": page,
"hitsPerPage": hits_per_page
})
hits = result["hits"]
if not hits:
break # Break the loop if there are no more hits
with open("venues.csv", "a") as file:
writer = csv.writer(file)
for hit in hits:
venue_id = hit["venue_id"]
venue_name = hit["venue_name"]
is_verified = hit["is_verified"]
fields = [venue_id, venue_name, is_verified]
writer.writerow(fields)
print(str(venue_id) + ": " + str(venue_name))
page += 1 # Move to the next page
except Exception as e:
print("An error occurred:", str(e))
get_all_venues()
</code></pre>
<p>When I run it, it returns only the first 20 results. I know there are at least 20000 results in the dataset. I know this because I have tried an alternative method.</p>
<p><strong>What I have tried</strong></p>
<p>I wrote a function which takes a <code>venue_id</code> and manually checks if the <code>is_verified</code> value is equal to 1 and if it is, then it writes to the csv. However, this is very tedious as I have to set a range of <code>venue_id</code> values in the millions and check every single record which makes the script take days. I am certain the filter is faster, so I want to use the filter instead.</p>
|
<python><search><filtering><algolia>
|
2024-04-12 13:52:00
| 0
| 557
|
Tendekai Muchenje
|
78,316,559
| 451,460
|
Get a "clang: error: no such file or directory" when building my Python package
|
<p>I'm trying to install a package that I'm maintaining (<a href="https://github.com/dputhier/pygtftk" rel="nofollow noreferrer">pygtftk</a>). I'm using the latest version and would like to continue the development that I left a few months ago (to prepare Python 3.10 and 3.11 versions), but I'm no longer able to install it due to a weird clang/gcc error (I tried both under Unix and OSX). I'm using Python 3.9.15 under Unix.
When running:</p>
<pre><code> git clone git@github.com:dputhier/pygtftk.git
cd pygtftk
python setup.py install --user
</code></pre>
<p>It complains about not finding <code>exclude.cpp</code> in the current directory.</p>
<pre><code> gcc -pthread -B ...
gcc: error: exclude.cpp: No such file or directory.
</code></pre>
<p>In fact this file is located in the following directory:</p>
<pre><code> ./pygtftk/stats/intersect/read_bed/exclude.cpp
</code></pre>
<p>And setup.py indicates that it is located in that directory:</p>
<pre><code> cython_ologram_3 = Extension(name='pygtftk.stats.intersect.read_bed.read_bed_as_list',
sources=["pygtftk/stats/intersect/read_bed/read_bed_as_list.pyx",
"pygtftk/stats/intersect/read_bed/exclude.cpp"], # Include custom Cpp code
extra_compile_args=extra_comp_cython, extra_link_args=extra_link_cython,
include_dirs=[np.get_include()],
language='c++')
</code></pre>
<p>I have been searching but found no way to fix it.</p>
<p>Any idea would be greatly appreciated.
Best</p>
|
<python><gcc><compilation><package><clang>
|
2024-04-12 13:23:45
| 0
| 804
|
dputhier
|
78,316,555
| 4,971,866
|
FastAPI Depends(func) returns a dataclass but the typing is not right
|
<p>I created a dependency function to verify token in the header and return info from the token.</p>
<pre class="lang-py prettyprint-override"><code>async def require_auth(res:Response, cred: HTTPAuthorizationCredentials = Depends(HTTPBearer(auto_error=True))):
try:
        # 3rd-party service verifies the token and returns user info
decoded = ...
except:
raise HTTPException(...)
return DecodedUser(**decoded)
</code></pre>
<p>here <code>decoded</code> is a <code>dict</code> contains user info and <code>DecodedUser</code> is a dataclass:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass(slots=True)
class DecodedUser:
id: str
email: str
...
</code></pre>
<p>but when using this in api:</p>
<pre class="lang-py prettyprint-override"><code>@app.get('/')
async def hello(user=Annotated[DecodedUser, Depends(require_auth)]):
user.id # this line
</code></pre>
<p>vscode pylance shows a red line under the <code>id</code> and complains: <code>Cannot access member "id" for type "type[Annotated]" \n Member "id" is unknown</code>,
and hovering over <code>user</code> shows its type is <code>type[DecodedUser]</code>, when it should be <code>DecodedUser</code>.</p>
<p>What am I doing wrong?</p>
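<p>One detail worth isolating (an illustrative aside, not FastAPI-specific): <code>user=Annotated[...]</code> uses <code>=</code>, which makes the <code>Annotated</code> alias the parameter's <em>default value</em>, whereas <code>user: Annotated[...]</code> with <code>:</code> records it as the type annotation. A plain-Python sketch of the difference:</p>

```python
from typing import Annotated, get_type_hints

class User:
    pass

# '=' assigns the Annotated alias object itself as a default value...
def with_default(user=Annotated[User, "demo"]):
    return user

# ...while ':' records it as an annotation, with no default at all
def with_annotation(user: Annotated[User, "demo"]):
    return user

print(with_default())                   # the alias itself, not a User instance
print(get_type_hints(with_annotation))  # annotation resolves to the User class
```

<p>The names <code>User</code> and <code>"demo"</code> here are placeholders for illustration only.</p>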
|
<python><fastapi><python-typing>
|
2024-04-12 13:23:07
| 1
| 2,687
|
CSSer
|
78,316,548
| 937,440
|
Problems debugging using the Visual Studio Code command line debugger
|
<p>I am fairly new to VS Code (IDE) and Python (I have plenty of experience with Visual Studio and C#). I have written the following Hello World type program (test.py) using VS Code:</p>
<pre><code>import sys
def main():
print(sys.argv)
print(sys.argv[1])
print(sys.argv[2])
print("hello world")
if __name__ == "__main__":
main()
</code></pre>
<p>The launch.json looks like this:</p>
<pre><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python Debugger: Current File",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"args": ["parameter1", "parameter2"]
}
]
}
</code></pre>
<p>If I select Run/Start Debugging then everything works as expected i.e. the arguments passed are printed to the terminal in VS Code. If I add a breakpoint then the code stops as expected at the breakpoint.</p>
<p>I have come across this webpage and specifically the "Command line debugging section" (<a href="https://code.visualstudio.com/docs/python/debugging" rel="nofollow noreferrer">https://code.visualstudio.com/docs/python/debugging</a>). I tried this (as stated on the webpage):</p>
<ol>
<li><p>Open terminal</p>
</li>
<li><p>Type: <code>python -m pip install --upgrade debugpy</code></p>
</li>
<li><p>Press enter</p>
</li>
<li><p>Amend the launch.json to this:</p>
<pre><code>{
    "name": "Python Debugger: Attach",
    "type": "debugpy",
    "request": "attach",
    "connect": {
        "host": "localhost",
        "port": 5678
    },
    "args": ["parameter1", "parameter2"]
}
</code></pre>
</li>
<li><p>If I Run/Start debugging at this stage then everything still works as expected i.e. the arguments passed are printed to the terminal in VS Code and the code stop as expected at the breakpoint.</p>
</li>
<li><p>Type: <code>python -m debugpy --listen 0.0.0.0:5678 ./test.py</code></p>
</li>
</ol>
<p>The code errors with this:</p>
<pre><code>0.01s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
['./test.py']
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\debugpy\__main__.py", line 39, in <module>
cli.main()
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\debugpy\server\cli.py", line 430, in main
run()
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\debugpy\server\cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "./test.py", line 10, in <module>
main()
File "./test.py", line 5, in main
print(sys.argv[1])
~~~~~~~~^^^
IndexError: list index out of range
</code></pre>
<p>If I comment out the following lines, then the code runs from the command line, however breakpoints are not honoured:</p>
<pre><code>print(sys.argv)
print(sys.argv[1])
print(sys.argv[2])
</code></pre>
<p>My two concerns are:</p>
<ol>
<li>Why can't I provide command line arguments when command line debugging? It works if you pass the arguments via a terminal command like this: <code>python -m debugpy --listen 0.0.0.0:5678 ./test.py "parameter1","parameter2"</code>. It appears the <code>args</code> in the <code>launch.json</code> are ignored.</li>
<li>Why aren't breakpoints honoured when command line debugging?</li>
</ol>
<p>I have tried adding <code>-Xfrozen_modules=off</code> to help with the breakpoint issue (as suggested by the terminal message above), however it makes no difference. I have also tried adding the <code>--wait-for-client</code> flag (mentioned in the link I provided) to help with the breakpoint issue, however this also made no difference.</p>
<p>I am using Visual Studio Code 1.86, debugpy 24.077.775 and Python 3.12.3. Windows 10 64 bit.</p>
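<p>Regarding the first concern, one thing that can be verified independently of debugpy: arguments placed after the script on the command line arrive in <code>sys.argv</code> as space-separated tokens, so a comma-joined string reaches the script as a single argument. A self-contained check:</p>

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# a tiny throwaway script that just reports its own arguments
script = textwrap.dedent("""\
    import sys
    print(sys.argv[1:])
""")
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script)
    path = f.name

# space-separated on the command line: two distinct argv entries
out = subprocess.run([sys.executable, path, "parameter1", "parameter2"],
                     capture_output=True, text=True)
os.unlink(path)
print(out.stdout.strip())  # ['parameter1', 'parameter2']
```

<p>As for the attach scenario: an attach configuration does not launch the process, so (as I read the docs) the process's arguments must be supplied on the command line that starts debugpy, not via <code>args</code> in launch.json; worth verifying for your setup.</p>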
|
<python><python-3.x><visual-studio-code>
|
2024-04-12 13:22:14
| 1
| 15,967
|
w0051977
|
78,315,964
| 1,194,864
|
Loading pre-trained weights properly in Pytorch
|
<p>I would like to perform transfer learning by loading a pretrained vision transformer model, modifying its last layer, and training it with my own data.</p>
<p>Hence, I load my dataset and perform the typical ImageNet-style transformations, then load the model, disable gradients on all its layers, remove the last layer, and add a trainable one using the number of classes of my dataset. My code looks like this:</p>
<pre><code>#retrained_vit_weights = torchvision.models.ViT_B_16_Weights.DEFAULT # requires torchvision >= 0.13, "DEFAULT" means best available
#pretrained_vit = torchvision.models.vit_b_16(weights=pretrained_vit_weights).to(device)
pretrained_vit = torch.hub.load('facebookresearch/deit:main', 'deit_tiny_patch16_224', pretrained=True).to(device)
for parameter in pretrained_vit.parameters():
parameter.requires_grad = False
pretrained_vit.heads = nn.Linear(in_features=192, out_features=len(class_names)).to(device)
optimizer = torch.optim.Adam(params=pretrained_vit.parameters(), ... )
loss_fn = torch.nn.CrossEntropyLoss()
results = engine.train(model=pretrained_vit, ..., ... )
</code></pre>
<p>When I am using <code>torchvision.models.ViT_B_16_Weights.DEFAULT</code>, the code works smoothly and I can run it without any problem. However, when I use <code>deit_tiny_patch16_224</code> instead and set <code>requires_grad = False</code>, I get the following error:</p>
<pre><code>Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
</code></pre>
<p>When the variable is set to <code>True</code>, the code works smoothly, but of course the training is really bad since I have a very small number of pictures. How can I properly set the <code>deit_tiny_patch16_224</code> parameters to <code>parameter.requires_grad = False</code>?</p>
<p>Is there an issue with the way I am loading the pre-trained weights?</p>
|
<python><pytorch><transformer-model><transfer-learning>
|
2024-04-12 11:27:08
| 1
| 5,452
|
Jose Ramon
|
78,315,840
| 13,849,446
|
Unable to find category and price for tickets in requests
|
<p>I have been digging through the site <a href="https://www.ticketcorner.ch" rel="nofollow noreferrer">https://www.ticketcorner.ch</a> for three days, but I am unable to find some ticket data in the requests.
The purpose is to make a script that takes an event URL and gives a list of available tickets.
Now, I am getting some of the data from the requests, like Block, Row and Seat No., but I am unable to find the Category and the Price of the individual ticket (if I know one of these I can derive the other).
<a href="https://i.sstatic.net/D9qsU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D9qsU.png" alt="Venue Seat Map" /></a>
Now, for each available seat (highlighted in colors) I am able to get the Row: <strong>V</strong>, Seat No: <strong>38</strong>, Block: <strong>Parterre Pair</strong> from the following requests url: <a href="https://api.eventim.com/seatmap/api/SeatMapHandler?smcVersion=v6.1&version=v6.2.11&cType=web&cId=8&evId=17988010&a_holds=1&a_rowRules=1&key=web_8_17988010_0_TCS_0&a_systemId=1&a_promotionId=0&a_sessionId=TCS_NO_SESSION&timestamp=28548647&expiryTime=28548657&chash=8S7qZoIgwA&signature=8QDkUbymqOzx1jjF5t9x_f1ca4QdW79RM25nSJiN3h4&fun=seatinfos&blockId=11" rel="nofollow noreferrer">https://api.eventim.com/seatmap/api/SeatMapHandler?smcVersion=v6.1&version=v6.2.11&cType=web&cId=8&evId=17988010&a_holds=1&a_rowRules=1&key=web_8_17988010_0_TCS_0&a_systemId=1&a_promotionId=0&a_sessionId=TCS_NO_SESSION&timestamp=28548647&expiryTime=28548657&chash=8S7qZoIgwA&signature=8QDkUbymqOzx1jjF5t9x_f1ca4QdW79RM25nSJiN3h4&fun=seatinfos&blockId=11</a></p>
<p>But I also need to get the Category: <strong>Kat. 3</strong> and Price: <strong>77.40</strong> for each individual ticket.</p>
<p>What I have achieved myself is explained in this answer I just saw,</p>
<p><a href="https://stackoverflow.com/questions/72246806/extract-ticketcorner-event-seat-prices-from-a-venue-map">Extract ticketcorner event seat prices from a venue map</a></p>
<p><strong>Note:</strong> I want to do it using requests for some reason and cannot switch to Selenium or another browser-automation framework. I do not want any code or a detailed explanation; just a hint about where I can get the data would be a great help.</p>
<p>If anyone can help me find the Category and Price for each individual available ticket it would be a great help; I have been trying for three days, but in vain.
Thanks in advance for any help</p>
|
<python><web-scraping><python-requests>
|
2024-04-12 11:03:20
| 1
| 1,146
|
farhan jatt
|
78,315,717
| 5,868,293
|
Creating new column in pandas based on list of tuples
|
<p>I have the following :</p>
<pre><code>import pandas as pd
ls = [(1,2,10,20,5),
(3,4,30,40,10),
(5,6,50,60,20)]
df_ = pd.DataFrame({'col1': [1.1, 3.5, 5.4, 4.1],
'col2': [11, 35, 44, 41]})
</code></pre>
<p>I would like to create a new column in <code>df_</code> according to the following rules, checking each tuple of <code>ls</code>:</p>
<ul>
<li>if col1 is between the 1st and 2nd element of the tuple</li>
<li>if col2 is between the 3rd and 4th element of the tuple</li>
<li>if <strong>both</strong> of the above 2 conditions are true, then it should return the 5th element of the tuple, otherwise it should return None</li>
</ul>
<p>The resulting dataframe should look like this:</p>
<pre><code>df_ = pd.DataFrame({'col1': [1.1, 3.5, 5.4, 4.1],
                    'col2': [11, 35, 44, 41],
'result': [5, 10, None, None]})
</code></pre>
<p>How could I achieve that?</p>
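<p>One way to sketch this (assuming the interval checks are inclusive, which is <code>Series.between</code>'s default) is a boolean mask per tuple:</p>

```python
import pandas as pd

ls = [(1, 2, 10, 20, 5),
      (3, 4, 30, 40, 10),
      (5, 6, 50, 60, 20)]
df_ = pd.DataFrame({'col1': [1.1, 3.5, 5.4, 4.1],
                    'col2': [11, 35, 44, 41]})

df_['result'] = None
for lo1, hi1, lo2, hi2, value in ls:
    # both conditions must hold for the tuple's 5th element to apply
    mask = df_['col1'].between(lo1, hi1) & df_['col2'].between(lo2, hi2)
    df_.loc[mask, 'result'] = value
```

<p>If the bounds are meant to be exclusive, pass <code>inclusive='neither'</code> to <code>between</code>.</p>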
|
<python><pandas>
|
2024-04-12 10:40:41
| 2
| 4,512
|
quant
|
78,315,681
| 268,581
|
bokeh vbar_stack: positive values on positive side, negative values on negative side
|
<h1>Example program</h1>
<pre class="lang-py prettyprint-override"><code>import bokeh.colors
import pandas as pd
from bokeh.plotting import figure, show
import bokeh.models
import bokeh.palettes
import bokeh.transform
data = {
'date': ['2024-04-15', '2024-04-16', '2024-04-17' ],
'Bill': [1, 1, -1],
'Note': [1, -1, -1],
'Bond': [1, 0, -1]
}
df = pd.DataFrame(data)
df['date'] = pd.to_datetime(df['date'])
df.set_index('date', inplace=True)
p = figure(sizing_mode='stretch_both', x_axis_type='datetime', x_axis_label='date', y_axis_label='change')
p.vbar_stack(stackers=['Bill', 'Note', 'Bond'], x='date', width=pd.Timedelta(days=0.9), color=bokeh.palettes.Category20[3], source=df, legend_label=['Bill', 'Note', 'Bond'])
p.legend.click_policy = 'hide'
show(p)
</code></pre>
<h1>Output</h1>
<p><a href="https://i.sstatic.net/y85xp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y85xp.png" alt="enter image description here" /></a></p>
<h1>Notes</h1>
<p>In the first column, all three values are positive, so they stack upwards as expected.</p>
<p>In the third column, all three values are negative, so they stack downwards as expected.</p>
<p>In the middle column, we have 1 positive value, so it stacks upward, then we have one negative value, so it then stacks downward. The result is, we end up with the appearance of a stack of height 1.</p>
<h1>Question</h1>
<p>Is there a way to have positive values only stack up on the positive side of the y-axis and for negative values to only show up on the negative side of the y-axis?</p>
<p>In other words, the chart would look as follows:</p>
<p><a href="https://i.sstatic.net/nzp7Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nzp7Y.png" alt="enter image description here" /></a></p>
|
<python><bokeh>
|
2024-04-12 10:33:00
| 1
| 9,709
|
dharmatech
|
78,315,676
| 5,896,319
|
How to run a command at the same time with the running the application in Flask?
|
<p>I have a script as a Flask command, and I want it to run from when the system starts until the system shuts down; it should basically be a thread. I already created the command and it is working fine. How can I run it at the same time as the application?
flask_commands.py:</p>
<pre><code>@click.command()
def file_watcher() -> None:
context.interactor_type = 'flask command'
event_handler = Handler()
folder_path = core.settings['folder_path']
observer = Observer()
observer.schedule(event_handler, folder_path, recursive=True)
observer.start()
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
observer.stop()
observer.join()
</code></pre>
<p>file_watcher.py</p>
<pre><code>class Handler(FileSystemEventHandler):
def on_created(self, event):
logging.log(logging.INFO, f'New file created: {event.src_path}')
print(f'New file created: {event.src_path}')
...
print(datamodel)
</code></pre>
<p>app.py</p>
<pre><code>from ...
app = create_app("pb")
init_flask_cmds(app)
if __name__ == '__main__':
port = core.settings.get('listening_port', 5000)
app.run(
debug=True,
host="0.0.0.0",
port=port
)
</code></pre>
<p>I'm running the application like this: <code>flask run</code></p>
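<p>One common pattern (a sketch of the idea, not tied to the watchdog API): run the blocking loop in a daemon thread started when the app is created, so it runs alongside <code>flask run</code> and dies with the main process.</p>

```python
import threading
import time

events = []

def watch_loop(stop_event):
    # stand-in for the Observer loop from the command above
    while not stop_event.is_set():
        events.append('tick')
        time.sleep(0.01)

stop = threading.Event()
# daemon=True: the thread will not keep the process alive on shutdown
watcher = threading.Thread(target=watch_loop, args=(stop,), daemon=True)
watcher.start()

time.sleep(0.05)   # the main thread (e.g. the Flask app) keeps running
stop.set()
watcher.join()
```

<p>In a real app the thread would be started from the app factory (e.g. inside <code>create_app</code>) instead of a separate CLI command; <code>watch_loop</code> here is a placeholder name.</p>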
|
<python><python-3.x><flask><watchdog>
|
2024-04-12 10:31:49
| 1
| 680
|
edche
|
78,315,455
| 1,123,501
|
FastAPI error when using Annotated in Class dependencies
|
<p>FastAPI added support for Annotated (and started recommending it) in version 0.95.0.</p>
<p>Additionally, FastAPI has a very powerful but intuitive Dependency Injection system (<a href="https://fastapi.tiangolo.com/tutorial/dependencies/" rel="nofollow noreferrer">documentation</a>). Moreover, FastAPI support <a href="https://fastapi.tiangolo.com/tutorial/dependencies/classes-as-dependencies/" rel="nofollow noreferrer">Classes as Dependencies</a>.</p>
<p>However, it seems that <code>Annotated</code> cannot be used in class dependencies, only in function dependencies.</p>
<p>I use FastAPI version <code>0.110.1</code>.</p>
<pre><code>from __future__ import annotations
from typing import Annotated
from fastapi import FastAPI, Depends, Query
app = FastAPI()
class ClassDependency:
def __init__(self, name: Annotated[str, Query(description="foo")]):
self.name = name
async def function_dependency(name: Annotated[str, Query(description="foo")]) -> dict:
return {"name": name}
@app.get("/")
async def search(c: Annotated[ClassDependency, Depends(function_dependency)]) -> dict:
return {"c": c.name}
</code></pre>
<p>The example above works without errors, but if I replace <code>Depends(function_dependency)</code> with <code>Depends(ClassDependency)</code> an exception is raised with the following message:</p>
<pre><code>pydantic.errors.PydanticUndefinedAnnotation: name 'Query' is not defined
</code></pre>
<p>Then, if I remove <code>Annotated</code> from the ClassDependency, by replacing the <code>name: Annotated[str, Query(description="foo")]</code> with the <code>name: str</code>, the example works.</p>
<p><strong>My question</strong>: Can I use class dependencies and apply <code>Annotated</code> to the parameters set in the constructor? Because it seems this is not working.</p>
<p><strong>My need</strong>: I want to have a class hierarchy for the query params of my api endpoints and provide validation and documentation extras for each of the params.</p>
|
<python><fastapi>
|
2024-04-12 09:50:25
| 1
| 20,150
|
Georgios
|
78,315,374
| 12,304,000
|
ValueError: You have to specify pixel_values
|
<p>Is it possible to generate an image using the CLIP model without giving a reference image? I tried to follow the documentation and came up with this:</p>
<pre><code>import torch
from transformers import CLIPProcessor, CLIPModel
from PIL import Image
# Load CLIP model and processor
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
prompts = [
"a cat",
]
for i, prompt in enumerate(prompts):
with torch.no_grad():
outputs = model(prompt)
image_features = outputs.pixel_values
# Convert image features to image
image = Image.fromarray(image_features[0].numpy())
image.save(f"generated_image_{i}.png")
</code></pre>
<p>but I get this error:</p>
<pre><code>Traceback (most recent call last):
File "clip.py", line 20, in <module>
outputs = model(**inputs)
File "/Users/x/.pyenv/versions/clip/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/x/.pyenv/versions/clip/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/x/.pyenv/versions/clip/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 1110, in forward
vision_outputs = self.vision_model(
File "/Users/x/.pyenv/versions/clip/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/x/.pyenv/versions/clip/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/x/.pyenv/versions/clip/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 847, in forward
raise ValueError("You have to specify pixel_values")
</code></pre>
<p>Docs: <a href="https://huggingface.co/docs/transformers/en/model_doc/clip" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/en/model_doc/clip</a></p>
|
<python><pytorch><huggingface-transformers><generative-programming>
|
2024-04-12 09:36:02
| 1
| 3,522
|
x89
|
78,315,360
| 143,091
|
Docker API build does not reuse layers from cache
|
<p>I have a Dockerfile, and when I run <code>docker build</code> from the command line it reuses unchanged layers as expected. However, when I start the build using the Python API, it always rebuilds all steps. Any idea what the problem could be? My code looks like this:</p>
<pre><code>import docker
with contextlib.closing(docker.from_env()) as client:
stream = client.api.build(path=workdir, tag=newtag, rm=True)
for part in stream:
# process stream
print(part)
</code></pre>
|
<python><docker><docker-api>
|
2024-04-12 09:32:44
| 1
| 10,310
|
jdm
|
78,315,239
| 906,598
|
Replace rows with nearest time using pyspark
|
<p>I have a dataframe in PySpark:</p>
<pre><code>id time replace
3241 2024-01-31 false
4344 2019-09-01 true
5775 2022-02-01 false
5394 2018-06-16 true
7645 2023-03-11 false
</code></pre>
<p>I want to find rows where <code>replace == true</code> and replace the time column with the nearest time among rows where <code>replace == false</code>. I want to do this for all rows where <code>replace == true</code>.</p>
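<p>The "nearest time" selection itself can be pinned down in plain Python (using the dates from the example above); a Spark version would typically cross join the <code>replace == true</code> rows against the <code>replace == false</code> rows and rank by absolute time difference:</p>

```python
from datetime import date

rows = [(3241, date(2024, 1, 31), False),
        (4344, date(2019, 9, 1), True),
        (5775, date(2022, 2, 1), False),
        (5394, date(2018, 6, 16), True),
        (7645, date(2023, 3, 11), False)]

# candidate times come only from the rows that are kept as-is
anchors = [t for _, t, rep in rows if not rep]

def nearest(t):
    # pick the anchor with the smallest absolute difference in days
    return min(anchors, key=lambda a: abs((a - t).days))

fixed = [(i, nearest(t) if rep else t, rep) for i, t, rep in rows]
```

<p>A PySpark translation would usually combine <code>crossJoin</code> with a <code>Window</code> ordered by <code>abs(datediff(...))</code> and keep <code>row_number() == 1</code>; the sketch above only fixes the intended semantics.</p>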
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2024-04-12 09:11:04
| 1
| 1,119
|
upabove
|
78,315,143
| 6,667,035
|
Python UDP server start failed with error code WinError 10045 in Windows
|
<p>I am trying to build a Python UDP server using the following code.</p>
<pre class="lang-py prettyprint-override"><code>
import socket
if __name__ == '__main__':
bind_ip = "0.0.0.0"
bind_port = 30335
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # UDP, Server setting
server.bind((bind_ip, bind_port))
print("[*] Listening on %s:%d " % (bind_ip, bind_port))
while True:
client, addr = server.accept()
print('Connected by ', addr)
while True:
data = client.recv(1024)
print("Client recv data : %s " % (data))
client.send("ACK!")
</code></pre>
<p>And I get the error below:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File C:\ProgramData\anaconda3\Lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
exec(code, globals, locals)
File c:\users\jimmy\desktop\github\worldlongcompany\python\main.py:20
client, addr = server.accept()
File C:\ProgramData\anaconda3\Lib\socket.py:294 in accept
fd, addr = self._accept()
OSError: [WinError 10045] The attempted operation is not supported for the type of object referenced
</code></pre>
<p>Is there anything wrong with my code? I am new to Python, so please let me know how I can fix it.</p>
<p>My Python version:</p>
<pre><code>> ! python --version
Python 3.11.8
</code></pre>
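<p>For reference, a minimal self-contained UDP exchange: <code>SOCK_DGRAM</code> sockets are connectionless, so there is no <code>accept()</code>; datagrams are read directly with <code>recvfrom()</code>, which also returns the sender's address for replies.</p>

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # port 0 lets the OS pick a free port
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", port))

data, addr = server.recvfrom(1024)     # no accept(): read the datagram directly
server.sendto(b"ACK!", addr)           # reply to whoever sent the datagram

reply, _ = client.recvfrom(1024)
print(data, reply)
```

<p>Note that <code>accept()</code> belongs to connection-oriented (TCP, <code>SOCK_STREAM</code>) sockets, which matches the <code>WinError 10045</code> "operation is not supported for the type of object" message.</p>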
|
<python><windows><network-programming><udp>
|
2024-04-12 08:50:16
| 1
| 559
|
JimmyHu
|
78,315,073
| 5,330,527
|
Conditional filling of autocomplete dropdown in Django backend with admin_auto_filters
|
<p>My <code>models.py</code>:</p>
<pre><code>from smart_selects.db_fields import ChainedForeignKey
class Funder(models.Model):
name = models.CharField(max_length=200)
scheme = models.ManyToManyField('Scheme', blank=True, related_name='funders')
class Scheme(models.Model):
name = models.CharField(max_length=200)
class Project(models.Model):
funder = models.ForeignKey(Funder, on_delete=models.PROTECT)
scheme = ChainedForeignKey(
Scheme,
chained_field="funder",
chained_model_field="funder",
show_all=False,
auto_choose=True,
sort=True, null=True, blank=True)
</code></pre>
<p>As you can see I'm already using <a href="https://django-smart-selects.readthedocs.io/en/latest/" rel="nofollow noreferrer">smart-selects</a> for getting only the schemes that belong to that specific funder available in the <code>Project</code> dropdown selection in the admin back-end. <code>smart-selects</code>, though, doesn't take care of what happens in the <code>list_filters</code> section of the Admin.</p>
<p><strong>What I'd like to achieve:</strong> have two 'chained' dropdown filters in my Project admin table, where I'm filtering for projects that have a specific funder, and once I've filtered such projects I want to see only the <code>scheme</code>s that belong to that specific <code>funder</code> in the <code>scheme</code>s dropdown, with the ability of further filtering the projects with that specific scheme.</p>
<p>My failing attempt so far (<code>admin.py</code>):</p>
<pre><code>from admin_auto_filters.filters import AutocompleteFilter, AutocompleteFilterFactory
class SchemeFilter(
AutocompleteFilterFactory(
'new Scheme', 'scheme'
)
):
def lookups(self, request, model_admin):
this_funder = request.GET.get('funder__pk__exact', '')
if this_funder:
schemes= Scheme.objects.filter(funders__pk__exact=this_funder).distinct()
else:
schemes = Scheme.objects.none()
return schemes.values('pk', 'name')
def queryset(self, request, queryset):
this_funder = request.GET.get('funder__pk__exact', '')
if this_funder and Scheme.objects.filter(funders__pk__exact=this_funder).count():
return queryset.filter(funder__pk__exact=self.value())
return queryset.filter(scheme__pk__exact=self.value())
class FunderFilter(AutocompleteFilter):
title = 'Funder'
field_name = 'funder'
class ProjectAdmin(NumericFilterModelAdmin, ImportExportModelAdmin):
search_fields = ['title', 'project_number']
list_filter = [FunderFilter, SchemeFilter,]
[...]
</code></pre>
<p><strong>Update I</strong></p>
<p>I think the issue is here:</p>
<p><code>schemes= Scheme.objects.filter(funder__pk__exact=this_funder).distinct()</code></p>
<p>I've now added a <code>related_name</code>. Still no luck.</p>
<p><strong>Update II</strong></p>
<p>I realised that I was editing the queryset, whereas I should have worked on the <code>lookups</code> instead. Here's my updated code. I feel I'm so close to finding the solution. It looks like the <code>lookup</code> <code>if</code>s are ignored and the full list of <code>scheme</code>s is always returned.</p>
<pre><code>class SchemeFilter(
AutocompleteFilterFactory(
'Scheme', 'scheme'
)
):
def lookups(self, request, model_admin):
this_funder = request.GET.get('funder__pk__exact', '')
if (this_funder != '' and Scheme.objects.filter(funders__pk__exact=this_funder).count() > 0):
schemes= Scheme.objects.filter(funders__pk__exact=this_funder).distinct()
elif (this_funder != '' and Scheme.objects.filter(funders__pk__exact=this_funder).count() == 0):
schemes = Scheme.objects.none()
else:
schemes = Scheme.objects.all()
return schemes.values('pk', 'name')
def queryset(self, request, queryset):
if self.value():
queryset = queryset.filter(scheme__pk__exact=self.value()).distinct()
return super().queryset(request, queryset)
</code></pre>
<p><strong>Update III</strong></p>
<p>I'm beginning to think that it's a bug in <code>admin_auto_filters</code>. Indeed, if I use the native <code>SimpleListFilter</code> like so:</p>
<pre><code>class TestSchemeFilter(admin.SimpleListFilter):
title = 'Test Scheme'
parameter_name = 'scheme'
def lookups(self, request, model_admin):
pk = request.GET.get('funder__pk__exact', '')
if pk and Scheme.objects.filter(funders__pk__exact=pk).count() > 0:
schemes = Scheme.objects.filter(funders__id__exact=pk)
else:
schemes = Scheme.objects.all()
return [(s.id, s.name) for s in schemes]
def queryset(self, request, queryset):
if self.value():
queryset = queryset.filter(scheme__pk__exact=self.value()).distinct()
</code></pre>
<p>I successfully get a list of only the schemes belonging to that funder in the right-hand side.</p>
|
<python><django><django-filter>
|
2024-04-12 08:36:48
| 0
| 786
|
HBMCS
|
78,315,067
| 8,962,929
|
flask celery throw error "urls must start with a leading slash"
|
<p>I'm trying to implement <strong>Celery</strong> with <strong>Flask</strong>, and when I run the command <code>celery -A src.celery_worker.celery_app worker --loglevel=debug</code> to start the Celery worker, it throws this error:</p>
<blockquote>
<p>Couldn't import 'src.celery_worker.celery_app': urls must start with a
leading slash</p>
</blockquote>
<p>Here's the full traceback:</p>
<pre><code>Traceback (most recent call last):
File "/home/sonnc/project/mac/lifescience-operator-management-bff/venv/lib/python3.8/site-packages/celery/bin/celery.py", line 58, in convert
return find_app(value)
File "/home/sonnc/project/mac/lifescience-operator-management-bff/venv/lib/python3.8/site-packages/celery/app/utils.py", line 383, in find_app
sym = symbol_by_name(app, imp=imp)
File "/home/sonnc/project/mac/lifescience-operator-management-bff/venv/lib/python3.8/site-packages/kombu/utils/imports.py", line 61, in symbol_by_name
reraise(ValueError,
File "/home/sonnc/project/mac/lifescience-operator-management-bff/venv/lib/python3.8/site-packages/kombu/exceptions.py", line 34, in reraise
raise value.with_traceback(tb)
File "/home/sonnc/project/mac/lifescience-operator-management-bff/venv/lib/python3.8/site-packages/kombu/utils/imports.py", line 59, in symbol_by_name
module = imp(module_name, package=package, **kwargs)
File "/home/sonnc/project/mac/lifescience-operator-management-bff/venv/lib/python3.8/site-packages/celery/utils/imports.py", line 109, in import_from_cwd
return imp(module, package=package)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/sonnc/project/mac/lifescience-operator-management-bff/src/__init__.py", line 42, in <module>
api = Api(
File "/home/sonnc/project/mac/lifescience-operator-management-bff/venv/lib/python3.8/site-packages/flask_restx/api.py", line 197, in __init__
self.init_app(app)
File "/home/sonnc/project/mac/lifescience-operator-management-bff/venv/lib/python3.8/site-packages/flask_restx/api.py", line 236, in init_app
self._init_app(app)
File "/home/sonnc/project/mac/lifescience-operator-management-bff/venv/lib/python3.8/site-packages/flask_restx/api.py", line 247, in _init_app
self._register_doc(self.blueprint or app)
File "/home/sonnc/project/mac/lifescience-operator-management-bff/venv/lib/python3.8/site-packages/flask_restx/api.py", line 320, in _register_doc
app_or_blueprint.add_url_rule(self._doc, "doc", self.render_doc)
File "/home/sonnc/project/mac/lifescience-operator-management-bff/venv/lib/python3.8/site-packages/flask/scaffold.py", line 56, in wrapper_func
return f(self, *args, **kwargs)
File "/home/sonnc/project/mac/lifescience-operator-management-bff/venv/lib/python3.8/site-packages/flask/app.py", line 1083, in add_url_rule
rule = self.url_rule_class(rule, methods=methods, **options)
File "/home/sonnc/project/mac/lifescience-operator-management-bff/venv/lib/python3.8/site-packages/werkzeug/routing.py", line 698, in __init__
raise ValueError("urls must start with a leading slash")
ValueError: Couldn't import 'src.celery_worker.celery_app': urls must start with a leading slash
</code></pre>
<p>Below is the file <code>src/celery_worker/__init__.py</code>:</p>
<pre><code>import os
from flask import Flask
from celery import Celery, Task


def celery_init_app(app: Flask) -> Celery:
    class FlaskTask(Task):
        def __call__(self, *args: object, **kwargs: object) -> object:
            with app.app_context():
                return self.run(*args, **kwargs)

    celery_app = Celery(app.name, task_cls=FlaskTask)
    celery_app.config_from_object(app.config["CELERY"])
    celery_app.set_default()
    app.extensions["celery"] = celery_app
    return celery_app


def create_app() -> Flask:
    app = Flask(__name__)
    # url for local redis
    # redis_url = "redis://localhost:6379"
    # url for docker redis
    # redis_url = "redis://redis:6379/0"
    app.config.from_mapping(
        CELERY=dict(
            broker_url="redis://localhost:6379",
            result_backend="redis://localhost:6379",
            task_ignore_result=True,
            # import the tasks
            # imports=("src.common.send_email",),
        ),
    )
    app.config.from_prefixed_env()
    celery_init_app(app)
    return app


flask_app = create_app()
celery_app = flask_app.extensions["celery"]
</code></pre>
<p>It looks like the error is related to the <code>flask_restx</code> module, but I can't trace how it is causing this issue.<br>
Please help, thanks.</p>
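<p>For context, the <code>ValueError</code> in the traceback is raised by werkzeug's <code>Rule</code> constructor (reached via <code>_register_doc</code> calling <code>add_url_rule(self._doc, ...)</code>), which rejects any registered URL that does not begin with <code>/</code>. A minimal sketch of that validation (a simplification for illustration, not werkzeug's actual code):</p>

```python
# Minimal re-creation of the check werkzeug performs when a URL rule is
# registered (sketch, not werkzeug itself):
def add_url_rule(rule: str) -> str:
    if not rule.startswith("/"):
        raise ValueError("urls must start with a leading slash")
    return rule

add_url_rule("/docs")      # fine
try:
    add_url_rule("docs")   # e.g. a doc/route path missing its leading "/"
except ValueError as exc:
    print(exc)             # urls must start with a leading slash
```

So any path handed to the routing layer, including flask-restx's generated doc path, has to start with a slash.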
<hr />
<p>EDIT 1:
Here's proof that every <code>route</code> and <code>path</code> in my source code starts with a slash:</p>
<p><a href="https://i.sstatic.net/GYCJS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GYCJS.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/D7gq5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D7gq5.png" alt="enter image description here" /></a></p>
|
<python><flask><celery><flask-restx>
|
2024-04-12 08:35:58
| 2
| 830
|
fudu
|
78,315,011
| 984,621
|
Running a script with only the libraries defined in virtual environment (venv)
|
<p>I have worked on a script for some time, and now I would like to create a virtualenv for it and a <code>requirements.txt</code>, so that when I deploy the script to the server all the necessary libraries are installed from the <code>requirements.txt</code> file and I don't need to install one library after another.</p>
<p>So, I created a venv (<code>virtualenv .venv</code>), activated it (<code>. .venv/bin/activate</code>), checked installed libraries in this newly created venv and got the following output:</p>
<pre><code>Package Version
---------- -------
pip 24.0
setuptools 69.2.0
wheel 0.43.0
</code></pre>
<p>Alright, so I assumed that if I ran my script now, it would break because the libraries are missing from this venv. However, the script worked fine. I assume Python somehow found the libraries installed elsewhere on my laptop and used those.</p>
<p>How do I force the script to run and use only the libraries installed within the <code>venv</code>, so I don't forget to add these libraries to the <code>venv</code> and when I move the script to the server, then everything would get smoothly installed from the <code>requirements.txt</code>?</p>
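<p>As a quick sanity check (a sketch, not a full solution), you can inspect which interpreter and package paths a script would actually use from inside the activated venv:</p>

```python
# Print where this interpreter lives and where imports are resolved from.
# Inside an activated venv, sys.prefix should point at the .venv directory.
import sys

print(sys.executable)                 # e.g. /path/to/.venv/bin/python
print(sys.prefix)                     # the active environment's root
print(sys.prefix != sys.base_prefix)  # True when running inside a venv
```

If <code>sys.executable</code> does not point into <code>.venv</code>, the script was launched with a different interpreter than the one the venv provides.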
|
<python><virtualenv>
|
2024-04-12 08:22:06
| 1
| 48,763
|
user984621
|
78,314,982
| 188,331
|
Assertion `srcIndex < srcSelectDimSize` failed on GPU for the `train()` function of HuggingFace `Seq2SeqTrainer`
|
<p>I use the HuggingFace Transformers framework's <code>Seq2SeqTrainer</code> to fine-tune a pre-trained model. The code runs perfectly on the CPU but fails on the GPU (NVIDIA GeForce 4090). The following error occurs:</p>
<blockquote>
<p>Assertion <code>srcIndex < srcSelectDimSize</code> failed</p>
</blockquote>
<p>It is a common issue when using CUDA, and there are many possible causes of such errors. Therefore, I use a custom <code>Seq2SeqTrainer</code> to debug the index sizes. Here is the code of my custom <code>Seq2SeqTrainer</code>:</p>
<pre><code>class CustomSeq2SeqTrainer(Seq2SeqTrainer):
    def __init__(self, *args, **kwargs):
        super(CustomSeq2SeqTrainer, self).__init__(*args, **kwargs)
        self.step_count = 0  # Initialize a step counter attribute

    def training_step(self, model, inputs):
        self.step_count += 1  # Increment the step counter
        # Log tensor details for the first two steps
        if self.step_count <= 2:
            for k, v in inputs.items():
                if isinstance(v, torch.Tensor):
                    print(f"Step {self.step_count} -- {k}: Shape={v.shape}")
                    print(f"Step {self.step_count} -- {k}: Tensor={v}")
        elif self.step_count <= 10000:
            for k, v in inputs.items():
                if isinstance(v, torch.Tensor):
                    print(f"Step {self.step_count} -- {k}: Shape={v.shape}")
        return super(CustomSeq2SeqTrainer, self).training_step(model, inputs)  # Call the parent class's method
</code></pre>
<p>And here is the output when I run the <code>trainer.train()</code> which <code>trainer</code> is an instance of <code>CustomSeq2SeqTrainer</code>:</p>
<pre><code>Step 1 -- input_ids: Shape=torch.Size([8, 200])
Step 1 -- input_ids: Tensor=tensor([[ 2, 198, 66, ..., 1, 1, 1],
[ 2, 0, 278, ..., 1, 1, 1],
[ 2, 1037, 198, ..., 1, 1, 1],
...,
[ 2, 81, 1058, ..., 1, 1, 1],
[ 2, 81, 3238, ..., 1, 1, 1],
[ 2, 18327, 0, ..., 1, 1, 1]])
Step 1 -- token_type_ids: Shape=torch.Size([8, 200])
Step 1 -- token_type_ids: Tensor=tensor([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]])
Step 1 -- attention_mask: Shape=torch.Size([8, 200])
Step 1 -- attention_mask: Tensor=tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]])
Step 1 -- labels: Shape=torch.Size([8, 200])
Step 1 -- labels: Tensor=tensor([[ 2, 887, 66, ..., 1, 1, 1],
[ 2, 10741, 1029, ..., 1, 1, 1],
[ 2, 109, 1071, ..., 1, 1, 1],
...,
[ 2, 61, 246, ..., 1, 1, 1],
[ 2, 61, 887, ..., 1, 1, 1],
[ 2, 255, 0, ..., 1, 1, 1]])
Step 2 -- input_ids: Shape=torch.Size([8, 200])
Step 2 -- input_ids: Tensor=tensor([[ 2, 406, 350, ..., 1, 1, 1],
[ 2, 170, 89, ..., 1, 1, 1],
[ 2, 0, 0, ..., 1, 1, 1],
...,
[ 2, 4785, 3, ..., 1, 1, 1],
[ 2, 5733, 226, ..., 1, 1, 1],
[ 2, 1117, 0, ..., 1, 1, 1]])
Step 2 -- token_type_ids: Shape=torch.Size([8, 200])
Step 2 -- token_type_ids: Tensor=tensor([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]])
Step 2 -- attention_mask: Shape=torch.Size([8, 200])
Step 2 -- attention_mask: Tensor=tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]])
Step 2 -- labels: Shape=torch.Size([8, 200])
Step 2 -- labels: Tensor=tensor([[ 2, 406, 114, ..., 1, 1, 1],
[ 2, 0, 317, ..., 1, 1, 1],
[ 2, 0, 537, ..., 1, 1, 1],
...,
[ 2, 4785, 963, ..., 1, 1, 1],
[ 2, 905, 537, ..., 1, 1, 1],
[ 2, 1117, 0, ..., 1, 1, 1]])
Step 3 -- input_ids: Shape=torch.Size([8, 200])
Step 3 -- token_type_ids: Shape=torch.Size([8, 200])
Step 3 -- attention_mask: Shape=torch.Size([8, 200])
Step 3 -- labels: Shape=torch.Size([8, 200])
Step 4 -- input_ids: Shape=torch.Size([8, 200])
Step 4 -- token_type_ids: Shape=torch.Size([8, 200])
Step 4 -- attention_mask: Shape=torch.Size([8, 200])
Step 4 -- labels: Shape=torch.Size([8, 200])
[ 4/103050 00:00 < 13:27:29, 2.13 it/s, Epoch 0.00/30]
Epoch Training Loss Validation Loss
Step 5 -- input_ids: Shape=torch.Size([8, 200])
Step 5 -- token_type_ids: Shape=torch.Size([8, 200])
Step 5 -- attention_mask: Shape=torch.Size([8, 200])
Step 5 -- labels: Shape=torch.Size([8, 200])
Step 6 -- input_ids: Shape=torch.Size([8, 200])
Step 6 -- token_type_ids: Shape=torch.Size([8, 200])
Step 6 -- attention_mask: Shape=torch.Size([8, 200])
Step 6 -- labels: Shape=torch.Size([8, 200])
Step 7 -- input_ids: Shape=torch.Size([8, 200])
Step 7 -- token_type_ids: Shape=torch.Size([8, 200])
Step 7 -- attention_mask: Shape=torch.Size([8, 200])
Step 7 -- labels: Shape=torch.Size([8, 200])
Step 8 -- input_ids: Shape=torch.Size([8, 200])
Step 8 -- token_type_ids: Shape=torch.Size([8, 200])
Step 8 -- attention_mask: Shape=torch.Size([8, 200])
Step 8 -- labels: Shape=torch.Size([8, 200])
Step 9 -- input_ids: Shape=torch.Size([8, 200])
Step 9 -- token_type_ids: Shape=torch.Size([8, 200])
Step 9 -- attention_mask: Shape=torch.Size([8, 200])
Step 9 -- labels: Shape=torch.Size([8, 200])
Step 10 -- input_ids: Shape=torch.Size([8, 200])
Step 10 -- token_type_ids: Shape=torch.Size([8, 200])
Step 10 -- attention_mask: Shape=torch.Size([8, 200])
Step 10 -- labels: Shape=torch.Size([8, 200])
Step 11 -- input_ids: Shape=torch.Size([8, 200])
Step 11 -- token_type_ids: Shape=torch.Size([8, 200])
Step 11 -- attention_mask: Shape=torch.Size([8, 200])
Step 11 -- labels: Shape=torch.Size([8, 200])
Step 12 -- input_ids: Shape=torch.Size([8, 200])
Step 12 -- token_type_ids: Shape=torch.Size([8, 200])
Step 12 -- attention_mask: Shape=torch.Size([8, 200])
Step 12 -- labels: Shape=torch.Size([8, 200])
Step 13 -- input_ids: Shape=torch.Size([8, 200])
Step 13 -- token_type_ids: Shape=torch.Size([8, 200])
Step 13 -- attention_mask: Shape=torch.Size([8, 200])
Step 13 -- labels: Shape=torch.Size([8, 200])
Step 14 -- input_ids: Shape=torch.Size([8, 200])
Step 14 -- token_type_ids: Shape=torch.Size([8, 200])
Step 14 -- attention_mask: Shape=torch.Size([8, 200])
Step 14 -- labels: Shape=torch.Size([8, 200])
../aten/src/ATen/native/cuda/Indexing.cu:1290: indexSelectLargeIndex: block: [186,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1290: indexSelectLargeIndex: block: [186,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1290: indexSelectLargeIndex: block: [186,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1290: indexSelectLargeIndex: block: [186,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
... (the same assertion line repeats for many more block/thread indices, up to thread [127,0,0]) ...
</code></pre>
<p>The <code>thread</code> value ranges from: <code>[0, 0, 0]</code> to <code>[127, 0, 0]</code>.</p>
<p>The training ends with an error:</p>
<pre><code>RuntimeError: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
</code></pre>
<p>How can I resolve this GPU-only problem?</p>
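<p>For what it's worth (a sketch, not the trainer's code): this particular assertion typically fires when some token id is greater than or equal to the number of rows in the embedding table. A CPU-side check of the ids would look like this; the <code>vocab_size</code> and token ids below are hypothetical:</p>

```python
# Sketch: find token ids that would overflow the embedding lookup.
vocab_size = 58685                      # hypothetical embedding row count
batch_input_ids = [[2, 198, 66, 1],
                   [2, 0, 278, 60000]]  # hypothetical token ids

bad = [i for row in batch_input_ids for i in row if i >= vocab_size]
print(bad)  # any non-empty result would trigger the device-side assert
```

Running such a check on the real batches (on CPU, before moving to GPU) gives a readable error location instead of an asynchronous CUDA assert.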
<hr />
<p><strong>UPDATE</strong></p>
<p>The same issue appears in GeForce 3090 GPU card as well.</p>
<p><strong>UPDATE 2</strong></p>
<p>I added 10000 to the model's embedding size like this: <code>resize_token_embeddings(len(tokenizer) + 10000)</code> and the error becomes:</p>
<blockquote>
<p>RuntimeError: shape '[-1, 58685]' is invalid for input of size 109896000</p>
</blockquote>
|
<python><huggingface-transformers>
|
2024-04-12 08:14:33
| 0
| 54,395
|
Raptor
|
78,314,847
| 8,781,465
|
How to resolve AttributeError and BadRequestError in Python using LangChain and AzureOpenAI?
|
<p>I'm working on integrating <code>LangChain</code> with <code>AzureOpenAI</code> in Python and encountering a couple of issues. I've recently updated from a deprecated method to a new class implementation, but now I'm stuck with some errors I don't fully understand. Here's the relevant part of my code:</p>
<pre><code>from langchain_openai import AzureOpenAI as LCAzureOpenAI
# from langchain.llms import AzureOpenAI  <-- Deprecated

# Create client accessing LangChain's class
client = LCAzureOpenAI(
    openai_api_version=api_version,
    azure_deployment=deployment_name,
    azure_endpoint=azure_endpoint,
    temperature=TEMPERATURE,
    max_tokens=MAX_TOKENS,
    model=model
    # ,model_kwargs={'azure_openai_api_key': api_key}
)

# Attempt to send a chat message
client.chat("Hi")
</code></pre>
<p>This results in the following error:</p>
<pre><code>AttributeError: 'AzureOpenAI' object has no attribute 'chat'
</code></pre>
<p>When I replace <code>client.chat("Hi")</code> with <code>client.invoke("Hi")</code>, I get a different error:</p>
<pre><code>BadRequestError: Error code: 400 - {'error': {'code': 'OperationNotSupported', 'message': 'The completion operation does not work with the specified model, gpt-4. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.'}}
</code></pre>
<p>How can I resolve these errors?</p>
<p>Any guidance or insights into these errors and how to resolve them would be greatly appreciated!</p>
|
<python><langchain><py-langchain><azure-openai><gpt-4>
|
2024-04-12 07:41:01
| 0
| 1,815
|
DataJanitor
|
78,314,803
| 10,979,307
|
Manim set_opacity doesn't work in the expected manner
|
<p>When running the following code and generating the output animation, I expect the orange graph to fade slightly to an opacity of 0.3. Unfortunately, it does not work as expected: the area under the plot becomes colored, which is not what I want. I have attached an image of the final output at the bottom of the question. Is this a bug, or am I doing something wrong?</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from manim import *


class LinearRegression(Scene):
    def construct(self):
        plot = VGroup()
        ax = Axes(
            x_range=[0, np.pi + 0.1, 1],
            y_range=[0, 1.1, 0.2],
            tips=True,
            axis_config={"include_numbers": True},
            x_length=7,
            y_length=5,
        ).add_coordinates()
        plot.add(ax)
        x = np.array([0., 0.16534698, 0.33069396, 0.49604095, 0.66138793,
                      0.82673491, 0.99208189, 1.15742887, 1.32277585, 1.48812284,
                      1.65346982, 1.8188168, 1.98416378, 2.14951076, 2.31485774,
                      2.48020473, 2.64555171, 2.81089869, 2.97624567, 3.14159265])
        y = np.array([0.01313289, 0.9531357, 0.27303831, 0.56426336, 0.69446266,
                      0.79658221, 0.72262249, 1.0565153, 0.90835232, 1.01992807,
                      1.11962864, 0.85117504, 1.01662771, 0.91446878, 0.67722208,
                      0.6222458, 0.511015, 0.39366855, 0.18442382, 0.03879388])
        plr = np.poly1d(np.polyfit(x, y, 50))
        graph = ax.plot(plr, x_range=[0, np.pi, 0.165346982],
                        use_smoothing=False, color=ORANGE)
        self.play(FadeIn(ax))
        self.wait()
        self.play(Create(graph))
        self.wait()
        self.play(graph.animate.set_opacity(0.3))
        self.wait(5)
</code></pre>
<p><a href="https://i.sstatic.net/vJrPH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vJrPH.png" alt="enter image description here" /></a></p>
|
<python><manim>
|
2024-04-12 07:33:13
| 1
| 761
|
Amirreza A.
|
78,314,735
| 4,673,585
|
Csv files getting uploaded to local drive rather than storage account using python function app
|
<p>I have an Azure Function App in Python with a blob trigger. It triggers a function when a db file is uploaded to the following path in my storage account:</p>
<pre><code>abc360/sqlite_db_file/{name}.db
</code></pre>
<p>The function will read data from db file and write data in csv and upload it back to a folder in storage account container with path:</p>
<pre><code>abc360/sqlite_csv_files
</code></pre>
<p>The function reads data from the db file, writes it to CSV, and uploads the CSV in chunks using the azure-storage-blob Python library. Because of this I was able to reduce the function's running time from 6.5 minutes to 30 seconds.</p>
<p>I developed the function locally using Visual Studio Code and Azure Functions Core Tools, and deploy it to Azure using build and release pipelines. Please refer to the following for the code and pipelines:</p>
<pre><code>import logging
import sqlite3
import os
import csv
import uuid
import tempfile

from azure.functions import InputStream
from azure.storage.blob import BlobServiceClient, BlobBlock


class DataMigrator:
    CONTAINER_NAME = "abc360/sqlite_csv_file"
    CHUNK_SIZE = 1024 * 1024 * 4  # 4MB chunk size

    def __init__(self, file_path, connection_string):
        self.file_path = file_path
        self.connection_string = connection_string

    def connect_sqlite(self):
        return sqlite3.connect(self.file_path)

    def get_table_names(self, cursor_sqlite):
        logging.info(cursor_sqlite)
        cursor_sqlite.execute("SELECT name FROM sqlite_master WHERE type='table' LIMIT 100;")
        return cursor_sqlite.fetchall()

    def extract_info_from_filename(self, filename):
        parts = filename.split('_')
        project_code = parts[0]
        revision = parts[1]
        datestamp = parts[2].split('.')[0]
        return project_code, revision, datestamp

    def upload_file_chunks(self, blob_file_path, local_file_path, container_client):
        logging.info(blob_file_path)
        try:
            blob_client = container_client.get_blob_client(blob_file_path)
            # upload data
            block_list = []
            chunk_size = 1024 * 1024 * 4  # 4MB chunk size
            with open(local_file_path, 'rb') as f:
                offset = 0
                while True:
                    read_data = f.read(chunk_size)
                    if not read_data:
                        break  # done
                    blk_id = str(uuid.uuid4())
                    blob_client.stage_block(block_id=blk_id, data=read_data, length=len(read_data), offset=offset)
                    block_list.append(BlobBlock(block_id=blk_id))
                    offset += len(read_data)
            blob_client.commit_block_list(block_list)
        except Exception as err:
            print('Upload file error')
            print(err)

    def upload_to_storage_account(self, csv_filename):
        container_name = "abc360/sqlite_csv_file"
        blob_service_client = BlobServiceClient.from_connection_string(self.connection_string)
        container_client = blob_service_client.get_container_client(container_name)
        blob_client = container_client.get_blob_client(csv_filename)
        if blob_client.exists():
            blob_client.delete_blob()
        # self.upload_file_chunks(csv_filename, csv_filename, container_client)
        with open(csv_filename, "rb") as data:
            blob_client.upload_blob(data)
        return blob_client.url

    def write_to_csv(self, cursor_sqlite, table_name, filename):
        logging.info(f"file name: {filename}")
        project_code, revision, datestamp = self.extract_info_from_filename(filename)
        cursor_sqlite.execute(f"PRAGMA table_info({table_name});")
        columns_info = cursor_sqlite.fetchall()
        columns = [column_info[1] for column_info in columns_info]
        csv_filename = f"{table_name}_{project_code}_{revision}_{datestamp}.csv"
        with open(csv_filename, 'w', newline='', encoding="utf-8") as csvfile:
            csv_writer = csv.writer(csvfile)
            csv_writer.writerow(['ProjectCode', 'Revision', 'DateStamp'] + columns)
            cursor_sqlite.execute(f"SELECT * FROM {table_name};")
            rows = cursor_sqlite.fetchall()
            for row in rows:
                csv_writer.writerow([project_code, revision, datestamp] + list(row))
        return csv_filename


def main(myblob: InputStream):
    conn_sqlite = None  # Initialize connection variable
    try:
        blob_name = os.path.basename(myblob.name)
        logging.info(f"Processing blob: {blob_name}")
        temp_file_path = os.path.join(tempfile.gettempdir(), blob_name)
        logging.info(temp_file_path)
        with open(temp_file_path, "wb") as temp_file:
            temp_file.write(myblob.read())
        connection_string = os.environ.get('AzureWebJobsStorage')
        if not connection_string:
            raise ValueError("Storage connection string is not provided.")
        migrator = DataMigrator(temp_file_path, connection_string)
        conn_sqlite = migrator.connect_sqlite()
        logging.info("Connected to SQLite database successfully")
        cursor_sqlite = conn_sqlite.cursor()
        tables = migrator.get_table_names(cursor_sqlite)
        logging.info(f"Tables in SQLite file: {tables}")
        for table in tables:
            table_name = table[0]
            csv_filename = migrator.write_to_csv(cursor_sqlite, table_name, blob_name)
            if csv_filename:
                csv_url = migrator.upload_to_storage_account(csv_filename)
                if csv_url:
                    logging.info(f"CSV file uploaded: {csv_url}")
    except Exception as e:
        logging.error(f"Error: {str(e)}")
    finally:
        # Close SQLite connection
        if conn_sqlite:
            conn_sqlite.close()


# Set up logging configuration
logging.basicConfig(level=logging.INFO)
</code></pre>
<p><strong>function.json:</strong></p>
<pre><code>{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "name": "myblob",
            "type": "blobTrigger",
            "direction": "in",
            "path": "abc360/sqlite_db_file/{name}.db",
            "connection": "abc360stg_STORAGE"
        }
    ]
}
</code></pre>
<p>Storage account name is abc360stg and this is how application setting looks like:</p>
<p><a href="https://i.sstatic.net/idvCl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/idvCl.png" alt="enter image description here" /></a></p>
<p><strong>Azure git project directory structure:</strong></p>
<p><a href="https://i.sstatic.net/L77s1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L77s1.png" alt="enter image description here" /></a></p>
<p><strong>Build Pipeline:</strong></p>
<p>For build pipeline, i am using the python function app template:</p>
<p><a href="https://i.sstatic.net/WGhfp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WGhfp.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/1JKgP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1JKgP.png" alt="enter image description here" /></a></p>
<p><strong>Release pipeline:</strong></p>
<p><a href="https://i.sstatic.net/m96qP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m96qP.png" alt="enter image description here" /></a></p>
<p><strong>Issue:</strong></p>
<ol>
<li>The CSV file is getting uploaded to the folder where my code lives on the local drive:</li>
</ol>
<p><a href="https://i.sstatic.net/lGFc2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGFc2.png" alt="enter image description here" /></a></p>
<ol start="2">
<li>Also, when a db file is uploaded to the sqlite_db_file folder, the function app is not triggered.</li>
</ol>
<p>I am pretty sure I messed up somewhere. I would really appreciate some assistance in resolving this.</p>
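<p>One note on issue 1 (a sketch, not a drop-in fix): <code>open(csv_filename, 'w')</code> uses a relative path, so the CSV is created in the process's current working directory, which locally is the code folder. Building an absolute path under the temp directory instead would look like this (the filename below is hypothetical):</p>

```python
import csv
import os
import tempfile

# Relative paths resolve against the CWD; build an absolute path in the
# system temp directory for scratch output instead.
csv_filename = "example_table.csv"  # hypothetical name
local_path = os.path.join(tempfile.gettempdir(), csv_filename)

with open(local_path, "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerow(["ProjectCode", "Revision", "DateStamp"])

print(local_path)
```

The blob upload would then read from <code>local_path</code> rather than the bare filename.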
|
<python><azure-functions><azure-blob-storage><azure-python-sdk>
|
2024-04-12 07:15:04
| 1
| 337
|
Rahul Sharma
|
78,314,579
| 1,020,139
|
Why can't I use AnyStr as a return type in Python 3.12 with Pyright?
|
<p>Consider the following code:</p>
<pre><code>def get_secret(self, name: str, **kwargs: Any) -> AnyStr:
    kwargs["SecretId"] = name
    result = self.client.get_secret_value(**kwargs)
    return result["SecretBinary"] if "SecretBinary" in result else result["SecretString"]
</code></pre>
<p>It yields the following error:</p>
<pre><code>Expression of type "bytes" cannot be assigned to return type "AnyStr@get_secret"
Type "bytes" cannot be assigned to type "AnyStr@get_secret" Pylance(reportReturnType)
</code></pre>
<p>It works if I specify the <code>name</code> parameter to be of type <code>AnyStr</code> and return it verbatim.</p>
<p><a href="https://docs.python.org/3/library/typing.html#typing.AnyStr" rel="nofollow noreferrer">https://docs.python.org/3/library/typing.html#typing.AnyStr</a></p>
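<p>For reference, a sketch of why this is rejected and one common workaround: <code>AnyStr</code> is a constrained <code>TypeVar</code> that must be inferred from an argument of the same signature, so with no <code>AnyStr</code>-typed parameter there is nothing to bind it to. A return value that may be either type can simply be annotated with the union (the function below is a simplified stand-in for the original method, not the boto3 call itself):</p>

```python
from typing import Union

# AnyStr must resolve to exactly one of str/bytes per call, inferred from
# the arguments; with no AnyStr-typed parameter there is nothing to infer
# from, so the checker rejects returning bytes. A plain union sidesteps it.
def get_secret(result: dict) -> Union[str, bytes]:
    return result["SecretBinary"] if "SecretBinary" in result else result["SecretString"]

print(get_secret({"SecretString": "s3cr3t"}))  # s3cr3t
print(get_secret({"SecretBinary": b"\x00\x01"}))
```

Callers that need a specific type can then narrow with <code>isinstance</code> at the call site.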
|
<python><mypy><python-typing><pyright>
|
2024-04-12 06:39:38
| 0
| 14,560
|
Shuzheng
|
78,314,515
| 10,200,497
|
What is the best way to slice a dataframe including the first instance of a mask?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(
    {
        'a': [np.nan, np.nan, np.nan, 20, 12, 42, 33, 32, 31],
        'b': [np.nan, np.nan, np.nan, np.nan, 2333, np.nan, np.nan, 12323, np.nan]
    }
)
</code></pre>
<p>Mask is:</p>
<pre><code>mask = (
    (df.a.notna()) &
    (df.b.notna())
)
</code></pre>
<p>Expected output: Slicing <code>df</code> up to the first instance of <code>mask</code>. Note that the first row of the <code>mask</code> is INCLUDED:</p>
<pre><code> a b
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 20.0 NaN
4 12.0 2333.0
</code></pre>
<p>The first instance of the <code>mask</code> is row <code>4</code>, so slicing up to that index is the goal.</p>
<p>These are my attempts. The first one works, but I am not sure if the approach is correct:</p>
<pre><code># attempt 1
idx = df.loc[mask.cumsum().eq(1) & mask].index[0]
df = df.loc[:idx]
print(df)
# attempt 2
out = df[~mask.cummax()]
</code></pre>
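<p>A compact alternative worth considering (a sketch, assuming the mask contains at least one <code>True</code>): <code>Series.idxmax</code> returns the label of the first <code>True</code>, and <code>.loc</code> slicing is inclusive of the end label:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        'a': [np.nan, np.nan, np.nan, 20, 12, 42, 33, 32, 31],
        'b': [np.nan, np.nan, np.nan, np.nan, 2333, np.nan, np.nan, 12323, np.nan]
    }
)
mask = df.a.notna() & df.b.notna()

# idxmax() gives the index label of the first True (True > False),
# and .loc includes the stop label, so row 4 is kept.
out = df.loc[:mask.idxmax()]
print(out)
```

Note that if the mask were all <code>False</code>, <code>idxmax()</code> would return the first label, so a guard like <code>mask.any()</code> is needed in the general case.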
|
<python><pandas><dataframe>
|
2024-04-12 06:24:34
| 2
| 2,679
|
AmirX
|
78,314,371
| 268,581
|
U.S. Treasury Securities: how to calculate net issuance
|
<h1>Treasury Security Auction Data</h1>
<p>Each row represents a treasury auction:</p>
<pre class="lang-none prettyprint-override"><code>>>> df[(df['issue_date'] >= '2024-03-01') & (df['issue_date'] <= '2024-03-20')][['issue_date', 'maturity_date', 'security_type', 'total_accepted']]
issue_date maturity_date security_type total_accepted
9995 2024-03-05 2024-04-02 Bill 95096576100
9996 2024-03-05 2024-07-02 Bill 60060634300
9997 2024-03-05 2024-04-30 Bill 90090860300
9998 2024-03-07 2024-09-05 Bill 70275894100
9999 2024-03-07 2024-06-06 Bill 79311242700
10000 2024-03-07 2024-04-18 CMB 80000045200
10001 2024-03-12 2024-07-09 Bill 60060669400
10002 2024-03-12 2024-04-09 Bill 95094517100
10003 2024-03-12 2024-05-07 Bill 90091153900
10004 2024-03-14 2024-09-12 Bill 70293933600
10005 2024-03-14 2024-06-13 Bill 79331123700
10006 2024-03-14 2024-04-25 CMB 80001086200
10007 2024-03-15 2027-03-15 Note 56000018600
10008 2024-03-15 2054-02-15 Bond 22000030500
10009 2024-03-15 2034-02-15 Note 39000004400
10010 2024-03-19 2024-07-16 Bill 60061959000
10011 2024-03-19 2024-05-14 Bill 90091935800
10012 2024-03-19 2024-04-16 Bill 95097858000
</code></pre>
<h1>Explanation of the columns</h1>
<ul>
<li><code>issue_date</code>: The date that the treasury security will be issued to the buyer</li>
<li><code>maturity_date</code>: The date that the treasury security matures (buyer gets their money back)</li>
<li><code>security_type</code>: Whether it's a bill, note, bond, etc.</li>
<li><code>total_accepted</code>: The amount of total money brought in for the auction.</li>
</ul>
<h1>Net issuance</h1>
<p>So, on any given day, for a particular security type (say bills) there may be some amount of issuance. But, there may also be some amount maturing. <em>net issuance</em> is:</p>
<pre><code>net issuance = issued - maturing
</code></pre>
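<p>As a sanity check on that definition, a tiny worked example with made-up numbers:</p>

```python
import pandas as pd

# On a day with $100 of bills issued and $30 of bills maturing,
# net issuance of bills is 100 - 30 = 70.
issued = pd.Series({"2024-03-05": 100.0})
maturing = pd.Series({"2024-03-05": 30.0})
net = issued.sub(maturing, fill_value=0)  # fill_value handles one-sided days
print(net)
```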
<h1>Calculating net issuance</h1>
<p>Here's a program which calculates a table of net issuance</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import treasury_gov_pandas
df = treasury_gov_pandas.update_records('https://api.fiscaldata.treasury.gov/services/api/fiscal_service/v1/accounting/od/auctions_query', lookback=10)
df['issue_date'] = pd.to_datetime(df['issue_date'])
df['maturity_date'] = pd.to_datetime(df['maturity_date'])
df['total_accepted'] = pd.to_numeric(df['total_accepted'], errors='coerce')
# group by 'issue_date' and 'security_type' and sum 'total_accepted'
issued = df.groupby(['issue_date', 'security_type'])['total_accepted'].sum().reset_index()
# group by 'maturity_date' and 'security_type' and sum 'total_accepted'
maturing = df.groupby(['maturity_date', 'security_type'])['total_accepted'].sum().reset_index()
# join issued and maturing on 'issue_date' = 'maturity_date' and 'security_type' = 'security_type'
merged = pd.merge(issued, maturing, how='outer', left_on=['issue_date', 'security_type'], right_on=['maturity_date', 'security_type'])
merged.rename(columns={'total_accepted_x': 'issued', 'total_accepted_y': 'maturing'}, inplace=True)
merged['change'] = merged['issued'].fillna(0) - merged['maturing'].fillna(0)
merged['date'] = merged['issue_date'].combine_first(merged['maturity_date'])
tmp = merged
agg = tmp.groupby(['date', 'security_type'])['change'].sum().reset_index()
pivot_df = agg.pivot(index='date', columns='security_type', values='change').fillna(0)
</code></pre>
<p>It uses the following library to retrieve the data:</p>
<p><a href="https://github.com/dharmatech/treasury-gov-pandas.py" rel="nofollow noreferrer">https://github.com/dharmatech/treasury-gov-pandas.py</a></p>
<h1>Notes</h1>
<p>For any day:</p>
<ul>
<li>There might only be issued securities</li>
<li>There might only be maturing securities</li>
<li>There might be both</li>
</ul>
<p>Thus, in the code, I use an outer join.</p>
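<p>One alternative I've been sketching (not sure it's more idiomatic): instead of an outer join, stack issues and maturities into one long frame with signed amounts, then aggregate once. The toy data below stands in for the real table; column names match the schema above.</p>

```python
import pandas as pd

df = pd.DataFrame({
    "issue_date": pd.to_datetime(["2024-03-05", "2024-03-05"]),
    "maturity_date": pd.to_datetime(["2024-04-02", "2024-03-05"]),
    "security_type": ["Bill", "Bill"],
    "total_accepted": [100.0, 30.0],
})

# issues count positive, maturities negative
issued = df[["issue_date", "security_type", "total_accepted"]].rename(
    columns={"issue_date": "date", "total_accepted": "change"})
maturing = df[["maturity_date", "security_type", "total_accepted"]].rename(
    columns={"maturity_date": "date", "total_accepted": "change"})
maturing["change"] *= -1

long_frame = pd.concat([issued, maturing])
pivot_df = (long_frame.groupby(["date", "security_type"])["change"]
                      .sum().unstack(fill_value=0))
print(pivot_df)
```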
<h1>Result</h1>
<pre class="lang-py prettyprint-override"><code>>>> pivot_df
security_type Bill Bond CMB FRN Note Note TIPS Bond TIPS Note
date
1979-11-15 0.000000e+00 2.315000e+09 0.0 0.0 2.401000e+09 0.000000e+00 0.0
1980-01-03 6.606165e+09 0.000000e+00 0.0 0.0 0.000000e+00 0.000000e+00 0.0
1980-01-08 4.007825e+09 0.000000e+00 0.0 0.0 0.000000e+00 0.000000e+00 0.0
1980-01-10 6.402625e+09 1.501000e+09 0.0 0.0 0.000000e+00 0.000000e+00 0.0
1980-01-17 6.403760e+09 0.000000e+00 0.0 0.0 0.000000e+00 0.000000e+00 0.0
... ... ... ... ... ... ... ...
2053-02-15 0.000000e+00 -6.635744e+10 0.0 0.0 0.000000e+00 -1.987600e+10 0.0
2053-05-15 0.000000e+00 -6.272132e+10 0.0 0.0 0.000000e+00 0.000000e+00 0.0
2053-08-15 0.000000e+00 -7.160462e+10 0.0 0.0 0.000000e+00 0.000000e+00 0.0
2053-11-15 0.000000e+00 -6.645674e+10 0.0 0.0 0.000000e+00 0.000000e+00 0.0
2054-02-15 0.000000e+00 -7.121181e+10 0.0 0.0 0.000000e+00 -9.389377e+09 0.0
</code></pre>
<p>There we can see, a column for each security type. The value for a given day shows the net issuance for that security type.</p>
<h1>Question</h1>
<p>This approach appears to work. But I'm wondering, is this considered idiomatic pandas code? Is there a better approach than this one?</p>
|
<python><pandas>
|
2024-04-12 05:40:58
| 1
| 9,709
|
dharmatech
|
78,314,350
| 10,429,573
|
idenitfy rowTag elements from a huge xml
|
<p>I have a huge XML file of 20k lines. Before I process the data, I need to convert it to a CSV file using Databricks (Python / Spark).
How can I list all the '<strong>rowTag</strong>' elements in this huge XML file? Is there a way to find out?</p>
<p>Meanwhile, here is what I tried:</p>
<p>For a simple XML file, I can use the code below to get a CSV from it.</p>
<pre><code>data = "/FileStore/tables/Books.xml"
df = spark.read.format("xml").option("rowTag","book").load(data)
df.withColumnRenamed("id1","id").write.format("csv").option("header","true").save("/FileStore/tables/csv/")
</code></pre>
<p>For a huge XML file, the challenge is identifying the "rowTag" elements, of which there are hundreds. Is there a way to get them by running a script?</p>
<p><em><strong>Sample xml shared below:</strong></em></p>
<p><strong>Update:</strong> Here <strong>scope</strong> is one major element in the xml. And <strong>PremInfo</strong> element has thousands of inner elements from the xml file.
<a href="https://i.sstatic.net/FMBu0.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FMBu0.jpg" alt="sample xml attached" /></a></p>
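<p>For what it's worth, here is a plain-Python sketch I'm experimenting with to enumerate the distinct element tags before picking a <code>rowTag</code>. The inline sample is made up; the real file would be streamed the same way with <code>iterparse</code>, which doesn't load the whole document into memory:</p>

```python
import xml.etree.ElementTree as ET
from io import StringIO

# made-up miniature of the structure described above
sample = StringIO(
    "<root><scope><PremInfo><a>1</a></PremInfo>"
    "<PremInfo><b>2</b></PremInfo></scope></root>")

tags = set()
for _, elem in ET.iterparse(sample, events=("start",)):
    tags.add(elem.tag)
print(sorted(tags))  # candidate rowTag values
```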
|
<python><xml><apache-spark><pyspark>
|
2024-04-12 05:33:55
| 0
| 613
|
RK.
|
78,314,340
| 4,732,111
|
Initialising a polars dataframe with 15 million records freezes the machine
|
<p>I'm using the Psycopg3 connector to fetch records from an AWS RDS Postgres database, and I'm initialising a Polars dataframe using the code below:</p>
<pre><code>rds_conn = psycopg.connect(
host=config.RDS_HOST_NAME,
dbname=config.RDS_DB_NAME,
user=config.RDS_DB_USER,
password=config.RDS_DB_PASSWORD,
port=config.RDS_PORT)
cur = rds_conn.cursor(name="rds_cursor")
cur.itersize = 100000
cur.execute(sql_query)
names = [x[0] for x in cur.description]
rows = cur.fetchall()
cur.close()
df = pl.DataFrame(rows, schema=names, infer_schema_length=None)
</code></pre>
<p>It works fine if the number of rows returned is around a million or so. Currently, one of my tables in RDS contains 15 million records, and when I initialise the Polars dataframe, my machine freezes and I need to reboot it. I tried using LazyFrame instead of DataFrame, but the result is the same.</p>
<p>The Psycopg connector returns the 15 million records without any issues; the problem happens when I initialise them as a Polars dataframe.</p>
<p>Is there a better way to initialise my dataframe so that I don't have this issue? Can someone please help me with this?</p>
<p>Thanks</p>
|
<python><pandas><dataframe><psycopg2><python-polars>
|
2024-04-12 05:31:54
| 1
| 363
|
Balaji Venkatachalam
|
78,314,205
| 1,214,800
|
Consuming an iterable/list/generator of dynamically loaded pytest fixtures
|
<p>I'm trying to create a dynamic fixture loader for pytest that does some work after the value is yielded to the test. In the beginning, I didn't need to do anything that complicated, so I just returned the fixture once it was ready:</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
import shutil
from pytest import CaptureFixture
import pytest
from .tests import testutils
FIXTURES_ROOT = Path(__file__).parent / "fixtures"
INBOX = Path(__file__).parent / "inbox"
CONVERTED = Path(__file__).parent / "converted"
class TestItem:
converted_dir: Path
def __init__(self, inbox_dir: Path):
self.inbox_dir = inbox_dir
self.converted_dir = CONVERTED / inbox_dir.name
def load_test_fixture(
name: str,
*,
exclusive: bool = False,
override_name: str | None = None,
match_filter: str | None = None,
cleanup_inbox: bool = False,
):
src = FIXTURES_ROOT / name
if not src.exists():
raise FileNotFoundError(
f"Fixture {name} not found. Does it exist in {FIXTURES_ROOT}?"
)
dst = INBOX / (override_name or name)
dst.mkdir(parents=True, exist_ok=True)
for f in src.glob("**/*"):
dst_f = dst / f.relative_to(src)
if f.is_file() and not dst_f.exists():
dst_f.parent.mkdir(parents=True, exist_ok=True)
shutil.copy(f, dst_f)
# if any files in dst are not in src, delete them
for f in dst.glob("**/*"):
src_f = src / f.relative_to(dst)
if f.is_file() and not src_f.exists():
f.unlink()
if exclusive or match_filter is not None:
testutils.set_match_filter(match_filter or name)
converted_dir = CONVERTED / (override_name or name)
shutil.rmtree(converted_dir, ignore_errors=True)
return TestItem(dst)
</code></pre>
<p>This works well - the fixture loads dynamically, and I can create a bunch of one-liner fixtures out of it:</p>
<pre class="lang-py prettyprint-override"><code>
@pytest.fixture(scope="function")
def basic_fixture():
return load_test_fixture("basic_fixture", exclusive=True)
def test_converted_dir_exists(
basic_fixture: TestItem, capfd: CaptureFixture[str]
):
assert basic_fixture.converted_dir.exists()
</code></pre>
<p>I've also gone and created a multi-fixture loader that returns an array of fixtures:</p>
<pre class="lang-py prettyprint-override"><code>def load_test_fixtures(
*names: str,
exclusive: bool = False,
override_names: list[str] | None = None,
match_filter: str | None = None,
):
if exclusive:
match_filter = match_filter or rf"^({'|'.join(override_names or names)})"
return [
load_test_fixture(name, override_name=override, match_filter=match_filter)
for (name, override) in zip(names, override_names or names)
]
</code></pre>
<p>But now I want to have the loader act more like a fixture, where it can yield the item and then clean up after itself. This works fine for the single loader:</p>
<pre class="lang-py prettyprint-override"><code>def load_test_fixture(
name: str,
*,
exclusive: bool = False,
override_name: str | None = None,
match_filter: str | None = None,
cleanup_inbox: bool = False,
):
# ...
# instead of returning, yield the item
yield TestItem(dst)
if cleanup_inbox:
shutil.rmtree(dst, ignore_errors=True)
@pytest.fixture(scope="function")
def basic_fixture():
yield from load_test_fixture("basic_fixture", exclusive=True)
</code></pre>
<p>But I can't get it to work with the multiloader, because it returns either a generator of generators, or a list of generators. I've tried converting these both to @pytest.fixtures as well, but can't seem to get the list version to yield its items and properly wait to clean up after the test completes, and as fixtures, I'd have the added problem of dealing with passing args as indirect or params (ew).</p>
<pre class="lang-py prettyprint-override"><code>def load_test_fixtures(
*names: str,
exclusive: bool = False,
override_names: list[str] | None = None,
match_filter: str | None = None,
):
if exclusive:
match_filter = match_filter or rf"^({'|'.join(override_names or names)})"
yield from (
load_test_fixture(name, match_filter=match_filter, override_name=override)
for name, override in zip(names, override_names or names)
)
# tried all combinations of list, tuple, and yield here, as well as
# yield (next(load_test_fixture(name, match_filter=match_filter, override_name=override))
# for name, override in zip(names, override_names or names)
</code></pre>
<p>I even tried this, which <em>sort of</em> works, but doesn't wait for the end of the test to run the finalizer:</p>
<pre><code>@pytest.fixture(scope="function")
def multi_fixtures():
fixtures = []
for f in [
"basic_fixture",
"fancy_fixture",
"tasty_fixture",
"smart_fixture",
]:
fixtures.extend(
load_test_fixture(f, match_filter=match_filter, cleanup_inbox=True)
)
yield fixtures
</code></pre>
<p>(If I pass <code>cleanup_inbox=True</code> here, it deletes the files before the test runs).</p>
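<p>The direction I'm exploring now is <code>contextlib.ExitStack</code>, which would keep every loader's generator open until the enclosing generator is closed (i.e. after the test finishes), then unwind them in reverse. A minimal self-contained model of the idea, with a toy loader standing in for my real <code>load_test_fixture</code>:</p>

```python
from contextlib import ExitStack, contextmanager

cleaned = []  # records teardown order

@contextmanager
def load(name):  # toy stand-in for load_test_fixture
    try:
        yield name.upper()  # "the fixture value"
    finally:
        cleaned.append(name)  # "the cleanup"

def load_many(*names):
    with ExitStack() as stack:
        # all loaders are entered up front; none are torn down until
        # this generator is closed by pytest after the test completes
        yield [stack.enter_context(load(n)) for n in names]

# a real fixture would then be:  yield from load_many("a", "b")
```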
|
<python><python-3.x><pytest><generator><pytest-fixtures>
|
2024-04-12 04:47:22
| 1
| 73,674
|
brandonscript
|
78,314,174
| 1,091,386
|
Assigning default values to class objects when member functions don't exist
|
<p>I realized I had oversimplified <a href="https://stackoverflow.com/q/78314000/1091386">my earlier question</a> while trying to make it easily reproducible.</p>
<p>What I want is to run a member function on that input value and assign the result to an attribute, but if the input value didn't have that function, to give a default value instead, i.e.:</p>
<pre class="lang-py prettyprint-override"><code>class Puzzle:
def __init__(self, random_object):
self.name = random_object.getID() or "Dummy Value"
self.score = random_object.getScore() or 0
</code></pre>
<p>If the <code>random_object.getID()</code> function doesn't exist, I get an <code>AttributeError</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> p = Puzzle("hello")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in __init__
AttributeError: 'str' object has no attribute 'getID'
</code></pre>
<p>Is the only way to concisely do this a check for AttributeError? What if I have multiple things I want to assign like this?</p>
<pre class="lang-py prettyprint-override"><code>class Puzzle:
def __init__(self, random_object):
try:
self.name = random_object.getID()
self.score = random_object.getScore()
except AttributeError as att:
if att.name == "getID":
self.name = "Dummy Value"
elif att.name == "getScore":
self.score = 0
</code></pre>
<p>I think the above block will only be able to catch one of the exceptions and miss assigning the other value. How can I catch the rest?</p>
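<p>For comparison, this is the per-attribute pattern I've been weighing against the <code>try</code>/<code>except</code> approach, using <code>getattr</code> with a callable default:</p>

```python
class Puzzle:
    def __init__(self, random_object):
        # getattr's third argument avoids the AttributeError entirely;
        # the lambdas keep the fallback callable like the real methods
        self.name = getattr(random_object, "getID", lambda: "Dummy Value")()
        self.score = getattr(random_object, "getScore", lambda: 0)()

p = Puzzle("hello")  # str has neither method
print(p.name, p.score)  # Dummy Value 0
```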
|
<python><initialization><attributeerror>
|
2024-04-12 04:31:51
| 1
| 4,894
|
icedwater
|
78,314,079
| 12,714,507
|
Streaming audio over websockets and process it with ffmpeg, Invalid frame size error
|
<p>I am building an application in Python that processes audio data on a streaming connection with WebSockets. Before passing the audio to some ML algorithms, I need to process it with ffmpeg on the server.</p>
<p>I have the following ffmpeg code setup to process each byte sequence that comes over WebSockets:</p>
<pre class="lang-py prettyprint-override"><code>async def ffmpeg_read(bpayload: bytes, sampling_rate: int = 16000) -> np.array:
ar = f"{sampling_rate}"
ac = "1"
format_for_conversion = "f32le"
ffmpeg_command = [
"ffmpeg",
"-i", "pipe:0",
"-ac", ac,
"-acodec", f"pcm_{format_for_conversion}",
"-ar", ar,
"-f", format_for_conversion,
"pipe:1"]
try:
process = await asyncio.create_subprocess_exec(
*ffmpeg_command,
stdin=asyncio.subprocess.PIPE,
stdout=asyncio.subprocess.PIPE)
process.stdin.write(bpayload)
await process.stdin.drain()
process.stdin.close()
out_bytes = await process.stdout.read(8000) # Read asynchronously
audio = np.frombuffer(out_bytes, np.float32)
if audio.shape[0] == 0:
raise ValueError("Malformed soundfile")
return audio
except FileNotFoundError:
raise ValueError(
"ffmpeg was not found but is required to load audio files from filename")
</code></pre>
<p>In every test, this works for exactly one message and prints the desired output to the screen, but the second message produces the following error:</p>
<pre><code>[mp3 @ 0x1208051e0] Invalid frame size (352): Could not seek to 363.
[in#0 @ 0x600002214200] Error opening input: Invalid argument
Error opening input file pipe:0.
</code></pre>
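<p>My current theory is that each message spawns a fresh ffmpeg process that then sees a headerless mid-stream MP3 fragment, hence the seek error on the second chunk. The refactor I'm planning to test keeps a single long-lived process per connection and feeds it every chunk (sketch only; not yet wired into my WebSocket handler):</p>

```python
import asyncio

async def start_decoder(sampling_rate: int = 16000):
    # started once per connection, not once per message
    return await asyncio.create_subprocess_exec(
        "ffmpeg", "-i", "pipe:0",
        "-ac", "1", "-acodec", "pcm_f32le",
        "-ar", str(sampling_rate), "-f", "f32le", "pipe:1",
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
    )

async def feed(process, chunk: bytes) -> None:
    # called for every WebSocket message on the same process
    process.stdin.write(chunk)
    await process.stdin.drain()
```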
<p>How do I fix this?</p>
|
<python><ffmpeg><subprocess><ffmpeg-python>
|
2024-04-12 03:51:26
| 1
| 349
|
Nimrod Sadeh
|
78,314,000
| 1,091,386
|
Specify default attribute value in Python class
|
<p>I would like to construct a class where an initial attribute depends on the result of an external function and gives a default value otherwise. E.g., <code>Puzzle</code> has a default score of the length of the string used to initialize it, but if no string is provided, its score is -1.</p>
<pre class="lang-py prettyprint-override"><code>class Puzzle:
def __init__(self, string):
self.score = len(string) || -1
</code></pre>
<p>This obviously doesn't work as no such syntax is allowed.</p>
<p>One way to do this is with <code>try-except</code>:</p>
<pre class="lang-py prettyprint-override"><code>class Puzzle:
def __init__(self, string):
self.score = -1 # probably redundant
try:
self.score = len(string)
except:
self.score = -1 # could this be pass instead?
</code></pre>
<p>Is there a more concise way to do it?</p>
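<p>The most concise version I've found so far is a conditional expression, though it only guards against a missing string, not against every possible bad input:</p>

```python
class Puzzle:
    def __init__(self, string=None):
        # fall back to -1 when no string is provided
        self.score = len(string) if string is not None else -1

print(Puzzle("hello").score)  # 5
print(Puzzle().score)         # -1
```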
|
<python><initialization>
|
2024-04-12 03:24:11
| 2
| 4,894
|
icedwater
|
78,313,962
| 1,355,120
|
Does shap's gradientexplainer take in a tensorflow function or a model?
|
<p>I'm using tensorflow 2 trying to call the gradientexplainer. I'm looking at the docs and it states the following:</p>
<p><code>Note that for TensowFlow 2 you must pass a tensorflow function, not a tuple of input/output tensors</code></p>
<p>However, it seems that you need to pass in your model directly; you can't pass a TensorFlow function.</p>
<p>My model expects a dictionary instead of a nparray or dataframe, so I am attempting to call gradientexplainer like so:</p>
<pre><code>@tf.function
def model_fcn(data):
feed_dict = {k: tf.convert_to_tensor(data[:, i:i+1]) for i, k in enumerate(feature_keys)}
return model(feed_dict)
gradientexplainer = shap.GradientExplainer(model_fcn, X)
</code></pre>
<p>However I get this error: <code>ValueError: <class 'tensorflow.python.eager.polymorphic_function.polymorphic_function.Function'> is not currently a supported model type!</code></p>
<p>Just wondering if you can pass a function to GradientExplainer, or if it has to be the Keras model.</p>
|
<python><tensorflow><shap><gradientexplainer>
|
2024-04-12 03:04:41
| 0
| 3,389
|
Kevin
|
78,313,930
| 419,115
|
Generic type-hinting for kwargs
|
<p>I'm trying to wrap the signal class of <a href="https://blinker.readthedocs.io/en/stable/" rel="nofollow noreferrer">blinker</a> with one that enforces typing so that the arguments to <code>send</code> and <code>connect</code> get type-checked for each specific signal.</p>
<p>eg if I have a signal <code>user_update</code> which expects <code>sender</code> to be an instance of <code>User</code> and have exactly two <code>kwargs</code>: <code>time: int, audit: str</code>, I can sub-class <code>Signal</code> to enforce that like so:</p>
<pre class="lang-py prettyprint-override"><code>class UserUpdateSignal(Signal):
class Receiver(Protocol):
def __call__(sender: User, /, time: int, audit: str):
...
def send(sender: User, /, time: int, audit: str):
# super call
def connect(receiver: Receiver):
# super call
</code></pre>
<p>which results in the desired behavior when type-checking:</p>
<pre class="lang-py prettyprint-override"><code>user_update.send(user, time=34, audit="user_initiated") # OK
@user_update.connect # OK
def receiver(sender: User, /, time: int, audit: str):
...
user_update.send("sender") # typing error - signature mismatch
@user_update.connect # typing error - signature mismatch
def receiver(sender: str):
...
</code></pre>
<p>The issues with this approach are:</p>
<ul>
<li>it's very verbose, for a few dozen signals I'd have hundreds of lines of code</li>
<li>it doesn't <em>actually</em> tie the type of the <code>send</code> signature to that of the <code>connect</code> signature - they can be updated independently, type-checking would pass, but the code would crash when run</li>
</ul>
<p>The ideal approach would apply a signature defined once to both <code>send</code> and <code>connect</code> - probably through generics. I've tried a few approaches so far:</p>
<h2>Positional Args Only with ParamSpec</h2>
<p>I can achieve the desired behavior using only</p>
<pre class="lang-py prettyprint-override"><code>class TypedSignal(Generic[P], Signal):
def send(self, *args: P.args, **kwargs: P.kwargs):
super().send(*args, **kwargs)
def connect(self, receiver: Callable[P, None]):
return super().connect(receiver=receiver)
user_update = TypedSignal[[User, str]]()
</code></pre>
<p>This type-checks <em>positional</em> args correctly but has no support for <code>kwargs</code> due to the limitations of <code>Callable</code>. I need <code>kwargs</code> support since <code>blinker</code> uses <code>kwargs</code> for every arg past <code>sender</code>.</p>
<h1>Other Attempts</h1>
<h2>Using TypeVar and TypeVarTuple</h2>
<p>I can achieve type-hinting for the <code>sender</code> arg pretty simply using generics:</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar("T")
class TypedSignal(Generic[T], Signal):
def send(self, sender: Type[T], **kwargs):
super(TypedSignal, self).send(sender)
def connect(self, receiver: Callable[[Type[T], ...], None]) -> Callable:
return super(TypedSignal, self).connect(receiver)
# used as
my_signal = TypedSignal[MyClass]()
</code></pre>
<p>what gets tricky is when I want to add type-checking for the <code>kwargs</code>. The approach I've been attempting to get working is using a variadic generic and <code>Unpack</code> like so:</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar("T")
KW = TypeVarTuple("KW")
class TypedSignal(Generic[T, Unpack[KW]], Signal):
def send(self, sender: Type[T], **kwargs: Unpack[Type[KW]]):
super(TypedSignal, self).send(sender)
def connect(self, receiver: Callable[[Type[T], Unpack[Type[KW]]], None]) -> Callable:
return super(TypedSignal, self).connect(receiver)
</code></pre>
<p>but mypy complains: <code>error: Unpack item in ** argument must be a TypedDict</code> which seems odd because this error gets thrown even with no usage of the generic, let alone when a <code>TypedDict</code> is passed.</p>
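<p>Following that error message's hint, the non-generic <code>TypedDict</code> + <code>Unpack</code> version does express typed <code>kwargs</code> for a single signal, but it reintroduces the per-signal verbosity I'm trying to avoid:</p>

```python
import sys
from typing import TypedDict

if sys.version_info >= (3, 11):
    from typing import Unpack
else:
    from typing_extensions import Unpack  # backport for older Pythons

class UserUpdateKwargs(TypedDict):
    time: int
    audit: str

class UserUpdateSignal:
    # kwargs are now checked against the TypedDict by mypy
    def send(self, sender: object, /, **kwargs: Unpack[UserUpdateKwargs]) -> None:
        ...
```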
<h2>Using ParamSpec and Protocol</h2>
<pre class="lang-py prettyprint-override"><code>P = ParamSpec("P")
class TypedSignal(Generic[P], Signal):
def send(self, *args: P.args, **kwargs: P.kwargs) -> None:
super().send(*args, **kwargs)
def connect(self, receiver: Callable[P, None]):
return super().connect(receiver=receiver)
class Receiver(Protocol):
def __call__(self, sender: MyClass) -> None:
pass
update = TypedSignal[Receiver]()
@update.connect
def my_func(sender: MyClass) -> None:
pass
update.send(MyClass())
</code></pre>
<p>but mypy seems to wrap the protocol, so it expects a function that <em>takes</em> the protocol, giving the following errors:</p>
<pre><code> error: Argument 1 to "connect" of "TypedSignal" has incompatible type "Callable[[MyClass], None]"; expected "Callable[[Receiver], None]" [arg-type]
error: Argument 1 to "send" of "TypedSignal" has incompatible type "MyClass"; expected "Receiver" [arg-type]
</code></pre>
<h2>Summary</h2>
<p>Is there a simpler way to do this? Is this possible with current python typing?</p>
<p>mypy version is 1.9.0 - tried with earlier versions and it crashed completely.</p>
|
<python><generics><mypy><python-typing>
|
2024-04-12 02:48:04
| 3
| 894
|
Michoel
|
78,313,709
| 8,325,579
|
Good "hash_func" for unhashable items in streamlit's st.cache_resource - mainly dataframes?
|
<p>Streamlit's st.cache_resource decorator is key to speeding up streamlit apps.</p>
<p>In functions like:</p>
<pre class="lang-py prettyprint-override"><code>@st.cache_resource
def slow_function(inputA, input_b):
...
return something
</code></pre>
<p>it can speed up the code through memoization.</p>
<p>However, this relies on all the inputs being "hashable". If the input itself does not implement the <code>__hash__</code> dunder method, then the user can provide a hashing function.</p>
<pre class="lang-py prettyprint-override"><code>
class myCustomType:
...
@st.cache_resource(hash_funcs={myCustomType: lambda obj: ___})
def slow_function(input_a:int, input_b:myCustomType):
...
return something
</code></pre>
<p>My question is what "general" hash functions could be used, specifically for inputs such as pandas or polars dataframes.</p>
<p>I've tried:</p>
<pre class="lang-py prettyprint-override"><code>hash_funcs={pl.DataFrame: lambda obj: id(obj)} # Not stable across page re-execution
hash_funcs={pl.DataFrame: lambda obj: f'{obj.shape} {obj.schema}'} # Not confident it's unique
hash_funcs={pl.DataFrame: lambda obj: hash_all_cells(obj) } # too slow
</code></pre>
<p>Are there other solutions that could be applied here?</p>
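<p>One more option I've found since, at least for the pandas side: <code>pd.util.hash_pandas_object</code>, which is content-based and vectorized (one u64 per row, combined below by a wrapping sum). I suspect polars' <code>DataFrame.hash_rows</code> could play the same role, but I haven't benchmarked either on large frames:</p>

```python
import pandas as pd

def df_token(df: pd.DataFrame) -> int:
    # per-row u64 hashes; the uint64 sum wraps but stays deterministic
    return int(pd.util.hash_pandas_object(df, index=True).sum())

# intended use: hash_funcs={pd.DataFrame: df_token}
a = pd.DataFrame({"x": [1, 2, 3]})
b = pd.DataFrame({"x": [1, 2, 3]})
c = pd.DataFrame({"x": [1, 2, 4]})
print(df_token(a) == df_token(b), df_token(a) == df_token(c))
```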
|
<python><hash><streamlit><python-polars><memoization>
|
2024-04-12 01:25:36
| 1
| 1,048
|
Myccha
|
78,313,700
| 825,227
|
Create a dataframe from numpy array and parameters
|
<p>Running Elastic Net simulations by varying a couple parameters and looking to save output coefficients to a dataframe for potential review later.</p>
<p>Ultimately looking to save off a dataframe with two parameter identifier columns (ie, 'alpha', 'l1_ratio') and a number of other columns for the resulting coefficients for the model fit with those parameters.</p>
<p>'alpha' is a float (varying from .1 to 1000 in increments) and 'l1_ratio' is a float from 0 to 1. 'coefs' is a numpy array that I'd like to expand into individual columns for each coefficient value (total number will stay fixed, say 5 for this simple case).</p>
<p>For instance:</p>
<pre><code>alpha = .1
l1_ratio = .5
coefs = array([-1.30, -0.45, .04, .65, -0.88])
</code></pre>
<p>would result in a row record in the final dataframe of:</p>
<pre><code> alpha l1_ratio c1 c2 c3 c4 c5
0 .1 .5 -1.30 -0.45 .04 .65 -0.88
</code></pre>
<p>I'll ultimately loop over and place additional rows for each scenario. Would also prefer not to label coefficient columns manually as there can be dozens depending on the situation--leaving column header empty is fine.</p>
<p>How would I do this?</p>
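<p>For reference, this is the pattern I'm currently leaning toward: accumulate one dict per scenario and let pandas derive the columns, auto-naming the coefficient columns <code>c1</code>..<code>cN</code> (stand-in values below):</p>

```python
import numpy as np
import pandas as pd

rows = []
for alpha, l1_ratio in [(0.1, 0.5), (1.0, 0.2)]:          # stand-in param grid
    coefs = np.array([-1.30, -0.45, 0.04, 0.65, -0.88])   # stand-in for model.coef_
    rows.append({"alpha": alpha, "l1_ratio": l1_ratio,
                 **{f"c{i + 1}": c for i, c in enumerate(coefs)}})

df = pd.DataFrame(rows)
print(df)
```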
|
<python><pandas>
|
2024-04-12 01:23:39
| 1
| 1,702
|
Chris
|
78,313,631
| 903,188
|
How do I run a shell script that requires the host Python environment from a script running in a virtual Python environment?
|
<p>I have a Python 3.9 script running in a virtual environment. The host environment is set up with Python 3.6 and many installed packages. I need to be able to run a shell script that only functions properly if it's run in the host environment.</p>
<p>It's not possible for me to make the 3.9 virtual environment work for this shell script, nor is it possible to update the shell script to work in Python 3.9.</p>
<p>When I run the script at the command line, I need to call <code>deactivate</code>, run the shell script, and then reactivate the virtual environment.</p>
<p>I'd like to be able to do the equivalent of that from within a Python program running in my 3.9 venv. But since I'm trying to run a shell script and not a Python program, I can't simply call the Python 3.6 interpreter directly to solve the problem.</p>
<p>I can think of two solutions:</p>
<ol>
<li>Create a temporary shell script that deactivates the venv and calls the target shell script. Then run that temporary shell script from my Py3.9 program. I assume the deactivation will only apply to the scope I create when I run the shell script.</li>
<li>Create an alias for every program I want to run that first runs deactivate, e.g. <code>alias run_myprog="deactivate; myprog"</code>, although it would be kind of tedious to have to create one of these for every program I need to run.</li>
</ol>
<p>Are there better solutions than the above two?</p>
<p>(This is related to <a href="https://stackoverflow.com/q/78313277/903188">this question</a>. The difference is that question is asking about running a Python 3.6 program instead of a shell script that depends on Python 3.6.)</p>
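<p>A third option I'm considering: skip <code>deactivate</code> entirely and just strip the venv's entries from the child process environment, since that's essentially all activation changes. Sketch (the script name is a placeholder):</p>

```python
import os
import subprocess

def host_env() -> dict:
    # drop VIRTUAL_ENV and remove the venv's bin dir from PATH so the
    # child shell script resolves the host python instead of the venv's
    env = os.environ.copy()
    venv = env.pop("VIRTUAL_ENV", None)
    if venv:
        env["PATH"] = os.pathsep.join(
            p for p in env["PATH"].split(os.pathsep)
            if not p.startswith(venv))
    return env

# subprocess.run(["./myscript.sh"], env=host_env(), check=True)
```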
|
<python><python-venv>
|
2024-04-12 00:48:59
| 1
| 940
|
Craig
|
78,313,618
| 10,184,158
|
CSV file shows empty line or patial of content
|
<p>I wrote a Python script to collect some server certificate info and save it in a CSV file.
Everything works fine except the display when I open the CSV file.<a href="https://i.sstatic.net/zruNb.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zruNb.jpg" alt="enter image description here" /></a></p>
<p>As you can see, the Cert Name field is either empty or shows only part of the cert name.
But when I click each cell, the full name is there.
I copied the cert name to Notepad, and it shows a quotation mark in front of each line, like this: "tomcat/tomcat.pem.
Is there a way to display those cert names properly?
I'd prefer to do it within Python, since we have multiple servers for this job.</p>
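<p>In case it helps, my current guess is that the name field carries a stray newline from whatever command produced it, which forces the <code>csv</code> writer to quote the field and makes Excel render it oddly. This is the cleanup I'm testing, with toy rows standing in for my real data:</p>

```python
import csv

rows = [["server1", "tomcat/tomcat.pem\n"],   # suspected stray newline
        ["server2", "conf/ssl.pem"]]

# strip whitespace/newlines from every string cell before writing,
# so no field needs quoting
clean = [[c.strip() if isinstance(c, str) else c for c in row] for row in rows]
with open("certs.csv", "w", newline="") as f:
    csv.writer(f).writerows(clean)
```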
|
<python><excel><csv>
|
2024-04-12 00:42:16
| 1
| 463
|
chun xu
|
78,313,614
| 11,394,520
|
AlloyDB: Batch Import of CSV Files from GCS Bucket using python
|
<p>We need to design a process that efficiently imports large CSV files, created by upstream processes on a regular basis, into AlloyDB. We'd like to use Python for this task. What is the best practice in this case?</p>
<p><strong>Some considerations:</strong></p>
<ul>
<li>SQL's INSERT statement is way less performant than using a database specific import tool like pg_restore</li>
<li>While pg_restore can be executed remotely, I'd expect import performance of huge files to be significantly better when run locally on the DB server because of the saved network round trips</li>
<li><a href="https://cloud.google.com/alloydb/docs/import-csv-file" rel="nofollow noreferrer">AlloyDB documentation says</a>: SSH into the DB server from a container, copy over the file from GCS bucket to local and run psql COPY / pg_restore. This is not a very convenient set of actions to do programatically.</li>
</ul>
<p>We have a similar setup with a CloudSQL postgres instance. In contrast to AlloyDB, CloudSQL offers a nice <a href="https://cloud.google.com/sql/docs/postgres/import-export/import-export-csv#import_data_to" rel="nofollow noreferrer">API that acts as an abstraction layer</a> and handles the whole import of the file. By that, it takes away a lot of burden from the developer.</p>
|
<python><postgresql><google-cloud-platform><google-alloydb>
|
2024-04-12 00:38:47
| 1
| 560
|
Thomas W.
|
78,313,586
| 9,655,481
|
Python library: optionally support numpy types without depending on numpy
|
<h2>Context</h2>
<p>We develop a Python library that contains a function expecting a numlike parameter. We specify this in our signature and make use of python type hints:</p>
<pre class="lang-py prettyprint-override"><code>def cool(value: float | int | List[float | int])
</code></pre>
<h2>π³ Problem & Goal</h2>
<p>During runtime, we noticed it's fine to pass in numpy number types as well, e.g. <code>np.float16(1.2345)</code>. So we thought: why not incorporate "numpy number types" into our signature as this would be beneficial for the community that will use our library.</p>
<p>However, we don't want <code>numpy</code> as dependency in our project. We'd like to only signify in the method signature that we can take a <code>float</code>, <code>int</code>, a list of them OR any "numpy number type". If the user hasn't installed <code>numpy</code> on their system, they should still be able to use our library and just ignore that they could possibly pass in a "numpy number type" as well.</p>
<p><sub>We don't want to depend on <code>numpy</code> as we don't use it in our library (except for allowing their types in our method signature). So why include it in our dependency graph? There's no reason to do so. One dependency less is better.</sub></p>
<h2>Additional requirements/context</h2>
<ul>
<li>We search for an answer that is compatible with all Python versions <code>>=3.8</code>.</li>
<li>(The answer should work with <code>setuptools>=69.0</code>.)</li>
<li>The answer should be such that we get proper IntelliSense (<code>Ctrl + Space</code>) when typing <code>cool(</code> in an IDE, e.g. VSCode.</li>
<li><a href="https://github.com/resultwizard/ResultWizard/blob/main/pyproject.toml" rel="noreferrer">This</a> is what our <code>pyproject.toml</code> looks like.</li>
</ul>
<h2>Efforts</h2>
<ul>
<li>We've noticed the option <code>[project.optional-dependencies]</code> for the <code>pyproject.toml</code> file, see <a href="https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#dependencies-optional-dependencies" rel="noreferrer">here</a>. However, it remains unclear how this optional dependencies declaration helps us in providing optional <code>numpy</code> datatypes in our method signatures.</li>
<li><code>numpy</code> provides the <a href="https://numpy.org/doc/stable/reference/typing.html" rel="noreferrer"><code>numpy.typing</code></a> type annotations. Is it somehow possible to only depend on this subpackage?</li>
<li>We did search on search engines and found <a href="https://stackoverflow.com/q/563022/9655481">this SO question</a>, however our question is more specific with regards to how we can only use <em>types</em> from another module. We also found <a href="https://stackoverflow.com/q/35328286/9655481">this SO question</a>, but despite having "optional" in its title, it's not about <em>optional</em> numpy types.</li>
</ul>
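One pattern that seems to fit these constraints (a sketch; <code>cool</code> and <code>Number</code> are illustrative names, not from the question): gate the <code>numpy</code> import behind <code>typing.TYPE_CHECKING</code> so it is only imported by type checkers, and declare <code>numpy</code> under <code>[project.optional-dependencies]</code> so users who want the annotation to resolve can opt in.

```python
from __future__ import annotations  # annotations stay strings at runtime

from typing import TYPE_CHECKING, List, Union

if TYPE_CHECKING:
    # Only evaluated by type checkers (mypy, Pyright), never at runtime,
    # so users without numpy installed are unaffected.
    import numpy as np

# The string "np.number" becomes a ForwardRef; it is resolved only by
# the type checker, which sees the guarded import above.
Number = Union[int, float, "np.number"]

def cool(value: Union[Number, List[Number]]) -> float:
    """Toy body; the real library would do something useful."""
    if isinstance(value, list):
        return float(sum(value))
    return float(value)
```

At runtime nothing touches <code>numpy</code>, yet IntelliSense still shows the full union because the IDE's type checker follows the <code>TYPE_CHECKING</code> branch.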
|
<python><numpy>
|
2024-04-12 00:26:27
| 1
| 1,157
|
Splines
|
78,313,334
| 195,540
|
Python opentelemetry wsgi usage with gunicorn / Application Insights
|
<p>I have the below setup working perfectly in development mode in my django application, so when I run <code>python manage.py runsslserver</code> the application reports perfectly to Application Insights.</p>
<pre><code>import os
import sys

from azure.monitor.opentelemetry import configure_azure_monitor
from django.conf import settings as my_settings

def main():
    """Run administrative tasks."""
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my.settings')
    # Configure OpenTelemetry to use Azure Monitor with the specified connection string
    configure_azure_monitor(
        connection_string=my_settings.AZ_OPENTELEMETRY_CONNECTION_STRING,
    )
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)

if __name__ == '__main__':
    main()
</code></pre>
<p>However, when I move this into production, we're utilizing gunicorn and wsgi, so, manage.py isn't ever running. I've found a way to to add <code>OpenTelemetryMiddleware</code> to the wsgi file, but have no idea how/where to call the <code>configure_azure_monitor</code> to record every request. What am I missing?</p>
<pre><code>import os
from django.core.wsgi import get_wsgi_application
from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my.settings')
application = get_wsgi_application()
application = OpenTelemetryMiddleware(application)
</code></pre>
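One common pattern (a sketch, not from the question; whether Django settings are loaded early enough here is an assumption worth verifying) is to call <code>configure_azure_monitor</code> at module level in <code>wsgi.py</code>, before the middleware wraps the application, so every gunicorn worker configures the exporter on startup. For pre-fork servers, gunicorn's <code>post_fork</code> hook in <code>gunicorn.conf.py</code> is another place this call is sometimes recommended.

```python
# my/wsgi.py -- sketch; module and setting names are assumptions
import os

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my.settings')

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()

# Configure the exporter once per worker process, before wrapping the app.
from django.conf import settings
from azure.monitor.opentelemetry import configure_azure_monitor
configure_azure_monitor(
    connection_string=settings.AZ_OPENTELEMETRY_CONNECTION_STRING,
)

from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware
application = OpenTelemetryMiddleware(application)
```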
<p>I'm looking to track requests normally as displayed here:
<a href="https://i.sstatic.net/CkGxm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CkGxm.png" alt="Application Insights Dashboard" /></a></p>
|
<python><django><azure-application-insights><wsgi><open-telemetry>
|
2024-04-11 22:34:19
| 1
| 1,787
|
scoopseven
|
78,313,298
| 13,395,230
|
Formula for the Nth bit_count (or population count) of size M
|
<p>I am interested in finding a simple formula for determining the nth time a particular bit_count occurs in the sequence of natural numbers. Specifically, what is the relationship between <code>K</code> and <code>N</code> in the table below? For example, for <code>K=123456789123456789123456789</code> I can tell you that <code>M</code> is <code>50</code>, but what is the <code>N</code>?</p>
<pre><code>length = 5
for K in range(2**length):
    bits = bin(K)[2:].zfill(length)
    M = K.bit_count()  # numbers of "1"s in the binary sequence
    N = sum(1 for i in range(K) if M == i.bit_count())
    print(f'{K: <2}', bits, M, N)
'''
K bits M N
0 00000 0 0
1 00001 1 0
2 00010 1 1
3 00011 2 0
4 00100 1 2
5 00101 2 1
6 00110 2 2
7 00111 3 0
8 01000 1 3
9 01001 2 3
10 01010 2 4
11 01011 3 1
12 01100 2 5
13 01101 3 2
14 01110 3 3
15 01111 4 0
16 10000 1 4
17 10001 2 6
18 10010 2 7
19 10011 3 4
20 10100 2 8
21 10101 3 5
22 10110 3 6
23 10111 4 1
24 11000 2 9
25 11001 3 7
26 11010 3 8
27 11011 4 2
28 11100 3 9
29 11101 4 3
30 11110 4 4
31 11111 5 0
...
'''
</code></pre>
<hr />
<p>So, I appear to have solved half of it. It appears that</p>
<pre><code>N = (K - sum(2**i for i in range(M))).bit_count()
</code></pre>
<p>whenever <code>N<=M</code>. This appears to be because,</p>
<pre><code>K = sum(2**i for i in range(M)) + sum(2**(M-1-i) for i in range(N))
</code></pre>
<p>again, whenever <code>N<=M</code>. It appears that <code>N<=M</code> occurs about half the time.</p>
<pre><code>length = 5
for K in range(2**length):
    bits = bin(K)[2:].zfill(length)
    M = K.bit_count()  # numbers of "1"s in the binary sequence
    N = sum(1 for i in range(K) if M == i.bit_count())
    if N <= M:
        A = sum(2**i for i in range(M)) + sum(2**(M-1-i) for i in range(N))
        B = (K - sum(2**i for i in range(M))).bit_count()
    else:
        A = '-'
        B = '-'
    print(f'{K: <2}', bits, M, N, A, B)
</code></pre>
|
<python><algorithm><math><combinatorics><counting>
|
2024-04-11 22:23:44
| 4
| 3,328
|
Bobby Ocean
|
78,313,277
| 903,188
|
How can I run a Python program in the global Python environment from one in a virtual Python environment?
|
<p>I am writing and running Python programs on a machine that I don't have much control over. The global environment has Python 3.6 installed along with dozens of packages. There are many scripts available to me in that environment that I need to be able to run.</p>
<p>I run most of the scripts I develop in a Python 3.9 virtual environment. It is not practical to install the dozens of packages from the Py3.6 global environment into my venv, but I do want to be able to run some of the scripts that were designed for the 3.6 environment from my own 3.9 scripts.</p>
<p>I don't need to interact with the 3.6 script I'm running. I want to print the output as it is generated, and then process any files it produces.</p>
<p>This is basically what I want to do, although it doesn't actually work:</p>
<pre><code>def run36(scriptname):
    p = subprocess.Popen("deactivate;", "python3", scriptname, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True, bufsize=1, universal_newlines=True)
    for l in p.stdout():
        print(l.rstrip())
    return p.returncode

run36("some36prog.py")  # will generate somenewfile.txt
with open("somenewfile.txt") as f:
    ...
</code></pre>
<p>If the correct answer is that I need to explicitly call the Python3.6 executable to make this work, I need to be able to discover its path somehow without hard coding it.</p>
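One possible shape for this (a sketch; the interpreter-discovery logic is an assumption, and <code>run_outside_venv</code> is an illustrative name): strip the venv's entries from <code>PATH</code>, let <code>shutil.which</code> find whatever <code>python3</code> the base system exposes, pass the command to <code>Popen</code> as a list, and stream stdout line by line:

```python
import os
import shutil
import subprocess
import sys

def run_outside_venv(scriptname: str) -> int:
    # Copy the environment and drop the venv's bin dir from PATH so
    # `which` resolves the system interpreter, not the venv's.
    env = os.environ.copy()
    venv = env.pop("VIRTUAL_ENV", None)
    if venv:
        parts = env.get("PATH", "").split(os.pathsep)
        env["PATH"] = os.pathsep.join(p for p in parts if not p.startswith(venv))
    python = shutil.which("python3", path=env.get("PATH")) or sys.executable
    # Popen takes the command as a list; text=True + bufsize=1 lets us
    # print output as it is produced.
    p = subprocess.Popen(
        [python, scriptname],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
        text=True, bufsize=1, env=env,
    )
    for line in p.stdout:
        print(line.rstrip())
    return p.wait()
```

`deactivate` is a shell function, so it cannot be invoked through `Popen` at all; removing the venv from the child's environment achieves the same effect.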
|
<python><python-venv>
|
2024-04-11 22:16:30
| 1
| 940
|
Craig
|
78,313,232
| 22,407,544
|
How to generate file in MEDIA_ROOT in Django?
|
<p>In my view I have a function that generates a file but does not return anything. I want the file to be generated in a specific folder in <code>MEDIA_ROOT</code> and saved in my models but I'm not sure how exactly to go about doing it. Here is the relevant section of my view:</p>
<p><code>writer = get_writer(output_format, output_dir)</code></p>
<p>This function generates a file in the stated directory. I want to save it to a specific directory relative to my <code>MEDIA_ROOT</code>.</p>
<p>Here is the relevant section of my settings.py file:</p>
<pre><code>MEDIA_URL='/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
</code></pre>
<p>The destination directory, for example, can be <code>media/destination_directory</code> but I'm not sure how to write the path relative to <code>MEDIA_ROOT</code></p>
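One way to build such a path (a sketch with illustrative names; in a real project <code>MEDIA_ROOT</code> would come from <code>django.conf.settings</code>, and a <code>FileField</code> on the model would store the path <em>relative</em> to it):

```python
import os
import tempfile

# Stand-in for django.conf.settings.MEDIA_ROOT (hypothetical value).
MEDIA_ROOT = os.path.join(tempfile.gettempdir(), "media")

def media_path(*parts: str) -> str:
    """Join parts under MEDIA_ROOT, creating the parent directory."""
    path = os.path.join(MEDIA_ROOT, *parts)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    return path

# e.g. pass this as output_dir to the writer
target = media_path("destination_directory", "report.csv")
```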
|
<python><django>
|
2024-04-11 22:00:48
| 1
| 359
|
tthheemmaannii
|
78,313,091
| 4,549,682
|
Bug in polars 0.20 but not in 0.19 - any way to get around it?
|
<p>I found there seems to be a bug in the latest polars versions (0.20.19) that wasn't present in 0.19. With code like:</p>
<pre><code>df.group_by("SomeDate")
.agg(
pl.col(col)
.filter(
pl.col(col).is_between(
pl.quantile(col, 0.005), pl.quantile(col, 0.995)
)
)
.mean()
)
</code></pre>
<p>This gives an error like <code>assertion failed: lhs.len() == rhs.len()</code> in the <code>group_by.agg</code> step. The same thing happens if I use min/max instead of quantiles to filter. If I use fixed values without referencing the column, it works.</p>
<p>One workaround would be to get the quantiles from the group_by, then join the result with the original df, and finally filter each group against its quantiles. However, that is going to be nasty with my current code, since it loops over a lot of columns and also does this with many different variations. Is there an easy solution, or should I just stay with the older polars version until this gets fixed?</p>
|
<python><python-polars>
|
2024-04-11 21:26:56
| 0
| 16,136
|
wordsforthewise
|
78,313,084
| 23,260,297
|
Replace existing column with merged column and rename
|
<p>I am merging two dataframes. The first dataframe looks like this:</p>
<pre><code>A B C D
party1 asset1 product1 Buy
party1 asset1 product2 Sell
party2 asset2 product1 Buy
party2 asset2 product2 Sell
</code></pre>
<p>The second dataframe looks like this:</p>
<pre><code>A B D
party1 asset1 Buy
party1 asset1 Sell
party2 asset2 Buy
party2 asset2 Sell
</code></pre>
<p>I merge the dataframes like this:</p>
<pre><code>df2 = df.merge(df1, on=['A', 'B', 'D'])
</code></pre>
<p>which returns:</p>
<pre><code>A B D C
party1 asset1 Buy product1
party1 asset1 Sell product2
party2 asset2 Buy product1
party2 asset2 Sell product2
</code></pre>
<p>I need to replace the values in column 'B' with the values in column 'C' while keeping the same name 'B'.</p>
<pre><code>A B D
party1 product1 Buy
party1 product2 Sell
party2 product1 Buy
party2 product2 Sell
</code></pre>
<p>Any suggestions on how to achieve this would help.</p>
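A sketch of one way to do the overwrite-and-rename in a single chain (assuming <code>pandas</code>; the toy data mirrors the question):

```python
import pandas as pd

df = pd.DataFrame({
    "A": ["party1", "party1", "party2", "party2"],
    "B": ["asset1", "asset1", "asset2", "asset2"],
    "C": ["product1", "product2", "product1", "product2"],
    "D": ["Buy", "Sell", "Buy", "Sell"],
})
df1 = df[["A", "B", "D"]].copy()

merged = df1.merge(df, on=["A", "B", "D"])
# Overwrite 'B' with 'C', drop 'C', restore the A/B/D column order.
out = merged.assign(B=merged["C"]).drop(columns="C")[["A", "B", "D"]]
```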
|
<python><pandas>
|
2024-04-11 21:22:27
| 1
| 2,185
|
iBeMeltin
|
78,312,935
| 13,354,617
|
Merging a python package into one script file
|
<p>I'm trying to merge all the files of a python package scripts+binaries into one script. I found this repo that does that: <a href="https://github.com/pagekite/PyBreeder" rel="nofollow noreferrer">https://github.com/pagekite/PyBreeder</a></p>
<p>However, it's an old repo originally written for Python 2. I found a PR by another contributor that fixed some errors for Python 3, but it still doesn't work for me.</p>
<p>Here's the current working PR on GitHub: <a href="https://github.com/pagekite/PyBreeder/blob/07779e145a69f50daa7e05fe2fdc444b2c62404c/breeder.py" rel="nofollow noreferrer">PyBreeder</a>.
It runs and generates one script file.</p>
<p>but after running the generated script, I get this error: <strong>ImportError: attempted relative import with no known parent package</strong>.</p>
<p>This is the generated script, you don't need to download the pybreeder repo to run it:</p>
<pre><code>#!/usr/bin/python
import base64, os, sys, zlib

try:
    import io as StringIO
except ImportError:
    import StringIO

if sys.version_info >= (999, 3, 4):
    from importlib.util import module_from_spec as new_module
else:
    import warnings
    with warnings.catch_warnings():
        warnings.filterwarnings("ignore", category=DeprecationWarning)
        from imp import new_module

__FILES = {}
__builtin_open = open
__os_path_exists, __os_path_getsize = os.path.exists, os.path.getsize

def __comb_open(filename, *args, **kwargs):
    if filename in __FILES:
        return StringIO.StringIO(__FILES[filename])
    else:
        return __builtin_open(filename, *args, **kwargs)

def __comb_exists(filename, *args, **kwargs):
    if filename in __FILES:
        return True
    else:
        return __os_path_exists(filename, *args, **kwargs)

def __comb_getsize(filename, *args, **kwargs):
    if filename in __FILES:
        return len(__FILES[filename])
    else:
        return __os_path_getsize(filename, *args, **kwargs)

if 'b64decode' in dir(base64):
    __b64d = base64.b64decode
else:
    __b64d = base64.decodestring

open = __comb_open
os.path.exists = __comb_exists
os.path.getsize = __comb_getsize
sys.path[0:0] = ['.SELF/']
###############################################################################
__FILES[".SELF/trying/__init__.py"] = """\
from .app import *
"""
m = sys.modules["trying"] = new_module("trying")
m.__file__ = "trying/__init__.py"
m.open = __comb_open
exec(__FILES[".SELF/trying/__init__.py"], m.__dict__)
###############################################################################
def hello():
    print("nice")
#EOF#
</code></pre>
<p>Here's how you can recreate a simple example:</p>
<p>I created a directory called <code>trying</code> with two files, <code>__init__.py</code> and <code>app.py</code>.
In <code>__init__.py</code>:</p>
<pre><code>from .app import *
</code></pre>
<p>and in <code>app.py</code>:</p>
<pre><code>def hello():
print("nice")
</code></pre>
<p>Then run <a href="https://github.com/pagekite/PyBreeder/blob/07779e145a69f50daa7e05fe2fdc444b2c62404c/breeder.py" rel="nofollow noreferrer">PyBreeder</a> like this: <code>python breeder.py trying/</code></p>
<p>Note: It will print the generated script to terminal, so do this if you want to save it to a file: <code>python breeder.py trying/ > generated.py</code></p>
|
<python><python-import><python-packaging>
|
2024-04-11 20:40:31
| 0
| 369
|
Mhmd Admn
|
78,312,866
| 1,711,271
|
remove all whitespaces from the headers of a polars dataframe
|
<p>I'm reading some csv files where the column headers are pretty annoying: they contain whitespaces, tabs, etc.</p>
<pre><code>A B C D E
CD E 300 0 0 0
CD E 1071 0 0 0
K E 390 0 0 0
</code></pre>
<p>I want to read the file, then remove all whitespaces and/or tabs from the column names. Currently I do</p>
<pre><code>import polars as pl
file_df = pl.read_csv(csv_file,
comment_prefix='#',
separator='\t')
file_df = file_df.rename(lambda column_name: column_name.strip())
</code></pre>
<p>Is this the "polaric" way to do it? I'm not a big fan of lambdas, but if the only other solution is to write a function just for this, I guess I'll stick to lambdas.</p>
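If the goal is really removing <em>all</em> whitespace (tabs and internal spaces too, not just leading/trailing), a dict-based mapping passed to <code>rename</code> is an alternative to the callable. A sketch of building that mapping (the header list here is hypothetical, standing in for <code>file_df.columns</code>):

```python
import re

# Hypothetical headers standing in for file_df.columns.
columns = ["A \t", "  B", "C C"]

# Usable as file_df.rename(mapping): old name -> whitespace-free name.
mapping = {c: re.sub(r"\s+", "", c) for c in columns}
```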
|
<python><rename><python-polars>
|
2024-04-11 20:19:56
| 2
| 5,726
|
DeltaIV
|
78,312,841
| 1,609,514
|
Is there a simple analytical solution to this two-variable ODE? (using Sympy)
|
<p>I have a relatively simple but non-linear set of differential equations in two variables:</p>
<ol>
<li>dx[1]/dt = (u[1] - u[3]) / A</li>
<li>dx[2]/dt = u[1] * u[2] - u[3] * x[2] / (x[1] * A)</li>
</ol>
<p>I thought this might have an analytical solution so I tried solving it in Sympy following <a href="https://stackoverflow.com/a/26187578/1609514">the example posted here</a>.</p>
<pre class="lang-python prettyprint-override"><code>from sympy import symbols, Function, Eq, dsolve
x1, x2 = symbols('x1, x2', cls=Function)
t, u1, u2, u3, A, x1_init, x2_init = symbols('t, u1, u2, u3, A, x1_init, x2_init')
ode_system = [
Eq(x1(t).diff(t), (u1 - u3) / A),
Eq(x2(t).diff(t), u1 * u2 - u3 * x2(t) / (x1(t) * A))
]
ics = {x1(0): x1_init, x2(0): x2_init}
sol = dsolve(ode_system, [x1(t), x2(t)], ics=ics)
</code></pre>
<p>Sympy does in fact find a solution. The solution for <code>x1(t)</code> is as expected, but the integral for <code>x2(t)</code> is pretty complicated and includes a <code>Piecewise</code> function:</p>
<pre><code>In [2]: sol[0]
Out[2]: Eq(x1(t), x1_init + t*(u1 - u3)/A)
In [3]: sol[1]
Out[3]: Eq(x2(t), u1*u2*Piecewise((exp(u1*log(A*x1_init + t*u1 - t*u3)/(u1 - u3))/u1, Ne(u1, 0)), (log(A*x1_init + t*u1 - t*u3)/(u1 - u3), True))*exp(-u3*log(A*x1_init + t*u1 - t*u3)/(u1 - u3)) - (u1*u2*Piecewise((exp(u1*log(A*C1)/(u1 - u3))/u1, Ne(u1, 0)), (log(A*C1)/(u1 - u3), True)) - x2_init*(A*x1_init)**(u3/(u1 - u3)))*exp(-u3*log(A*x1_init + t*u1 - t*u3)/(u1 - u3)))
</code></pre>
<p>I'm having trouble deciphering this...</p>
<p>Is there an obvious simplification to this solution that might work under certain conditions I'm interested in or is it really this complex piecewise function?</p>
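For what it's worth, for <code>u1 ≠ 0</code> and <code>u1 ≠ u3</code> the <code>Piecewise</code> collapses, and the second equation is linear in <code>x2</code> with integrating factor <code>x1(t)^{u3/(u1-u3)}</code>; the solution can then be written compactly (my own simplification, worth double-checking with <code>sympy.simplify</code>):

```latex
x_1(t) = x_{1,\mathrm{init}} + \frac{(u_1 - u_3)\,t}{A},
\qquad
x_2(t) = u_2 A\, x_1(t)
  + \bigl(x_{2,\mathrm{init}} - u_2 A\, x_{1,\mathrm{init}}\bigr)
    \left(\frac{x_{1,\mathrm{init}}}{x_1(t)}\right)^{u_3/(u_1 - u_3)}
```

The particular solution <code>x2 = u2*A*x1(t)</code> can be verified by substitution: its derivative is <code>u2*(u1-u3)</code>, which equals <code>u1*u2 - u3*x2/(x1*A)</code> for that choice of <code>x2</code>.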
|
<python><sympy><ode><symbolic-math><integral>
|
2024-04-11 20:13:45
| 1
| 11,755
|
Bill
|
78,312,723
| 354,051
|
conversion from YUV to RGB color
|
<p>I'm trying to convert <a href="https://github.com/FFmpeg/FFmpeg/blob/7e59c4f90885a9ffffb0a3f1d385b4eae3530529/libavfilter/avf_showspectrum.c#L191" rel="nofollow noreferrer">FFMPEG's Intensity</a> color map from yuv format to rgb. It should give you colors as shown in the color bar in the image. You can generate a spectrogram using command:</p>
<pre><code>ffmpeg -i words.wav -lavfi showspectrumpic=s=224x224:mode=separate:color=intensity spectrogram.png
</code></pre>
<p><a href="https://i.sstatic.net/Qs6S4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qs6S4.png" alt="ffmpeg spectogram" /></a></p>
<p>Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def rgb_from_yuv(Y, U, V):
    Y -= 16
    U -= 128
    V -= 128
    R = 1.164 * Y + 1.596 * V
    G = 1.164 * Y - 0.392 * U - 0.813 * V
    B = 1.164 * Y + 2.017 * U
    # Clip and normalize RGB values
    R = np.clip(R, 0, 255)
    G = np.clip(G, 0, 255)
    B = np.clip(B, 0, 255)
    return '#{:02X}{:02X}{:02X}'.format(int(R), int(G), int(B))

def yuv_to_rgb(Y, U, V):
    # Convert YUV to RGB
    R = Y + (V - 128) * 1.40200
    G = Y + (U - 128) * -0.34414 + (V - 128) * -0.71414
    B = Y + (U - 128) * 1.77200
    # Clip and normalize RGB values
    R = np.clip(R, 0, 255)
    G = np.clip(G, 0, 255)
    B = np.clip(B, 0, 255)
    print(R, G, B)
    return '#{:02X}{:02X}{:02X}'.format(int(R), int(G), int(B))

# FFMPEG's intensity color map
colors = [
    [ 0,    0,                  0,                  0 ],
    [ 0.13, .03587126228984074, .1573300977624594,  -.02548747583751842 ],
    [ 0.30, .18572281794568020, .1772436246393981,  .17475554840414750 ],
    [ 0.60, .28184980583656130, -.1593064119945782, .47132074554608920 ],
    [ 0.73, .65830621175547810, -.3716070802232764, .24352759331252930 ],
    [ 0.78, .76318535758242900, -.4307467689263783, .16866496622310430 ],
    [ 0.91, .95336363636363640, -.2045454545454546, .03313636363636363 ],
    [ 1,    1,                  0,                  0 ]]

cmaps = []
for i, c in enumerate(colors):
    Y = c[1]
    U = c[2]
    V = c[3]
    hex = yuv_to_rgb(Y, U, V)
    cmaps.append((c[0], hex))
print(cmaps)
</code></pre>
<p>Both the functions are not providing desired output.</p>
|
<python><ffmpeg><colors><yuv><color-conversion>
|
2024-04-11 19:41:10
| 1
| 947
|
Prashant
|
78,312,648
| 6,495,189
|
Python inheritance with @dataclasses.dataclass and annotations
|
<p>I am very confused by the following code:</p>
<pre><code>import dataclasses
@dataclasses.dataclass()
class Base():
    x: int = 100

@dataclasses.dataclass()
class Derived(Base):
    x: int = 200

@dataclasses.dataclass()
class DerivedRaw(Base):
    x = 300

base = Base()
derived = Derived()
derived_raw = DerivedRaw()

print(base.x)
print(derived.x)
print(derived_raw.x)
</code></pre>
<p>What it prints is:</p>
<pre><code>100
200
100
</code></pre>
<p>I don't understand why the last number isn't 300. Why do the annotations matter?</p>
<p>This seems to be an interaction with <code>@dataclasses.dataclass()</code>, as the code:</p>
<pre><code>class Base():
    x: int = 100

class DerivedRaw(Base):
    x = 300

derived_raw = DerivedRaw()
print(derived_raw.x)
</code></pre>
<p>Does print 300.</p>
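The reason appears to be that <code>@dataclass</code> builds fields from <code>__annotations__</code> only: <code>DerivedRaw</code> adds no annotation for <code>x</code>, so the inherited field object (default 100) is reused, and the generated <code>__init__</code> sets an instance attribute that shadows the bare class attribute 300. A quick check (my own sketch):

```python
import dataclasses

@dataclasses.dataclass()
class Base:
    x: int = 100

@dataclasses.dataclass()
class DerivedRaw(Base):
    x = 300  # no annotation -> not a field, just a class attribute

# The field still carries Base's default ...
print(dataclasses.fields(DerivedRaw)[0].default)
# ... and __init__ assigns it on the instance, shadowing the 300.
print(DerivedRaw().x)
print(DerivedRaw.__dict__["x"])
```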
|
<python><inheritance><python-typing><python-dataclasses>
|
2024-04-11 19:24:40
| 1
| 443
|
Tony Bruguier
|
78,312,551
| 21,935,028
|
Decrypt a Java encrypted file in Python
|
<p>I use the following to encrypt a file in Java:</p>
<pre><code>
public static byte[] hexStringToByteArray(String s) {
    int len = s.length();
    byte[] data = new byte[len / 2];
    for (int i = 0; i < len; i += 2) {
        data[i / 2] = (byte) ((Character.digit(s.charAt(i), 16) << 4) + Character.digit(s.charAt(i+1), 16));
    }
    return data;
}

public static void encExtract(String zipName, String encKey) {
    try {
        byte[] KeyByte = hexStringToByteArray(encKey);
        SecretKey secKey = new SecretKeySpec(KeyByte, 0, KeyByte.length, "AES");
        Cipher desCipher;
        desCipher = Cipher.getInstance("AES");
        File zipFile = new File(zipName);
        byte[] indata = java.nio.file.Files.readAllBytes(zipFile.toPath());
        System.out.println(" Encyption size " + indata.length);
        desCipher.init(Cipher.ENCRYPT_MODE, secKey);
        byte[] textEncrypted = desCipher.doFinal(indata);
        try (FileOutputStream outputStream = new FileOutputStream(zipName + ".enc")) {
            outputStream.write(textEncrypted);
            System.out.println(" Encypted size " + textEncrypted.length);
        }
    } catch (Exception e) {
        e.printStackTrace(System.out);
    }
}
</code></pre>
<p>The system that the encrypted file will get sent to will use Python to decrypt it. I am trying to test this and have built the following Python script:</p>
<pre><code>import base64
import hashlib
from Crypto import Random
from Crypto.Cipher import AES
encFH = open('the_enc_file.zip.enc', 'rb')
encData = bytearray(encFH.read())
encKey = "__hello123__world123__again123__"
iv = encData[:AES.block_size]
cipher = AES.new(encKey, AES.MODE_CBC, iv)
decdata = cipher.decrypt(encData[AES.block_size:])
f = open("the_dec_file.zip", "wb")
f.write(decdata)
f.close()
</code></pre>
<p>When I run the above Python I get the error:</p>
<pre><code>Traceback (most recent call last):
File "thescript.py", line 13, in <module>
cipher = AES.new(encKey, AES.MODE_CBC, iv)
File "/home/xxx/.local/lib/python3.10/site-packages/Crypto/Cipher/AES.py", line 228, in new
return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs)
File "/home/xxx/.local/lib/python3.10/site-packages/Crypto/Cipher/__init__.py", line 79, in _create_cipher
return modes[mode](factory, **kwargs)
File "/home/xxx/.local/lib/python3.10/site-packages/Crypto/Cipher/_mode_cbc.py", line 274, in _create_cbc_cipher
cipher_state = factory._create_base_cipher(kwargs)
File "/home/xxx/.local/lib/python3.10/site-packages/Crypto/Cipher/AES.py", line 100, in _create_base_cipher
result = start_operation(c_uint8_ptr(key),
File "/home/xxx/.local/lib/python3.10/site-packages/Crypto/Util/_raw_api.py", line 248, in c_uint8_ptr
raise TypeError("Object type %s cannot be passed to C code" % type(data))
TypeError: Object type <class 'str'> cannot be passed to C code
</code></pre>
<p>I think perhaps there is a mismatch with AES.MODE_CBC in Python and Cipher.ENCRYPT_MODE in Java, but I'm not sure.</p>
|
<python><java><python-cryptography><javax.crypto>
|
2024-04-11 19:01:22
| 1
| 419
|
Pro West
|
78,312,534
| 12,556,481
|
Impossible to grab an href link using requests or Selenium
|
<p>My goal is to extract all the href links from this page and find the .pdf links. I tried using the requests library and Selenium, but neither of them could extract it.</p>
<p>How can I solve this problem? Thank you.</p>
<p>Ex: This contain a .pdf file link</p>
<p><a href="https://i.sstatic.net/tmrDY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tmrDY.png" alt="enter image description here" /></a></p>
<p><strong>This is the request code:</strong></p>
<pre><code> import requests
from bs4 import BeautifulSoup
headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/113.0'}
url="https://www.bain.com/insights/topics/energy-and-natural-resources-report/"
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.text, 'html.parser')
for link in soup.find_all('a'):
    print(link.get('href'))
</code></pre>
<p><strong>This is the selenium code:</strong></p>
<pre><code> from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup
options = webdriver.ChromeOptions()
driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=options)
driver.get("https://www.bain.com/insights/topics/energy-and-natural-resource-report/")
driver.implicitly_wait(10)
page_source = driver.page_source
soup = BeautifulSoup(page_source, 'html.parser')
for link in soup.find_all('a'):
    print(link.get('href'))
driver.quit()
</code></pre>
|
<python><selenium-webdriver><python-requests><selenium-chromedriver>
|
2024-04-11 18:58:34
| 2
| 309
|
dfcsdf
|
78,312,517
| 18,100,562
|
How to implement EventDispatcher in Kivy?
|
<p>Good Day, Friends!</p>
<p>The Kivy documentation is a bit unclear about the usage of EventDispatcher and I was not able to implement it.</p>
<p>Simple description:
When entering a screen, in the on_enter() method, an object Task() is created and it will execute a threaded function. This function will emit (publish) a message with its results. Some UI elements (Buttons, Labels, custom Widgets) are subscribed to that message and will call a member function that accepts the published results and performs some actions.</p>
<pre><code>class Reciever(Button):
    def __init__(self, text, **kwargs):
        super().__init__(**kwargs)
        self.text = text

    def on_process_task_result(self, result):
        print(f"{self} recieved result: {result}")

r1 = Reciever("r1")
r1.register_event_type("on_process_task_result")

class Task(EventDispatcher):
    def __init__(self, **kwargs):
        super(Task, self).__init__(**kwargs)

    def execute(self, sender):
        # Simulating task execution and generating a result
        result = random.choice(["ok", "fail", "repeat"])
        print("Task result:", result)
        self.dispatch('on_process_task_result', result=result)
</code></pre>
<p>In general I think a custom observer implementation is overkill. Current implementation with polling is a mess. That is the reason of reworking.</p>
<p>Can someone provide an example how to implement it within Kivy?</p>
|
<python><events><design-patterns><kivy>
|
2024-04-11 18:53:42
| 1
| 507
|
mister_kanister
|
78,312,502
| 15,524,510
|
pyGad RAM usage
|
<p>I am using pygad for a neural network and I am wondering why it uses more RAM after each generation. The issue is that when I leave it to run, it starts to use more and more RAM, then RAM usage spikes suddenly and causes a crash.</p>
<p>Here is my code to reproduce:</p>
<pre><code>import datetime

import pygad
import pygad.kerasga
import pandas as pd
import numpy as np
import tensorflow as tf

train = np.random.rand(1000000, 50)
label = np.random.rand(1000000, 1)

def fitness_func(ga_instance, solution, solution_idx):
    global train, dataset, model
    #for batch in dataset:
    #    train['pred'] = pygad.kerasga.predict(model=model, solution=solution, data=batch, verbose=1, batch_size=2**13)
    t0 = datetime.datetime.now()
    preds = pygad.kerasga.predict(model=model, solution=solution, data=train, verbose=0, batch_size=2**13)
    t1 = datetime.datetime.now()
    print(t1 - t0)
    scores = label[preds>0.7].mean() - label[preds<0.3].mean()
    score = scores.mean()
    return score

def on_generation(ga_instance):
    print(ga_instance.generations_completed, f"Fitness = {ga_instance.best_solution()[1]}")

input_layer = tf.keras.layers.Input(shape = (train.shape[1]))
dense_layer1 = tf.keras.layers.Dense(units = train.shape[1], activation = tf.keras.layers.LeakyReLU(alpha=0.01))(input_layer)
output_layer = tf.keras.layers.Dense(units = 1)(dense_layer1)
model = tf.keras.Model(inputs=input_layer, outputs=output_layer)

keras_ga = pygad.kerasga.KerasGA(model=model, num_solutions=5)
ga_instance = pygad.GA(num_generations = 50, num_parents_mating = 2, initial_population=keras_ga.population_weights, fitness_func = fitness_func,\
                       on_generation=on_generation)
ga_instance.run()
</code></pre>
<p>Even with this small dataset the RAM steadily increases. Is there a setting that I can use to prevent this?</p>
|
<python><tensorflow><keras><pygad>
|
2024-04-11 18:49:46
| 1
| 363
|
helloimgeorgia
|
78,312,416
| 740,067
|
Why pyodbc connection.close hangs for 10 minute after timeout error on Azure-sql-database?
|
<p>I'm executing the following code. The code intentionally contains a query that takes longer than the <code>query timeout</code>.</p>
<pre class="lang-py prettyprint-override"><code>import pyodbc
TIMEOUT_SEC = 1
QUERY_DELAY_SEC = TIMEOUT_SEC + 1
DSN = ".."
QUERY = f"WAITFOR DELAY '00:00:{QUERY_DELAY_SEC:02d}';SELECT 1;"
conn = pyodbc.connect(DSN)
conn.timeout = TIMEOUT_SEC
cursor = conn.cursor()

result = None
try:
    result = cursor.execute(QUERY).fetchall()
    cursor.close()
except Exception as error:
    print(error)
finally:
    # if timeout exception happens this close gets a ~10 min. wait
    # if query succeed (e.g. with removed `WAITFOR DELAY`), close is instant
    conn.close()
print(f"{result=}")
</code></pre>
<p>I expect it</p>
<ol>
<li>raises the timeout error</li>
<li>prints the error</li>
<li>exits the script</li>
</ol>
<p>While instead it</p>
<ol>
<li>raises the timeout error</li>
<li>prints the error</li>
<li>waits <code>10</code> minutes</li>
<li>exits the script</li>
</ol>
<p>Can the expected behavior be forced somehow?</p>
|
<python><azure-sql-database><pyodbc><wait><freeze>
|
2024-04-11 18:32:07
| 1
| 5,669
|
xliiv
|
78,312,309
| 23,260,297
|
create new dataframe after performing calculations from groupby
|
<p>I have a dataframe that looks like this:</p>
<pre><code>ID TradeDate party Deal Asset Start Expire Fixed Quantity MTM Float
1 04/11/2024 party1 Sell HO 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
1 04/11/2024 party1 Sell HO 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
1 04/11/2024 party1 Sell HO 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
1 04/11/2024 party1 Buy HO 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
1 04/11/2024 party1 Buy HO 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
1 04/11/2024 party1 Sell WTI 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
1 04/11/2024 party1 Sell WTI 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
1 04/11/2024 party1 Buy WTI 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
</code></pre>
<p>I group the data by Start, Asset, and Deal and perform calculations to transform the dataframe into this:</p>
<pre><code>groups = df.groupby(['Start', 'Asset', 'Deal'])
ID TradeDate party Deal Asset Start Expire Fixed Quantity MTM Float
1 04/11/2024 party1 Sell HO 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
1 04/11/2024 party1 Sell HO 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
1 04/11/2024 party1 Sell HO 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
total 3000 7500.00
ID TradeDate party Deal Asset Start Expire Fixed Quantity MTM Float
1 04/11/2024 party1 Buy HO 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
1 04/11/2024 party1 Buy HO 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
total 3000 5000.00
ID TradeDate party Deal Asset Start Expire Fixed Quantity MTM Float
1 04/11/2024 party1 Sell WTI 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
1 04/11/2024 party1 Sell WTI 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
total 3000 5000.00
ID TradeDate party Deal Asset Start Expire Fixed Quantity MTM Float
1 04/11/2024 party1 Buy WTI 01/01/2024 02/01/2024 10.00 1000 2500.00 10.00
total 1000 2500.00
</code></pre>
<p>My objective is to transform these groups another time so that I can output only the data I need. The expected output for this step should look something like this:</p>
<pre><code>party Deal Asset Start MTM Float
party1 Sell HO 01/01/2024 7500.00 10.00
party1 Buy HO 01/01/2024 5000.00 10.00
party1 Sell WTI 01/01/2024 5000.00 10.00
party1   Buy    WTI     01/01/2024  2500.00  10.00
</code></pre>
<p>Do I need to perform another groupby of some sort? or is there another function that could achieve this? Any suggestions would help.</p>
<p>Note: in the second step, those are individual dataframes that come from a list. You may need an intermediate step to concat these together and then get to the final output.</p>
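A sketch of one way to skip the intermediate group dumps entirely and aggregate in one pass (assuming pandas; the toy frame mirrors the question's rows):

```python
import pandas as pd

# Toy frame mirroring the question's shape (values are illustrative).
df = pd.DataFrame({
    "party": ["party1"] * 8,
    "Deal":  ["Sell", "Sell", "Sell", "Buy", "Buy", "Sell", "Sell", "Buy"],
    "Asset": ["HO"] * 5 + ["WTI"] * 3,
    "Start": ["01/01/2024"] * 8,
    "MTM":   [2500.0] * 8,
    "Float": [10.0] * 8,
})

# One row per (party, Deal, Asset, Start): sum MTM, keep the Float.
out = (
    df.groupby(["party", "Deal", "Asset", "Start"], as_index=False)
      .agg(MTM=("MTM", "sum"), Float=("Float", "first"))
)
```

This avoids building the list of per-group dataframes and a second concat step.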
|
<python><pandas>
|
2024-04-11 18:10:30
| 1
| 2,185
|
iBeMeltin
|
78,312,262
| 14,179,793
|
AWS Lambda with Docker Image: Runtime.InvalidEntrypoint
|
<p>I have a Dockerfile that uses a base image from a third party that uses <code>FROM rockylinux:8</code> as its base image and the repo currently produces an image that is used in ECS. I am trying to make this container runnable on AWS lambda environment while also maintaining the ECS image functionality.</p>
<p>Reviewing the requirements (<a href="https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-reqs" rel="nofollow noreferrer">https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-reqs</a>), it looks like I need to install the Lambda runtime API for python in the image I am creating and then update the <code>ENTRYPOINT</code>. I used the example Dockerfile under "creating an image from an alternative base image" (<a href="https://docs.aws.amazon.com/lambda/latest/dg/python-image.html#python-image-clients" rel="nofollow noreferrer">https://docs.aws.amazon.com/lambda/latest/dg/python-image.html#python-image-clients</a>) as a reference and modified the Dockerfile. Below is a reduced version that anyone should be able to use:</p>
<pre><code>FROM rockylinux:8.9
RUN yum -y update && \
    yum -y upgrade

HEALTHCHECK NONE

RUN yum install -y nano && \
    yum install -y wget
RUN adduser worker
USER worker
ARG HOME='/home/worker'
WORKDIR $HOME
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-py310_23.10.0-1-Linux-x86_64.sh -O miniconda.sh && \
    bash miniconda.sh -b -p ${HOME}/miniconda && \
    rm miniconda.sh && \
    source ${HOME}/miniconda/bin/activate && \
    conda init --all
ENV PATH="${HOME}/miniconda/bin:${PATH}"
ARG BUILD=${HOME}/build
RUN mkdir ${BUILD}
COPY lambda_function.py ${BUILD}
RUN pip install --target ${BUILD} awslambdaric
ENTRYPOINT ["/home/worker/miniconda/bin/python3.10", "-m", "awslambdaric"]
CMD [ "lambda_function.handler" ]
</code></pre>
<p>But I'm getting the following error while testing on Lambda function</p>
<pre><code>{
"errorType": "Runtime.InvalidEntrypoint",
"errorMessage": "RequestId: d8919b62-472d-46c9-acf3-2b5ddaf49264 Error: fork/exec /home/worker/miniconda3/bin/python3.10: permission denied"
}
</code></pre>
<p>When I run the container locally the permissions look as follows:</p>
<pre><code>[worker@7525b61497ad bin]$ ls -al python3.10
-rwxrwxr-x 1 worker worker 17157264 Apr 10 19:14 python3.10
</code></pre>
<p>And I am able to run python:</p>
<pre><code>[worker@7525b61497ad bin]$ python
Python 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()
</code></pre>
<p>Sample project structure:</p>
<pre><code>test
βββ Dockerfile
βββ lambda_function.py
</code></pre>
<p>lambda_function.py contents:</p>
<pre><code>import sys
def handler(event, context):
return 'Hello from AWS Lambda using Python' + sys.version + '!'
</code></pre>
<p>Update: Removing <code>RUN adduser worker</code> and <code>USER worker</code> resolves the issue, but I do not know why at the moment.</p>
|
<python><amazon-web-services><docker><aws-lambda>
|
2024-04-11 18:01:44
| 0
| 898
|
Cogito Ergo Sum
|
78,312,157
| 984,003
|
Get image from data looking like \x00\x00\x87
|
<p>How do I create/get/open the image from a URL request when the response looks like this:</p>
<pre class="lang-none prettyprint-override"><code>\x00\x00\x00 ftypavif\x00\x00\x00\ ... \x87"y~\x13 $%\\\xad ... xb5\xa07tR\x80
</code></pre>
<p>Much longer, of course.</p>
<p>An example of such a URL is <a href="https://media-cldnry.s-nbcnews.com/image/upload/t_focal-860x484,f_avif,q_auto:eco,dpr_2/MSNBC/Components/Video/201610/tdy_food_brisket_161021__142565.jpg" rel="nofollow noreferrer">here</a> (not my website).</p>
<p>Normally I do it like this:</p>
<pre><code>import requests
import urllib, cStringIO, PIL, io
from PIL import Image
ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"
headers = {'User-agent': ua, 'Accept-Encoding':'gzip,deflate,sdch', 'Accept':'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'}
# below url returns regular image and code works
imgurl = "https://media-cldnry.s-nbcnews.com/image/upload/t_focal-860x484,f_auto,q_auto:best/MSNBC/Components/Video/201610/tdy_food_brisket_161021__142565.jpg"
# below url does NOT work.
imgurl = "https://media-cldnry.s-nbcnews.com/image/upload/t_focal-860x484,f_avif,q_auto:eco,dpr_2/MSNBC/Components/Video/201610/tdy_food_brisket_161021__142565.jpg"
r = requests.get(imgurl, headers=None, verify=False,timeout=3)
image = Image.open(cStringIO.StringIO(r.content))
</code></pre>
<p>The error for the bad URL is "IOError: cannot identify image file ..."</p>
<p>I also tried</p>
<pre><code>image = Image.open(io.BytesIO(r.content))
</code></pre>
<p>and various versions of</p>
<pre><code>data = bytes.fromhex(r.content)
</code></pre>
<pre><code>import binascii
data = binascii.a2b_hex(r.content)
</code></pre>
<p>They all return various errors.</p>
<p>Note that I am still using Python 2.7 with PIL (Pillow).</p>
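For what it's worth, the leading bytes in the dump identify the format: bytes 4–12 of the response are the ISO-BMFF <code>ftyp</code> box type plus the <code>avif</code> major brand, a format the classic PIL shipped for Python 2.7 has no decoder for. A pure-stdlib sketch of that check:

```python
# Illustrative prefix modeled on the dump in the question; real responses
# continue with the rest of the AVIF container after these bytes.
data = b'\x00\x00\x00 ftypavif\x00\x00\x00\x00miaf'

# In an ISO-BMFF file, bytes 4-8 are the box type and 8-12 the major brand.
if data[4:12] == b'ftypavif':
    kind = 'avif'   # needs an AVIF-capable decoder, not classic PIL
else:
    kind = 'unknown'
```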
|
<python><python-2.7><python-imaging-library>
|
2024-04-11 17:39:05
| 2
| 29,851
|
user984003
|
78,312,145
| 1,046,013
|
Loop in a Django template simple_tag returning a dictionary
|
<p>When using a simple template tag that returns a dictionary:</p>
<pre><code>@register.simple_tag
def get_types():
return {
"item1": "Foo",
"item2": "Bar",
}
</code></pre>
<p>This doesn't print any column:</p>
<pre><code>{% for type in get_types.values %}
<th>{{ type }}</th>
{% endfor %}
</code></pre>
<p>While this does:</p>
<pre><code>{% get_types as types %}
{% for type in types.values %}
<th>{{ type }}</th>
{% endfor %}
</code></pre>
<p>Is there a way to make it work without having to put the temporary variable <code>{% get_types as types %}</code>? Most questions I found on StackOverflow or in forums are from 2018 or older. I'm wondering if there is a cleaner way to do this 6 years later as I'm not a fan of temporary variables.</p>
|
<python><django><django-templates><django-template-filters>
|
2024-04-11 17:37:26
| 1
| 3,866
|
NaturalBornCamper
|
78,312,080
| 158,466
|
SQL Server Table-Valued Parameter and Python
|
<p>I am writing a simple ETL. The source is a spreadsheet. The destination is a SQL Server table. The data are integers and strings.
I would like to pass all of the data at once to SQL Server through a stored procedure rather than a row at a time.
I learned about table-valued parameters, and created a type and a stored procedure.</p>
<pre class="lang-sql prettyprint-override"><code>CREATE TYPE dbo.apr_house_part_row
AS TABLE
(
-- These columns match the apr_house_part table (order, name and type)
[abc_id] [int] NOT NULL,
[def_id] [int] NOT NULL,
[ghi_nm] [nvarchar](50) NOT NULL,
[jkl_nm] [nvarchar](50) NOT NULL
)
CREATE PROCEDURE dbo.usp_load_apr_house_part(@TVP apr_house_part_row READONLY)
AS
SET NOCOUNT ON;
INSERT INTO apr_house_part
SELECT *
FROM @TVP;
</code></pre>
<p>I am using Python to read the spreadsheet and interact with the server. Below is an abridged version of the code that demonstrates the problem. I have tried using pyodbc and pytds. I don't show the connection code below, but I have tried with both, and confirmed that other operations, e.g. executing SELECTs, work fine.</p>
<p>Pyodbc and pytds seem to be raising an exception because I am passing all of the data at once. There are 2526 rows, which is where <em>that</em> number comes from. I show the run-time exceptions that are raised in the code below; pyodbc raises a ProgrammingError and pytds raises a TypeError. I have looked into the pyodbc message, but the solutions to other questions about it don't seem to apply.</p>
<pre class="lang-python prettyprint-override"><code># read the spreadsheet
hp = openpyxl.load_workbook("excel_file.xlsx")
sh = hp.worksheets[0]
rows = list(sh.rows)
# pull out the header
header = [_.value for _ in rows.pop(0)]
# make a list of lists
values = list(list(col.value or '' for col in row) for row in rows)
hp.close()
# Try it with pyodbc.
conn = db.odbc() # creates and logs in with a pyodbc.Connection
res = conn.execute("exec usp_load_apr_house_part ?", values)
# pyodbc.ProgrammingError: ('The SQL contains 1 parameter markers, but 2526 parameters were supplied', 'HY000')
# Try it with pytds.
tds = db.tds() # creates and logs in with a pytds.Connection
cur = tds.cursor()
res = cur.execute("exec usp_load_apr_house_part ?", values)
# TypeError: not all arguments converted during string formatting
</code></pre>
<p>I hope I am missing something "obvious".
What am I missing, or where should I look for help?</p>
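For reference, pyodbc's convention for table-valued parameters, as I understand it (a sketch, not run against a live server): the whole table binds to a single <code>?</code> marker, so the list of row tuples must be wrapped in one more list. Passing the rows directly makes pyodbc treat each row as its own parameter, which matches the "2526 parameters were supplied" message.

```python
# rows stands in for the spreadsheet data: one tuple per table row,
# matching the column order of the apr_house_part_row type.
rows = [(1, 10, 'ghi', 'jkl'), (2, 20, 'mno', 'pqr')]

# ONE parameter (the TVP) for the single '?' marker.
params = [rows]
# cursor.execute("{CALL usp_load_apr_house_part (?)}", params)  # not run here
```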
|
<python><sql-server><stored-procedures><table-valued-parameters>
|
2024-04-11 17:25:59
| 0
| 1,791
|
Daniel 'Dang' Griffith
|
78,312,055
| 5,790,653
|
How to write to different files based on the values in a list of dicts, combining three lists of dicts in a for loop
|
<p>First of all, I'm sorry if the question is too long. I tried to include as little data as possible, but this was the least I could manage.</p>
<p>These are the lists I have:</p>
<pre class="lang-py prettyprint-override"><code>products_income = [
{'client_group': 'A', 'count': 2, 'product': 'A', 'monthly_income': 370},
{'client_group': 'B', 'count': 1, 'product': 'B', 'monthly_income': 215},
{'client_group': 'C', 'count': 3, 'product': 'C', 'monthly_income': 495},
{'client_group': 'A', 'count': 2, 'product': 'D', 'monthly_income': 304},
{'client_group': 'B', 'count': 1, 'product': 'E', 'monthly_income': 110},
{'client_group': 'B', 'count': 2, 'product': 'F', 'monthly_income': 560},
{'client_group': 'A', 'count': 1, 'product': 'G', 'monthly_income': 196},
]
client_package = [
{'client_group': 'A', 'total_income': 870},
{'client_group': 'B', 'total_income': 885},
{'client_group': 'C', 'total_income': 495}
]
client_group_user_counts = {
'A': ['user1', 'user2', 'user3', 'user4', 'user5'], # 5 users
'B': ['user21', 'user22', 'user23', 'user24'], # 4 users
'C': ['user41', 'user42', 'user43'], # 3 users
}
</code></pre>
<p>This is the output of my shop; the expected result is to write it to separate files, each named <code>client_group</code>.txt:</p>
<p><code>A.txt</code>:</p>
<pre><code>group A has 2 product A, monthly income is 370.
group A has 2 product D, monthly income is 304.
group A has 1 product G, monthly income is 196.
group A total income is 870.
group A has total 5 users, and this is its users:
user1
user2
user3
user4
user5
</code></pre>
<p><code>B.txt</code>:</p>
<pre><code>group B has 1 product B, monthly income is 215.
group B has 1 product E, monthly income is 110.
group B has 2 product F, monthly income is 560.
group B total income is 885.
group B has total 4 users, and this is its users
user21
user22
user23
user24
</code></pre>
<p><code>C.txt</code>:</p>
<pre><code>group C has 3 product C, monthly income is 495.
group C total income is 495.
group C has total 3 users, and this is its users:
user41
user42
user43
</code></pre>
<p>This is my current code; I haven't reached the expected output yet (in fact, I currently don't know how to write to separate files in this case, besides the other issues in my code):</p>
<pre class="lang-py prettyprint-override"><code># I added this function because I got an error in the last line of the next code block, inside the f-string, so I thought this could be a workaround
def to_join():
return '\n'.join(client_group_user_counts[user])
for product in products_income:
for client in client_package:
for user in client_group_user_counts:
if product['client_group'] == client['client_group'] == user:
print(f"""group {product['client_group']} has {product['count']} product {product['product']}, monthly income is {product['monthly_income']}.
total income is {client['total_income']}.
group {product['client_group']} has total {len(client_group_user_counts[user])} users, and this is its users:
{to_join()}
""")
</code></pre>
<p>This is current output:</p>
<pre><code>group A has 2 product A, monthly income is 370.
group A total income is 870.
group A has total 5 users, and this is its users:
user1
user2
user3
user4
user5
group B has 1 product B, monthly income is 215.
group B total income is 885.
group B has total 4 users, and this is its users:
user21
user22
user23
user24
group C has 3 product C, monthly income is 495.
group C total income is 495.
group C has total 3 users, and this is its users:
user41
user42
user43
group A has 2 product D, monthly income is 304.
group A total income is 870.
group A has total 5 users, and this is its users:
user1
user2
user3
user4
user5
group B has 1 product E, monthly income is 110.
group B total income is 885.
group B has total 4 users, and this is its users:
user21
user22
user23
user24
group B has 2 product F, monthly income is 560.
group B total income is 885.
group B has total 4 users, and this is its users:
user21
user22
user23
user24
group A has 1 product G, monthly income is 196.
group A total income is 870.
group A has total 5 users, and this is its users:
user1
user2
user3
user4
user5
</code></pre>
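For the file-per-group part, one possible shape (a sketch; the helper name `build_reports` is mine): collect the product lines per group first, append the totals and user list once per group, then write each finished report to a file named after the group.

```python
from collections import defaultdict

def build_reports(products_income, client_package, client_group_user_counts):
    # Collect product lines per group first, so totals and users are
    # appended once per group instead of once per product.
    lines = defaultdict(list)
    for p in products_income:
        g = p['client_group']
        lines[g].append(
            f"group {g} has {p['count']} product {p['product']}, "
            f"monthly income is {p['monthly_income']}."
        )
    totals = {c['client_group']: c['total_income'] for c in client_package}
    reports = {}
    for g, body in lines.items():
        users = client_group_user_counts[g]
        body.append(f"group {g} total income is {totals[g]}.")
        body.append(f"group {g} has total {len(users)} users, and this is its users:")
        body.extend(users)
        reports[g] = "\n".join(body)
    return reports

# Writing each report to its own file:
# for g, text in build_reports(products_income, client_package,
#                              client_group_user_counts).items():
#     with open(f"{g}.txt", "w") as f:
#         f.write(text)
```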
|
<python>
|
2024-04-11 17:19:58
| 4
| 4,175
|
Saeed
|
78,311,944
| 4,342,353
|
How to ensure that escapes for the double quotes `'\"'` inside the 'user content' of training datasets are not removed?
|
<h2>1. Objective</h2>
<p>The objective is to ensure the training data keeps the needed format for a model training.</p>
<p>I am using the <a href="https://huggingface.co/docs/trl/sft_trainer" rel="nofollow noreferrer"><code>SFTTrainer</code></a> for model training. The <a href="https://huggingface.co/docs/trl/sft_trainer" rel="nofollow noreferrer"><code>SFTTrainer</code></a> has a parameter <code>train_dataset=dataset</code> that expects a <code>dict</code> as input for the training.</p>
<p>Before it is converted to a dict, the training data is in <code>JSON</code> format. It includes a <code>JSON array</code> in which one <code>JSON</code> key/value pair contains <code>markup language</code> text, and this <code>markup language text</code> in turn contains text that represents <code>JSON</code>.</p>
<p>Here is an example of the data input format before it is loaded with the <a href="https://github.com/huggingface/datasets" rel="nofollow noreferrer">datasets</a> library, which converts the <code>complex JSON</code> to a <code>dict</code>.</p>
<pre class="lang-json prettyprint-override"><code>{"messages": [{"role": "system", "content": "my instructions"}, {"role": "user", "content": "my question"}, {"role": "assistant", "content": "```json\n{\"key\":\"mykey\",\"value\":\"myvalue\"}"\n```}"]}
</code></pre>
<p>During the conversion to a dict, the escapes are removed from the original data, which later causes the model's answers not to be in the expected format, as you can see in the situation section below.</p>
<pre class="lang-json prettyprint-override"><code>{'messages': [{'role': 'system', 'content': 'my instructions'},
{'role': 'user', 'content': 'my question'},
{'role': 'assistant', 'content': '```json\n{"key": "mykey", "value": "myvalue" }"\n```}']
</code></pre>
<h2>2. Situation</h2>
<p>I noticed this behaviour after fine-tuning a model with the training data: the fine-tuned model's answer to a question was not in the right format inside a complex answer JSON.</p>
<p>In the model's answers later, if the nested JSON text is not escaped with <code>"\""</code>, the answer JSON can't be parsed, and additional effort is needed to extract the content of the value for <code>"key_nested_json_as_text":"value"</code>.</p>
<p>Answer format of the model later should be like this:</p>
<pre class="lang-json prettyprint-override"><code>{
"key_one" : "value_one",
"key_nested_json_as_text" : "```json\n{\"key\":\"mykey\",\"value\":\"myvalue\"}\n```"
}
</code></pre>
<p>This format can't be parsed as JSON:</p>
<pre class="lang-json prettyprint-override"><code>{
"key_one" : "value_one",
"key_nested_json_as_text" : "```json\n{"key":"mykey","value":"myvalue"}\n```"
}
</code></pre>
<p>The <code>"key_nested_json_as_text":"value"</code> pair causes the problem later:</p>
<pre><code>```json\n{\"key\":\"mykey\",\"value\":\"myvalue\"}"\n```
</code></pre>
<h2>3. To Reproduce problem situation</h2>
<p>The <em>load_dataset</em> function from <em>huggingface datasets</em> removes the escapes before the double quotes <em>"</em> inside the <em>user content</em> of the training datasets; I want the datasets library to leave them in place.</p>
<ul>
<li>Here is an example of the training data input format for the json file:</li>
</ul>
<pre class="lang-json prettyprint-override"><code>"{\"messages": [{"role": "system", "content": "my instructions"}, {"role": "user", "content": "my question"}, {"role": "assistant", "content": "```json\n{\"key\":\"mykey\",\"value\":\"myvalue\"}"\n```}]}"
</code></pre>
<ul>
<li>This is how I use the <code>load_dataset</code> function:</li>
</ul>
<pre><code>load_dataset('json', data_files='my_input_file.json', field='messages', split="train")
</code></pre>
<ul>
<li>That is how I display the result using print of the content:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>for data in train_dataset:
print(f"\n{data}")
</code></pre>
<p>The resulting format in datasets output is:</p>
<pre><code>{'messages': [{'role': 'system', 'content': 'my instructions'}, {'role": 'user', 'content': 'my question'}, {'role": 'assistant', 'content': '```json\n{"key": "mykey", "value": "myvalue"}\n```'}]}
</code></pre>
<p>But I like to ensure that the escapes for the double quotes '"' inside the user content will not be removed by the datasets library.</p>
<ul>
<li>I want to have this format for the training</li>
</ul>
<pre><code>'```json\n{\"key\": \"mykey\", \"value\": \"myvalue\"}\n```'
</code></pre>
<ul>
<li>and not this is used.</li>
</ul>
<pre><code>'```json\n{"key": "mykey", "value": "myvalue"}\n```'
</code></pre>
<p>Has anyone been in the same situation and found a solution? That would be awesome.</p>
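For what it's worth, the "removed" escapes are usually just JSON encoding being undone at parse time; serializing the dict again restores them, so nothing is lost. A stdlib sketch (the sample string below is a simplified stand-in for one message):

```python
import json

# One message's worth of the file, as raw JSON text.
raw = '{"content": "```json\\n{\\"key\\": \\"mykey\\"}\\n```"}'
obj = json.loads(raw)

# The backslashes were JSON *encoding*, not data: after parsing, the
# string holds real quotes and a real newline.
decoded = obj["content"]

# Serializing again reproduces the escaped form.
round_tripped = json.dumps(obj)
```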
|
<python><huggingface-datasets>
|
2024-04-11 16:57:15
| 1
| 471
|
Thomas Suedbroecker
|
78,311,930
| 3,621,575
|
How can I create a list based on values from all columns?
|
<p>I would like to create lists, such as in <code>df2</code>, from data such as in <code>df1</code>. Most of the help I read online is about the opposite, how to dissect lists. In the end, I would like this list to be accessible as a new column in the original <code>df1</code>.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df1 = pd.DataFrame(list(zip([1,2,3], [4,5,6], [7,8,9])),
columns=['numbers', 'numbers2', 'numbers3'])
df2 = pd.DataFrame(list(zip([[1,4,7], [2,5,8], [3,6,9]])),
columns=['list_of_numbers'])
</code></pre>
<p>Thanks for taking a look. Please let me know if you have any questions if my example is not clear.</p>
<p><a href="https://i.sstatic.net/XlPZn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XlPZn.png" alt="enter image description here" /></a></p>
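One possible approach (a sketch with pandas): select the columns and call <code>.values.tolist()</code>, which yields one plain Python list per row, then assign the result back as a new column.

```python
import pandas as pd

df1 = pd.DataFrame({'numbers': [1, 2, 3],
                    'numbers2': [4, 5, 6],
                    'numbers3': [7, 8, 9]})

# .values.tolist() turns each row of the selection into a Python list.
df1['list_of_numbers'] = df1[['numbers', 'numbers2', 'numbers3']].values.tolist()
```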
|
<python><pandas>
|
2024-04-11 16:54:25
| 2
| 581
|
windyvation
|
78,311,910
| 7,800,760
|
Unable to locally save some models locally with SentenceTransformers
|
<p>I have tried many times to save models locally with SentenceTransformers, having stripped down my code to the following:</p>
<pre><code>from sentence_transformers import SentenceTransformer
modelPath = "/Users/bob/.cache"
model = SentenceTransformer("Salesforce/SFR-Embedding-Mistral")
model.save(modelPath)
</code></pre>
<p>it always fails with timeouts(?) while downloading the model, not always at the same exact spot. Here is an example:</p>
<pre><code>llm-fast-py3.11bob /Volumes/2TBWDB/code/llm_fast [main] $ /Volumes/2TBWDB/code/llm_fast/.venv/bin/python /Volumes/2TBWDB/code/llm_fast/llm_fast/prove/scarica
.py
modules.json: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 229/229 [00:00<00:00, 413kB/s]
config_sentence_transformers.json: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 123/123 [00:00<00:00, 288kB/s]
README.md: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 85.3k/85.3k [00:00<00:00, 756kB/s]
sentence_bert_config.json: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 54.0/54.0 [00:00<00:00, 187kB/s]
config.json: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 663/663 [00:00<00:00, 1.99MB/s]
model.safetensors.index.json: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 22.2k/22.2k [00:00<00:00, 29.4MB/s]
model-00001-of-00003.safetensors: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 4.94G/4.94G [00:46<00:00, 106MB/s]
model-00002-of-00003.safetensors: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 5.00G/5.00G [00:46<00:00, 107MB/s]
model-00003-of-00003.safetensors: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 4.28G/4.28G [00:40<00:00, 107MB/s]
Downloading shards: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3/3 [02:14<00:00, 44.72s/it]
Loading checkpoint shards: 33%|ββββββββββββββββββββββββββββββββ | 1/3 [00:17<00:35, 17.53s/it]zsh: killed /Volumes/2TBWDB/code/llm_fast/.venv/bin/python
/Users/bob/.pyenv/versions/3.11.7/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
</code></pre>
<p>Same happens with other largish models such as intfloat/e5-mistral-7b-instruct</p>
<p>I have a good 200MBit fiber connection. Can I do anything else to save these models locally?</p>
<p>Python 3.11.7 - sentence-transformers 2.6.1 - Mac M1 16GB with Sonoma 14.4.1 - 55GB free disk space</p>
|
<python><nlp><huggingface-transformers><large-language-model><sentence-transformers>
|
2024-04-11 16:50:18
| 0
| 1,231
|
Robert Alexander
|
78,311,650
| 2,675,981
|
Jira Python get description from SUB-Task
|
<p>I am trying to get the description from a subtask in Jira. I am able to get all the information from the main task: summary, status, description, subtasks, etc... but when I pull data for a specific subtask, I am missing the description field.</p>
<pre><code>ticket_data = jira.issue("PROJ-12345")
for subtask in ticket_data.fields.subtasks:
    if "generate data" in (subtask.fields.summary).lower():
        st_desc = subtask.fields.description  # ERROR
        print(subtask.raw['fields'])  # Description is not a field
</code></pre>
<p>I thought that maybe I should try to pull only the subtask; it makes sense to me that the main task would have limited information regarding its subtasks. But when I pull the data for the subtask directly, I get the same exact information as from the main task and the description field is missing. I even tried to define that I wanted ONLY the description field.</p>
<pre><code>subtask_data = jira.issue(subtask_key, fields='description')
print(subtask_data.raw['fields']) # Returns same data - Description is not a field
</code></pre>
<p>I am able to create subtasks with the description without issue.</p>
<pre><code>name: "subtask_1"
project: "PROJ"
summary: "Generate data for the given task"
assignee: "myself@email.com"
description: "All the data I need for the description entered here"
issuetype: "Sub-task"
parent: "PROJ-12345"
</code></pre>
<p>I have found several posts related to getting data from subtasks or how to create subtasks, I have even found a post on Atlassian where it appears to be a similar issue, but it's unanswered. So I thought I would ask the brilliant users of Stack Overflow for some help. :) What am I missing?</p>
<p>Thank you in advance!</p>
|
<python><jira>
|
2024-04-11 15:58:17
| 3
| 885
|
Apolymoxic
|
78,311,523
| 2,153,235
|
Invoke nonexistent __new__ method for abstract class: What happens?
|
<p>I was trying to decipher the following code, executed in a Conda environment for Python 3.9:</p>
<pre><code>from datetime import datetime, timezone
# <...snip...>
current_file_start_time = datetime.now(timezone.utc)
</code></pre>
<p><a href="https://github.com/python/cpython/blob/3.9/Lib/datetime.py" rel="nofollow noreferrer">Here is the source code for <code>datetime.py</code></a> that I found online. The <code>timezone</code> class starts as follows:</p>
<pre><code>class timezone(tzinfo):
__slots__ = '_offset', '_name'
# Sentinel value to disallow None
_Omitted = object()
def __new__(cls, offset, name=_Omitted):
if not isinstance(offset, timedelta):
raise TypeError("offset must be a timedelta")
if name is cls._Omitted:
if not offset:
return cls.utc
name = None
elif not isinstance(name, str):
raise TypeError("name must be a string")
if not cls._minoffset <= offset <= cls._maxoffset:
raise ValueError("offset must be a timedelta "
"strictly between -timedelta(hours=24) and "
"timedelta(hours=24).")
return cls._create(offset, name)
@classmethod
def _create(cls, offset, name=None):
self = tzinfo.__new__(cls)
self._offset = offset
self._name = name
return self
# <...snip...>
timezone.utc = timezone._create(timedelta(0))
</code></pre>
<p>The <code>__new__</code> method invokes <code>_create</code>, which in turn invokes <code>tzinfo.__new__</code>. Fortunately, the <code>tzinfo</code> class is defined in the same <code>datetime.py</code>, but it is self-documented as an abstract base class and doesn't have a <code>__new__</code> method.</p>
<ul>
<li><p>Does the <code>tzinfo</code> not have a <code>__new__</code> method <em>because</em> it is an abstract class and (I presume) can't be instantiated?</p>
</li>
<li><p>What happens when <code>tzinfo.__new__</code> is called, as above?</p>
</li>
</ul>
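A minimal pure-Python illustration of the lookup (the names `Base`/`Sub` are mine): a class that defines no `__new__` of its own simply inherits `object.__new__` through the MRO, which is what the pure-Python `tzinfo` shown in `datetime.py` relies on.

```python
class Base:                  # defines no __new__, like pure-Python tzinfo
    pass

class Sub(Base):
    def __new__(cls):
        # Base.__new__ resolves through the MRO to object.__new__,
        # which allocates a fresh, uninitialized instance of cls.
        return Base.__new__(cls)

obj = Sub()
```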
|
<python>
|
2024-04-11 15:36:16
| 1
| 1,265
|
user2153235
|
78,311,513
| 2,715,191
|
Train neural network for Absolute function with minimum Layers
|
<p>I'm trying to train a neural network to learn the function y = |x|. As we know, the absolute-value function consists of two lines joining at zero. So I'm trying the following Sequential model:</p>
<p>Hidden layer:
1 Dense layer with 2 units (ReLU activation)
Output layer:
1 Dense layer with 1 unit</p>
<p>After training, the model only fits one half of the function. Most of the time it is the right-hand side, sometimes the left. As soon as I add one more unit to the hidden layer, so instead of 2 I have 3, it fits the function perfectly. Can anyone explain why an extra unit is needed when the absolute-value function has only one kink?</p>
<p>Here is the code:</p>
<pre><code>import numpy as np
X = np.linspace(-1000,1000,400)
np.random.shuffle(X)
Y = np.abs(X)
# Reshape data to fit the model input
X = X.reshape(-1, 1)
Y = Y.reshape(-1, 1)
import tensorflow as tf
import matplotlib.pyplot as plt
# Build the model
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(2, activation='relu'),
tf.keras.layers.Dense(1)
])
# Compile the model
model.compile(optimizer='adam', loss='mse',metrics=['mae'])
model.fit(X, Y, epochs=1000)
# Predict using the model
Y_pred = model.predict(X)
# Plot the results
plt.scatter(X, Y, color='blue', label='Actual')
plt.scatter(X, Y_pred, color='red', label='Predicted')
plt.title('Actual vs Predicted')
plt.xlabel('X')
plt.ylabel('Y')
plt.legend()
plt.show()
</code></pre>
<p>Plot for 2 Dense Layer:</p>
<p><a href="https://i.sstatic.net/42zxJ.png" rel="noreferrer"><img src="https://i.sstatic.net/42zxJ.png" alt="enter image description here" /></a></p>
<p>Plot for 3 Dense Layer:
<a href="https://i.sstatic.net/xlZ9Y.png" rel="noreferrer"><img src="https://i.sstatic.net/xlZ9Y.png" alt="enter image description here" /></a></p>
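In principle two ReLU units are exactly enough, since |x| = relu(x) + relu(-x); a numpy sketch of that closed-form solution (so the extra unit presumably gives the optimizer redundancy rather than extra capacity):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

x = np.linspace(-1000, 1000, 9)
# Hidden weights (1,) and (-1,) with zero biases; output weights (1, 1).
hidden = relu(np.stack([x, -x], axis=1))
y_hat = hidden @ np.array([1.0, 1.0])
```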
|
<python><tensorflow><machine-learning><keras><deep-learning>
|
2024-04-11 15:34:01
| 2
| 1,741
|
onik
|
78,311,464
| 5,330,527
|
Use autocomplete drop-down filters in Django Admin back-end for normal fields (not ManyToMany or ForeignKey)
|
<p>Is it possible to use an autocomplete filter <code>class</code> in <code>list_filter</code> for normal fields (i.e. not foreign keys or other relations)?</p>
<p>For example, for this model:</p>
<pre><code>class Project(models.Model):
project_number = models.DecimalField(max_digits=15, decimal_places=0, blank=True, null=True)
</code></pre>
<p>I'd like to have an autocomplete filter in my admin back-end for the <code>project_number</code> field. Would <a href="https://pypi.org/project/django-admin-autocomplete-filter/" rel="nofollow noreferrer">admin_auto_filters</a> be able to do that?</p>
|
<python><django><django-admin-filters>
|
2024-04-11 15:25:08
| 0
| 786
|
HBMCS
|
78,311,389
| 610,569
|
Why would `eval('dict('+s+')')` work but not `literal_eval('dict('+s+')')`?
|
<p>I could do this with <code>eval()</code>:</p>
<pre><code>>>> s = 'hello=True,world="foobar"'
>>> eval('dict('+s+')')
{'hello': True, 'world': 'foobar'}
</code></pre>
<p>but with <code>literal_eval()</code>:</p>
<pre><code>>>> from ast import literal_eval
>>> literal_eval('dict('+s+')')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ast.py", line 110, in literal_eval
return _convert(node_or_string)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ast.py", line 109, in _convert
return _convert_signed_num(node)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ast.py", line 83, in _convert_signed_num
return _convert_num(node)
^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ast.py", line 74, in _convert_num
_raise_malformed_node(node)
File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ast.py", line 71, in _raise_malformed_node
raise ValueError(msg + f': {node!r}')
ValueError: malformed node or string on line 1: <ast.Call object at 0x1014cb940>
</code></pre>
<p><strong>Q (part 1):</strong> Why would <code>eval()</code> work but not <code>literal_eval()</code>?</p>
<p><strong>Q (part 2):</strong> Is there a name for the <code>'+s+'</code> style/format?</p>
<p><strong>Q (part 3):</strong> Is it possible to use something like the <code>+s+</code> with <code>literal_eval()</code>?</p>
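Part 3 is possible without full `eval` by parsing the keyword arguments yourself and `literal_eval`-ing each value individually (a sketch; `literal_eval` rejects `dict(...)` because a function call is not a literal):

```python
import ast

s = 'hello=True,world="foobar"'

# Parse the call without executing it, then literal_eval each keyword value.
call = ast.parse(f'dict({s})', mode='eval').body   # an ast.Call node
d = {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}
```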
|
<python><variables><eval><abstract-syntax-tree>
|
2024-04-11 15:13:48
| 1
| 123,325
|
alvas
|
78,311,338
| 2,263,683
|
GCP Spanner can't set DatabaseDialect while creating a database
|
<p>I have a Python method to create a database in GCP's Spanner, in which I want to set the database dialect to PostgreSQL:</p>
<pre><code>from google.cloud import spanner
from google.cloud.spanner_admin_database_v1.types import spanner_database_admin, DatabaseDialect
def create_database(instance_id, database_id, extra_statements=None, database_dialect=DatabaseDialect.POSTGRESQL.value):
"""Create a new database"""
if extra_statements is None:
extra_statements = []
spanner_client = spanner.Client()
database_admin_api = spanner_client.database_admin_api
request = spanner_database_admin.CreateDatabaseRequest(
parent=database_admin_api.instance_path(
spanner_client.project, instance_id
),
create_statement=f"CREATE DATABASE `{database_id}`",
extra_statements=extra_statements,
database_dialect=database_dialect,
)
operation = database_admin_api.create_database(request=request)
database = operation.result(OPERATION_TIMEOUT_SECONDS)
</code></pre>
<p>But no matter what value I set for <code>database_dialect</code> parameter, I always get this error:</p>
<blockquote>
<p>Traceback (most recent call last): File
"/home/ghasem/dayrize-cloud/.venv/lib/python3.10/site-packages/google/api_core/grpc_helpers.py",
line 76, in error_remapped_callable
return callable_(*args, **kwargs) File "/home/ghasem/dayrize-cloud/.venv/lib/python3.10/site-packages/grpc/_channel.py",
line 1161, in <code>__call__</code>
return _end_unary_response_blocking(state, call, False, None) File
"/home/ghasem/dayrize-cloud/.venv/lib/python3.10/site-packages/grpc/_channel.py",
line 1004, in _end_unary_response_blocking
raise _InactiveRpcError(state) # pytype: disable=not-instantiable grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that
terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Invalid create statement. Database ids should be 2-30 characters long, contain only lowercase letters, numbers,
underscores or hyphens, start with a letter and cannot end with an
underscore or hyphen. Example of valid create statement: CREATE
DATABASE "my-database""
debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.145.95:443 {grpc_message:"Invalid create statement.
Database ids should be 2-30 characters long, contain only lowercase
letters, numbers, underscores or hyphens, start with a letter and
cannot end with an underscore or hyphen. Example of valid create
statement: CREATE DATABASE "my-database"", grpc_status:3,
created_time:"2024-04-11T15:00:12.70067553+00:00"}"</p>
<p>The above exception was the direct cause of the following exception:</p>
<p>Traceback (most recent call last): File "", line 1, in
File
"/home/ghasem/dayrize-cloud/dayrize-backend/src/dayrize_backend/helper/spanner.py",
line 31, in create_database
operation = database_admin_api.create_database(request=request) File
"/home/ghasem/dayrize-cloud/.venv/lib/python3.10/site-packages/google/cloud/spanner_admin_database_v1/services/database_admin/client.py",
line 821, in create_database
response = rpc( File "/home/ghasem/dayrize-cloud/.venv/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py",
line 131, in <strong>call</strong>
return wrapped_func(*args, **kwargs) File "/home/ghasem/dayrize-cloud/.venv/lib/python3.10/site-packages/google/api_core/timeout.py", line 120, in func_with_timeout
return func(*args, **kwargs) File "/home/ghasem/dayrize-cloud/.venv/lib/python3.10/site-packages/google/api_core/grpc_helpers.py",
line 78, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc google.api_core.exceptions.InvalidArgument: 400 Invalid create
statement. Database ids should be 2-30 characters long, contain only
lowercase letters, numbers, underscores or hyphens, start with a
letter and cannot end with an underscore or hyphen. Example of valid
create statement: CREATE DATABASE "my-database" [links {<br />
description: "The rules of Cloud Spanner database IDs." url:
"https://cloud.google.com/spanner/docs/data-definition-language#database-id-names"
} ]</p>
</blockquote>
<p>These are the values I tried for <code>database_dialect</code>, based on <a href="https://cloud.google.com/spanner/docs/reference/rpc/google.spanner.admin.database.v1#databasedialect" rel="nofollow noreferrer">the documentation</a>:</p>
<pre><code>database_dialect=DatabaseDialect.POSTGRESQL.value
database_dialect=DatabaseDialect.POSTGRESQL
database_dialect=2
</code></pre>
<p>I know the database name is fine, because if I remove <code>database_dialect</code> from the request, the database is created without any problem.</p>
<p>What am I missing here?</p>
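For reference, the id rules quoted in the error can be checked locally, and the error's own example (<code>CREATE DATABASE "my-database"</code>) suggests the PostgreSQL dialect quotes database ids with double quotes, while GoogleSQL statements use backticks. The helper below is a hypothetical sketch of that difference — the function names and quoting logic are mine, not part of the Spanner client library:

```python
import re

# Rules quoted verbatim from the error message: 2-30 characters,
# only lowercase letters, numbers, underscores or hyphens, must
# start with a letter and must not end with an underscore or hyphen.
DB_ID_RE = re.compile(r"^[a-z][a-z0-9_-]{0,28}[a-z0-9]$")

def is_valid_database_id(db_id: str) -> bool:
    """Check a candidate id against the rules stated in the error."""
    return bool(DB_ID_RE.match(db_id))

def create_statement_for(db_id: str, dialect: str) -> str:
    """Build a CREATE DATABASE statement matching the dialect's quoting.

    Hypothetical helper: the double-quote form for POSTGRESQL is taken
    from the error's example; GoogleSQL uses backtick quoting.
    """
    if not is_valid_database_id(db_id):
        raise ValueError(f"invalid database id: {db_id!r}")
    if dialect == "POSTGRESQL":
        return f'CREATE DATABASE "{db_id}"'
    return f"CREATE DATABASE `{db_id}`"
```

If the <code>create_statement</code> sent alongside <code>database_dialect=DatabaseDialect.POSTGRESQL</code> still uses backtick quoting, the server would reject it with exactly this "Invalid create statement" message — which could explain why dropping the dialect makes the same statement succeed.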
|
<python><google-cloud-platform><google-cloud-spanner>
|
2024-04-11 15:06:00
| 1
| 15,775
|
Ghasem
|
78,311,325
| 9,173,710
|
Detecting center and radius of craters in microscopy images
|
<p>I want to find the center and the radius of crater-like features in microscopy images using Python.</p>
<p>I have images like this; they are numpy arrays representing the height map of a surface.<br />
<a href="https://i.sstatic.net/deGSv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/deGSv.png" alt="enter image description here" /></a></p>
<p>I need the center to create a radial profile later on in my processing.</p>
<p>So far I have tried to simply apply a height threshold, binarizing the image and only keeping the highest features, then fitting a circle or ellipse to these points.<br />
<a href="https://i.sstatic.net/5q9wt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5q9wt.png" alt="enter image description here" /></a></p>
<p>My problem is that some images cannot be thresholded very well: both a lower and a higher threshold make the circle detection worse.</p>
<p>I tried using Canny, but I haven't had much success.</p>
<p>How can I properly detect these circles?</p>
<p>This is the code so far:</p>
<pre class="lang-py prettyprint-override"><code>from skimage.measure import EllipseModel, CircleModel
from skimage.draw import ellipse_perimeter, circle_perimeter
import numpy as np
import diplib as dip
import matplotlib.pyplot as plt
from pathlib import Path

base_path = Path("folder including multiple images")
glob = base_path.rglob("*_f.npz")
for file in glob:
    image_grey: np.ndarray = np.load(file)["arr_0"]
    max_peak = image_grey.max()
    min_peak = image_grey.min()
    # Adjust the factor at the end here to set a certain threshold
    threshold = max_peak - (max_peak - min_peak) * 0.3
    rim_binarized = image_grey >= threshold
    points = np.argwhere(rim_binarized)
    ell = EllipseModel()
    succ = ell.estimate(points)
    print(succ)
    ye, xe, a, b = (int(round(x)) for x in ell.params[:-1])
    ey, ex = ellipse_perimeter(ye, xe, a, b, ell.params[-1])
    circ = CircleModel()
    circ.estimate(points)
    yc, xc, r = (int(round(x)) for x in circ.params)
    cy, cx = circle_perimeter(yc, xc, r)
    fig2, (ax1, ax2, ax3) = plt.subplots(ncols=3, nrows=1, figsize=(12, 4),
                                         sharex=False, sharey=False)
    ax1.set_title('Original picture')
    ax1.imshow(image_grey)
    ax1.plot(cx, cy, "r,")
    ax1.plot(ex, ey, "m,")
    ax2.set_title('Rim')
    ax2.imshow(rim_binarized, cmap="Greys")
    dipimg = dip.Image(image_grey)
    ax3.set_title("Ridge")
    rad = dip.RadialMean(dipimg, binSize=1, center=(ye, xe), maxRadius="inner radius")
    ax3.plot(rad, label="ellipse")
    rad = dip.RadialMean(dipimg, binSize=1, center=(yc, xc), maxRadius="inner radius")
    ax3.plot(rad, label="circle")
    plt.show()
</code></pre>
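As an aside, one threshold-free way to get a first center and radius estimate from a height map — a sketch of my own, not part of the code above, and it assumes the rim is the dominant bright, roughly circular feature — is a height-weighted centroid followed by the peak of the radial mean:

```python
import numpy as np

def estimate_crater(height_map: np.ndarray):
    """Estimate rim center and radius without a hard threshold.

    Sketch under the assumption that the rim is the brightest,
    roughly circular feature: the height-weighted centroid gives
    the center, and the peak of the radial mean gives the radius.
    """
    h = height_map - height_map.min()          # non-negative weights
    ys, xs = np.indices(h.shape)
    total = h.sum()
    cy = (ys * h).sum() / total                # weighted centroid
    cx = (xs * h).sum() / total
    # Radial mean profile around the estimated center, binned by
    # integer radius; its argmax approximates the rim radius.
    r = np.hypot(ys - cy, xs - cx).astype(int).ravel()
    counts = np.bincount(r)
    profile = np.bincount(r, weights=h.ravel()) / np.maximum(counts, 1)
    return (cy, cx), int(np.argmax(profile))
```

Because every pixel contributes with its height as weight, this avoids picking a threshold at all, though it will drift if the rim is strongly asymmetric or clipped at the image border.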
|
<python><image-processing><feature-detection>
|
2024-04-11 15:03:30
| 2
| 1,215
|
Raphael
|