| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,575,069
| 11,192,099
|
Altair log position of tooltip on click to file
|
<p>I aim to keep a log (log file or dataframe) of all the countries a user has clicked on. I am trying to log the <code>id</code> of the tooltip on click, but do not know how to extract any values. Getting the country name and highlighting the country work well so far.</p>
<p>Any help is appreciated; if Altair does not support this, a pointer to whether/how other frameworks accomplish it would also be helpful.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import altair as alt
import streamlit as st
from vega_datasets import data


@st.cache
def get_iso_names(url: str) -> pd.DataFrame:
    return pd.read_csv(url)


# Data generators for the background
sphere = alt.sphere()
graticule = alt.graticule()

# Source of land data
source = alt.topo_feature(data.world_110m.url, "countries")
iso_name_url = "https://raw.githubusercontent.com/stefangabos/world_countries/master/data/countries/en/world.csv"
country_names = get_iso_names(iso_name_url)

click = alt.selection_multi(empty="none")

# Layering and configuring the components
background = (
    alt.layer(
        alt.Chart(sphere).mark_geoshape(fill="lightblue"),
        alt.Chart(graticule).mark_geoshape(stroke="white", strokeWidth=0.5),
        alt.Chart(source)
        .mark_geoshape(
            stroke="black",
            strokeWidth=0.15,
        )
        .encode(
            tooltip=[
                alt.Tooltip("name:N", title="Country"),
            ],
            # color=alt.condition(hover, alt.value("firebrick"), "none"),
            color=alt.condition(click, alt.value("firebrick"), alt.value("white")),
        )
        .transform_lookup(
            lookup="id",
            from_=alt.LookupData(country_names, "id", ["name"]),
        ),
    )
    .add_selection(click)
    .project("naturalEarth1")
    .properties(width=1200, height=800)
)

st.altair_chart(
    background.interactive(),
    # (background + foreground).interactive(),
    use_container_width=False,
)
</code></pre>
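<p>For the logging side, once a clicked country name can be surfaced back to Python (newer Streamlit versions expose chart selections to the script), persisting it is straightforward. A minimal sketch of just that part — the <code>log_click</code> helper and the <code>clicks.csv</code> path are hypothetical, not from the question:</p>

```python
import os

import pandas as pd


def log_click(country: str, path: str = "clicks.csv") -> pd.DataFrame:
    """Append one clicked country to a CSV log and return the full log."""
    entry = pd.DataFrame({"country": [country]})
    if os.path.exists(path):
        log = pd.concat([pd.read_csv(path), entry], ignore_index=True)
    else:
        log = entry
    log.to_csv(path, index=False)
    return log
```

<p>The same helper would work unchanged whether the country name comes from a Streamlit selection event or from any other callback mechanism.</p>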
|
<python><maps><altair><vega-lite>
|
2023-06-28 16:24:18
| 1
| 1,179
|
a-doering
|
76,575,014
| 3,371,250
|
How to recover a tree structure from an unfavorable representation of one?
|
<p>Let's say we have an unfavorable representation of a tree. This tree is not nested, but flattened and its nodes are "connected" only by ids:</p>
<pre><code>{
    "nodes": [
        {
            "id": 0,
            "value": "and",
            "children": [1, 4]
        },
        {
            "id": 1,
            "value": "or",
            "children": [2, 3]
        },
        {
            "id": 4,
            "value": "or",
            "children": [5, 6]
        }
    ],
    "leafs": [
        {"id": 2, "value": "some statement"},
        {"id": 3, "value": "some statement"},
        {"id": 5, "value": "some statement"},
        {"id": 6, "value": "some statement"}
    ]
}
</code></pre>
<p>You can see that the tree is not only flattened; there is also a rather unnecessary representation of the leafs as a dedicated list.
The ids of the leafs therefore appear twice: once as a child in the parent node and once as the identifier of the leaf.</p>
<p>What I want is a nested representation of this tree as dedicated python objects. I have to substitute the "id" with the whole object and get rid of the overly complicated list representation.</p>
<p>This is what I want:</p>
<pre><code>{
    "tree": {
        "id": 0,
        "value": "and",
        "children": [
            {
                "id": 1,
                "value": "or",
                "children": [
                    {"id": 2, "value": "some statement"},
                    {"id": 3, "value": "some statement"}
                ]
            },
            {
                "id": 4,
                "value": "or",
                "children": [
                    {"id": 5, "value": "some statement"},
                    {"id": 6, "value": "some statement"}
                ]
            }
        ]
    }
}
</code></pre>
<p>How would I start parsing the two lists so that I can build Python objects (node and leaf classes) that reference each other and represent this tree structure by reference?</p>
<pre><code>class Node:
    def __init__(self, id, operator):
        self.id = id
        self.value = operator
        self.children = None


class Leaf:
    def __init__(self, id, operator):
        self.id = id
        self.value = operator
</code></pre>
<p>These are my tree classes, but I don't know how to traverse the two lists in a way that builds my desired tree.</p>
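<p>A common two-pass approach — sketched here with self-contained copies of the classes, not part of the question — is to index every node and leaf by id, then replace each child id with the object it refers to:</p>

```python
class Node:
    def __init__(self, id, value):
        self.id = id
        self.value = value
        self.children = None


class Leaf:
    def __init__(self, id, value):
        self.id = id
        self.value = value


def build_tree(data):
    # Pass 1: create an object for every node and leaf, keyed by id.
    by_id = {n["id"]: Node(n["id"], n["value"]) for n in data["nodes"]}
    by_id.update({l["id"]: Leaf(l["id"], l["value"]) for l in data["leafs"]})
    # Pass 2: wire children up by replacing ids with the objects themselves.
    for n in data["nodes"]:
        by_id[n["id"]].children = [by_id[c] for c in n["children"]]
    # Assumption: the first entry in "nodes" is the root.
    return by_id[data["nodes"][0]["id"]]
```

<p>Because every object is created before any child list is resolved, the order of nodes in the input does not matter.</p>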
|
<python><data-structures><tree><abstract-syntax-tree>
|
2023-06-28 16:15:23
| 1
| 571
|
Ipsider
|
76,574,991
| 9,422,807
|
Pandas compare values of multiple columns
|
<p>I want to find out whether any of the values in columns mark1, mark2, mark3, mark4 and mark5 are the same within a row of the dataframe below, and list the result as True or False.</p>
<pre><code>import pandas as pd

df = pd.DataFrame(data=[[7, 2, 3, 7, 7], [3, 4, 3, 2, 7], [1, 6, 5, 2, 7], [5, 5, 6, 3, 1]],
                  columns=["mark1", "mark2", "mark3", "mark4", "mark5"])
</code></pre>
<p>Ideal output:</p>
<pre><code>   mark1  mark2  mark3  mark4  mark5  result
0      7      2      3      7      7    True
1      3      4      3      2      7    True
2      1      6      5      2      7   False
3      5      5      6      3      1    True
</code></pre>
<p>So I came up with a function using nested for loops to compare each value in a column, but it does not work:
<code>AttributeError: 'Series' object has no attribute 'columns'</code>.
What's the correct way? I'd like to avoid nested for loops by all means.</p>
<pre><code>def compare_col(df):
    check = 0
    for i in range(len(df.columns.tolist()) + 1):
        for j in range(1, len(df.columns.tolist()) + 1):
            if df.iloc[i, i] == df.iloc[j, i]:
                check += 1
    if check >= 1:
        return True
    else:
        return False

df['result'] = df.apply(lambda x: compare_col(x[['mark1', 'mark2', 'mark3', 'mark4', 'mark5']]), axis=1)
</code></pre>
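<p>For reference, one vectorized way to get this per-row result is to count distinct values per row: a row contains a duplicate exactly when it holds fewer distinct values than there are columns. A sketch:</p>

```python
import pandas as pd

df = pd.DataFrame(data=[[7, 2, 3, 7, 7], [3, 4, 3, 2, 7],
                        [1, 6, 5, 2, 7], [5, 5, 6, 3, 1]],
                  columns=["mark1", "mark2", "mark3", "mark4", "mark5"])

cols = ["mark1", "mark2", "mark3", "mark4", "mark5"]
# nunique(axis=1) counts distinct values across each row;
# fewer distinct values than columns means at least one duplicate.
df["result"] = df[cols].nunique(axis=1) < len(cols)
```

<p>This avoids any explicit loop and scales to any number of mark columns.</p>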
|
<python><pandas><multiple-columns>
|
2023-06-28 16:12:48
| 3
| 413
|
Liu Yu
|
76,574,972
| 3,541,631
|
Passing naming arguments/parameters between shell functions (existing in multiple shell scripts) and finally to python script
|
<p>I have a python script that is expecting 1, 2 or 3 arguments:</p>
<pre><code>--name - mandatory
--max - optional
--min - optional
</code></pre>
<p>I have a function in bash script that calls the python script:</p>
<p>script_1</p>
<pre><code>filter() {
    .../python3 -E filtered_as.pyc --name $1 --min $2 --max $3
}
</code></pre>
<p>Other bash scripts can call the filter function:</p>
<p>script_2</p>
<pre><code>filter NAME_DEW 42 | while read id; do
.............
</code></pre>
<p>What I want:</p>
<ul>
<li>instead of passing $1, $2 I want to pass named arguments, the same as expected in the python script (from one shell script to the other shell script, and from shell script to python).</li>
<li>work with optional arguments:
<ul>
<li>can be all 3</li>
<li>can be just name</li>
<li>can be name and min or max (2)</li>
</ul>
</li>
</ul>
<p>Also because I have 2 optional arguments, I need named arguments to know which one is passed.</p>
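<p>A sketch of one way the <code>filter</code> wrapper could accept the same named options and forward only the ones that were actually given — the interpreter path is illustrative, and the trailing <code>echo</code> is there only to show the command that would be run:</p>

```shell
filter() {
    local name="" min="" max=""
    # Parse the named options this wrapper was called with.
    while [ $# -gt 0 ]; do
        case "$1" in
            --name) name="$2"; shift 2 ;;
            --min)  min="$2";  shift 2 ;;
            --max)  max="$2";  shift 2 ;;
            *) echo "unknown option: $1" >&2; return 1 ;;
        esac
    done
    # Rebuild the argument list, including only the options that were set.
    set -- --name "$name"
    [ -n "$min" ] && set -- "$@" --min "$min"
    [ -n "$max" ] && set -- "$@" --max "$max"
    echo python3 -E filtered_as.pyc "$@"   # drop "echo" to actually run it
}
```

<p>A caller such as <code>filter --name NAME_DEW --min 42</code> then works with any combination of the two optional arguments, and the python script receives them under the same names.</p>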
|
<python><bash>
|
2023-06-28 16:10:04
| 1
| 4,028
|
user3541631
|
76,574,954
| 997,381
|
Paginate and apply in batches to Pandas DataFrame
|
<p>I have a DataFrame like:</p>
<pre><code>id    sentence
1     "Some txt"
2     "Another txt"
3     "Awkward txt"
4     "Last txt"
...
9273
</code></pre>
<p>Now I need to get records in <strong>portions of 20 (paginate)</strong>, apply a function <strong>that is called once per portion</strong> and returns a list of 20 elements, and write those elements to the DataFrame as a new column like:</p>
<pre><code>id    sentence        parsed
1     "Some txt"      1242
2     "Another txt"   9762
3     "Awkward txt"   9355
4     "Last txt"      4126
...
9273
</code></pre>
<hr />
<p>Practical use-case scenario: I have an API that supports batch calls. I want to take a single column's paginated values, send them to that API, wait for the response and apply the returned data to each row. I want to call the API once per batch, instead of 20x with <code>.apply()</code>.</p>
<p>How?</p>
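<p>A sketch of the batching pattern — <code>func</code> stands in for the single batch API call and is hypothetical:</p>

```python
import pandas as pd


def batch_apply(df, column, func, batch_size=20):
    """Call func once per batch of values; collect results into 'parsed'."""
    results = []
    for start in range(0, len(df), batch_size):
        batch = df[column].iloc[start:start + batch_size].tolist()
        # One call per batch, e.g. one API request for up to 20 sentences.
        results.extend(func(batch))
    out = df.copy()
    out["parsed"] = results
    return out
```

<p>The function assumes <code>func</code> returns one result per input value, so the concatenated results line up with the rows.</p>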
|
<python><pandas><dataframe>
|
2023-06-28 16:07:15
| 1
| 1,404
|
cadavre
|
76,574,768
| 22,009,322
|
Drawing grouped animated plt.step using matplotlib
|
<p>I'm just learning Python and want to draw plt.step lines that follow the scatter points.
The scatter points are drawn perfectly, so I used pretty much the same approach to draw the plt.step lines, but it doesn't work, and it's unclear to me why, since <code>set_data</code> can take the same kind of 2D arrays as <code>set_offsets</code> does.</p>
<p>Unfortunately I am stuck with the following issues:</p>
<p>a) there is only one plt.step line (should be the same amount as 'Tracks' which are 3)</p>
<p>b) the line is drawn one cell behind the point</p>
<p>c) the color of the line is not corresponding to the scatter point</p>
<p>I appreciate any advice!</p>
<p>Current output:</p>
<p><a href="https://i.sstatic.net/jLY0r.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jLY0r.gif" alt="enter image description here" /></a></p>
<p>The example of the code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.animation import FuncAnimation

df = pd.DataFrame()
cf = 0
while cf < 3:
    df = pd.concat([df, pd.DataFrame(
        {
            "Track": f'Track {cf + 1}',
            "Timeline": np.linspace(0, 9, 10, dtype=int),
            "Position": np.random.randint(low=0+cf, high=3+cf, size=10)
        }
    )])
    cf = cf + 1

df = df.reset_index(drop=True)
print(df)
df.info()

# plot:
fig, ax = plt.subplots()

# Point coordinates:
y = df['Position']
x = df['Timeline']

# Labels with axes:
ax.set_xlabel('Timeline')
ax.set_ylabel('Position')
ax.set_ylim(-0.2, 4.2)
ax.invert_yaxis()
ax.set_xticks(list(np.unique(x)))
ax.set_yticks(list(np.unique(y)))
ax.set_xlim(df["Timeline"].min()-0.5, df["Timeline"].max()+0.5)

# Colors:
colors = {'Track 1': 'tab:red', 'Track 2': 'tab:blue', 'Track 3': 'blue'}

# Drawing points and lines according to positions:
frames = (len(df.groupby(['Timeline'])))
steps = []
scatters = []
for track, group in df.groupby("Track"):
    scatters.append(ax.scatter(group["Timeline"].iloc[0],
                               group["Position"].iloc[0],
                               s=45, c=colors[track]))
    steps = plt.step(group["Timeline"].iloc[0],
                     group["Position"].iloc[0],
                     color=colors[track])

def animate(i):
    for scatter, (_, group) in zip(scatters, df.groupby("Track")):
        scatter.set_offsets((group["Timeline"].iloc[i],
                             group["Position"].iloc[i]))
    for step, (_, group) in zip(steps, df.groupby('Track')):
        step.set_data(group['Timeline'].iloc[:i],
                      group['Position'].iloc[:i])
    print('step', i)  # for some reason there are three 0 steps in the beginning

anim = FuncAnimation(fig, animate, frames=frames, interval=400)
ax.grid()
anim.save('test.gif', writer='pillow')
</code></pre>
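<p>For what it's worth, one likely culprit in the snippet above is that <code>plt.step</code> returns a list of <code>Line2D</code> objects and the result is assigned to <code>steps</code> (overwriting it each iteration) instead of being appended once per track; slicing with <code>iloc[:i]</code> also excludes frame <code>i</code> itself, which would leave the line one cell behind the point. A hedged sketch of the loop and callback with those two changes (random demo data, not the original):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Track": np.repeat(["Track 1", "Track 2", "Track 3"], 10),
    "Timeline": np.tile(np.arange(10), 3),
    "Position": np.random.randint(0, 5, 30),
})
colors = {"Track 1": "tab:red", "Track 2": "tab:blue", "Track 3": "blue"}

fig, ax = plt.subplots()
steps = []
for track, group in df.groupby("Track"):
    # ax.step returns a list of Line2D; unpack the single line and append it,
    # so there is one step line per track with the matching color.
    (line,) = ax.step(group["Timeline"].iloc[:1], group["Position"].iloc[:1],
                      color=colors[track], where="post")
    steps.append(line)

def animate(i):
    for line, (_, group) in zip(steps, df.groupby("Track")):
        # slice up to and including frame i so the line reaches the point
        line.set_data(group["Timeline"].iloc[:i + 1],
                      group["Position"].iloc[:i + 1])
```
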
|
<python><matplotlib><matplotlib-animation>
|
2023-06-28 15:44:57
| 1
| 333
|
muted_buddy
|
76,574,724
| 3,446,051
|
ValueError: Layer "model_1" expects 7 input(s), but it received 1 input tensors
|
<p>I have a very weird problem I've never experienced before:
I have built a simple model:</p>
<pre><code>merged = Concatenate()(model_inputs)
merged = Dense(50,activation='relu')(merged)
merged = Dense(30,activation='relu')(merged)
merged = Dense(1,activation='sigmoid')(merged)
model = Model(inputs=model_inputs,outputs=merged)
</code></pre>
<p>And <code>model_inputs</code> is:</p>
<pre><code>[<KerasTensor: shape=(None, 1) dtype=float32 (created by layer 'CryoSleep')>,
<KerasTensor: shape=(None, 1) dtype=float32 (created by layer 'RoomService')>,
<KerasTensor: shape=(None, 1) dtype=float32 (created by layer 'Spa')>,
<KerasTensor: shape=(None, 1) dtype=float32 (created by layer 'VRDeck')>,
<KerasTensor: shape=(None, 1) dtype=float32 (created by layer 'Deck')>,
<KerasTensor: shape=(None, 1) dtype=float32 (created by layer 'Side')>,
<KerasTensor: shape=(None, 1) dtype=float32 (created by layer 'AllSpending')>]
</code></pre>
<p>Now I compiled and wanted to fit the model:</p>
<pre><code>X_Train_nn = create_input_values(X_Train)
y_Train_nn = create_label_values(y_Train)
X_Val_nn = create_input_values(X_Val)
y_Val_nn = create_label_values(y_Val)
model.compile(loss='binary_crossentropy',optimizer='nadam',metrics=['acc'])
model.fit(X_Train_nn,y_Train_nn,epochs=50,batch_size=32,validation_data=(X_Val_nn , y_Val_nn ),verbose=1)
</code></pre>
<p>The model trains an epoch but fails when trying to validate. If I remove the validation part, it works successfully. Otherwise I get the following error message:</p>
<pre><code>ValueError: Layer "model_1" expects 7 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 7) dtype=float64>]
</code></pre>
<p>But that is not true. The validation set has the correct length. Why? I can train the model with <code>X_Val_nn</code> and it trains correctly. I used <code>X_Val_nn</code> with the <code>evaluate</code> function and it works correctly.<br />
I even passed the train set as training and validation data and the same error is shown. So it is able to train with <code>X_Train_nn</code> but for validating it fails. And it shows always the error message shown above.<br />
Any idea what is wrong here?</p>
<p>PS: The whole traceback is the following:</p>
<pre><code>Epoch 1/50
Exception ignored in: <function _xla_gc_callback at 0x7f601ee392d0>
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/jax/_src/lib/__init__.py", line 103, in _xla_gc_callback
def _xla_gc_callback(*args):
KeyboardInterrupt:
26/28 [==========================>...] - ETA: 0s - loss: 0.4339 - acc: 0.7849
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-195-c3a76520886a> in <cell line: 6>()
4 y_Val_nn = create_label_values(y_Val)
5 model.compile(loss='binary_crossentropy',optimizer='nadam',metrics=['acc'])
----> 6 model.fit(X_Val_nn,y_Val_nn,epochs=50,batch_size=32,validation_data=(range(100), y_Train_nn),verbose=1)
33 frames
/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
59 def error_handler(*args, **kwargs):
60 if not tf.debugging.is_traceback_filtering_enabled():
---> 61 return fn(*args, **kwargs)
62
63 filtered_tb = None
/usr/local/lib/python3.10/dist-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1727 steps_per_execution=self._steps_per_execution,
1728 )
-> 1729 val_logs = self.evaluate(
1730 x=val_x,
1731 y=val_y,
/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
59 def error_handler(*args, **kwargs):
60 if not tf.debugging.is_traceback_filtering_enabled():
---> 61 return fn(*args, **kwargs)
62
63 filtered_tb = None
/usr/local/lib/python3.10/dist-packages/keras/engine/training.py in evaluate(self, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, return_dict, **kwargs)
2070 ):
2071 callbacks.on_test_batch_begin(step)
-> 2072 tmp_logs = self.test_function(iterator)
2073 if data_handler.should_sync:
2074 context.async_wait()
/usr/local/lib/python3.10/dist-packages/tensorflow/python/util/traceback_utils.py in error_handler(*args, **kwargs)
139 try:
140 if not is_traceback_filtering_enabled():
--> 141 return fn(*args, **kwargs)
142 except NameError:
143 # In some very rare cases,
/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py in __call__(self, *args, **kwds)
892
893 with OptionalXlaContext(self._jit_compile):
--> 894 result = self._call(*args, **kwds)
895
896 new_tracing_count = self.experimental_get_tracing_count()
/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py in _call(self, *args, **kwds)
940 # This is the first call of __call__, so we have to initialize.
941 initializers = []
--> 942 self._initialize(args, kwds, add_initializers_to=initializers)
943 finally:
944 # At this point we know that the initialization is complete (or less
/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py in _initialize(self, args, kwds, add_initializers_to)
761 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
762 self._concrete_variable_creation_fn = (
--> 763 self._variable_creation_fn # pylint: disable=protected-access
764 ._get_concrete_function_internal_garbage_collected(
765 *args, **kwds))
/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/tracing_compiler.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
169 """Returns a concrete function which cleans up its graph function."""
170 with self._lock:
--> 171 concrete_function, _ = self._maybe_define_concrete_function(args, kwargs)
172 return concrete_function
173
/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/tracing_compiler.py in _maybe_define_concrete_function(self, args, kwargs)
164 kwargs = {}
165
--> 166 return self._maybe_define_function(args, kwargs)
167
168 def _get_concrete_function_internal_garbage_collected(self, *args, **kwargs):
/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/tracing_compiler.py in _maybe_define_function(self, args, kwargs)
394 kwargs = placeholder_bound_args.kwargs
395
--> 396 concrete_function = self._create_concrete_function(
397 args, kwargs, func_graph)
398
/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/tracing_compiler.py in _create_concrete_function(self, args, kwargs, func_graph)
298
299 concrete_function = monomorphic_function.ConcreteFunction(
--> 300 func_graph_module.func_graph_from_py_func(
301 self._name,
302 self._python_function,
/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, create_placeholders, acd_record_initial_resource_uses)
1212 _, original_func = tf_decorator.unwrap(python_func)
1213
-> 1214 func_outputs = python_func(*func_args, **func_kwargs)
1215
1216 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py in wrapped_fn(*args, **kwds)
665 # the function a weak reference to itself to avoid a reference cycle.
666 with OptionalXlaContext(compile_with_xla):
--> 667 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
668 return out
669
/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1198 except Exception as e: # pylint:disable=broad-except
1199 if hasattr(e, "ag_error_metadata"):
-> 1200 raise e.ag_error_metadata.to_exception(e)
1201 else:
1202 raise
/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1187 # TODO(mdan): Push this block higher in tf.function's call stack.
1188 try:
-> 1189 return autograph.converted_call(
1190 original_func,
1191 args,
/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/impl/api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
437 try:
438 if kwargs is not None:
--> 439 result = converted_f(*effective_args, **kwargs)
440 else:
441 result = converted_f(*effective_args)
/usr/local/lib/python3.10/dist-packages/keras/engine/training.py in tf__test_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
16 except:
17 do_return = False
/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/impl/api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
375
376 if not options.user_requested and conversion.is_allowlisted(f):
--> 377 return _call_unconverted(f, args, kwargs, options)
378
379 # internal_convert_user_code is for example turned off when issuing a dynamic
/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/impl/api.py in _call_unconverted(f, args, kwargs, options, update_cache)
457 if kwargs is not None:
458 return f(*args, **kwargs)
--> 459 return f(*args)
460
461
/usr/local/lib/python3.10/dist-packages/keras/engine/training.py in step_function(model, iterator)
1834
1835 data = next(iterator)
-> 1836 outputs = model.distribute_strategy.run(run_step, args=(data,))
1837 outputs = reduce_per_replica(
1838 outputs,
/usr/local/lib/python3.10/dist-packages/tensorflow/python/distribute/distribute_lib.py in run(***failed resolving arguments***)
1314 fn = autograph.tf_convert(
1315 fn, autograph_ctx.control_status_ctx(), convert_by_default=False)
-> 1316 return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
1317
1318 def reduce(self, reduce_op, value, axis):
/usr/local/lib/python3.10/dist-packages/tensorflow/python/distribute/distribute_lib.py in call_for_each_replica(self, fn, args, kwargs)
2893 kwargs = {}
2894 with self._container_strategy().scope():
-> 2895 return self._call_for_each_replica(fn, args, kwargs)
2896
2897 def _call_for_each_replica(self, fn, args, kwargs):
/usr/local/lib/python3.10/dist-packages/tensorflow/python/distribute/distribute_lib.py in _call_for_each_replica(self, fn, args, kwargs)
3694 def _call_for_each_replica(self, fn, args, kwargs):
3695 with ReplicaContext(self._container_strategy(), replica_id_in_sync_group=0):
-> 3696 return fn(*args, **kwargs)
3697
3698 def _reduce_to(self, reduce_op, value, destinations, options):
/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs)
687 try:
688 with conversion_ctx:
--> 689 return converted_call(f, args, kwargs, options=options)
690 except Exception as e: # pylint:disable=broad-except
691 if hasattr(e, 'ag_error_metadata'):
/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/impl/api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
375
376 if not options.user_requested and conversion.is_allowlisted(f):
--> 377 return _call_unconverted(f, args, kwargs, options)
378
379 # internal_convert_user_code is for example turned off when issuing a dynamic
/usr/local/lib/python3.10/dist-packages/tensorflow/python/autograph/impl/api.py in _call_unconverted(f, args, kwargs, options, update_cache)
456
457 if kwargs is not None:
--> 458 return f(*args, **kwargs)
459 return f(*args)
460
/usr/local/lib/python3.10/dist-packages/keras/engine/training.py in run_step(data)
1822
1823 def run_step(data):
-> 1824 outputs = model.test_step(data)
1825 # Ensure counter is updated only if `test_step` succeeds.
1826 with tf.control_dependencies(_minimum_control_deps(outputs)):
/usr/local/lib/python3.10/dist-packages/keras/engine/training.py in test_step(self, data)
1786 x, y, sample_weight = data_adapter.unpack_x_y_sample_weight(data)
1787
-> 1788 y_pred = self(x, training=False)
1789 # Updates stateful loss metrics.
1790 self.compute_loss(x, y, y_pred, sample_weight)
/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
59 def error_handler(*args, **kwargs):
60 if not tf.debugging.is_traceback_filtering_enabled():
---> 61 return fn(*args, **kwargs)
62
63 filtered_tb = None
/usr/local/lib/python3.10/dist-packages/keras/engine/training.py in __call__(self, *args, **kwargs)
556 layout_map_lib._map_subclass_model_variable(self, self._layout_map)
557
--> 558 return super().__call__(*args, **kwargs)
559
560 @doc_controls.doc_in_current_and_subclasses
/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
59 def error_handler(*args, **kwargs):
60 if not tf.debugging.is_traceback_filtering_enabled():
---> 61 return fn(*args, **kwargs)
62
63 filtered_tb = None
/usr/local/lib/python3.10/dist-packages/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
1110 ):
1111
-> 1112 input_spec.assert_input_compatibility(
1113 self.input_spec, inputs, self.name
1114 )
/usr/local/lib/python3.10/dist-packages/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
217
218 if len(inputs) != len(input_spec):
--> 219 raise ValueError(
220 f'Layer "{layer_name}" expects {len(input_spec)} input(s),'
221 f" but it received {len(inputs)} input tensors. "
ValueError: in user code:
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1852, in test_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1836, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 1316, in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 2895, in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/distribute/distribute_lib.py", line 3696, in _call_for_each_replica
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1824, in run_step **
outputs = model.test_step(data)
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1788, in test_step
y_pred = self(x, training=False)
File "/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py", line 61, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 558, in __call__
return super().__call__(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py", line 61, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/keras/engine/base_layer.py", line 1112, in __call__
input_spec.assert_input_compatibility(
File "/usr/local/lib/python3.10/dist-packages/keras/engine/input_spec.py", line 219, in assert_input_compatibility
raise ValueError(
ValueError: Layer "model_1" expects 7 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 7) dtype=float64>]
</code></pre>
<p><strong>UPDATE:</strong></p>
<p>Here I share also the code for <code>create_input_values</code> and <code>create_label_values</code>:</p>
<pre><code>def create_input_values(df):
    input_values = []
    for column in df.columns:
        if df[column].dtype == np.float64:
            input_values.append(np.array(df[column].values.tolist()).reshape(len(df), 1))
        else:
            input_values.append(np.array(pd.to_numeric(df[column], downcast='float').values.tolist()).reshape(len(df), 1))
    return input_values

def create_label_values(series):
    return np.array(series.values.tolist()).reshape(len(series), 1)
</code></pre>
<p>As mentioned, it also fails if I pass my <code>X_Train_nn</code> as <code>validation_data</code>. If I don't use <code>validation_data</code> then everything will work correctly.</p>
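<p>The error message shows the validation inputs arriving at the model as a single <code>(None, 7)</code> tensor rather than the seven <code>(None, 1)</code> arrays a seven-input model expects, so it is worth verifying the exact structure that reaches <code>validation_data</code>. A sketch (shapes illustrative, no Keras required) of splitting a 2-D array into the per-column input list such a model wants:</p>

```python
import numpy as np


def as_input_list(X):
    """Split an (n, k) array into a list of k (n, 1) arrays, one per
    named model input."""
    X = np.asarray(X, dtype=np.float32)
    # X[:, [i]] keeps the second axis, giving shape (n, 1) per column.
    return [X[:, [i]] for i in range(X.shape[1])]


X_val = np.random.rand(10, 7)       # stand-in for the real validation frame
X_val_list = as_input_list(X_val)   # 7 arrays of shape (10, 1)
```

<p>If the structure passed in <code>validation_data</code> already looks like this, the problem lies elsewhere in how Keras repackages the tuple; but confirming the shapes is a cheap first check.</p>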
|
<python><tensorflow><keras>
|
2023-06-28 15:38:55
| 2
| 5,459
|
Code Pope
|
76,574,560
| 7,326,714
|
Shopify REST API cannot delete product images
|
<p>I have the following custom Python function that uses requests to talk to the Shopify REST API and delete images from products:</p>
<pre><code>import time

import requests

headers = {"Accept": "application/json", "Content-Type": "application/json"}

def delete_product_image(shop_url, product_id, image_id, headers):
    del_r = requests.delete(f'{shop_url}/products/{product_id}/images/{image_id}.json()', headers=headers)
    if del_r.status_code in [200, 201]:
        print(f'image {image_id} of product {product_id} removed')
    else:
        print(f'Error: {del_r.status_code} - {del_r.text}')
        time.sleep(1)
        delete_product_image(shop_url, product_id, image_id, headers)
</code></pre>
<p>If I execute <code>delete_product_image(shop_url, product_id, image_id, headers)</code>, the <code>del_r</code> status code returns <code>200</code>, but the images are not deleted from the products.</p>
<p>What is going wrong?</p>
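<p>One thing worth double-checking — an observation, not a confirmed fix: the f-string ends the URL with a literal <code>.json()</code>, while Shopify's REST endpoints end in plain <code>.json</code>. A sketch of the intended URL construction (the helper name is made up):</p>

```python
def image_url(shop_url: str, product_id: int, image_id: int) -> str:
    # Shopify REST resource paths end in ".json", not ".json()".
    return f"{shop_url}/products/{product_id}/images/{image_id}.json"
```
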
|
<python><rest><shopify>
|
2023-06-28 15:17:42
| 1
| 1,991
|
LucSpan
|
76,574,525
| 2,612,592
|
Close Python Websocket server from another thread
|
<p>I have a websocket server class that has a <code>stop()</code> method that is called when Control+C is pressed in console (so I guess <strong>it is accessed from another thread</strong>).</p>
<p>How can I safely stop the server (and finish the script from running) when <code>stop()</code> is called?</p>
<p>The current implementation executes the <code>stop()</code> method, but the script keeps running and does not print <code>Server closed</code>. However, if a client sends a message after <code>stop()</code> is called, the script actually finishes and prints the <code>Server closed</code> message.</p>
<pre><code>import asyncio
import websockets

DOMAIN = 'localhost'
PORT = 1111

class EchoServer:
    loop = None

    def stop(self):  # METHOD CALLED FROM ANOTHER THREAD
        self.loop.stop()

    def process_message(self, message):
        print(message)
        return message

    async def handle(self, ws_client):
        print('Listening')
        async for message in ws_client:
            await ws_client.send(message)

    async def main(self):
        start_server = websockets.serve(self.handle, DOMAIN, PORT)
        asyncio.ensure_future(start_server)

    def start(self):
        self.loop = asyncio.get_event_loop()
        self.loop.create_task(self.main())
        print('Starting server...')
        self.loop.run_forever()
        print('Server closed')
</code></pre>
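<p>asyncio event loops are not thread-safe: calling <code>loop.stop()</code> directly from another thread will not wake the loop, which is consistent with the behavior described (the stop only takes effect once a client message gives the loop something to process). The documented pattern is to schedule the stop with <code>loop.call_soon_threadsafe</code>. A self-contained sketch of that pattern (no websockets involved; names are illustrative):</p>

```python
import asyncio
import threading
import time


class Server:
    def __init__(self):
        self.loop = None
        self.closed = False

    def start(self):
        self.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(self.loop)
        self.loop.run_forever()  # blocks until stop() is scheduled
        self.closed = True       # reached only after the loop stops

    def stop(self):
        # Safe to call from another thread: wakes the loop and stops it.
        self.loop.call_soon_threadsafe(self.loop.stop)


server = Server()
t = threading.Thread(target=server.start)
t.start()
time.sleep(0.2)   # give the loop time to start
server.stop()     # called from the main thread, i.e. "another thread"
t.join(timeout=2)
```

<p>In the question's class, changing <code>stop()</code> to <code>self.loop.call_soon_threadsafe(self.loop.stop)</code> should make the loop exit promptly instead of waiting for the next client message.</p>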
|
<python><multithreading><websocket><python-asyncio><python-multithreading>
|
2023-06-28 15:11:54
| 2
| 587
|
Oliver Mohr Bonometti
|
76,574,455
| 7,321,700
|
Creating Multiple dataframes in a loop from function result
|
<p><strong>Scenario:</strong> I have a function that calls an API and retrieves data. The input of this function is a Year.</p>
<p><strong>Objective:</strong> I have a list of years and want to call the function sequentially in a loop, and add the results to either:</p>
<ol>
<li>multiple Dataframes (changing the names as it loops)</li>
<li>or append to a single Dataframe as new dimensions.</li>
</ol>
<p><strong>Issue:</strong> For multiple dataframes, I am trying to run a loop, where each iteration creates a new dataframe, and names it based on the year:</p>
<pre><code>for b_years in base_years:
    if b_years != '' and b_years >= 2017:
        output_df_ + b_years = pd.DataFrame(getAPICore(b_years))
</code></pre>
<p><strong>Obs.</strong> base_years is a DF with the unique list of years.</p>
<p><strong>Error:</strong> For the above snippet, the first part of the third line gives an operator assignment error.</p>
<p><strong>Question 1:</strong> How can this operation be performed?</p>
<p><strong>Question 2:</strong> If instead of multiple dataframes, I appended every new function result as a new dimension of a single dataframe:</p>
<pre><code>for b_years in base_years:
    if b_years != '' and b_years >= 2017:
        outputdf_name = pd.DataFrame.append(getAPICore(b_years))
</code></pre>
<p>the function ends with no error/result. Is there a way to do this operation?</p>
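<p>For reference, Python does not allow composing a variable name on the left of <code>=</code> (hence the assignment error); the usual pattern is a dict keyed by year, or a single concatenated frame. A sketch with a stubbed <code>getAPICore</code> — the stub's return value is made up:</p>

```python
import pandas as pd


def getAPICore(year):
    # Stand-in for the real API call; returns column data for one year.
    return {"year": [year], "value": [0]}


base_years = [2016, 2017, 2018]

# Option 1: one DataFrame per year, keyed by year instead of a composed name.
frames = {y: pd.DataFrame(getAPICore(y))
          for y in base_years if y != '' and y >= 2017}

# Option 2: a single long DataFrame combining all years.
combined = pd.concat(frames.values(), ignore_index=True)
```

<p>Individual frames are then reached as <code>frames[2017]</code>, and <code>combined</code> covers the single-DataFrame variant without using the (removed) <code>DataFrame.append</code>.</p>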
|
<python><pandas><dataframe>
|
2023-06-28 15:02:25
| 1
| 1,711
|
DGMS89
|
76,574,447
| 11,267,281
|
How to download models from HuggingFace through Azure Machine Learning Registry?
|
<p>While I'm perfectly able to download any models from my own Azure Machine Learning Registry or even the "azureml" registry, if I run the exact same code against the HuggingFace registry I receive the error "<strong>Exception: Registry asset URI could not be parsed</strong>".</p>
<p>Steps to reproduce (in my case I used an Azure Compute Instance):</p>
<pre><code>registry_name = "HuggingFace"
from azure.ai.ml import MLClient
ml_client_registry = MLClient(credential=credential, registry_name=registry_name)
m_name = "openai-gpt"
m_version = 12
m = ml_client_registry.models.get(name=m_name, version=m_version)
m_local_base_path = "./models_from_huggings_registry"
ml_client_registry.models.download(name=m_name, version=m_version, download_path=m_local_base_path)
</code></pre>
<p>If I print the "m" variable, it shows the model metadata:</p>
<blockquote>
<p>Model({'job_name': None, 'is_anonymous': False,
'auto_increment_version': False, 'name': 'openai-gpt', 'description':
'<code>openai-gpt</code> is a pre-trained language model available on the Hugging
Face Hub. It's specifically designed for the <code>text-generation</code> task
in the <code>transformers</code> library. If you want to learn more about the
model's architecture, hyperparameters, limitations, and biases, you
can find this information on the model's dedicated <a href="https://huggingface.co/openai-gpt" rel="nofollow noreferrer">Model Card on the
Hugging Face Hub</a>.\n\nHere's an
example API request payload that you can use to obtain predictions
from the model:\n<code>\n{\n "inputs": "My name is Julien and I like to"\n}\n</code>\n', 'tags': {'modelId': 'openai-gpt', 'task':
'text-generation', 'library': 'transformers', 'license': 'mit'},
'properties': {'skuBasedEngineIds':
'azureml://registries/HuggingFace/models/transformers-cpu-small/labels/latest,azureml://registries/HuggingFace/models/transformers-gpu-medium/labels/latest',
'engineEnvironmentVariableOverrides': '{"AZUREML_HF_MODEL_ID":
"openai-gpt", "AZUREML_HF_TASK": "text-generation"}'},
'print_as_yaml': True, 'id':
'azureml://registries/HuggingFace/models/openai-gpt/versions/12',
'Resource__source_path': None, 'base_path':
'/mnt/batch/tasks/shared/LS_root/mounts/clusters/dsvm-general-optimized01/code/Users/mauro.minella/git_repos/azuremlnotebooks/MLOPS/notebooks
AMLv2', 'creation_context':
<azure.ai.ml.entities._system_data.SystemData object at
0x7f2602efdf60>, 'serialize': <msrest.serialization.Serializer object
at 0x7f25bf52c130>, 'version': '12', 'latest_version': None, 'path':
None, 'datastore': None, 'utc_time_created': None, 'flavors': None,
'arm_type': 'model_version', 'type': 'preset_model'})</p>
</blockquote>
<p>However, the very last instruction, which should download the model, actually returns the error above, whose full text is below:</p>
<pre><code>TypeError Traceback (most recent call last)
File /anaconda/envs/azuremlsdkv2mm/lib/python3.10/site-packages/azure/ai/ml/_utils/_storage_utils.py:187, in get_ds_name_and_path_prefix(asset_uri, registry_name)
186 try:
--> 187 split_paths = re.findall(STORAGE_URI_REGEX, asset_uri)
188 path_prefix = split_paths[0][3]
File /anaconda/envs/azuremlsdkv2mm/lib/python3.10/re.py:240, in findall(pattern, string, flags)
233 """Return a list of all non-overlapping matches in the string.
234
235 If one or more capturing groups are present in the pattern, return
(...)
238
239 Empty matches are included in the result."""
--> 240 return _compile(pattern, flags).findall(string)
TypeError: expected string or bytes-like object
During handling of the above exception, another exception occurred:
Exception Traceback (most recent call last)
Cell In[21], line 6
2 import mlflow
4 m_local_base_path = "./models_from_huggings_registry"
----> 6 ml_client_registry.models.download(name=m_name, version=m_version, download_path=m_local_base_path)
File /anaconda/envs/azuremlsdkv2mm/lib/python3.10/site-packages/azure/ai/ml/_telemetry/activity.py:263, in monitor_with_activity.<locals>.monitor.<locals>.wrapper(*args, **kwargs)
260 @functools.wraps(f)
261 def wrapper(*args, **kwargs):
262 with log_activity(logger, activity_name or f.__name__, activity_type, custom_dimensions):
--> 263 return f(*args, **kwargs)
File /anaconda/envs/azuremlsdkv2mm/lib/python3.10/site-packages/azure/ai/ml/operations/_model_operations.py:305, in ModelOperations.download(self, name, version, download_path)
295 """Download files related to a model.
296
297 :param str name: Name of the model.
(...)
301 :raise: ResourceNotFoundError if can't find a model matching provided name.
302 """
304 model_uri = self.get(name=name, version=version).path
--> 305 ds_name, path_prefix = get_ds_name_and_path_prefix(model_uri, self._registry_name)
306 if self._registry_name:
307 sas_uri = get_storage_details_for_registry_assets(
308 service_client=self._service_client,
309 asset_name=name,
(...)
314 uri=model_uri,
315 )
File /anaconda/envs/azuremlsdkv2mm/lib/python3.10/site-packages/azure/ai/ml/_utils/_storage_utils.py:190, in get_ds_name_and_path_prefix(asset_uri, registry_name)
188 path_prefix = split_paths[0][3]
189 except Exception:
--> 190 raise Exception("Registry asset URI could not be parsed.")
191 ds_name = None
192 else:
Exception: Registry asset URI could not be parsed.
</code></pre>
|
<python><azure><huggingface-transformers><azure-machine-learning-service><huggingface>
|
2023-06-28 15:01:08
| 1
| 319
|
Mauro Minella
|
76,574,431
| 6,335,363
|
How can I create a union type that can also be used to instantiate union members in Python?
|
<p>I'm currently building an algebraic data type to represent the state of a task, as per <a href="https://stackoverflow.com/questions/16258553/how-can-i-define-algebraic-data-types-in-python">this question</a>, but want to extend it to make it a little cleaner to use.</p>
<p>Here are the definitions of my states:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class StatusWaiting:
# Nothing
pass
@dataclass
class StatusRunning:
progress: float
@dataclass
class StatusComplete:
result: int
@dataclass
class StatusFailure:
reason: str
</code></pre>
<p>However, the way that I intend to use these variants has two seemingly incompatible behaviours:</p>
<p>Type annotation should behave as follows:</p>
<pre class="lang-py prettyprint-override"><code>Status = StatusWaiting | StatusRunning | StatusComplete | StatusFailure
def get_status() -> Status:
...
</code></pre>
<p>Instantiation should behave as follows:</p>
<pre class="lang-py prettyprint-override"><code>class Status:
Waiting = StatusWaiting
Running = StatusRunning
Failed = StatusFailed
Complete = StatusComplete
my_status = Status.Running(0.42)
</code></pre>
<p>How can I define the <code>Status</code> type so that I can have it behave as a union when used as a type annotation, and also behave as a collection of the variants for simple initialization?</p>
<pre class="lang-py prettyprint-override"><code>Status = ???
def get_status() -> Status:
return Status.Failed("Something has gone horribly wrong")
</code></pre>
<p>I've tried using an <code>Enum</code>, but this doesn't appear to allow for instantiation.</p>
<pre class="lang-py prettyprint-override"><code>class Status(Enum):
Waiting = StatusWaiting
Running = StatusRunning
Complete = StatusComplete
Failure = StatusFailure
def get_status() -> Status:
# Mypy: "Status" not callable
return Status.Complete(42)
</code></pre>
|
<python><python-typing><algebraic-data-types><discriminated-union>
|
2023-06-28 14:59:09
| 1
| 2,081
|
Maddy Guthridge
|
76,574,210
| 18,219,269
|
custom module not found when debugging in VSCode Python?
|
<p>I currently have a Python project running with the following launch config.</p>
<pre><code>{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python: Module",
"type": "python",
"request": "launch",
"program": "${file}",
"env": {"PYTHONPATH": "${workspaceFolder}\\project_code"}
}
]
}
</code></pre>
<p>The python file that I am debugging is in a different folder "${workspaceRoot}\project_code\check_files" which imports modules from a separate folder "${workspaceRoot}\project_code\src"</p>
<p>When I run the file without debugging, it works; however, when I run it with the debugger, the module that I am trying to import from the src folder is not found. I tried googling for a solution but can't seem to find one that works. Hence, I would appreciate it if somebody could provide some guidance on what I should do.</p>
<p><strong>UPDATE</strong></p>
<p>added directory structure for reference. Also, I have updated the deprecated references as well. Thanks for pointing that out <a href="https://stackoverflow.com/users/19133920/jialedu">JialeDu</a></p>
<pre><code>project_name (VS code folder is opened at this directory)
- project_code
- py_files (the file that I am trying to run is here)
- modules (the module location that I am trying to load)
</code></pre>
|
<python><python-3.x><visual-studio-code><vscode-debugger>
|
2023-06-28 14:32:06
| 0
| 339
|
terrygryffindor
|
76,574,162
| 15,673,412
|
python - use subplot with a function returning fig, ax
|
<p>I am using the <code>ruptures.display</code> function inside a for loop of 4 iterations.
The function returns a <code>tuple[Figure, list[ndarray] | ndarray]</code> of Figure and Axes.</p>
<p>I would like to use <code>plt.subplot</code> and plot these 4 plots one above the other.
I normally use <code>plt.subplot(4, 1, j)</code>, but in this case I am struggling, because I do not know how to reconcile it with the <code>display</code> function returning matplotlib objects.</p>
<p>Thanks</p>
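<p>For context, one generic way to stack four plots is to create the axes up front and draw each signal onto its own axis; a minimal matplotlib-only sketch (it does not call <code>ruptures.display</code>, whose figure handling is the open question here):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np

# Four dummy signals standing in for the per-iteration data
signals = [np.sin(k * np.linspace(0, 2 * np.pi, 50)) for k in range(1, 5)]

# Create all four axes up front, stacked in one column
fig, axs = plt.subplots(4, 1, sharex=True, figsize=(6, 8))
for ax, sig in zip(axs, signals):
    ax.plot(sig)
fig.tight_layout()
```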
|
<python><matplotlib><subplot>
|
2023-06-28 14:27:00
| 0
| 480
|
Sala
|
76,574,134
| 2,646,505
|
Fully vectorise sum of binned data (using pre-computed bin-index)
|
<p>Suppose that I have some time series (<code>t</code>) with multiple observables (<code>a</code> and <code>b</code>):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
t = np.linspace(0, 10, 100)
a = np.random.normal(loc=5, scale=0.1, size=t.size)
b = np.random.normal(loc=1, scale=0.5, size=t.size)
</code></pre>
<p>I want to get the mean of time-bins, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>bin_edges = np.linspace(0, 12, 12)
bin_index = np.digitize(t, bin_edges) - 1
a_binned = np.zeros(bin_edges.size - 1)
b_binned = np.zeros(bin_edges.size - 1)
for ibin in np.argwhere(np.bincount(bin_index) > 0).flatten():
select = bin_index == ibin
a_binned[ibin] = np.mean(a[select])
b_binned[ibin] = np.mean(b[select])
</code></pre>
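<p>The loop above can typically be replaced by <code>np.bincount</code> with the observable as <code>weights</code>; a sketch (empty bins come out as 0, matching the pre-filled zeros):</p>

```python
import numpy as np

t = np.linspace(0, 10, 100)
a = np.random.normal(loc=5, scale=0.1, size=t.size)
b = np.random.normal(loc=1, scale=0.5, size=t.size)

bin_edges = np.linspace(0, 12, 12)
bin_index = np.digitize(t, bin_edges) - 1
n_bins = bin_edges.size - 1

# Per-bin sample counts and per-bin sums, one bincount call each
counts = np.bincount(bin_index, minlength=n_bins)[:n_bins]
a_binned = np.divide(np.bincount(bin_index, weights=a, minlength=n_bins)[:n_bins],
                     counts, out=np.zeros(n_bins), where=counts > 0)
b_binned = np.divide(np.bincount(bin_index, weights=b, minlength=n_bins)[:n_bins],
                     counts, out=np.zeros(n_bins), where=counts > 0)
```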
<p><strong>My question: (How) can I vectorise the loop?</strong></p>
|
<python><numpy>
|
2023-06-28 14:23:07
| 1
| 6,043
|
Tom de Geus
|
76,573,942
| 3,371,250
|
How to filter a list of objects by a list of ids?
|
<p>Let's say we have a list of objects like this:</p>
<pre><code>my_objects = [
{
"id":0,
"some_value":"a"
},
{
"id":1,
"some_value":"a"
},
{
"id":2,
"some_value":"b"
},
{
"id":3,
"some_value":"b"
},
]
</code></pre>
<p>Given a list of ids like this:</p>
<pre><code>ids = [1, 2]
</code></pre>
<p>What would be a pythonic way to retrieve a list of all the objects with the ids in this list?
e.g.:</p>
<pre><code>my_objects_filtered = [
{
"id":1,
"some_value":"a"
},
{
"id":2,
"some_value":"b"
}
]
</code></pre>
<p>What I want in the end is a list of the "some_value" value for all ids in the list "ids":</p>
<pre><code>some_values = ["a", "b"]
</code></pre>
<p>Which I could get by doing this:</p>
<pre><code>some_values = [my_object["some_value"] for my_object in my_objects_filtered]
</code></pre>
<p>But I do not know how to get <code>my_objects_filtered</code>
Thanks in advance.</p>
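<p>One pythonic sketch, assuming the ids are hashable so a set gives O(1) membership tests:</p>

```python
my_objects = [
    {"id": 0, "some_value": "a"},
    {"id": 1, "some_value": "a"},
    {"id": 2, "some_value": "b"},
    {"id": 3, "some_value": "b"},
]

ids = {1, 2}  # a set makes each membership test O(1)

my_objects_filtered = [obj for obj in my_objects if obj["id"] in ids]
some_values = [obj["some_value"] for obj in my_objects_filtered]
```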
|
<python><python-3.x><list><dictionary><list-comprehension>
|
2023-06-28 14:01:10
| 6
| 571
|
Ipsider
|
76,573,937
| 9,506,773
|
groupby parts of a signal in a pandas dataframe over a threshold
|
<p>I have a dataframe that contains the following, in a visual form:</p>
<p><a href="https://i.sstatic.net/pqDX7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pqDX7.png" alt="enter image description here" /></a></p>
<p>One signal and a set of true or false values associated with it depending on whether it is over or under a threshold. How could I <code>groupby</code> the parts of the signal that are over the threshold into separate groups? For simplicity, let's assume that I have the following data:</p>
<pre><code>TESTDATA = StringIO("""date;value;over_th
2023-05-04 10:34:51.002100665;0.4;True
2023-05-04 10:34:51.007100513;0.5;True
2023-05-04 10:34:51.012100235;0.4;True
2023-05-04 10:34:51.017100083;0.3;False
2023-05-04 10:34:51.022099789;0.2;False
2023-05-04 10:35:23.610740595;0.1;False
2023-05-04 10:35:23.615740466;0.7;True
2023-05-04 10:35:23.620740227;0.8;True
2023-05-04 10:35:23.625740082;0.1;False
2023-05-04 10:35:23.630739797;0.7;True
2023-05-04 10:35:23.631;0.2;False
2023-05-04 10:35:23.632;0.8;True
2023-05-04 10:35:23.633;0.1;False
2023-05-04 10:35:23.634;0.9;True
2023-05-04 10:35:23.635;0.2;False
2023-05-04 10:35:23.630739797;0.4;True
""")
df = pd.read_csv(TESTDATA, sep=";")
</code></pre>
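<p>A common sketch for this kind of run-grouping is to derive a group id that increments whenever <code>over_th</code> flips from <code>False</code> to <code>True</code>, then group the over-threshold rows by it (shown on a tiny hand-made frame):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "value":   [0.4, 0.5, 0.3, 0.7, 0.8, 0.1, 0.9],
    "over_th": [True, True, False, True, True, False, True],
})

# A run starts wherever over_th flips from False (or the start) to True
starts = df["over_th"] & ~df["over_th"].shift(fill_value=False)
df["group"] = starts.cumsum()

# Keep only the over-threshold rows; each contiguous run is one group
over = df[df["over_th"]]
runs = {gid: g["value"].tolist() for gid, g in over.groupby("group")}
```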
|
<python><pandas><dataframe>
|
2023-06-28 14:00:46
| 2
| 3,629
|
Mike B
|
76,573,881
| 7,281,675
|
Remove all \n between triple quotes
|
<p>I want a regex pattern that matches <strong>any number</strong> of occurrences of something within something else. For example regarding the following code:</p>
<pre><code>'''\n <html>\n <body>\n <p>Your file successfully uploaded</p>\n </body>\n </html>\n '''
</code></pre>
<p>Here, I am interested to match every <code>\n</code> between <code>'''</code>s. my desired output is as follows:</p>
<pre><code>''' <html> <body> <p>Your file successfully uploaded</p> </body> </html> '''
</code></pre>
<p>I can break the problem to find anything <code>.*</code> between triple quotes then simply replace <code>\n</code>s. But I am looking to find a better way to possibly apply at once. All of my attempts such as the following just find or substitute one of the <code>\n</code>s.</p>
<pre><code>re.findall('(?<=\'\'\').+(\\n)+[^\\n]+(?=\'\'\')', text, re.DOTALL)
</code></pre>
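<p>One way to do it in a single pass is <code>re.sub</code> with a replacement function: match each triple-quoted block non-greedily and strip the newlines inside the match only; a sketch on a small string with real newlines:</p>

```python
import re

text = "before\n'''\n  <html>\n  <body>\n  </body>\n'''\nafter"

# Match each triple-quoted block non-greedily and drop the newlines
# inside the match; newlines outside the quotes are untouched.
result = re.sub(r"'''.*?'''",
                lambda m: m.group(0).replace("\n", ""),
                text, flags=re.DOTALL)
```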
|
<python><regex>
|
2023-06-28 13:55:06
| 1
| 4,603
|
keramat
|
76,573,856
| 519,422
|
How to write efficient loops in Python and solve error: "ValueError: operands could not be broadcast together with shapes (6,) (4,)"?
|
<p>I'm trying to avoid loops as much as possible in Python because I have been told that code runs more efficiently when there are fewer loops. For the same reason, I'm also trying to use "zip" instead of nested loops for dealing with more than one iterator.</p>
<ol>
<li>Is this advice correct?</li>
<li>I have tried using the advice but can't get past this error when trying to run the code below. I have probably done something silly but can't figure it out. Could anyone please point me in the right direction?</li>
</ol>
<pre><code>ValueError: operands could not be broadcast together with shapes (6,) (4,)
</code></pre>
<pre><code>import numpy as np
import itertools
# For each value in d and e, I want to loop through all a and b.
d = np.array([1, 2, 3, 4])
e = np.array([1000, 2000, 3000, 4000])
a = np.array([10.0e+2, 20.0e+2, 30.0e+2, 40.0e+2, 50.0e+2, 60.0e+2])
b = np.array([1.0e-1, 2.0e-1, 3.0e-1, 4.0e-1, 5.0e-1, 6.0e-1])
c = np.zeros(6)
for (j, k) in zip(d, e):
c = ((b * (25.0 - a))/(8*e)) + d
</code></pre>
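<p>For reference, the shape mismatch comes from combining length-6 arrays (<code>a</code>, <code>b</code>) with length-4 arrays (<code>d</code>, <code>e</code>) in one expression; inserting axes lets broadcasting compute every (d, e) pair against all of <code>a</code> and <code>b</code> at once. A sketch, assuming that pairing is the intent:</p>

```python
import numpy as np

d = np.array([1, 2, 3, 4])
e = np.array([1000, 2000, 3000, 4000])
a = np.array([10.0e+2, 20.0e+2, 30.0e+2, 40.0e+2, 50.0e+2, 60.0e+2])
b = np.array([1.0e-1, 2.0e-1, 3.0e-1, 4.0e-1, 5.0e-1, 6.0e-1])

# Shape (4, 6): row j holds the result for the pair (d[j], e[j])
# across all six values of a and b -- no explicit loop needed.
c = (b[None, :] * (25.0 - a[None, :])) / (8 * e[:, None]) + d[:, None]
```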
|
<python><python-3.x><numpy><loops><nested-loops>
|
2023-06-28 13:52:36
| 2
| 897
|
Ant
|
76,573,550
| 13,944,524
|
Dictionary/Hashmap implementation using double hashing is stuck in an infinite loop
|
<p>I'm following this formula from <a href="https://en.wikipedia.org/wiki/Double_hashing" rel="nofollow noreferrer">wikipedia</a>:</p>
<pre class="lang-none prettyprint-override"><code>H(i, k) = (H1(k) + i*H2(k)) % size
</code></pre>
<p>and my <code>H1</code> is Python's built-in <code>hash()</code> function.</p>
<p><code>H2</code> is:</p>
<pre class="lang-none prettyprint-override"><code>PRIME - (H1(k) % PRIME)
</code></pre>
<p>Unfortunately it randomly gets stuck in an infinite loop after a couple of executions. It cannot traverse all the slots in my table.</p>
<p>Here is my code but you have to set <a href="https://docs.python.org/3/using/cmdline.html#envvar-PYTHONHASHSEED" rel="nofollow noreferrer"><code>PYTHONHASHSEED=12</code></a> in order to reproduce this bug. (I deliberately removed many details so that the implementation would be minimal)</p>
<pre class="lang-py prettyprint-override"><code>EMPTY = object()
class DoubleHashingHashMap:
def __init__(self):
self.prime = 7
self.size = 15
self.slots = [EMPTY] * self.size
def __setitem__(self, key, value):
for idx in self.probing_squence(key):
slot = self.slots[idx]
if slot is EMPTY:
self.slots[idx] = (key, value)
break
elif isinstance(slot, tuple):
k, v = slot
if k == key:
self.slots[idx] = (key, value)
break
def probing_squence(self, key):
h1 = self.hash_func1(key) % self.size
h2 = self.hash_func2(key) % self.size
i = 1
while True:
yield (h1 + i*h2) % self.size
i += 1
def hash_func1(self, item):
return hash(item)
def hash_func2(self, item):
return self.prime - (self.hash_func1(item) % self.prime)
hashmap = DoubleHashingHashMap()
for i in range(8):
hashmap[str(i)] = i
print("8 items added.")
print("Going into the infinite loop when adding 9th item(which is 8)...")
hashmap["8"] = 8
print("This line can't be reached.")
</code></pre>
<p>I would appreciate it if you could tell me what's wrong with my math.</p>
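<p>For what it's worth, the classic failure mode in double hashing is a step that shares a factor with the table size (here 15 = 3 × 5), so the probe cycle visits only a subset of slots; a minimal sketch of the coprimality guard:</p>

```python
import math

def probe_indices(h1, h2, size):
    """Yield one full probing cycle; fall back to step 1 when the step
    and the table size are not coprime (the cycle would skip slots)."""
    step = h2 % size
    if step == 0 or math.gcd(step, size) != 1:
        step = 1
    for i in range(size):
        yield (h1 + i * step) % size

# Step 6 shares the factor 3 with size 15: unguarded, the sequence
# (h1 + i*6) % 15 only ever reaches 15/3 = 5 distinct slots.
full_cycle = sorted(probe_indices(7, 6, 15))
```

<p>Making the table size prime avoids the issue for every nonzero step, which is why textbook double hashing insists on it.</p>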
|
<python><algorithm><dictionary><hashmap><hash-collision>
|
2023-06-28 13:14:45
| 1
| 17,004
|
S.B
|
76,573,531
| 7,483,211
|
When using profile, get error "snakemake: error: Couldn't parse config file: 'tuple' object has no attribute 'safe_load'"
|
<p>On snakemake 7.29.0, running <code>snakemake --profile profiles/basel-combined-cluster</code> I get the following error:</p>
<pre><code>snakemake: error: Couldn't parse config file: 'tuple' object has no attribute 'safe_load'
</code></pre>
<p>I tried <code>--verbose</code> and <code>--debug</code>, neither of which helped. What can I do to fix this?</p>
|
<python><snakemake>
|
2023-06-28 13:12:59
| 1
| 10,272
|
Cornelius Roemer
|
76,573,454
| 12,131,616
|
How to draw labels in ellipses as in a contour plot?
|
<h2><strong>Problem</strong></h2>
<p>I am trying to plot a figure similar to the following image:</p>
<p><a href="https://i.sstatic.net/rk56G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rk56G.png" alt="enter image description here" /></a></p>
<p>I use the following code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
contours = [30, 40, 50]
v = np.array([15, 18])
colours = [
(0.0, 0.26666666666666666, 0.10588235294117647, 1.0),
(0.2917647058823529, 0.6886582083813917, 0.3827758554402153, 1.0),
(0.8274509803921569, 0.9325490196078431, 0.8031372549019608, 1.0)
]
def ellipseFunction(x, v):
return np.sqrt((x[0]/v[0])**2 + (x[1]/v[1])**2)
ellipses = [Ellipse((0, 0), width= 2*v[0]*contours[l], height=2*v[1]*contours[l], ec = colours[l], fc= "none") for l in range(len(contours))]
x = np.linspace(-2500, 2500, 100)
y = np.linspace(-2500, 2500, 100)
plt.figure()
plt.pcolormesh(x, y, ellipseFunction(np.meshgrid(x, y), v).T, cmap = 'plasma', zorder = -1, vmax = 150)
plt.gca().set_aspect(1./plt.gca().get_data_ratio())
for l in range(len(contours)):
plt.gca().add_patch(ellipses[l])
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/q2O3r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q2O3r.png" alt="enter image description here" /></a></p>
<h2><strong>Question</strong></h2>
<p><strong>How can I add the ellipse labels (30, 40 and 50) in the edge of the ellipses as in the contour plot from the original image?</strong></p>
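<p>For reference, the labels in the original image are what <code>clabel</code> produces for a contour plot; a sketch of that alternative route (drawing the ellipses as contour levels of the same function instead of <code>Ellipse</code> patches):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import numpy as np
import matplotlib.pyplot as plt

contours = [30, 40, 50]
v = np.array([15, 18])

x = np.linspace(-2500, 2500, 200)
y = np.linspace(-2500, 2500, 200)
X, Y = np.meshgrid(x, y)
Z = np.sqrt((X / v[0]) ** 2 + (Y / v[1]) ** 2)

fig, ax = plt.subplots()
ax.pcolormesh(X, Y, Z, cmap="plasma", vmax=150)
# contour draws the same ellipses as level sets of Z, and clabel
# places the inline numbers (30, 40, 50) on each curve
cs = ax.contour(X, Y, Z, levels=contours, colors="white")
labels = ax.clabel(cs, inline=True, fmt="%d")
```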
|
<python><numpy><matplotlib><text><label>
|
2023-06-28 13:02:04
| 1
| 663
|
Puco4
|
76,573,426
| 11,613,489
|
Python/Selenium: Automate clicking button on a website
|
<p>I need some help here:</p>
<p>I have code, written in Python/Selenium, that performs some basic actions: it gets names on a website. That works perfectly fine.</p>
<p>The aim:</p>
<p>The thing is... I would like to automate the action for the whole page, this includes all pages defined on the site.</p>
<p>These actions need to be performed on each page out of <em>100</em> pages (within the same website).
I would like, using Python/Selenium, to automate the change of pages and do the same on <em>Page 2, Page 3</em> and so on.</p>
<p>Screenshot, "Inspect" website:
<a href="https://i.sstatic.net/APxqS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/APxqS.png" alt="Screenshot, "Inspect" website:" /></a></p>
<p>This is the part of the HTML were the buttons for pages are defined:
</p>
<pre><code> <div id="ember450" class="ember-view ">
<div id="ember472" class="artdeco-pagination artdeco-pagination--has-controls ember-view pv5 ph2"> <button disabled="" aria-label="Anterior" id="ember473" class="artdeco-pagination__button artdeco-pagination__button--previous artdeco-button artdeco-button--muted artdeco-button--1 artdeco-button--tertiary artdeco-button--disabled ember-view" type="button"> <li-icon aria-hidden="true" type="chevron-left-icon" class="artdeco-button__icon" size="small"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" class="mercado-match" data-supported-dps="16x16" fill="currentColor" width="16" height="16" focusable="false">
<path d="M11 1L6.39 8 11 15H8.61L4 8l4.61-7z"></path>
</svg></li-icon>
<span class="artdeco-button__text">
Anterior
</span></button>
<ul class="artdeco-pagination__pages artdeco-pagination__pages--number">
<li data-test-pagination-page-btn="1" id="ember474" class="artdeco-pagination__indicator artdeco-pagination__indicator--number active selected ember-view"> <button aria-current="true" aria-label="Página 1" type="button">
<span>1</span>
</button>
</li>
<li data-test-pagination-page-btn="2" id="ember475" class="artdeco-pagination__indicator artdeco-pagination__indicator--number ember-view"> <button aria-label="Página 2" type="button" data-ember-action="" data-ember-action-476="476">
<span>2</span>
</button>
</li>
<li data-test-pagination-page-btn="3" id="ember477" class="artdeco-pagination__indicator artdeco-pagination__indicator--number ember-view"> <button aria-label="Página 3" type="button" data-ember-action="" data-ember-action-478="478">
<span>3</span>
</button>
</li>
<li data-test-pagination-page-btn="4" id="ember479" class="artdeco-pagination__indicator artdeco-pagination__indicator--number ember-view"> <button aria-label="Página 4" type="button" data-ember-action="" data-ember-action-480="480">
<span>4</span>
</button>
</li>
<li data-test-pagination-page-btn="5" id="ember481" class="artdeco-pagination__indicator artdeco-pagination__indicator--number ember-view"> <button aria-label="Página 5" type="button" data-ember-action="" data-ember-action-482="482">
<span>5</span>
</button>
</li>
<li data-test-pagination-page-btn="6" id="ember483" class="artdeco-pagination__indicator artdeco-pagination__indicator--number ember-view"> <button aria-label="Página 6" type="button" data-ember-action="" data-ember-action-484="484">
<span>6</span>
</button>
</li>
<li data-test-pagination-page-btn="7" id="ember485" class="artdeco-pagination__indicator artdeco-pagination__indicator--number ember-view"> <button aria-label="Página 7" type="button" data-ember-action="" data-ember-action-486="486">
<span>7</span>
</button>
</li>
<li data-test-pagination-page-btn="8" id="ember487" class="artdeco-pagination__indicator artdeco-pagination__indicator--number ember-view"> <button aria-label="Página 8" type="button" data-ember-action="" data-ember-action-488="488">
<span>8</span>
</button>
</li>
<li id="ember489" class="artdeco-pagination__indicator artdeco-pagination__indicator--number ember-view"><button aria-label="Página 9" type="button" data-ember-action="" data-ember-action-490="490">
<span>…</span>
</button>
</li>
<li data-test-pagination-page-btn="100" id="ember491" class="artdeco-pagination__indicator artdeco-pagination__indicator--number ember-view"> <button aria-label="Página 100" type="button" data-ember-action="" data-ember-action-492="492">
<span>100</span>
</button>
</li>
</ul>
<div class="artdeco-pagination__page-state">
Página 1 de 100
</div>
<button aria-label="Siguiente" id="ember493" class="artdeco-pagination__button artdeco-pagination__button--next artdeco-button artdeco-button--muted artdeco-button--icon-right artdeco-button--1 artdeco-button--tertiary ember-view" type="button"> <li-icon aria-hidden="true" type="chevron-right-icon" class="artdeco-button__icon" size="small"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" class="mercado-match" data-supported-dps="16x16" fill="currentColor" width="16" height="16" focusable="false">
<path d="M5 15l4.61-7L5 1h2.39L12 8l-4.61 7z"></path>
</svg></li-icon>
<span class="artdeco-button__text">
Siguiente
</span></button>
</div>
</div>
</div>
</code></pre>
<p>This is what I have done so far:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
# Create an instance of the Chrome WebDriver
driver = webdriver.Chrome()
page_number = 2
driver.get("https://www.example.com/search/pagina=1")
while page_number < 100:
# Start performing actions on the current page
# ...
next_page_button_xpath = f'//button[@aria-label="Página {page_number}"]'
next_page_button = driver.find_element(By.XPATH, next_page_button_xpath)
next_page_button.click()
page_number += 1
# Close the WebDriver
driver.quit()
</code></pre>
<p>But, I am getting the following exception:</p>
<blockquote>
<p><em>no such element: Unable to locate element: {"method":"xpath","selector":"//button[@aria-label="Página 2"]"}</em>, even though the HTML that contains the button to click for page 2 is shown above.</p>
</blockquote>
<p>I have already double-checked the HTML source code several times... What am I doing wrong?</p>
|
<python><html><selenium-webdriver><webdriver>
|
2023-06-28 12:57:50
| 2
| 642
|
Lorenzo Castagno
|
76,573,254
| 3,371,250
|
How to check if a set of keys have the same value?
|
<p>Let's say we have a dict</p>
<pre><code>dict = {
"a": "x",
"b": "x",
"c": "x",
"d": "y",
"e": "y",
"f": "y",
}
</code></pre>
<p>How do I quickly check if a set of specific keys all have the same value?</p>
<p>E.g.:</p>
<pre><code>list_of_keys = ["a", "b", "c"]
</code></pre>
<p>should return True.</p>
<pre><code>list_of_keys = ["b", "c", "d"]
</code></pre>
<p>should return <code>False</code>.</p>
<p>Thanks in advance.</p>
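<p>A short sketch using a set comprehension: collect the distinct values for the given keys and check that there is at most one:</p>

```python
def keys_share_value(d, keys):
    # Collect the distinct values for the given keys; 0 or 1 distinct
    # value means they all agree (an empty key list counts as True).
    return len({d[k] for k in keys}) <= 1

data = {"a": "x", "b": "x", "c": "x", "d": "y", "e": "y", "f": "y"}
```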
|
<python><dictionary><list-comprehension>
|
2023-06-28 12:37:00
| 5
| 571
|
Ipsider
|
76,573,169
| 8,973,620
|
How to find out the maximum length of a dimension in a ragged dataset
|
<p>If I have the following dataset built from a ragged tensor, how can I get the maximum length (4 in this example) of all elements?</p>
<pre><code>ds = tf.data.Dataset.from_tensor_slices(
tf.ragged.constant([[1, 2, 3, 4], [], [5, 6, 7], [8], []]))
</code></pre>
|
<python><tensorflow><keras><tf.dataset>
|
2023-06-28 12:26:00
| 1
| 18,110
|
Mykola Zotko
|
76,573,128
| 17,718,587
|
Python version is being overridden
|
<p>I tried to install Python 3.10/3.11 on my Windows 10 machine. It installs without any issues, but when I try to execute code from the terminal, it uses version 3.6.6. (Even though version 3.6.6 is not listed in my Python versions.)</p>
<pre class="lang-none prettyprint-override"><code>C:\Windows\system32>python --version
Python 3.6.6
C:\Windows\system32>py --list
-V:3.11 * Python 3.11 (64-bit)
-V:3.10 Python 3.10 (64-bit)
-V:3.9 Python 3.9 (64-bit)
-V:3.8 Python 3.8 (64-bit)
</code></pre>
<p>I tried editing my environment variables, and it looks just fine for version 3.11 (<strong>user</strong> <code>Path</code>):</p>
<pre class="lang-none prettyprint-override"><code>C:\Users\chenb\AppData\Local\Programs\Python\Python311\Scripts\
C:\Users\chenb\AppData\Local\Programs\Python\Python311\
C:\Program Files\MySQL\MySQL Shell 8.0\bin\
C:\Users\chenb\AppData\Local\Microsoft\WindowsApps
C:\Users\chenb\AppData\Local\Programs\Microsoft VS Code\bin
C:\Program Files\Blace
D:\Blace
C:\Program Files\JetBeans\PyCharm 2021.3.3\bin
C:\Program Files\JetBeans\PyCharm Community Edition 2022.1\bin
D:\FFmpeg\bin
C:\Users\chenb\Desktop\FFmpeg\bin
C:\Program Files\PsynchoPy
C:\Users\chenb\Desktop\Coding\swigwin-4.1.1
C:\Users\chenb\anaconda3\Scripts
%IntelliJ IDEA Community Edition%
A:\src\flutter\bin
A:\PsychoPy
A:\PsychoPy\DLLs
C:\Users\chenb\AppData\Roaming\npm
</code></pre>
<p>When searching for Python 3.6 on my computer, I found a program called 'PsychoPy' that uses Python 3.6 (which I need):</p>
<p>Directory <code>C:\Program Files\PsychoPy3</code> as shown in <em>Windows File Explorer</em>:</p>
<pre class="lang-none prettyprint-override"><code>Name Date modified Type Size
------------------------------------------------------------------
DLLs 06/04/2023 20:58 File folder
include 06/04/2023 21:00 File folder
Lib 06/04/2023 21:00 File folder
libs 06/04/2023 21:00 File folder
man 06/04/2023 21:00 File folder
MinGit 06/04/2023 21:00 File folder
Scripts 06/04/2023 21:00 File folder
share 06/04/2023 21:00 File folder
tcl 06/04/2023 21:00 File folder
Tools 06/04/2023 21:00 File folder
CHANGEOG.txt 20/11/2020 20:15 Text Source File 158 KB
LICENSE.txt 17/05/2018 16:35 Text Source File 1 KB
LICENSES.txt 18/12/2014 17:53 Text Source File 4 KB
NEWS.txt 27/06/2018 5:43 Text Source File 395 KB
PsychoPy3 06/04/2023 21:01 Internet Shortcut 1 KB
python.exe 27/06/2018 5:40 Application 99 KB
python3.dll 27/06/2018 5:37 Application exten... 58 KB
python36.dll 27/06/2018 5:37 Application exten... 3,527 KB
python64.exe 27/06/2018 5:40 Application 99 KB
pythonw.exe 27/06/2018 5:40 Application 97 KB
uinst.exe 06/04/2023 21:01 Application 62 KB
</code></pre>
<p>Is it possible that the software is overriding my Python version?<br />
Any idea how can I solve this issue?</p>
|
<python><windows><cmd><interpreter>
|
2023-06-28 12:20:44
| 2
| 2,772
|
ChenBr
|
76,573,085
| 8,458,083
|
"nix run" works but "nix build" doesn't
|
<pre><code>{
description = "virtual environment with python and streamlit";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
inputs.flake-utils.url = "github:numtide/flake-utils";
outputs = { self, nixpkgs, flake-utils }:
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = nixpkgs.legacyPackages.${system};
python=pkgs.python311;
f = ps: with ps;[
ipython
matplotlib
pandas
];
pip_python_packages= python.withPackages(f);
myDevTools = [
pip_python_packages
pkgs.streamlit
];
outputName = builtins.attrNames self.outputs self.outputs;
in {
devShells.default = pkgs.mkShell {
buildInputs = myDevTools;
};
packages.default = pkgs.poetry2nix.mkPoetryApplication {
projectDir = self;
};
apps.default = {
program = "${python}/bin/python";
args = [ "main.py" ];
src = "./.";
type = "app";
};
});
}
</code></pre>
<p>The command <code>nix run</code> "works". Not as intended, since it only opens the Python interpreter, but that is another question.</p>
<p>But the command <code>nix build</code> doesn't work:</p>
<blockquote>
<p>error:
… while evaluating the attribute 'pkgs.buildPythonPackage'</p>
<pre><code> at /nix/store/s1z7nb9n6r5n0r34fabp6yybwkbr8mjk-source/pkgs/development/interpreters/python/passthrufun.nix:87:5:
86| withPackages = import ./with-packages.nix { inherit buildEnv pythonPackages;};
87| pkgs = pythonPackages;
| ^
88| interpreter = "${self}/bin/${executable}";
… while calling the 'mapAttrs' builtin
at /nix/store/s1z7nb9n6r5n0r34fabp6yybwkbr8mjk-source/pkgs/development/interpreters/python/passthrufun.nix:31:8:
30| value;
31| in lib.mapAttrs func items;
| ^
32| in ensurePythonModules (callPackage
(stack trace truncated; use '--show-trace' to show the full trace)
error: getting status of '/nix/store/ggvg85rp5qzyr9bngsl6r0pcrkyxqa49-source/poetry.lock': No such file or directory
</code></pre>
</blockquote>
<p>This error message doesn't refer to any part of my code, which makes it difficult to know which part causes the problem.</p>
|
<python><nix><nix-flake>
|
2023-06-28 12:14:59
| 1
| 2,017
|
Pierre-olivier Gendraud
|
76,573,052
| 2,107,488
|
Unable to run python code in Zeppelin on Windows Server
|
<p>I could configure Zeppelin on my windows server 2019 and successfully start it.</p>
<p>My environment variable is Configured as follows:</p>
<pre><code>HADOOP_HOME C:\hadoop\
JAVA_HOME C:\Programm Files\Zulu\zulu-8\jre\
SPARK_HOME C:\Saprk\spark-3.3.0-bin-hadoop3\
</code></pre>
<p>But I would like to run a very simple piece of Python code in my Zeppelin notebook. And I have set up the Python interpreter like this:
<a href="https://i.sstatic.net/qDoD0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qDoD0.png" alt="enter image description here" /></a></p>
<p>So now if I execute the following code:</p>
<pre><code>%python
print("Hello World")
</code></pre>
<p>I get the following error message:</p>
<pre><code>org.apache.zeppelin.interpreter.InterpreterException: java.io.IOException: Fail to launch interpreter process:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/zeppelin-0.10.1-bin-netinst/interpreter/python/python-interpreter-with-py4j-0.10.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/zeppelin-0.10.1-bin-netinst/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 2
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.main(RemoteInterpreterServer.java:305)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:129)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:271)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:438)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:69)
at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:132)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:182)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: Fail to launch interpreter process:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/zeppelin-0.10.1-bin-netinst/interpreter/python/python-interpreter-with-py4j-0.10.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/zeppelin-0.10.1-bin-netinst/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 2
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.main(RemoteInterpreterServer.java:305)
at org.apache.zeppelin.interpreter.remote.ExecRemoteInterpreterProcess.start(ExecRemoteInterpreterProcess.java:97)
at org.apache.zeppelin.interpreter.ManagedInterpreterGroup.getOrCreateInterpreterProcess(ManagedInterpreterGroup.java:68)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getOrCreateInterpreterProcess(RemoteInterpreter.java:104)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.internal_create(RemoteInterpreter.java:154)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:126)
... 13 more
Took 1 minute. Last updated by anonymous at June 27 2023, 2:10:45 PM.
ERROR
%sh
echo 1
org.apache.zeppelin.interpreter.InterpreterException: java.io.IOException: Fail to launch interpreter process:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/zeppelin-0.10.1-bin-netinst/interpreter/sh/zeppelin-shell-0.10.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/zeppelin-0.10.1-bin-netinst/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 2
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.main(RemoteInterpreterServer.java:305)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:129)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:271)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:438)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:69)
at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:132)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:182)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: Fail to launch interpreter process:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/zeppelin-0.10.1-bin-netinst/interpreter/sh/zeppelin-shell-0.10.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/zeppelin-0.10.1-bin-netinst/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 2
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.main(RemoteInterpreterServer.java:305)
at org.apache.zeppelin.interpreter.remote.ExecRemoteInterpreterProcess.start(ExecRemoteInterpreterProcess.java:97)
at org.apache.zeppelin.interpreter.ManagedInterpreterGroup.getOrCreateInterpreterProcess(ManagedInterpreterGroup.java:68)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getOrCreateInterpreterProcess(RemoteInterpreter.java:104)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.internal_create(RemoteInterpreter.java:154)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:126)
... 13 more
</code></pre>
<p>I found this <a href="https://issues.apache.org/jira/browse/ZEPPELIN-5508" rel="nofollow noreferrer">post</a> that someone has the same problem with Java interpreter and could solve the problem, but the solution doesn't work for Python.</p>
<p>Does anyone have an idea how I can solve this problem?</p>
|
<python><apache-zeppelin>
|
2023-06-28 12:11:24
| 1
| 3,087
|
Kaja
|
76,572,966
| 5,302,323
|
Exporting only output of specific cells of jupyter notebook into PDF
|
<p>I would like to only export to PDF the output - not the actual code - of specific cells that I have in my open Jupyter Notebook kernel.</p>
<p>I already ran the code, so when I open the notebook, the output is already there. I simply want to extract the output of 5 different cells (out of around 20) into a PDF to make it easier to share.</p>
<p>I tried LaTeX and nbconvert with Jupyter Notebook tags on the cells whose output I want to extract (tag = "pdf-extract"), and also pip-installing hide_code, but honestly nothing works.</p>
<p>I also tried the HTML snippet below, but then I cannot go back: I have to restart the whole kernel to see the code again, losing all the output too :-(</p>
<pre><code>%%html
<style>
div.input {
display:none;
}
</style>
</code></pre>
<p>I've also tried Mercury, but I get the 'Waiting for worker...' message after trying to load my ipynb.</p>
<p>Seems like such a simple request, but seriously hard to do for me.</p>
<p>I would like the output to be in PDF format and for the code not to be run again. The output is already in the kernel, all I need is to extract it into a PDF.</p>
<p>Any help would be really appreciated!</p>
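<p>For reference, recent nbconvert (>= 6.0) can hide code cells at export time with the --no-input flag; it reuses the stored outputs and does not re-execute the notebook unless --execute is passed. A sketch (the notebook name and the "remove" tag are placeholders):</p>

```shell
# Export only the stored outputs, hiding all code cells:
jupyter nbconvert --to pdf --no-input notebook.ipynb

# To drop entire cells, tag them (e.g. "remove") in the notebook UI
# and strip them during export:
jupyter nbconvert --to pdf --no-input \
    --TagRemovePreprocessor.enabled=True \
    --TagRemovePreprocessor.remove_cell_tags='{"remove"}' \
    notebook.ipynb
```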
|
<python><jupyter>
|
2023-06-28 11:59:59
| 0
| 365
|
Cla Rosie
|
76,572,850
| 15,452,168
|
Calculating multiple colors and duplicate colors in a pivot table in pandas
|
<p>I have a DataFrame df that contains information about orders, including the 'Webshop_Order', 'Category', 'Level', 'Class', 'USIM', 'Size', 'Color', 'Length', 'Demand_Qty', and 'Return_Qty' columns.</p>
<p>I want to create a pivot table that includes the number of orders for each size and color combination per USIM and Webshop_Order.</p>
<p>I would like to calculate some summary columns that I am able to do:</p>
<p>'multiple_sizes_in_transaction'
'duplicate_sizes_in_transaction'</p>
<p>Here's my current code:</p>
<pre><code># ...
# Pivot to create table of USIM/ORDER with the number of orders of each size
sizes_per_order = df.pivot_table(
index=['USIM', 'Webshop_Order'],
columns='Size',
values='Demand_Qty',
aggfunc='sum',
fill_value=0,
)
sizes_per_order = sizes_per_order.assign(
multiple_sizes_in_transaction=sizes_per_order.gt(0).sum(axis=1).gt(1),
duplicate_sizes_in_transaction=sizes_per_order.gt(1).any(axis=1),
)
# ...
</code></pre>
<p><a href="https://i.sstatic.net/bplYI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bplYI.png" alt="enter image description here" /></a>
However, I also want to include the 'Color' parameter in the pivot table and calculate the number of orders with</p>
<ol>
<li>multiple colors</li>
<li>duplicate colors.</li>
</ol>
<p>so, I decided to update the code</p>
<pre><code># Pivot to create table of USIM/ORDER with the number of orders of each size and color
sizes_per_order = df.pivot_table(
index=['USIM', 'Webshop_Order'],
columns=['Size', 'Color'],
values='Demand_Qty',
aggfunc='sum',
fill_value=0,
)
sizes_per_order = sizes_per_order.assign(
multiple_sizes_in_transaction=sizes_per_order.gt(0).sum(axis=1).gt(1),
multiple_colors_in_transaction=sizes_per_order.astype(bool).sum(axis=1, level='Color').gt(1).all(axis=1),
duplicate_sizes_in_transaction=sizes_per_order.gt(1).any(axis=1),
duplicate_colors_in_transaction=sizes_per_order.astype(bool).sum(axis=1, level='Color').gt(1).any(axis=1)
)
sizes_per_order
</code></pre>
<p><a href="https://i.sstatic.net/vJZu4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vJZu4.png" alt="enter image description here" /></a>
but the output is not visually good; it is hard to read because of the large number of column values.</p>
<p>Could you please guide me on how to modify the code to achieve a nice representation of this?</p>
<p>Thank you in advance for your help!</p>
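<p>One way to keep this readable (a minimal sketch with made-up data; the transpose/groupby idiom replaces the deprecated sum(axis=1, level=...) call) is to collapse the (Size, Color) pivot to per-colour counts and keep the boolean flags in a small separate summary frame:</p>

```python
import pandas as pd

# Hypothetical mini version of the question's data.
df = pd.DataFrame({
    "USIM": [1, 1, 2, 2],
    "Webshop_Order": [10, 10, 20, 21],
    "Size": [34, 36, 34, 34],
    "Color": ["red", "blue", "red", "red"],
    "Demand_Qty": [1, 1, 2, 1],
})

pivot = df.pivot_table(
    index=["USIM", "Webshop_Order"],
    columns=["Size", "Color"],
    values="Demand_Qty",
    aggfunc="sum",
    fill_value=0,
)

# Collapse the (Size, Color) columns to per-colour demand counts.
per_color = pivot.T.groupby(level="Color").sum().T

# Keep the flags in a compact frame instead of widening the pivot further.
summary = pd.DataFrame({
    "multiple_colors_in_transaction": per_color.gt(0).sum(axis=1).gt(1),
    "duplicate_colors_in_transaction": per_color.gt(1).any(axis=1),
})
print(summary)
```

<p>Joining summary back onto the size pivot with pd.concat keeps the wide size/colour matrix and the human-readable flags visually separate.</p>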
|
<python><pandas><numpy><pivot><pivot-table>
|
2023-06-28 11:44:54
| 1
| 570
|
sdave
|
76,572,824
| 13,849,446
|
Why do I get a "No matching distribution found" error when installing a package I just uploaded to pypi?
|
<p>I am trying to upload my package to PyPi. It uploads successfully every time but when I try to install it, I get the following error:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement Google-Ads-Transparency-Scraper==1.4 (from versions: none)
ERROR: No matching distribution found for Google-Ads-Transparency-Scraper==1.4
</code></pre>
<p>The following is the directory structure I have</p>
<blockquote>
<ul>
<li>GoogleAds
<ul>
<li>GoogleAdsTransparency
<ul>
<li>__init__.py</li>
<li>main.py</li>
<li>regions.py</li>
</ul>
</li>
<li>setup.py</li>
<li>setup.cfg</li>
<li>license.txt</li>
<li>README.md</li>
</ul>
</li>
</ul>
</blockquote>
<p>The setup.py has the following content</p>
<pre><code>"""Install packages as defined in this file into the Python environment."""
from setuptools import setup, find_packages
setup(
name="Google Ads Transparency Scraper",
author="Farhan Ahmed",
author_email="jattfarhan10@gmail.com",
url="https://github.com/faniAhmed/GoogleAdsTransparencyScraper",
description="A scraper for getting Ads from Google Transparency",
version="1.4",
packages=find_packages(),
download_url= 'https://github.com/faniAhmed/GoogleAdsTransparencyScraper/archive/refs/tags/v1.2.tar.gz',
keywords= ['Google', 'Transparency', 'Scraper', 'API', 'Google Ads', 'Ads', 'Google Transparency', 'Google Transparency Scraper', 'Google Ads Scrapre'],
license='Securely Incorporation',
install_requires=[
"setuptools>=45.0",
"beautifulsoup4>=4.12.2",
"Requests>=2.31.0",
"lxml>=4.6.3",
],
classifiers=[
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: Free for non-commercial use",
"Natural Language :: Urdu",
"Operating System :: OS Independent",
"Programming Language :: Python",
],
platforms=["any"],
)
</code></pre>
<p>The setup.cfg</p>
<pre><code>[metadata]
description-file = README.md
</code></pre>
<p>The <code>__init__.py</code> has the following content:</p>
<pre><code>from GoogleAdsTransparency.main import GoogleAds
from GoogleAdsTransparency.main import show_regions_list
</code></pre>
<p>I run the following commands to upload the package</p>
<pre><code>python setup.py sdist
twine upload dist/*
</code></pre>
<p>The link to the package is
<a href="https://pypi.org/project/Google-Ads-Transparency-Scraper/" rel="nofollow noreferrer">https://pypi.org/project/Google-Ads-Transparency-Scraper/</a></p>
|
<python><setuptools><pypi><setup.py><python-packaging>
|
2023-06-28 11:41:03
| 1
| 1,146
|
farhan jatt
|
76,572,527
| 13,955,154
|
Handling function parameters with blueprints
|
<p>I have this method:</p>
<pre><code>def extract_and_segmentate(input_folder, input_folder):
extract_txt_from_pdf(input_folder, output_folder)
extract_txt_from_pptx(input_folder, output_folder)
segmentate(output_folder, output_folder, 35)
</code></pre>
<p>And I want to wrap it inside a blueprint like:</p>
<pre><code>@extraction.route('/extract', methods=['GET'])
def extract_and_segmentate():
</code></pre>
<p>I know that usually methods in blueprints don't have parameters, so how can I handle my situation?</p>
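<p>A common pattern (a sketch; the folder defaults and the route body are placeholders) is to read the former function parameters from the query string via request.args inside the view function, and delegate to the real helpers:</p>

```python
from flask import Blueprint, Flask, jsonify, request

extraction = Blueprint("extraction", __name__)

@extraction.route("/extract", methods=["GET"])
def extract_and_segmentate():
    # The former function parameters arrive in the query string, e.g.
    # GET /extract?input_folder=docs&output_folder=out
    input_folder = request.args.get("input_folder", "input")
    output_folder = request.args.get("output_folder", "output")
    # ... call the real extraction/segmentation helpers here ...
    return jsonify(input_folder=input_folder, output_folder=output_folder)

app = Flask(__name__)
app.register_blueprint(extraction)

# Exercise the route without running a server:
with app.test_client() as client:
    payload = client.get("/extract?input_folder=docs&output_folder=out").get_json()
print(payload)  # {'input_folder': 'docs', 'output_folder': 'out'}
```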
|
<python><flask><blueprint>
|
2023-06-28 11:01:17
| 1
| 720
|
Lorenzo Cutrupi
|
76,572,408
| 2,443,944
|
Python cap number of occurences of duplicate items in a list
|
<p>I'm looking for a function to limit the number of duplicates in a list. For example in the list below, we have three <code>'1'</code>s, one <code>'2'</code>, two <code>'a'</code>s, and three <code>'b'</code>s.</p>
<pre><code>initial_list = [1,2,1,1,'a','b','a', 'b','b']
def cap(initial_list, n=2):
</code></pre>
<p>If I was to limit the number of duplicates to <code>n=2</code>, I would get:</p>
<pre><code>[1,2,1,'a','b','a', 'b']
</code></pre>
<p>The order of the items in the output does not matter.</p>
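<p>For reference, one straightforward order-preserving way to do this (a sketch using collections.Counter) is:</p>

```python
from collections import Counter

def cap(initial_list, n=2):
    """Keep at most n occurrences of each value, preserving first-seen order."""
    seen = Counter()
    result = []
    for item in initial_list:
        if seen[item] < n:
            seen[item] += 1
            result.append(item)
    return result

print(cap([1, 2, 1, 1, 'a', 'b', 'a', 'b', 'b']))  # [1, 2, 1, 'a', 'b', 'a', 'b']
```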
|
<python><python-3.x><list>
|
2023-06-28 10:44:12
| 3
| 2,227
|
piccolo
|
76,572,323
| 133,374
|
Combine `Protocol` with base class requirement, type intersection
|
<p>I want to express some type which inherits from <code>nn.Module</code>. I would do so as follows:</p>
<pre class="lang-py prettyprint-override"><code>ModuleType = TypeVar("ModuleType", bound=nn.Module)
</code></pre>
<p>At the same time, I want to express that it has an <code>__init__</code> function of a certain format. I would do so using <code>Protocol</code>, as follows:</p>
<pre class="lang-py prettyprint-override"><code>class InitWithCfgProtocol(Protocol):
def __init__(self, cfg: dict):
...
</code></pre>
<p>Now, how can I express a type which has both properties? Basically an intersection of <code>ModuleType</code> and <code>InitWithCfgProtocol</code>. Is this possible?</p>
<p>Further, how can this again be expressed as a <code>TypeVar</code>, such that I can use it as a template argument for a <code>Generic</code>? Like this:</p>
<pre class="lang-py prettyprint-override"><code>ModuleInitWithCfgType = TypeVar(...) # ?
@dataclass
class ModuleFactory(Generic[ModuleInitWithCfgType]):
module_class: Type[ModuleInitWithCfgType]
def make(self, cfg: dict) -> ModuleInitWithCfgType:
return self.module_class(cfg)
</code></pre>
<p>In this example, I want to have proper checks (via PyCharm, mypy or similar) for:</p>
<ul>
<li><code>self.module_class(cfg)</code> - it should warn when I pass wrong arguments</li>
<li><code>factory = ModuleFactory(MyModule)</code> - it should warn when I pass a wrong <code>MyModule</code> class</li>
</ul>
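<p>As a side note, Python's type system has no general intersection type, but for this factory use case specifically, one known workaround (sketched here with a plain stand-in base class instead of nn.Module, so the snippet has no torch dependency) is to type the stored class as Callable[[dict], M] with M bound to the base class: the Callable part makes checkers verify the constructor signature, while the bound keeps the inheritance requirement.</p>

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

class Module:
    """Stand-in base class (nn.Module in the question)."""

M = TypeVar("M", bound=Module)

@dataclass
class ModuleFactory(Generic[M]):
    # Callable[[dict], M] lets checkers flag wrong constructor arguments;
    # bound=Module flags classes that don't inherit from the base.
    module_class: Callable[[dict], M]

    def make(self, cfg: dict) -> M:
        return self.module_class(cfg)

class MyModule(Module):
    def __init__(self, cfg: dict) -> None:
        self.cfg = cfg

factory = ModuleFactory(MyModule)
module = factory.make({"hidden": 16})
print(type(module).__name__)  # MyModule
```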
|
<python><python-typing>
|
2023-06-28 10:32:03
| 2
| 68,916
|
Albert
|
76,571,965
| 15,452,168
|
How to create interactive checkboxes for filtering a DataFrame in Jupyter Notebook?
|
<p>I am working on a Jupyter Notebook project and I have a DataFrame with the following structure:</p>
<p><strong>Info</strong></p>
<pre><code><class 'pandas.core.frame.DataFrame'>
MultiIndex: 6936 entries, (2199603, 1357456995) to (2200982, 1357808973)
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 34 6936 non-null int64
1 36 6936 non-null int64
2 38 6936 non-null int64
3 40 6936 non-null int64
4 42 6936 non-null int64
5 44 6936 non-null int64
6 46 6936 non-null int64
7 48 6936 non-null int64
8 multiple_sizes_in_transaction 6936 non-null bool
9 duplicate_sizes_in_transaction 6936 non-null bool
dtypes: bool(2), int64(8)
</code></pre>
<p>I want to create an interactive filtering mechanism using checkboxes for the columns multiple_sizes_in_transaction and duplicate_sizes_in_transaction. The checkboxes should allow me to filter the DataFrame based on the selected values (True or False).</p>
<p>I tried using the interact function from the ipywidgets library as follows:</p>
<p><strong>Test Code</strong></p>
<pre><code>import pandas as pd
from ipywidgets import interact, Checkbox
# Load the data
df = df
@interact(multiple_sizes=Checkbox(value=False), duplicate_sizes=Checkbox(value=False))
def filter_data(multiple_sizes, duplicate_sizes):
filtered_df = df[(df['multiple_sizes_in_transaction'] == multiple_sizes) &
(df['duplicate_sizes_in_transaction'] == duplicate_sizes)]
display(filtered_df)
</code></pre>
<p><strong>Test Data</strong></p>
<pre><code>import pandas as pd
data = {
'USIM': [2199603, 2199603, 2199603, 2199603, 1357459, 1357459, 1357459, 1357459, 2200982, 2200982, 2200982, 2200982, 2200982],
'WEBSHOP_ORDER': [1357456995, 1357456996, 1357456997, 1357456998, 1357459079, 1357460517, 1357471294, 1357472723, 1357807067, 1357807855, 1357808382, 1357808849, 1357808973],
'34': [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0],
'36': [0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0],
'38': [0, 0, 1, 0, 0, 2, 0, 0, 0, 0, 0, 1, 0],
'40': [0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0],
'42': [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
'44': [1, 0, 1, 0, 1, 0, 0, 2, 0, 0, 0, 0, 1],
'46': [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
'48': [0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0],
'multiple_sizes_in_transaction': [False, False, True, False, True, True, True, False, False, False, False, True, False],
'duplicate_sizes_in_transaction': [False, False, False, False, False, True, False, True, False, False, False, False, False]
}
df = pd.DataFrame(data)
# Set the index
df.set_index(['USIM', 'WEBSHOP_ORDER'], inplace=True)
# Display the DataFrame
display(df)
</code></pre>
<p><strong>Dataframe</strong></p>
<pre><code> SIZE 34 36 38 40 42 44 46 48 multiple_sizes_in_transaction duplicate_sizes_in_transaction
USIM WEBSHOP_ORDER
2199603 1357456995 0 0 0 0 0 1 0 0 False False
1357456996 0 0 0 1 0 0 0 0 False False
1357456997 0 0 1 0 0 1 0 0 True False
1357456998 0 1 0 0 0 0 0 0 False False
1357459 1357459079 0 0 0 1 0 1 1 0 True False
1357460517 1 0 2 0 0 0 0 1 True True
1357471294 0 1 0 1 0 0 0 0 True False
1357472723 0 0 0 0 0 2 0 1 False True
2200982 1357807067 0 0 0 1 0 0 0 0 False False
1357807855 0 0 0 0 1 0 0 0 False False
1357808382 0 0 0 0 0 0 0 1 False False
1357808849 1 0 1 0 0 0 0 0 True False
1357808973 0 0 0 0 0 1 0 0 False False
</code></pre>
<p><a href="https://i.sstatic.net/Ky2bh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ky2bh.png" alt="enter image description here" /></a></p>
<p>However, the checkboxes are not displayed as expected: the output is simply empty.
How can I create interactive checkboxes for filtering my DataFrame in Jupyter Notebook?</p>
|
<python><pandas><jupyter-notebook><interactive><ipywidgets>
|
2023-06-28 09:48:06
| 2
| 570
|
sdave
|
76,571,937
| 1,451,479
|
Apply delta_E_CMC between to groups of pixels efficiently
|
<p>I want to apply <a href="https://colour.readthedocs.io/en/develop/generated/colour.difference.delta_E_CMC.html" rel="nofollow noreferrer">delta_E_CMC</a> between two groups of LAB pixels. One group is the image itself; the other is a subset of pixels from another image.</p>
<p>I'm using Python, and my main concern is performance, so I'm looking for <strong>efficient</strong> ways to calculate delta_E_CMC between all the pixels in image A and group B, a subset of pixels from a different image.</p>
|
<python>
|
2023-06-28 09:43:57
| 2
| 4,112
|
Aviel Fedida
|
76,571,812
| 4,502,950
|
Multiclass text classification using hugging face models
|
<p>I am trying to do sentiment analysis on customer feedback, and for that I am using Hugging Face models (required). The issue is that all the responses I am getting are either positive or negative; I haven't gotten a single neutral response.</p>
<p>This is the code I am using:</p>
<pre><code>import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import numpy as np
# Example DataFrame
df = pd.DataFrame({'text': ['This movie is great!','neutral','Happy this movie!' ,'I feel bored.', 'The weather is nice.',np.nan]})
# Function to predict sentiment
def predict_sentiment(text):
# Load tokenizer and model
if pd.isna(text):
return 'N/A' # Return a default value for NaN
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("textattack/bert-base-uncased-imdb")
tokens = tokenizer.encode_plus(text, padding=True, truncation=True, return_tensors="pt")
outputs = model(**tokens)
predicted_class = outputs.logits.argmax().item()
sentiment_classes = ['negative','positive', 'neutral']
predicted_sentiment = sentiment_classes[predicted_class]
return predicted_sentiment
# Apply sentiment prediction on DataFrame column
df['predicted_sentiment'] = df['text'].apply(predict_sentiment)
text predicted_sentiment
0 This movie is great! positive
1 neutral positive
2 Happy this movie! positive
3 I feel bored. negative
4 The weather is nice. positive
5 NaN N/A
</code></pre>
<p>Now, if I switch the labels like this ['negative','neutral','positive'], I only get these results:</p>
<pre><code> text predicted_sentiment
0 This movie is great! neutral
1 neutral neutral
2 Happy this movie! neutral
3 I feel bored. negative
4 The weather is nice. neutral
5 NaN N/A
</code></pre>
<p>whereas the results should be</p>
<pre><code> text predicted_sentiment
0 This movie is great! positive
1 neutral neutral
2 Happy this movie! positive
3 I feel bored. negative
4 The weather is nice. positive
5 NaN N/A
</code></pre>
|
<python><nlp><huggingface-transformers><huggingface>
|
2023-06-28 09:28:04
| 2
| 693
|
hyeri
|
76,571,780
| 4,011,460
|
python kubernetes client: How to update crd subresource status
|
<p>I'm trying to update the "status" subresource of a CRD with the Python Kubernetes client. The custom resource API does not seem to provide functionality for this. Any ideas?</p>
|
<python><kubernetes>
|
2023-06-28 09:24:29
| 1
| 1,035
|
Julian Re
|
76,571,752
| 574,633
|
Impossible to click Instagram "Change password" button with selenium
|
<p>On instagram you can use the Account center to change your password:</p>
<p><a href="https://i.sstatic.net/owyiB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/owyiB.png" alt="enter image description here" /></a></p>
<p>From there you can choose change password option to change your password in this screen:</p>
<p><a href="https://i.sstatic.net/c1CKk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c1CKk.png" alt="enter image description here" /></a></p>
<p>I'm trying to use Selenium to fill the input boxes and click the "Change password" button (it's not an actual <code>button</code> element), with no success.</p>
<p>I can fill the inputs without any issue, but for the button I have tried two different approaches with the same result:</p>
<p>1 - Selecting the span that contains the text "Change password" and then clicking it. This closes the change password dialog but no password is changed</p>
<p>2 - Moving to the span (have tried some parents too) and then clicking it. Same result as before.</p>
<p>Here the code:</p>
<pre><code> print("Filling change password data")
currentPassowrd= driver.find_element(By.XPATH, "//label[contains(text(),'Current password')]/preceding-sibling::input")
newPassowrd = driver.find_element(By.XPATH, "//label[text()='New password']/preceding-sibling::input")
reNewPassowrd = driver.find_element(By.XPATH, "//label[text()='Re-type new password']/preceding-sibling::input")
changePasswordButton = driver.find_element(By.XPATH, "//span[text()='Change password']")
ActionChains(driver) \
.move_to_element(currentPassowrd).click() \
.send_keys("old pass") \
.move_to_element(newPassowrd).click() \
.send_keys("new pass") \
.move_to_element(reNewPassowrd).click() \
.send_keys("re new pass") \
.perform()
time.sleep(3);
ActionChains(driver).click(changePasswordButton).perform()
</code></pre>
|
<python><selenium-webdriver><selenium-chromedriver><instagram>
|
2023-06-28 09:20:56
| 2
| 6,366
|
Notbad
|
76,571,540
| 4,847,250
|
How do I add FigureCanvasQTAgg in a pyqt layout?
|
<p>I have an issue when I add a matplotlib figure into a pyqt5 environment.</p>
<p>I get this error :</p>
<pre><code> File "C:\Users\maxime\Desktop\SESAME\PycharmProjects\neocom\di.py", line 37, in __init__
layout.addWidget(self.canvas)
TypeError: addWidget(self, a0: QWidget, stretch: int = 0, alignment: Union[Qt.Alignment, Qt.AlignmentFlag] = Qt.Alignment()): argument 1 has unexpected type 'FigureCanvasQTAgg'
</code></pre>
<p>I don't understand this error because every post I have seen does the same thing to add the figure.
Where did I go wrong?
It seems like FigureCanvasQTAgg is not a widget, but it should be, right?</p>
<pre><code>from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
from PyQt5.QtCore import *
import sys
import matplotlib.pyplot as plt
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
class window(QMainWindow):
def __init__(self, parent=None):
super(window, self).__init__()
self.parent = parent
self.centralWidget = QWidget()
self.setCentralWidget(self.centralWidget)
self.mainHBOX_param_scene = QHBoxLayout()
V1 = Viewer()
self.mainHBOX_param_scene.addWidget(V1)
self.centralWidget.setLayout(self.mainHBOX_param_scene)
class Viewer(QGraphicsView):
def __init__(self, parent=None):
super( Viewer, self).__init__(parent)
self.parent = parent
self.scene = QGraphicsScene(self)
self.setScene(self.scene)
self.figure = plt.figure()
self.canvas = FigureCanvas(self.figure)
self.axes_Delay = self.figure.add_subplot(1, 1,1)
self.axes_Delay.set_title("Title")
# self.canvas.setGeometry(0, 0, 1600, 500 )
layout = QVBoxLayout()
layout.addWidget(self.canvas)
self.setLayout(layout)
self.canvas.show()
def main():
app = QApplication(sys.argv)
ex = window(app)
ex.show()
sys.exit(app.exec_( ))
if __name__ == '__main__':
main()
</code></pre>
|
<python><matplotlib><pyqt5>
|
2023-06-28 08:51:10
| 1
| 5,207
|
ymmx
|
76,571,527
| 861,164
|
Does the change of the leaf_size parameter affect memory usage of sklearn.cluster.dbscan?
|
<p>I managed to run out of memory while running a sklearn.cluster.dbscan clustering on a large data set (<code>sklearn.__version__</code> 1.1.2).</p>
<p>here is an example call</p>
<pre><code>dbscan(xy, eps=40, min_samples=10, algorithm='kd_tree', leaf_size=100)
</code></pre>
<p>where <code>xy</code> is a numpy array of shape (180538, 2). The original data has shape (579990, 2).</p>
<p>I've tried to set the algorithm explicitly ('ball_tree' and 'kd_tree'), which didn't make any difference.</p>
<p>I also varied the 'leaf_size' parameter between 3 and 500, which does not seem to have any effect on the performance whatsoever.</p>
<p>Is this to be expected or is there a problem?</p>
|
<python><memory><scikit-learn>
|
2023-06-28 08:49:09
| 1
| 1,498
|
Michael
|
76,571,523
| 4,555,441
|
SHAP on LSTM (multivariate with different input sizes time series data) - 'NoneType' object is not callable error
|
<p>I am using SHAP to understand the LSTM model predictions. My data is x, y coordinates (with padding) - I used the code given here (Sequence LSTM
<a href="https://datascience.stackexchange.com/questions/48796/how-to-feed-lstm-with-different-input-array-sizes/48814#48814">https://datascience.stackexchange.com/questions/48796/how-to-feed-lstm-with-different-input-array-sizes/48814#48814</a>)</p>
<p>After the LSTM is trained, I use the SHAP explainer but get a TypeError: 'NoneType' object is not callable (though it's not clear what is None). How can I solve this error?</p>
<pre><code>from tensorflow.keras import Sequential
from tensorflow.keras.utils import Sequence
from tensorflow.keras.layers import LSTM, Dense, Masking
import numpy as np
# Parameters
N = 1000
halfN = int(N/2)
dimension = 2
lstm_units = 3
# Data
np.random.seed(123) # to generate the same numbers
# create sequence lengths between 1 to 10
seq_lens = np.random.randint(1, 10, halfN)
X_zero = np.array([np.random.normal(0, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_one = np.ones((halfN, 1))
p = np.random.permutation(N) # to shuffle zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]
# Padding and Masking
special_value = -10.0
max_seq_len = max(seq_lens)
Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
for s, x in enumerate(X):
seq_len = x.shape[0]
Xpad[s, 0:seq_len, :] = x
model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32)
import shap
# define the explainer
explainer = shap.Explainer(model2)
# explain the predictions of the pipeline on the first two samples
shap_values = explainer(Xpad[:2])
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-75-dd276207f9fd> in <module>
1 # explain the predictions of the pipeline on the first two samples
----> 2 shap_values = explainer(Xtrain[:2], max_evals=4000)
/usr/local/lib/python3.8/site-packages/shap/explainers/_permutation.py in __call__(self, max_evals, main_effects, error_bounds, batch_size, outputs, silent, *args)
72 """ Explain the output of the model on the given arguments.
73 """
---> 74 return super().__call__(
75 *args, max_evals=max_evals, main_effects=main_effects, error_bounds=error_bounds, batch_size=batch_size,
76 outputs=outputs, silent=silent
/usr/local/lib/python3.8/site-packages/shap/explainers/_explainer.py in __call__(self, max_evals, main_effects, error_bounds, batch_size, outputs, silent, *args, **kwargs)
256 feature_names = [[] for _ in range(len(args))]
257 for row_args in show_progress(zip(*args), num_rows, self.__class__.__name__+" explainer", silent):
--> 258 row_result = self.explain_row(
259 *row_args, max_evals=max_evals, main_effects=main_effects, error_bounds=error_bounds,
260 batch_size=batch_size, outputs=outputs, silent=silent, **kwargs
/usr/local/lib/python3.8/site-packages/shap/explainers/_permutation.py in explain_row(self, max_evals, main_effects, error_bounds, batch_size, outputs, silent, *row_args)
130
131 # evaluate the masked model
--> 132 outputs = fm(masks, zero_index=0, batch_size=batch_size)
133
134 if row_values is None:
/usr/local/lib/python3.8/site-packages/shap/utils/_masked_model.py in __call__(self, masks, zero_index, batch_size)
62 full_masks = np.zeros((int(np.sum(masks >= 0)), self._masker_cols), dtype=np.bool)
63 _convert_delta_mask_to_full(masks, full_masks)
---> 64 return self._full_masking_call(full_masks, zero_index=zero_index, batch_size=batch_size)
65
66 else:
/usr/local/lib/python3.8/site-packages/shap/utils/_masked_model.py in _full_masking_call(self, masks, zero_index, batch_size)
91 masked_inputs = self.masker(delta_ind, *self.args).copy()
92 else:
---> 93 masked_inputs = self.masker(mask, *self.args)
94
95 # wrap the masked inputs if they are not already in a tuple
TypeError: 'NoneType' object is not callable
</code></pre>
|
<python><lstm><shap>
|
2023-06-28 08:48:44
| 0
| 648
|
pranav nerurkar
|
76,571,522
| 18,949,720
|
ValueError: Could not find a backend to open `Thumbs.db`` with iomode `ri`
|
<p>I made a Python script on Ubuntu that reads QR codes from images using pyzbar and exports the images to a new folder, renaming each one after the data extracted from its QR code. It works well on Linux, but when I try to run it on Windows I get this error:</p>
<pre><code>ValueError: Could not find a backend to open `C:\Users\User1\Desktop\QR_read\Images\Thumbs.db`` with iomode `ri`.
</code></pre>
<p>Here is my folder structure, inside "QR_read":</p>
<pre><code>- Python_script.py
- Images_folder
Saved_images
img1
img2
img3
...
</code></pre>
<p>Where "Saved_images" contains images that are exported from the code. If this folder already exists when running the code, it is first deleted and then created again.</p>
<p>I did not find any information about this error. Does anyone know where the problem may come from?</p>
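A likely cause: the image-reading library walks every file in the folder and trips over Windows' hidden `Thumbs.db` thumbnail cache, which is not an image. A common fix is to filter the listing to known image extensions before decoding — a minimal sketch, with the extension set as an assumption:

```python
from pathlib import Path
import tempfile

# Extensions to accept are an assumption; extend as needed.
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".bmp", ".tif", ".tiff"}

def iter_image_files(folder):
    """Yield only files whose extension looks like an image format,
    skipping things like Windows' hidden Thumbs.db thumbnail cache."""
    for path in sorted(Path(folder).iterdir()):
        if path.is_file() and path.suffix.lower() in IMAGE_EXTENSIONS:
            yield path

# Demo on a throwaway folder containing a Thumbs.db impostor.
with tempfile.TemporaryDirectory() as d:
    for name in ("img1.png", "img2.jpg", "Thumbs.db"):
        Path(d, name).write_bytes(b"")
    print([p.name for p in iter_image_files(d)])  # → ['img1.png', 'img2.jpg']
```

The same filter explains why the script works on Linux: `Thumbs.db` is only generated by Windows Explorer.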
|
<python><windows>
|
2023-06-28 08:48:37
| 0
| 358
|
Droidux
|
76,571,442
| 6,730,854
|
Calibration PTZ camera with OpenCV's calibrateCamera
|
<p>I have N images from a PTZ camera with some point correspondences in each frame.</p>
<p>The problem is the images are taken at different zoom.</p>
<p>I am wondering how to use <code>cv2.calibrateCamera</code> for these N images so that some parameters are fixed and some not.</p>
<p>I would like these parameters to <strong>NOT</strong> be fixed:</p>
<ul>
<li><code>fx=fy</code> can change from image to image. (zoom with <code>cv2.CALIB_FIX_ASPECT_RATIO</code> for simplicity)</li>
<li>rotation vectors to change from image to image</li>
</ul>
<p>I would like to fix these parameters for all images:</p>
<ul>
<li>optical centers ( cx,cy)</li>
<li>camera position as translation vectors <code>tvecs</code></li>
</ul>
|
<python><opencv>
|
2023-06-28 08:38:57
| 0
| 472
|
Mike Azatov
|
76,571,248
| 3,983,470
|
Filtering from a pivot table on Django Rest Framework returns all the values
|
<p>I have a project in Django using Django Rest Framework with these models.</p>
<p>job.py</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
from .base import BaseModel
class Job(BaseModel):
title = models.CharField(max_length=100)
races = models.ManyToManyField('Race', through='JobRace')
skills = models.ManyToManyField('Skill')
def __str__(self):
return f"{self.title}"
</code></pre>
<p>race.py</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
from .base import BaseModel
class Race(BaseModel):
name = models.CharField(max_length=100)
...
image = models.ImageField(default=None)
description= models.TextField(blank=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
def __str__(self):
        return f"{self.name}"
</code></pre>
<p>and jobRace.py, which is the pivot table between jobs and races. I defined it explicitly instead of letting Django auto-generate it because I wanted to add an additional image field to the pivot table:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
from rpg.models.job import Job
from rpg.models.race import Race
from .base import BaseModel
class JobRace(BaseModel):
job = models.ForeignKey(Job, related_name='job_race', on_delete=models.CASCADE)
race = models.ForeignKey(Race, related_name='job_race', on_delete=models.CASCADE)
image = models.ImageField(null=True, blank=True, upload_to='jobs/')
def __str__(self):
return f"{self.job.title} - {self.race.name}"
</code></pre>
<p>with their serializers</p>
<p>job.py</p>
<pre class="lang-py prettyprint-override"><code>from rest_framework import serializers
from ..models.job import Job
from .skill import SkillSerializer
from .jobRace import JobRaceSerializer
class JobSerializer(serializers.ModelSerializer):
job_race = JobRaceSerializer(many=True,read_only=True)
class Meta:
model = Job
fields = '__all__'
</code></pre>
<p>race.py</p>
<pre class="lang-py prettyprint-override"><code>from rest_framework import serializers
from ..models.race import Race
class RaceSerializer(serializers.ModelSerializer):
class Meta:
model = Race
fields = '__all__'
</code></pre>
<p>jobRace.py</p>
<pre class="lang-py prettyprint-override"><code>from rest_framework import serializers
from ..models.jobRace import JobRace
class JobRaceSerializer(serializers.ModelSerializer):
race_name = serializers.SerializerMethodField()
job_name = serializers.SerializerMethodField()
class Meta:
model = JobRace
fields = ['id', 'race', 'job', 'race_name', 'job_name', 'image']
def get_race_name(self, obj):
return obj.race.name
def get_job_name(self, obj):
return obj.job.title
</code></pre>
<p>Now for the views, I have a ModelViewSet to get the basic CRUD functions out of the box, but I want to enhance the list method to accept filters, so I have this in my job view:</p>
<p>job.py</p>
<pre class="lang-py prettyprint-override"><code>from ..models.job import Job
from ..serializers.job import JobSerializer
from rest_framework.viewsets import ModelViewSet
from rest_framework.permissions import IsAuthenticated
# Create your views here.
class JobViewSet(ModelViewSet):
permission_classes = [IsAuthenticated]
queryset = Job.objects.all()
serializer_class = JobSerializer
def get_queryset(self):
queryset = super().get_queryset()
# Retrieve query parameters from request
race = self.request.query_params.get('race')
# Apply filters to queryset based on query parameters
if race:
queryset = queryset.filter(job_race__race_id=race)
return queryset
</code></pre>
<p>My objective is that when getting the jobs for one specific race, I also get the image field from the pivot table JobRace. With this implementation the jobs themselves are filtered correctly, but I am always bringing back all the elements of the job_race relationship.</p>
<p>With the URL {{baseUrl}}/rpg/jobs?race=5 I should be getting all the jobs with race 5,</p>
<p>but I am getting something like this:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"id": 1,
"job_race": [
{
"id": 1,
"race": 1,
"job": 1,
"race_name": "Elf",
"job_name": "Warrior",
"image": "http://127.0.0.1:8000/media/jobs/elf-warrior.jpg"
},
{
"id": 2,
"race": 2,
"job": 1,
"race_name": "Human",
"job_name": "Warrior",
"image": "http://127.0.0.1:8000/media/jobs/human-warrior_Po2Du5x.jpg"
},
{
"id": 3,
"race": 3,
"job": 1,
"race_name": "Merfolk",
"job_name": "Warrior",
"image": "http://127.0.0.1:8000/media/jobs/merfolk-warrior.jpg"
},
{
"id": 4,
"race": 4,
"job": 1,
"race_name": "Goblin",
"job_name": "Warrior",
"image": "http://127.0.0.1:8000/media/jobs/goblin-warrior.jpg"
},
{
"id": 5,
"race": 5,
"job": 1,
"race_name": "Orc",
"job_name": "Warrior",
"image": "http://127.0.0.1:8000/media/jobs/orc.jpg"
}
],
"created_at": "2023-06-27T08:15:23.397062Z",
"updated_at": "2023-06-27T08:15:23.397062Z",
"title": "Warrior",
"races": [
1,
2,
3,
4,
5
],
"skills": [
2,
9,
11
]
}
]
</code></pre>
<p>As you can see, while it is true that I am getting the correct jobs ("Warrior" is the only job for race 5), the job_race field brings back all the job_races for the Warrior, including all the other races. I am only interested in the job_race of the Warrior with race 5, as specified in the viewset.</p>
<p>Is there something I'm doing wrong?</p>
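In DRF the usual fix is to prefetch a filtered queryset, e.g. `queryset.prefetch_related(Prefetch('job_race', queryset=JobRace.objects.filter(race_id=race)))`, so the nested serializer only sees the matching rows. As a framework-free sketch of the desired trimming applied to the JSON shown above (data and helper name are illustrative, not part of the question's code):

```python
def filter_job_races(jobs, race_id):
    """Keep, for each serialized job, only the job_race entries matching race_id."""
    trimmed = []
    for job in jobs:
        job = dict(job)  # shallow copy so the input list is not mutated
        job["job_race"] = [jr for jr in job["job_race"] if jr["race"] == race_id]
        trimmed.append(job)
    return trimmed

jobs = [{"id": 1, "title": "Warrior",
         "job_race": [{"race": 1, "race_name": "Elf"},
                      {"race": 5, "race_name": "Orc"}]}]
print(filter_job_races(jobs, 5))
```

With `Prefetch` the database does this filtering for you, which is preferable to post-processing serializer output.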
|
<python><django><django-rest-framework>
|
2023-06-28 08:13:33
| 0
| 607
|
Anibal Cardozo
|
76,571,092
| 4,765,864
|
Can't make Celery always_eager in FastAPI
|
<p>I am trying to implement queues inside a FastAPI application. I have manually checked that it connects to the broker and sends messages, and it's all OK. However, for testing purposes I would like to get rid of the need to launch a broker by setting the "<a href="https://docs.celeryq.dev/en/stable/userguide/configuration.html#task-always-eager" rel="nofollow noreferrer">task_always_eager</a>" option. The project layout is as follows:</p>
<pre class="lang-bash prettyprint-override"><code>$ tree -I '*pyc|venv' src tests
src
├── celery_app.py
├── celeryconf.py
├── db.py
├── main.py
├── __pycache__
├── red_link
│ ├── connectors.py
│ ├── exceptions.py
│ ├── __init__.py
│ ├── models.py
│ ├── __pycache__
│ ├── services.py
│ ├── tasks.py
│ └── views.py
├── settings.py
└── throttling
├── __init__.py
├── middleware.py
└── __pycache__
tests
├── conftest.py
├── fixtures
│ └── casettes
│ ├── test_sends_sms_fails.yml
│ └── test_sends_sms.yml
├── __pycache__
└── test_redlink_sms_sending.py
8 directories, 18 files
</code></pre>
<p>The <code>celery_app.py</code> configuration:</p>
<pre class="lang-py prettyprint-override"><code>from celery import Celery
from settings import get_settings
settings = get_settings()
def create_celery_app() -> Celery:
celery = Celery(
"sms_sender",
broker=settings.celery_broker_url,
include=["red_link.tasks"],
)
celery.config_from_object("celeryconf")
return celery
app = create_celery_app()
</code></pre>
<p>And the <code>celeryconf.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from settings import get_settings
settings = get_settings()
task_always_eager = settings.celery_task_always_eager
</code></pre>
<p>The environment variable that sets the "always eager" option is set correctly:</p>
<p><a href="https://i.sstatic.net/A77I1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A77I1.png" alt="Debugger hangs on the environment variable creation and shows that the always eager option is set to true" /></a></p>
<p>But when I launch pytest it fails as it tries to connect to the broker:</p>
<pre><code>=============================================================================================== short test summary info ===============================================================================================
FAILED tests/test_redlink_sms_sending.py::test_sends_sms - kombu.exceptions.OperationalError: [Errno 111] Connection refused
FAILED tests/test_redlink_sms_sending.py::test_sends_sms_fails - kombu.exceptions.OperationalError: [Errno 111] Connection refused
FAILED tests/test_redlink_sms_sending.py::test_send_sms_fails_with_error_connection - kombu.exceptions.OperationalError: [Errno 111] Connection refused
</code></pre>
|
<python><celery><fastapi>
|
2023-06-28 07:51:00
| 0
| 4,174
|
gonczor
|
76,571,043
| 14,282,714
|
How to give certain operation priority in matcher Spacy
|
<p>I would like to give a certain match rule priority in spaCy's matcher. For example, in the sentence <code>"There is no apple or is there an apple?"</code>, I would like to give <code>no apple</code> priority: if that match occurs, the matcher should return no string_id for the bare <code>apple</code> inside it. Right now I use a pattern that checks both "no apple" and "apple". Here is a reproducible example:</p>
<pre><code>import spacy
from spacy.matcher import Matcher
nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
pattern = [
[{"LOWER": {"NOT_IN": ["no"]}}, {"LOWER": "apple"}],
[{"LOWER": "apple"}]
]
matcher.add("apple", pattern)
doc = nlp("There is no apple or is there an apple?")
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id]
span = doc[start:end]
print(match_id, string_id, start, end, span.text)
</code></pre>
<p>Output:</p>
<pre><code>8566208034543834098 apple 3 4 apple
8566208034543834098 apple 7 9 an apple
8566208034543834098 apple 8 9 apple
</code></pre>
<p>Now it matches the apple multiple times because of the second statement in the pattern. An option could be to create a separate pattern specifically for "no", like this:</p>
<pre><code>import spacy
from spacy.matcher import Matcher
nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
pattern = [
[{"LOWER": "apple"}],
]
no_pattern = [
[{"LOWER": "no"}, {"LOWER": "apple"}],
]
matcher.add("apple", pattern)
matcher.add("no_apple", no_pattern)
doc = nlp("There is no apple or is there an apple?")
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id]
span = doc[start:end]
print(match_id, string_id, start, end, span.text)
</code></pre>
<p>Output:</p>
<pre><code>14541201340755442066 no_apple 2 4 no apple
8566208034543834098 apple 3 4 apple
8566208034543834098 apple 8 9 apple
</code></pre>
<p>Now it shows "no apple" as a pattern that can be used for the outcome. But I was wondering if it is possible to let spaCy prioritize one pattern over another? That would avoid having to create multiple patterns.</p>
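spaCy ships a helper for exactly this: `spacy.util.filter_spans` keeps the longest, leftmost non-overlapping spans, so a `no apple` match wins over the shorter `apple` match nested inside it. The selection logic can be sketched without spaCy, on illustrative `(label, start, end)` match tuples:

```python
def filter_longest(matches):
    """Keep longest, leftmost non-overlapping matches, mimicking
    spacy.util.filter_spans on (label, start, end) tuples."""
    # Prefer longer matches; break ties by earlier start position.
    ordered = sorted(matches, key=lambda m: (-(m[2] - m[1]), m[1]))
    taken = set()
    result = []
    for label, start, end in ordered:
        if all(tok not in taken for tok in range(start, end)):
            result.append((label, start, end))
            taken.update(range(start, end))
    return sorted(result, key=lambda m: m[1])

matches = [("no_apple", 2, 4), ("apple", 3, 4), ("apple", 8, 9)]
print(filter_longest(matches))  # → [('no_apple', 2, 4), ('apple', 8, 9)]
```

In real code you would convert the matcher output to `Span` objects (`doc[start:end]`) and pass them to `filter_spans` directly.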
|
<python><text><spacy><matcher>
|
2023-06-28 07:43:26
| 0
| 42,724
|
Quinten
|
76,570,913
| 2,455,888
|
How to correctly assign weights to columns in a Tkinter frame?
|
<p>This is what I am trying to achieve</p>
<p><a href="https://i.sstatic.net/1yJkb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1yJkb.png" alt="enter image description here" /></a></p>
<p>Here is the code:</p>
<pre><code>class GeometryFrame(tk.Frame):
def __init__(self, parent):
super().__init__(parent)
self.columnconfigure(0, weight=1)
rectangle_1 = tk.Label(self, text="Rectangle 1", bg="green", fg="white")
rectangle_1.grid(column=0, row=0, ipadx=10, ipady=10, sticky="EW")
rectangle_2 = tk.Label(self, text="Rectangle 2", bg="red", fg="white")
rectangle_2.grid(column=0, row=1, ipadx=10, ipady=10, sticky="EW")
class App(tk.Tk):
def __init__(self):
super().__init__()
self.geometry("600x100")
self.columnconfigure(0, weight=1)
self.geo_frame = GeometryFrame(self)
self.geo_frame.grid(column=0, row=0)
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
<p>But I am getting this
<a href="https://i.sstatic.net/eUKiy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eUKiy.png" alt="enter image description here" /></a></p>
<p>How to fix this? (NOTE: I want to use <code>grid</code> and not <code>pack</code>).</p>
<p>I get the desired result if I remove the <code>GeometryFrame</code> class and place its code in the <code>App</code> class.</p>
<pre><code>class App(tk.Tk):
def __init__(self):
super().__init__()
self.geometry("600x100")
self.columnconfigure(0, weight=1)
# self.rowconfigure(0, weight=1)
rectangle_1 = tk.Label(self, text="Rectangle 1", bg="green", fg="white")
rectangle_1.grid(column=0, row=0, ipadx=10, ipady=10, sticky="EW")
rectangle_2 = tk.Label(self, text="Rectangle 2", bg="red", fg="white")
rectangle_2.grid(column=0, row=1, ipadx=10, ipady=10, sticky="EW")
</code></pre>
|
<python><tkinter>
|
2023-06-28 07:26:20
| 1
| 106,472
|
haccks
|
76,570,896
| 7,936,386
|
ImportError: cannot import name 'JSONEncoder' from 'flask.json'
|
<p>I'm following a course on full-stack with Flask. My <code>__init__.py</code> looks like:</p>
<pre><code>from flask import Flask
from config import Config
from flask_mongoengine import MongoEngine
app = Flask(__name__)
app.config.from_object(Config)
db = MongoEngine()
db.init_app(app)
from application import routes
</code></pre>
<p>However, when importing <code>from flask_mongoengine import MongoEngine</code>, I'm getting an ImportError:</p>
<pre><code>ImportError: cannot import name 'JSONEncoder' from 'flask.json'
</code></pre>
<p>My venv looks like:</p>
<pre><code>blinker==1.6.2
click==8.1.3
colorama==0.4.6
dnspython==2.3.0
email-validator==2.0.0.post2
Flask==2.3.2
flask-mongoengine==1.0.0
Flask-WTF==1.1.1
idna==3.4
itsdangerous==2.1.2
Jinja2==3.1.2
MarkupSafe==2.1.3
mongoengine==0.27.0
pymongo==4.4.0
python-dotenv==1.0.0
Werkzeug==2.3.6
WTForms==3.0.1
</code></pre>
<p>Is there anything I can do here to avoid this conflict? Thanks!</p>
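The likely cause: Flask removed `flask.json.JSONEncoder` in 2.3 (the JSON machinery moved to a provider-based API), and flask-mongoengine 1.0.0 still imports the old name. Until a flask-mongoengine release supports the new API, one workaround is pinning Flask below 2.3 — a hypothetical requirements fragment (exact pins are an assumption):

```text
Flask<2.3
flask-mongoengine==1.0.0
```

Alternatively, check whether a newer flask-mongoengine (or a maintained fork) is available that targets Flask 2.3+.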
|
<python><mongodb><flask><python-venv>
|
2023-06-28 07:23:31
| 4
| 619
|
Andrei Niță
|
76,570,894
| 7,719,864
|
Why is Python's type hint with Callable with base class as argument not compatible with parameter with child class
|
<p>I have the following two classes:</p>
<pre class="lang-py prettyprint-override"><code>class MessageBase:
pass
class MessageChild(MessageBase):
pass
</code></pre>
<p>And the following function:</p>
<pre class="lang-py prettyprint-override"><code>def action(message: MessageChild):
pass
</code></pre>
<p>Why does the following type hint for the given <code>get_action</code> function produce an error in Pylance? Normally type hints using base classes are compatible with child classes.</p>
<pre class="lang-py prettyprint-override"><code>def get_action() -> Callable[[MessageBase], None]:
return action
</code></pre>
<pre><code>Expression of type "(message: MessageChild) -> None" cannot be assigned to return type "(MessageBase) -> None"
Type "(message: MessageChild) -> None" cannot be assigned to type "(MessageBase) -> None"
Parameter 1: type "MessageBase" cannot be assigned to type "MessageChild"
"MessageBase" is incompatible with "MessageChild"
</code></pre>
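This is parameter contravariance: `Callable[[MessageBase], None]` promises a function that accepts *any* `MessageBase`, but `action` only accepts `MessageChild`, so a caller could legally pass a plain `MessageBase` and break it. The safe direction is the reverse — a self-contained sketch:

```python
from typing import Callable

class MessageBase: ...
class MessageChild(MessageBase): ...

def handle_base(message: MessageBase) -> None: ...
def handle_child(message: MessageChild) -> None: ...

# Callable parameters are contravariant: a function accepting the BASE class
# may stand in where one accepting the CHILD is expected, because every
# argument the caller can pass (a MessageChild) is also a MessageBase.
narrow: Callable[[MessageChild], None] = handle_base   # accepted by type checkers

# The opposite assignment is what Pylance rejects in the question:
# wide: Callable[[MessageBase], None] = handle_child   # error: unsafe

narrow(MessageChild())
```

So the fix is to declare `get_action() -> Callable[[MessageChild], None]`, or to make `action` accept `MessageBase` if it truly handles any message.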
|
<python><inheritance><python-typing>
|
2023-06-28 07:23:30
| 1
| 813
|
Robson
|
76,570,858
| 6,730,854
|
How to fix distortion coefficients to 0 during OpenCV's calibrateCamera?
|
<p>I'm learning about OpenCV's calibrateCamera function and trying to fix distortion coefficients to 0 as I don't want to take distortion into account for now.</p>
<pre><code> # reprojection error, calibration matrix, distortion, rotation and translation vectors
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
[3d_points.astype(np.float32)],
[image_points.astype(np.float32)],
(img_raw.shape[1], img_raw.shape[0]), None, distCoeffs = np.zeros((5, 1),dtype=np.float32))
</code></pre>
<p>However, in the output I still get non-zero dist coefficients</p>
<pre><code>dist
array([[-0.06402103],
[ 0.76996236],
[ 0.05764124],
[ 0.02921579],
[-1.12958531]])
</code></pre>
<p>How do I fix them so it doesn't try to optimize them?</p>
|
<python><opencv>
|
2023-06-28 07:17:08
| 1
| 472
|
Mike Azatov
|
76,570,857
| 9,506,773
|
How to count number of True and False blocks in a column in a pandas dataframe
|
<p>I have the following pandas dataframe:</p>
<pre><code> col
2023-05-04 10:34:51.002100665 True
2023-05-04 10:34:51.007100513 True
2023-05-04 10:34:51.012100235 True
2023-05-04 10:34:51.017100083 False
2023-05-04 10:34:51.022099789 False
2023-05-04 10:35:23.610740595 False
2023-05-04 10:35:23.615740466 True
2023-05-04 10:35:23.620740227 True
2023-05-04 10:35:23.625740082 False
2023-05-04 10:35:23.630739797 True
</code></pre>
<p>How do I count the number of True and False blocks present in it? The result should be:</p>
<pre><code>Number of True blocks: 3
Number of False blocks: 2
</code></pre>
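One common approach (a sketch, assuming a boolean Series like the `col` column): a block starts wherever a row differs from the previous row, so selecting those change points and counting them per value gives the block counts directly:

```python
import pandas as pd

# Same True/False sequence as in the question, index omitted for brevity.
s = pd.Series([True, True, True, False, False, False, True, True, False, True])

# A row starts a new block when it differs from the previous row
# (the first row always starts one, since shift() yields NaN there).
block_starts = s != s.shift()
counts = s[block_starts].value_counts()

print(f"Number of True blocks: {counts.get(True, 0)}")    # → 3
print(f"Number of False blocks: {counts.get(False, 0)}")  # → 2
```

For a DataFrame column, the same expression works with `df['col']` in place of `s`.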
|
<python><pandas><dataframe>
|
2023-06-28 07:17:07
| 1
| 3,629
|
Mike B
|
76,570,836
| 5,640,517
|
django logging config with gunicorn and command line scripts
|
<p>Up until now this config</p>
<pre class="lang-py prettyprint-override"><code>LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'console': {
'class': 'logging.StreamHandler',
}
},
'formatters': {
'verbose': {
'format': '%(asctime)s [%(levelname)s] %(name)s %(module)s %(process)d %(thread)d %(message)s'
},
'simple': {
'format': '%(levelname)s %(message)s'
},
},
'loggers': {
'': {
'handlers': ['console'],
'level': 'INFO',
},
},
}
</code></pre>
<p>was good enough for me, everything to console then using <code>PYTHONUNBUFFERED=TRUE</code> and gunicorn config</p>
<pre class="lang-py prettyprint-override"><code>bind = "0.0.0.0:50000"
workers = 1
worker_class = "gthread"
threads = 2
worker_connections = 1000
timeout = 60
keepalive = 5
accesslog = "/var/log/gunicorn/myapp.com-access.log"
errorlog = "/var/log/gunicorn/myapp.com-error.log"
loglevel = "debug"
capture_output = True
</code></pre>
<p>everything went to the accesslog or errorlog.</p>
<p>Now that I need to use <code>poetry run python manage.py sqlflush</code> I run into trouble:
all my <code>logger.info()</code> logs are also written to the console, so I get</p>
<pre><code>Logging in
Logged in
Logging in
Already logged in
BEGIN;
SET FOREIGN_KEY_CHECKS = 0;
TRUNCATE `myapp_imagemodel`;
TRUNCATE `auth_user_groups`;
TRUNCATE `auth_user_user_permissions`;
TRUNCATE `auth_permission`;
TRUNCATE `auth_group_permissions`;
TRUNCATE `auth_user`;
TRUNCATE `auth_group`;
TRUNCATE `django_content_type`;
TRUNCATE `myapp_game`;
TRUNCATE `myapp_game_screenshots`;
TRUNCATE `django_session`;
TRUNCATE `django_admin_log`;
SET FOREIGN_KEY_CHECKS = 1;
COMMIT;
</code></pre>
<p>How should I configure logging so it doesn't mess up the manage.py script's output?
Can I still have it all in one file?</p>
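One option, sketched with stdlib logging only: keep the config as it is, but let management-command runs raise the console threshold via an environment variable, so `logger.info()` chatter no longer interleaves with the command's stdout. The env-var name is an assumption:

```python
import logging
import logging.config
import os

# e.g. DJANGO_CONSOLE_LOG_LEVEL=WARNING poetry run python manage.py sqlflush
console_level = os.environ.get("DJANGO_CONSOLE_LOG_LEVEL", "INFO")

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        # Only records at or above console_level reach the console,
        # while the root logger itself still accepts INFO for other handlers.
        "console": {"class": "logging.StreamHandler", "level": console_level},
    },
    "loggers": {
        "": {"handlers": ["console"], "level": "INFO"},
    },
}

logging.config.dictConfig(LOGGING)
print(logging.getLogger().handlers[0].level)
```

Under gunicorn nothing changes (the variable is unset), and `sqlflush` output stays clean when you export `DJANGO_CONSOLE_LOG_LEVEL=WARNING` for that invocation.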
|
<python><django><gunicorn>
|
2023-06-28 07:13:22
| 1
| 1,601
|
Daviid
|
76,570,802
| 3,458,191
|
Binary file created in Python differs from Perl file
|
<p>I have the following issue: when I create a binary file with Python and with Perl, I see different contents in the file for the same data.</p>
<p>Example - python:</p>
<pre><code>id_hex = binascii.hexlify(id.encode()).decode()
kool_hex = binascii.hexlify(kool.encode()).decode()
did_hex = binascii.hexlify(did.encode()).decode()
mode_hex = binascii.hexlify(mode.encode()).decode()
gogo_hex = binascii.hexlify(gogo.encode()).decode()
# Write to file
with open(filename, "wb") as file:
file.write(binascii.unhexlify(id_hex))
file.write(b"\x00" * (32 - len(id) // 2))
file.write(binascii.unhexlify(kool_hex))
file.write(binascii.unhexlify(did_hex))
file.write(binascii.unhexlify(mode_hex))
file.write(binascii.unhexlify(gogo_hex))
file.close()
</code></pre>
<p>The above code will create a file with this entry:
B202111290000000.....</p>
<p>Example - perl:</p>
<pre><code>#print $Key, "\n";
open FILE, ">$filename" or die "Unable to create or open the keybox.bin file $!";
foreach my $i ( 0...(length($id) - 1 ) )
{
my $char = substr($id, $i, 1);
my $ascii = sprintf("%2x", ord($char));
print FILE pack("H2",$ascii);
}
foreach my $i ( 0...(32 - length($id) -1) )
{
print FILE "\x00";
}
foreach my $i ( 0...(length($kool)/2 - 1 ) )
{
my $char = substr($kool, $i*2, 2);
print FILE pack("H2",$char);
}
foreach my $i ( 0...(length($did)/2 - 1 ) )
{
my $char = substr($did, $i*2, 2);
print FILE pack("H2",$char);
}
foreach my $i ( 0...(length($mode)/2 - 1 ) )
{
my $char = substr($mode, $i*2, 2);
print FILE pack("H2",$char);
}
foreach my $i ( 0...(length($gogo)/2 - 1 ) )
{
my $char = substr($gogo, $i*2, 2);
print FILE pack("H2",$char);
}
close FILE or die "Unable to close the keybox.bin file $!";
</code></pre>
<p>The above code will create a file with this entry:
��2+8�>�*@]�00009.��1ŀ�r��8��....</p>
<p>Why does it differ, and how do I have to change my Python code to get the same result?</p>
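Reading the two snippets, two differences stand out (treat this as a hypothesis, since the variables are not shown): the Perl code packs `$kool`, `$did`, etc. as hex digit pairs (`pack("H2", ...)` on two characters at a time), while the Python code hexlifys and then immediately unhexlifys them, which just writes the raw text back; and the Python padding `32 - len(id) // 2` subtracts *half* the length (`//` binds tighter than `-`), whereas Perl pads `32 - length($id)` bytes. A sketch of Python matching the Perl behaviour, with illustrative values:

```python
import binascii

id_, kool = "B2021112", "41424344"  # illustrative values, not the real data

out = bytearray()
out += id_.encode()               # raw identifier bytes, as Perl writes them
out += b"\x00" * (32 - len(id_))  # pad to 32 bytes total (not len // 2)
out += binascii.unhexlify(kool)   # interpret the string AS hex digits -> b"ABCD"
print(out)
```

`unhexlify` is what turns `"41424344"` into the binary bytes the Perl file contains, which is why the Perl output looks like binary garbage while the Python output shows readable ASCII digits.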
|
<python><perl>
|
2023-06-28 07:07:33
| 1
| 1,187
|
FotisK
|
76,570,588
| 2,540,204
|
Unable to set value to None unless set to string first
|
<p>I have a list of pandas series: <code>stagedInserts</code>.</p>
<p>I'm looping through each key-value in each series within a list to (among other things) weed out any <code>np.nan</code> values and set the value to <code>None</code>. This is carried out as follows:</p>
<pre><code># %%
import pandas as pd
import numpy as np
# %%
stagedInserts = [
pd.Series({"one": 3, "two": 4, "three": np.nan})
]
print(f"stagedInserts: {stagedInserts}")
for i, vals in enumerate(stagedInserts):
for j, val in enumerate(vals):
if(np.isnan(stagedInserts[i][j])):
print(f"stagedInserts[i][j] {stagedInserts[i][j]}")
stagedInserts[i][j] = None
print(f"2stagedInserts[i][j] {stagedInserts[i][j]}")
</code></pre>
<p>Though not particularly elegant, I would expect this to be an entirely reasonable way to convert <code>nan</code>s to <code>None</code>s. However, the second print statement still returns a nan. I've confirmed this with an additional <code>np.isnan()</code> check as well.</p>
<p>This would be confusing enough, but I found that if I first set <code>stagedInserts[i][j] = "Test"</code> and THEN <code>stagedInserts[i][j] = None</code>, the second print statement yields a <code>None</code> as I wanted (see below):</p>
<pre><code>for i, vals in enumerate(stagedInserts):
for j, val in enumerate(vals):
if(np.isnan(stagedInserts[i][j])):
print("it's still nan")
print(f"stagedInserts[i][j] {stagedInserts[i][j]}")
stagedInserts[i][j] = "Test"
print(f"2stagedInserts[i][j] {stagedInserts[i][j]}")
stagedInserts[i][j] = None
print(f"3stagedInserts[i][j] {stagedInserts[i][j]}")
</code></pre>
<p>What's going on here?</p>
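A likely explanation: the series has dtype `float64`, and a float column cannot hold `None`, so pandas coerces the assignment back to `NaN`. Assigning `"Test"` first upcasts the series to `object` dtype, after which `None` is storable. Casting to `object` up front avoids the loop entirely — a sketch on the same data:

```python
import numpy as np
import pandas as pd

s = pd.Series({"one": 3, "two": 4, "three": np.nan})
print(s.dtype)  # float64: None assigned here would be coerced to NaN

# Upcast to object first, then replace missing entries with None in one shot.
cleaned = s.astype(object).where(s.notna(), None)
print(cleaned["three"], cleaned.dtype)  # → None object
```

The same idea applied to the list of series would be `[s.astype(object).where(s.notna(), None) for s in stagedInserts]`.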
|
<python><numpy><nan><nonetype>
|
2023-06-28 06:30:17
| 0
| 2,703
|
neanderslob
|
76,570,543
| 2,095,858
|
matplotlib 3D scatter points not placed correctly
|
<p>I'm trying to do some simple 3D plots but the scatter points appear not to align properly with the axes. This graph should have points aligned with x=1, y=1, z=1. This issue also occurs using the tkinter backend.</p>
<p><a href="https://i.sstatic.net/5cUDH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5cUDH.png" alt="MATPLOTLIB 3D scatter" /></a></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
x = np.array([-1, -0.5, 0, 0.5, 1])
y = np.array([-1, -0.5, 0, 0.5, 1])
z = np.array([-1, -0.5, 0, 0.5, 1])
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
xyz = np.meshgrid(x,y,z)
ax.scatter(xyz[0], xyz[1], xyz[2], marker='.')
plt.show()
</code></pre>
<p>The following links seem loosely related, but I'm unable to put together a concrete solution.</p>
<p><a href="https://github.com/matplotlib/matplotlib/issues/11836" rel="nofollow noreferrer">https://github.com/matplotlib/matplotlib/issues/11836</a></p>
<p><a href="https://github.com/matplotlib/matplotlib/issues/1077/" rel="nofollow noreferrer">https://github.com/matplotlib/matplotlib/issues/1077/</a></p>
<p>Python 3.11.1, Matplotlib 3.7.1, Windows 10</p>
|
<python><matplotlib><tkinter><matplotlib-3d>
|
2023-06-28 06:22:31
| 3
| 421
|
craigB
|
76,570,472
| 12,236,429
|
Inferring (printing) algebraic form of constraint(s) and objective function from the gurobipy model
|
<p>Is there a way by which we can print the underlying algebraic form of constraints or the objective function? For instance, <code>AMPL</code> has a handy <code>display</code> functionality which prints the algebraic form of a constraint or the objective function. I understand that we can always write the model in LP form and then investigate, but can we print the constraint interactively in the Python console?</p>
<p>A concrete example:</p>
<pre><code>import gurobipy as gp
from gurobipy import GRB
products, price = gp.multidict({"table": 80, "chair": 45})
resources, availability = gp.multidict({"mahogany": 400, "labour": 450})
bom = {
("mahogany", "chair"): 5,
("mahogany", "table"): 20,
("labour", "chair"): 10,
("labour", "table"): 15,
}
model = gp.Model("furniture")
make = model.addVars(products, name="make", obj=list(price.values()), vtype=GRB.INTEGER)
res = model.addConstrs(
gp.quicksum(bom[r, p] * make[p] for p in products) <= availability[r]
for r in resources
)
model.ModelSense = GRB.MAXIMIZE
model.optimize()
</code></pre>
<p>Now, I add another product to the formulation:</p>
<pre><code>add_prod = "bed"
make[add_prod] = model.addVar(
obj=90, vtype=GRB.INTEGER, column=gp.Column([15, 18], res.values()), name="make[bed]"
)
model.optimize()
</code></pre>
<p>If I want to verify that the change has happened as expected, I want to print the constraint and the objective function. I can write the model in <code>.lp</code> format, but is there a way to print a single constraint without having to look at the <code>.lp</code> file? Especially if the model is very large and I only want to see one constraint, reading the entire LP file would be overkill.</p>
|
<python><linear-programming><gurobi><mixed-integer-programming>
|
2023-06-28 06:10:32
| 1
| 1,029
|
Bhartendu Awasthi
|
76,570,434
| 8,852,498
|
k8s core_v1_api list_pod_for_all_namespaces failed with exception: urllib3.exceptions.ProtocolError: Connection broken: InvalidChunkLength
|
<p>I am using kubernetes/client/api/core_v1_api (<a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/client/api/core_v1_api.py" rel="nofollow noreferrer">https://github.com/kubernetes-client/python/blob/master/kubernetes/client/api/core_v1_api.py</a>)</p>
<p>Sometimes the code fails on:</p>
<pre><code>for pod_event in self._watcher.stream(func=self._core_api.list_pod_for_all_namespaces, **watch_kwargs):
</code></pre>
<p>getting this exception:</p>
<pre><code>urllib3.exceptions.ProtocolError: ("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes read))
</code></pre>
<p>I have some suggestions:</p>
<ul>
<li><p>adding <code>timeout_seconds=0</code> to the watch.stream loop, as in
<a href="https://medium.com/programming-kubernetes/building-stuff-with-the-kubernetes-api-part-3-using-python-aea5ab16f627" rel="nofollow noreferrer">https://medium.com/programming-kubernetes/building-stuff-with-the-kubernetes-api-part-3-using-python-aea5ab16f627</a>;
I am not sure if it will help</p>
</li>
<li><p>try to catch this specific exception and ignore it, because it means an empty chunk was received (I am not sure whether that means there is no more data)</p>
</li>
<li><p>upgrade the urllib3/kubernetes packages in my project, or open an issue for this</p>
</li>
</ul>
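The second suggestion, generalized, is the usual pattern: Kubernetes watches are expected to drop periodically, so wrap the watch loop, treat the broken-chunk error as a benign disconnect, and restart the stream (in a real watch you would also resume from the last `resource_version` to avoid replaying events). A framework-free sketch of the retry shape — the exception type and stream function below are stand-ins, not the kubernetes client API:

```python
def consume_with_retries(make_stream, handle_event, retriable, max_restarts=3):
    """Run a stream, restarting it when a retriable error interrupts it."""
    restarts = 0
    while True:
        try:
            for event in make_stream():
                handle_event(event)
            return  # stream ended normally
        except retriable:
            restarts += 1
            if restarts > max_restarts:
                raise

# Demo: a stream that breaks once, then completes on the second attempt.
state = {"calls": 0}
def flaky_stream():
    state["calls"] += 1
    yield "pod-a"
    if state["calls"] == 1:
        raise ConnectionError("Connection broken")  # stand-in for ProtocolError
    yield "pod-b"

seen = []
consume_with_retries(flaky_stream, seen.append, ConnectionError)
print(seen)  # → ['pod-a', 'pod-a', 'pod-b']
```

Note the duplicate `pod-a`: without resuming from a `resource_version`, a restart replays events, which is why handlers should be idempotent.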
|
<python><kubernetes><urllib3>
|
2023-06-28 06:04:14
| 0
| 845
|
Adi Epshtain
|
76,570,326
| 5,771,861
|
Can one access the `self` object while mocking a property with PropertyMock using a side_effect function?
|
<p>I am trying to mock a property and would like to control the return value of the property according to other state in the object.
A small representative example looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
from pytest_mock import MockFixture
class MyObject:
def __init__(self, name: str):
self._name = name
self._creation_time = datetime.datetime.now()
@property
def creation_time(self) -> datetime.datetime:
# A series of
# calculations
# here
# return_value = ... something
return return_value
def test_ordered_creation_time(mocker: MockFixture) -> None:
def creation_time_side_effect(self) -> datetime.datetime:
if self._name == 'first_created':
return datetime.datetime(2020, 1, 1)
elif self._name == 'second_created':
return datetime.datetime(2020, 2, 1)
mock_creation_time = mocker.patch.object(
MyObject,
'creation_time',
new_callable=mocker.PropertyMock)
mock_creation_time.side_effect = creation_time_side_effect
first_created = MyObject('first_created')
second_created = MyObject('second_created')
assert first_created.creation_time < second_created.creation_time
</code></pre>
<p>Running this with pytest gives me</p>
<pre><code>
E TypeError: test_ordered_creation_time.<locals>.creation_time_side_effect() missing 1 required positional argument: 'self'
</code></pre>
<p>It looks like while using PropertyMock and setting a side_effect function, that function does not have access to the <code>self</code> object.
Is that correct? If so, why is that?</p>
<p>Alternatively, is there another way to inspect the <code>self</code> object while mocking a property?</p>
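Yes, that is correct: a `PropertyMock` is stored on the class and its `__get__` simply calls the mock with no arguments, so the instance is never forwarded to `side_effect`. If you need `self`, one workaround is to patch with a plain `property` whose getter is your function — a sketch using stdlib `unittest.mock` (pytest-mock wraps the same machinery), with a simplified stand-in class:

```python
import datetime
from unittest import mock

class MyObject:
    def __init__(self, name):
        self._name = name

    @property
    def creation_time(self):
        raise RuntimeError("real implementation, not under test")

def fake_creation_time(self):
    # `self` is available here because this runs as the property's fget.
    if self._name == "first_created":
        return datetime.datetime(2020, 1, 1)
    return datetime.datetime(2020, 2, 1)

with mock.patch.object(MyObject, "creation_time", property(fake_creation_time)):
    first = MyObject("first_created")
    second = MyObject("second_created")
    print(first.creation_time < second.creation_time)  # → True
```

The trade-off is that a plain `property` does not record calls the way a `PropertyMock` does; if you need call tracking too, keep the mock and key the side effect on call order instead of instance state.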
<hr />
<p>Full error from pytest</p>
<pre><code>
====================================================================================================================================================================================== FAILURES ======================================================================================================================================================================================
_____________________________________________________________________________________________________________________________________________________________________________ test_ordered_creation_time _____________________________________________________________________________________________________________________________________________________________________________
mocker = <pytest_mock.plugin.MockerFixture object at 0x103fb1210>
def test_ordered_creation_time(mocker: MockFixture) -> None:
def creation_time_side_effect(self) -> datetime.datetime:
if self._name == 'first_created':
return datetime.datetime(2020, 1, 1)
elif self._name == 'second_created':
return datetime.datetime(2020, 2, 1)
mock_creation_time = mocker.patch.object(
MyObject,
'creation_time',
new_callable=mocker.PropertyMock)
mock_creation_time.side_effect = creation_time_side_effect
first_created = MyObject('first_created')
second_created = MyObject('second_created')
> assert first_created.creation_time < second_created.creation_time
dummy.py:38:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/homebrew/Cellar/python@3.11/3.11.4/Frameworks/Python.framework/Versions/3.11/lib/python3.11/unittest/mock.py:2946: in __get__
return self()
/opt/homebrew/Cellar/python@3.11/3.11.4/Frameworks/Python.framework/Versions/3.11/lib/python3.11/unittest/mock.py:1124: in __call__
return self._mock_call(*args, **kwargs)
/opt/homebrew/Cellar/python@3.11/3.11.4/Frameworks/Python.framework/Versions/3.11/lib/python3.11/unittest/mock.py:1128: in _mock_call
return self._execute_mock_call(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <PropertyMock name='creation_time' id='4361753680'>, args = (), kwargs = {}, effect = <function test_ordered_creation_time.<locals>.creation_time_side_effect at 0x103f9e3e0>
def _execute_mock_call(self, /, *args, **kwargs):
# separate from _increment_mock_call so that awaited functions are
# executed separately from their call, also AsyncMock overrides this method
effect = self.side_effect
if effect is not None:
if _is_exception(effect):
raise effect
elif not _callable(effect):
result = next(effect)
if _is_exception(result):
raise result
else:
> result = effect(*args, **kwargs)
E TypeError: test_ordered_creation_time.<locals>.creation_time_side_effect() missing 1 required positional argument: 'self'
/opt/homebrew/Cellar/python@3.11/3.11.4/Frameworks/Python.framework/Versions/3.11/lib/python3.11/unittest/mock.py:1189: TypeError
</code></pre>
|
<python><python-unittest><python-mock>
|
2023-06-28 05:42:27
| 1
| 2,075
|
Hakan Baba
|
76,570,233
| 13,217,533
|
NLTK Data Files NOT FOUND when using AWS-lambda-layer to hold data files
|
<h1>TLDR: My Question & Problem</h1>
<ul>
<li>Using SAM, I want to develop and test Lambda functions that use NLTK locally on my machine.</li>
<li>I created a Lambda layer to hold the NLTK data files that are required by the NLTK functions I want to run.</li>
<li>I created a Lambda function that calls <code>word_tokenize</code> which requires the <code>punkt</code> data file to run.</li>
<li>My expectation is that the lambda function will get the punkt data files from the lambda layer I created.</li>
<li>However, when I run the lambda function on my local machine, <strong>the lambdas are not getting the NLTK data files from the layer I created</strong> and says data files are not found.</li>
<li>What am I missing here? Is it folder structure of my layer or my yml file is wrong? <em><strong>Please help!</strong></em></li>
</ul>
<p>This is the error thrown when I <code>sam build</code> and <code>sam local invoke</code> my lambda function:</p>
<pre><code>**********************************************************************
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt/PY3/english.pickle
Searched in:
- '/root/nltk_data'
- '/var/lang/nltk_data'
- '/var/lang/share/nltk_data'
- '/var/lang/lib/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- ''
**********************************************************************
...more errors (see below for details)
</code></pre>
<p>Please help!</p>
<h1>Background</h1>
<ul>
<li>Using SAM to make lambda code</li>
<li>Python 3.9</li>
<li>Using NLTK to tokenize a string of words.</li>
<li>Windows machine</li>
</ul>
<h1>Objective:</h1>
<ul>
<li>Create and deploy a lambda function that can tokenize a string of words</li>
</ul>
<h1>What I did</h1>
<p>I have already done the sam init.</p>
<p>Things I did:</p>
<ol>
<li>create the lambda function "plag_check" which does tokenizer</li>
<li>create the lambda layer "nltkDataFilesLayer" which holds the NLTK data files needed for NLTK tokenizer</li>
<li>adjusted my template.yml</li>
<li>I build with <code>sam build</code></li>
<li>I test code on my local machine with docker with <code>sam local invoke plag_check --event events/event.json</code></li>
</ol>
<h2>1. plag_check</h2>
<p>For plag_check, I made a folder "plag_check" and inside, I have app.py</p>
<p>plag_check/app.py:</p>
<pre><code>import json
from nltk.tokenize import word_tokenize;
def lambda_handler(event, context):
text = "Hello, how are you doing?"
tokens = word_tokenize(text)
print(tokens)
return {
"statusCode": 200,
"body": json.dumps({
"message": "inbound",
"text":text,
"tokens": tokens,
}),
}
</code></pre>
<h2>2. nltkDataFilesLayer</h2>
<p>I have a folder in my SAM project "nltkDataFilesLayer". Its contents:</p>
<pre><code>- nltkDataFilesLayer/
- nltk_data/
- corpora/
- wordnet.zip
- taggers/
- averaged_perceptron_tagger/
- averaged_perceptron_tagger.zip
- tokenizers/
- punkt/
- punkt.zip
</code></pre>
<p>It just contains the <code>nltk_data</code> folder generated by <code>nltk.download</code> on my local machine.</p>
<h4>DETAILS: How I made the nltkDataFilesLayer folder</h4>
<ul>
<li>I ran a Python script that calls <code>nltk.download</code></li>
<li>NLTK downloads the files to my AppData/Roaming/nltk_data folder on my Windows machine</li>
<li>I copied the entire nltk_data folder.</li>
<li>I changed directory to the SAM project and created a folder "nltkDataFilesLayer"</li>
<li>I pasted the nltk_data folder into the newly created "nltkDataFilesLayer"</li>
</ul>
<h2>3. My template.yaml</h2>
<pre><code>AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: >
plag1
Sample SAM Template for plag1
Globals:
Function:
Timeout: 60
MemorySize: 512
Resources:
nltkDataFilesLayer:
Type: AWS::Serverless::LayerVersion
Properties:
LayerName: nltkDataFilesLayer
Description: Data files for nltk
ContentUri: ./nltkDataFilesLayer
CompatibleRuntimes:
- python3.9
PCheck:
Type: AWS::Serverless::Function
Properties:
CodeUri: plag_check/
Handler: app.lambda_handler
Runtime: python3.9
Layers:
- !Ref nltkDataFilesLayer
Architectures:
- x86_64
</code></pre>
<h2>4. I build my code <code>sam build</code></h2>
<p>no problems here</p>
<h2>5. I test my code locally</h2>
<p>I ran <code>sam local invoke plag_check --event events/event.json</code>
ERROR!</p>
<pre><code>[ERROR] LookupError:
**********************************************************************
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt/PY3/english.pickle
Searched in:
- '/root/nltk_data'
- '/var/lang/nltk_data'
- '/var/lang/share/nltk_data'
- '/var/lang/lib/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- ''
**********************************************************************
    raise LookupError(resource_not_found)
END RequestId: 188fc3e0-f33c-4f40-be51-a6a631ac53b7
REPORT RequestId: 188fc3e0-f33c-4f40-be51-a6a631ac53b7 Init Duration: 0.56 ms Duration: 6051.22 ms Billed Duration: 6052 ms Memory Size: 512 MB Max Memory Used: 512 MB
{"errorMessage": "\n**********************************************************************\n Resource \u001b[93mpunkt\u001b[0m not found.\n Please use the NLTK Downloader to obtain the resource:\n\n \u001b[31m>>> import nltk\n >>> nltk.download('punkt')\n \u001b[0m\n For more information see: https://www.nltk.org/data.html\n\n Attempted to load \u001b[93mtokenizers/punkt/PY3/english.pickle\u001b[0m\n\n Searched in:\n - '/root/nltk_data'\n - '/var/lang/nltk_data'\n - '/var/lang/share/nltk_data'\n - '/var/lang/lib/nltk_data'\n - '/usr/share/nltk_data'\n - '/usr/local/share/nltk_data'\n - '/usr/lib/nltk_data'\n - '/usr/local/lib/nltk_data'\n - ''\n**********************************************************************\n", "errorType": "LookupError", "requestId": "188fc3e0-f33c-4f40-be51-a6a631ac53b7", "stackTrace": [" File \"/var/task/app.py\", line 8, in lambda_handler\n tokens = word_tokenize(text)\n", " File \"/var/task/nltk/tokenize/__init__.py\", line 129, in word_tokenize\n sentences = [text] if preserve_line else sent_tokenize(text, language)\n", " File \"/var/task/nltk/tokenize/__init__.py\", line 106, in sent_tokenize\n tokenizer = load(f\"tokenizers/punkt/{language}.pickle\")\n", " File \"/var/task/nltk/data.py\", line 750, in load\n opened_resource = _open(resource_url)\n", " File \"/var/task/nltk/data.py\", line 876, in _open\n return find(path_, path + [\"\"]).open()\n", " File \"/var/task/nltk/data.py\", line 583, in find\n raise LookupError(resource_not_found)\n"]}
</code></pre>
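<p>One detail worth noting about the search list in the error: Lambda layers are unpacked under <code>/opt</code> inside the runtime container, and <code>/opt/nltk_data</code> is not among the paths NLTK searched. A minimal sketch (assuming the layer keeps <code>nltk_data/</code> at its root, so it lands at <code>/opt/nltk_data</code>) that points NLTK there before the first tokenize call:</p>

```python
import os

# Assumption: the layer's nltk_data/ folder is mounted at /opt/nltk_data.
# NLTK honours the NLTK_DATA environment variable, so set it before the
# first use of nltk; appending to nltk.data.path also works.
os.environ["NLTK_DATA"] = "/opt/nltk_data"

# Equivalent alternative, after importing nltk:
# import nltk
# nltk.data.path.append("/opt/nltk_data")
```

This can also be set as an environment variable on the function in <code>template.yaml</code> instead of in code.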
|
<python><aws-lambda><nltk><aws-sam><aws-lambda-layers>
|
2023-06-28 05:18:48
| 2
| 718
|
KJ Ang
|
76,570,179
| 10,217,732
|
What is the relation between the Lookup Chain and MRO in Python?
|
<p>In Python, I've come across two concepts, the <code>Lookup Chain</code> and the <code>Method Resolution Order (MRO)</code>, which seem related but I'm having trouble understanding their relationship. Could someone please clarify the connection between these two concepts?</p>
<ul>
<li>Lookup Chain</li>
<li>Method Resolution Order MRO</li>
</ul>
<p><strong>From my understanding,</strong> the lookup chain refers to the order in which Python searches for attributes and methods in a class hierarchy. It determines the sequence of classes to be traversed during attribute and method resolution. On the other hand, the MRO is a specific algorithm used by Python to determine the order in which methods are resolved in a class hierarchy, especially in cases of multiple inheritance.</p>
<p>I'm seeking a clearer explanation of these two concepts.</p>
<ul>
<li>Does the lookup chain play a role in the MRO algorithm?</li>
<li>How does the MRO ensure the correct method resolution in complex inheritance scenarios?</li>
</ul>
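<p>A small demonstration of the connection between the two concepts: class-attribute lookup walks the hierarchy in exactly the order given by <code>__mro__</code>, which Python computes with the C3 linearization:</p>

```python
class A:
    def who(self):
        return "A"

class B(A):
    pass

class C(A):
    def who(self):
        return "C"

class D(B, C):
    pass

# The MRO is the lookup chain for class attributes: D, B, C, A, object.
print([cls.__name__ for cls in D.__mro__])

# d.who resolves along that chain: not found on D or B, found on C before A.
d = D()
print(d.who())  # C
```

Note that for instances the lookup chain starts one step earlier, in the instance's own <code>__dict__</code>, before the class MRO is consulted.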
|
<python><python-3.x><inheritance><multiple-inheritance><method-resolution-order>
|
2023-06-28 05:02:55
| 2
| 918
|
Suyog Shimpi
|
76,569,962
| 2,696,230
|
What's the Optimal Approach to Sending Photos from a Client Upon Request Using Twisted?
|
<p>I'm currently using the Twisted framework to establish a client-server communication system. The design involves multiple clients interacting with a single server. At any given time, the server might request a photo from any one of these clients. At present, I can successfully send a request from the server, have it received by the client, and get a response back. However, I'm encountering an issue where the photo file is being transmitted in chunks. Given that these chunks might be sent in an arbitrary manner, I'm unsure how to handle this. I'm seeking advice on how to implement this functionality, and also, I would like to know if using Twisted is an appropriate choice for my needs in the first place.</p>
<p>Client side:</p>
<pre><code>class ClientProtocol(protocol.Protocol):
def send_image(self):
with open("image.jpg", "rb") as image_file:
encoded_image = base64.b64encode(image_file.read())
self.transport.write("SEND_IMAGE\n".encode()) # indicate that the following data is a photo response
self.transport.write(encoded_image)
def dataReceived(self, data):
message = data.decode() # decode from bytes to string
print("Received response from server:", message)
if message == "SEND_IMAGE":
print("a request for SEND_IMAGE received")
self.send_image()
</code></pre>
<p>Server side:</p>
<pre><code>class ServerProtocol(protocol.Protocol):
def __init__(self):
self.waiting_for_photo = False
def connectionMade(self):
print("connection established")
self.send_request()
def handle_image(self, encoded_image):
print("attempt to receive a photo")
decoded_image = base64.b64decode(encoded_image)
with open("received_image.jpg", "wb") as image_file:
image_file.write(decoded_image)
def dataReceived(self, data):
message = data.decode()
if message == "SEND_IMAGE":
            print("SEND_IMAGE response received")
self.waiting_for_photo = True
elif self.waiting_for_photo:
            print("trying to receive photo")
self.handle_image(data)
self.waiting_for_photo = False # reset the flag
</code></pre>
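<p>Twisted is an appropriate choice here; the missing piece is message framing. TCP delivers a byte stream, so <code>dataReceived</code> can fire with arbitrary slices of the image (Twisted's <code>Int32StringReceiver</code> in <code>twisted.protocols.basic</code> implements length-prefix framing for exactly this). The sketch below shows the same 4-byte length-prefix idea with plain bytes, so the reassembly logic is visible without a reactor:</p>

```python
import struct

def frame(payload: bytes) -> bytes:
    # Prefix each message with its length as a 4-byte big-endian integer.
    return struct.pack("!I", len(payload)) + payload

class Unframer:
    """Reassembles complete messages from arbitrarily-chunked bytes."""

    def __init__(self) -> None:
        self.buffer = b""
        self.messages = []

    def feed(self, chunk: bytes) -> None:
        # Equivalent of dataReceived: buffer, then peel off whole messages.
        self.buffer += chunk
        while len(self.buffer) >= 4:
            (length,) = struct.unpack("!I", self.buffer[:4])
            if len(self.buffer) < 4 + length:
                break  # message not complete yet; wait for more chunks
            self.messages.append(self.buffer[4:4 + length])
            self.buffer = self.buffer[4 + length:]

# A "photo" split into awkward chunks still comes out whole:
photo = bytes(range(256)) * 40
data = frame(b"SEND_IMAGE") + frame(photo)
receiver = Unframer()
for i in range(0, len(data), 1000):  # arbitrary chunking
    receiver.feed(data[i:i + 1000])
assert receiver.messages == [b"SEND_IMAGE", photo]
```

With framing in place, base64 encoding becomes unnecessary as well, since message boundaries no longer depend on the payload's content.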
<p>Thanks</p>
|
<python><twisted>
|
2023-06-28 04:04:50
| 0
| 1,763
|
Abdulkarim Kanaan
|
76,569,945
| 11,471,385
|
How to run asyncpg (Postgresql) multiple queries simultaneously?
|
<p>I'm using PostgreSQL & asyncpg.</p>
<pre><code>class PgDb:
# noinspection SpellCheckingInspection
def __init__(self, conn: asyncpg.connection.Connection):
self.conn = conn
async def select(self, sql: str, args: Union[list, Dict[str, Any]] = []) -> List[Dict[str, Any]]:
sql, _args = self.__convert_placeholders(sql, args)
return [dict(row) for row in await self.conn.fetch(sql, *_args)]
class DbPoolSingleton:
db_pool: Optional[asyncpg.pool.Pool] = None
@staticmethod
async def create_pool():
config = get_postgres_config()
pool: asyncpg.Pool = await asyncpg.create_pool(
...,
min_size=30,
max_size=40
)
print("Pool created")
return pool
@staticmethod
async def get_pool() -> asyncpg.pool.Pool:
if not DbPoolSingleton.db_pool:
DbPoolSingleton.db_pool = await DbPoolSingleton.create_pool()
return DbPoolSingleton.db_pool
@staticmethod
async def terminate_pool():
(await DbPoolSingleton.get_pool()).terminate()
DbPoolSingleton.db_pool = None
print("Pool terminated")
</code></pre>
<pre><code>import asyncio
import datetime
from helpers.pg_rdb_helper import DbPoolSingleton, PgDb
async def test_synchronous():
conn = await (await DbPoolSingleton.get_pool()).acquire()
db = PgDb(conn)
sql = """samplesql"""
total_start = start = datetime.datetime.now()
for i in range(20):
start = datetime.datetime.now()
rows = await db.select(sql)
end = datetime.datetime.now()
print(f"{i}st query took: ", (end-start).total_seconds())
total_end = datetime.datetime.now()
print(f"total query took: ", (total_end-total_start).total_seconds())
</code></pre>
<p>=> <strong>total query took: 2.131297</strong></p>
<pre><code>async def test_asynchronous():
db_pool = await DbPoolSingleton.get_pool()
sql = """samplesql"""
total_start = datetime.datetime.now()
tasks = []
for i in range(20):
db = PgDb(await db_pool.acquire())
task = asyncio.create_task(db.select(sql))
tasks.append(task)
await asyncio.gather(*tasks)
total_end = datetime.datetime.now()
print(f"total query took: ", (total_end-total_start).total_seconds())
</code></pre>
<p>===> <strong>total query took: 2.721282</strong></p>
<hr />
<p>Here I have a function that simply runs multiple queries. The first version is synchronous: it awaits each query in sequence, without using <code>asyncio.gather</code>. The second one uses <code>asyncio.gather</code> to run the queries concurrently (at least, that is my assumption).</p>
<p>It turns out, as you can see from the results, that the <code>asynchronous version</code> was actually slower than the <code>synchronous version</code>. I know the <code>asynchronous version</code> has some overhead from acquiring a connection from the pool for every single query, which makes it a bit slower.</p>
<p>So how can we fix the <code>asynchronous version</code> to take advantage of <code>asyncpg</code> and <code>asyncio</code>?</p>
<hr />
<p>After investigating, I tried a fix for the <code>asynchronous version</code>, but it also raises an error.</p>
<p><em><strong>Asynchronous fix 1</strong></em></p>
<pre><code>async def test_asynchronous():
db_pool = await DbPoolSingleton.get_pool()
sql = """samplesql"""
total_start = datetime.datetime.now()
tasks = []
async with db_pool.acquire() as conn:
db = PgDb(conn)
for i in range(20):
task = asyncio.create_task(db.select(sql))
tasks.append(task)
await asyncio.gather(*tasks)
total_end = datetime.datetime.now()
print(f"total query took: ", (total_end-total_start).total_seconds())
</code></pre>
<p>I got this error ===></p>
<pre><code>asyncpg.exceptions._base.InterfaceError: cannot perform operation: another operation is in progress
</code></pre>
<h2>Basically, this fix makes multiple coroutines share the same database connection, which causes this error.</h2>
<p>Now I'm stuck on this problem; please help me resolve it.</p>
<p>My question: <strong>how can we fix the <code>asynchronous version</code> to take advantage of <code>asyncpg</code> and <code>asyncio</code>?</strong></p>
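<p>The usual pattern is one pool connection per task, acquired inside the task with <code>async with pool.acquire()</code> and released as soon as the query finishes. A self-contained sketch with a stand-in pool (no real database; query latency simulated with <code>asyncio.sleep</code>, so the timings are illustrative only):</p>

```python
import asyncio
import time
from contextlib import asynccontextmanager

class FakePool:
    """Stand-in for asyncpg's pool: a semaphore caps concurrent connections."""

    def __init__(self, size: int):
        self._sem = asyncio.Semaphore(size)

    @asynccontextmanager
    async def acquire(self):
        async with self._sem:
            yield self  # a real pool would yield a Connection here

    async def fetch(self, sql: str):
        await asyncio.sleep(0.05)  # simulated query latency
        return [{"ok": True}]

async def run_query(pool: FakePool, sql: str):
    # One connection per task, acquired inside the task: queries overlap,
    # and no two coroutines ever share a connection (sharing one is what
    # raises "another operation is in progress" in asyncpg).
    async with pool.acquire() as conn:
        return await conn.fetch(sql)

async def main() -> float:
    pool = FakePool(size=20)
    start = time.monotonic()
    await asyncio.gather(*(run_query(pool, "SELECT 1") for _ in range(20)))
    elapsed = time.monotonic() - start
    print(f"20 concurrent queries took {elapsed:.2f}s")
    return elapsed

asyncio.run(main())
```

With real asyncpg the same shape applies: replace <code>FakePool</code> with the pool from <code>asyncpg.create_pool</code> and keep the <code>async with pool.acquire()</code> inside each task, rather than acquiring (or sharing) connections before <code>gather</code>.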
|
<python><postgresql><asynchronous><pool><asyncpg>
|
2023-06-28 03:57:43
| 2
| 717
|
DFX Nguyễn
|
76,569,730
| 2,612,592
|
Python Windows Service (pywin32) works in debug mode but fails as an installed Windows Service
|
<p>I made a websocket echo server and wanted to test it as a service running in the background.
The code for the Windows service in Python was taken from <a href="https://thepythoncorner.com/posts/2018-08-01-how-to-create-a-windows-service-in-python/" rel="nofollow noreferrer">here</a> and it uses <code>pywin32</code>.</p>
<p><strong>The code works well in debug mode</strong> <code>python server.py debug</code> (meaning client receives message echoed back from server).</p>
<p>When I install it with <code>python server.py install</code> (only admin mode works), it appears in the Windows Services list, but when I try to start it, an error message appears: "<em>Windows could not start Python Echo WebSocket Server in local Computer. For more info...... error code 536870913.</em>"</p>
<ul>
<li>Python version: 3.10.4</li>
<li>Windows 11</li>
</ul>
<p><strong>Base class for Windows Service (SMWinservice.py):</strong></p>
<pre><code>import socket
import win32serviceutil
import servicemanager
import win32event
import win32service
class SMWinservice(win32serviceutil.ServiceFramework):
'''Base class to create winservice in Python'''
_svc_name_ = 'pythonService'
_svc_display_name_ = 'Python Service'
_svc_description_ = 'Python Service Description'
@classmethod
def parse_command_line(cls):
'''
ClassMethod to parse the command line
'''
win32serviceutil.HandleCommandLine(cls)
def __init__(self, args):
'''
Constructor of the winservice
'''
win32serviceutil.ServiceFramework.__init__(self, args)
self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
socket.setdefaulttimeout(60)
def SvcStop(self):
'''
Called when the service is asked to stop
'''
self.stop()
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
win32event.SetEvent(self.hWaitStop)
def SvcDoRun(self):
'''
Called when the service is asked to start
'''
self.start()
servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
servicemanager.PYS_SERVICE_STARTED,
(self._svc_name_, ''))
self.main()
def start(self):
'''
Override to add logic before the start
eg. running condition
'''
pass
def stop(self):
'''
Override to add logic before the stop
eg. invalidating running condition
'''
pass
def main(self):
'''
        Main method to be overridden to add logic
'''
pass
# entry point of the module: copy and paste into the new module
# ensuring you are calling the "parse_command_line" of the new created class
if __name__ == '__main__':
SMWinservice.parse_command_line()
</code></pre>
<p><strong>Server implementation (server.py):</strong></p>
<pre><code>import asyncio
import websockets
from SMWinservice import SMWinservice
DOMAIN = 'localhost'
PORT = XXX # Replace XXX with integer
class EchoServerService(SMWinservice):
_svc_name_ = "EchoWSService"
_svc_display_name_ = "Python Echo WebSocket Server"
_svc_description_ = "A python written service of a WebSocket Echo Server running in localhost:XXX"
def start(self):
self.isrunning = True
def stop(self):
self.isrunning = False
def process_message(self, message):
print(message)
return message
async def handle(self, ws_client):
async for message in ws_client:
result = self.process_message(message)
await ws_client.send(result)
async def async_main(self):
async with websockets.serve(self.handle, DOMAIN, PORT) as _:
await asyncio.Future()
def main(self):
if self.isrunning:
asyncio.run(self.async_main())
if __name__ == '__main__':
EchoServerService.parse_command_line()
</code></pre>
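<p>One thing that helps diagnose this class of failure: an installed service has no console, so any exception raised in <code>SvcDoRun</code> (commonly an import or environment difference versus debug mode) is invisible. A hedged sketch of a file-logging wrapper that could be applied to the service entry point (the log path and logger name below are assumptions, not part of pywin32):</p>

```python
import functools
import logging

def log_crashes(log_path: str):
    """Decorator: write any exception from the wrapped entry point to a file."""
    logger = logging.getLogger("service_debug")
    logger.setLevel(logging.DEBUG)
    logger.addHandler(logging.FileHandler(log_path))

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                # logger.exception records the full traceback to the file
                logger.exception("service entry point crashed")
                raise
        return wrapper
    return decorator

# Hypothetical usage inside the service class:
# @log_crashes(r"C:\temp\echo_service.log")
# def SvcDoRun(self): ...
```

With the traceback on disk, the real cause behind the generic 536870913 status is usually obvious (for example, the service running under a different Python or missing packages).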
|
<python><service><windows-services><python-asyncio><pywin32>
|
2023-06-28 02:45:11
| 1
| 587
|
Oliver Mohr Bonometti
|
76,569,467
| 247,542
|
Apache WSGI IPC
|
<p>I have a Django app running behind an Apache/ModWSGI server with multiple processes.</p>
<p>I'd like to implement a feature where settings, such as model field labels, are stored in the database and dynamically updated in the UI.</p>
<p>To avoid slowing down each Django request, I don't want to pull all model field names from the database in every request. I just want to pull them when the Django instance first loads, and then update them once if they're changed.</p>
<p>It's pretty trivial to create a Django signal to hook onto my field name model, and dynamically update the field's "verbose name" value in memory. However, that only updates the process for that request. The other N-1 processes being run by Apache will still be using the old value.</p>
<p>How would I tell the other processes to update a specific setting?</p>
<p>Ideally, I want a very light-weight interprocess communication method to push a "field name changed" event to the other processes, which they would then consume and do that lightweight task of updating the value in their memory.</p>
<p>There are a ton of multiprocessing tools for Python, but are there any specific ones for a task like this, especially for Django+Apache?</p>
<p>I find it hard to believe no one's ever had to make their ModWSGI processes talk to each other, but when I search for Django/Python IPC tools, I either get the tutorials for the standard multiprocessing package, or message consuming services like Celery, which aren't quite what I'm looking for.</p>
<p>I could bake something myself using a custom thread in the WSGIHandler that reads/writes from a shared queue, but I don't want to reinvent the wheel.</p>
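<p>One lightweight pattern (a sketch under assumptions, not a Django or mod_wsgi API) is a version counter in a store all processes can already reach, such as the database or a cache. Each process compares one cheap value per request and reloads the labels only when it changes, which avoids any explicit process-to-process messaging:</p>

```python
class LabelCache:
    """Per-process cache of field labels, refreshed on a version bump."""

    def __init__(self, shared):
        # `shared` is any store visible to all processes (cache, DB table);
        # here it only needs get(key). A plain dict stands in for it below.
        self.shared = shared
        self.version = None
        self.labels = {}

    def get_labels(self):
        v = self.shared.get("labels_version")  # one cheap read per request
        if v != self.version:                  # another process bumped it
            self.labels = dict(self.shared.get("labels"))  # full reload, rare
            self.version = v
        return self.labels

# Demonstration with a dict standing in for the shared store:
store = {"labels_version": 1, "labels": {"name": "Name"}}
cache = LabelCache(store)
assert cache.get_labels() == {"name": "Name"}
store["labels"] = {"name": "Customer name"}
store["labels_version"] = 2   # "signal" written by another process
assert cache.get_labels() == {"name": "Customer name"}
```

The Django signal on the field-name model then only needs to increment the version in the shared store; every worker picks the change up on its next request.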
|
<python><django><multiprocessing><ipc>
|
2023-06-28 01:13:18
| 1
| 65,489
|
Cerin
|
76,569,410
| 6,085,595
|
Python get imported modules from source code returned by inspect.getsource
|
<p>I'm using <code>inspect.getsource</code> to retrieve the source code of functions. However, since some variables are imported at top level of the module containing the functions, it is not returned by <code>inspect.getsource</code>. For example, if we have</p>
<pre><code>import numpy as np
def sqrt(x):
return np.sqrt(x)
</code></pre>
<p>then <code>inspect.getsource(sqrt)</code> will not return the line <code>import numpy as np</code>. Is it possible to automatically retrieve what <code>np</code> here refers to? The import can also be other forms, such as <code>from some_module import some_func</code>, <code>from some_module import *</code>, <code>import some_module as some_name</code>.</p>
<p>Right now I'm using this function in a trace function registered with <code>sys.settrace</code>, so the solution can be either static (without running the code) or dynamic (when running the code, e.g. by examining the frames).</p>
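<p>A dynamic starting point (a sketch; it only catches top-level names the function's bytecode references, not names reached indirectly) is to resolve the function's referenced names through <code>fn.__globals__</code>, where every import form (<code>import x</code>, <code>import x as y</code>, <code>from m import f</code>, <code>from m import *</code>) ultimately binds its names:</p>

```python
import math
import types

def global_dependencies(fn):
    # co_names lists the global (and attribute) names the bytecode references;
    # resolving them through fn.__globals__ recovers what each name is bound to.
    return {
        name: fn.__globals__[name]
        for name in fn.__code__.co_names
        if name in fn.__globals__
    }

def my_sqrt(x):
    return math.sqrt(x)

deps = global_dependencies(my_sqrt)
assert deps == {"math": math}

# A module object carries enough information to reconstruct an import line:
for name, value in deps.items():
    if isinstance(value, types.ModuleType):
        print(f"import {value.__name__} as {name}")
```

For <code>from m import f</code>, the resolved value is a function rather than a module, so its origin can be read from <code>value.__module__</code> instead.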
|
<python><reflection>
|
2023-06-28 00:54:43
| 0
| 2,577
|
DiveIntoML
|
76,569,327
| 3,604,745
|
Google Document AI Python Query Throws "ValueError: Unknown field for ProcessRequest: document_type"... base64 encoding throws another error
|
<p>I'm running the sample query for Python using an OCR Google Document AI processor. The only difference between my query and this sample query:</p>
<pre><code>process_document_sample(
project_id="99999FAKE",
location="us",
processor_id="99999FAKE",
file_path="/path/to/local/pdf"
)
</code></pre>
<p>is that I'm using JPEG instead of PDF. It throws:</p>
<blockquote>
<p>ValueError: Unknown field for ProcessRequest: document_type</p>
</blockquote>
<p>So, I figured I need to add this:</p>
<pre><code># Import the base64 encoding library.
import base64
# Pass the image data to an encoding function.
def encode_image(image):
with open(image, "rb") as image_file:
encoded_string = base64.b64encode(image_file.read())
return encoded_string
</code></pre>
<p>So, I tried:</p>
<pre><code># Function to encode the image as base64
def encode_image(image_path):
with open(image_path, "rb") as image_file:
encoded_string = base64.b64encode(image_file.read()).decode("utf-8")
return encoded_string
# File path to the JPEG image
image_path = "merged_images/fake.jpeg"
# Encode the image as base64
encoded_image = encode_image(image_path)
# Call the process_document_sample function
process_document_sample(
project_id="99999FAKE",
location="us",
processor_id="99999FAKE",
file_path=encoded_image
)
</code></pre>
<p>Unfortunately, that leads to:</p>
<blockquote>
<p>OSError: [Errno 63] File name too long:</p>
</blockquote>
<p>I guess that's because you only need base64 for a JSON-based request, but that still leaves me with the original error.</p>
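<p>For reference, the Python client takes the file's raw bytes plus an explicit MIME type, rather than a base64 string passed as <code>file_path</code>. The sketch below derives the MIME type with the standard library; the <code>documentai</code> lines are left commented because they are written from the documented client API rather than tested here:</p>

```python
import mimetypes

# The processor needs to know it is receiving a JPEG rather than a PDF.
image_path = "merged_images/fake.jpeg"  # path from the question
mime_type, _ = mimetypes.guess_type(image_path)
assert mime_type == "image/jpeg"

# Sketch of the request body (google-cloud-documentai, untested here):
# with open(image_path, "rb") as f:
#     raw_document = documentai.RawDocument(content=f.read(), mime_type=mime_type)
# request = documentai.ProcessRequest(name=processor_name, raw_document=raw_document)
```

Base64 is only needed when building the JSON REST request by hand; the Python client handles the encoding of the raw bytes itself.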
|
<python><google-cloud-platform><cloud-document-ai>
|
2023-06-28 00:20:39
| 1
| 23,531
|
Hack-R
|
76,569,224
| 12,946,401
|
Visualizing 3D video sequence in python
|
<p>I have a stack of 100 images of size 300 x 300 which represent a 3D volume. I have another 100 of these 3D volumes as a sequence. I have been trying to visualize this 3D image sequence data as a video where I can watch how the volume is changing over time and move around the scene to see the changes in the volume from different views.</p>
<p>I tried doing this in matplotlib, but it only works on 3D point data, and I could not figure out how to make it visualize a 3D stack of images as a video. I also did not see support for this in mayavi or plotly, and I was wondering if there is a package that can do this visualization out of the box. I would like to see the video sequence as a MIP (maximum intensity projection) and, if possible, slice through the volumes.</p>
|
<python><multidimensional-array><graphics><3d><voxel>
|
2023-06-27 23:44:26
| 1
| 939
|
Jeff Boker
|
76,569,196
| 531,928
|
How to clean up additional object for use with flask
|
<p>The main looks like:</p>
<pre><code>class SpecialObject:
raise NotYetImplementedError
am = SpecialObject()
if __name__ == "__main__":
def exit_cleanup(sig, frame):
del am
signal.signal(signal.SIGINT, exit_cleanup)
app = MakeApp()
app.run(...)
</code></pre>
<p>Then when I hit <kbd>Ctrl + C</kbd>, I get a <code>local variable 'am' referenced before assignment</code> error. Any idea what's going on?</p>
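<p>A sketch of the likely fix: <code>del am</code> inside <code>exit_cleanup</code> makes <code>am</code> local to that function for its entire body, so Python raises the unbound-local error before ever consulting the module-level name. Declaring it <code>global</code> avoids that (the class below is a stand-in for the question's <code>SpecialObject</code>):</p>

```python
import sys

class SpecialObject:
    """Stand-in for the question's object needing cleanup."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

am = SpecialObject()

def exit_cleanup(sig, frame):
    # Without `global`, `del am` makes `am` a local name for the whole
    # function body, triggering "referenced before assignment".
    global am
    am.close()
    del am
    sys.exit(0)

# Simulate the SIGINT handler firing:
obj = am
try:
    exit_cleanup(None, None)
except SystemExit:
    pass
assert obj.closed
```

An alternative worth considering is registering the cleanup with <code>atexit.register</code> instead of a signal handler, which sidesteps the scoping issue entirely.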
|
<python><flask>
|
2023-06-27 23:34:30
| 1
| 1,531
|
Gunslinger
|
76,569,147
| 1,601,703
|
How can I make VS Code format my Python code?
|
<p>I have following Python code:</p>
<pre><code>check_files(original_file = original_file_name,
used_file = used_file_name,
unused_file = unused_file_name)
</code></pre>
<p>I want to make it instead to look like:</p>
<pre><code>check_files(original_file = original_file_name,
            used_file = used_file_name,
            unused_file = unused_file_name)
</code></pre>
<p>I also want to correct the formatting not only for function calls but also, in the same way, for dictionary key/value pairs, etc.</p>
<p>For example, in RStudio, if I select the code and press <kbd>CTRL + I</kbd>, RStudio will correct the formatting as I have described above. Is there a similar way to correct formatting in VS Code?</p>
|
<python><visual-studio-code>
|
2023-06-27 23:14:54
| 3
| 7,080
|
vasili111
|
76,569,146
| 9,537,439
|
Python version used within the Apache Airflow package in a Docker image
|
<p>I'm using the PowerShell from VSCode in W10. To be able to use Airflow in W10 I'm using a docker image. I have "downloaded" an apache/airflow docker image with:</p>
<pre><code>(my_env) PS E:\my_project> docker pull apache/airflow
</code></pre>
<p>Then when I check the Python version used in airflow within the docker image:</p>
<pre><code>(my_env) PS E:\my_project> docker run apache/airflow airflow version
2.6.2
(my_env) PS E:\my_project> docker run apache/airflow python --version
Python 3.7.17
(my_env) PS E:\my_project> python --version
Python 3.11.3
</code></pre>
<p>So the Airflow image is the latest version, 2.6.2 (as of June 2023), but that image uses Python 3.7. The virtual environment I'm using has Python 3.11. Should I use 3.7 in my environment to avoid problems, or will there be no conflicts at all?</p>
|
<python><docker><airflow>
|
2023-06-27 23:14:53
| 1
| 2,081
|
Chris
|
76,569,013
| 3,295,036
|
Passing values to nested attributes in python dataclasses
|
<p>I'm trying to create instances of a Python dataclass that has nested attributes; when I pass an attribute that belongs to a nested one, it should replace the actual value with the provided one.
Here is my attempt:</p>
<pre><code>from dataclasses import dataclass, field
from typing import Optional
@dataclass(kw_only=True)
class Attributes:
correlation_id: Optional[str] = None
routing_key: Optional[str] = None
@dataclass(kw_only=True)
class Metadata:
workflow: Optional[str] = None
workflow_state_id: Optional[str] = None
attributes: Optional[Attributes] = field(default=None)
def __post_init__(self):
if self.attributes is None:
self.attributes = Attributes()
@dataclass(kw_only=True)
class Message:
version: Optional[int] = field(default=0)
metadata: Optional[Metadata] = field(default=None)
def __post_init__(self):
if self.metadata is None:
self.metadata = Metadata()
@dataclass(kw_only=True)
class Command(Message):
...
@dataclass(kw_only=True)
class SomeCommand(Command):
name: str
description: str
c1 = SomeCommand(name="awesome command", description="desc", routing_key='some key')
print(c1)
</code></pre>
<p>My expected response should be:</p>
<pre><code>SomeCommand(version=0, metadata=Metadata(workflow=None, workflow_state_id=None, attributes=Atributes(correlation_id=None, routing_key='some key')), name='awesome command', description='desc')
</code></pre>
<p>But it doesn't work and shows the error:</p>
<pre><code>TypeError: SomeCommand.__init__() got an unexpected keyword argument 'routing_key'
</code></pre>
<p>Is it possible to do this with Python dataclasses? If so, what should I change? Thanks in advance.</p>
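<p>For comparison, a generated dataclass <code>__init__</code> only accepts the fields declared on that class, which is why <code>routing_key</code> is rejected at the top level. A self-contained sketch (classes trimmed, <code>kw_only</code> dropped so it also runs on Python versions before 3.10) that routes known nested keys through a hypothetical <code>create</code> classmethod:</p>

```python
from dataclasses import dataclass, field, fields
from typing import Optional

@dataclass
class Attributes:
    correlation_id: Optional[str] = None
    routing_key: Optional[str] = None

@dataclass
class Metadata:
    workflow: Optional[str] = None
    attributes: Attributes = field(default_factory=Attributes)

@dataclass
class SomeCommand:
    name: str
    description: str
    version: int = 0
    metadata: Metadata = field(default_factory=Metadata)

    @classmethod
    def create(cls, **kwargs):
        # Route any kwarg that names an Attributes field into the nested
        # object; everything else goes to the normal dataclass __init__.
        attr_names = {f.name for f in fields(Attributes)}
        attrs = {k: kwargs.pop(k) for k in list(kwargs) if k in attr_names}
        cmd = cls(**kwargs)
        for key, value in attrs.items():
            setattr(cmd.metadata.attributes, key, value)
        return cmd

c1 = SomeCommand.create(name="awesome command", description="desc",
                        routing_key="some key")
assert c1.metadata.attributes.routing_key == "some key"
assert c1.metadata.attributes.correlation_id is None
```

Note also the <code>default_factory</code> pattern here, which replaces the <code>__post_init__</code> nesting in the question and avoids sharing one mutable default across instances.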
|
<python><python-dataclasses>
|
2023-06-27 22:29:20
| 2
| 1,111
|
maudev
|
76,568,665
| 9,343,043
|
os.mkdir(os.path.join($folder1,$folder2)) adding "\" before $folder2 when running with shell wrapper
|
<p>I wrote a shell/SGE wrapper for a python script for image analysis research. The first step in my python script is to make folders to copy the output of my python script <code>SC_qc.py</code> for each subject (as seen below):</p>
<pre><code># -*- coding: utf-8 -*-
"""
Spyder Editor
This is a temporary script file.
"""
import numpy
import matplotlib.pyplot as plt
import nibabel as nib
import sys
import os
# Get subject IDs and put them in list
txtfile = sys.argv[1]
subjects = [ s.strip() for s in open(txtfile, 'r').readlines() ]
comp_space = "\\\\share\\org\\comp_space\\doej\\SC_QC\\"
sc_space = "\\\\share\\org\\projects\\autism\\Protocols\\StructuralConnectivity\\"
for folder in subjects:
try:
os.mkdir(os.path.join(comp_space,folder))
except FileExistsError:
pass
</code></pre>
<p>The problem is, there seems to be a "/" added before the subject ID that is stored as <code>folder</code> (obtained by iterating over the subject IDs read into <code>subjects</code> from the text file).</p>
<pre><code>Executing on: 2118ffn001
Executing in: /share/org/projects/autism/
Executing at: Tue Jun 27 16:46:04 EDT 2023
Traceback (most recent call last):
File "/share/comp_space/doej/SC_qc.py", line 25, in <module>
os.mkdir(os.path.join(comp_space,folder))
FileNotFoundError: [Errno 2] No such file or directory: '\\\\share\\org\\comp_space\\doej\\SC_QC\\/subject_01'
</code></pre>
<p>The folder <code>\\\\share\\org\\comp_space\\doej\\SC_QC\\</code> itself exists, and I did attempt to use <code>dos2unix</code> to no avail (got the same error).</p>
<p>Is there any chance I should use <code>sed</code> to remove a hidden character from my text file, and what could be that hidden character? Or is there something wrong with how I am reading <code>subjects</code> as a variable in
<code>subjects = [ s.strip() for s in open(txtfile, 'r').readlines() ]</code>? My <code>txtfile</code> variable is just all my subject IDs listed one per line in a text file.</p>
<p>Thanks in advance!</p>
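For what it's worth, the traceback path mixes Windows UNC backslashes with the "/" that <code>os.path.join</code> inserts on Linux: since the SGE job runs on a Linux node, the share needs to be addressed by its Linux mount point, and <code>pathlib</code> keeps the joining portable. A hedged sketch (the mount point in the comment is an assumption based on the paths in the traceback):

```python
from pathlib import Path


def make_subject_dirs(base_dir, txtfile):
    """Create one output folder per subject ID listed (one per line) in txtfile."""
    base = Path(base_dir)
    # strip() removes trailing newlines and any stray whitespace per line
    subjects = [s.strip() for s in Path(txtfile).read_text().splitlines() if s.strip()]
    for folder in subjects:
        # pathlib joins with the native separator, no manual backslashes needed
        (base / folder).mkdir(exist_ok=True)
    return [base / s for s in subjects]


# On the Linux side the share would be its mount point, e.g. (hypothetical):
# make_subject_dirs("/share/org/comp_space/doej/SC_QC", "subjects.txt")
```

`mkdir(exist_ok=True)` replaces the `try/except FileExistsError` dance from the original script.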
|
<python><operating-system><strip>
|
2023-06-27 21:13:55
| 2
| 871
|
florence-y
|
76,568,570
| 6,440,589
|
How to track the version of a Python package used by Azure Machine Learning?
|
<p>My team has deployed a <strong>Python</strong> script onto <strong>Azure Machine Learning (AML)</strong>: among other things, this script processes data stored in a pandas dataframe. Lately, the script suddenly stopped working, returning an error related to the <code>pd.NA</code> values used to denote missing data.</p>
<p>Replacing the <code>pd.NA</code> values in the pandas dataframe with <code>np.nan</code> fixed this issue, but it is still unclear how this error happened in the first place.</p>
<p>According to <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html" rel="nofollow noreferrer">pandas' webpage concerning missing data</a>:</p>
<blockquote>
<p>Experimental: the behaviour of pd.NA can still change without warning.</p>
</blockquote>
<p>But our dependencies have not changed lately; for instance, we have been using <strong>pandas 1.1</strong> since the beginning of our project. Our <strong>conda.yml</strong> file also stayed unchanged over the past few months.</p>
<p>Is it possible that Azure Machine Learning uses versions of toolboxes and packages different from those specified in the <strong>conda.yml</strong> file? If so, how can I keep track of the versions in use?</p>
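One way to see what actually runs in the deployed environment (rather than what <strong>conda.yml</strong> requests) is to log the resolved versions from inside the script itself; a minimal sketch:

```python
import sys

import numpy as np
import pandas as pd

# Log the interpreter and package versions resolved at runtime;
# compare these against the pins in conda.yml.
print("python:", sys.version.split()[0])
print("pandas:", pd.__version__)
print("numpy:", np.__version__)
```

If these prints end up in the AML job logs, a drift between the logged versions and the conda.yml pins would explain a sudden change in `pd.NA` behaviour.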
|
<python><pandas><conda><nan><azure-machine-learning-service>
|
2023-06-27 20:53:31
| 1
| 4,770
|
Sheldon
|
76,568,543
| 19,989,634
|
'Updater' object has no attribute 'dispatcher' Error
|
<p>I am trying to make a Telegram-to-Discord bridge bot for select users' messages, but no matter what I do I run into this error: <strong>AttributeError: 'Updater' object has no attribute 'dispatcher'</strong></p>
<p>I cannot find a replacement for dispatcher anywhere. This is the first time I'm attempting this.</p>
<p>Here is my current code:</p>
<pre><code>import telegram
from telegram.ext import *
from discord.ext import commands
import discord
import asyncio
telegram_token = ""
discord_token = ""
# Discord Channel ID
discord_channel_id = "112865"
telegram_bot = telegram.Bot(token=telegram_token)
intents = discord.Intents.default()
discord_bot = discord.Client(intents=intents)
def telegram_message_handler(update, context):
message = update.effective_message
text = message.text
username = message.from_user.username
# Send the message to Discord
discord_message = f"[@{username}]: {text}"
discord_channel = discord_bot.get_channel(int(discord_channel_id))
asyncio.run_coroutine_threadsafe(discord_channel.send(discord_message), discord_bot.loop)
updater = Updater(telegram_token, update_queue=None)
dispatcher = updater.dispatcher
message_handler = MessageHandler(filters.text & ~filters.command, telegram_message_handler)
dispatcher.add_handler(message_handler)
@discord_bot.event
async def on_ready():
print(f"Connected to Discord as {discord_bot.user.name}")
@discord_bot.event
async def on_message(message):
if message.author == discord_bot.user:
return
# Send the message to Telegram
telegram_chat_id = "dav52"
await telegram_bot.send_message(chat_id=telegram_chat_id, text=message.content)
# Start the Telegram bot
updater.start_polling()
# Start the Discord bot
discord_bot.run(discord_token)
</code></pre>
|
<python><discord><telegram><telegram-bot>
|
2023-06-27 20:49:27
| 0
| 407
|
David Henson
|
76,568,476
| 1,494,193
|
How to load a pandas dataframe from ORM SqlAlchemy from an existing database?
|
<p>I want to load an entire database table into a Pandas DataFrame using <strong>SqlAlchemy ORM</strong>. I have successfully queried the number of rows in the table like this:</p>
<pre class="lang-py prettyprint-override"><code>from local_modules import RemoteConnector
from sqlalchemy import Integer, Column
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.automap import automap_base
import pandas as pd
Base = automap_base()
class Calculations(Base):
__tablename__ = "calculations"
id = Column("ID", Integer, primary_key=True)
Base.prepare()
connection = RemoteConnector('server', 'calculations_database')
connection.connect()
Session = sessionmaker(bind=connection.engine)
session = Session()
result = session.query(Calculations).count()
print('Record count:', result)
</code></pre>
<p>Output:</p>
<pre><code>Record count: 13915
Process finished with exit code 0
</code></pre>
<p>If possible (and it seems it can be done), I want to define the table using automap_base from sqlalchemy.ext.automap rather than stating each column manually. I did declare 'id' because I got an error asking me to set a primary key (is there a better way to do this?).</p>
<p>In order to get any results I've been able to do the following:</p>
<pre class="lang-py prettyprint-override"><code>results = session.query(Calculations).all()
</code></pre>
<p>Output:</p>
<pre><code>[<__main__.Calculations object at 0x000001AF2324F510>, <__main__.Calculations object at 0x000001AF2324F6D0>, <__main__.Calculations object at 0x000001AF2324F810>, <__main__.Calculations object at 0x000001AF2324F910>, <__main__.Calculations object at 0x000001AF2324FA50>, <__main__.Calculations object at 0x000001AF2324FB90>, <__main__.Calculations object at 0x000001AF2324FCD0>, <__main__.Calculations object at 0x000001AF2324FE10>, <__main__.Calculations object at 0x000001AF2324FF50>, <__main__.Calculations object at 0x000001AF22CD40D0>, <__main__.Calculations object at 0x000001AF22CD4210>, <__main__.Calculations object at 0x000001AF22CD4350>, <__main__.Calculations object at 0x000001AF22CD4490>, <__main__.Calculations object at 0x000001AF22CD45D0>, <__main__.Calculations object at 0x000001AF22CD4710>, <__main__.Calculations object at 0x000001AF22CD4850>, <__main__.Calculations object at 0x000001AF22CD4990>, <__main__.Calculations object at 0x000001AF22CD4AD0>, <__main__.Calculations object at 0x000001AF22CD4C10>, <__main__.Calculations object at 0x000001AF22CD4D50>, <__main__.Calculations object at 0x000001AF22CD4E90>, <__main__.Calculations object at 0x000001AF22CD4FD0>, <__main__.Calculations object at 0x000001AF22CD5110>, <__main__.Calculations object at 0x000001AF22CD5250>, <__main__.Calculations object at 0x000001AF22CD53D0>, <__main__.Calculations object at 0x000001AF22CD5510>, <__main__.Calculations object at 0x000001AF22CD5650>, <__main__.Calculations object at 0x000001AF22CD5790>, <__main__.Calculations object at 0x000001AF22CD58D0>, <__main__.Calculations object at 0x000001AF22CD5A10>, <__main__.Calculations object at 0x000001AF22CD5B50>, <__main__.Calculations object at 0x000001AF22CD5C90>, <__main__.Calculations object at 0x000001AF22CD5DD0>, <__main__.Calculations object at 0x000001AF22CD5F10>, <__main__.Calculations object at 0x000001AF22CD6050>, <__main__.Calculations object at 0x000001AF22CD6190>, <__main__.Calculations object at 
0x000001AF22CD62D0>, <__main__.Calculations object at 0x000001AF22CD6410>, <__main__.Calculations object at 0x000001AF22CD6550>, <__main__.Calculations object at 0x000001AF22CD6690>, <__main__.Calculations object at 0x000001AF22CD67D0>, <__main__.Calculations object at 0x000001AF22CD6910>, <__main__.Calculations object at 0x000001AF22CD6A50>, <__main__.Calculations object at 0x000001AF22CD6B90>, <__main__.Calculations object at 0x000001AF22CD6CD0>, <__main__.Calculations object at 0x000001AF22CD6E10>, <__main__.Calculations object at 0x000001AF22CD6F50>, <__main__.Calculations object at 0x000001AF22CD7090>]
</code></pre>
<p>This shows all the columns in the table as an object. My best attempt to extract the values has been:</p>
<pre class="lang-py prettyprint-override"><code>for result in results:
print(result.__dict__)
</code></pre>
<p>Output:</p>
<pre><code>{'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x00000232E0A91730>, 'id': 1.0}
{'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x00000232E0A90E90>, 'id': 2.0} ... and so on
</code></pre>
<p>Not only do I not get the values, it also does not print the columns, only the ID I defined in the class. I thought that with automap_base the columns would transfer automatically. When I do define them, they appear, like this:</p>
<pre class="lang-py prettyprint-override"><code>class Calculations(Base):
__tablename__ = "Calculations"
id = Column("Trade ID", Integer, primary_key=True)
Amount = Column("Amount", Integer)
Yield = Column("Yield", Integer)
</code></pre>
<p>Output:</p>
<pre><code>{'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x000001BFD2092090>, 'Amount': 34303.0, 'Yield': 0.01141, 'id': 1.0}
{'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x000001BFD2091010>, 'Amount': 10000.0, 'Yield': 0.01214, 'id': 2.0}
{'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x000001BFD2090FB0>, 'Amount': 43515.0, 'Yield': 0.01206, 'id': 3.0}
... and so on
</code></pre>
<p>What I would like to ultimately do is something like this as suggested in <a href="https://stackoverflow.com/questions/29525808/sqlalchemy-orm-conversion-to-pandas-dataframe">SQLAlchemy ORM conversion to pandas DataFrame</a>:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_sql_query(sql=session.query(Calculation).all(), con=connection.engine)
</code></pre>
<p>But I get the following error:</p>
<pre><code> raise exc.ObjectNotExecutableError(statement) from err
sqlalchemy.exc.ObjectNotExecutableError: Not an executable object: [<__main__.CALC_TFSB_INVESTMENTS object at 0x000001FF42966E50>, ... and so on
</code></pre>
<p>I have also tried:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_sql_query(sql=select(Calculations), con=connection.engine)
print(df.head())
</code></pre>
<p>How can I load the DataFrame? How can I automate the schema detection, I suppose using automap_base? How can I improve my code; are there other things I can add, perhaps dunder fields, to make things better?</p>
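For context, the `ObjectNotExecutableError` appears because `pd.read_sql` wants an executable statement, not the list of already-fetched ORM objects that `.query(...).all()` returns. A self-contained sketch against an in-memory SQLite database (the table and column names here are illustrative, not your actual schema):

```python
import pandas as pd
from sqlalchemy import Column, Float, Integer, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Calculations(Base):
    __tablename__ = "calculations"
    id = Column("ID", Integer, primary_key=True)
    amount = Column("Amount", Float)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Calculations(amount=34303.0), Calculations(amount=10000.0)])
    session.commit()

# select(Calculations) is an executable statement; .all() results are not
df = pd.read_sql(select(Calculations), engine)
print(df)
```

The same shape should apply with an automapped class: pass the mapped class to `select()` and hand the statement plus the engine (or connection) to `pd.read_sql`.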
|
<python><pandas><dataframe><sqlalchemy><orm>
|
2023-06-27 20:38:00
| 2
| 401
|
Gorgonzola
|
76,568,234
| 4,333,809
|
Failed running pyinstaller for universal2 arch. Some dependencies are not compatible
|
<p>PyInstaller can set the target architecture when working with .spec files via the <code>target_arch=</code> argument to EXE().</p>
<p>I want to set it to <code>universal2</code> to build a macho fat file for running on both Intel and arm64 macOS available architectures.</p>
<p>However, some of the dependencies are Python system <code>.so</code> libraries built for the current arch only (in my case arm64).</p>
<p>This leads to the following error when trying to build the executable:</p>
<pre><code> raise IncompatibleBinaryArchError(f"{display_name} is not a fat binary!")
PyInstaller.utils.osx.IncompatibleBinaryArchError:
/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/lib-dynload/_struct.cpython-310-darwin.so is not a fat binary!
</code></pre>
<p>So apparently <code>_struct.cpython-310-darwin.so</code> is built for arm64 only. Is there any way to build a fat version of my project?</p>
<p>Where can I get a build of the Python framework compiled as a fat binary?</p>
<p>Is there a way to auto-download the right library, compiled for the desired arch, on demand?</p>
<p>Thanks!</p>
|
<python><macos><pyinstaller>
|
2023-06-27 19:56:18
| 2
| 5,214
|
Zohar81
|
76,568,119
| 8,549,369
|
Python type checking not recognizing type information in docstrings
|
<p>I am using an internal library called client_aiohttp, which was generated using <a href="https://openapi-generator.tech/docs/generators/python-aiohttp/" rel="nofollow noreferrer">openapi-generator/python-aiohttp</a>.</p>
<p>For example here is the add_hosts method in the Clusters class:</p>
<pre class="lang-py prettyprint-override"><code># client_aiohttp/api/clusters.py
class Clusters:
def add_hosts(self, cluster_name, **kwargs): # noqa: E501
"""add_hosts # noqa: E501
A human readable description
:param cluster_name: (required)
:type cluster_name: str
:param body:
:type body: ApiHostRefList
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:return: Returns the result object.
:rtype: ApiHostRefList
"""
return self.add_hosts_with_http_info(cluster_name, **kwargs) # noqa: E501
</code></pre>
<p>However, the VS Code Python language server (Pylance) and mypy do not seem to recognize the type information specified in the docstring.</p>
<p>I have tried searching for a tool or option that can generate a .pyi stub file from the docstrings or enable the evaluation of docstrings, but I haven't found any.</p>
<p>How can I make the type checker recognize the type information specified in the docstrings for proper type checking?</p>
|
<python><python-typing><aiohttp><openapi-generator><docstring>
|
2023-06-27 19:40:46
| 1
| 537
|
tomitheninja
|
76,568,019
| 1,873,403
|
How can I exclude non-matching records from a graphql using the gql library in python?
|
<p>I am running a query against a GraphQL server, but it returns more records than can be handled and times out. I can't figure out how to get pagination to work properly, but really I only need the subset of records where the language field matches 'english', and I can't figure out how to express that filter either.</p>
<pre><code>query DocumentsAndSoftware {
  technicalDocumentsEntries(limit: 3000) {
    ... on technicalDocuments_document_Entry {
      id
      title
      technicalDocuments {
        ... on technicalDocuments_document_BlockType {
          fallbackFileUrl
          documentLanguage
          file {
            id
            url
            dateModified
          }
        }
      }
    }
  }
}
</code></pre>
<p>I have tried this</p>
<pre><code>query
languages($documentLanguage: {DocumentsAndSoftware:
{technicalDocuments_document_Entry {technicalDocuments:
{technicalDocuments_document_BlockType: {some: {documentLanguage:
{eq:'english'}}}}}}})
</code></pre>
<p>To prepend the query</p>
<p>But I am getting this error:</p>
<pre><code>GraphQLSyntaxError: Syntax Error: Expected Name, found '{'.
GraphQL request:1:36
1 | query languages($documentLanguage: {DocumentsAndSoftware: {technicalDocuments_do
| ^
| cument_Entry {technicalDocuments: {technicalDocuments_document_BlockType: {some:
</code></pre>
<p>I don't really understand the logic around how this querying/filtering works here. I understand that I'm trying to find records where the documentLanguage field matches the term 'english', and I think I've specified the path for that match, but the error doesn't make any sense to me. Can someone help me out?</p>
<p>Thanks</p>
|
<python><graphql><gqlquery>
|
2023-06-27 19:25:49
| 0
| 1,180
|
Brad Davis
|
76,567,959
| 1,766,544
|
mysqlx connector doesn't work with large integers
|
<p>The mysqlx implementation doesn't appear to support large integers. It deduces the type, but doesn't seem to account for overflow.</p>
<p>Looking at the source, I think the only integer types are "v_signed_int" and "v_unsigned_int". The numbers that work are pretty small (32 bit), so I must be doing something wrong, but even if it's a bug, where would I report it?</p>
<p>This is a quick and dirty reproduction</p>
<pre><code>import mysqlx
session = mysqlx.get_session({
'host': 'localhost',
'port': 33060,
'user': 'redacted',
'password': 'redacted',
})
document1 = {
'_id': 'unique-name-1',
'small': 888888888,
'stuff': 'other stuff',
}
document2 = {
'_id': 'unique-name-2',
'large': 8888888888,
'stuff': 'other stuff',
}
try:
db = session.create_schema('databasename')
except:
db = session.get_schema('databasename')
try:
collection = db.create_collection('tablename')
except:
collection = db.get_collection('tablename')
try:
collection.add(document1).execute()
collection.add(document2).execute()
print('done')
except Exception as e:
print(type(e))
print(e)
</code></pre>
<blockquote>
<p><class 'SystemError'><br />
<built-in function serialize_message> returned a result with an exception set</p>
</blockquote>
<p>select * from tablename;</p>
<blockquote>
<p>doc {"_id": "unique-name-1", "small": 888888888, "stuff": "other stuff"}<br />
_id 0x756E697175652D6E616D652D31<br />
_json_schema {"type": "object"}</p>
</blockquote>
<p>sql server 8.0 is configured with</p>
<blockquote>
<p>loose_mysqlx_port=33060</p>
</blockquote>
<p>pip freeze | grep mysql</p>
<blockquote>
<p>mysqlclient==2.2.0<br />
mysql-connector-python==8.0.33</p>
</blockquote>
<p>This is as far as I could get in the libraries: Protobuf.mysqlxpb.serialize_message(self._msg) raises the SystemError where self._msg is:</p>
<blockquote>
<p>{'_mysqlxpb_type_name': 'Mysqlx.Crud.Insert', 'collection': {'_mysqlxpb_type_name': 'Mysqlx.Crud.Collection', 'name': b'test', 'schema': b'test_docguardian_documents'}, 'projection': [], 'row': [{'_mysqlxpb_type_name': 'Mysqlx.Crud.Insert.TypedRow', 'field': [{'_mysqlxpb_type_name': 'Mysqlx.Expr.Expr', 'type': 7, 'object': {'_mysqlxpb_type_name': 'Mysqlx.Expr.Object', 'fld': [{'_mysqlxpb_type_name': 'Mysqlx.Expr.Object.ObjectField', 'key': b'_id', 'value': {'_mysqlxpb_type_name': 'Mysqlx.Expr.Expr', 'type': 2, 'literal': {'_mysqlxpb_type_name': 'Mysqlx.Datatypes.Scalar', 'type': 8, 'v_string': {'_mysqlxpb_type_name': 'Mysqlx.Datatypes.Scalar.String', 'value': b'unique-name-1'}}}}, {'_mysqlxpb_type_name': 'Mysqlx.Expr.Object.ObjectField', 'key': b'small', 'value': {'_mysqlxpb_type_name': 'Mysqlx.Expr.Expr', 'type': 2, 'literal': {'_mysqlxpb_type_name': 'Mysqlx.Datatypes.Scalar', 'type': 1, 'v_signed_int': 888}}}, {'_mysqlxpb_type_name': 'Mysqlx.Expr.Object.ObjectField', 'key': b'large', 'value': {'_mysqlxpb_type_name': 'Mysqlx.Expr.Expr', 'type': 2, 'literal': {'_mysqlxpb_type_name': 'Mysqlx.Datatypes.Scalar', 'type': 1, 'v_signed_int': 888888888888888}}}, {'_mysqlxpb_type_name': 'Mysqlx.Expr.Object.ObjectField', 'key': b'stuff', 'value': {'_mysqlxpb_type_name': 'Mysqlx.Expr.Expr', 'type': 2, 'literal': {'_mysqlxpb_type_name': 'Mysqlx.Datatypes.Scalar', 'type': 8, 'v_string': {'_mysqlxpb_type_name': 'Mysqlx.Datatypes.Scalar.String', 'value': b'other stuff'}}}}]}}]}], 'args': [], 'data_model': 1, 'upsert': False}</p>
</blockquote>
|
<python><mysql>
|
2023-06-27 19:16:14
| 0
| 5,981
|
Kenny Ostrom
|
76,567,918
| 50,385
|
How to have mypy ignore classes using a specific decorator?
|
<p>Sometimes as a syntactic convenience, I define class decorators that turn class definitions into data rather than a new class. Or they return the class that was passed in almost entirely unchanged, except for adding one field containing the data that was generated from the class definition.</p>
<p>However when I do this I am often annotating fields with objects that are not real types. For example:</p>
<pre><code>@turn_into_data
class Foo:
bar: 5
</code></pre>
<p>This might build a dictionary <code>{ "bar" : 5 }</code>. <code>5</code> is not a type in Python. If I also typecheck my code with <code>mypy</code> or <code>pytype</code> they will error here. There is a standard workaround:</p>
<pre><code>@turn_into_data
@typing.no_type_check
class Foo:
bar: 5
</code></pre>
<p>But this has to be repeated every time I use <code>@turn_into_data</code>. I can't just have <code>turn_into_data</code> call it because the type checker isn't actually running either decorator, it just special cases seeing <code>typing.no_type_check</code> in the decorator list and disables checking in that case.</p>
<p>I could change all my type annotations so that instead of <code>bar: 5</code> I write <code>bar: Annotated[Any, 5]</code>. <code>Annotated</code> was introduced as a way to introduce extra data in an annotation that isn't strictly for type checking. But this is much more verbose. Again there's the temptation to try to define my decorator to work around it, having it wrap the declared type in <code>Annotated[Any, T]</code> but again by then it's already too late, the static checker doesn't know about the transformation.</p>
<p>Is there any way to say my decorator should also disable type checking?</p>
|
<python><python-decorators><mypy><python-typing>
|
2023-06-27 19:08:16
| 0
| 22,294
|
Joseph Garvin
|
76,567,891
| 5,662,005
|
Convert Databricks notebook to .py file in workspace
|
<p>The actual problem I'm trying to solve is that I'm using mkdocs/mkdocs-materials for my documentation. But that tool can't work with notebook type files.</p>
<p>So, as a clumsy workaround, my plan is to have an intermediate step that creates a copy of each notebook's content as a .py file in the same workspace folder, have mkdocs build off of those copies, and then delete the copies before pushing.</p>
<p>For example I've got a notebook type object in my workspace. Display looks like this:</p>
<pre><code>%sql
select * from something
%sql
select * from something_else
def some_dummy_function():
print('dummy')
</code></pre>
<p>When you export a notebook as a source python file via the GUI, you get this with all the tagging for syntax.</p>
<pre><code># Databricks notebook source
# MAGIC %sql
# MAGIC select * from something
# COMMAND ----------
# MAGIC %sql
# MAGIC select * from something_else
def some_dummy_function():
print('dummy')
</code></pre>
<p>I want to get this programmatically, from a notebook in a workspace.</p>
<p>Or if you've got suggestions for the root problem at hand ... all ears.</p>
|
<python><databricks><mkdocs><aws-databricks>
|
2023-06-27 19:02:17
| 1
| 3,899
|
Error_2646
|
76,567,884
| 2,661,518
|
python regex split at last occurance of special characters
|
<p>I want to split the string below and keep only the sub-text <code>Test results don't match for id</code>
from <code>[error][test][new] Test results don't match for id 1911</code>.</p>
<p>I used the following to strip the number out: <code>re.split(r'\d+', x)[0].strip()</code>, but I want to strip <code>[error][test][new]</code> out too.</p>
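One way to do both in a single pass is a `re.sub` whose pattern alternates between the leading bracket tags and the trailing number; a sketch:

```python
import re

x = "[error][test][new] Test results don't match for id 1911"

# Left alternative: one or more [...] groups at the start, plus trailing space.
# Right alternative: the number (and surrounding whitespace) at the end.
msg = re.sub(r"^(?:\[[^\]]*\])+\s*|\s*\d+\s*$", "", x)
print(msg)  # → Test results don't match for id
```

`re.sub` replaces every non-overlapping match, so both ends are stripped in one call.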
|
<python><regex>
|
2023-06-27 19:00:42
| 1
| 2,775
|
user2661518
|
76,567,814
| 1,019,129
|
Suppress LLamaCpp stats output
|
<p>How can I suppress the LlamaCpp stats output in LangChain? Equivalent code:</p>
<pre><code>llm = LlamaCpp(model_path=..., ....)
llm('who is Caesar')
> who is Caesar ?
Julius Caesar was a Roman general and statesman who played a critical role in the events that led to the demise of the Roman Republic and the rise of the Roman Empire. He is widely considered one of Rome's greatest warlords and is often ranked alongside his adopted son, Octavian, as one of the two most important figures in ancient
llama_print_timings: load time = 532.05 ms
llama_print_timings: sample time = 32.74 ms / 71 runs ( 0.46 ms per token, 2168.40 tokens per second)
llama_print_timings: prompt eval time = 29011.08 ms / 432 tokens ( 67.16 ms per token, 14.89 tokens per second)
llama_print_timings: eval time = 10284.56 ms / 70 runs ( 146.92 ms per token, 6.81 tokens per second)
llama_print_timings: total time = 39599.38 ms
Rome.
</code></pre>
|
<python><langchain><large-language-model><llamacpp>
|
2023-06-27 18:45:11
| 2
| 7,536
|
sten
|
76,567,774
| 3,532,564
|
tensorflow gradients are 0 even though the loss is non-zero?
|
<p>Here is a minimal example:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import numpy as np
def setup_mat():
np_data = np.asarray(
[
[-1, -2, -3],
[-0.1, -0.2, -0.3],
[0.1, 0.2, 0.3],
[1, 2, 3]
]
)
return tf.convert_to_tensor(
np_data, dtype_hint=tf.float32)
def report_binary_xent():
Xs = setup_mat()
weights = tf.Variable([[1.0], [2.0], [3.0]], shape=(3, 1), trainable=True)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=False)
labels = np.asarray([0.0, 1.0, 0.0, 0.0])
with tf.GradientTape(persistent=True) as g:
g.watch([weights])
logits = tf.matmul(Xs, weights)
preds = tf.math.softmax(logits)
bce_res = bce(labels, preds)
dydx = g.gradient(bce_res, [preds, logits, weights])
print(dydx)
</code></pre>
<p>I'd expect the gradients to be non-zero as:</p>
<ol>
<li>the weights are marked as trainable</li>
<li>I'm watching the weights</li>
<li>the loss is non-zero</li>
</ol>
<p>Could I get any insight into this? It's not immediately clear to me what's wrong.</p>
|
<python><tensorflow>
|
2023-06-27 18:39:21
| 1
| 2,258
|
IanQ
|
76,567,771
| 14,230,633
|
How to register something other than a model in MLFlow
|
<p>I know how to register a model (e.g. <code>mlflow.sklearn.log_model()</code> and then register the model object) but I'm hoping to register a dataframe so that later I can pull the Production version of that dataframe. Any idea if/how that's possible?</p>
<p>I tried wrapping a class around <code>mlflow.pyfunc.PythonModel</code> and doing something like the following</p>
<pre><code>class MyModel(mlflow.pyfunc.PythonModel):
def __init__(self, df):
super().__init__()
self.df = df
</code></pre>
<p>But when I registered that model and loaded it later, the <code>df</code> attribute did not exist.</p>
|
<python><pandas><mlflow>
|
2023-06-27 18:38:42
| 1
| 567
|
dfried
|
76,567,728
| 1,872,286
|
python symmetric start/end regex does not work
|
<p>Python 3</p>
<p>read a string from the file:</p>
<pre><code>with open(filepath, "r", encoding="utf-8") as f:
content_string = f.read()
</code></pre>
<p>It looks line this:</p>
<pre><code>---
section-1-line-1
section-1-line-2
section-1-line-3
---
section-2-line-1
section-2-line-2
section-2-line-3
---
section-3-line-1
section-3-line-2
section-3-line-3
---
</code></pre>
<p>I need to remove the entire section that contains the line <code>section-2-line-2</code>.</p>
<p>So the end result should be</p>
<pre><code>---
section-1-line-1
section-1-line-2
section-1-line-3
---
section-3-line-1
section-3-line-2
section-3-line-3
---
</code></pre>
<p>So I create regexp:</p>
<pre><code>rx = re.compile(r'---[^-{3}]+section-2-line-2[^-{3}]+---', re.S)
content_string_modified = re.sub(rx, '', content_string)
</code></pre>
<p>The regexp above does nothing, i.e. it does not match. If I remove the closing <code>---</code> from the regex (<code>r'---[^-{3}]+section-2-line-2[^-{3}]+'</code>), it matches partially: it finds the starting negated class, but the quantifier of the closing negated class does not behave as intended, i.e. it ignores <code>{3}</code> and stops at the first dash rather than at the first three dashes, leaving behind a chunk of the section that should be removed:</p>
<pre><code>---
section-1-line-1
section-1-line-2
section-1-line-3
-2-line-3
---
section-3-line-1
section-3-line-2
section-3-line-3
---
</code></pre>
<p>Why? How can I make both the starting and ending <code>[^-{3}]+</code> work? Thanks!</p>
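For reference, `[^-{3}]` is a character class (any single character other than `-`, `{`, `3`, `}`), so it cannot mean "not three dashes". A negative lookahead applied per line can express that; a sketch reproducing the sample data:

```python
import re

content = (
    "---\n"
    "section-1-line-1\nsection-1-line-2\nsection-1-line-3\n"
    "---\n"
    "section-2-line-1\nsection-2-line-2\nsection-2-line-3\n"
    "---\n"
    "section-3-line-1\nsection-3-line-2\nsection-3-line-3\n"
    "---\n"
)

# (?!---$) "tempers" each consumed line: the group may only swallow lines
# that are not a --- delimiter, so the match stays inside one section.
rx = re.compile(
    r"^---\n(?:(?!---$).*\n)*?section-2-line-2\n(?:(?!---$).*\n)*(?=^---$)",
    re.M,
)
result = rx.sub("", content)
print(result)
```

The trailing lookahead `(?=^---$)` leaves the next delimiter in place, so the surviving sections stay correctly fenced.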
|
<python><regex><class><multiline>
|
2023-06-27 18:31:44
| 3
| 649
|
SwissNavy
|
76,567,692
| 15,528,750
|
Hydra: How to express None in config files?
|
<p>I have a very simple Python script:</p>
<pre class="lang-py prettyprint-override"><code>import hydra
from omegaconf import DictConfig, OmegaConf
@hydra.main(version_base="1.3", config_path=".", config_name="config")
def main(cfg: DictConfig) -> None:
if cfg.benchmarking.seed_number is None:
raise ValueError()
if __name__ == "__main__":
main()
</code></pre>
<p>And here the config file:</p>
<pre><code>benchmarking:
seed_number: None
</code></pre>
<p>Unfortunately, the Python script does <em>not</em> raise an error. Instead, when I print the type of <code>cfg.benchmarking.seed_number</code>, it is <code>str</code>. How can I pass <code>None</code> instead?</p>
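For what it's worth, in YAML the token <code>None</code> is just an ordinary string; the null scalar is spelled <code>null</code> (or <code>~</code>), which OmegaConf loads as Python <code>None</code>. A sketch of the config:

```yaml
benchmarking:
  seed_number: null   # loaded as Python None, so the `is None` check fires
```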
|
<python><fb-hydra>
|
2023-06-27 18:24:12
| 3
| 566
|
Imahn
|
76,567,665
| 1,761,806
|
pydantic.error_wrappers.ValidationError: 1 validation error for XXX field required (type=value_error.missing)
|
<p>I've edited the question, as I've been going round in circles trying to fix the original error. Here's the latest:</p>
<p><strong>Current error</strong></p>
<pre><code>pydantic.main.BaseModel.__init__\npydantic.error_wrappers.ValidationError: 1 validation error for ConversationBufferMemory2\nchat\n field required (type=value_error.missing)\n', 'args': '[]', 'kwargs':
</code></pre>
<p><strong>llm.py</strong></p>
<pre><code> chat = Chat.objects.get(id=chat_id)
memory = ConversationBufferMemory2()
memory.set_chat(chat)
</code></pre>
<p><strong>models2.py</strong></p>
<pre><code>from langchain.memory.buffer import ConversationBufferMemory
class ConversationBufferMemory2(ConversationBufferMemory):
"""Buffer for storing conversation memory."""
chat:Chat
#def __init__(self,*args, **kwargs):
# super().__init__(*args, **kwargs)
def set_chat(self,chat:Chat):
self.chat=chat
@property
def buffer(self) -> Any:
"""String buffer of memory. Each message is separated by a newline."""
if self.return_messages:
return self.chat.messages # Think this is the right return type ? https://python.langchain.com/docs/modules/memory/
else:
return get_buffer_string(
self.chat.messages,
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
</code></pre>
<pre><code>ValidationError:
File "/code/apps/llm_module/llm.py", line 183, in get_response\n memory = ConversationBufferMemory2()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/local/lib/python3.11/site-packages/langchain/load/serializable.py", line 74, in __init__\n super().__init__(**kwargs)\n File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__\npydantic.error_wrappers.ValidationError: 1 validation error for ConversationBufferMemory2\nchat\n field required (type=value_error.missing)\n', 'args': '[]', 'kwargs':
</code></pre>
|
<python><django><py-langchain>
|
2023-06-27 18:19:07
| 1
| 7,637
|
Reddspark
|
76,567,398
| 3,130,747
|
Ignore pylint no-member errors for the module psycopg2.errors using pyproject.toml
|
<p>How can I ignore the pylint error:</p>
<pre><code>E1101: Module 'psycopg2.errors' has no 'RaiseException' member (no-member)
</code></pre>
<p>by a setting in the <code>pyproject.toml</code>. Looking in <code>pylint --help</code> there seems to be <code>ignored-modules</code> under the <code>Main</code> section, so I've tried adding:</p>
<pre><code>[tool.pylint."MAIN"]
ignored-modules = ["psycopg2.errors"]
</code></pre>
<p>to the file, but it has not suppressed/ignored the warnings related to the module; I feel as though I'm missing something obvious.</p>
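A hedged sketch of the <code>pyproject.toml</code>: section names are conventionally lowercase in pylint's TOML reader, and for <code>no-member</code> specifically the <code>generated-members</code> option under the typecheck section may be the better fit (the glob shown is an assumption):

```toml
[tool.pylint.main]
ignored-modules = ["psycopg2.errors"]

[tool.pylint.typecheck]
# Suppress only no-member for attributes pylint cannot see (they are
# generated dynamically by psycopg2), keeping other checks active.
generated-members = ["psycopg2.errors.*"]
```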
|
<python><psycopg2><pylint>
|
2023-06-27 17:38:46
| 1
| 4,944
|
baxx
|
76,567,309
| 3,357,735
|
Pass variable name as column name in Dynamodb UpdateExpression python
|
<p>How can I pass a variable name into a DynamoDB UpdateExpression, using Python?</p>
<p><code>file</code> is the variable that I want to use in the UpdateExpression. When I use it like below, I get:</p>
<pre><code>"[ERROR] ClientError: An error occurred (ValidationException) when calling the UpdateItem operation: The document path provided in the update expression is invalid for update
response = table.update_item(
    Key={"runid": item["runid"]},
    UpdateExpression="SET " + file + "= :val1, update_date = :val2",
    ExpressionAttributeValues={
        ":val1": item[file],
        ":val2": item["update_date"],
    },
)
</code></pre>
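<p>The usual remedy (a sketch, assuming <code>file</code> holds a plain attribute name) is to refer to the dynamic attribute through a placeholder in <code>ExpressionAttributeNames</code>. This matters in particular if the name contains a <code>.</code> (e.g. a filename), which DynamoDB would otherwise parse as a nested document path:</p>

```python
def build_update(file, item):
    """Build update_item kwargs with a #placeholder for the dynamic column.
    `file` and `item` are assumed to look like they do in the question."""
    return {
        "Key": {"runid": item["runid"]},
        "UpdateExpression": "SET #col = :val1, update_date = :val2",
        # the placeholder keeps reserved words and dots in the name safe
        "ExpressionAttributeNames": {"#col": file},
        "ExpressionAttributeValues": {
            ":val1": item[file],
            ":val2": item["update_date"],
        },
    }

# kwargs = build_update("report.csv", item)
# response = table.update_item(**kwargs)
```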
|
<python><amazon-dynamodb>
|
2023-06-27 17:26:24
| 1
| 3,071
|
Neethu Lalitha
|
76,567,286
| 10,755,032
|
Pyspark - How to calculate the average on the text data
|
<p>I have taken a look at this: <a href="https://stackoverflow.com/questions/57030626/how-to-use-pyspark-to-calculate-average-on-rdd">How to use Pyspark to calculate average on RDD</a> did not help.</p>
<p>My data is in a text file in the following way</p>
<pre><code>robert 43
daniel 64
andrew 99
jake 56
peter 67
sophia 56
marie 62
--
robert 55
daniel 89
andrew 0
jake 11
peter 0
sophia 67
marie 93
</code></pre>
<p>I want to create an RDD, calculate the average marks for each student, and then store the result in a DataFrame. How do I do it?</p>
<p>The result I want:</p>
<pre><code>FirstName AvgMarks
robert 22
daniel 20
andrew 50
jake 10
...
</code></pre>
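<p>The aggregation shape is easy to pin down first in plain Python (a sketch; in PySpark the same steps would map onto <code>textFile(...)</code>, a filter for the <code>--</code> separators, a map to <code>(name, mark)</code> pairs, and <code>aggregateByKey</code> followed by a division and <code>toDF(["FirstName", "AvgMarks"])</code> — column names assumed):</p>

```python
from collections import defaultdict

def average_marks(lines):
    # Accumulate (sum, count) per name, skipping blank lines and the
    # '--' block separators, then divide once at the end.
    totals = defaultdict(lambda: [0, 0])
    for line in lines:
        line = line.strip()
        if not line or line.startswith("--"):
            continue
        name, mark = line.split()
        totals[name][0] += int(mark)
        totals[name][1] += 1
    return {name: s / c for name, (s, c) in totals.items()}

print(average_marks(["robert 43", "daniel 64", "--", "robert 55", "daniel 89"]))
# {'robert': 49.0, 'daniel': 76.5}
```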
|
<python><pyspark>
|
2023-06-27 17:23:12
| 1
| 1,753
|
Karthik Bhandary
|
76,566,900
| 1,286,937
|
Pytest and gcloud libraries throwing DefaultCredentialsError
|
<p>I'm trying to write a unit test for my Flask application, however I'm having issues with these two google libs:</p>
<ul>
<li>google-cloud-secret-manager</li>
<li>google-cloud-bigquery</li>
</ul>
<p>As soon a new instance of <code>bigquery.Client</code> or <code>secretmanager.SecretManagerServiceClient</code> is created, I get this error:</p>
<blockquote>
<p>ERROR tests/integration/health/test_ping.py::test_ping -
google.auth.exceptions.DefaultCredentialsError: Your default
credentials were not found. To set up Application Default Credentials,
see <a href="https://cloud.google.com/docs/authentication/external/set-up-adc" rel="nofollow noreferrer">https://cloud.google.com/docs/authentication/external/set-up-adc</a>
for more information.</p>
</blockquote>
<p>I'm using the <a href="https://python-dependency-injector.ets-labs.org/" rel="nofollow noreferrer">dependency_injector</a>, and I have this container:</p>
<pre><code>from dependency_injector import containers, providers
from google.cloud import bigquery, secretmanager
class Core(containers.DeclarativeContainer):
bigquery_client = providers.Singleton(bigquery.Client)
secretmanager_client = providers.Singleton(secretmanager.SecretManagerServiceClient)
secret_manager_repository = providers.Singleton(
SecretManagerRepository, secretmanager_client=secretmanager_client
)
products_repository = providers.Singleton(
ProductsRepository,
bigquery_client=bigquery_client,
)
class Container(containers.DeclarativeContainer):
wiring_config = containers.WiringConfiguration(__name__)
core = providers.Container(
Core,
)
</code></pre>
<p>And then I use the Container above when creating the app:</p>
<pre><code>def create_app():
app = Flask(__name__)
container = Container()
container.core.init_resources()
app.container = container
register_blueprints(app)
return app
</code></pre>
<p>This is my test:</p>
<pre><code>def test_ping():
"""
GIVEN there is a Flask app running
WHEN the '/api/v1/' path is requested (GET)
THEN check that the response is valid
"""
app = create_app()
response = app.test_client().get("/api/v1/health")
assert b"ok" in response.data
</code></pre>
<p>I can't keep a key.json file in my codebase, with my gcloud credentials. How can I fix this error?</p>
<p>I've tried mocking the 2 gcloud clients, but it didn't work:</p>
<pre><code>with mock.patch('google.cloud.bigquery.Client') as mock_client, \
mock.patch('google.cloud.secretmanager.SecretManagerServiceClient') as mock_secret_client:
mock_client.return_value = None
mock_secret_client.return_value = None
app = ...
</code></pre>
<p>Any ideas?</p>
|
<python><google-bigquery><pytest><gcloud><google-secret-manager>
|
2023-06-27 16:29:31
| 0
| 5,151
|
Victor
|
76,566,712
| 19,053,778
|
Updating values of an old dataframe with a new dataframe using multiple columns and with different sizes
|
<p>I'm trying to update a dataframe with another dataframe. They have the same columns, but different sizes.</p>
old_df = pd.DataFrame({"Primary_ID": ["ID1", "ID1", "ID1", "ID2", "ID2", "ID2", "ID3", "ID3"], "Secondary_ID": [1, 2, 3, 4, 5, 6, 7, 8], "Value": [100, 200, 300, 400, 5000, 600, 700, 8000]})
new_df = pd.DataFrame({"Primary_ID": ["ID3", "ID2"], "Secondary_ID": [8, 5], "Value": [800, 500]})
</code></pre>
<p>I was trying to update using <code>old_df.update(new_df)</code>, but this matches rows by index, which goes wrong here since the frames have different sizes (we have to match the exact values of all the columns except the "Value" column in both dataframes, and update that way).</p>
<p>I also tried looping through each row of both the old and new dataframes and matching the values that way, but that's not optimal, as these dataframes are quite large.</p>
<p>Any advice?</p>
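<p>One way to express the intended match-on-columns update (a sketch, writing the IDs as strings since the bare <code>ID1</code> etc. in the question are presumably placeholders): move the key columns into the index so that <code>update</code> aligns on them instead of on row position:</p>

```python
import pandas as pd

old_df = pd.DataFrame({"Primary_ID": ["ID1", "ID1", "ID1", "ID2", "ID2", "ID2", "ID3", "ID3"],
                       "Secondary_ID": [1, 2, 3, 4, 5, 6, 7, 8],
                       "Value": [100, 200, 300, 400, 5000, 600, 700, 8000]})
new_df = pd.DataFrame({"Primary_ID": ["ID3", "ID2"],
                       "Secondary_ID": [8, 5],
                       "Value": [800, 500]})

keys = ["Primary_ID", "Secondary_ID"]
result = old_df.set_index(keys)
result.update(new_df.set_index(keys))   # aligns on the key columns, not on row position
result = result.reset_index()
print(result)
```

Note that <code>update</code> may promote the updated column to float because of the intermediate alignment.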
|
<python><pandas>
|
2023-06-27 15:59:39
| 2
| 496
|
Chronicles
|
76,566,690
| 19,198,552
|
How can I fill text continously into a tkinter text widget?
|
<p>I generate numbers in a loop. On stdout you can watch the progress, because there is a print statement in the loop. I also want to make the progress visible in a tkinter text widget, so at each print I copy the new number into the text widget. But in the text widget the numbers only show up once the loop has finished.</p>
<pre><code>import tkinter as tk
def count():
number = 0
while number<10000:
number += 1
print(number)
text.insert(tk.END, number)
text.see(tk.END)
root = tk.Tk()
text = tk.Text(root)
text.grid()
button = tk.Button(root, text="run", command=count)
button.grid()
root.mainloop()
</code></pre>
<p>I expected to see the numbers in the text widget in a similar way they show at stdout.</p>
|
<python><tkinter>
|
2023-06-27 15:57:24
| 2
| 729
|
Matthias Schweikart
|
76,566,688
| 774,133
|
Plot without reordering x values in plotnine
|
<p>Please consider the following code:</p>
<pre class="lang-py prettyprint-override"><code>from collections import OrderedDict as od
import pandas as pd
from plotnine import *
d = {"A1.1": 4, "A1.2": 12, "A1.12": 13, "B10.1": 10}
def sorter(item):
key, _ = item
l, n = key.split(".")
l = ord(l[0])*1000 + int(l[1:])*100
return l+int(n)
d = od(sorted(d.items(), key=sorter))
df = pd.DataFrame.from_dict(d, orient='index', columns=['n_files'])
df.index.name = "fld"
df.reset_index(inplace=True)
print(df.head())
</code></pre>
<p>with output:</p>
<pre><code> fld n_files
0 A1.1 4
1 A1.2 12
2 A1.12 13
3 B10.1 10
</code></pre>
<p>In the actual code, <code>d</code> is a dictionary with folder names as keys and number of files as values. As you can see, the <code>fld</code> column has the rows ordered according to the <code>sorter</code> function. This function serves to order the values using the integer number after the <code>.</code>: without that function values would be ordered lexicographically. For example, <code>A1.12</code> would come before <code>A1.2</code>. Instead, I want <code>A1.12</code> to be after <code>A1.2</code>.</p>
<p>If I plot the data using <code>plotnine.ggplot</code>, values are ordered lexicographically along the x axis.</p>
<pre class="lang-py prettyprint-override"><code>from collections import OrderedDict as od
import pandas as pd
from plotnine import *
d = {"A1.1": 4, "A1.2": 12, "A1.12": 13, "B10.1": 10}
def sorter(item):
key, _ = item
l, n = key.split(".")
l = ord(l[0])*1000 + int(l[1:])*100
return l+int(n)
d = od(sorted(d.items(), key=sorter))
df = pd.DataFrame.from_dict(d, orient='index', columns=['n_files'])
df.index.name = "fld"
df.reset_index(inplace=True)
print(df.head())
(
ggplot(df, aes(x='fld', y='n_files'))
+ geom_bar(stat='identity')
+ scale_x_discrete(name="subfolder")
+ scale_y_continuous(name="n. of files")
+ theme(
figure_size=(12, 4),
axis_text_x=element_text(rotation=90, hjust=1, size=12),
)
)
</code></pre>
<p><a href="https://i.sstatic.net/vCvgm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vCvgm.png" alt="enter image description here" /></a></p>
<p>Question:
how do I tell plotnine to keep the order it finds in the dataframe?</p>
<p>Furthermore, as a secondary question, is there any way to change the title of the two axes without also specifying their type (e.g., <code>scale_x_discrete</code>)?</p>
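<p>One common idiom (drawn from general pandas/ggplot-style practice, not tested against this exact snippet) is to freeze the already-sorted row order as an ordered categorical; the discrete axis then follows the category order instead of re-sorting labels lexicographically:</p>

```python
import pandas as pd

df = pd.DataFrame({"fld": ["A1.1", "A1.2", "A1.12", "B10.1"],
                   "n_files": [4, 12, 13, 10]})
# Freeze the current row order: an ordered Categorical makes the axis
# follow the dataframe instead of sorting labels lexicographically.
df["fld"] = pd.Categorical(df["fld"], categories=df["fld"], ordered=True)
print(list(df["fld"].cat.categories))  # ['A1.1', 'A1.2', 'A1.12', 'B10.1']
```

For the secondary question, plotnine's <code>labs(x="subfolder", y="n. of files")</code> renames the axes without committing to a scale type.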
|
<python>
|
2023-06-27 15:57:00
| 1
| 3,234
|
Antonio Sesto
|
76,566,543
| 2,606,766
|
databricks-connect cannot load module in udf
|
<p>I'm trying to load <code>PyNaCl</code> into a pyspark UDF running on Windows.</p>
<pre class="lang-py prettyprint-override"><code>from nacl import bindings as c
def verify_signature(msg, keys):
c.crypto_sign_ed25519ph_update(...)
...
verify_signature_udf = udf(lambda x: verify_signature(x, public_keys), BooleanType())
data_signed = data.withColumn(
"is_signature_valid", verify_signature_udf("state_values")
)
</code></pre>
<p><code>PyNaCl</code> is installed locally (using <code>databricks-connect</code>) but as I understand it is not installed on the executor. Thus I get this:</p>
<pre><code>File "/databricks/spark/python/pyspark/cloudpickle/cloudpickle.py", line 679, in subimport
__import__(name)
ModuleNotFoundError: No module named 'nacl'
</code></pre>
<p>As described in <a href="https://spark.apache.org/docs/latest/api/python/user_guide/python_packaging.html" rel="nofollow noreferrer">Python Packaging</a> I'm trying to load it like this:</p>
<pre class="lang-py prettyprint-override"><code>import os
os.environ['PYSPARK_PYTHON'] = "./environment/bin/python"
spark = SparkSession.builder.config(
"spark.archives",
"pyspark_venv.tar.gz#environment").getOrCreate()
</code></pre>
<p>No change, same message. If I just extract the nacl package from the tar.gz and store it as zip file and load it like this:</p>
<pre class="lang-py prettyprint-override"><code>spark.sparkContext.addPyFile(path="nacl.zip")
</code></pre>
<p>it gets loaded but I get this error now:</p>
<pre><code>File "/local_disk0/spark-xxx8db3a-5436-4ce8-8ff5-19eaeb4397b4/executor-xxxb7a74-4e1b-40bf-aae2-fc3553155f91/spark-xxx70cb9-482d-42a9-901a-c36f66a42a19/isolatedSparkFiles/0e10cb02-db69-4d63-b7ea-6c2b415fb5d9/nacl.zip/nacl/bindings/crypto_aead.py", line 17, in <module>
from nacl._sodium import ffi, lib
ModuleNotFoundError: No module named 'nacl._sodium'
</code></pre>
<p>Any ideas? Would it work with <a href="https://docs.databricks.com/dev-tools/dbx.html" rel="nofollow noreferrer">dbx</a>? Alternatively is there an option to achieve this without an UDF?</p>
<p><strong>Edit:</strong> In the zip file there are the following sodium components. No additional sodium stuff in the tgz which is not in the zip:</p>
<pre class="lang-bash prettyprint-override"><code>./nacl/bindings/sodium_core.py
./nacl/bindings/__pycache__/sodium_core.cpython-39.pyc
./nacl/_sodium.pyd
</code></pre>
<p><strong>Edit2</strong>: when I move the import into the for loop, databricks-connect runs without a local error, but the error is still raised upon execution on the executor, so this doesn't work either (that was my misunderstanding):</p>
<pre class="lang-py prettyprint-override"><code>
def verify_signature(msg, keys):
for key in keys:
# this will prevent databricks-connect to raise a local error
from nacl import bindings as c
c.crypto_sign_ed25519ph_update(...)
...
verify_signature_udf = udf(lambda x: verify_signature(x, public_keys), BooleanType())
data_signed = data.withColumn(
"is_signature_valid", verify_signature_udf(my_keys)("state_values")
)
</code></pre>
|
<python><pyspark><databricks><databricks-connect><pynacl>
|
2023-06-27 15:41:03
| 0
| 1,945
|
HeyMan
|
76,566,490
| 4,390,160
|
Awaitable objects in Python after 3.10
|
<p>I was surprised by many errors like the following in code that was working without issues on Python 3.10, but failed when run using Python 3.11:</p>
<pre><code>AttributeError: '<Some awaitable class>' object has no attribute 'add_done_callback'
</code></pre>
<p>After some searching, I found that this is because some deprecated asyncio behavior has now been removed. I likely missed the deprecation because of the way the <code>asyncio</code> library is implemented, so it never actually triggered deprecation warnings.</p>
<p>That's water under the bridge, but now I'm having trouble refactoring my code correctly. For example, this code (simplified example) used to work without issues (and still works in Python 3.10):</p>
<pre><code>import asyncio
class ExampleAwaitable:
def __await__(self):
async def _():
return await asyncio.sleep(0)
return _().__await__()
print(f"{asyncio.iscoroutine(ExampleAwaitable())=}")
async def amain():
await asyncio.wait([ExampleAwaitable()])
print("waited!")
asyncio.run(amain())
</code></pre>
<p>But in Python 3.11, it results in:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "will_fail.py", line 20, in <module>
asyncio.run(amain())
File "C:\Program Files\Python311\Lib\asyncio\runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "will_fail.py", line 16, in amain
await asyncio.wait([ExampleAwaitable()])
File "C:\Program Files\Python311\Lib\asyncio\tasks.py", line 418, in wait
return await _wait(fs, timeout, return_when, loop)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\asyncio\tasks.py", line 522, in _wait
f.add_done_callback(_on_completion)
^^^^^^^^^^^^^^^^^^^
AttributeError: 'ExampleAwaitable' object has no attribute 'add_done_callback'
</code></pre>
<p>What is the most straightforward and recommended way to refactor this code, to avoid this error in 3.11 and beyond?</p>
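<p>For reference, since Python 3.8 passing bare coroutines/awaitables to <code>asyncio.wait()</code> was deprecated, and 3.11 removed the automatic wrapping, so <code>wait()</code> now requires Tasks or Futures. A minimal refactor of the example is to wrap the awaitable explicitly:</p>

```python
import asyncio

class ExampleAwaitable:
    def __await__(self):
        async def _():
            return await asyncio.sleep(0, result="done")
        return _().__await__()

async def amain():
    # asyncio.wait() no longer wraps bare awaitables itself (3.11+),
    # so wrap them in a Task/Future first:
    done, _pending = await asyncio.wait({asyncio.ensure_future(ExampleAwaitable())})
    return [t.result() for t in done]

print(asyncio.run(amain()))  # ['done']
```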
|
<python><python-asyncio><deprecated>
|
2023-06-27 15:34:09
| 1
| 32,399
|
Grismar
|
76,566,358
| 2,153,235
|
write_history_file("pyHistory"): 'str' object has no attribute 'mode'
|
<p>I am following <a href="https://stackoverflow.com/a/47595405/2153235">this answer</a> on writing the Python command history to a file, which relies on the <em>readline</em> module and the <em>write_history_file</em> function therein. I have to account for differences in using the Conda prompt on Windows 10, which is just the CMD prompt with environment variables set for Conda. For this use case there is no history file in c:\User.Name, which is typically the case. Additionally, I need <a href="https://stackoverflow.com/a/51964654/2153235"><em>pyreadline3</em></a>.</p>
<p>Here is how I found the "module path" to <em>write_history_file</em> in <em>pyreadline3</em>:</p>
<pre><code>from pprint import pprint
import inspect, pyreadline3
pprint(inspect.getmembers(pyreadline3.Readline))
<...snip...>
('write_history_file',
<function BaseReadline.write_history_file at 0x000001D67D83D280>)]^M^[[25l
<...snip...>
</code></pre>
<p>There are some puzzling terminal-oriented control characters because I used Cygwin's Bash and <code>typescript</code> to launch Conda (see <a href="https://stackoverflow.com/a/76520838/2153235">here</a>), but the "path" is shown to be <em>BaseReadline.write_history_file</em>. I got the syntax of <em>write_history_file</em> from <a href="https://stackoverflow.com/a/47595405/2153235">this answer</a>. Here is how I used it, resulting in an "AttributeError":</p>
<pre><code>>>> pyreadline3.BaseReadline.write_history_file('pyHistory')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
  File "C:\Users\User.Name\.conda\envs\py39\lib\site-packages\pyreadline3\rlmain.py", line 180, in write_history_file
self.mode._history.write_history_file(filename)
AttributeError: 'str' object has no attribute 'mode'
</code></pre>
<p>I get the same AttributeError even if I adorn the file name <code>pyHistory</code> with double-quotations. The only thing I can find on the above AttributeError error is <a href="https://stackoverflow.com/questions/54969871/attributeerror-str-object-has-no-attribute-mode">this Q&A</a> but it doesn't seem to be applicable because the answer there is that the wrong type of argument is being supplied.</p>
<p><strong>How else can I track down the cause of this error?</strong></p>
<p>For me, the simpler the better. I am trying to access a live command history because it helps me experiment and get started with Python (and Conda), coming from Matlab, where the history is always available.</p>
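<p>One likely reading of the traceback (an illustration, not a claim about pyreadline3 internals): the method was called on the <em>class</em>, so the string <code>'pyHistory'</code> was bound as <code>self</code>. A minimal reproduction of the same error shape:</p>

```python
class Demo:
    """Stand-in for a class with an instance method like write_history_file."""
    def save(self, filename=None):
        # expects `self` to be a Demo instance carrying a `.mode` attribute
        return self.mode

# Calling through the class passes the string as `self`,
# reproducing: 'str' object has no attribute 'mode'
try:
    Demo.save("pyHistory")
except AttributeError as exc:
    print(exc)
```

If that is the cause here, calling through an instance, or through the module-level API — <code>import readline; readline.write_history_file("pyHistory")</code> — should avoid it (assumption: pyreadline3 installs itself as the <code>readline</code> backend on Windows).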
|
<python><windows><conda><command-history>
|
2023-06-27 15:18:37
| 1
| 1,265
|
user2153235
|
76,566,282
| 5,722,359
|
How to extract the iid(s) of all the visible top-level items in a ttk.Treeview?
|
<p>In the root window, I created a <code>ttk.Treeview</code> with vertical and horizontal <code>ttk.Scrollbar</code>s. What I would like to do next is extract the <code>iid</code> of all the visible top-level items, i.e. the iid of Restaurant 0, 1, 2, etc., whenever I press the <code>Show Visible Top-Level Items iid</code> button. <strong>What is the best way to achieve this objective?</strong></p>
<p>I experimented using <a href="https://stackoverflow.com/a/76566372/5722359">@JRiggles</a> answer but found that the returned values are not exact. For example, after adjusting the root window to only show:</p>
<p><a href="https://i.sstatic.net/nLe9t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nLe9t.png" alt="picture" /></a></p>
<p>the self.show() method returns:</p>
<pre><code>total_items=10 treeview_top_level_iids=('0', '1', '2', '3', '4', '5', '6', '7', '8', '9')
y_top=0.3333333333333333 y_bottom=0.6
first_visible=3 last_visible=6
visible_tl_idds=['G3', 'G4', 'G5', 'G6']
</code></pre>
<p>The values of <code>visible_tl_idds</code> should have been just <code>['G4', 'G5']</code>.</p>
<p>I have tried revising the <code>self.show()</code> method algorithm to use:</p>
<pre><code>if y_top == 0.0:
first_visible = 0
else:
first_visible = int(y_top * total_items) + 1
if y_bottom != 1.0:
last_visible = int(y_bottom * total_items)
else:
last_visible = total_items - 1
</code></pre>
<p>instead of:</p>
<pre><code>first_visible = int(y_top * total_items)
last_visible = int(y_bottom * total_items)
</code></pre>
<p>The result is closer to reality but still not exact. <code>G6</code> should not be there.</p>
<pre><code>total_items=10 treeview_top_level_iids=('0', '1', '2', '3', '4', '5', '6', '7', '8', '9')
y_top=0.3333333333333333 y_bottom=0.6
first_visible=4 last_visible=6
visible_tl_idds=['G4', 'G5', 'G6']
</code></pre>
<p><strong>Test script:</strong></p>
<pre><code>#!/usr/bin/python3
# -*- coding: utf-8 -*-
try:
import tkinter as tk
import tkinter.ttk as ttk
except:
import Tkinter as tk
import ttk as ttk
class App(ttk.Frame):
def __init__(self, parent, *args, **kwargs):
BG0 = '#aabfe0' #Blue scheme
BG1 = '#4e88e5' #Blue scheme
ttk.Frame.__init__(self, parent, style='App.TFrame', borderwidth=0,
relief='raised', width=390, height=390)
self.parent = parent
self.parent.title('Treeview')
self.parent.geometry('470x350')
self.setStyle()
self.createWidgets(BG0, BG1)
self.rowconfigure(0, weight=1)
self.columnconfigure(0, weight=1)
def setStyle(self):
style = ttk.Style()
style.configure('App.TFrame', background='pink')
def createWidgets(self, BG0, BG1):
# Treeview with scroll bars
columns = [f'Column {i}' for i in range(10)]
self.tree = ttk.Treeview(
self, height=20, selectmode='extended', takefocus=True,
columns=("type", "property A", "property B", "selected"),
displaycolumns=["property A", "property B", "selected"])
self.ysb = ttk.Scrollbar(self, orient=tk.VERTICAL)
self.xsb = ttk.Scrollbar(self, orient=tk.HORIZONTAL)
self.tree.configure(yscrollcommand=self.ysb.set,
xscrollcommand=self.xsb.set)
self.tree.grid(row=0, column=0, columnspan=4, sticky='nsew')
self.ysb.grid(row=0, column=5, sticky='ns')
self.xsb.grid(row=1, column=0, columnspan=4, sticky='ew')
self.tree.column('#0', stretch=True, anchor='w', width=100)
self.tree.column('property A', stretch=True, anchor='n', width=100)
self.tree.column('property B', stretch=True, anchor='n', width=100)
self.tree.column('selected', stretch=True, anchor='n', width=100)
self.tree.heading('#0', text="Type", anchor='w')
self.tree.heading('property A', text='Property A', anchor='center')
self.tree.heading('property B', text='Property B', anchor='center')
self.tree.heading('selected', text='Selected', anchor='center')
for i in range(10):
self.tree.tag_configure(i)
self.tree.insert("", "end", iid=f"G{i}", open=True,
tags=[i, 'Parent'], text=f"Restaurant {i}")
self.tree.insert(f"G{i}", "end", iid=f"C-{i}", text=f"Cookie",
tags=[f"{i}-a", 'Child', "Not Selected"],
values=(f"C-{i}-a", f"{i}-Ca", f"{i}-Cb", False))
self.tree.insert(f"G{i}", "end", iid=f"P-{i}", text=f"Pudding",
tags=[f"P-{i}", 'Child', "Not Selected"],
values=(f"P-{i}", f"{i}-Pa", f"{i}-Pb", False))
# Button
self.showbutton = ttk.Button(self, text="Show Visible Top-Level Items iid", command=self._show)
self.showbutton.grid(row=2, column=0, sticky='nsew')
def _show(self):
self.update_idletasks()
treeview_top_level_iids = self.tree.get_children()
total_items = len(treeview_top_level_iids)
y_top, y_bottom = self.tree.yview()
first_visible = int(y_top * total_items)
last_visible = int(y_bottom * total_items)
# My modifications
# if y_top == 0.0:
# first_visible = 0
# else:
# first_visible = int(y_top * total_items) + 1
# if y_bottom != 1.0:
# last_visible = int(y_bottom * total_items)
# else:
# last_visible = total_items - 1
visible_tl_idds = [f"G{i}" for i in range(first_visible,
last_visible + 1)]
print( f"{total_items=} {treeview_top_level_iids=}")
print( f"{y_top=} {y_bottom=}")
print( f"{first_visible=} {last_visible=}")
print(f"{visible_tl_idds=}")
if __name__ == '__main__':
root = tk.Tk()
app = App(root)
app.grid(row=0, column=0, sticky='nsew')
root.rowconfigure(0, weight=1)
root.columnconfigure(0, weight=1)
root.mainloop()
</code></pre>
|
<python><tkinter><treeview><ttk>
|
2023-06-27 15:08:53
| 2
| 8,499
|
Sun Bear
|