Dataset columns (observed min – max):
QuestionId           int64          74.8M – 79.8M
UserId               int64          56 – 29.4M
QuestionTitle        string         15 – 150 chars
QuestionBody         string         40 – 40.3k chars
Tags                 string         8 – 101 chars
CreationDate         date string    2022-12-10 09:42:47 – 2025-11-01 19:08:18
AnswerCount          int64          0 – 44
UserExpertiseLevel   int64          301 – 888k
UserDisplayName      string         3 – 30 chars
78,373,819
897,272
Python global variable which appears to be correctly set is being returned as None due to module being imported twice?
<p>I was attempting to have a simple singleton in my 'context' module; yes, I know singletons are bad and I hate using one, but this is the least ugly of many bad options for moving forward with refactoring spaghetti code as we move to get rid of context entirely.</p> <p>Here is my first version of the module.</p> <pre><code>_context = None def set_context(context): global _context #assorted error checking you don't care about done here _context = context return context def get_context(): if not _context: raise ValueError(&quot;Attempt to call context before it was initialized&quot;) return _context </code></pre> <p>In my debugger I can see <code>set_context</code> is called and sets <code>_context</code> as I'd expect. However, when <code>get_context</code> is called, <code>_context</code> is None. Odder still, if I set a breakpoint in <code>set_context</code> and check variables in my debugger, it says that <code>_context</code> is None but <code>&lt;my_project&gt;.context._context</code> has a context, even though I think those two should be the same object.</p> <p>Once I put a breakpoint on the original <code>_context = None</code> at the top of the module, I discovered that line was running twice: once in the module where <code>set_context</code> is called and once in the module where <code>get_context</code> was run. So presumably the problem is that I ended up with two copies of the module, and the one I set the context in is different than the one I loaded from. But I thought the whole point of modules was that they only get loaded once? How can I prevent multiple module loads and/or get my singleton to actually be <em>single</em>?</p>
<python><python-3.x><singleton>
2024-04-23 16:23:10
1
6,521
dsollen
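A likely mechanism for the double load described above (hedged, since it depends on how the project is launched): Python keys loaded modules by *name* in `sys.modules`, so the same file imported under two different names — say, plain `context` when a module is run as a script, and `my_project.context` through the package — yields two independent module objects with two independent `_context` globals. The usual fix is to always import the module by one canonical name. The effect can be reproduced deliberately:

```python
import importlib.util
import os
import sys
import tempfile

# Write a tiny module to disk so we can load it under two different names,
# reproducing the "module imported twice" pitfall from the question.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("_context = None\n")
    path = f.name

def load_as(name):
    # Load the file at `path` under the module name `name`.
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    sys.modules[name] = mod
    spec.loader.exec_module(mod)
    return mod

a = load_as("context")             # e.g. imported as a top-level module
b = load_as("my_project.context")  # e.g. imported through the package
print(a is b)                      # False: two distinct module objects
a._context = "ctx"
print(b._context)                  # None: set in one copy, invisible in the other
os.unlink(path)
```

Checking `sys.modules` in the debugger for two entries pointing at the same file is a quick way to confirm this is what happened.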
78,373,809
4,237,254
Field value not being populated except when it's being debugged
<p>I'm having a weird problem where normally <code>field_val</code> should be set to some particular value but is being set to &quot;&quot;. When I debug in vscode and look into the value from the debugger, inspecting it (possibly triggering something), suddenly the variable becomes available. When I'm not debugging the value is empty string. I couldn't understand what's happening here. Is there some kind of lazy evaluation that I'm missing in django forms?</p> <p>I'm trying to keep the submitted value in the datalist when the form submission fails. Trying to take that value from the <code>self.instance</code> which is an instance of my django db model basically.</p> <pre class="lang-py prettyprint-override"><code>class DatalistFieldMixin: def make_field_datalist(self, field_name, choices, choice_format=lambda c: c): field_val = getattr(self.instance, field_name) or &quot;&quot; # Create the datalist widget while setting the html &quot;value&quot; attribute. # And set the widget as the form field. if self.fields.get(field_name): widget = DatalistWidget( choices=[choice_format(c) for c in choices], attrs={&quot;value&quot;: field_val}, ) self.fields[field_name].widget = widget class ObjectAdminForm(DatalistFieldMixin, forms.ModelForm): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.make_field_datalist( field_name=&quot;broadsign_active_container_id&quot;, choices=BroadsignContainer.objects.all(), choice_format=lambda c: ( c.container_id, &quot; - &quot;.join((str(c.container_id), c.container_name)), ), ) </code></pre> <p>Real quick gif explaining my issue: <a href="https://i.sstatic.net/RPlkP.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RPlkP.gif" alt="issue_gif" /></a></p>
<python><django><django-admin>
2024-04-23 16:20:02
1
2,831
BcK
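One pattern that produces exactly the "value appears only under the debugger" behaviour in the question above is a lazily evaluated object: a debugger's variable pane calls `repr()`/`str()` on everything it displays, and that inspection can force evaluation as a side effect. Whether that is the precise mechanism in this Django form is a guess, but the effect is easy to reproduce with a toy stand-in (this class is illustrative only, not Django's actual lazy machinery):

```python
class LazyProxy:
    # Toy stand-in for a value that is only computed when something renders
    # the object as a string - which is exactly what a debugger's variable
    # inspector does when you hover over or expand a value.
    def __init__(self, func):
        self._func = func
        self._value = ""        # looks like an empty string until inspected

    def __str__(self):
        self._value = self._func()   # inspection evaluates as a side effect
        return self._value

val = LazyProxy(lambda: "populated")
before = val._value   # "" - what the running code sees
str(val)              # simulate the debugger inspecting the object
after = val._value    # now "populated"
```

If this is the cause, forcing evaluation explicitly (e.g. `str(field_val)`) before handing the value to the widget would make the behaviour consistent with and without the debugger.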
78,373,765
10,426,490
How to trigger the `Blob Renamed` EventGrid Event using Azure Python SDK?
<p>When looking at the possible event types for an EventGrid Subscription, one is <code>Blob Renamed</code>.</p> <p><a href="https://i.sstatic.net/ktqJf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ktqJf.png" alt="enter image description here" /></a></p> <p>How is this event triggered when using the Azure Python SDK?</p> <p>I don't see a <code>Rename Blob</code> method in the SDK.</p>
<python><azure-eventgrid><azure-python-sdk>
2024-04-23 16:12:01
1
2,046
ericOnline
78,373,660
11,447,747
How can I establish an SSH tunnel to a database for Dask?
<p>I want to connect to postgres db using Dask where the database is behind an SSH tunnel. While there are methods to create SSH tunnels in Python, I haven't found a straightforward way to integrate this with Dask's connection mechanisms.</p> <p>One potential solution is to locally port forward the database connection and then connect to localhost:port, where the tunnel is established. However, I'm uncertain about how this approach will function within a Dask cluster. Since the SSH tunnel is created on the node where the tunnel code is executed, it might not be accessible on Dask workers.</p> <p>I'm currently using the</p> <pre><code>dd.read_sql_query(query, connection_string) </code></pre> <p>method for reading SQL queries in Dask. I'm considering whether I need to create the SSH tunnel on each worker node using</p> <pre><code>client.run(create_ssh_tunnel) </code></pre> <p>However, I'm unsure about how this will interact with auto-scaling. Specifically, during periods of high load when Dask workers autoscale, will they first create the tunnel on the worker nodes?</p>
<python><ssh><dask><dask-dataframe><dask-kubernetes>
2024-04-23 15:55:01
1
341
Faizan
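Regarding the "locally port forward and then connect to localhost:port" idea above, it may help to see what a port forward actually is: a listener that relays bytes to the real target. The sketch below is a bare, single-connection TCP relay in stdlib Python (no encryption — a real SSH tunnel via `ssh -L` or the `sshtunnel` package wraps this in SSH). It also shows why the relay must exist on every machine that connects: on a Dask cluster, each worker would need its own, which is why setting the tunnel up once on the client node is not enough.

```python
import socket
import threading

def pipe(src, dst):
    # Copy bytes one direction until the source closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def forward_once(listener, target):
    # Accept one client on `listener` and relay its traffic to `target` -
    # the bare bones of what "connect to localhost:port" relies on.
    conn, _ = listener.accept()
    upstream = socket.create_connection(target)
    threading.Thread(target=pipe, args=(conn, upstream), daemon=True).start()
    pipe(upstream, conn)

def echo_once(listener):
    # A one-shot echo server stands in for the database behind the tunnel.
    conn, _ = listener.accept()
    conn.sendall(conn.recv(4096))
    conn.close()

db = socket.socket(); db.bind(("127.0.0.1", 0)); db.listen(1)
local = socket.socket(); local.bind(("127.0.0.1", 0)); local.listen(1)
threading.Thread(target=echo_once, args=(db,), daemon=True).start()
threading.Thread(target=forward_once, args=(local, db.getsockname()),
                 daemon=True).start()

client = socket.create_connection(local.getsockname())  # "localhost:port"
client.sendall(b"SELECT 1")
reply = client.recv(4096)
```

For the autoscaling concern specifically: `distributed` worker plugins run their `setup` hook on every worker that joins the cluster, including workers added later by autoscaling, which makes a plugin a more robust place for per-worker tunnel setup than a one-off `client.run(...)`.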
78,373,511
3,258,600
Force setuptools to skip cache for certain python dependencies
<p>Is there an option in the setup.cfg <code>install_requires</code>, or alternatively in the pyproject.toml file, to skip the cache when installing certain pip dependencies?</p> <p>I have libraries in a local PyPI repository that get updated during development, and I don't want to go through the hassle of bumping the version every time I make a change. I also don't want to disable the cache entirely, which would force third-party libraries to be re-downloaded for every build.</p>
<python><pip><setuptools><pyproject.toml>
2024-04-23 15:28:28
0
12,963
kellanburket
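As far as I know, setuptools metadata (`install_requires` in setup.cfg, or `[project] dependencies` in pyproject.toml) cannot control pip's cache on a per-dependency basis; the relevant knobs live on the pip side. A sketch of the usual workarounds (`mylib` is a placeholder for the local library):

```shell
# Bypass pip's cache entirely for a single install command:
pip install --no-cache-dir mylib

# Keep the cache, but force a fresh copy of just the local library:
pip install --upgrade --force-reinstall --no-deps mylib

# For active development, an editable install avoids reinstalling at all:
pip install -e /path/to/mylib
```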
78,373,468
4,907,639
Input shape error when updating pretrained CNN from binary classification to multiclassification
<p>I have a dataset of 3 classes of images, subdivided into training/validation/testing folders:</p> <pre><code>new_base_dir = '/Users/.../img_dir' import os from tensorflow.keras.utils import image_dataset_from_directory train_dataset = image_dataset_from_directory( os.path.join(new_base_dir, 'train'), image_size=(180, 180), batch_size=10) validation_dataset = image_dataset_from_directory( os.path.join(new_base_dir, 'validation'), image_size=(180, 180), batch_size=10) test_dataset = image_dataset_from_directory( os.path.join(new_base_dir, 'test'), image_size=(180, 180), batch_size=10) </code></pre> <p>I verified there are 3 classes:</p> <pre><code>train_dataset Found 3000 files belonging to 3 classes &lt;BatchDataset element_spec=(TensorSpec(shape=(None, 180, 180, 3), dtype=tf.float32, name=None), TensorSpec(shape=(None,), dtype=tf.int32, name=None))&gt; </code></pre> <p>I am first loading the images into a pretrained <code>Xception</code> model:</p> <pre><code># load a pre-trained model import keras from keras import layers from keras.applications import VGG16 conv_base = keras.applications.xception.Xception( weights=&quot;imagenet&quot;, include_top=False, input_shape=(180, 180, 3)) import numpy as np def get_features_and_labels(dataset): all_features = [] all_labels = [] for images, labels in dataset: preprocessed_images = keras.applications.xception.preprocess_input(images) features = conv_base.predict(preprocessed_images) all_features.append(features) all_labels.append(labels) return np.concatenate(all_features), np.concatenate(all_labels) train_features, train_labels = get_features_and_labels(train_dataset) val_features, val_labels = get_features_and_labels(validation_dataset) test_features, test_labels = get_features_and_labels(test_dataset) </code></pre> <p>However, when I layer on a densely connected classifier, I get an error:</p> <pre><code>inputs = keras.Input(shape=(6, 6, 2048)) x = layers.Flatten()(inputs) x = layers.Dense(256)(x) x = 
layers.Dropout(0.5)(x) # the commented lines work fine for a binary classification model #outputs = layers.Dense(1, activation=&quot;sigmoid&quot;)(x) #model.compile(loss=&quot;binary_crossentropy&quot;, # optimizer=&quot;rmsprop&quot;, # metrics=[&quot;accuracy&quot;]) outputs = layers.Dense(3, activation=&quot;softmax&quot;)(x) model = keras.Model(inputs, outputs) model.compile(loss=&quot;categorical_crossentropy&quot;, optimizer=&quot;rmsprop&quot;, metrics=[&quot;accuracy&quot;]) </code></pre> <p>Here is the error:</p> <p><code>ValueError: Shapes (None, 1) and (None, 3) are incompatible</code></p>
<python><tensorflow><keras><deep-learning><conv-neural-network>
2024-04-23 15:20:21
1
2,109
coolhand
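About the `Shapes (None, 1) and (None, 3) are incompatible` error above: this is the classic symptom of integer labels (the default `label_mode='int'` of `image_dataset_from_directory`, visible in the `TensorSpec(shape=(None,), dtype=tf.int32)` output) being fed to `categorical_crossentropy`, which expects one-hot targets of shape `(None, 3)`. The two standard fixes are to compile with `loss='sparse_categorical_crossentropy'` and keep the integer labels, or to one-hot encode the labels. The shape mismatch itself can be seen without Keras:

```python
import numpy as np

num_classes = 3
labels = np.array([0, 2, 1, 2])        # integer labels as loaded, shape (batch,)

# categorical_crossentropy wants one-hot rows of shape (batch, num_classes);
# np.eye(num_classes)[labels] is the manual equivalent of keras.utils.to_categorical.
one_hot = np.eye(num_classes)[labels]
```

The binary case in the commented-out lines never hit this because `binary_crossentropy` with a single sigmoid output also takes plain integer labels.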
78,373,278
9,343,043
Create interval of data from dataframe with given median
<p>I have a large data frame of subjects. For simplicity's sake I have posted a smaller modified version of my data frame below.</p> <pre><code>subject age sex A 5.35 Female B 5.70 Male C 6.00 Female D 6.07 Male E 6.25 Male F 6.88 Male G 7.00 Female H 7.02 Male I 7.11 Female J 8.00 Male K 8.50 Female </code></pre> <p>I am writing a function that will input <code>age</code> and <code>tail</code> (and eventually <code>sex</code> using the large data frame I have). The desired output is an interval data frame filtered from the full data frame, with the <code>age</code> input being the median of the interval and <code>tail</code> being the number of data points on each side of that median.</p> <p><code>df.loc</code> is not entirely what I am looking for. Example functionality is down below:</p> <pre><code>def interval_set(age = 7, tail = 3): # take dataframe above and output below: Out[1]: subject age sex D 6.07 Male E 6.25 Male F 6.88 Male G 7.00 Female H 7.02 Male I 7.11 Female J 8.00 Male </code></pre> <p>Thanks in advance!</p>
<python><pandas><median>
2024-04-23 14:48:11
2
871
florence-y
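One vectorized way to get the behaviour asked for above (a sketch: when no row matches the requested age exactly, the nearest-age row is taken as the interval's center):

```python
import pandas as pd

def interval_set(df, age=7, tail=3):
    # Sort by age, locate the row closest to the requested median age,
    # then slice `tail` rows on each side of it.
    s = df.sort_values("age").reset_index(drop=True)
    center = (s["age"] - age).abs().idxmin()
    return s.iloc[max(center - tail, 0) : center + tail + 1]

df = pd.DataFrame({
    "subject": list("ABCDEFGHIJK"),
    "age": [5.35, 5.70, 6.00, 6.07, 6.25, 6.88, 7.00, 7.02, 7.11, 8.00, 8.50],
    "sex": ["Female", "Male", "Female", "Male", "Male", "Male",
            "Female", "Male", "Female", "Male", "Female"],
})
result = interval_set(df, age=7, tail=3)
```

With `age=7, tail=3` this returns subjects D through J, matching the example output; the `max(..., 0)` guard keeps the slice valid when the center sits closer than `tail` rows to the top of the sorted frame.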
78,373,158
899,200
Optimizing Path Through Large Directed Acyclic Graph Tree
<p>This is a rephrasing/different approach to <a href="https://stackoverflow.com/questions/77804202/fitting-curve-with-restricted-relative-orientations">Fitting curve with restricted relative orientations</a>.</p> <p>I have a directed acyclic graph that flows from Level 0 to Level 3500. Edges only occur between levels, not within levels (e.g. a Level 1 node connects to several Level 2 nodes, Level 2 nodes never connect to Level 2 nodes).</p> <p>Each Level N node is connected to between 1 and 9 Level N+1 nodes. This causes the tree to grow very rapidly - Level 3500 has something like 8^3500 Nodes.</p> <p>The goal is to find a path from Level 0 to Level 3500 that has the lowest maximum node value. I've highlighted a couple paths in the image. I only care about the maximum node value, not the total path value. I can also get a starting path that provides a good but non-optimal path.</p> <p>While I would like to be able to find the optimal route, I would accept being able to find a route that is within say 20% or 50% of optimal.</p> <p>My current approach:</p> <ol> <li>Calculated a good but non-optimal path to give a starting maximum node value (not shown)</li> <li>Proceed depth first, calculating node values as they are encountered. Using the right side of the graph I would calculate L0=0.1, L1=0.1, L2=0.2. The cost of this path is then 0.2</li> <li>Move up the graph until I reach a cost of &lt;0.2 - in this case it occurs at L1=0.1.</li> <li>Investigate the other nodes connected to this node to see if they are lower value. 
L2=0.3, so no.</li> <li>Therefore move up the tree to L0 and investigate down again.</li> <li>L2=0.2, therefore it is not an improvement and can be ignored.</li> <li>L2=0.1, good, continue to investigate</li> <li>L3=0.1, good, therefore best path is now 0.1 and can't be improved upon because L0=0.1.</li> </ol> <p>Optimization occurs by moving up the tree (up levels) and choosing a different path down.</p> <p>Due to the number of nodes, even short trees (10 levels) take too long to optimize.</p> <p>If you are at Level 1, determining the values of the Level 2 nodes connected to you takes about 1/100 of a second.</p> <p>Is there another approach to this problem?</p> <p>For example, I can get a pretty good, non-optimal solution for the tree in about 10 minutes using a method like nearest neighbor.</p> <p><a href="https://i.sstatic.net/yIu10.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yIu10.png" alt="directed acyclic graph" /></a></p>
<python><optimization><directed-acyclic-graphs>
2024-04-23 14:30:41
0
414
CrazyArm
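A note on the path question above: minimizing the maximum node value along a path is the minimax (bottleneck) shortest-path problem, and it does not require enumerating paths. A Dijkstra variant whose path cost is `max(node values so far)` settles each node at most once, so the work scales with the nodes and edges actually explored, not with the 8^3500 path count; a known-good starting cost can additionally prune expansions. A sketch, with graph access abstracted behind callables since the question's graph is generated on the fly:

```python
import heapq

def minimax_path(start, neighbors, value, is_goal):
    # Dijkstra variant where a path's cost is the MAXIMUM node value on it.
    # Each node is settled once, so this is O(E log V) over the explored
    # graph rather than over the number of distinct paths.
    best = {start: value(start)}
    heap = [(value(start), start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best.get(node, float("inf")):
            continue  # stale heap entry
        if is_goal(node):
            return cost  # first goal popped has the optimal bottleneck value
        for nxt in neighbors(node):
            c = max(cost, value(nxt))
            if c < best.get(nxt, float("inf")):
                best[nxt] = c
                heapq.heappush(heap, (c, nxt))
    return None

# Tiny 3-level example: s -> {a, b} -> {c, d}, with per-node values.
values = {"s": 0.1, "a": 0.1, "b": 0.3, "c": 0.2, "d": 0.1}
edges = {"s": ["a", "b"], "a": ["c"], "b": ["d"], "c": [], "d": []}
bottleneck = minimax_path("s",
                          neighbors=lambda v: edges[v],
                          value=lambda v: values[v],
                          is_goal=lambda v: v in ("c", "d"))
```

Here the two root-to-leaf paths have maxima 0.2 (s-a-c) and 0.3 (s-b-d), so the optimum is 0.2. If evaluating a node's children costs ~1/100 s, total time is bounded by the number of nodes the search actually touches, which the priority ordering keeps far below the full level width in practice.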
78,372,815
14,253,961
Print statistics in train
<p>To train on the CIFAR-100 dataset, I found this <code>train</code> function. I'm new to PyTorch, and I would like to understand the value 10000, because when I change it, the printed loss changes.</p> <pre><code>def train(net,trainloader,epochs,use_gpu = True): net.train() criterion = nn.CrossEntropyLoss() optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9) print(f&quot;Training {epochs} epoch(s) w/ {len(trainloader)} batches each&quot;) # Train the network for epoch in range(epochs): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): images, labels = data[0].to(device), data[1].to(device) optimizer.zero_grad() outputs = net(images) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 100 == 99: # print every 100 mini-batches print(&quot;[%d, %5d] loss: %.3f&quot; % (epoch + 1, i + 1, running_loss / 10000)) running_loss = 0.0 </code></pre>
<python><pytorch>
2024-04-23 13:37:59
1
741
seni
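On the question above: the 10000 is only a divisor used to average the accumulated loss for printing; it does not affect training at all, just the number shown. Since `running_loss` accumulates exactly 100 mini-batch losses between prints, dividing by 10000 reports a value 100x too small; the per-batch average needs `running_loss / 100`. A detached sketch of just that logging logic:

```python
log_every = 100             # print every 100 mini-batches
batch_losses = [2.0] * 350  # stand-in per-batch loss values

running_loss = 0.0
printed = []
for i, loss in enumerate(batch_losses):
    running_loss += loss
    if i % log_every == log_every - 1:
        # Average loss per mini-batch over the last `log_every` batches.
        printed.append(running_loss / log_every)
        running_loss = 0.0
# printed == [2.0, 2.0, 2.0]: the true per-batch average, reset after each report
```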
78,372,803
1,238,967
Grouping rows in a Pandas dataframe based on one column, sorting based on a second column, and selecting elements in a third column (not pivoting)
<p>I have a dataframe, df_prices, which contains information about a set of objects. I have 3 columns: 'uid', 'date' and 'price', where 'uid' has multiple (repeated) entries:</p> <pre><code>df_prices index uid date price 0 123 02-02-2000 11100 1 123 01-01-2000 22200 2 123 03-03-2000 44000 3 123 04-04-2000 66000 4 456 03-03-2000 77700 5 456 02-02-2000 88800 6 456 04-04-2000 66600 ... ... ... ... 98 987 01-01-2005 12300 99 987 04-04-2005 45600 100 987 05-05-2005 78900 </code></pre> <p>I want to group the dataframe based on the 'uid' column, then sort each group based on the 'date' column, and extract the 'price' on the &quot;oldest&quot; date.</p> <p>The result will be saved in another dataframe with only 'uid' (with no duplicated entries) and 'price', which contains the oldest price.</p> <p>So, the needed result should be:</p> <pre><code>df_oldest_prices index uid price 0 123 22200 1 456 88800 ... ... ... 40 987 12300 </code></pre> <p>I have managed to achieve the result with an iterrows loop:</p> <pre><code>uids = df_prices['uid'] uids_unique = list(set(uids)) df_oldest_prices['uid'] = uids_unique df_oldest_prices['price'] = np.nan for index, row in df_oldest_prices.iterrows(): uid = row['uid'] group = df_prices.loc[df_prices['uid']==uid] groupsorted = group.sort_values('date') firstrow = groupsorted.iloc[0] price = firstrow['price'] df_oldest_prices.at[index, 'price'] = price </code></pre> <p>However, I need vectorized pandas code for optimized speed.</p> <p>I have carefully checked all the questions/answers in <a href="https://stackoverflow.com/questions/47152691/how-can-i-pivot-a-dataframe">this question</a>, but none of them is suitable for my task. In particular, in my task I cannot use a pivot, because I don't need aggregated data. I need to sort data by 'date', and then select a single element in the 'price' column, the one on the same row as the oldest sorted date. So, no aggregation, no pivoting.</p>
<python><pandas><dataframe><sorting>
2024-04-23 13:36:32
0
1,234
Fabio
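The vectorized idiom for the question above is to parse the dates, sort once, and take the first row per `uid` — groupby preserves the sorted order within each group, so no per-group Python loop is needed. A sketch on the sample rows, assuming the dates are month-day-year:

```python
import pandas as pd

df_prices = pd.DataFrame({
    "uid":   [123, 123, 123, 123, 456, 456, 456],
    "date":  ["02-02-2000", "01-01-2000", "03-03-2000", "04-04-2000",
              "03-03-2000", "02-02-2000", "04-04-2000"],
    "price": [11100, 22200, 44000, 66000, 77700, 88800, 66600],
})
# Parse to real datetimes so sorting is chronological, not lexicographic.
df_prices["date"] = pd.to_datetime(df_prices["date"], format="%m-%d-%Y")

# Sort once by date, then keep each uid's first (i.e. oldest) row.
df_oldest_prices = (
    df_prices.sort_values("date")
             .groupby("uid", as_index=False)
             .first()[["uid", "price"]]
)
```

An equivalent one-liner is `df_prices.loc[df_prices.groupby('uid')['date'].idxmin(), ['uid', 'price']]`, which selects the row holding each group's minimum date directly.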
78,372,776
10,071,715
Is there a way to group data by value in one column to produce a sum of contents in other column in pandas?
<p>I'm sorry if this is a repeat, I can't find anything that gives me an answer...</p> <p>I have a dataframe containing pixel values and the number of pixels of that value. It looks something like this:</p> <pre><code>Value Count 0.1457 900 0.1458 1800 0.1459 900 0.2144 1800 0.4357 2700 0.5764 900 0.7891 1800 0.7892 900 nan 0 nan 0 </code></pre> <p>In this case each instance of nan indicates a single pixel with no data.</p> <p>I'd like to group these values into 4 classes as follows...:</p> <ul> <li>Low: &lt;0.2</li> <li>Mid: 0.2 - 0.6</li> <li>High: &gt;0.6</li> <li>No Data: nan</li> </ul> <p>...and then produce a sum for each class, like so using the above example data:</p> <pre><code>Class Count Lo 3600 Mid 6300 Hi 2700 ND 2 </code></pre> <p>I appreciate there are likely to be several steps to this, but does anyone have any pointers?</p>
<python><pandas>
2024-04-23 13:32:28
4
1,007
SHV_la
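For the binning question above, `pandas.cut` handles exactly this. A sketch on the sample rows: the bin edges are left-open/right-closed by default, which is safe here since no value sits exactly on 0.2 or 0.6, and NaN values fall outside every bin, so the no-data figure is the number of NaN rows (one pixel each) rather than a sum of `Count`:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Value": [0.1457, 0.1458, 0.1459, 0.2144, 0.4357,
              0.5764, 0.7891, 0.7892, np.nan, np.nan],
    "Count": [900, 1800, 900, 1800, 2700, 900, 1800, 900, 0, 0],
})

# Classify each pixel value, then sum the pixel counts per class.
classes = pd.cut(df["Value"],
                 bins=[-np.inf, 0.2, 0.6, np.inf],
                 labels=["Lo", "Mid", "Hi"])
sums = df["Count"].groupby(classes, observed=False).sum()

# NaN rows drop out of the grouping; count them separately, one pixel apiece.
nd = int(df["Value"].isna().sum())
```

With these sample rows the sums come out to Lo 3600, Mid 5400, Hi 2700, ND 2 (the Mid total of 6300 in the worked example appears to be a small arithmetic slip: 1800 + 2700 + 900 = 5400).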
78,372,696
1,439,912
pandas.DataFrame advanced groupby clustering
<p>Given the following dataframe (Python 3)</p> <pre><code>df = pd.DataFrame({'Name':['Smith', 'Brown', 'Smith', 'Miller'], 'Country': ['US', 'GB', 'DE','US']}, index=[0,1,2,3]) df.index.name = 'ID' df ID Name Country 0 Smith US 1 Brown GB 2 Smith DE 3 Miller US </code></pre> <p>I want to perform &quot;groupby&quot; but where at least Name OR Country are identical. That is, <strong>ID0</strong> is in the same cluster as <strong>ID2</strong> due to sharing the same Name. However, <strong>ID0</strong> shares the same Country as <strong>ID3</strong>, so they are both in the same cluster.</p> <p>Hence the following groups or clusters should be found</p> <pre><code>{'cluster0':[0,2,3], 'cluster2':[1]} </code></pre> <p>Technically, we have an undirected graph where each individual &quot;ID&quot; is a node, and edges exist between nodes with identical names, as well as between nodes with identical Country.</p> <p>How can we accomplish this in Pandas? Alternatively, would using the package <em>networkx</em> be better?</p>
<python><pandas><grouping><cluster-analysis><networkx>
2024-04-23 13:19:49
0
480
Pontus Hultkrantz
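As the question above notes, this is connected components on the implied undirected graph, so `networkx` works well (one node per ID, edges within each Name group and each Country group, then `nx.connected_components`). It can also be done without the extra dependency using union-find; a sketch:

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Smith", "Brown", "Smith", "Miller"],
                   "Country": ["US", "GB", "DE", "US"]}, index=[0, 1, 2, 3])
df.index.name = "ID"

# Union-find: IDs sharing a Name or a Country end up under the same root.
parent = {i: i for i in df.index}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving keeps trees shallow
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for col in ("Name", "Country"):
    for _, group in df.groupby(col):
        ids = list(group.index)
        for other in ids[1:]:
            union(ids[0], other)

# Collect members per root: {root: [IDs...]}.
clusters = {}
for i in df.index:
    clusters.setdefault(find(i), []).append(i)
```

On the example this yields the clusters {0, 2, 3} and {1}, as required; only the membership matters, the root chosen as each cluster's key is arbitrary.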
78,372,643
7,695,845
Numba parallelization doesn't help performance in Monte-Carlo simulation?
<p>This is a follow-up question to a <a href="https://stackoverflow.com/questions/78334676/monte-carlo-simulation-of-pi-with-numba-is-the-slowest-for-the-lowest-number-of/78336106?noredirect=1#comment138143733_78336106">question</a> I asked before, but I think I should start over. I am trying to implement a Monte-Carlo simulation of pi, and I am using <code>numba</code> to improve performance. Since each iteration of the loop is independent of the others, I thought I could get better performance with <code>parallel=True</code> and <code>numba.prange</code>. I tried it and found that for small values of <code>n</code>, the parallelization isn't worth it. I tried an improved version where I use parallelization after a certain threshold for <code>n</code> is crossed, but I found it performs worse than my previous attempts most of the time. I now have a comparison of 3 versions of the algorithm: a regular one without parallelization, a parallel version using <code>numba.prange</code>, and an &quot;improved&quot; hybrid version that uses parallelization after a specified threshold for <code>n</code> is crossed:</p> <pre class="lang-py prettyprint-override"><code>from datetime import timedelta from time import perf_counter import numba as nb import numpy as np import numpy.typing as npt jit_opts = dict( nopython=True, nogil=True, cache=False, error_model=&quot;numpy&quot;, fastmath=True ) rng = np.random.default_rng() @nb.jit( [ nb.types.Tuple((nb.bool_[:], nb.int64))(nb.float64[:, :]), nb.types.Tuple((nb.bool_[:], nb.int32))(nb.float32[:, :]), ], **jit_opts, parallel=True, ) def count_points_in_circle_parallel( points: npt.NDArray[float], ) -&gt; tuple[npt.NDArray[bool], int]: in_circle = np.empty(points.shape[0], dtype=np.bool_) in_circle_count = 0 for i in nb.prange(points.shape[0]): in_ = in_circle[i] = points[i, 0] ** 2 + points[i, 1] ** 2 &lt; 1 in_circle_count += in_ return in_circle, in_circle_count def monte_carlo_pi_parallel( n: int, ) -&gt;
tuple[npt.NDArray[float], npt.NDArray[bool], float]: points = rng.random((n, 2)) in_circle, count = count_points_in_circle_parallel(points) return points, in_circle, 4 * count / n @nb.jit( [ nb.types.Tuple((nb.bool_[:], nb.int64))(nb.float64[:, :]), nb.types.Tuple((nb.bool_[:], nb.int32))(nb.float32[:, :]), ], **jit_opts, parallel=False, ) def count_points_in_circle(points: npt.NDArray[float]) -&gt; tuple[npt.NDArray[bool], int]: in_circle = np.empty(points.shape[0], dtype=np.bool_) in_circle_count = 0 for i in range(points.shape[0]): in_ = in_circle[i] = points[i, 0] ** 2 + points[i, 1] ** 2 &lt; 1 in_circle_count += in_ return in_circle, in_circle_count def monte_carlo_pi(n: int) -&gt; tuple[npt.NDArray[float], npt.NDArray[bool], float]: points = rng.random((n, 2)) in_circle, count = count_points_in_circle(points) return points, in_circle, 4 * count / n def count_points_in_circle_improved( points: npt.NDArray[float], ) -&gt; tuple[npt.NDArray[bool], int]: in_circle = np.empty(points.shape[0], dtype=np.bool_) in_circle_count = 0 for i in nb.prange(points.shape[0]): in_ = in_circle[i] = points[i, 0] ** 2 + points[i, 1] ** 2 &lt; 1 in_circle_count += in_ return in_circle, in_circle_count count_points_in_circle_improved_parallel = nb.jit( [ nb.types.Tuple((nb.bool_[:], nb.int64))(nb.float64[:, :]), nb.types.Tuple((nb.bool_[:], nb.int32))(nb.float32[:, :]), ], **jit_opts, parallel=True, )(count_points_in_circle_improved) count_points_in_circle_improved = nb.jit( [ nb.types.Tuple((nb.bool_[:], nb.int64))(nb.float64[:, :]), nb.types.Tuple((nb.bool_[:], nb.int32))(nb.float32[:, :]), ], **jit_opts, parallel=False, )(count_points_in_circle_improved) def monte_carlo_pi_improved( n: int, parallel_threshold: int = 1000 ) -&gt; tuple[npt.NDArray[float], npt.NDArray[bool], float]: points = rng.random((n, 2)) in_circle, count = ( count_points_in_circle_improved_parallel(points) if n &gt; parallel_threshold else count_points_in_circle_improved(points) ) return points, in_circle, 
4 * count / n def main() -&gt; None: n_values = 10 ** np.arange(1, 9) n_values = np.concatenate( ([10], n_values) ) # Duplicate 10 to avoid startup overhead time_results = np.empty((len(n_values), 3), dtype=np.float64) if jit_opts.get(&quot;cache&quot;, False): print(&quot;Using cached JIT compilation&quot;) else: print(&quot;Using JIT compilation without caching&quot;) print() print(&quot;Using parallel count_points_in_circle&quot;) for i, n in enumerate(n_values): start = perf_counter() points, in_circle, pi_approx = monte_carlo_pi_parallel(n) end = perf_counter() duration = end - start time_results[i, 0] = duration delta = timedelta(seconds=duration) elapsed_msg = ( f&quot;[{delta} (Raw time: {duration} s)]&quot; if delta else f&quot;[Raw time: {duration} s]&quot; ) print( f&quot;n = {n:,}:&quot;.ljust(20), f&quot;\N{GREEK SMALL LETTER PI} \N{ALMOST EQUAL TO} {pi_approx}&quot;.ljust(20), elapsed_msg, ) print() print(&quot;Using non-parallel count_points_in_circle&quot;) for i, n in enumerate(n_values): start = perf_counter() points, in_circle, pi_approx = monte_carlo_pi(n) end = perf_counter() duration = end - start delta = timedelta(seconds=duration) time_results[i, 1] = duration elapsed_msg = ( f&quot;[{delta} (Raw time: {duration} s)]&quot; if delta else f&quot;[Raw time: {duration} s]&quot; ) print( f&quot;n = {n:,}:&quot;.ljust(20), f&quot;\N{GREEK SMALL LETTER PI} \N{ALMOST EQUAL TO} {pi_approx}&quot;.ljust(20), elapsed_msg, ) print() print(&quot;Improved version:&quot;) for i, n in enumerate(n_values): start = perf_counter() points, in_circle, pi_approx = monte_carlo_pi_improved(n) end = perf_counter() duration = end - start delta = timedelta(seconds=duration) time_results[i, 2] = duration elapsed_msg = ( f&quot;[{delta} (Raw time: {duration} s)]&quot; if delta else f&quot;[Raw time: {duration} s]&quot; ) print( f&quot;n = {n:,}:&quot;.ljust(20), f&quot;\N{GREEK SMALL LETTER PI} \N{ALMOST EQUAL TO} {pi_approx}&quot;.ljust(20), elapsed_msg, ) print() 
print(&quot;Comparison:&quot;) result_types = (&quot;parallel&quot;, &quot;non-parallel&quot;, &quot;improved&quot;) for n, res in zip(n_values, time_results): res_idx = np.argsort(res) print( f&quot;n = {n:,}:&quot;.ljust(20), f&quot;{result_types[res_idx[0]]} \N{LESS-THAN OR EQUAL TO} &quot; f&quot;{result_types[res_idx[1]]} \N{LESS-THAN OR EQUAL TO} &quot; f&quot;{result_types[res_idx[2]]}&quot;, ) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>(I know, I know, this code isn't very clean and has repetitions, but it's for testing purposes, and I'll end up with one of the algorithms in the end). I tried running it with <code>cache=True</code> and <code>cache=False</code> to check if it helps with something and the results are very confusing. It looks like sometimes the non-parallel version is faster even for large values of <code>n</code> and the hybrid version doesn't really improve anything. Here's an example of the results I get:</p> <p><a href="https://i.sstatic.net/FZHDZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FZHDZ.png" alt="" /></a></p> <p>These results are very confusing and they're not consistent. In a different run, I got that the non-parallel version is faster, and in another run, I got that the parallel version is faster. It looks like I am doing something wrong here, but I can't understand what's going on. Why do I not see a consistent performance improvement in the parallel version, especially for large values of <code>n</code>, and why my hybrid approach doesn't seem to improve performance in most cases? Any insight into what's happening here would be appreciated.</p> <h1>Edit:</h1> <p>Following @Jerome Richard's answer, I modified the code to pre-allocate buffers and reuse them for all of my tests. The results are still weird to me: the parallel version performs the worst most of the time, even for large <code>n</code>. 
I even included <code>n = 500,000,000</code> to push the limits further (apparently, my computer can't handle 1,000,000,000 so had to cut it in half), but the parallel version still underperforms. Why do I not see any significant improvement for the parallel or hybrid version of the algorithm?</p> <p>The modified code:</p> <pre class="lang-py prettyprint-override"><code>from datetime import timedelta from time import perf_counter import numba as nb import numpy as np import numpy.typing as npt jit_opts = dict( nopython=True, nogil=True, cache=False, error_model=&quot;numpy&quot;, fastmath=True ) rng = np.random.default_rng() @nb.jit( [ nb.types.Tuple((nb.bool_[:], nb.int64))(nb.float64[:, :], nb.bool_[:]), nb.types.Tuple((nb.bool_[:], nb.int32))(nb.float32[:, :], nb.bool_[:]), ], **jit_opts, parallel=True, ) def count_points_in_circle_parallel( points: npt.NDArray[float], in_circle: npt.NDArray[bool] ) -&gt; tuple[npt.NDArray[bool], int]: in_circle_count = 0 for i in nb.prange(points.shape[0]): in_ = in_circle[i] = points[i, 0] ** 2 + points[i, 1] ** 2 &lt; 1 in_circle_count += in_ return in_circle, in_circle_count def monte_carlo_pi_parallel( n: int, out: npt.NDArray[float] | None = None, in_circle_out: npt.NDArray[bool] | None = None, ) -&gt; tuple[npt.NDArray[float], npt.NDArray[bool], float]: points = rng.random((n, 2), out=out) if in_circle_out is None: in_circle_out = np.empty(n, dtype=np.bool_) in_circle, count = count_points_in_circle_parallel(points, in_circle_out) return points, in_circle, 4 * count / n @nb.jit( [ nb.types.Tuple((nb.bool_[:], nb.int64))(nb.float64[:, :], nb.bool_[:]), nb.types.Tuple((nb.bool_[:], nb.int32))(nb.float32[:, :], nb.bool_[:]), ], **jit_opts, parallel=False, ) def count_points_in_circle( points: npt.NDArray[float], in_circle: npt.NDArray[bool] ) -&gt; tuple[npt.NDArray[bool], int]: in_circle_count = 0 for i in range(points.shape[0]): in_ = in_circle[i] = points[i, 0] ** 2 + points[i, 1] ** 2 &lt; 1 in_circle_count += in_ return 
in_circle, in_circle_count def monte_carlo_pi( n: int, out: npt.NDArray[float] | None = None, in_circle_out: npt.NDArray[bool] | None = None, ) -&gt; tuple[npt.NDArray[float], npt.NDArray[bool], float]: points = rng.random((n, 2), out=out) if in_circle_out is None: in_circle_out = np.empty(n, dtype=np.bool_) in_circle, count = count_points_in_circle(points, in_circle_out) return points, in_circle, 4 * count / n def count_points_in_circle_improved( points: npt.NDArray[float], in_circle: npt.NDArray[bool] ) -&gt; tuple[npt.NDArray[bool], int]: in_circle_count = 0 for i in nb.prange(points.shape[0]): in_ = in_circle[i] = points[i, 0] ** 2 + points[i, 1] ** 2 &lt; 1 in_circle_count += in_ return in_circle, in_circle_count count_points_in_circle_improved_parallel = nb.jit( [ nb.types.Tuple((nb.bool_[:], nb.int64))(nb.float64[:, :], nb.bool_[:]), nb.types.Tuple((nb.bool_[:], nb.int32))(nb.float32[:, :], nb.bool_[:]), ], **jit_opts, parallel=True, )(count_points_in_circle_improved) count_points_in_circle_improved = nb.jit( [ nb.types.Tuple((nb.bool_[:], nb.int64))(nb.float64[:, :], nb.bool_[:]), nb.types.Tuple((nb.bool_[:], nb.int32))(nb.float32[:, :], nb.bool_[:]), ], **jit_opts, parallel=False, )(count_points_in_circle_improved) def monte_carlo_pi_improved( n: int, parallel_threshold: int = 1000, out: npt.NDArray[float] | None = None, in_circle_out: npt.NDArray[bool] | None = None, ) -&gt; tuple[npt.NDArray[float], npt.NDArray[bool], float]: points = rng.random((n, 2), out=out) if in_circle_out is None: in_circle_out = np.empty(n, dtype=np.bool_) in_circle, count = ( count_points_in_circle_improved_parallel(points, in_circle_out) if n &gt; parallel_threshold else count_points_in_circle_improved(points, in_circle_out) ) return points, in_circle, 4 * count / n def main() -&gt; None: n_values = 10 ** np.arange(1, 9) n_values = np.concatenate( ([10], n_values, [500_000_000]) ) # Duplicate 10 to avoid startup overhead n_max = n_values.max() buffer = np.empty((n_max, 2), 
dtype=np.float64) in_circle_buffer = np.empty(n_max, dtype=np.bool_) use_preallocated_buffer = False time_results = np.empty((len(n_values), 3), dtype=np.float64) if jit_opts.get(&quot;cache&quot;, False): print(&quot;Using cached JIT compilation&quot;) else: print(&quot;Using JIT compilation without caching&quot;) if use_preallocated_buffer: print(&quot;Using preallocated buffers&quot;) else: print(&quot;Not using preallocated buffers&quot;) print() print(&quot;Using parallel count_points_in_circle&quot;) for i, n in enumerate(n_values): start = perf_counter() if use_preallocated_buffer: points, in_circle, pi_approx = monte_carlo_pi_parallel( n, out=buffer[:n], in_circle_out=in_circle_buffer[:n] ) else: points, in_circle, pi_approx = monte_carlo_pi_parallel(n) end = perf_counter() duration = end - start time_results[i, 0] = duration delta = timedelta(seconds=duration) elapsed_msg = ( f&quot;[{delta} (Raw time: {duration} s)]&quot; if delta else f&quot;[Raw time: {duration} s]&quot; ) print( f&quot;n = {n:,}:&quot;.ljust(20), f&quot;\N{GREEK SMALL LETTER PI} \N{ALMOST EQUAL TO} {pi_approx}&quot;.ljust(20), elapsed_msg, ) print() print(&quot;Using non-parallel count_points_in_circle&quot;) for i, n in enumerate(n_values): start = perf_counter() if use_preallocated_buffer: points, in_circle, pi_approx = monte_carlo_pi( n, out=buffer[:n], in_circle_out=in_circle_buffer[:n] ) else: points, in_circle, pi_approx = monte_carlo_pi(n) end = perf_counter() duration = end - start delta = timedelta(seconds=duration) time_results[i, 1] = duration elapsed_msg = ( f&quot;[{delta} (Raw time: {duration} s)]&quot; if delta else f&quot;[Raw time: {duration} s]&quot; ) print( f&quot;n = {n:,}:&quot;.ljust(20), f&quot;\N{GREEK SMALL LETTER PI} \N{ALMOST EQUAL TO} {pi_approx}&quot;.ljust(20), elapsed_msg, ) print() print(&quot;Improved version:&quot;) for i, n in enumerate(n_values): start = perf_counter() if use_preallocated_buffer: points, in_circle, pi_approx = 
monte_carlo_pi_improved( n, out=buffer[:n], in_circle_out=in_circle_buffer[:n] ) else: points, in_circle, pi_approx = monte_carlo_pi_improved(n) end = perf_counter() duration = end - start delta = timedelta(seconds=duration) time_results[i, 2] = duration elapsed_msg = ( f&quot;[{delta} (Raw time: {duration} s)]&quot; if delta else f&quot;[Raw time: {duration} s]&quot; ) print( f&quot;n = {n:,}:&quot;.ljust(20), f&quot;\N{GREEK SMALL LETTER PI} \N{ALMOST EQUAL TO} {pi_approx}&quot;.ljust(20), elapsed_msg, ) print() print(&quot;Comparison:&quot;) result_types = (&quot;parallel&quot;, &quot;non-parallel&quot;, &quot;improved&quot;) for n, res in zip(n_values, time_results): res_idx = np.argsort(res) print( f&quot;n = {n:,}:&quot;.ljust(20), f&quot;{result_types[res_idx[0]]} \N{LESS-THAN OR EQUAL TO} &quot; f&quot;{result_types[res_idx[1]]} \N{LESS-THAN OR EQUAL TO} &quot; f&quot;{result_types[res_idx[2]]}&quot;, ) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>The results with pre-allocation:</p> <pre><code>Using JIT compilation without caching Using preallocated buffers Using parallel count_points_in_circle n = 10: π ≈ 2.8 [0:00:00.018026 (Raw time: 0.018026399999996556 s)] n = 10: π ≈ 2.4 [0:00:00.000072 (Raw time: 7.180000000062137e-05 s)] n = 100: π ≈ 3.12 [0:00:00.000047 (Raw time: 4.7400000028119393e-05 s)] n = 1,000: π ≈ 3.208 [0:00:00.000075 (Raw time: 7.499999998117346e-05 s)] n = 10,000: π ≈ 3.1392 [0:00:00.000235 (Raw time: 0.00023540000000821237 s)] n = 100,000: π ≈ 3.14048 [0:00:00.001509 (Raw time: 0.0015089999999986503 s)] n = 1,000,000: π ≈ 3.143396 [0:00:00.014025 (Raw time: 0.014025000000003729 s)] n = 10,000,000: π ≈ 3.14113 [0:00:00.123001 (Raw time: 0.12300090000002228 s)] n = 100,000,000: π ≈ 3.1412414 [0:00:00.804258 (Raw time: 0.8042575999999713 s)] n = 500,000,000: π ≈ 3.141718144 [0:00:04.104100 (Raw time: 4.104099899999994 s)] Using non-parallel count_points_in_circle n = 10: π ≈ 2.8 [0:00:00.000072 (Raw time: 
7.189999996626284e-05 s)] n = 10: π ≈ 3.2 [0:00:00.000023 (Raw time: 2.3100000021258893e-05 s)] n = 100: π ≈ 3.24 [0:00:00.000019 (Raw time: 1.86000000326203e-05 s)] n = 1,000: π ≈ 3.124 [0:00:00.000037 (Raw time: 3.739999999652355e-05 s)] n = 10,000: π ≈ 3.1264 [0:00:00.000120 (Raw time: 0.00012040000001434237 s)] n = 100,000: π ≈ 3.14256 [0:00:00.001055 (Raw time: 0.0010548999999855369 s)] n = 1,000,000: π ≈ 3.141884 [0:00:00.010567 (Raw time: 0.010566699999969842 s)] n = 10,000,000: π ≈ 3.1413664 [0:00:00.107006 (Raw time: 0.10700550000001385 s)] n = 100,000,000: π ≈ 3.14188264 [0:00:00.865470 (Raw time: 0.8654702999999699 s)] n = 500,000,000: π ≈ 3.141582376 [0:00:04.014441 (Raw time: 4.01444140000001 s)] Improved version: n = 10: π ≈ 2.8 [0:00:00.000067 (Raw time: 6.719999998949788e-05 s)] n = 10: π ≈ 2.4 [0:00:00.000016 (Raw time: 1.550000001770968e-05 s)] n = 100: π ≈ 3.24 [0:00:00.000029 (Raw time: 2.8799999995499093e-05 s)] n = 1,000: π ≈ 3.192 [0:00:00.000022 (Raw time: 2.1799999956328975e-05 s)] n = 10,000: π ≈ 3.172 [0:00:00.000185 (Raw time: 0.00018489999996518236 s)] n = 100,000: π ≈ 3.14124 [0:00:00.001362 (Raw time: 0.0013624999999706233 s)] n = 1,000,000: π ≈ 3.143404 [0:00:00.013065 (Raw time: 0.013065499999981967 s)] n = 10,000,000: π ≈ 3.1418088 [0:00:00.112366 (Raw time: 0.11236619999999675 s)] n = 100,000,000: π ≈ 3.141952 [0:00:00.682029 (Raw time: 0.6820288000000119 s)] n = 500,000,000: π ≈ 3.141576848 [0:00:03.210755 (Raw time: 3.210754800000018 s)] Comparison: n = 10: improved ≤ non-parallel ≤ parallel n = 10: improved ≤ non-parallel ≤ parallel n = 100: non-parallel ≤ improved ≤ parallel n = 1,000: improved ≤ non-parallel ≤ parallel n = 10,000: non-parallel ≤ improved ≤ parallel n = 100,000: non-parallel ≤ improved ≤ parallel n = 1,000,000: non-parallel ≤ improved ≤ parallel n = 10,000,000: non-parallel ≤ improved ≤ parallel n = 100,000,000: improved ≤ parallel ≤ non-parallel n = 500,000,000: improved ≤ non-parallel ≤ parallel 
</code></pre> <p>Results without pre-allocation:</p> <pre><code>Using JIT compilation without caching Not using preallocated buffers Using parallel count_points_in_circle n = 10: π ≈ 3.2 [0:00:00.003375 (Raw time: 0.0033753000000160682 s)] n = 10: π ≈ 3.2 [0:00:00.000062 (Raw time: 6.170000006022747e-05 s)] n = 100: π ≈ 3.2 [0:00:00.000059 (Raw time: 5.86999999541149e-05 s)] n = 1,000: π ≈ 3.112 [0:00:00.000099 (Raw time: 9.939999995367543e-05 s)] n = 10,000: π ≈ 3.1276 [0:00:00.000183 (Raw time: 0.00018330000000332802 s)] n = 100,000: π ≈ 3.13956 [0:00:00.001689 (Raw time: 0.0016891000000214262 s)] n = 1,000,000: π ≈ 3.142456 [0:00:00.015140 (Raw time: 0.015140099999939594 s)] n = 10,000,000: π ≈ 3.1418444 [0:00:00.128062 (Raw time: 0.1280623000000105 s)] n = 100,000,000: π ≈ 3.14139292 [0:00:00.831049 (Raw time: 0.8310494999999491 s)] n = 500,000,000: π ≈ 3.141657016 [0:00:04.522461 (Raw time: 4.522460500000079 s)] Using non-parallel count_points_in_circle n = 10: π ≈ 3.2 [0:00:00.323710 (Raw time: 0.3237104999999474 s)] n = 10: π ≈ 2.8 [0:00:00.000035 (Raw time: 3.4599999935380765e-05 s)] n = 100: π ≈ 3.24 [0:00:00.000022 (Raw time: 2.1899999978813867e-05 s)] n = 1,000: π ≈ 3.14 [0:00:00.000044 (Raw time: 4.419999993388046e-05 s)] n = 10,000: π ≈ 3.1244 [0:00:00.000150 (Raw time: 0.00014989999999670545 s)] n = 100,000: π ≈ 3.13744 [0:00:00.000897 (Raw time: 0.0008967999999640597 s)] n = 1,000,000: π ≈ 3.143708 [0:00:00.008511 (Raw time: 0.008510500000056709 s)] n = 10,000,000: π ≈ 3.1406824 [0:00:00.084274 (Raw time: 0.08427370000003975 s)] n = 100,000,000: π ≈ 3.14154872 [0:00:00.902473 (Raw time: 0.9024734999999282 s)] n = 500,000,000: π ≈ 3.141605384 [0:00:04.363011 (Raw time: 4.363010799999984 s)] Improved version: n = 10: π ≈ 3.2 [0:00:00.407473 (Raw time: 0.40747319999991305 s)] n = 10: π ≈ 2.8 [0:00:00.000034 (Raw time: 3.4199999959128036e-05 s)] n = 100: π ≈ 3.16 [0:00:00.000019 (Raw time: 1.9299999962640868e-05 s)] n = 1,000: π ≈ 3.184 [0:00:00.000021 
(Raw time: 2.0999999946980097e-05 s)] n = 10,000: π ≈ 3.1388 [0:00:00.000233 (Raw time: 0.0002328000000488828 s)] n = 100,000: π ≈ 3.13748 [0:00:00.001424 (Raw time: 0.0014244999999846186 s)] n = 1,000,000: π ≈ 3.140832 [0:00:00.015200 (Raw time: 0.015200499999991735 s)] n = 10,000,000: π ≈ 3.1420484 [0:00:00.131624 (Raw time: 0.13162439999996423 s)] n = 100,000,000: π ≈ 3.14133648 [0:00:00.913009 (Raw time: 0.9130087999999432 s)] n = 500,000,000: π ≈ 3.141633632 [0:00:04.001366 (Raw time: 4.001365899999996 s)] Comparison: n = 10: parallel ≤ non-parallel ≤ improved n = 10: improved ≤ non-parallel ≤ parallel n = 100: improved ≤ non-parallel ≤ parallel n = 1,000: improved ≤ non-parallel ≤ parallel n = 10,000: non-parallel ≤ parallel ≤ improved n = 100,000: non-parallel ≤ improved ≤ parallel n = 1,000,000: non-parallel ≤ parallel ≤ improved n = 10,000,000: non-parallel ≤ parallel ≤ improved n = 100,000,000: parallel ≤ non-parallel ≤ improved n = 500,000,000: improved ≤ non-parallel ≤ parallel </code></pre>
<python><numba><montecarlo>
2024-04-23 13:14:06
1
1,420
Shai Avr
78,372,618
7,302,169
acme error - AttributeError: module 'jax' has no attribute 'linear_util'
<p>I am using acme framework to run some experiments, and I installed acme based on documentation. However, I have attribute error that raised likely from JAX, HAIKU, and when I looked into github issue, there was no solution given at this time. Can anyone take a look what package dependecy caused this issue?</p> <p><strong>my venv spec:</strong></p> <p>here is my venv spec</p> <pre><code>dm-acme 0.4.0 dm-control 0.0.364896371 dm-env 1.6 dm-haiku 0.0.10 dm-launchpad 0.5.0 dm-reverb 0.7.0 dm-tree 0.1.8 acme 2.10.0 dm-acme 0.4.0 jax 0.4.26 jaxlib 0.4.26+cuda12.cudnn89 python -V Python 3.9.5 </code></pre> <p>error details:</p> <blockquote> <p>File &quot;/data/acme/examples/baselines/rl_discrete/run_dqn.py&quot;, line 18, in from acme.agents.jax import dqn File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/acme/agents/jax/dqn/<strong>init</strong>.py&quot;, line 18, in from acme.agents.jax.dqn.actor import behavior_policy File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/acme/agents/jax/dqn/actor.py&quot;, line 20, in from acme.agents.jax import actor_core as actor_core_lib File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/acme/agents/jax/actor_core.py&quot;, line 22, in from acme.jax import networks as networks_lib File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/acme/jax/networks/<strong>init</strong>.py&quot;, line 18, in from acme.jax.networks.atari import AtariTorso File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/acme/jax/networks/atari.py&quot;, line 29, in from acme.jax.networks import base File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/acme/jax/networks/base.py&quot;, line 24, in import haiku as hk File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/haiku/<strong>init</strong>.py&quot;, line 20, in from haiku import experimental File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/haiku/experimental/<strong>init</strong>.py&quot;, line 34, in from 
haiku._src.dot import abstract_to_dot File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/haiku/_src/dot.py&quot;, line 163, in @jax.linear_util.transformation File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/jax/_src/deprecations.py&quot;, line 54, in getattr raise AttributeError(f&quot;module {module!r} has no attribute {name!r}&quot;) AttributeError: module 'jax' has no attribute 'linear_util'</p> </blockquote> <p>seems it raised from haiku and JAX, how this can be fixed? any quick thoughts?</p> <p><strong>updated attempt</strong></p> <p>based on @jakevdp suggestion, I reinstalled jax, jaxlib, but now I am getting this error again:</p> <pre><code>Traceback (most recent call last): File &quot;/data/acme/examples/baselines/rl_discrete/run_dqn.py&quot;, line 18, in &lt;module&gt; from acme.agents.jax import dqn File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/acme/agents/jax/dqn/__init__.py&quot;, line 18, in &lt;module&gt; from acme.agents.jax.dqn.actor import behavior_policy File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/acme/agents/jax/dqn/actor.py&quot;, line 20, in &lt;module&gt; from acme.agents.jax import actor_core as actor_core_lib File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/acme/agents/jax/actor_core.py&quot;, line 22, in &lt;module&gt; from acme.jax import networks as networks_lib File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/acme/jax/networks/__init__.py&quot;, line 45, in &lt;module&gt; from acme.jax.networks.multiplexers import CriticMultiplexer File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/acme/jax/networks/multiplexers.py&quot;, line 20, in &lt;module&gt; from acme.jax import utils File &quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/acme/jax/utils.py&quot;, line 190, in &lt;module&gt; devices: Optional[Sequence[jax.xla.Device]] = None, File 
&quot;/data/acme/acme_venv_new/lib/python3.9/site-packages/jax/_src/deprecations.py&quot;, line 53, in getattr raise AttributeError(f&quot;module {module!r} has no attribute {name!r}&quot;) AttributeError: module 'jax' has no attribute 'xla' </code></pre> <p>here is my pip freeze list on this public gist: <a href="https://gist.githubusercontent.com/datageek19/337b3e94a2379f9ea3ac3e9e78d17695/raw/340824604e36642c4061c7ec78c26ff138770803/acme_pip_list.md" rel="nofollow noreferrer">acme pip list</a></p> <p>I looked into this github issue: <a href="https://github.com/sokrypton/ColabFold/issues/484" rel="nofollow noreferrer">jax xla attribute issue</a></p> <p>@jakevdp, any updated comment or possible workaround for this <code>jax.xla</code> issue? thanks</p>
<python><jax><haiku><acme-deepmind>
2024-04-23 13:10:26
1
941
Jerry07
78,372,458
19,580,067
Callback Error: updating graph.figure in Plotly Dash
<p>I tried to visualise the stock data using candle light chart in plotly, dash. But the chart doesn't showup throwing call back error. This is the first time I'm using plotly. So not sure how to get it fixed.</p> <p>The chart works well if I pull data from the csv file but not showing up when pulled the data directly from database table. Any help on this will be much useful.</p> <p>Here is my code:</p> <pre><code>from dash import Dash, dcc, html, Input, Output import plotly.graph_objects as go import pandas as pd app = Dash(__name__) conn = sqlite3.connect('stocks.db') cursor = conn.cursor() app.layout = html.Div([ html.H4('Apple stock candlestick chart'), dcc.Checklist( id='toggle-rangeslider', options=[{'label': 'Include Rangeslider', 'value': 'slider'}], value=['slider'] ), dcc.Graph(id=&quot;graph&quot;), ]) @app.callback( Output(&quot;graph&quot;, &quot;figure&quot;), Input(&quot;toggle-rangeslider&quot;, &quot;value&quot;)) def display_candlestick(value): cursor = conn.cursor() df = pd.read_sql(&quot;SELECT Datetime, Open, High, Low, Close FROM stock_data WHERE Symbol = ?&quot;, conn, params=('AAPL',)) fig = go.Figure(go.Candlestick( x=df['Datetime'], open=df['Open'], high=df['High'], low=df['Low'], close=df['Close'] )) fig.update_layout( xaxis_rangeslider_visible='slider' in value ) return fig app.run_server(debug=True) </code></pre>
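One likely cause is visible in the snippet itself: `sqlite3` is used but never imported, so the callback raises `NameError` and Dash reports it as a callback error. A second, subtler issue is that the connection is created at module import time on one thread while Dash callbacks may run on another, and sqlite3 connections are tied to their creating thread by default. A minimal sketch of the data-access side, assuming the `stock_data` table from the question (`fetch_candles` and the default path are illustrative names):

```python
import sqlite3
from contextlib import closing

DB_PATH = "stocks.db"  # the database file used in the question

def fetch_candles(symbol, db_path=DB_PATH):
    # Open (and close) the connection inside the callback: Dash may run
    # callbacks on a different thread than the one that imported the
    # module, and a sqlite3 connection is bound to its creating thread.
    with closing(sqlite3.connect(db_path)) as conn:
        cur = conn.execute(
            "SELECT Datetime, Open, High, Low, Close"
            " FROM stock_data WHERE Symbol = ?",
            (symbol,),
        )
        return cur.fetchall()
```

Inside `display_candlestick`, `pd.DataFrame(fetch_candles('AAPL'), columns=['Datetime', 'Open', 'High', 'Low', 'Close'])` then replaces the module-level cursor; alternatively, keep a shared connection but create it with `sqlite3.connect(DB_PATH, check_same_thread=False)`.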
<python><plotly><visualization><plotly-dash>
2024-04-23 12:42:55
0
359
Pravin
78,372,333
918,093
pcolormesh showing white gaps between cells of data
<p>I'm trying to graph a series of cells represented by the top left coordinate. The cells are spaced 20 units apart. When I graph the data with pcolormesh I get the individual cells with white gaps representing presumably the missing data between each cell. How can I force the cells to be 20x20 instead of 10x10?</p> <p><a href="https://i.sstatic.net/45URV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/45URV.png" alt="image of graphed data" /></a></p> <pre><code>from matplotlib.colors import LinearSegmentedColormap cmap0 = LinearSegmentedColormap.from_list('', ['white', 'darkblue']) eastings = range(df275.E.min(), df275.E.max()+1) northings = range(df275.N.min(), df275.N.max()+1) fig, axs = plt.subplots(1, 1, figsize=(8,10)) for ax, period in zip([axs], [2]): z = [] for i, y in enumerate(northings): for j, x in enumerate(eastings): result = df275.query(f'E == {x} &amp; N == {y}') if not result.empty: z.append(result[str(period)].values[0]) else: z.append(0) ax.set(xlabel=&quot;Easting&quot;, ylabel=&quot;Northing&quot;) x_extents = [e + 0.5 for e in eastings] y_extents = [n - 0.5 for n in northings] ax.pcolormesh(x_extents, y_extents, np.reshape(z, (len(northings), len(eastings))), cmap=cmap0, vmin=0, vmax=max_value) ax.title.set_text(f&quot;Period {period}&quot;) </code></pre>
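The half-unit offsets (`e + 0.5`, `n - 0.5`) assume cells 1 unit wide, but the data pitch is 20, so every real cell is surrounded by zero-filled unit cells that render white. One fix is to build the coordinate arrays with `range(min, max + 1, 20)` and pass `shading='nearest'`, so pcolormesh centres a 20-wide cell on each point; equivalently, compute explicit edges ±10 from each centre. `cell_edges` below is an illustrative helper, not part of Matplotlib:

```python
def cell_edges(centers, spacing):
    # Cell edges for pcolormesh(X, Y, Z): one edge half a pitch before
    # each center, plus a closing edge after the last center, so every
    # value fills a full `spacing`-wide cell with no gaps between cells.
    return [c - spacing / 2 for c in centers] + [centers[-1] + spacing / 2]

x_edges = cell_edges([0, 20, 40], 20)   # -> [-10.0, 10.0, 30.0, 50.0]
```

With explicit edges, each axis needs `len(edges) == n + 1` for an `n`-column (or `n`-row) `Z`; with `shading='nearest'` you pass the step-20 centers directly and skip the helper.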
<python><matplotlib>
2024-04-23 12:22:08
1
720
labarna
78,372,313
7,566,673
pandas explode list values row wise
<p>I have a Dataframe like this</p> <pre><code>data = [[1, [10, 11]], [1, [15, 16]], [2, [20, 24]], [2, [22, 23]]] df = pd.DataFrame(data, columns = ['id', 'val']) id val 0 1 [10, 11] 1 2 [15, 16] </code></pre> <p>Here the length of each list in <code>val</code> is the same.</p> <p>I want to explode <code>val</code> row-wise. I could not find a direct way of doing it, so as a workaround I used</p> <pre><code>df.explode('val') </code></pre> <p>Now I want to use <code>df.pivot</code>, but before that I want to add column names for each <code>id</code>, something like this</p> <pre><code> id val col_name 0 1 10 col_1 0 1 11 col_2 1 2 15 col_1 1 2 16 col_2 </code></pre> <p>How can I add <code>col_name</code> using <code>groupby</code>? This will help me pivot the dataframe. Also, is there any direct way to explode row-wise which looks like</p> <pre><code>id col_1 col_2 1 10 11 2 15 16 </code></pre>
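Assuming equal-length lists in `val` (as stated), both steps are short: `explode` repeats the original index, so `groupby(level=0).cumcount()` numbers the elements within each original row, and the wide form can be built directly from the lists without pivoting. A sketch (using unique ids, since `pivot` would reject duplicates anyway):

```python
import pandas as pd

data = [[1, [10, 11]], [2, [15, 16]]]
df = pd.DataFrame(data, columns=["id", "val"])

# Long form: explode, then number elements within each original row;
# level=0 groups on the repeated original index, not on id.
long = df.explode("val")
long["col_name"] = "col_" + (long.groupby(level=0).cumcount() + 1).astype(str)

# Direct wide form: one column per list element, skipping pivot entirely.
wide = pd.DataFrame(df["val"].tolist(), index=df["id"])
wide.columns = [f"col_{i + 1}" for i in wide.columns]
wide = wide.reset_index()
```

With duplicate ids, the wide form would still work but produce one row per original row rather than per id.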
<python><pandas>
2024-04-23 12:19:49
1
1,219
Bharat Sharma
78,372,168
2,530,674
Move isort configs to vscode settings.json
<p>I understand that <a href="https://stackoverflow.com/questions/67059648/vscode-how-to-config-organize-imports-for-python-isort">VSCode: how to config &#39;Organize imports&#39; for Python (isort)</a> may be related, but all the answers seems outdated.</p> <p>I have the following in my pyproject.toml file and my settings.json is shown below. While <code>black</code> seems to run properly, I can't seem to run isort without manually typing in <code>isort .</code>.</p> <p>Side question, isort seems to be slow when I run it via terminal, should I simply add a pre-commit hook instead if isort is generally slow?</p> <pre class="lang-ini prettyprint-override"><code>[tool.black] line_length = 100 [tool.isort] honor_noqa = true line_length = 100 profile = &quot;black&quot; verbose = false known_first_party = [ # all folders you want to lump &quot;src&quot;, ] # Block below is google style import formatting https://pycqa.github.io/isort/docs/configuration/profiles.html force_sort_within_sections = true force_single_line = true lexicographical = true single_line_exclusions = [&quot;typing&quot;] order_by_type = false group_by_package = true </code></pre> <pre class="lang-json prettyprint-override"><code>{ &quot;files.insertFinalNewline&quot;: true, &quot;jupyter.debugJustMyCode&quot;: false, &quot;editor.formatOnSave&quot;: true, &quot;editor.formatOnPaste&quot;: true, &quot;files.autoSave&quot;: &quot;onFocusChange&quot;, &quot;editor.defaultFormatter&quot;: &quot;ms-python.black-formatter&quot;, &quot;black-formatter.path&quot;: [&quot;/opt/homebrew/bin/black&quot;], &quot;black-formatter.args&quot;: [&quot;--config&quot;, &quot;./pyproject.toml&quot;], &quot;black-formatter.cwd&quot;: &quot;${workspaceFolder}&quot;, &quot;isort.args&quot;: [&quot;--config&quot;, &quot;./pyproject.toml&quot;], &quot;isort.check&quot;: true, &quot;python.analysis.typeCheckingMode&quot;: &quot;basic&quot;, } </code></pre>
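With the current ms-python.isort extension, running isort on save is wired through a code action rather than a formatter, so it can coexist with `black` as the default formatter. A sketch of the relevant `settings.json` keys (the `--settings-path` argument is optional — isort finds `pyproject.toml` on its own when run from the workspace root — and `"explicit"` requires a recent VS Code; older versions used `true`):

```json
{
  "[python]": {
    "editor.defaultFormatter": "ms-python.black-formatter",
    "editor.codeActionsOnSave": {
      "source.organizeImports": "explicit"
    }
  },
  "isort.args": ["--settings-path", "${workspaceFolder}/pyproject.toml"]
}
```

On the side question: a pre-commit hook runs isort only on the files changed in the commit, which is usually much faster than `isort .` over the whole tree, so adding one is reasonable either way.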
<python><visual-studio-code><isort>
2024-04-23 11:58:06
2
10,037
sachinruk
78,372,145
3,520,792
403 Error while creating a spreadsheet using a GCP Service Account credentials in gspread python despite correct scopes
<p>I'm encountering a 403 error when attempting to create a Google spreadsheet using the gspread library in Python, despite setting what I believe to be the correct authentication scopes for my service account. The error details are:</p> <pre><code>{ &quot;code&quot;: 403, &quot;message&quot;: &quot;Request had insufficient authentication scopes.&quot;, &quot;errors&quot;: [ { &quot;message&quot;: &quot;Insufficient Permission&quot;, &quot;domain&quot;: &quot;global&quot;, &quot;reason&quot;: &quot;insufficientPermissions&quot; } ], &quot;status&quot;: &quot;PERMISSION_DENIED&quot;, &quot;details&quot;: [ { &quot;@type&quot;: &quot;type.googleapis.com/google.rpc.ErrorInfo&quot;, &quot;reason&quot;: &quot;ACCESS_TOKEN_SCOPE_INSUFFICIENT&quot;, &quot;domain&quot;: &quot;googleapis.com&quot;, &quot;metadata&quot;: { &quot;service&quot;: &quot;drive.googleapis.com&quot;, &quot;method&quot;: &quot;google.apps.drive.v3.DriveFiles.Create&quot; } } ] } </code></pre> <p>I am using a service account attached to a VM to authenticate the requests. Below is the Python code snippet triggering the error:</p> <pre class="lang-py prettyprint-override"><code>from google.auth import default import gspread credentials, _ = default(scopes=[&quot;https://www.googleapis.com/auth/drive&quot;]) gc = gspread.authorize(credentials) sheet_name = 'test_service_account' sh = gc.create(sheet_name) </code></pre> <p>Here is what I've tried so far:</p> <ul> <li>I've checked for Google Drive-related roles (like roles/drive.file or roles/drive) in the IAM console but didn't find them listed for the service account.</li> <li>I've explicitly set the necessary scope (<a href="https://www.googleapis.com/auth/drive" rel="nofollow noreferrer">https://www.googleapis.com/auth/drive</a>) when initializing the default service account credentials.</li> <li>I've ensured that the Google Drive API is enabled for my project. 
I have another service account with seemingly the same roles, which is able to create spreadsheets without any issues.</li> </ul> <p>How can I resolve this permission error? Is there a different way to assign the necessary roles to my service account, or is there something else I might be overlooking?</p>
<python><google-cloud-platform><google-drive-api><service-accounts><gspread>
2024-04-23 11:54:37
0
526
Vipul Vishnu av
78,371,971
13,998,438
Removing the wide margins in a streamlit app
<p>I am making a streamlit app, but there is a huge amount of white space on the left and right sides of my graph. I want to remove the margins and place some graphs horizontally. How can I remove the margins on the left and right side of the app? I got this code from a tutorial I've been following.</p> <pre><code>import streamlit as st import pandas as pd import matplotlib.pyplot as plt import plotly_express as px def stats(dataframe): st.header('Data Statistics') st.write(dataframe.describe()) def data_header(dataframe): st.header('Data Header') st.write(dataframe.head()) def plot(dataframe): fig, ax = plt.subplots(1,1) ax.scatter(x = dataframe['Depth'], y = dataframe['Magnitude']) ax.set_xlabel('Depth') ax.set_ylabel('Magnitude') st.pyplot(fig) def interactive_plot(dataframe): x_axis_val = st.selectbox('Select x-axis attribute', options = df.columns) y_axis_val = st.selectbox('Select y-axis attribute', options = df.columns) col = st.color_picker('Select a plot color') plot = px.scatter(dataframe, x = x_axis_val, y = y_axis_val) plot.update_traces(marker = dict(color = col)) st.plotly_chart(plot) st.title('Earthquake Data Explorer') st.text('This is a web app to explore earthquake data.') # st.markdown('## This is **markdown**.') st.sidebar.title('Navigation') uploaded_file = st.sidebar.file_uploader('Upload your file here.') options = st.sidebar.radio('Pages', options = ['Home', 'Data Statistics', 'Data Header', 'Plot', 'Interactive Plot']) df = pd.read_csv('/Users/deisert/Documents/FAA-Sabin/Dashboard/kaggle_significant_earthquakes_database.csv') if options == 'Data Statistics': stats(df) elif options == 'Data Header': data_header(df) elif options == 'Plot': plot(df) elif options == 'Interactive Plot': interactive_plot(df) </code></pre>
<python><streamlit>
2024-04-23 11:27:47
1
606
325
78,371,846
13,227,420
How to return multiple properties while using a split extension in jsonpath-ng?
<p>I'm working with a dictionary in Python, such as:</p> <pre><code>event = {'name': 'team1 - team2', 'competitionId': 19790, 'sportId': 3} </code></pre> <p>I am trying to extract the name value and split it using the split extension. The extraction works as expected when returning the name value separately, like this:</p> <pre><code>from jsonpath_ng.ext import parse jsonpath_expression = parse(&quot;$.name.`split(-, 0, -1)`&quot;) # This returns: 'team1 ' </code></pre> <p>However, when I try to return multiple properties, including the split name value alongside other properties, I encounter an error:</p> <pre><code>jsonpath_expression = parse(&quot;$.[sportId, dateTime, name.`split(-, 0, -1)`]&quot;) # Error: raise JsonPathParserError('Parse error at %s:%s near token %s (%s)' </code></pre> <p>How can I return multiple properties (including the split name value) at once, while avoiding this error?</p>
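As far as I can tell, the ext parser does not accept backtick extensions such as `` `split(...)` `` inside a multi-field bracket, which is why the combined expression fails to parse. A practical workaround is to evaluate the pieces with separate jsonpath expressions and merge the results in Python — or, since `event` is already a plain dict, to do this one step without jsonpath at all (a sketch; `extract` and the output keys are illustrative names):

```python
def extract(event):
    # Equivalent of combining sportId with name.`split(-, 0, -1)`:
    # plain key access for the scalar field, str.split for the name.
    home = event["name"].split("-", 1)[0].strip()
    return {"sportId": event["sportId"], "home_team": home}
```

Note that `.strip()` also drops the trailing space the raw `split` extension left in `'team1 '`; omit it if you need the exact jsonpath-ng result.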
<python><json><python-3.x><jsonpath-ng>
2024-04-23 11:06:18
1
394
sierra_papa
78,371,754
4,108,376
Storing numpy array in raw binary file
<p>How to store a 2D numpy <code>ndarray</code> in raw binary format? It should become a raw array of float32 values, in row-major order, no padding, without any headers.</p> <p>According to the documentations, <code>ndarray.tofile()</code> can store it as binary or textual, but the <code>format</code> argument is a string to textual formatting. And <code>np.save()</code> saves it in <code>.npy</code> format.</p>
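`tofile()` already does exactly this when called without a `sep` argument: it writes the raw buffer with no header and no padding. The only extra steps are casting to float32 and guaranteeing C (row-major) order, since `tofile` dumps whatever memory layout the array has (a sketch; the filename is arbitrary):

```python
import numpy as np

a = np.arange(6, dtype=np.float64).reshape(2, 3)

# Cast to float32 and force contiguous row-major memory in one call;
# tofile() with no `sep` then writes the raw bytes verbatim.
raw = np.ascontiguousarray(a, dtype=np.float32)
raw.tofile("out.bin")

# Reading back: the file carries no shape/dtype metadata, so both must
# be supplied by the caller.
b = np.fromfile("out.bin", dtype=np.float32).reshape(2, 3)
```

If the array came from a transpose or slice, the `ascontiguousarray` step is what prevents column-major or strided bytes from being written.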
<python><numpy><numpy-ndarray>
2024-04-23 10:50:50
1
9,230
tmlen
78,371,663
6,197,439
PyQt5 QTableView header with two lines of text, each with different font?
<p>The minimal example code below renders the following GUI:</p> <p><a href="https://i.sstatic.net/aqTP7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aqTP7.png" alt="enter image description here" /></a></p> <p>What I would like to achieve, instead, is this (manually edited image):</p> <p><a href="https://i.sstatic.net/Mip7T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mip7T.png" alt="enter image description here" /></a></p> <p>... which is to say, I would like the column (horizontal) headings to have two lines of text, centered: the top line of text in bigger font size, and the bottom one in smaller font size and bold font.</p> <p>I originally hoped that I could just use HTML and QCSS styling for this, but</p> <ul> <li><a href="https://forum.qt.io/topic/30598/solved-how-to-display-subscript-text-in-header-of-qtableview" rel="nofollow noreferrer">https://forum.qt.io/topic/30598/solved-how-to-display-subscript-text-in-header-of-qtableview</a></li> </ul> <blockquote> <p>this is not supported by Qt, you would need to implement a QStyledItemDelegate and set it to your QHeaderView.</p> </blockquote> <p>The example below shows clearly HTML is not accepted in headings, too.</p> <p>Unfortunately, I could not find any example that demonstrates how to use a <code>QStyledItemDelegate</code> with an effect similar to what I'm looking for, so I tried something in my code below from some examples I could find - and as the first screenshot shows, obviously it does not work. 
(and in fact, <code>self.table.setItemDelegateForColumn(0, htest_item)</code> touches all the data cells in the column, - but NOT the heading cell (which is what I'd want changed)!)</p> <p>So, how can I get a heading that has two lines of text, one under the other, where the second line of text has a different font from the first one?</p> <p>Code example:</p> <pre class="lang-python prettyprint-override"><code># base of example from https://www.pythonguis.com/tutorials/qtableview-modelviews-numpy-pandas/ import sys from PyQt5 import QtCore, QtGui, QtWidgets from PyQt5.QtCore import Qt, QTimer from PyQt5.QtWidgets import QStyledItemDelegate, QWidget, QVBoxLayout, QLabel coltxts = { 0: &quot;STANDARD_COLUMN_DESCRIPTION&quot;, 1: &quot;ADVANCED_COLUMN_DESCRIPTION&quot;, 2: &quot;VERBOSE_COLUMN_DESCRIPTION&quot;, } class TableModel(QtCore.QAbstractTableModel): def __init__(self, data): super(TableModel, self).__init__() self._data = data def data(self, index, role): if role == Qt.DisplayRole: return self._data[index.row()][index.column()] def headerData(self, section, orientation, role=Qt.DisplayRole): # SO:64287713 if orientation == Qt.Horizontal: if role == Qt.DisplayRole: retstr = &quot;Column {}&quot;.format(section) if section == 0: retstr += &quot;&lt;br&gt;{}&quot;.format(coltxts[section]) else: retstr += &quot;\n{}&quot;.format(coltxts[section]) return retstr return super().headerData(section, orientation, role) def rowCount(self, index): return len(self._data) def columnCount(self, index): return len(self._data[0]) class MainWindow(QtWidgets.QMainWindow): def __init__(self): super().__init__() self.table = QtWidgets.QTableView() data = [ [4, 9, 2], [1, 0, 0], [3, 5, 0], [3, 3, 2], [7, 8, 9], ] self.model = TableModel(data) self.table.setModel(self.model) htest_item = TwoLineTableWidgetItemDelegate(self.table) self.table.setItemDelegateForColumn(0, htest_item) #self.table.horizontalHeader().setStyleSheet(&quot;QHeaderView { font-size: 8pt; font-weight: bold; 
}&quot;); QTimer.singleShot(10, self.table.resizeColumnsToContents) self.setCentralWidget(self.table) class TwoLineTableWidgetItemDelegate(QStyledItemDelegate): # via SO:52545316, bit of SO:24024815 &quot;&quot;&quot; Tried to start this by building on: https://stackoverflow.com/q/52545316/ https://stackoverflow.com/q/24024815/ &quot;&quot;&quot; def __init__(self, icons, parent=None): super(TwoLineTableWidgetItemDelegate, self).__init__(parent) self.main_widget = QWidget() self.layout = QVBoxLayout() self.label_top = QLabel() self.label_bottom = QLabel() self.layout.addWidget(self.label_top) self.layout.addWidget(self.label_bottom) self.main_widget.setLayout(self.layout) def paint(self, painter, option, index): #icon = self.get_icon(index) self.label_top.setText(str(index.row())) self.label_bottom.setText(str(index.column())) #self.main_widget.paint(painter, option.rect, QtCore.Qt.AlignCenter) # no paint here self.main_widget.update() app=QtWidgets.QApplication(sys.argv) window=MainWindow() window.show() app.exec_() </code></pre>
<python><qt><pyqt5>
2024-04-23 10:35:07
0
5,938
sdbbs
78,371,595
1,818,935
Unexpected output from pandas' DataFrameGroupBy.diff function
<p>Consider the following piece of python code, which is essentially copied from the first code insert in <a href="https://pandas.pydata.org/docs/user_guide/groupby.html#transformation" rel="nofollow noreferrer">the <em>Transformation</em> section</a> of <em>pandas</em>' user guide's <em>Group by: split-apply-combine</em> chapter.</p> <pre><code>import pandas as pd import numpy as np speeds = pd.DataFrame( data = {'class': ['bird', 'bird', 'mammal', 'mammal', 'mammal'], 'order': ['Falconiformes', 'Psittaciformes', 'Carnivora', 'Primates', 'Carnivora'], 'max_speed': [389.0, 24.0, 80.2, np.NaN, 58.0]}, index = ['falcon', 'parrot', 'lion', 'monkey', 'leopard'] ) grouped = speeds.groupby('class')['max_speed'] grouped.diff() </code></pre> <p>When executed in Google Colab, the output is:</p> <pre><code>falcon NaN parrot -365.0 lion NaN monkey NaN leopard NaN Name: max_speed, dtype: float64 </code></pre> <p>This is the same output as shown in the user guide.</p> <p>Why is the value corresponding to the <code>parrot</code> index element <code>-365.0</code> rather than <code>NaN</code> like the rest of the values in this Series?</p>
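This is the documented behaviour rather than a quirk: `diff` runs separately inside each group while keeping the original row order, and the first row of every group has no within-group predecessor. `parrot` is the second `bird`, so its value is `24.0 - 389.0 = -365.0`; every other row is either first in its group (`falcon`, `lion`) or involves the `NaN` at `monkey` (`NaN - 80.2` and `58.0 - NaN` are both `NaN`). The same arithmetic, spelled out:

```python
import numpy as np
import pandas as pd

max_speed = pd.Series(
    [389.0, 24.0, 80.2, np.nan, 58.0],
    index=["falcon", "parrot", "lion", "monkey", "leopard"],
)
cls = pd.Series(
    ["bird", "bird", "mammal", "mammal", "mammal"], index=max_speed.index
)

d = max_speed.groupby(cls).diff()
# parrot  = 24.0 - 389.0 (previous bird)      -> -365.0
# falcon, lion: first row of their group      -> NaN
# monkey  = NaN - 80.2; leopard = 58.0 - NaN  -> NaN
```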
<python><pandas>
2024-04-23 10:21:50
1
6,053
Evan Aad
78,371,117
7,677,894
How to use different "with" under "if" statement in python?
<p>The source code is like:</p> <pre><code>with A: do_some() </code></pre> <p>I want to choose a different <code>with</code> statement based on <code>if</code> conditions, like the code below, and <code>do_some()</code> should still be under the <code>with</code> statement.</p> <pre><code>if cond1: with A: else: with B: do_some() </code></pre> <p>How should I do that?</p>
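A context manager is an ordinary object, so it can be chosen with an expression before the `with` statement (or inline: `with (A if cond1 else B): ...`). A runnable sketch with stand-in context managers, since `A` and `B` are not defined in the question:

```python
from contextlib import contextmanager

@contextmanager
def make_cm(label):
    # Stand-in for the question's A and B context managers.
    yield label

def do_work(cond1):
    # Pick the context manager first, then enter it once: do_some() is
    # written a single time and always runs under the chosen manager.
    cm = make_cm("A") if cond1 else make_cm("B")
    with cm as which:
        return which  # stands in for do_some()
```

For the related case where the `with` block is needed only sometimes, `contextlib.ExitStack` (Python 3.3+) lets you conditionally enter zero or more managers around the same body.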
<python>
2024-04-23 09:06:45
2
983
Ink
78,371,003
14,667,788
Parallelize a process in Python
<p>I am learning how to run simple function on multiple threads in Python.</p> <p>Assume this simple code:</p> <pre class="lang-py prettyprint-override"><code> from itertools import product all_combinations = [] for cas in range(3): target_sum = 10 combinations = product(range(target_sum + 1), repeat=4) valid_combinations = [combo for combo in combinations if sum(combo) == target_sum] all_combinations.append(valid_combinations) compute_combo = [] for a in all_combinations[2]: for b in all_combinations[1]: for c in all_combinations[0]: compute_combo.append([a, b, c]) def foo(bar): max_combo = 0 for j in bar: for a in j: suma = a[0] + a[1] + 3*a[2] + a[3] if suma &gt; max_combo: max_combo = suma return max_combo import time start_time_simple = time.time() result_simple = foo(compute_combo) print(f&quot;1 CPU result: {result_simple}&quot;) end_time_simple = time.time() execution_time_simple = end_time_simple - start_time_simple print(f&quot;1 CPU run time: {execution_time_simple} s&quot;) </code></pre> <p>This is how I try to run the foo() function on multiple threads:</p> <pre class="lang-py prettyprint-override"><code> from multiprocessing.dummy import Pool as ThreadPool start_time_par = time.time() pool = ThreadPool(4) result_par = pool.map(foo, [compute_combo]) pool.close() pool.join() end_time_par = time.time() execution_time_par = end_time_par - start_time_par print(f&quot;4 CPUs result {result_par}&quot;) print(f&quot;4 CPUs run time: {execution_time_par} s&quot;) </code></pre> <p>But the run time is the same, what is the problem here, please? Thanks a lot</p>
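Two things keep the timing identical. First, `multiprocessing.dummy` is a *thread* pool, and CPU-bound pure-Python code cannot run threads in parallel under the GIL. Second, `pool.map(foo, [compute_combo])` passes the whole list as a single work item, so even a real process pool would execute exactly one task. A sketch that splits the work into chunks for a process pool (`parallel_max` and `chunk_max` are illustrative names):

```python
from multiprocessing import Pool

def chunk_max(chunk):
    # Same reduction as foo(), applied to one slice of the work.
    best = 0
    for group in chunk:
        for a in group:
            s = a[0] + a[1] + 3 * a[2] + a[3]
            if s > best:
                best = s
    return best

def parallel_max(compute_combo, workers=4):
    # One task per chunk, not one task for the whole list.
    size = max(1, len(compute_combo) // workers)
    chunks = [compute_combo[i:i + size]
              for i in range(0, len(compute_combo), size)]
    with Pool(workers) as pool:  # processes, not threads
        return max(pool.map(chunk_max, chunks))
```

Whether this actually wins depends on the input size: starting worker processes and pickling the chunks cost real time, so small inputs can still be slower than the single-process loop. On platforms that spawn rather than fork, the calling code must also sit under an `if __name__ == "__main__":` guard.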
<python><parallel-processing>
2024-04-23 08:48:04
1
1,265
vojtam
78,370,980
11,149,556
Customizing Y-axis Major Ticks on Symlog Scale
<p>I am working on a boxplot in Python using Seaborn and Matplotlib where the Y-axis is set to a symmetrical logarithmic scale. I'm attempting to place major ticks at specific intervals using a custom <code>SymmetricalLogLocator</code>, but I'm encountering unexpected results. Below is my minimal working example:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from matplotlib import ticker url = 'https://public.tableau.com/app/sample-data/titanic%20passenger%20list.csv' titanic = pd.read_csv(url) ax = sns.boxplot(x=titanic[&quot;age&quot;].round(-1), y=titanic[&quot;fare&quot;], native_scale=True) ax.set_yscale('symlog', linthresh=1) ax.set_ylim(bottom=-1, top=10**8) # Extending the y-axis upper limit # Attempting to set major ticks at specific intervals subs = [10**i for i in range(0, 8, 2)] ax.yaxis.set_major_locator(ticker.SymmetricalLogLocator(base=10.0, linthresh=1, subs=subs)) plt.show() </code></pre> <p>I intended to generate major ticks every two power steps with <code>subs=[10**i for i in range(0, 8, 2)]</code>, expecting ticks at 1, 100, 10,000, etc. However, the actual plot displays ticks at every power of 10 (1, 10, 100, etc.). What am I misunderstanding or doing incorrectly in specifying the <code>subs</code> parameter for <code>SymmetricalLogLocator</code>?</p> <p><a href="https://i.sstatic.net/5sAnV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5sAnV.png" alt="enter image description here" /></a></p>
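The `subs` argument lists *intra-decade* multiples (e.g. `[2.0, 5.0]` puts minor-style ticks at 2×10ⁿ and 5×10ⁿ within every decade), not the decade positions themselves — which is why every power of ten still gets a tick. One workaround sketch (not the only one) is to pin the positions explicitly with `FixedLocator`:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib import ticker

fig, ax = plt.subplots()
ax.set_yscale("symlog", linthresh=1)
ax.set_ylim(bottom=-1, top=10**8)

# List the desired tick positions directly instead of fighting `subs`:
ticks = [10**i for i in range(0, 9, 2)]  # 1, 100, 10_000, ...
ax.yaxis.set_major_locator(ticker.FixedLocator(ticks))
fig.canvas.draw()
```

The same `ticks` list can feed `ax.set_yticks(ticks)` for an even shorter version.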
<python><matplotlib><seaborn><logarithm><yticks>
2024-04-23 08:44:57
0
479
ex1led
78,370,958
2,233,500
How to make AWS Lambda / API Gateway return an image?
<h2>Original question</h2> <p>I'm writing a Lambda function and I want it to return an image and I cannot make it work.</p> <p>Usually, given the URL of an ordinary image, in Python, I can load the image using PILLOW and BytesIO this way:</p> <pre class="lang-py prettyprint-override"><code>response = requests.get(IMAGE_URL) pil_image = Image.open(BytesIO(response.content)) pil_image.show() </code></pre> <p>If I use my Lambda + API Gateway endpoint, the Python code I need to write is:</p> <pre class="lang-py prettyprint-override"><code>response = requests.post(API_URL, json={&quot;url&quot;: IMAGE_URL}) pil_image = Image.open(BytesIO(base64.b64decode(response.content))) pil_image.show() </code></pre> <p>If I don't decode the content, it doesn't work.</p> <p>I have done the following:</p> <ol> <li>Response of the Lambda function:</li> </ol> <pre class="lang-py prettyprint-override"><code>file_object = BytesIO() pil_image.save(file_object, format=&quot;JPEG&quot;, quality=95) image_str = base64.b64encode(file_object.getvalue()).decode(&quot;ascii&quot;) response = { &quot;isBase64Encoded&quot;: True, &quot;statusCode&quot;: 200, &quot;headers&quot;: {&quot;Content-Type&quot;: &quot;image/jpeg&quot;}, &quot;body&quot;: image_str, } </code></pre> <ol start="2"> <li>In API Gateway, I created a new resource and a POST method with the following properties:</li> </ol> <ul> <li>Integration type: Lambda</li> <li>Lambda proxy integration: Activated</li> </ul> <ol start="3"> <li><p>In API Gateway, in the newly created POST method, in the Method response, I added the Content-type <code>image/jpeg</code></p> </li> <li><p>In API Gateway, in the API settings, I added the <code>image/jpeg</code> binary media types</p> </li> </ol> <p>I've read that API Gateway is supposed to convert the image, but maybe I'm wrong. 
I don't get what I am missing.</p> <p>I have tried to follow different tutorials, including AWS ones, and yet, I couldn't make it work as some options are not available in the console. Here are some of the links:</p> <p><a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/lambda-proxy-binary-media.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/apigateway/latest/developerguide/lambda-proxy-binary-media.html</a></p> <p><a href="https://stackoverflow.com/questions/35804042/aws-api-gateway-and-lambda-to-return-image">AWS API Gateway and Lambda to return image</a></p> <p><a href="https://stackoverflow.com/questions/41429551/aws-api-gateway-base64decode-produces-garbled-binary/41434295#41434295">AWS API Gateway base64Decode produces garbled binary?</a></p> <h2>Edit / Modifications</h2> <p>After trying many things and I modified the steps as follow:</p> <ol> <li>Unchanged</li> <li>I created a GET method instead of a POST (should not change anything)</li> <li>I did not add the Content-type <code>image/jpeg</code> in the Method response</li> <li>I added the <code>*/*</code> binary media types in the API settings</li> </ol> <p>These steps seem to be working and I am able to load the image from the response.</p> <p>The only problem with this solution is that API Gateway encodes the payload and I need to do something like <code>json.loads(base64.b64decode(event[&quot;body&quot;]))[&quot;url&quot;])</code> in the Lambda function to access the input URL.</p> <p>Maybe there's a better way?</p>
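On the final decoding point: with Lambda proxy integration, API Gateway base64-encodes the request body whenever it matches a configured binary media type (and `*/*` matches everything), setting `isBase64Encoded` on the event accordingly. A hedged handler sketch that accepts either form — the JPEG bytes here are a placeholder, not a real image:

```python
import base64
import json

def lambda_handler(event, context):
    # Proxy integration may hand the body over base64-encoded; accept both.
    raw = event["body"]
    payload = json.loads(base64.b64decode(raw) if event.get("isBase64Encoded") else raw)
    url = payload["url"]  # the requested image URL (unused in this sketch)

    image_bytes = b"\xff\xd8\xff\xe0placeholder"  # stand-in for real JPEG bytes

    return {
        "isBase64Encoded": True,  # tells API Gateway to decode before responding
        "statusCode": 200,
        "headers": {"Content-Type": "image/jpeg"},
        "body": base64.b64encode(image_bytes).decode("ascii"),
    }
```

Checking `event.get("isBase64Encoded")` instead of always decoding keeps the handler working whether or not the request's content type matched a binary media type.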
<python><aws-lambda><aws-api-gateway>
2024-04-23 08:42:12
0
867
Vincent Garcia
78,370,548
6,573,770
Filter data from dataframe using each element of a list
<p>I have a dataframe:</p> <pre><code>import pandas as pd </code></pre> <p>#Create a sample dataframe</p> <pre><code>df = pd.DataFrame({ 'material_sub_category_id': [8038, 10063, 8038, 9539], 'auction_id': [400, 401, 402, 403], 'material_name': ['pig iron', 'sponge iron', 'pig iron' , 'billet'], 'auc_rule': ['yankee', 'english no tie', 'yankee', 'english no tie'], 'h1_price': [27200, 24678, 27800, 34000] }) </code></pre> <p>I have created a list:</p> <pre><code>subcat_list = df['material_sub_category_id'].unique().tolist() Gives the output [8038, 10063, 9627, 9539] </code></pre> <p>Expected df: This I have done for 1 subcat</p> <pre><code>df_8038 material_sub_category_id | auction_id | material_name | auc_rule | h1_price 8038 400 'pig iron' 'yankee' 27200 8038 402 'pig iron' 'yankee' 27800 </code></pre> <p>I am trying to filter out the dataframe for each 'material_sub_category_id' using a loop.</p>
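One way to get a per-sub-category frame without an explicit loop over `subcat_list` is `groupby`, collecting the groups into a dict keyed by id — a sketch using the question's sample data:

```python
import pandas as pd

df = pd.DataFrame({
    "material_sub_category_id": [8038, 10063, 8038, 9539],
    "auction_id": [400, 401, 402, 403],
    "material_name": ["pig iron", "sponge iron", "pig iron", "billet"],
    "auc_rule": ["yankee", "english no tie", "yankee", "english no tie"],
    "h1_price": [27200, 24678, 27800, 34000],
})

# One sub-frame per sub-category id; groupby iterates the unique ids for you
frames = {sub_id: g.reset_index(drop=True)
          for sub_id, g in df.groupby("material_sub_category_id")}

df_8038 = frames[8038]  # the expected two pig-iron rows
```

If only one id is needed at a time, a plain boolean filter `df[df["material_sub_category_id"] == 8038]` does the same without building the dict.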
<python><pandas><dataframe>
2024-04-23 07:23:58
0
329
Ami
78,370,502
4,277,485
Pandas, find difference between two columns, each having different datatype values
<p>consider following input data</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: left;">prod</th> <th style="text-align: center;">col1</th> <th style="text-align: right;">col2</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">One</td> <td style="text-align: center;">hi</td> <td style="text-align: right;">hello</td> </tr> <tr> <td style="text-align: left;">One</td> <td style="text-align: center;">18.0</td> <td style="text-align: right;">19.52</td> </tr> <tr> <td style="text-align: left;">One</td> <td style="text-align: center;">2024-02-12 00:00:00</td> <td style="text-align: right;">2024-03-07 00:00:00</td> </tr> <tr> <td style="text-align: left;">two</td> <td style="text-align: center;">2024-02-12 00:00:00</td> <td style="text-align: right;">2024-02-11 00:00:00</td> </tr> <tr> <td style="text-align: left;">two</td> <td style="text-align: center;">in-transit</td> <td style="text-align: right;">in-stock</td> </tr> </tbody> </table></div> <p>want to find difference between col1 and col2, since there is difference in datatype in each row, I am facing difficulty to apply pandas functions. 
using SQL knowledge tried this code but didn't work</p> <p>logic:</p> <ol> <li>if str then difference = &quot;not same&quot;</li> <li>if datetime then difference = (col2-col1).days</li> <li>else difference = col2 - col1</li> </ol> <pre><code>df[&quot;difference&quot;] = np.where( df['col2'].apply(lambda x: isinstance(x, str)), &quot;not same&quot;, df[&quot;col2&quot;].apply(lambda x: isinstance(x, datetime)), (df['col2'] - df['col1']).dt.days, df['old_value'] - df['new_value']) </code></pre> <p>** Not getting expected output, datetime is still in timedelta</p> <p>Expected output:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: left;">prod</th> <th style="text-align: center;">col1</th> <th style="text-align: center;">col2</th> <th style="text-align: right;">difference</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">One</td> <td style="text-align: center;">hi</td> <td style="text-align: center;">hello</td> <td style="text-align: right;">not same</td> </tr> <tr> <td style="text-align: left;">One</td> <td style="text-align: center;">18.0</td> <td style="text-align: center;">19.52</td> <td style="text-align: right;">1.52</td> </tr> <tr> <td style="text-align: left;">One</td> <td style="text-align: center;">2024-02-12 00:00:00</td> <td style="text-align: center;">2024-03-07 00:00:00</td> <td style="text-align: right;">25</td> </tr> <tr> <td style="text-align: left;">two</td> <td style="text-align: center;">2024-02-12 00:00:00</td> <td style="text-align: center;">2024-02-11 00:00:00</td> <td style="text-align: right;">1</td> </tr> <tr> <td style="text-align: left;">two</td> <td style="text-align: center;">in-transit</td> <td style="text-align: center;">in-stock</td> <td style="text-align: right;">not same</td> </tr> </tbody> </table></div> <p>Any other approach please suggest</p>
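`np.where` takes exactly three arguments (condition, if-true, if-false), so the five-argument call above cannot work; with per-row mixed types, an element-wise function is the simpler route. A sketch of the rule on plain lists (the same comprehension drops straight into a DataFrame column):

```python
from datetime import datetime

def cell_diff(a, b):
    # Row-wise rule from the question: strings -> "not same",
    # datetimes -> whole-day gap, numbers -> numeric difference.
    if isinstance(a, str) or isinstance(b, str):
        return "not same"
    if isinstance(a, datetime) and isinstance(b, datetime):
        return abs((b - a).days)
    return round(b - a, 2)

col1 = ["hi", 18.0, datetime(2024, 2, 12), datetime(2024, 2, 12), "in-transit"]
col2 = ["hello", 19.52, datetime(2024, 3, 7), datetime(2024, 2, 11), "in-stock"]
difference = [cell_diff(a, b) for a, b in zip(col1, col2)]
```

With a DataFrame this becomes `df["difference"] = [cell_diff(a, b) for a, b in zip(df["col1"], df["col2"])]`. (Note the 2024-02-12 → 2024-03-07 gap is 24 days in a leap year, not the 25 shown in the expected output.)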
<python><pandas><datetime><timedelta>
2024-04-23 07:17:18
2
438
Kavya shree
78,370,307
3,142,695
How to change variable/dynamic json content
<p>This is how I modify a tsconfig.json file in a python script. But as you can see, the most part is quite the same, so this is not a very elegant solution. As I'm new to python, I need some help how to write this code more efficient.</p> <p>And also there is another <code>if</code> case where I have to add a single parameter (e.g. <code>&quot;noUncheckedIndexedAccess&quot;: False</code>) or set an existing one to different value (<code>False</code>).</p> <pre><code>with open('/'.join([directory, &quot;tsconfig.json&quot;]),'r+') as file: file_data = json.load(file) if plugin == &quot;node&quot;: file_data[&quot;compilerOptions&quot;] = { &quot;module&quot;: &quot;commonjs&quot;, &quot;strict&quot;: True, &quot;alwaysStrict&quot;: True, &quot;noImplicitAny&quot;: True, &quot;noImplicitThis&quot;: True, &quot;strictNullChecks&quot;: True, &quot;strictBindCallApply&quot;: True, &quot;strictFunctionTypes&quot;: True, &quot;strictPropertyInitialization&quot;: True, &quot;useUnknownInCatchVariables&quot;: True, &quot;noUnusedLocals&quot;: True, &quot;noUnusedParameters&quot;: True, &quot;allowUnusedLabels&quot;: True, &quot;allowUnreachableCode&quot;: True, &quot;noImplicitOverride&quot;: True, &quot;noImplicitReturns&quot;: True, &quot;noUncheckedIndexedAccess&quot;: True, &quot;noPropertyAccessFromIndexSignature&quot;: True, &quot;noFallthroughCasesInSwitch&quot;: True, &quot;exactOptionalPropertyTypes&quot;: True, &quot;forceConsistentCasingInFileNames&quot;: True } elif plugin == 'react': file_data[&quot;compilerOptions&quot;] = { &quot;jsx&quot;: &quot;react-jsx&quot;, &quot;allowJs&quot;: False, &quot;esModuleInterop&quot;: False, &quot;allowSyntheticDefaultImports&quot;: True, &quot;strict&quot;: True, &quot;alwaysStrict&quot;: True, &quot;noImplicitAny&quot;: True, &quot;noImplicitThis&quot;: True, &quot;strictNullChecks&quot;: True, &quot;strictBindCallApply&quot;: True, &quot;strictFunctionTypes&quot;: True, &quot;strictPropertyInitialization&quot;: True, 
&quot;useUnknownInCatchVariables&quot;: True, &quot;noUnusedLocals&quot;: True, &quot;noUnusedParameters&quot;: True, &quot;allowUnusedLabels&quot;: True, &quot;allowUnreachableCode&quot;: True, &quot;noImplicitOverride&quot;: True, &quot;noImplicitReturns&quot;: True, &quot;noUncheckedIndexedAccess&quot;: True, &quot;noPropertyAccessFromIndexSignature&quot;: True, &quot;noFallthroughCasesInSwitch&quot;: True, &quot;exactOptionalPropertyTypes&quot;: True, &quot;forceConsistentCasingInFileNames&quot;: True } file.seek(0) json.dump(file_data, file, indent = 2) </code></pre>
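Since the two branches share almost every key, one way to shrink this is a shared base dict plus small per-plugin overrides merged on top; an `extra` parameter then covers the "add or flip a single option" case. A sketch (option names copied from the question):

```python
STRICT_BASE = {
    "strict": True, "alwaysStrict": True, "noImplicitAny": True,
    "noImplicitThis": True, "strictNullChecks": True, "strictBindCallApply": True,
    "strictFunctionTypes": True, "strictPropertyInitialization": True,
    "useUnknownInCatchVariables": True, "noUnusedLocals": True,
    "noUnusedParameters": True, "allowUnusedLabels": True,
    "allowUnreachableCode": True, "noImplicitOverride": True,
    "noImplicitReturns": True, "noUncheckedIndexedAccess": True,
    "noPropertyAccessFromIndexSignature": True, "noFallthroughCasesInSwitch": True,
    "exactOptionalPropertyTypes": True, "forceConsistentCasingInFileNames": True,
}

PLUGIN_OVERRIDES = {
    "node": {"module": "commonjs"},
    "react": {"jsx": "react-jsx", "allowJs": False, "esModuleInterop": False,
              "allowSyntheticDefaultImports": True},
}

def compiler_options(plugin, extra=None):
    # Shared options once, plugin-specific ones merged on top; `extra`
    # adds or flips individual parameters, e.g. {"noUncheckedIndexedAccess": False}.
    opts = {**STRICT_BASE, **PLUGIN_OVERRIDES.get(plugin, {})}
    opts.update(extra or {})
    return opts
```

One unrelated bug worth noting in the original: after `file.seek(0)` and `json.dump(...)` on an `r+` handle, call `file.truncate()`, otherwise a shorter result leaves trailing bytes from the old file contents.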
<python>
2024-04-23 06:40:35
1
17,484
user3142695
78,370,271
498,154
Dependency check with liccheck fails - Expected matching RIGHT_PARENTHESIS for LEFT_PARENTHESIS, after version specifier
<p>In a django app, with poetry as the dependency/packaging manager, I am using a library (also a django one) that is used by multiple django apps.</p> <p>Those different apps are using different versions of django. So, in the <code>pyproject.toml</code> file <strong>of the library</strong>, the <code>django</code> dependency is defined like so:</p> <pre><code>django = &quot;&gt;=3.2.18,&lt;4.0.0 || &gt;=4.0.10,&lt;4.1.0 || &gt;=4.1.7&quot; </code></pre> <p>while in my application I am have:</p> <pre><code>django=&quot;4.1.13&quot; </code></pre> <p>The dependency check with liccheck is done as follows:</p> <pre><code>poetry export --without-hashes -f requirements.txt &gt; requirements.txt liccheck -r requirements.txt </code></pre> <p>After I run the check, it fails for with the following error message:</p> <pre><code>pkg_resources.extern.packaging.requirements.InvalidRequirement: Expected matching RIGHT_PARENTHESIS for LEFT_PARENTHESIS, after version specifier django (&gt;=3.2.18,&lt;4.0.0 || &gt;=4.0.10,&lt;4.1.0 || &gt;=4.1.7) ~~~~~~~~~~~~~~~~~^ </code></pre> <p>It seems that the django dependency from the library is affecting the dependency check. Initially I had the 0.9.1 version of liccheck and I have already updated it to the latest 0.9.2 but with the same results.</p> <p>Any idea?</p>
<python><django><python-poetry>
2024-04-23 06:32:44
0
334
kostia
78,369,936
10,003,538
Django CloudinaryField: Setting Default Privacy on Upload and Generating Presigned URLs for Public Access
<p>I'm working on a Django project where I have a model Media with an image field stored using CloudinaryField.</p> <p>Currently, I'm able to upload images via multipart/form, and I can generate a public link using <code>media_object.image.url</code>. However, I'm facing two challenges:</p> <ol> <li><p>Default Upload Privacy: How can I set the default privacy of the uploaded images to be private?</p> </li> <li><p>Generating Presigned URLs: I need to generate presigned URLs for public access to these images, but I want these URLs to expire after a certain period, say 5 minutes.</p> </li> </ol> <p>Here's what my current model looks like:</p> <pre><code>from django.db import models from cloudinary.models import CloudinaryField class Media(models.Model): # Other fields... image = CloudinaryField('document', null=True, blank=True) </code></pre> <p>Could someone please guide me on how to achieve these two objectives? Any help or pointers would be greatly appreciated. Thanks!</p>
<python><python-3.x><django><cloudinary>
2024-04-23 04:50:03
1
1,225
Chau Loi
78,369,880
342,553
error: call to undeclared function 'OPENSSL_sk_find_all'
<p>Trying to install M2Crypto==0.41.0 on Python 3.10 on OSX v13, but got a lot of similar errors like:</p> <pre><code>src/SWIG/_m2crypto_wrap.c:10746:17: error: call to undeclared function 'OPENSSL_sk_find_all'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] result = (int)OPENSSL_sk_find_all(arg1,(void const *)arg2,arg3); ^ src/SWIG/_m2crypto_wrap.c:10746:17: note: did you mean 'OPENSSL_sk_find_ex'? /opt/homebrew/Cellar/openssl@1.1/1.1.1w/include/openssl/stack.h:41:5: note: 'OPENSSL_sk_find_ex' declared here int OPENSSL_sk_find_ex(OPENSSL_STACK *st, const void *data); ^ src/SWIG/_m2crypto_wrap.c:10991:20: error: call to undeclared function 'ossl_check_OPENSSL_STRING_type'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] result = (char *)ossl_check_OPENSSL_STRING_type(arg1); ^ src/SWIG/_m2crypto_wrap.c:10991:20: note: did you mean '_wrap_ossl_check_OPENSSL_STRING_type'? src/SWIG/_m2crypto_wrap.c:10975:22: note: '_wrap_ossl_check_OPENSSL_STRING_type' declared here SWIGINTERN PyObject *_wrap_ossl_check_OPENSSL_STRING_type(PyObject *self, PyObject *args) { ^ src/SWIG/_m2crypto_wrap.c:10991:12: warning: cast to 'char *' from smaller integer type 'int' [-Wint-to-pointer-cast] result = (char *)ossl_check_OPENSSL_STRING_type(arg1); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ </code></pre> <p>Already tried to <code>brew reinstall swig</code> and googled possible answers but to no avail.</p>
<python><m2crypto>
2024-04-23 04:25:21
1
26,828
James Lin
78,369,812
1,592,334
Is there a way to fix can't compare offset-naive and offset-aware datetimes when copying files between s3 buckets
<p>I am trying to write a python script that will move specific files from source s3 bucket to target s3 bucket. The objective is to copy specific files to the target bucket in the initial run. on the second run, it compares the max lastmodified date in target with the lastmodified date in the source, then it uses that to copy new files in the source to the target.</p> <p>This is what I have written</p> <pre><code>import boto3 import os from datetime import datetime, timezone def get_max_last_modified_time(bucket): &quot;&quot;&quot; Get the maximum Last Modified time of files in an S3 bucket. Args: - bucket (str): The name of the S3 bucket. Returns: - datetime: The maximum Last Modified time of files in the bucket (timezone-aware). &quot;&quot;&quot; s3_client = boto3.client('s3') response = s3_client.list_objects_v2(Bucket=bucket) files = response.get('Contents', []) if not files: return None # Convert Last Modified timestamps to datetime objects with UTC timezone last_modified_times = [file['LastModified'].astimezone(timezone.utc) for file in files] # Return the maximum Last Modified time as a timezone-aware datetime object return max(last_modified_times) def copy_files(source_bucket, target_bucket): &quot;&quot;&quot; Copy new files from source bucket to target bucket, excluding files containing 'example' in their filename. Args: - source_bucket (str): The name of the source bucket. - target_bucket (str): The name of the target bucket. 
Returns: - None &quot;&quot;&quot; # Initialize the S3 client s3_client = boto3.client('s3') # Get the maximum Last Modified time of files in the source bucket source_max_last_modified_time = get_max_last_modified_time(source_bucket) # Get the maximum Last Modified time of files in the target bucket target_max_last_modified_time = get_max_last_modified_time(target_bucket) # If there are no files in the target bucket, set the maximum Last Modified time to None if not target_max_last_modified_time: target_max_last_modified_time = datetime.min # List objects in the source bucket response = s3_client.list_objects_v2(Bucket=source_bucket) files = response.get('Contents', []) # Iterate over the files in the source bucket for file_obj in files: file_key = file_obj['Key'] file_name = os.path.basename(file_key) # Check if the file name contains 'example' if 'example' in file_name.lower(): print(f&quot;File containing 'example' found: '{file_name}'&quot;) else: # Get the Last Modified time of the file in the source bucket source_last_modified_time = file_obj['LastModified'] # Skip files that have not been modified since the last execution if source_last_modified_time &gt; target_max_last_modified_time: # Copy the file to the target bucket s3_client.copy_object( Bucket=target_bucket, CopySource={'Bucket': source_bucket, 'Key': file_key}, Key=file_key ) print(f&quot;Successfully copied new file '{file_name}' from '{source_bucket}' to '{target_bucket}'&quot;) if __name__ == &quot;__main__&quot;: # Specify the source and target bucket names source_bucket = 'queen-data-lake' target_bucket = 'queen-output' # Call the copy_files function copy_files(source_bucket, target_bucket) </code></pre> <p>I seem to be getting this error. 
also I wonder if this is an effective approach to handle millions of file</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) /var/folders/06/2zj_616d5vx22y29nq1jq2hc0000gn/T/ipykernel_30361/3639448357.py in &lt;module&gt; 120 121 # Call the copy_files function --&gt; 122 copy_files(source_bucket, target_bucket) /var/folders/06/2zj_616d5vx22y29nq1jq2hc0000gn/T/ipykernel_30361/3639448357.py in copy_files(source_bucket, target_bucket) 105 106 # Skip files that have not been modified since the last execution --&gt; 107 if source_last_modified_time &gt; target_max_last_modified_time: 108 # Copy the file to the target bucket 109 s3_client.copy_object( TypeError: can't compare offset-naive and offset-aware datetimes </code></pre>
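The error comes from the fallback `target_max_last_modified_time = datetime.min`, which is timezone-naive, being compared against S3's UTC-aware `LastModified` values — Python refuses to order the two. A minimal sketch of the fix:

```python
from datetime import datetime, timezone

# datetime.min is naive, but S3's LastModified values are UTC-aware.
# Anchor the fallback in UTC so the comparison is aware-vs-aware:
fallback = datetime.min.replace(tzinfo=timezone.utc)
last_modified = datetime(2024, 4, 23, 3, 56, tzinfo=timezone.utc)  # sample S3 timestamp
is_newer = last_modified > fallback
```

So in `copy_files` the fix is a one-liner on the `if not target_max_last_modified_time:` branch. On the scaling question: `list_objects_v2` returns at most 1,000 keys per call, so for millions of objects the listing code needs a paginator at minimum, and S3 Inventory or S3 Replication is usually a better fit than repeated full listings.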
<python><amazon-web-services><amazon-s3>
2024-04-23 03:56:50
1
1,095
Abiodun Adeoye
78,369,755
4,732,111
Polars compare two dataframes - is there a way to fail immediately on first mismatch
<p>I'm using polars.testing <code>assert_frame_equal</code> method to compare two sorted dataframes containing same columns and below is my code:</p> <pre><code>assert_frame_equal(src_df, tgt_df, check_dtype=False, check_row_order=False) </code></pre> <p>For a dataframe containing 5 million records, it takes long time to report a failure as it compares all the rows between two dataframes. Is there a way that we can make polars to fail immediately and report on first mismatch/failure and stop the execution as we just need to know the first failure. I tried searching through and i'm unable to find any documentation for this requirement.</p> <p>Can someone please help me on this?</p>
<python><pandas><python-polars>
2024-04-23 03:26:09
3
363
Balaji Venkatachalam
78,369,740
12,314,521
Is there any way to cancel object initialization if an error occurs in Python?
<pre><code>class EntityRetrieval(object): def __init__(self, entity_kb_path: str): &quot;&quot;&quot; Args entity_kb_path: path of entity knowledge base (string) which is json format &quot;&quot;&quot; try: entity_kb = json.load(open(entity_kb_path,'r')) except Exception as e: logging.error(e) # Don't use entity type entity_dict = {} for entity_type, entities in entity_kb.items(): for entity_id, list_entity_strings in entities.items(): entity_dict[entity_id] = list_entity_strings </code></pre> <p>Above is my ugly code. Here is what I concern:</p> <ul> <li>Does it really need to catch error to prevent crashing? In the above code, the <code>entity_kb_path</code> has to be json file. There might be incorrect format file error, not found file error. And I try to throw message error to user when they not pass in correct argument instead of throw crash.</li> <li>But when I use try catch, the initialization still running and return an instance, I think this is not logical, I mean it should stop initalizing right?</li> <li>If try except puts there then there something weird that the <code>entity_kb</code> variable might not be declared. But the process block code behind that is too long. I might not want to put it in the <code>try</code> block</li> </ul> <p>So how do you do in those scenarios?</p>
<python><class>
2024-04-23 03:22:24
2
351
jupyter
78,369,674
8,124,392
How to autoselect a result from a list
<p>This is the output of my Segment Anything Model: <a href="https://i.sstatic.net/fVga7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fVga7.png" alt="enter image description here" /></a></p> <p>These are the segmentation results:</p> <pre><code>masks = [ mask['segmentation'] for mask in sorted(sam_result, key=lambda x: x['area'], reverse=True) ] sv.plot_images_grid( images=masks, grid_size=(8, int(len(masks) / 8)), size=(16, 16) ) </code></pre> <p><a href="https://i.sstatic.net/TMrHi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMrHi.png" alt="enter image description here" /></a></p> <p>The second segmentation in the list is what I want. My goal is to use that to erase out the background and just maintaing the car.</p> <p>However, I want to do this automatically and while I can automate the process, the caveat is that I will have cars of different shapes. How do I automatically segment out the entirety of the car? How to go about this?</p>
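One cheap, fully automatic heuristic — a sketch, not a robust solution — is to assume the subject sits at the image centre and pick the largest mask covering that pixel:

```python
def pick_subject_mask(masks, height, width):
    """Heuristic sketch: assume the subject (the car) is the largest
    segment covering the image centre. masks are 2-D boolean grids."""
    cy, cx = height // 2, width // 2
    covering = [m for m in masks if m[cy][cx]]
    candidates = covering or masks  # fall back to all masks if none cover the centre
    return max(candidates, key=lambda m: sum(sum(row) for row in m))
```

With SAM output this would be called as `pick_subject_mask([m["segmentation"] for m in sam_result], H, W)`. For robustness across car shapes and positions, the usual fully automatic route is to prompt SAM with a bounding box from an object detector, or to score each mask crop against the text "a car" with CLIP — the centre heuristic is only a first cut.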
<python><deep-learning><computer-vision><image-segmentation>
2024-04-23 02:55:38
2
3,203
mchd
78,369,583
1,115,716
Combine two rows in a CSV and preserve the number of columns
<p>I have a CSV file with some values represented thusly:</p> <pre><code>Superman/, 250lbs, Batman/, 180lbs Clark Kent, , Bruce Wayne, </code></pre> <p>I need to combine the rows such that it looks like:</p> <pre><code>Superman/Clark Kent, 250lbs, Batman/Bruce Wayne, 180lbs </code></pre> <p>Obviously I need to delete that second row once merged. Can the <code>CSV</code> library do this? I know there's the <code>extend(...)</code> function, but I need to keep the column number the same.</p>
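The `csv` module reads the rows; pairing and cell-wise concatenation is plain Python. A sketch that assumes every logical record spans exactly two physical rows with the same column count (the question's shape):

```python
import csv
import io

raw = """Superman/, 250lbs, Batman/, 180lbs
Clark Kent, , Bruce Wayne,
"""

rows = [[cell.strip() for cell in row] for row in csv.reader(io.StringIO(raw))]

merged = []
for first, second in zip(rows[0::2], rows[1::2]):
    # Concatenate cell-by-cell: empty cells contribute nothing,
    # so the column count stays exactly the same.
    merged.append([a + b for a, b in zip(first, second)])
```

For a real file, swap `io.StringIO(raw)` for `open(path, newline="")` and write `merged` back out with `csv.writer`; the second row of each pair disappears simply because only the merged row is emitted.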
<python><csv>
2024-04-23 02:04:54
1
1,842
easythrees
78,369,487
16,312,980
What to put in dt parameter or "sample spacing" for pycwt's `wct()` or their wavelet coherence test function?
<p>I am trying to create some stuff from an academic paper, and one point it <a href="https://www.mathworks.com/help/wavelet/ref/wcoherence.html" rel="nofollow noreferrer">requires wavelet coherence or <code>wcoherence()</code> from <code>matlab</code></a>.</p> <p>I am required to use python so the next best alternative is <code>pycwt</code> library. <a href="http://regeirk.github.io/pycwt/reference/#pycwt.wct" rel="nofollow noreferrer"><code>wct()</code> has a compulsory <code>dt</code> parameter for sample spacing.</a></p> <p>I am sorry but I am new to signal processing. It may be some fundamentals that I have skipped.</p>
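Independent of pycwt: `dt` is simply the sampling interval — the time between two consecutive samples of a *uniformly sampled* series, in whatever unit the time axis uses. A sketch with hypothetical values:

```python
# dt is the sampling interval: the time between two consecutive samples,
# in whatever unit your time axis uses (seconds, days, years, ...).
t = [0.0, 0.25, 0.5, 0.75, 1.0]  # hypothetical uniformly sampled time axis
dt = t[1] - t[0]                 # here 0.25, e.g. quarterly data on a yearly axis
sampling_rate = 1.0 / dt         # equivalently, dt = 1 / sampling_rate
```

MATLAB's `wcoherence` infers this from its sampling-frequency argument; pycwt asks for the interval directly. So daily data on a day axis gets `dt=1`, hourly data on a seconds axis gets `dt=3600`, and so on — and both tools require the series to be uniformly sampled.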
<python><matlab><wavelet><wavelet-transform>
2024-04-23 01:26:16
0
426
Ryan
78,369,485
259,543
Pass array argument into C++ function from Python via SWIG
<p>I've got an interface such as:</p> <pre class="lang-cpp prettyprint-override"><code>int func(int array[]); </code></pre> <p>I must call it from Python via SWIG. <em>The binding code is already compiled</em> and I do not want to edit, compile or otherwise mess with SWIG, but the overload for <code>func(int[])</code> seems available -- SWIG prints it in the exception log.</p> <p>How can I call this? It should be pretty standard, but neither <code>func((1,2,3))</code> nor <code>func([1,2,3])</code> works and there doesn't seem to be any obvious way to do it.</p>
<python><c++><swig>
2024-04-23 01:25:28
1
5,252
alecov
78,369,157
10,962,766
Regex pattern for different date formats does not seem to capture all desired cases
<p>I am trying to process a txt file line per line to find date information in different patterns and write them to a consistent YYYY, YYYY-MM and YYYY-MM-DD format.</p> <p>My input formats across different lines of narrative text are:</p> <pre><code>1) YYYY, e.g. 1890 2) MM.YYYY, e.g. 10.1765 3) M.YYYY, e.g. 9.1700 4) DD.MM.YYYY, e.g. 11.11.1876 5) D.MM.YYYY, e.g. 9.10.1678 6) D.M.YYYY, e.g. 9.1.1768 7) DD.M.YYYY, e.g. 21.3.1789 8) DD.MM., e.g. 12.12. (no year) 9) D.M., e.g. 1.1. (no year) </code></pre> <p>In context, the concerned lines typically look like this:</p> <pre><code>&quot;7.1.1695 jur. geprüft&quot; &quot;12.1. verteidigt unter v. Haaren&quot; &quot;ord. [Weihe] Mainz 21.9.1743&quot; &quot;erhielt 1786 die Pfarrei Irmstraut (Diöz. Trier)&quot; &quot;ein Anton Alperstätt 20.9. 1748 bacc.&quot; </code></pre> <p>I have been trying to write a regex pattern that captures as many of these cases as possible and replaces the identified strings (extracted from a more extended code) with my desired output formats:</p> <pre><code> def replace_dates(merged_lines): def format_date(search): year = &quot;&quot; month = &quot;&quot; day = &quot;&quot; # check how many groups are in the pattern num_groups = search.groups() print(num_groups) if len(num_groups) == 0: year = &quot;0000&quot; if len(num_groups) == 1: year = search.group(0) print(year) elif 2 &lt; len(num_groups) &lt;= 4: day = search.group(1) if search.group(1) else '00' month = search.group(3) if search.group(3) else '00' elif num_groups == 5: day = search.group(1) if search.group(1) else '00' month = search.group(3) if search.group(3) else '00' year = search.group(5) # determine the output format if month != '00' and day != '00': return f&quot;{year}-{month}-{day}&quot; # Format: YYYY-MM-DD elif month != '00': return f&quot;{year}-{month}&quot; # Format: YYYY-MM else: return year # Format: YYYY # Compile regex pattern date_pattern = 
re.compile(r'(?&lt;!\d)(\d{1,2})([.-]|\s)?(\d{1,2})?([.-]|\s)?(\d{4})(?!\d)') #Group 1: day (1 or 2 digits) #Group 2: separator between day and month (if present) #Group 3: month (1 or 2 digits) #Group 4: separator between month and year (if present) #Group 5: year (4 digits) replaced_lines = [] for line in merged_lines: if line.startswith(&quot;[Source]&quot;): replaced_lines.append(line) else: searches = date_pattern.finditer(line) for search in searches: line = line.replace(search.group(0), format_date(search)) replaced_lines.append(line) return replaced_lines </code></pre> <p>Looking at the searches that are being captured, however, I have several issues:</p> <ol> <li>Instances where only the year is present are somehow being ignored.</li> <li>Cases where I have time ranges in my data, e.g. &quot;1786-1790&quot;, are retrieved as ('17', None, '86', '-', '1790').</li> </ol> <p>If more than one date is in each row, I would gladly only process the first one to keep things simple. I have also considered abandoning the idea of a single regex and processing each case separately, but I fear this would make the script unnecessarily complex.</p>
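Both reported issues trace back to the single pattern: a bare year like `1786` never matches because `(\d{1,2})` plus `(\d{4})` needs at least six digits' worth of material, and in `1786-1790` the engine salvages a match by splitting `1786` into day `17` and month `86` — exactly the observed `('17', None, '86', '-', '1790')`. One sketch of the "separate patterns, most specific first" alternative (no-year cases like `12.1.` are deliberately left out here):

```python
import re

# Try the most specific pattern first, so a bare year or a range like
# 1786-1790 is never half-consumed by the day/month pattern.
FULL = re.compile(r"(?<!\d)(\d{1,2})\.(\d{1,2})\.(\d{4})(?!\d)")        # D(D).M(M).YYYY
MONTH_YEAR = re.compile(r"(?<!\d)(\d{1,2})\.(\d{4})(?!\d)")             # M(M).YYYY
YEAR = re.compile(r"(?<!\d)(\d{4})(?!\d)")                              # YYYY (first of a range)

def first_date(line):
    m = FULL.search(line)
    if m:
        d, mo, y = m.groups()
        return f"{y}-{int(mo):02d}-{int(d):02d}"
    m = MONTH_YEAR.search(line)
    if m:
        mo, y = m.groups()
        return f"{y}-{int(mo):02d}"
    m = YEAR.search(line)
    if m:
        return m.group(1)
    return None
```

Because `YEAR` only needs `(?<!\d)...(?!\d)` boundaries, a trailing `-1790` does not block it, so ranges yield their first year. Extending this to the year-less `DD.MM.` forms would add one more pattern at the end of the cascade.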
<python><regex><match>
2024-04-22 22:38:37
2
498
OnceUponATime
78,369,152
85,381
Custom orange learner in python script is failing
<p>I am trying to implement a custom learner in Orange similar to the stack learner.</p> <p><a href="https://i.sstatic.net/hTrHw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hTrHw.png" alt="enter image description here" /></a></p> <p>The input learners return <code>yes</code>, <code>no</code> and <code>maybe</code>. I want to implement an aggregate learner that takes the values of multiple input learners and applies the following rule</p> <ul> <li>If any <code>yes</code> =&gt; <code>yes</code></li> <li>Else if any <code>maybe</code> =&gt; <code>maybe</code></li> <li>Otherwise =&gt; <code>no</code></li> </ul> <p>I have tried this with the following code. It seams to work with the debug statements I added to the bottom (taken from applying <a href="https://orangedatamining.com/blog/learners-in-python/" rel="nofollow noreferrer">learners documentation</a>). This enabled me to reproduce the errors I got in my in the <code>Test and Score</code> step. After I fixed those by returning the values from the input learners, I get an error in the <code>Test and Score</code> that I cannot reproduce or fix.</p> <pre><code>MedicaLearner failed with error: TypeError: only size-1 arrays can be converted to Python scalars </code></pre> <p><a href="https://i.sstatic.net/GbLW9.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GbLW9.jpg" alt="Error in UI" /></a></p> <p>This is the python code for my learner:</p> <pre class="lang-py prettyprint-override"><code>from Orange.classification import Learner, Model from Orange.preprocess import Continuize, RemoveNaNColumns, SklImpute, Normalize from Orange.data.filter import HasClass from Orange.data import Table from operator import countOf from Orange.evaluation import CrossValidation from Orange.evaluation.scoring import CA, AUC import numpy as np import datetime; ct = datetime.datetime.now() print(&quot;current time:-&quot;, ct) class MedicalLearner(Learner): def __init__(self, preprocessors=[HasClass(), 
Continuize(), RemoveNaNColumns(), SklImpute()]): super().__init__(preprocessors=preprocessors) self.name = 'MedicalLearner' def fit_storage(self, data): classifiers = [l(data) for l in in_learners] return MedicalModel(data, classifiers) class MedicalModel(Model): def __init__(self, data, classifiers): super().__init__(data.domain) self.domain = data.domain self.classifiers = classifiers def callClassifier(self, X, classifier): Y = np.zeros((len(X), len(self.domain.class_vars))) Y[:] = np.nan table = Table(self.domain, X, Y) return classifier(table) def predict(self, X): cs = [self.callClassifier(X, c) for c in self.classifiers] values = [self.domain.class_var.values[int(c)] for c in cs] value_map = {self.domain.class_var.values[int(c)] : c for c in cs} yes, no, maybe = countOf(values, &quot;yes&quot;), countOf(values, &quot;no&quot;), countOf(values, &quot;maybe&quot;) if yes &gt; 0: return value_map[&quot;yes&quot;] elif maybe &gt; 0: return value_map[&quot;maybe&quot;] elif no &gt; 0: return value_map[&quot;no&quot;] else: return value_map[&quot;maybe&quot;] out_learner = MedicalLearner() # Test the learner by applying it in code classifiers = [l(in_data) for l in in_learners] cs = [c(in_data[0]) for c in classifiers] values = [in_data.domain.class_var.values[int(c)] for c in cs] print([type(c) for c in cs]) print(values) print(type(in_data[0])) print(type(in_data[0].x)) print(out_learner(in_data)(in_data[0])) </code></pre> <p>The debug script output is:</p> <pre><code>Running script: current time:- 2024-04-22 23:09:56.891111 [&lt;class 'numpy.float64'&gt;, &lt;class 'numpy.int64'&gt;] ['no', 'no'] &lt;class 'Orange.data.table.RowInstance'&gt; &lt;class 'numpy.ndarray'&gt; 1 &gt;&gt;&gt; </code></pre> <p>Any help would be greatly appreciated, either to fix the issue, reproduce the issue in my test or find the stacktrace for the error in the <code>Test and Score</code> step.</p>
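The likely trigger of `only size-1 arrays can be converted to Python scalars` is that Orange hands `Model.predict` a whole *batch* of rows but the method above returns a single value. Ordering the class indices no=0 < maybe=1 < yes=2 turns the "any yes, else any maybe, else no" rule into a column-wise max — a sketch of that aggregation in isolation:

```python
import numpy as np

def aggregate(model_outputs):
    """Element-wise 'any yes > any maybe > no' over several models'
    class-index predictions, assuming the order no=0 < maybe=1 < yes=2."""
    preds = np.asarray(model_outputs)   # shape (n_models, n_rows)
    return preds.max(axis=0)            # one aggregated label per row
```

Inside `MedicalModel.predict` the same idea would look like `return np.max(np.vstack([c(table) for c in self.classifiers]), axis=0)` — returning an array with one entry per row instead of one scalar per batch. Note the no/maybe/yes index order is an assumption here; check the actual order via `self.domain.class_var.values` and remap if it differs.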
<python><data-mining><orange>
2024-04-22 22:37:12
0
10,928
iain
78,368,877
4,666,912
Most efficient way to create a time varying dataframe
<p>I have the following dataframe:</p> <pre><code>from_year  to_year  id  gender
1990       1993     1   Female
1987       1992     2   Male
2000       2000     3   Male
2010       2011     4   Female
</code></pre> <p>I would like to produce the following time varying dataframe:</p> <pre><code>id  year  gender
1   1990  Female
1   1991  Female
1   1992  Female
1   1993  Female
2   1987  Male
2   1988  Male
2   1989  Male
2   1990  Male
2   1991  Male
2   1992  Male
3   2000  Male
4   2010  Female
4   2011  Female
</code></pre> <p>What's the most efficient way to convert the top dataframe to the bottom one using python pandas?</p>
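For reference, one way to expand each `from_year`/`to_year` interval into one row per year is to build a range per row and `explode` it. This is a sketch, not part of the original question; it rebuilds the sample dataframe shown above:

```python
import pandas as pd

# Rebuild the sample dataframe from the question
df = pd.DataFrame({
    "from_year": [1990, 1987, 2000, 2010],
    "to_year": [1993, 1992, 2000, 2011],
    "id": [1, 2, 3, 4],
    "gender": ["Female", "Male", "Male", "Female"],
})

# One list of years per row, then explode into one row per year
out = (
    df.assign(year=[list(range(a, b + 1))
                    for a, b in zip(df["from_year"], df["to_year"])])
      .explode("year")[["id", "year", "gender"]]
      .reset_index(drop=True)
)
print(out)
```

`explode` keeps the other columns aligned with each expanded year, which matches the target layout in the question.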
<python><pandas>
2024-04-22 21:07:40
2
2,343
BKS
78,368,435
10,145,953
PyMuPDF converted image into a numpy array?
<p>I have an existing function using <code>pdf2image</code> to convert each page of a PDF into images. For a variety of reasons, I am no longer able to use <code>pdf2image</code> and must now instead use <code>PyMuPDF</code>, however, I am having trouble yielding the same results as I did from <code>pdf2image</code>.</p> <p>The code for <code>pdf2image</code> and <code>PyMuPDF</code> are each below.</p> <p>Each item in <code>pages_list</code> for <code>pdf2image</code> is a <code>numpy.ndarray</code> and I can verify that the PDFs were properly converted by reviewing the resulting image of <code>Image.fromarray(pages_list[i])</code> using the <code>PIL</code> library. When I review this with the result of <code>pdf2image</code> I can see my original PDF as an image. When I review this with the result of <code>PyMuPDF</code> I see one long super skinny column of pixels that do not make a full image.</p> <p>ETA: I can use Pillow locally to review the images but this will eventually be going into an AWS lambda function and I am not allowed to use Pillow nor can I save files.</p> <p><code>pdf2image</code></p> <pre><code>pages = convert_from_path(img_path, 500)
pages_list = []

for i in range(len(pages)):
    pages_list.append(np.array(pages[i]))
</code></pre> <p><code>PyMuPDF</code></p> <pre><code>pdf_doc = fitz.open(img_path)
pages_list = []

for i in range(len(pdf_doc)):
    page = pdf_doc[i]
    pixmap = page.get_pixmap(dpi=300)
    img = pixmap.tobytes()
    img_array = np.frombuffer(bytearray(img), dtype=np.uint8)
    img_array_np = np.array(img_array)
    pages_list.append(img_array_np)
</code></pre> <p>While I did successfully convert the resulting bytes object into a numpy array, the array looks very different from the results of <code>pdf2image</code>. I was hoping to get an exact identical result from <code>PyMuPDF</code> as I did from <code>pdf2image</code> but not sure exactly where I'm going wrong. I imagine it's something in the way I'm converting from bytes to a numpy array, but I have yet to find a working fix.</p> <pre><code># Repeated for pdf2image and PyMuPDF
print(f&quot;{library_name}: \n{type(pages_list[0])}&quot;)
print(f&quot;.shape: {pages_list[0].shape}&quot;)
print(f&quot;.ndim: {pages_list[0].ndim}&quot;)
print(f&quot;.size: {pages_list[0].size}&quot;)

# pdf2image:
# &lt;class 'numpy.ndarray'&gt;
# .shape: (5500, 4250, 3)
# .ndim: 3
# .size: 70125000

# PyMuPDF:
# &lt;class 'numpy.ndarray'&gt;
# .shape: (378861,)
# .ndim: 1
# .size: 378861
</code></pre> <p>How can I get the same results from <code>PyMuPDF</code> as I did from <code>pdf2image</code>?</p>
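A note on the likely cause: `pixmap.tobytes()` returns an *encoded* image (PNG by default), not raw pixels, which is why the array is small and one-dimensional. PyMuPDF exposes the flat raw buffer as `pixmap.samples`, which can be reshaped using `pixmap.height`, `pixmap.width` and `pixmap.n`. A minimal sketch of that reshape with a fake buffer (this snippet cannot open a real PDF, so the dimensions below are made up):

```python
import numpy as np

# Hypothetical dimensions standing in for pixmap.height / .width / .n
h, w, n = 80, 60, 3

# Fake flat RGB buffer standing in for pixmap.samples
flat = bytes(h * w * n)

# Reshape into the (height, width, channels) layout pdf2image produces
img = np.frombuffer(flat, dtype=np.uint8).reshape(h, w, n)
print(img.shape, img.ndim, img.size)
```

With a real pixmap the same line would presumably read `np.frombuffer(pixmap.samples, dtype=np.uint8).reshape(pixmap.height, pixmap.width, pixmap.n)`.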
<python><arrays><numpy><pymupdf><pdf2image>
2024-04-22 19:13:20
1
883
carousallie
78,368,325
3,851,085
Celery name task execution and get task execution by name
<p>I have a Celery task called <code>my_task</code>. I create multiple executions of the task by calling <code>my_task.delay()</code> multiple times. I want to give a unique label/name to each execution, and to be able to get and stop a task execution for a given label/name. How can I do something like this?</p>
<python><celery><django-celery>
2024-04-22 18:50:02
1
1,110
Software Dev
78,368,287
6,824,949
2nd gen Cloud function response size too large
<p>I have a 2nd gen HTTP Python cloud function that queries a large Bigquery table and returns large amounts of data, like this:</p> <pre><code>from google.cloud import bigquery
import functions_framework

client = bigquery.Client()

@functions_framework.http
def qry_bq(request):
    request_json = request.get_json(silent=True)
    df = client.query(qry).to_dataframe()
    return df.to_json()
</code></pre> <p>Sometimes, when the returned results of the query are large, the response results in a 500 error with the error message:</p> <p><code>Response size was too large. Please consider reducing response size.</code></p> <p>I saw <a href="https://cloud.google.com/functions/quotas#resource_limits" rel="nofollow noreferrer">here</a> that the HTTP response size is limited to 32MB. Are there any suggestions to circumvent this issue for my use case?</p>
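One workaround worth sketching (an assumption, not an official recommendation): gzip the JSON payload and set `Content-Encoding` so HTTP clients decompress it transparently; for payloads that still exceed the limit, the usual pattern is to write the result to Cloud Storage and return a signed URL instead. The response-dict shape below is illustrative, not the real Cloud Functions return type:

```python
import gzip
import json

def compress_response(payload) -> dict:
    """Return a response-shaped dict with a gzip-compressed JSON body."""
    raw = json.dumps(payload).encode("utf-8")
    body = gzip.compress(raw)
    return {
        "status": 200,
        "headers": {
            "Content-Type": "application/json",
            "Content-Encoding": "gzip",  # client decompresses transparently
        },
        "body": body,
        "saved_bytes": len(raw) - len(body),
    }

# Repetitive tabular JSON compresses very well
resp = compress_response([{"row": i, "value": "x" * 50} for i in range(1000)])
print(resp["saved_bytes"], "bytes saved")
```

Compression only postpones the problem for very large result sets; streaming the query result to GCS avoids the response-size limit entirely.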
<python><google-cloud-functions>
2024-04-22 18:39:29
2
348
aaron02
78,368,208
2,710,855
Python - Raspberry Pi as BLE sender for sensor data
<p>I'm using my RaspberryPi as a device that tracks various sensor data. Now I want to create a Mobile App (Flutter) to read the data in real time using bluetooth low energy (BLE).</p> <p>I know that there are many tutorials out there, but all of them are more or less too complex. I need a really really simple python script that does the following:</p> <ol> <li>Wait for a device that wants to connect with the RPi</li> <li>Connect with the device</li> <li>Send every x seconds the current values to the connected device</li> </ol> <p>I tried out bluepy and did the following: I created a <code>Peripheral</code> object and called <code>peripheral.writeCharacteristic(0x0011, b&quot;Hello World&quot;, withResponse=True)</code> in an endless loop every 5 seconds. But what about the connection part?</p> <p>Also I have one more question: Where can I set the name of my Raspberry for the connection with other BLE devices? Because right now when I use a BLE Scanner mobile app, it finds a lot of devices, but most of them have the name &quot;N/A&quot; and my Raspberry is not showing up.</p>
<python><raspberry-pi><bluetooth><bluetooth-lowenergy>
2024-04-22 18:21:10
1
2,415
Mike_NotGuilty
78,368,186
23,260,297
Move data in dataframe based on a value of a different column
<p>I have a dataframe that looks like this:</p> <pre><code>Put/Call  StrikePrice  fixedprice  floatprice  fixedspread  floatspread
Put       10           0           20          0            0
Put       10           0           20          0            0
nan       0            0           0           13           15
nan       0            0           0           14           16
</code></pre> <p>If the put/call column has the value 'Put', I need to take the value from the strike price column and place it in the fixedspread column, and I need to take the value from the float price column and place it in the float spread column. Once the values are in the correct places I can get rid of the Put/Call column, strike price column, float price column and fixed price column.</p> <p>The output should look something like this:</p> <pre><code>fixedspread  floatspread
10           20
10           20
13           15
14           16
</code></pre>
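A boolean mask over the `Put/Call` column does this conditional move in two assignments. A sketch rebuilding the sample data above (column names taken from the question):

```python
import pandas as pd

df = pd.DataFrame({
    "Put/Call": ["Put", "Put", float("nan"), float("nan")],
    "StrikePrice": [10, 10, 0, 0],
    "fixedprice": [0, 0, 0, 0],
    "floatprice": [20, 20, 0, 0],
    "fixedspread": [0, 0, 13, 14],
    "floatspread": [0, 0, 15, 16],
})

# Rows where Put/Call == "Put" (NaN compares False, so it is skipped)
is_put = df["Put/Call"].eq("Put")

# Move StrikePrice -> fixedspread and floatprice -> floatspread on those rows
df.loc[is_put, "fixedspread"] = df.loc[is_put, "StrikePrice"]
df.loc[is_put, "floatspread"] = df.loc[is_put, "floatprice"]

# Keep only the two target columns
out = df[["fixedspread", "floatspread"]]
print(out)
```

The non-`Put` rows keep their original spread values untouched, which matches the expected output.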
<python><pandas>
2024-04-22 18:15:48
2
2,185
iBeMeltin
78,368,177
1,060,209
make TCPServer/HTTPServer not send the [FIN, PSH, ACK] packet
<p>Have a python web server listening on port <code>8080</code>, based on <code>socketserver.TCPServer</code>. But there are intermittent failures in the communication between the client and the server, esp. when the server sends the <code>[FIN, PSH, ACK]</code> packet to the client, when serving the data the client is downloading.</p> <p>Network traces have been captured for both the passed runs and the failed ones.</p> <p>In the passed runs, only <code>[FIN, ACK]</code> was sent to the client. <a href="https://i.sstatic.net/me5lq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/me5lq.png" alt="tcpdump for the passed run" /></a></p> <p>In the failed ones, <code>[FIN, PSH, ACK]</code> was sent. After that, the client would send <code>[RST]</code> back to the server and failed the process. <a href="https://i.sstatic.net/XjivI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XjivI.png" alt="tcpdump for the failed one" /></a></p> <p>Is there a way to tune the TCPServer/HTTPServer so that the <code>PSH</code> packet would not be sent together with the <code>[FIN, ACK]</code> one? I mean, make the server deliver all the data that are requested, and only send <code>[FIN, ACK]</code> after that.</p>
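Worth noting before reaching for server options: whether the last data segment carries `PSH` together with `FIN` is decided by the kernel's segment coalescing, not by Python, so it generally cannot be forced off from `TCPServer`. What can be controlled from Python is a clean half-close that delivers all data before the FIN. A loopback sketch of that pattern (illustrative only, with no guarantee it changes how the kernel coalesces flags):

```python
import socket
import threading

def serve_once(srv: socket.socket, payload: bytes) -> None:
    conn, _ = srv.accept()
    conn.sendall(payload)           # deliver all requested data first
    conn.shutdown(socket.SHUT_WR)   # then half-close: FIN goes out after the data
    conn.recv(1)                    # wait for the peer's FIN before closing
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
t = threading.Thread(target=serve_once, args=(srv, b"x" * 100_000))
t.start()

cli = socket.create_connection(srv.getsockname())
received = b""
while chunk := cli.recv(65536):     # read until EOF, i.e. the server's FIN
    received += chunk
cli.close()
t.join()
srv.close()
print(len(received), "bytes received before FIN")
```

If the client sends `RST` on seeing `FIN`+`PSH`, the client-side socket handling (reading to EOF before closing, as above) is usually the more fixable end.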
<python><httpserver><tcpserver>
2024-04-22 18:15:08
0
4,881
Qiang Xu
78,368,109
2,685,034
Can API Gateway with Lambda only response a JSON string type body?
<p>I am using API Gateway with Python lambda. I created the openapi.yml as below code:</p> <pre><code>responses:
  &quot;200&quot;:
    description: &quot;200 response&quot;
    content:
      application/json:
        schema:
          type: array
          items:
            type: object
            properties:
              table_id:
                type: integer
                format: int64
              table_name:
                type: string
</code></pre> <p>I am returning below code from python lambda:</p> <pre><code>response = [{
    &quot;table_id&quot;: 2323,
    &quot;table_name&quot;: &quot;test_table_1&quot;
},
{
    &quot;table_id&quot;: 1234,
    &quot;table_name&quot;: &quot;test_table_2&quot;
}]

return {
    &quot;statusCode&quot;: 200,
    &quot;headers&quot;: {'Content-Type': 'application/json'},
    &quot;body&quot;: response
};
</code></pre> <p>Now with the above code, I am getting 502 error- execution failed due to configuration error. Malformed Lambda proxy response. Which tells me that my response format is incorrect. However, when I check my lambda response and openapi.yml specs it looks fine.</p> <p>However when I do <code>&quot;body&quot;: json.dumps(response)</code> it works perfectly.</p> <p>Do we always need to stringify response objects when using lambda or I am missing something here?</p>
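Short answer: yes — with the Lambda *proxy* integration the `body` field must be a JSON string; API Gateway does not serialize Python objects for you, which is exactly why `json.dumps` fixes the 502. A minimal sketch of the expected response shape (handler name and event contents are illustrative):

```python
import json

def lambda_handler(event, context):
    payload = [
        {"table_id": 2323, "table_name": "test_table_1"},
        {"table_id": 1234, "table_name": "test_table_2"},
    ]
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        # body must be a JSON *string*, not a list/dict,
        # or API Gateway reports "Malformed Lambda proxy response"
        "body": json.dumps(payload),
    }

resp = lambda_handler({}, None)
print(type(resp["body"]).__name__)
```

The openapi.yml schema describes what the *client* receives after API Gateway forwards the body; it does not relax the proxy-integration contract between Lambda and API Gateway.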
<python><json><lambda>
2024-04-22 17:59:17
1
307
Ben P
78,368,062
1,411,376
Celery with redis doesn't seem to honor visibility_timeout
<p>We want stalled tasks to be picked up by a new worker so we're using <code>task_acks_late: True</code> along with <code>visibility_timeout</code>. Some of the tasks can take a pretty long time to run so we attempted to update the visibility_timeout.</p> <p>Celery settings are as follows:</p> <pre class="lang-py prettyprint-override"><code>&quot;broker_url&quot;: &quot;redis://app-redis:6379/0&quot;,
&quot;result_backend&quot;: &quot;redis://app-redis:6379/0&quot;,
&quot;worker_send_task_events&quot;: True,
&quot;task_serializer&quot;: &quot;pickle&quot;,
&quot;result_serializer&quot;: &quot;pickle&quot;,
&quot;accept_content&quot;: [&quot;pickle&quot;],
&quot;task_acks_late&quot;: True,
&quot;task_reject_on_worker_lost&quot;: True,
&quot;worker_prefetch_multiplier&quot;: 1,
&quot;broker_transport_options&quot;: {&quot;visibility_timeout&quot;: 3600},
&quot;result_backend_transport_options&quot;: {&quot;visibility_timeout&quot;: 3600},
&quot;visibility_timeout&quot;: 3600,
&quot;worker_max_memory_per_child&quot;: 200000,
&quot;redis_socket_keepalive&quot;: True,
&quot;task_routes&quot;: {&quot;apps.worker.batch_tasks.*&quot;: {&quot;queue&quot;: &quot;batch_tasks&quot;}}
</code></pre> <p>We're using Celery 5.3.6.</p> <p>Looking at the logs, even with <code>&quot;broker_transport_options&quot;: {&quot;visibility_timeout&quot;: 3600}, &quot;result_backend_transport_options&quot;: {&quot;visibility_timeout&quot;: 3600}, &quot;visibility_timeout&quot;: 3600,</code> tasks are still getting requeued every ~20 minutes. (The tasks aren't failing, they're just taking a long time. They'd probably complete in 30-35 minutes. They are being submitted to the default queue, not the batch_tasks queue.)</p> <p>Am I missing an addition setting? Any help would be appreciated, thanks.</p>
<python><python-3.x><redis><celery>
2024-04-22 17:47:22
0
795
Max
78,367,950
7,169,710
psycopg3 dynamic sql.Identifier with alias/label
<p>The problem I am trying to solve is getting <code>feature</code> data from JSON fields so that resulting column name is preserved. This translates in trying to understand whether there is a better and/or safer way to dynamically define column aliases using psycopg(3).</p> <p>I currently have the implemented the following solution:</p> <pre class="lang-py prettyprint-override"><code># imports
import psycopg
from psycopg import Connection, sql
from psycopg.rows import dict_row

# constants
project = &quot;project_1&quot;
location = &quot;location_1&quot;
data_table = &quot;table_1&quot;
features = [&quot;feature_1&quot;, &quot;feature_2&quot;]
start_dt = &quot;2024-04-22T16:00:00&quot;
end_dt = &quot;2024-04-22T17:00:00&quot;
__user = &quot;My&quot;
__password = &quot;I am supposed to be extra-complicated!&quot;
__host = &quot;localhost&quot;
__database = &quot;db&quot;
__port = 5432

# connection
connection = psycopg.connect(
    user=__user,
    password=__password,
    host=__host,
    dbname=__database,
    port=__port,
    row_factory=dict_row,
)
</code></pre> <pre class="lang-py prettyprint-override"><code># Adapted from https://github.com/psycopg/psycopg2/issues/791#issuecomment-429459212
def alias_identifier(
    ident: str | tuple[str], alias: str | None = None
) -&gt; sql.Composed:
    &quot;&quot;&quot;Return a SQL identifier with an optional alias.&quot;&quot;&quot;
    if isinstance(ident, str):
        ident = (ident,)
    if not alias:
        return sql.Identifier(*ident)
    # fmt: off
    return sql.Composed([sql.Literal(*ident), sql.SQL(&quot; AS &quot;), sql.Identifier(alias)])
    # fmt: on


# source query str
QUERY = &quot;&quot;&quot;SELECT current_database() AS project,
    timestamp,
    location,
    feature -&gt; {feature}
FROM {data_table}
WHERE lower(location) = {location}
    AND timestamp BETWEEN {start_dt} AND {end_dt}
&quot;&quot;&quot;

# SQL query
query = sql.SQL(QUERY).format(
    feature=sql.SQL(&quot;, feature -&gt; &quot;).join([alias_identifier(m, alias=m) for m in features]),
    data_table=sql.Identifier(data_table),
    location=sql.Literal(location),
    start_dt=sql.Literal(start_dt),
    end_dt=sql.Literal(end_dt),
)

print(query.as_string(connection))
</code></pre> <pre><code>SELECT current_database() AS project,
    timestamp,
    location,
    feature -&gt; 'feature_1' AS &quot;feature_1&quot;,
    feature -&gt; 'feature_2' AS &quot;feature_2&quot;
FROM &quot;table_1&quot;
WHERE lower(location) = 'location_1'
    AND timestamp BETWEEN '2024-04-22T16:00:00' AND '2024-04-22T17:00:00'
</code></pre> <p>The solution provides the expected results, although I am wondering whether it violates any of the psycopg guidelines and whether there is a better way to achieve what I want.</p>
<python><sql><postgresql><psycopg2><psycopg3>
2024-04-22 17:25:51
1
405
Pietro D'Antuono
78,367,868
10,704,952
Datahub actions getting failed for hello_world action, giving PipelineConfig validation error
<p>I'm trying to follow datahub-actions quick start <a href="https://datahubproject.io/docs/actions/" rel="nofollow noreferrer">https://datahubproject.io/docs/actions/</a> but when i do <code>datahub actions -c hello_world.py</code> i'm getting below error</p> <pre><code>[2024-04-22 22:33:35,602] INFO {datahub_actions.cli.actions:80} - DataHub Actions version: 0.0.15
Failed to instantiate Actions Pipeline using config hello_world: 5 validation errors for PipelineConfig
filter
  Field required [type=missing, input_value={'name': 'hello_world', '...{'type': 'hello_world'}}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.7/v/missing
transform
  Field required [type=missing, input_value={'name': 'hello_world', '...{'type': 'hello_world'}}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.7/v/missing
action.config
  Field required [type=missing, input_value={'type': 'hello_world'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.7/v/missing
datahub
  Field required [type=missing, input_value={'name': 'hello_world', '...{'type': 'hello_world'}}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.7/v/missing
options
  Field required [type=missing, input_value={'name': 'hello_world', '...{'type': 'hello_world'}}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.7/v/missing
</code></pre>
<python><datahub>
2024-04-22 17:08:58
1
673
Vijay Jangir
78,367,705
4,198,514
Tkinter Limits Frame vs Canvas
<p>Creating an ImageGallery I used a scrolled Frame for thumbnails.</p> <p>This scrolled Frame gets filled with ~300 labels that hold the images. The Frame however seems to be limited at around max(shortInt), the canvas seems not to be that limited.</p> <p>From what I gathered from similar questions is that there is a limit of +/-32767 coordinate points inside the canvas. It does however look like the canvas (orange area) is capable of more than the frame.</p> <p><a href="https://i.sstatic.net/xKQT4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xKQT4.png" alt="![UI View" /></a></p> <p>Code for the scrolled Frame:</p> <pre><code>from sys import hexversion

if hexversion &lt; 0x03000000:
    # pylint: disable=import-error
    # if we are in 0x02whaterver that works
    import Tkinter as tk
    import ttk
else:
    import tkinter as tk
    from tkinter import ttk


class ScrolledFrame(ttk.Frame, object):
    &quot;&quot;&quot;
    Scrolled Frame, can use an horizontal scrollbar or none.
    Scrolling via mousewheel is also supported.
    &quot;&quot;&quot;

    def __init__(self, parent, *args, **kwargs):
        # pylint: disable=unused-argument
        # *args and **kwargs are used implicitly
        scrollbars = kwargs.pop(&quot;scrollbars&quot;, [])
        if hexversion &lt; 0x03000000:
            super(ScrolledFrame, self).__init__(parent, **kwargs)
        else:
            super().__init__(parent, **kwargs)
        self.grid_rowconfigure(0, weight=1)
        self.grid_columnconfigure(0, weight=1)

        self.hsb = None
        if &quot;h&quot; in scrollbars:
            self.hsb = tk.Scrollbar(self, orient=&quot;horizontal&quot;)
            self.hsb.grid(row=1, column=0, sticky=tk.NW+tk.SE)

        # The Canvas which supports the Scrollbar Interface, layout to the left
        self.canvas = tk.Canvas(self, borderwidth=0, background=&quot;orange&quot;)
        self.canvas.grid(row=0, column=0, sticky=tk.NW+tk.SE)

        # Bind the Scrollbar to the self.canvas Scrollbar Interface
        if &quot;h&quot; in scrollbars:
            self.canvas.configure(xscrollcommand=self.hsb.set)
            self.hsb.configure(command=self.canvas.xview)

        # The Frame to be scrolled, layout into the canvas
        # All widgets to be scrolled have to use this Frame as parent
        self.scrolled_frame = ttk.Frame(self.canvas, style=&quot;Scrolled.TFrame&quot;)
        self.canvas.create_window((4, 4), window=self.scrolled_frame, anchor=&quot;nw&quot;, tags=&quot;self.scrolled_frame&quot;)
        if self.hsb:
            self.scrolled_frame.bind('&lt;Shift-MouseWheel&gt;', self.__on_hscroll)

        # Configures the scrollregion of the Canvas dynamically
        self.scrolled_frame.bind(&quot;&lt;Configure&gt;&quot;, self.on_configure)

        self.s = ttk.Style()
        self.s.configure(&quot;Scrolled.TFrame&quot;, background=&quot;red&quot;)
    #end __init__

    def __on_hscroll(self, event):
        #print(f&quot;{event} on {self}&quot;)
        self.canvas.xview_scroll(int(-1*(event.delta/120)), 'units')
    #end __on_hscroll

    def on_configure(self, event):
        &quot;&quot;&quot;Set the scroll region to encompass the scrolled frame&quot;&quot;&quot;
        self.canvas.configure(scrollregion=self.canvas.bbox(&quot;all&quot;))
        for child in self.scrolled_frame.winfo_children():
            # print(child)
            if self.hsb:
                child.bind('&lt;Shift-MouseWheel&gt;', self.__on_hscroll)
    #end on_configure
#end class ScrolledFrame


from PIL import Image, ImageTk

root = tk.Tk()
root.grid_rowconfigure(0, weight=1)
root.grid_columnconfigure(0, weight=1)
root.title(&quot;Test 34 - Show Frame Limits&quot;)

x = 0
images = []
while x &lt; limit:
    w = 60
    h = 80
    images.append(
        ImageTk.PhotoImage(Image.new('RGB', (w, h), color=(255, 0, 255)))
    )
    x += w

view = ScrolledFrame(root, scrollbars=[&quot;h&quot;])
view.grid(row=0, column=0, sticky=tk.NW+tk.SE)

for image in images:
    ttk.Label(view.scrolled_frame, image=image).grid(row=0, column=images.index(image), sticky=tk.NW+tk.SE)

root.mainloop()
</code></pre> <ol> <li>Is there a way to show more than the current limits of the frame?</li> <li>Do I understand it correctly that Frame and Canvas have different limits?</li> <li>Is there documentation for me to understand these limits?</li> </ol>
<python><tkinter>
2024-04-22 16:35:46
0
2,210
R4PH43L
78,367,686
4,462,831
Why Django Signals async post-save is locking other async ORM calls?
<p>I have a Django 5 application that use websockets using <code>channels</code>, and I'm trying to switch to <a href="https://channels.readthedocs.io/en/latest/topics/consumers.html#asyncjsonwebsocketconsumer" rel="nofollow noreferrer">AsyncConsumers</a> to take advantage of asynchronous execution for I/O or external API tasks.</p> <p>I already wrote a demo project and everything it's working fine, however in my application I use Django <code>signals</code>, and I have a long I/O task to be performed in the <a href="https://docs.djangoproject.com/en/5.0/ref/signals/#post-save" rel="nofollow noreferrer">post_save</a> of <code>MyModel</code>, previously implemented using threads:</p> <pre class="lang-py prettyprint-override"><code>from asyncio import sleep

@receiver(dj_models.signals.post_save, sender=MyModel)
async def auto_process_on_change(sender, instance, **kwargs):
    logger.log(&quot;Starting Long Save task&quot;);
    await sleep(30)
    logger.log(&quot;Starting Long Save task&quot;);
</code></pre> <p>The websocket consumer that is serving the request is similar to the following:</p> <pre class="lang-py prettyprint-override"><code>from channels.generic import websocket

class AsyncGenericConsumer(websocket.AsyncJsonWebsocketConsumer):
    async def connect(self):
        await self.accept()
        ...

    async def receive_json(self, message):
        ...
        user = await User.objects.aget(id=...)
        # Authentication code
        # ....
        # (eventually) Create MyModel, depending on request type
        mod = await my_models.MyModel.objects.acreate(...)
</code></pre> <p>Now, the problem is as follows:</p> <p>When <strong>Alice</strong>'s request is still performing a <code>post_save</code> operation (i.e., awaiting the long task contained in it) and a <strong>Bob</strong> user opens up the browser and makes a request to the same consumer, the computation gets stuck at <code>user = await User.objects.aget(id=...)</code> until the long task (e.g., the <code>asyncio.sleep(30)</code>) terminates for Alice.</p> <p>The actual application is much more complex than this and we can't get rid of the <code>post_save</code>. I would like to understand how the <code>.aget()</code> on a model could get locked by a <code>post_save</code> on another model, when they should be asynchronous calls and be processed as such. Are there ways to avoid this while still using asyncio code?</p> <p>I am using the latest versions of all the packages involved.</p> <p>Thanks for your support.</p>
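Independent of where the blocking originates in Django's signal dispatch, one generic asyncio pattern that keeps a long I/O receiver from being awaited inline is to schedule the work as a task and return immediately. This is a plain asyncio sketch, not Django-specific, and whether fire-and-forget is acceptable for your consistency requirements is a separate question:

```python
import asyncio
import time

async def long_io_task() -> None:
    """Stand-in for the 30-second post_save work."""
    await asyncio.sleep(0.2)

# Keep strong references so scheduled tasks are not garbage-collected
background: set[asyncio.Task] = set()

async def post_save_receiver() -> None:
    # Schedule instead of awaiting: the caller returns immediately
    task = asyncio.create_task(long_io_task())
    background.add(task)
    task.add_done_callback(background.discard)

async def main() -> float:
    start = time.perf_counter()
    await post_save_receiver()          # returns right away
    elapsed = time.perf_counter() - start
    await asyncio.gather(*background)   # let the background work finish
    return elapsed

elapsed = asyncio.run(main())
print(f"receiver returned after {elapsed:.4f}s")
```

In a real receiver the scheduled coroutine should handle its own exceptions, since nothing awaits it at the call site.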
<python><django><python-asyncio><django-channels><django-signals>
2024-04-22 16:32:13
0
355
EdoG
78,367,560
4,815,263
How to add multiple columns of a CSV file and put the summation in a new last column?
<p>How do I add the three numeric columns of the sample CSV below and write their sum into a new last column?</p> <p>Sample:-</p> <pre><code>Emdid,name,basic_sal,Allowance,Perk
11,Dave,1100,500,50
22,Gina,1000,600,50
33,Kyle,2000,300,100
</code></pre> <p>Output:-</p> <pre><code>Emdid,name,basic_sal,Allowance,Perk,salary
11,Dave,1100,500,50,1650
22,Gina,1000,600,50,1650
33,Kyle,2000,300,100,2400
</code></pre> <p>I know this can be solved with the awk command, but I don't see how to add the new column.</p>
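Since the question is tagged python, here is a stdlib-only sketch with `csv.DictReader`/`DictWriter` that sums the three numeric columns into a new `salary` column (the sample data is inlined instead of read from a file):

```python
import csv
import io

# Inlined stand-in for the input file
src = """Emdid,name,basic_sal,Allowance,Perk
11,Dave,1100,500,50
22,Gina,1000,600,50
33,Kyle,2000,300,100
"""

reader = csv.DictReader(io.StringIO(src))
rows = []
for row in reader:
    # Sum the three numeric columns into the new 'salary' column
    row["salary"] = int(row["basic_sal"]) + int(row["Allowance"]) + int(row["Perk"])
    rows.append(row)

# Write back out with the extra column appended to the header
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=reader.fieldnames + ["salary"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

With real files, the `io.StringIO` objects would be replaced by `open("in.csv")` and `open("out.csv", "w", newline="")`.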
<python><shell>
2024-04-22 16:08:39
1
651
satyaki
78,367,555
10,985,257
Mock Module Variable instead of a callable
<p>If I want to Mock a variable inside a module with the following code:</p> <pre class="lang-py prettyprint-override"><code>@pytest.fixture(autouse=True)
def mock_default(mocker, tmp_path):
    &quot;&quot;&quot;Mock Default Config File Path.&quot;&quot;&quot;
    mocker.patch(&quot;mypackage.mymodule.DEFAULT&quot;, return_value=tmp_path / &quot;myfile&quot;)
</code></pre> <p>it is returned always as a <code>MagicMock</code> object.</p> <p>How can I return it as a <code>Path</code> object (or any other type needed)?</p> <p>What I have noticed while debugging: none of the Path-specific methods throw an error. But, for example, <code>DEFAULT.touch(exsist_ok=True)</code> does not create the file as expected.</p>
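For context: `return_value` only matters when the patched object is *called*; patching a plain module variable means the replacement itself should be the desired object, passed via `new=`. A stdlib-only sketch with `unittest.mock` (the `mymodule` here is a simulated stand-in for `mypackage.mymodule`):

```python
import pathlib
import types
from unittest import mock

# Simulated stand-in for mypackage.mymodule with a module-level DEFAULT
mymodule = types.SimpleNamespace(DEFAULT=pathlib.Path("/etc/app/config"))

with mock.patch.object(mymodule, "DEFAULT", pathlib.Path("/tmp/myfile")):
    # Inside the patch, DEFAULT is a real Path, not a MagicMock
    patched = mymodule.DEFAULT
    print(type(patched).__name__, patched)

# Outside the with-block the original value is restored
print(mymodule.DEFAULT)
```

With pytest-mock the equivalent would presumably be `mocker.patch("mypackage.mymodule.DEFAULT", new=tmp_path / "myfile")`, since `mocker.patch` mirrors `unittest.mock.patch` and `new` replaces the attribute with the given object instead of a `MagicMock`.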
<python><mocking><pytest>
2024-04-22 16:07:48
1
1,066
MaKaNu
78,367,252
8,035,710
Submitting Python Request to ASP.Net Form Service - No Data Returned
<p>I'm trying to automate data retrieval from this website - <a href="https://renewablesandchp.ofgem.gov.uk/Public/ReportViewer.aspx?ReportPath=/DatawarehouseReports/CertificatesExternalPublicDataWarehouse&amp;ReportVisibility=1&amp;ReportCategory=2" rel="nofollow noreferrer">https://renewablesandchp.ofgem.gov.uk/Public/ReportViewer.aspx</a></p> <p>I'm copying the headers and form data from the flow I see in the Networks tab when submitting the form manually (n.b. the page size is not set by default). I also grab the ASP.Net cookie that is returned in the <code>get_session_cookies</code> query.</p> <p>I then use the following code to try and query the service which returns a HTML snippet that is used to update the page (the data I want is in this snippet).</p> <pre class="lang-py prettyprint-override"><code>import requests from bs4 import BeautifulSoup as bs def get_report_viewer_soup(session: requests.Session) -&gt; bs: &quot;&quot;&quot;Also grabs the ASP.Net session cookie&quot;&quot;&quot; url = 'https://renewablesandchp.ofgem.gov.uk/Public/ReportViewer.aspx' params = { 'ReportPath': '/DatawarehouseReports/CertificatesExternalPublicDataWarehouse', 'ReportVisibility': 1, 'ReportCategory': 2 } r = session.get(url, params=params) r.raise_for_status() return bs(r.text, 'html.parser') def get_table(session: requests.Session, soup: bs): url = 'https://renewablesandchp.ofgem.gov.uk/Public/ReportViewer.aspx' params = { 'ReportPath': '/DatawarehouseReports/CertificatesExternalPublicDataWarehouse', 'ReportVisibility': 1, 'ReportCategory': 2 } headers = { 'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 'en-US,en;q=0.9', 'Cache-Control': 'no-cache', 'Connection': 'keep-alive', 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8', 'Host': 'renewablesandchp.ofgem.gov.uk', 'Origin': 'https://renewablesandchp.ofgem.gov.uk', 'Referer': 
'https://renewablesandchp.ofgem.gov.uk/Public/ReportViewer.aspx?ReportPath=/DatawarehouseReports/CertificatesExternalPublicDataWarehouse&amp;ReportVisibility=1&amp;ReportCategory=2', 'Sec-Ch-Ua': '&quot;Chromium&quot;;v=&quot;116&quot;, &quot;Not)A;Brand&quot;;v=&quot;24&quot;, &quot;Google Chrome&quot;;v=&quot;116&quot;', 'Sec-Ch-Ua-Mobile': '?0', 'Sec-Ch-Ua-Platform': '&quot;macOS&quot;', 'Sec-Fetch-Dest': 'empty', 'Sec-Fetch-Mode': 'cors', 'Sec-Fetch-Site': 'same-origin', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36', 'X-Microsoftajax': 'Delta=true', 'X-Requested-With': 'XMLHttpRequest' } data = { &quot;ReportViewer$ctl04$ctl03$txtValue&quot;: &quot;REGO, RO&quot;, &quot;ReportViewer$ctl04$ctl03$divDropDown$ctl01$HiddenIndices&quot;: &quot;0,1&quot;, &quot;ReportViewer$ctl04$ctl05$txtValue&quot;: &quot;Aerothermal, Biodegradable, Biogas, Biomass, Biomass 50kW DNC or less, Biomass using an Advanced Conversion Technology, CHP Energy from Waste, Co-firing of Biomass with Fossil Fuel, Co-firing of Energy Crops, Filled Storage Hydro, Filled Storage System, Fuelled, Geopressure, Geothermal, Hydro, Hydro 20MW DNC or less, Hydro 50kW DNC or less, Hydro greater than 20MW DNC, Hydrothermal, Landfill Gas, Micro Hydro, Ocean Energy, Off-shore Wind, On-shore Wind, Photovoltaic, Photovoltaic 50kW DNC or less, Sewage Gas, Solar and On-shore Wind, Tidal Flow, Tidal Power, Waste using an Advanced Conversion Technology, Wave Power, Wind, Wind 50kW DNC or less&quot;, &quot;ReportViewer$ctl04$ctl05$divDropDown$ctl01$HiddenIndices&quot;: &quot;0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33&quot;, &quot;ReportViewer$ctl04$ctl07$txtValue&quot;: &quot;N/A, NIRO, RO, ROS&quot;, &quot;ReportViewer$ctl04$ctl07$divDropDown$ctl01$HiddenIndices&quot;: &quot;0,1,2,3&quot;, &quot;ReportViewer$ctl04$ctl09$txtValue&quot;: &quot;AD, Advanced gasification, Biomass 
(e.g. Plant or animal matter), Biomass using an Advanced Conversion Technology, Co-firing of biomass, Co-firing of biomass with fossil fuel, Co-firing of energy crops, Co-firing of regular bioliquid, Dedicated biomass, Dedicated biomass - BL, Dedicated biomass with CHP, Dedicated biomass with CHP - BL, Dedicated energy crops, Dedicated energy crops with CHP, Electricity generated from landfill gas, Electricity generated from sewage gas, Energy from waste with CHP, High-range co-firing, Low range co-firing of relevant energy crop, Low-range co-firing, Mid-range co-firing, N/A, Standard gasification, Station conversion, Station conversion - BL, Unit conversion, Unspecified, Waste using an Advanced Conversion Technology&quot;, &quot;ReportViewer$ctl04$ctl09$divDropDown$ctl01$HiddenIndices&quot;: &quot;0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27&quot;, &quot;ReportViewer$ctl04$ctl11$txtValue&quot;: &quot;England, Northern Ireland, Scotland, Wales&quot;, &quot;ReportViewer$ctl04$ctl11$divDropDown$ctl01$HiddenIndices&quot;: &quot;2,4,5,7&quot;, &quot;ReportViewer$ctl04$ctl17$txtValue&quot;: &quot;&lt;ALL&gt;&quot;, &quot;ReportViewer$ctl04$ctl17$divDropDown$ctl01$HiddenIndices&quot;: &quot;0&quot;, &quot;ReportViewer$ctl04$ctl27$txtValue&quot;: &quot;General, NFFO, AMO&quot;, &quot;ReportViewer$ctl04$ctl27$divDropDown$ctl01$HiddenIndices&quot;: &quot;0,1,2&quot;, &quot;ReportViewer$ctl04$ctl31$txtValue&quot;: &quot;Issued, Revoked, Retired, Redeemed, Expired&quot;, &quot;ReportViewer$ctl04$ctl31$divDropDown$ctl01$HiddenIndices&quot;: &quot;0,1,2,3,4&quot;, &quot;ReportViewer$ctl04$ctl37$txtValue&quot;: &quot;&lt;ALL&gt;&quot;, &quot;ReportViewer$ctl04$ctl37$divDropDown$ctl01$HiddenIndices&quot;: &quot;0&quot;, &quot;ReportViewer$ctl04$ctl13$ddValue&quot;: &quot;1&quot;, &quot;ReportViewer$ctl04$ctl19$ddValue&quot;: &quot;4&quot;, &quot;ReportViewer$ctl04$ctl21$ddValue&quot;: &quot;3&quot;, &quot;ReportViewer$ctl04$ctl23$ddValue&quot;: 
&quot;3&quot;, &quot;ReportViewer$ctl04$ctl25$ddValue&quot;: &quot;3&quot;, &quot;ReportViewer$ctl04$ctl39$ddValue&quot;: &quot;1&quot;, &quot;ReportViewer$ctl04$ctl35$ReportViewer_ctl04_ctl35&quot;: &quot;rbTrue&quot;, &quot;ReportViewer$ctl04$ctl15$ReportViewer_ctl04_ctl15&quot;: &quot;rbTrue&quot;, &quot;ReportViewer$ctl04$ctl29$txtValue&quot;: &quot;&quot;, &quot;ReportViewer$ctl04$ctl29$cbNull&quot;: &quot;on&quot;, &quot;ReportViewer$ctl04$ctl33$txtValue&quot;: &quot;&quot;, &quot;ReportViewer$ctl04$ctl33$cbNull&quot;: &quot;on&quot;, &quot;__VIEWSTATEGENERATOR&quot;: &quot;75CF6949&quot;, &quot;hdnCookieConsent&quot;: &quot;&quot;, &quot;hdnCookieAcceptanceRefreshDate&quot;: &quot;&quot;, &quot;hdnCookieAcceptanceRefreshDay&quot;: &quot;&quot;, &quot;hdnCookieAcceptanceRefreshMonth&quot;: &quot;&quot;, &quot;ReportViewer$ctl03$ctl00&quot;: &quot;&quot;, &quot;ReportViewer$ctl03$ctl01&quot;: &quot;&quot;, &quot;ReportViewer$ctl10&quot;: &quot;ltr&quot;, &quot;ReportViewer$ctl11&quot;: &quot;standards&quot;, &quot;ReportViewer$AsyncWait$HiddenCancelField&quot;: &quot;False&quot;, &quot;ReportViewer$ToggleParam$store&quot;: &quot;&quot;, &quot;ReportViewer$ToggleParam$collapse&quot;: &quot;false&quot;, &quot;ReportViewer$ctl08$ClientClickedId&quot;: &quot;&quot;, &quot;ReportViewer$ctl07$store&quot;: &quot;&quot;, &quot;ReportViewer$ctl07$collapse&quot;: &quot;false&quot;, &quot;ReportViewer$ctl09$ScrollPosition&quot;: &quot;&quot;, &quot;ReportViewer$ctl09$ReportControl$ctl04&quot;: &quot;100&quot;, &quot;__ASYNCPOST&quot;: &quot;true&quot;, &quot;ReportViewer$ctl04$ctl00&quot;: &quot;View Report&quot;, &quot;ScriptManager1&quot;: &quot;ScriptManager1|ReportViewer$ctl04$ctl00&quot;, &quot;ReportViewer$ctl05$ctl00$CurrentPage&quot;: &quot;1&quot;, &quot;ReportViewer$ctl09$VisibilityState$ctl00&quot;: &quot;ReportPage&quot; } view_state_and_event_validation_inputs = { input_elem['id']: input_elem['value'] for input_elem in soup.find_all('input', type='hidden') 
if 'id' in input_elem.attrs and input_elem['id'] in ['__VIEWSTATE', '__EVENTVALIDATION'] } data.update(view_state_and_event_validation_inputs) r = session.post(url, params=params, headers=headers, data=data) r.raise_for_status() return r with requests.Session() as session: soup = get_report_viewer_soup(session) r = get_table(session, soup) r.text </code></pre> <p>However, this returns Validation Errors for each of the elements in the form, e.g.</p> <blockquote> <p>&quot;NullValueText&quot;:&quot;Null&quot;,&quot;PostBackOnChange&quot;:true,&quot;RelativeDivId&quot;:null,&quot;TextBoxDisabledClass&quot;:null,&quot;TextBoxDisabledColor&quot;:&quot;#ECE9D8&quot;,&quot;TextBoxEnabledClass&quot;:null,&quot;TextBoxId&quot;:&quot;ReportViewer_ctl04_ctl03_txtValue&quot;,&quot;TriggerPostBackScript&quot;:function(){__doPostBack('ReportViewer$ctl04$ctl03','');},&quot;ValidationMessage&quot;:&quot;Please enter a value for the parameter \u0027Scheme:\u0027. The parameter cannot be blank.&quot;,&quot;ValidatorIdList&quot;:[]}, null, null, $get(&quot;ReportViewer_ctl04_ctl03&quot;)); });</p> </blockquote> <p>The start of the response also includes <code>The Report Viewer Web Control HTTP Handler has not been registered in the application's web.config file.</code> which I don't see when inspecting the page manually. <a href="https://stackoverflow.com/a/37963905/8035710">This SO answer</a> seems to indicate that the issue is related to sending null values incorrectly.</p> <p>What additional configuration is required to query this service successfully? Any help would be much appreciated.</p>
<python><asp.net><forms><web-scraping><python-requests>
2024-04-22 15:15:03
0
327
Ayrton Bourn
78,367,230
793,961
plotly: change background color for areas of polar chart
<p>There has been a <a href="https://stackoverflow.com/questions/60281382/plotly-how-to-assign-background-colors-to-different-ranges-in-a-radar-or-polar">question</a> about how to colorize the <em>rings</em> of a plotly polar chart. I would like to add different background colors depending on the categories.</p> <p>Consider a skill rating where I would like to highlight the <em>kind</em> of skill in the overall overview:</p> <pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go categories = [ &quot;[lang] English&quot;, &quot;[lang] Spanish&quot;, &quot;[lang] German&quot;, &quot;[dev] JS&quot;, &quot;[dev] Python&quot;, &quot;[dev] Go&quot;, &quot;[dev] C#&quot;, &quot;[skill] leadership&quot;, &quot;[skill] creativity&quot; ] # remove prefix, this should be reflected by the background color categories = [c.split(&quot;] &quot;)[1] for c in categories] ratings = { &quot;Alice&quot;: [4,5,0,3,4,5,2,3,5], &quot;Bob&quot;: [3,1,4,2,3,1,1,3,2], } fig = go.Figure() for name in ratings.keys(): fig.add_trace(go.Scatterpolar( r=ratings[name] + [ratings[name][0]], theta=categories + [categories[0]], name=name, )) fig.show() </code></pre> <p>Is it possible to highlight the <em>kind</em> of the categories by changing the background color so it looks similar to this (faked) example?</p> <p><a href="https://i.sstatic.net/paeZ8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/paeZ8.png" alt="example image" /></a></p> <p>I searched the net and looked for different options, but all I found was the other question I linked.</p>
<python><plotly>
2024-04-22 15:11:19
1
7,450
muffel
78,367,206
4,105,440
Simple Horizontal layout of graph nodes in networks diagram
<p>I have a dataframe containing flight connection data in a network representation</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>edge</th> <th>source</th> <th>target</th> <th>type</th> </tr> </thead> <tbody> <tr> <td>ZRH_ZRH_-1_base_ZRH_PMI_470_flight_idle0_start</td> <td>ZRH source</td> <td>ZRH-PMI 470</td> <td>start</td> </tr> <tr> <td>ZRH_PMI_470_flight_PMI_ZRH_655_flight_idle70_sit</td> <td>ZRH-PMI 470</td> <td>PMI-ZRH 655</td> <td>sit</td> </tr> <tr> <td>PMI_ZRH_655_flight_ZRH_ZRH_-1_base_idle0_end</td> <td>PMI-ZRH 655</td> <td>ZRH sink</td> <td>end</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> </tbody> </table></div> <p><code>source</code> or <code>target</code> are either a flight (e.g. <code>ZRH-PMI 470</code>) or a base (<code>ZRH</code>). I'm trying to create the following diagram (the gray regions represent additional flight connections that are not present in the sample dataframe posted before, just to give an idea on how the final diagram should be)</p> <p><a href="https://i.sstatic.net/EQwTp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EQwTp.png" alt="enter image description here" /></a></p> <p>I'm creating the network from the dataframe using</p> <pre class="lang-py prettyprint-override"><code>import networkx as nx G = nx.from_pandas_edgelist( test, &quot;source&quot;, &quot;target&quot;, True, create_using=nx.DiGraph() ) </code></pre> <p>but the only way I can get kind of close to what I want is the <code>kamada_kawai_layout</code>.</p> <p><a href="https://i.sstatic.net/cUvix.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cUvix.png" alt="enter image description here" /></a></p> <p>which looks weird because I don't think is supposed to be used like this.</p> <p>Is there any way to create the layout without having to manually define the positions for every node? 
In the end I only want to organize the connections between child and parent on horizontal lines...nothing fancy. I tried to look into the <code>multipartite_layout</code> but I don't really have node attributes indicating the layers.</p>
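<code>multipartite_layout</code> can still work here: the missing layer attribute can be derived from the graph itself, for instance as the longest-path depth computed in topological order. A sketch using only the three sample rows (node names as in the dataframe above):

```python
import networkx as nx

# Build the sample graph from the dataframe rows shown above.
edges = [
    ("ZRH source", "ZRH-PMI 470"),
    ("ZRH-PMI 470", "PMI-ZRH 655"),
    ("PMI-ZRH 655", "ZRH sink"),
]
G = nx.DiGraph(edges)

# Derive the missing "layer" attribute: each node sits one column to
# the right of its deepest predecessor; roots land in layer 0.
for node in nx.topological_sort(G):
    preds = G.predecessors(node)
    G.nodes[node]["layer"] = 1 + max(
        (G.nodes[p]["layer"] for p in preds), default=-1
    )

pos = nx.multipartite_layout(G, subset_key="layer")
```

<code>nx.draw(G, pos, with_labels=True)</code> then places the layers as columns left to right, which matches the child-to-the-right-of-parent layout in the target picture.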
<python><networkx><graph-theory><graphviz>
2024-04-22 15:06:12
1
673
Droid
78,367,100
880,783
Does this code adding -256 to an int8 fail in numpy 2 due to NEP 50?
<p>The following code works in numpy 1.26.4, but not in numpy 2.0.0rc1, giving</p> <blockquote> <p>OverflowError: Python integer -256 out of bounds for int8</p> </blockquote> <pre class="lang-py prettyprint-override"><code>import numpy as np np.array([1], dtype=np.int8) + (-256) </code></pre> <p>Is that expected? Is it due to <a href="https://github.com/numpy/numpy/pull/23912" rel="nofollow noreferrer">NEP 50 adoption</a>? If so, which part of NEP 50 mandates this new behavior?</p>
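As I read it, this is indeed NEP 50 behavior: under the new rules a Python int operand is "weak" — it is converted to the other operand's dtype instead of participating in value-based promotion, and a value that cannot be represented in that dtype is an error rather than a silent wrap. A sketch of a workaround (giving the scalar an explicit NumPy dtype opts back into ordinary dtype promotion):

```python
import numpy as np

a = np.array([1], dtype=np.int8)

# NumPy 2 / NEP 50: the Python int -256 is converted to the array's
# dtype (int8), where it cannot be represented -> OverflowError.
# An explicitly typed scalar/0-d array promotes the result instead:
result = a + np.asarray(-256, dtype=np.int64)
print(result)  # [-255], in a wider integer dtype
```

The same expression also runs under NumPy 1.x, where value-based casting picks a (possibly different) wider dtype; only the bare-Python-int form changed meaning.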
<python><numpy>
2024-04-22 14:50:07
1
6,279
bers
78,367,014
5,985,921
Quantiles of a series with polars as dataframe
<p>Say I have a dataframe in polars with a column <code>outcome</code> that is some float.</p> <pre class="lang-py prettyprint-override"><code>df = pl.from_repr(&quot;&quot;&quot; ┌─────┬──────────┐ │ a ┆ outcome │ │ --- ┆ --- │ │ i64 ┆ f64 │ ╞═════╪══════════╡ │ 2 ┆ 0.17745 │ │ 2 ┆ 0.712477 │ │ 2 ┆ 0.038308 │ │ 1 ┆ 0.886266 │ │ 2 ┆ 0.578249 │ │ 1 ┆ 0.80318 │ └─────┴──────────┘ &quot;&quot;&quot;) </code></pre> <p>How can I get the quantiles of that outcome as a dataframe, i.e. I would like to get something like:</p> <pre><code>| quantile | value | |----------|--------------| | 0.1 | &lt;some value&gt; | | 0.2 | &lt;some value&gt; | | 0.3 | &lt;some value&gt; | | ... | | | ... | | </code></pre> <p>Notice I would be most interested in a solution without a group by (i.e. across the whole dataframe) but also with a group by where the group is identified by some additional variable, in the example below <code>a</code> would also be of interest.</p>
<python><aggregation><python-polars><quantile>
2024-04-22 14:37:32
2
1,651
clog14
78,366,926
1,231,450
Calculation of point of control in Pandas
<p>Suppose, we have yet another dataframe with finance data:</p> <pre><code>timestamp,close,security_code,volume,bid_volume,ask_volume 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.383985+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.384040+00:00,4968.5,ES,1,1,0 2024-02-05 01:00:01.385840+00:00,4968.5,ES,2,0,2 2024-02-05 01:00:01.385840+00:00,4968.5,ES,1,0,1 2024-02-05 01:00:01.385840+00:00,4968.5,ES,1,0,1 2024-02-05 01:00:01.385840+00:00,4968.5,ES,1,0,1 2024-02-05 01:00:01.385840+00:00,4968.5,ES,2,0,2 </code></pre> <p>One could calculate the POC (the point of control, where the most contracts have been traded) like so</p> <pre><code>def poc(self, df): &quot;&quot;&quot; Calculate the POC. &quot;&quot;&quot; return df['close'].expanding().agg({'poc': lambda s:mode(s)[0]})['poc'] </code></pre> <p>However, this does not take into account the volume. If there have been traded more volume on that specific level, this should be taken into account. How to change the <code>mode</code> / lambda accordingly?</p>
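One way to make the POC volume-aware is to drop <code>mode()</code> entirely and keep a running tally of traded volume per price level, taking the argmax at each row. This is a plain-Python expanding pass rather than a pandas built-in — a sketch using the <code>close</code>/<code>volume</code> columns from the CSV above:

```python
import pandas as pd

def poc_with_volume(df):
    """Expanding point of control: for each row, the price level with
    the highest cumulative traded volume seen so far."""
    vol_at_price = {}
    pocs = []
    for price, vol in zip(df["close"], df["volume"]):
        vol_at_price[price] = vol_at_price.get(price, 0) + vol
        pocs.append(max(vol_at_price, key=vol_at_price.get))
    return pd.Series(pocs, index=df.index, name="poc")

df = pd.DataFrame({
    "close":  [4968.5, 4968.75, 4968.75, 4968.5],
    "volume": [1, 3, 1, 2],
})
print(poc_with_volume(df).tolist())  # [4968.5, 4968.75, 4968.75, 4968.75]
```

Unlike a mode over prices, the second row immediately flips the POC to 4968.75 because its single trade carries more volume than the first level accumulated.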
<python><pandas>
2024-04-22 14:22:22
1
43,253
Jan
78,366,610
6,484,726
Using ListSerializer inside "to_representation" of Serializer class
<p>I want to create multiple objects based on data received in nested json array. In order to do so I created a <code>serializers.Serializer</code> and overriden <code>to_representation</code> method to use <code>ListSerializer</code> for serialization of newly created objects.</p> <p>Simplified code look like so (for ease of reproducing by simply running python with DRF installed)</p> <pre class="lang-py prettyprint-override"><code>import json from rest_framework import serializers class Link: def __init__(self, title: str, description: str, url: str): self.title = title self.description = description self.url = url class LinkResponseSerializer(serializers.Serializer): title = serializers.CharField(max_length=100) description = serializers.CharField(required=False) url = serializers.CharField() class URLSerializer(serializers.Serializer): url = serializers.CharField() class LinkSerializer(serializers.Serializer): title = serializers.CharField(max_length=100) description = serializers.CharField() urls = URLSerializer(many=True) def create(self, validated_data): links = [] for obj in validated_data.pop(&quot;urls&quot;, []): links.append(Link(url=obj[&quot;url&quot;], **validated_data)) return links def to_representation(self, instance): return LinkResponseSerializer(instance, many=True).data data = { &quot;title&quot;: &quot;Example&quot;, &quot;description&quot;: &quot;Exampe Description&quot;, &quot;urls&quot;: [ {&quot;url&quot;: &quot;https://google.com&quot;}, {&quot;url&quot;: &quot;https://yahoo.com&quot;}, ] } serializer = LinkSerializer(data=data) serializer.is_valid(raise_exception=True) serializer.save() print(json.dumps(serializer.data, indent=4)) </code></pre> <p>Running this code will result in an error:</p> <pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last): File &quot;/tmp/.../test.py&quot;, line 54, in &lt;module&gt; print(json.dumps(serializer.data, indent=4)) ^^^^^^^^^^^^^^^ File 
&quot;/tmp/.../venv/lib64/python3.12/site-packages/rest_framework/serializers.py&quot;, line 556, in data return ReturnDict(ret, serializer=self) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/.../venv/lib64/python3.12/site-packages/rest_framework/utils/serializer_helpers.py&quot;, line 19, in __init__ super().__init__(*args, **kwargs) ValueError: too many values to unpack (expected 2) </code></pre> <p>I understand that issue comes from the <code>.data</code> property of a <code>serializers.Serializer</code> which tries to use <code>ReturnDict</code> as a return value which initialized with a list as an argument.</p> <p>I can workaround this issue by overriding <code>.data</code> property like so:</p> <pre class="lang-py prettyprint-override"><code>from rest_framework.utils.serializer_helpers import ReturnList class LinkSerializer(serializers.Serializer): title = serializers.CharField(max_length=100) description = serializers.CharField() urls = URLSerializer(many=True) def create(self, validated_data): links = [] for obj in validated_data.pop(&quot;urls&quot;, []): links.append(Link(url=obj[&quot;url&quot;], **validated_data)) return links def to_representation(self, instance): return LinkResponseSerializer(instance, many=True).data @property def data(self): self._data = self.to_representation(self.instance) return ReturnList(self._data, serializer=self) </code></pre> <p>Which will produce the desired output:</p> <pre class="lang-json prettyprint-override"><code>[ { &quot;title&quot;: &quot;Example&quot;, &quot;description&quot;: &quot;Exampe Description&quot;, &quot;url&quot;: &quot;https://google.com&quot; }, { &quot;title&quot;: &quot;Example&quot;, &quot;description&quot;: &quot;Exampe Description&quot;, &quot;url&quot;: &quot;https://yahoo.com&quot; } ] </code></pre> <p>But DRF Browserable API is no longer going to work if we use generic views, as it expects ReturnDict as a return type for <code>.data</code> property of a 
<code>serializers.Serializer</code>!</p> <p>What is the right way to handle the issue? Use case doesn't seem odd to me and it feels like there has to be an easier way.</p>
<python><django-rest-framework>
2024-04-22 13:29:35
1
398
hardhypochondria
78,366,521
3,305,534
How to add packages to Graalvm python polyglot?
<p>I followed the StackOverflow question <a href="https://stackoverflow.com/questions/77315830/how-to-install-graalvm-with-python">Here</a> and the subsequent link <a href="https://medium.com/graalvm/truffle-unchained-13887b77b62c" rel="nofollow noreferrer">https://medium.com/graalvm/truffle-unchained-13887b77b62c</a> in order to setup a working example of running Python from a Spring Boot App.</p> <p>While the example works good, I am not sure how I can add packages if my python program requires any.</p> <p>Here's an excerpt from my <code>pom.xml</code> where I have the polyglot dependencies:</p> <pre><code>&lt;dependency&gt; &lt;groupId&gt;org.graalvm.polyglot&lt;/groupId&gt; &lt;artifactId&gt;python&lt;/artifactId&gt; &lt;version&gt;23.1.0&lt;/version&gt; &lt;type&gt;pom&lt;/type&gt; &lt;/dependency&gt; </code></pre> <p>I have a simple Python code in <code>posts.py</code> which does this:</p> <pre><code>def fetch_posts(): url = 'https://jsonplaceholder.typicode.com/posts' try: with request.urlopen(url) as response: if response.status == 200: source = response.read() posts = json.loads(source) return posts else: print(&quot;Failed to fetch posts due to HTTP error.&quot;) except Exception as e: print(f&quot;An error occurred: {e}&quot;) return None </code></pre> <p>Given that I've put this file in <code>src/main/resources/python/posts.py</code>, I'm successfully able to call this from my Java code:</p> <pre><code>@Test void testPythonScript2() throws IOException { ClassPathResource resource = new ClassPathResource(&quot;python/posts.py&quot;); try (Context context = Context.newBuilder().allowAllAccess(true).build()) { Source source = Source.newBuilder(&quot;python&quot;, resource.getFile()).build(); context.eval(source); Value pyObject = context.getBindings(&quot;python&quot;).getMember(&quot;fetch_posts&quot;); Value result = pyObject.execute(); assertTrue(result.hasArrayElements()); assertEquals(100, result.getArraySize()); } catch (IOException e) { 
log.error(e.getMessage(),e); assertFalse(true); } catch (Exception e) { log.error(e.getMessage(),e); assertFalse(true); } } </code></pre> <p>This test passes.</p> <p>But, what do I do, if <code>posts.py</code> needs the <code>numpy</code> package? How do I install it?</p> <p>I'm using <code>graalvm-jdk-21.0.3+7.1</code> as the underlying JDK.</p> <p>Thanks, Sriram</p>
<python><java><graalvm><graalpython>
2024-04-22 13:16:41
1
740
Sriram Sridharan
78,366,381
5,997,555
Checking if integers in list are consecutive - including cyclic iteration
<p>I have a list of integers from 1 to <code>n</code>.</p> <p>How can I check whether a subset of this list contains only consecutive numbers, accounting for a cyclic iteration (i.e. after <code>n</code> the iteration continues with <code>1</code>)?</p> <p>Assuming my function is called <code>foo</code> and <code>n=36</code>, this is what I need:</p> <pre class="lang-py prettyprint-override"><code># subsets could be bigger, using 3 elements as example foo([1, 2, 3]) # True foo([8, 9, 10]) # True foo([35, 36, 1]) # True foo([36, 1, 2]) # True foo([1, 3, 4]) # False foo([15, 17, 20]) # False foo([3, 2, 1]) # False </code></pre> <p>My current approach is to create a string template with two cycles, and then check whether the string representation of the subset is contained in the template.</p> <p>I'm sure there's a better way.</p> <h5>Edit:</h5> <p>I'm looking to check for <strong>unique</strong>, <strong>all consecutive</strong> subsets.</p> <p><code>[1,2,3,1,2,3]</code> should also be <code>False</code> (since after <code>3</code> I'd expect <code>4</code>).</p> <p><code>[1,3,4]</code> should be <code>False</code> since after <code>1</code> I'd expect <code>2</code>.</p> <h5>Edit 2:</h5> <p>The integers of the list <code>1-n</code> are sequential.</p>
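A straightforward check that matches every example above, including the edits (duplicates and non-successor steps are rejected) — the wrap from <code>n</code> back to <code>1</code> falls out of the <code>a % n + 1</code> successor rule:

```python
def foo(subset, n=36):
    """True if `subset` is an in-order run of strictly consecutive
    values of 1..n, allowing a single wrap from n back to 1."""
    if len(set(subset)) != len(subset):       # rejects [1, 2, 3, 1, 2, 3]
        return False
    # every element must be the cyclic successor of the previous one
    return all(b == a % n + 1 for a, b in zip(subset, subset[1:]))

print(foo([35, 36, 1]), foo([36, 1, 2]))  # True True
print(foo([1, 3, 4]), foo([3, 2, 1]))     # False False
```

This runs in a single pass with no template string, and the duplicate check also caps valid subsets at length <code>n</code>.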
<python>
2024-04-22 12:50:14
2
7,083
Val
78,366,269
1,656,671
Define a dot(a,b) function + properties and simplify / expand
<p>I would like to define a function + rules, and let sympy use the rule for simplification:</p> <pre><code>a,b,c = symbols(&quot;a b c&quot;) dot = function(&quot;dot&quot;) rule1 = Eq( dot(a+b,c) , dot(a,c)+dot(b,c)) rule2 = Eq( dot(a,b) , dot(b,a) ) </code></pre> <p>Now, use the above to <code>expand</code> and <code>simplify</code> expressions.</p> <p>How can this be done? Are there alternatives to sympy that can do this?</p>
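sympy has no general user-facing rewrite-rule engine for this, but both rules can be baked into a <code>Function</code> subclass's <code>eval</code>, which sympy calls on construction — expressions then come out expanded and in a canonical argument order automatically. A sketch (the <code>compare</code>-based ordering is one arbitrary way to impose symmetry):

```python
from sympy import Add, Function, symbols

class dot(Function):
    """dot(a, b) that is bilinear over + and symmetric by construction."""
    @classmethod
    def eval(cls, a, b):
        # rule 1: distribute over sums in either slot
        if isinstance(a, Add):
            return Add(*[dot(t, b) for t in a.args])
        if isinstance(b, Add):
            return Add(*[dot(a, t) for t in b.args])
        # rule 2: symmetry, imposed as a canonical argument order
        if b.compare(a) < 0:
            return dot(b, a)

a, b, c = symbols("a b c")
expr = dot(a + b, c)
print(expr)
```

Extending rule 1 to pull scalar coefficients out of <code>Mul</code> arguments would follow the same pattern; sympy's <code>Wild</code>-pattern <code>replace</code> is an alternative for ad-hoc, after-the-fact rewriting.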
<python><sympy><symbolic-math>
2024-04-22 12:33:26
2
1,036
QT-1
78,366,268
4,098,506
How to work with PostgreSQL's point type in peewee?
<p>How do I work with PostgreSQL's geometric type <code>point</code>? When I create the model code with pwiz, a point column is defined as <code>column_name = UnknownField() # point</code>. When I try to read this field, I only get a <code>db_schema.UnknownField</code> with no data in it. When I try to write it, it writes a <code>null</code> value no matter what I assign to it.</p>
<python><postgresql><peewee>
2024-04-22 12:33:04
0
662
Mr. Clear
78,366,208
547,231
How can we cast a `ctypes.POINTER(ctypes.c_float)` to `int`?
<p>I think this is a simple task, but I could not find a solution on the web to this. I have a external C++ library, which I'm using in my Python code, returning a <code>ctypes.POINTER(ctypes.c_float)</code> to me. I want to pass an array of these pointers to a <code>jax.vmap</code> function. The problem is that <code>jax</code> does not accept the <code>ctypes.POINTER(ctypes.c_float)</code> type. So, can I somehow cast this pointer to an ordinary <code>int</code>. Technically, this is clearly possible. But how do I do this in Python?</p> <p>Here is an example:</p> <pre><code>lib = ctypes.cdll.LoadLibrary(lib_path) lib.foo.argtypes = None lib.foo.restype = ctypes.POINTER(ctypes.c_float) bar = jax.vmap(lambda : dummy lib.foo())(jax.numpy.empty(16)) x = jax.numpy.empty(16, 256, 256, 1) y = jax.vmap(lib.bar, in_axes = (0, 1))(x, bar) </code></pre> <p>So, I want to invoke <code>lib.foo</code> 16-times so that I have an array <code>bar</code> containing all the pointers. Then I want to invoke another library function <code>lib.bar</code> which expects <code>bar</code> together with another (batched) parameter <code>x</code>.</p> <p>The problem is that jax claims that <code>ctypes.POINTER(ctypes.c_float)</code> is not a valid jax type. This is why I think the solution is to cast the pointers to <code>int</code>s and store those <code>int</code>s in <code>bar</code> instead.</p>
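Casting through <code>c_void_p</code> yields the address as a plain Python <code>int</code>, and the same cast runs in reverse when the address has to become a typed pointer again. A sketch with a stand-in buffer instead of the real <code>lib.foo()</code> (whether <code>jax.vmap</code> will actually trace through a ctypes call is a separate question — the ints themselves are fine to store in an array):

```python
import ctypes

# A stand-in for the pointer the C++ library would return.
buf = (ctypes.c_float * 4)(1.0, 2.0, 3.0, 4.0)
ptr = ctypes.cast(buf, ctypes.POINTER(ctypes.c_float))

# Cast through c_void_p to get the address as a plain Python int ...
addr = ctypes.cast(ptr, ctypes.c_void_p).value

# ... and round-trip it back to a typed pointer when calling into C again.
ptr2 = ctypes.cast(addr, ctypes.POINTER(ctypes.c_float))
print(ptr2[0])  # 1.0
```

When packing such addresses into a jax/numpy array, an unsigned 64-bit dtype avoids truncating high addresses.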
<python><ctypes><jax>
2024-04-22 12:24:50
1
18,343
0xbadf00d
78,366,113
9,833,362
pytest fixtures in nested Classes
<p>I have written the following testcase using pytest.</p> <pre><code>import pytest data_arg = [&quot;arg1&quot;, &quot;arg2&quot;] class TestParentClass1: @pytest.fixture(scope=&quot;class&quot;, params=data_arg,autouse=True) def common_setup(self, request): print(f'Configure the system according to {request.param}') class TestClass1: def test_class1_test1(self): print(&quot;Executing test1 of class1&quot;) def test_class1_test2(self): print(&quot;Executing test2 of class1&quot;) class TestClass2: def test_class2_test1(self): print(&quot;Executing test1 of class2&quot;) def test_class2_test2(self): print(&quot;Executing test2 of class2&quot;) </code></pre> <p>I have the following requirements:-</p> <ul> <li>There are two classes (TestClass1 and TestClass2) in which I have written several testcases.</li> <li>I want the following flow:- <ul> <li>For arg1, execute the common_setup, configure the system according to the arg1 then call the testcases written in TestClass1 and TestClass2.</li> <li>Then do the same for arg2.</li> </ul> </li> <li>I have to keep the TestClass1 and TestClass2. I can't merge the testcases of them, this is necessary for me.</li> </ul> <p>I am unable to achieve this flow. Can somebody please help me on how can I do it?</p>
<python><unit-testing><testing><pytest><fixtures>
2024-04-22 12:07:50
1
475
Shreyansh Jain
78,366,104
11,337,114
Unable to connect to cloud datastore from local legacy project based on python2.7 and django 1.4
<p>I have a <code>django==1.4</code> project (<code>python==2.7</code>) that I wanted to run and make some changes. I am unable to connect to cloud datastore from my local codebase. Right now, when I run the project using <code>dev_appserver.py</code> like:</p> <p><code>dev_appserver.py PROJECT_NAME --enable_console</code></p> <p>It runs three different servers:</p> <ol> <li>API server on <code>localhost:RANDOM_PORT</code></li> <li>Module default at <code>localhost:8080</code></li> <li>Admin server at <code>localhost:8000</code></li> </ol> <p>Now I visit <code>http://localhost:8000/console</code> and browse interative console and then run some python script like importing User model and fetching if there is anything there or not. And there isn't, why? Because it connects to the local <code>AppIdentityServiceStub</code>. And obviously there isn't any data unless I create some.</p> <p>Now I want to connect this codebase to my cloud datastore and I have tried different implementations that are already on stackoverflow and other platforms. One of them is setting environment variable <code>GOOGLE_APPLICATION_CREDENTIALS</code> to a <code>keyfile.json</code> that service account provides. I have the <code>keyfile.json</code> and I have set it to env variable but still I get connected to local datastore which has no data. Let me know if I am running this project wrong? or is there any other way to connect to cloud datastore from my local? Also, one more thing, when I do not set the env variable <code>GOOGLE_APPLICATION_CREDENTIALS</code> it shows this WARNING:</p> <pre><code>WARNING 2024-04-22 08:12:02,588 app_identity_stub.py:206] An exception has been encountered when attempting to use Application Default Credentials: File /Users/USER/keyfile.json (pointed by GOOGLE_APPLICATION_CREDENTIALS environment variable) does not exist!. Falling back on dummy AppIdentityServiceStub. 
</code></pre> <p>And when I set it to keyfile.json, this warning goes away but still this isn't connecting to the cloud datastore. What could be the reason? What am I doing wrong? Is there any other way to run this code? Any help is appreciated. Thanks</p>
<python><django><python-2.7><google-app-engine><google-cloud-datastore>
2024-04-22 12:05:25
1
365
Akif Hussain
78,365,983
1,869,090
Python sockets and windows
<p>I got this small script:</p> <pre><code>import socket with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as server_socket: server_socket.bind((&quot;127.0.0.1&quot;, 5353)) </code></pre> <p>Running it under Windows, I got the error <code>PermissionError: [WinError 10013] Der Zugriff auf einen Socket war aufgrund der Zugriffsrechte des Sockets unzulässig</code> (in English: &quot;An attempt was made to access a socket in a way forbidden by its access permissions&quot;).</p> <p>I cannot run it as admin, because I am using an IDE.</p> <p>Any ideas? TIA!</p>
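One likely culprit (an assumption — worth verifying with <code>netstat -abno</code>): 5353 is the mDNS port, and on Windows it is often already bound exclusively by the system mDNS/Bonjour service, which surfaces as WinError 10013 rather than "address already in use". Running as admin does not help with an exclusive bind; picking a different port does. A sketch that binds elsewhere:

```python
import socket

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as server_socket:
    server_socket.bind(("127.0.0.1", 0))   # 0 = let the OS pick a free port
    host, port = server_socket.getsockname()
    print(port)                            # the ephemeral port assigned
```

If the script genuinely needs to participate in mDNS, a library such as <code>zeroconf</code> (which joins the multicast group with shared socket options) is the usual route instead of a plain exclusive bind on 5353.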
<python><windows><sockets>
2024-04-22 11:41:49
2
3,429
t777
78,365,846
6,002,727
Text conversion processing and extracting information
<p>I have call center software that handles industry-specific calls; every call is recorded and then transcribed to text. Each transcript is currently passed to a human agent, who extracts all the information from it. I want to extract that information with AI, or some sort of service, analyze the conversation between the two speakers, and save the results in a DB.</p> <p>Say the call center agent's name is Bob and the person who needs assistance from Bob is Foo. The conversation between the two is below.</p> <pre><code>Bob : Hey Foo, I am Bob, How may I help you today? Foo: Hey, I am good, I have a issue with product which I have purchased from you. Bob : We never like to hear customers are unhappy. Why don’t you start by giving me your full name and order number so I can try to address this issue for you? Foo : Yeah, it is a electric bike, and Number is BBM-3344. and so on... </code></pre> <p>Now I have to extract information from the conversation, such as:</p> <ol> <li>sentiment (light or harsh mood, happy or sad, etc.)</li> <li>filler words (um, ha, etc.)</li> <li>confidence level</li> <li>whether the answer is appropriate and related to the topic</li> <li>engagement (between the two speakers)</li> <li>talk time</li> <li>conversation topic</li> <li>number of questions asked</li> </ol> <p>and any other useful information.</p> <p>Now my questions are:</p> <ol> <li>What other information can we extract from the conversation?</li> <li>How can all of this be extracted with Python and its libraries/packages?</li> </ol>
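Several of the listed metrics (filler words, question counts, per-speaker word share) need no ML at all once the transcript is split into speaker turns; sentiment, topic, and appropriateness then sit on top via an NLP library or an LLM API. A sketch of the counting layer — the filler list and the turn format are illustrative assumptions, not part of the original system:

```python
import re

FILLERS = {"um", "uh", "er", "ah", "hmm"}   # illustrative, not exhaustive

def conversation_stats(turns):
    """turns: list of (speaker, utterance) pairs parsed from the transcript."""
    stats = {}
    for speaker, text in turns:
        words = re.findall(r"[a-z']+", text.lower())
        s = stats.setdefault(speaker, {"words": 0, "fillers": 0, "questions": 0})
        s["words"] += len(words)                      # proxy for talk share
        s["fillers"] += sum(w in FILLERS for w in words)
        s["questions"] += text.count("?")
    return stats

turns = [
    ("Bob", "Hey Foo, I am Bob, How may I help you today?"),
    ("Foo", "Um, I have an issue with a product I purchased from you."),
]
stats = conversation_stats(turns)
```

Real talk time would come from the audio timestamps rather than word counts, and diarization (who spoke when) is usually a by-product of the transcription service itself.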
<python><nlp><bots><artificial-intelligence>
2024-04-22 11:17:10
0
1,627
Faraz Ahmed
78,365,344
5,761,010
cv2.stereoRectify works only when the rotation and translation are from camera 2 to camera 1
<p>I am using the euroc-mav dataset to create a disparity map from stereo images: <a href="https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets" rel="nofollow noreferrer">https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets</a>.</p> <p>In this dataset the cameras are already calibrated (intrinsic and extrinsic) relative to a common coordinate system (the imu frame).</p> <p>My goal is to rectify both of the images to create a disparity map. I am using the opencv cv2.stereoRectify().</p> <p>On the <a href="https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga617b1685d4059c6040827800e72ad2b6" rel="nofollow noreferrer">opencv documentation</a>, the stereoRectify function receive the rotation and translation between the two cameras (from the coordinate system of camera 1 to the coordinate system of camera 2).</p> <p>I first calculated the rotation between as follow:</p> <pre><code>relative_R_cam0_cam1 = np.linalg.inv(R_cam0) @ R_cam1 relative_T_cam0_cam1 = np.linalg.inv(R_cam0) @ (T_cam1 - T_cam0) </code></pre> <p>With this I get the disparity map to look very bad. <a href="https://i.sstatic.net/4O9ID.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4O9ID.png" alt="enter image description here" /></a></p> <p>I then check what happened when I calculate the rotation and translation from camera 2 frame to camera 1 frame with the following change:</p> <pre><code>relative_R_cam0_cam1 = np.linalg.inv(R_cam1) @ R_cam0 relative_T_cam0_cam1 = np.linalg.inv(R_cam1) @ (T_cam0 - T_cam1) </code></pre> <p>I received a good disparity map:</p> <p><a href="https://i.sstatic.net/27yS6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/27yS6.png" alt="enter image description here" /></a></p> <p><strong>Is there a bug in the opencv stereoRectify ? 
or I am missing something?</strong></p> <p>Here is my entire code:</p> <pre><code>cam0_intrinsics = np.array([ [458.654, 0.0, 367.215], [0.0, 457.296 , 248.3759], [0.0, 0.0, 1.0]]) cam0_distortion = np.array([[-0.28340811, 0.07395907, 0.00019359, 1.76187114e-05]]) cam1_intrinsics = np.array([ [457.587, 0.0, 379.999], [0.0, 456.134 , 255.238], [0.0, 0.0, 1.0]]) cam1_distortion = np.array([[-0.28368365, 0.07451284, -0.00010473, -3.55590700e-05]]) R_cam0 = np.array([ [0.0148655429818, -0.999880929698, 0.00414029679422], [0.999557249008, 0.0149672133247, 0.025715529948], [-0.0257744366974, 0.00375618835797, 0.999660727178] ]) T_cam0 = np.array([-0.0216401454975, -0.064676986768, 0.00981073058949]) R_cam1 = np.array([ [0.0125552670891, -0.999755099723, 0.0182237714554], [0.999598781151, 0.0130119051815, 0.0251588363115], [-0.0253898008918, 0.0179005838253, 0.999517347078] ]) T_cam1 = np.array([-0.0198435579556, 0.0453689425024, 0.00786212447038]) relative_R_cam0_cam1 = np.linalg.inv(R_cam1) @ R_cam0 relative_T_cam0_cam1 = np.linalg.inv(R_cam1) @ (T_cam0 - T_cam1) # relative_R_cam0_cam1 = np.linalg.inv(R_cam0) @ R_cam1 # relative_T_cam0_cam1 = np.linalg.inv(R_cam0) @ (T_cam1 - T_cam0) R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify( \ cam0_intrinsics, cam0_distortion, cam1_intrinsics, cam1_distortion, (752, 480), relative_R_cam0_cam1, relative_T_cam0_cam1) </code></pre>
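This looks like a convention mismatch rather than an OpenCV bug. The EuRoC extrinsics (T_BS in sensor.yaml) map camera coordinates into the body/IMU frame, i.e. X_body = R_cam · X_cam + T_cam, while stereoRectify wants the transform taking points from the first camera's frame into the second's. Composing the two given transforms yields exactly the second (working) formula; the first formula is the cam1→cam0 transform instead. A numeric check of the composition, using random extrinsics in place of the dataset values:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation():
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.linalg.det(q))   # force det = +1

# EuRoC-style extrinsics: X_body = R_cam @ X_cam + T_cam
R0, R1 = random_rotation(), random_rotation()
T0, T1 = rng.normal(size=3), rng.normal(size=3)

# One point expressed in both camera frames via the body frame:
X_cam0 = rng.normal(size=3)
X_body = R0 @ X_cam0 + T0
X_cam1 = R1.T @ (X_body - T1)

# The cam0 -> cam1 point transform, which is what stereoRectify's
# (R, T) arguments describe -- identical to the second (working) code:
R = R1.T @ R0                 # inv(R_cam1) @ R_cam0
T = R1.T @ (T0 - T1)
print(np.allclose(R @ X_cam0 + T, X_cam1))  # True
```

In short: when the stored extrinsics are camera-to-body, "relative pose for stereoRectify" means inverting the second camera's transform, not the first's.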
<python><opencv><ros><robotics>
2024-04-22 09:50:38
0
1,293
Idan Aviv
78,365,343
13,977,239
unexpected behavior with `inspect.getmembers` on @property methods that throw exceptions
<p>I feel like I'm encountering a rather strange behavior in Python. Try it out yourself:</p> <pre class="lang-py prettyprint-override"><code>import inspect class SomeClass: def __init__(self): inspect.getmembers(self, predicate=inspect.ismethod) def this_is_okay(self): raise Exception('an error that may be thrown') @property def this_is_not_okay(self): raise Exception('an error that may be thrown') @property def but_this_is_okay(self): if True: raise Exception('an error that may be thrown') </code></pre> <p>Inspecting the methods of a class will cause an error if there is a method decorated with <code>@property</code>, but only if it throws an error at the first indentation level.</p> <p>How can this be? And how can I get around this?</p> <p>P.S. The reason I'm inspecting like so is I'm trying to get an array of the class methods (actual callable objects) in the order that they're defined in the class.</p>
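Two properties of <code>getmembers</code> work against this use case: it returns results sorted alphabetically (so definition order is lost regardless), and called on an instance it evaluates attribute access, which can run property getters and let their exceptions propagate. Inspecting the class <code>__dict__</code> avoids both — class-level lookup returns the property object without invoking it, and class dicts preserve definition order:

```python
import inspect

class SomeClass:
    def method_a(self):
        pass

    @property
    def exploding(self):
        raise Exception("an error that may be thrown")

    def method_b(self):
        pass

# Class-level lookup returns the property *object* without running its
# getter, and the class __dict__ preserves definition order.
methods = [obj for obj in vars(SomeClass).values() if inspect.isfunction(obj)]
print([f.__name__ for f in methods])  # ['method_a', 'method_b']
```

The entries are plain functions rather than bound methods; binding them later via <code>getattr(instance, f.__name__)</code> (or <code>f.__get__(instance)</code>) recovers callables on a specific instance.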
<python><class><properties><decorator><abstract-syntax-tree>
2024-04-22 09:50:25
1
575
chocojunkie
78,365,335
16,674,436
DataFrame groupby function returning tuple from column instead of the value
<p>Here is my pandas DataFrame:</p> <pre><code> id_country txt_template_1 txt_template_2 id_set id_question txt_question 0 NEUTRAL template neutral 1 template neutral 2 1 1 1_1 1 NEUTRAL template neutral 1 template neutral 2 1 2 1_2 2 NEUTRAL template neutral 1 template neutral 2 1 3 1_3 3 NEUTRAL template neutral 1 template neutral 2 1 4 1_4 4 NEUTRAL template neutral 1 template neutral 2 2 1 2_1 5 NEUTRAL template neutral 1 template neutral 2 2 2 2_2 6 NEUTRAL template neutral 1 template neutral 2 2 3 2_3 7 NEUTRAL template neutral 1 template neutral 2 2 4 2_4 8 FRA template FRA 1 template FRA 2 1 1 1_1 9 FRA template FRA 1 template FRA 2 1 2 1_2 10 FRA template FRA 1 template FRA 2 1 3 1_3 11 FRA template FRA 1 template FRA 2 1 4 1_4 12 FRA template FRA 1 template FRA 2 2 1 2_1 13 FRA template FRA 1 template FRA 2 2 2 2_2 14 FRA template FRA 1 template FRA 2 2 3 2_3 15 FRA template FRA 1 template FRA 2 2 4 2_4 </code></pre> <p>Here is my function so far:</p> <pre><code>def ask_question(df): grouped_country = df.groupby(['id_country']) # loop through each group of country for country_id, group_country_df in grouped_country: grouped_id_set = group_country_df.groupby(['id_set']) # loop through each group of id_set for set_id, group_set_df in grouped_id_set: print(set_id) </code></pre> <p>the output of <code>print(set_id)</code> gives me the following:</p> <pre><code>(1,) (2,) (1,) (2,) (1,) (2,) [] </code></pre> <p>It seems like the <code>group_country_df.groupby(['id_set'])</code> is creating a tuple of the <code>id_set</code> values of the DataFrame, but from my understanding it shouldn’t.</p> <p>What am I getting wrong? And how to make sure that <code>set_id</code> is indead the value of <code>id_set</code> and not a tuple?</p>
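This is expected in pandas 2.x (assuming that is the version in use): iterating over a groupby keyed by a *list* of columns yields the keys as tuples, even when the list has a single element. Grouping by the bare column name restores scalar keys:

```python
import pandas as pd

df = pd.DataFrame({"id_set": [1, 1, 2, 2], "x": [10, 20, 30, 40]})

# Group by the column *name*, not a one-element list, to get scalar keys.
keys = [set_id for set_id, group in df.groupby("id_set")]
print(keys)  # [1, 2]
```

If the list form has to stay (e.g. the key list is built programmatically), unpacking the 1-tuple in the loop header — <code>for (set_id,), group in df.groupby(["id_set"])</code> — achieves the same thing.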
<python><dataframe><group-by><tuples>
2024-04-22 09:48:30
1
341
Louis
78,365,173
1,769,197
matplotlib: failed to plot time series bars at the right timeframe
<p>Basically, I have the following data below from 9am to 1pm on a particular date. When I tried to create a time series bar plot, the bars are overlapping each other and the plot kept displaying a timeframe that is way wider than the 9am -1pm window. How can I fix this using matplotlib without seaborn?</p> <p>Note that the data columns are already in the following format <code>End_Datetime datetime64[ns]</code> and <code>VOLUME float64</code>. And if I were to do a line plot, there would be no issues, but if I do a bar plot, the bars overlap and the times are just out of place.</p> <pre><code>{'End_Datetime': {0: Timestamp('2024-02-29 09:14:59.999000'), 1: Timestamp('2024-02-29 09:29:59.999000'), 2: Timestamp('2024-02-29 09:44:59.999000'), 3: Timestamp('2024-02-29 09:59:59.999000'), 4: Timestamp('2024-02-29 10:14:59.999000'), 5: Timestamp('2024-02-29 10:29:59.999000'), 6: Timestamp('2024-02-29 10:44:59.999000'), 7: Timestamp('2024-02-29 10:59:59.999000'), 8: Timestamp('2024-02-29 11:14:59.999000'), 9: Timestamp('2024-02-29 11:29:59.999000'), 10: Timestamp('2024-02-29 11:44:59.999000'), 11: Timestamp('2024-02-29 11:59:59.999000'), 12: Timestamp('2024-02-29 12:14:59.999000'), 13: Timestamp('2024-02-29 12:29:59.999000'), 14: Timestamp('2024-02-29 12:44:59.999000'), 15: Timestamp('2024-02-29 12:59:59.999000'), 16: Timestamp('2024-02-29 13:14:59.999000'), 17: Timestamp('2024-02-29 13:29:59.999000'), 18: Timestamp('2024-02-29 13:44:59.999000')}, 'VOLUME': {0: 45000.0, 1: 142000.0, 2: 55000.0, 3: 39000.0, 4: 66000.0, 5: 32000.0, 6: 28000.0, 7: 30000.0, 8: 21000.0, 9: 18000.0, 10: 61000.0, 11: 154000.0, 12: 33000.0, 13: 24000.0, 14: 13000.0, 15: 12000.0, 16: 339000.0, 17: 31000.0, 18: 281000.0}} fig, ax = plt.subplots(nrows=1, ncols = 1,dpi=600, figsize=(12,9)) ax.bar(x=df['End_Datetime'], height = df['VOLUME'].values, color = 'blue', label = '15 mins Bin Volume', edgecolor = 'white', alpha=0.5) ax.set_ylabel('Volume') ax.legend(fontsize=6, title_fontsize=8, loc='upper
left') ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M')) fig.tight_layout() fig.show() </code></pre> <p><a href="https://i.sstatic.net/XNqa8.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XNqa8.jpg" alt="enter image description here" /></a></p>
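A likely contributing factor worth noting (a sketch, not a verified diagnosis): `ax.bar` defaults to `width=0.8` in x-axis data units, and on a datetime axis that means 0.8 *days*, so 15-minute bars overlap massively and the autoscaled window grows far beyond 9am–1pm. Passing an explicit sub-15-minute width addresses both symptoms; the data below is a stand-in for the posted volumes.

```python
import matplotlib
matplotlib.use("Agg")   # headless backend, for this sketch only
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical stand-in for the 15-minute volume bins in the post.
times = pd.date_range("2024-02-29 09:15", periods=19, freq="15min")
vols = range(19)

fig, ax = plt.subplots()
# Default bar width is 0.8 *days* on a date axis; use ~10 minutes instead
# so neighbouring 15-minute bars no longer overlap.
bars = ax.bar(times, vols, width=pd.Timedelta(minutes=10))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M"))
ax.set_xlim(times[0] - pd.Timedelta(minutes=15),
            times[-1] + pd.Timedelta(minutes=15))
fig.tight_layout()
```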
<python><matplotlib>
2024-04-22 09:19:55
1
2,253
user1769197
78,365,076
7,052,826
Reading .xlsx file with Python Pandas changes some '<' to '<.1'
<p>I was given a very messy .xlsx file. It contains a multiheader. A small selection of these double headers looks like this.</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>zoxamide</th> <th></th> <th>zoxamide</th> <th></th> </tr> </thead> <tbody> <tr> <td>zOaAd [ug/l] [NVT] [AW]</td> <td>&lt;</td> <td>zOaAd [ug/l] [NVT] [OW]</td> <td>&lt;</td> </tr> </tbody> </table></div> <p>As can be seen, every second column contains a '&lt;' symbol. When I read this into pandas using</p> <pre><code>df = pd.read_excel(file, header=[0,1]) </code></pre> <p>and then look at the headers in this file, I see the following:</p> <pre><code>df.columns MultiIndex([( 'zoxamide', 'zOaAd [ug/l] [NVT] [AW]'), ( 'zoxamide', '&lt;'), ( 'zoxamide', 'zOaAd [ug/l] [NVT] [OW]'), ( 'zoxamide', '&lt;.1'), ... </code></pre> <p>Note now that one of the '&lt;' symbols has been converted to '&lt;.1'.</p> <p>I have no idea why this happens.</p>
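For reference, the `.1` suffix is how pandas disambiguates *duplicate* column names at the same header level, independent of the `'<'` character itself. A CSV sketch reproduces the same mangling with less setup than an .xlsx:

```python
import io
import pandas as pd

# Two pairs of duplicate header names, mirroring the repeated '<' columns.
raw = "a,<,b,<\n1,2,3,4\n"
df = pd.read_csv(io.StringIO(raw))

# The second '<' is renamed to '<.1' to keep column labels unique.
print(list(df.columns))   # ['a', '<', 'b', '<.1']
```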
<python><pandas>
2024-04-22 09:05:51
0
4,155
Mitchell van Zuylen
78,364,965
2,007,927
How to set an automated action to change Active status of a view on Odoo (v14)
<p>I have created a view which creates a popup on our website. The idea is to show this popup when the company is closed due to public holidays. I set the public holidays in the calendar, and I would like to toggle this view's <code>Active</code> status when the date matches one of those public holiday dates.</p> <p>So, when it is a public holiday, the automated action would change the <code>Active</code> status from <code>False</code> to <code>True</code> and the popup will appear on the website. When it is not a holiday, it will trigger again and the status will be <code>False</code>.</p> <p>Below, you can find the automated action that I have created. I am pretty sure that this is doable either with <code>execute python code</code> or with another action, but I don't know how to achieve this. Any idea?</p> <p><a href="https://i.sstatic.net/Mpx9M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mpx9M.png" alt="enter image description here" /></a></p>
<python><odoo><odoo-14>
2024-04-22 08:46:13
0
586
george
78,364,791
2,886,789
How to do multipart/mixed requests?
<p>I'm tinkering with a website which is part of what SAP provides companies to do their work time accounting.</p> <p>I found out that what I'm planning to do is actually a POST request, and here is what I took from the Chrome developer console Request payload field. I recorded this when I clicked on the button that I wanted to replicate. It actually creates three (3) requests that look similar to the one below, but as far as I can see they don't contain relevant information.</p> <pre><code> --batch_5932-e086-aabb Content-Type: multipart/mixed; boundary=changeset_f03e-c18c-f82d --changeset_f03e-c18c-f82d Content-Type: application/http Content-Transfer-Encoding: binary POST TimeEventSet?sap-client=100 HTTP/1.1 sap-contextid-accept: header Accept: application/json Accept-Language: de DataServiceVersion: 2.0 MaxDataServiceVersion: 2.0 x-csrf-token: &lt;some-token&gt;== Content-Type: application/json Content-Length: 153 {&quot;EmployeeID&quot;:&quot;&lt;some-number&gt;&quot;,&quot;EventDate&quot;:&quot;2024-04-19T00:00:00&quot;,&quot;EventTime&quot;:&quot;PT15H15M29S&quot;,&quot;TimeType&quot;:&quot;&lt;some-code&gt;&quot;,&quot;TimezoneOffset&quot;:&quot;&lt;some-offset&gt;&quot;,&quot;ApproverPernr&quot;:&quot;&lt;some-number&gt;&quot;} --changeset_f03e-c18c-f82d-- --batch_5932-e086-aabb-- </code></pre> <p>There are a couple of examples on the net that show how to do file transfers with multipart requests. Here is what I tried on a different occasion, but it gives me a server error saying the syntax is wrong.
That was created by recording a <code>har file</code> in Chrome and using the python package <code>har2requests</code></p> <pre><code>r = session.post( &quot;&lt;some-url&gt;&quot;, params={&quot;sap-client&quot;: &quot;100&quot;}, data='\r\n--batch_9003-5d2c-29c4\r\nContent-Type: multipart/mixed; boundary=changeset_34fb-a3ca-3327\r\n\r\n--changeset_34fb-a3ca-3327\r\nContent-Type: application/http\r\nContent-Transfer-Encoding: binary\r\n\r\nPOST TimeEventSet?sap-client=100 HTTP/1.1\r\nsap-contextid-accept: header\r\nAccept: application/json\r\nAccept-Language: de\r\nDataServiceVersion: 2.0\r\nMaxDataServiceVersion: 2.0\r\nx-csrf-token: &lt;some-token&gt;==Content-Type: application/json\r\nContent-Length: 152\r\n\r\n{&quot;EmployeeID&quot;:&quot;&lt;some-number&gt;&quot;,&quot;EventDate&quot;:&quot;2024-04-19T00:00:00&quot;,&quot;EventTime&quot;:&quot;PT17H10M6S&quot;,&quot;TimeType&quot;:&quot;&lt;some-code&gt;&quot;,&quot;TimezoneOffset&quot;:&quot;&lt;some-offset&gt;&quot;,&quot;ApproverPernr&quot;:&quot;&lt;some-number&quot;}\r\n--changeset_34fb-a3ca-3327--\r\n\r\n--batch_9003-5d2c-29c4--\r\n' ) </code></pre> <p>Any idea how to solve this? I'm aware that the csrf token is hardcoded as well, that is something I need to tackle later on too.</p>
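One detail that stands out when rebuilding such a body by hand (a sketch only, not verified against a real SAP gateway): the outer request must carry a <code>Content-Type: multipart/mixed; boundary=...</code> header whose boundary matches the body, and every boundary/blank-line sequence must be exact. Assembling the body programmatically makes that much easier to get right than one long escaped string — boundary names and the <code>TimeEventSet</code> path below are placeholders taken from the captured payload:

```python
import json

def build_batch_body(batch_boundary, changeset_boundary, payload):
    """Assemble a multipart/mixed $batch body containing one changeset POST.

    A sketch under assumptions: boundary names, headers and the
    TimeEventSet path are placeholders, not a verified SAP contract.
    """
    payload_json = json.dumps(payload)
    inner_request = "\r\n".join([
        "POST TimeEventSet?sap-client=100 HTTP/1.1",
        "Accept: application/json",
        "Content-Type: application/json",
        "Content-Length: " + str(len(payload_json)),
        "",
        payload_json,
    ])
    return "\r\n".join([
        "--" + batch_boundary,
        "Content-Type: multipart/mixed; boundary=" + changeset_boundary,
        "",
        "--" + changeset_boundary,
        "Content-Type: application/http",
        "Content-Transfer-Encoding: binary",
        "",
        inner_request,
        "--" + changeset_boundary + "--",
        "--" + batch_boundary + "--",
        "",
    ])

batch_body = build_batch_body("batch_1", "changeset_1", {"EmployeeID": "123"})
# The outer request would then advertise the same boundary, e.g. (placeholder URL):
# requests.post(url, data=batch_body,
#               headers={"Content-Type": "multipart/mixed; boundary=batch_1"})
```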
<python><python-requests><odata>
2024-04-22 08:10:01
1
389
daeda
78,364,647
1,537,366
Avoiding accidental execution of python scripts as bash scripts
<p>I like to run my python scripts directly in bash like so</p> <pre><code>$ ./script.py </code></pre> <p>But sometimes I forget the shebang line <code>#!/usr/bin/env python3</code>, and it has the potential to overwrite a file named <code>np</code> if the script has <code>import numpy as np</code> (and if you click a few times when the ImageMagick import command is run).</p> <p>What can I do to avoid such accidental execution of python scripts as bash scripts? Is there a way to block bash from executing a script that has an extension &quot;.py&quot; as a bash script?</p>
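One low-tech guard worth sketching (the function name is made up, and this only protects invocations routed through it, not a direct `./script.py`): a shell wrapper that refuses to execute a `.py` file whose first line is not a python shebang, so bash never gets a chance to interpret `import numpy as np` as commands.

```shell
# run_py_safely: refuse to execute *.py files that lack a python shebang.
run_py_safely() {
  local f=$1; shift
  if [[ $f == *.py ]] && ! head -n 1 "$f" | grep -q '^#!.*python'; then
    echo "refusing to run $f: no python shebang on line 1" >&2
    return 1
  fi
  "$f" "$@"
}
```

Aliasing your usual launch habit to such a function (or adding the same check to a `command_not_found`-style hook) is one way to make the forgetful case fail loudly instead of destructively.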
<python><bash><shebang>
2024-04-22 07:41:57
1
1,217
user1537366
78,364,486
16,383,578
How to tell if an archive file is corrupt?
<p>I have downloaded 424 files from <a href="http://www.nexusmods.com" rel="nofollow noreferrer">www.nexusmods.com</a>, they have a combined size of 21.576GiB.</p> <p>The average download speed is 1MiB/s, because I am a non premium user and subject to NexusMods' download speed cap. The download took well over 6 hours, and I downloaded the files in batch using a Python script I wrote.</p> <p>The download links are generated using the following code:</p> <pre class="lang-py prettyprint-override"><code>import contextlib import json import requests from socket import gaierror from requests import RequestException from urllib3.exceptions import HTTPError DOWNLOAD_LINK_GENERATOR = ( &quot;https://www.nexusmods.com/Core/Libs/Common/Managers/Downloads?GenerateDownloadUrl&quot; ) while True: with contextlib.suppress( RequestException, HTTPError, gaierror, TimeoutError ): url = json.loads(requests.post( url=DOWNLOAD_LINK_GENERATOR, data={&quot;fid&quot;: file_id, &quot;game_id&quot;: game_id}, headers=HEADERS, cookies=COOKIES, timeout=1, ).content)[&quot;url&quot;] </code></pre> <p><code>COOKIES</code> and <code>HEADERS</code> are constants loaded from a file, <code>file_id</code> and <code>game_id</code> are NexusMods ids for the file and game.</p> <p>The files are downloaded using <code>requests.get(url, stream=True)</code>, previously I used a more complicated version with <code>aiohttp</code>, <code>asyncio</code> and <code>aiofiles</code>, and I download the file through 16 coroutines using <code>&quot;Content-Range&quot;</code> header if the file is larger than 10MiB, with automatic retrying and cancelling download and resuming if the download speed drops to 0 for several seconds...</p> <p>It was working perfectly, but somehow I couldn't make it work yesterday, so I fell back to single thread <code>requests.get</code>.</p> <p>Now I have good reasons to suspect some files may be corrupted, first of all I am behind the Great Firewall of China, which you may or may not 
know. Anyway the GFW censors international internet content, throttles international network speed and outright blocks many foreign websites, like YouTube, Google and even Wikipedia.</p> <p>Naturally I have ways to bypass the GFW, I can access Google, YouTube, Wikipedia and whatnot, but the services I use to access international content are also being constantly being disrupted by the GFW, then again, the service providers are actively combating it and I am actively combating the GFW as well, I learned the know-hows all by myself, I have to. But this is a war that never ends.</p> <p>And this is why there are those checks in the code. Now download speed from NexusMods for non-premium users is capped, and that speed is extremely low, this compounded with the fact that NexusMods is <a href="https://nexusmods.statuspage.io/" rel="nofollow noreferrer">experiencing heavy traffic</a> due to the popularity of the Fallout TV series, all its services have <code>&quot;Degraded Performance&quot;</code>, making the download speed even slower, which equals longer download time and more time for the GFW to disrupt the download process, and I have removed many checks and safeguards and error handling in my hasty reimplementation...</p> <p>The files are all either .zip, .rar or .7z files (some files are named .dazip, which are just renamed .zip files), I have programmatically verified their magic numbers, all files that are supposed to be zips have magic number of <code>b'PK\x03\x04'</code>, all 7zs <code>b&quot;7z\xbc\xaf'\x1c&quot;</code> and all rars <code>b'Rar!\x1a\x07'</code> (they have different eight-byte magic numbers but that is just because of the version numbers), and I intend to use Python to extract the files in batch asynchronously.</p> <p>Now I have 7z.exe and I know I can use that, but I think it is unPythonic and parsing its verbose stdout (or whatever stream it uses to print to console) while easy requires unnecessary extra work. 
Plus these files have very bad folder structures, and there are many garbage files in the archives (.txt, .doc, .pdf, .odt and whatnot, these are readme files, there are .png, .jpg and whatnot image files, these files aren't used by the game, plus other garbage files), I would rather not extract those files. And I know I can use <code>7z e file</code> to extract the files without directories, but many archives contain files with the same name in different folders, trouble is the game will only use one.</p> <p>All these necessitate that I use more complex code for absolute control. And I absolutely have to do the testing before hand.</p> <p>I use <code>rarfile.RarFile</code> for the rars, <code>py7zr.SevenZipFile</code> for the 7zs and <code>zipfile.ZipFile</code> for the zips and I have found to test the files I need to use <code>rarfile.RarFile.testrar</code> for the rars, <code>py7zr.SevenZipFile.test</code> for the 7zs and <code>zipfile.ZipFile.testzip</code> for the zips.</p> <p>From my testing, if an archive is intact, these functions should return nothing. But if these files are corrupt, they are supposed to return the corrupted files, however I have manually made copies and corrupted the copies and they raise exceptions instead. 
And these exceptions are inconsistent.</p> <p>I corrupted an rar file, changing the length, <code>RarFile</code> initializes and <code>testrar</code> raises <code>rarfile.BadRarFile</code>, <code>BadRarFile: read failed</code>, I corrupted the copy in a hex editor and I got this instead: <code>BadRarFile: Corrupt file</code>.</p> <p>Corrupting 7z file, changing length, <code>py7zr.Bad7zFile</code> is raised when trying to initialize the object, interestingly corrupting the file without changing length, <code>py7zr.SevenZipFile.test</code> succeeds without returning anything, but <code>py7zr.SevenZipFile.testzip</code> raises <code>LZMAError: Corrupt input data</code> (I can't find the full name of LZMAError)</p> <p>Corrupting zip file, changing length, <code>zipfile.ZipFile.testzip</code> raises <code>OSError</code>. But when I corrupt the file without changing length I noticed different behaviors:</p> <pre><code>error: Error -3 while decompressing data: invalid distance too far back </code></pre> <p>and returning the corrupt file names. There is <code>zipfile.BadZipFile</code> and I am unable to make it get raised.</p> <p>So the basic idea is to run the test function and check if it returns anything, if it returns nothing the archive is intact, but when I corrupted the files manually different exceptions are raised, and I am sure I haven't found all of them.</p> <p>So if exceptions are raised, then the archive is also corrupt, the archive is corrupt if the test function returns truthy values or the function raises exceptions. Of course I know the exceptions will stop the code if I don't catch them.</p> <p>I know I can use <code>except:</code> to catch all exceptions, but there might be other exceptions that are unrelated to corrupted files which I would rather not catch.</p> <p>So what are all possible exceptions that might be raised when an archive is corrupt and these aforementioned libraries are used?</p>
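For the zip case specifically, the "returns a truthy value *or* raises" duality can be folded into a single predicate (a sketch; it deliberately treats any exception from opening or testing as corruption, which may be broader than wanted):

```python
import io
import zipfile

def zip_is_intact(data: bytes) -> bool:
    """True only if `data` opens as a zip and every member passes its CRC check."""
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            return zf.testzip() is None   # None means no bad member was found
    except Exception:
        # BadZipFile, zlib.error, OSError, ... — fold both failure modes
        # ("raises" and "returns a bad member name") into one verdict.
        return False
```

The same wrapper shape should work for `rarfile.RarFile.testrar` and `py7zr.SevenZipFile.test`/`testzip`, with their respective exception families (`rarfile.Error`, `py7zr.Bad7zFile`, `lzma.LZMAError`) landing in the same `except`.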
<python>
2024-04-22 07:09:15
1
3,930
Ξένη Γήινος
78,364,279
1,559,401
Server-sent events (SSE) using Python httpx-sse
<p>I recently moved my API client code from <code>requests</code> to <code>httpx</code>. Meanwhile, the backend added SSE support that I would like to take advantage of.</p> <p>I looked into the topic and the only thing I was able to find (in the context of <code>httpx</code>) is the <a href="https://pypi.org/project/httpx-sse/" rel="nofollow noreferrer"><code>httpx-sse</code></a> package. However, I am unable to get it to work.</p> <p>I have an <code>httpx.AsyncClient</code> with the following configuration:</p> <ul> <li><p><code>base_url</code> - as a placeholder for this post I will use <code>https://example.com/v2api</code></p> </li> <li><p><code>/data/sse</code> - the path where SSEs can be retrieved from</p> </li> <li><p><code>headers</code> - a dictionary in the form of</p> <pre><code>{ 'Content-type' : 'application/json', 'api-key' : 'test1234' } </code></pre> <p>where the <code>api-key</code> is required for any REST operation</p> </li> <li><p><code>params</code> - a dictionary in the form of</p> <pre><code>{ 'layerList' : 'test1,test2' } </code></pre> <p>where the <code>layerList</code> represents a string of comma separated IDs of data structures I would like to &quot;probe&quot; for SSEs (in my case these are map layers).</p> </li> </ul> <p>I use the following code to check for SSEs. Due to a timeout with the SSE I had to set the client's <code>timeout</code> to <code>None</code> so that it runs indefinitely.
That already indicated an issue...</p> <pre><code>client = httpx.AsyncClient( base_url='https://example.com/v2api', headers={ 'x-api-key' : 'test1234', 'Content-type' : 'application/json' }, timeout=None) try: async with httpx_sse.aconnect_sse(client, 'GET', '/data/sse') as event_source: events = [sse async for sse in event_source.aiter_sse()] print(events) except httpx.ReadTimeout as hrt: print(hrt) except Exception as ex: print(ex) </code></pre> <p><strong>Note:</strong> The client works with other endpoints of the same API that are not SSE.</p> <p>I don't get any output in the console. I also tried creating a new client in the <code>with</code> statement and instead of setting the <code>base_url</code> (<code>httpx</code> automatically concatenates any <code>url</code> with the <code>base_url</code> of the client's instance) I just used the full one (in case the <code>httpx-sse</code> works in a different way) along with headers and parameters again set in that very statement. The outcome was the same.</p> <p>I know that the SSE is working since I was able to get it to work with the <a href="https://pypi.org/project/sseclient" rel="nofollow noreferrer"><code>sseclient</code></a>:</p> <pre><code>from sseclient import SSEClient events = SSEClient('https://example.com/v2api/data/sse', headers={ 'Content-type' : 'application/json', 'x-api-key' : config['credentials']['api-key'] }, params={'layerList':'S1-test1-78bbda5d-929c-497e-b212-86a33dfbc8ff'}) for event in events: print(event) </code></pre> <hr /> <p><strong>UPDATE:</strong></p> <p>Interestingly enough, it works with the non-async client from <code>httpx-sse</code>.</p>
<python><server-sent-events><httpx>
2024-04-22 06:15:58
0
9,862
rbaleksandar
78,364,276
11,447,747
How does dask handles the datasets larger than the memory?
<p>I'm seeking guidance on efficiently profiling data using Dask.</p> <p>I've opted to use Dask to lazily load the DataFrame, either from SQL tables (dask.read_sql_table) or CSV files (dask.read_csv).</p> <p>I am using this code:</p> <pre><code>df = dd.read_sql_table(args) df= client.persist(df) ... ... df[column].min().compute() df[column].max().compute() </code></pre> <p>The reason for using the persist method is that, if I don't use persist, Dask loads the data into memory on each subsequent call to min and max, and that takes a lot of time on each call. If I use persist, the data is loaded into memory only once and the subsequent calls (min, max) are very fast.</p> <p>But I am confused about how Dask's persist method works. If my dataset size is 8 GB and my memory size is 4 GB, does persist load all the data into memory? Or does it load only the partitions that fit and then use a partition-swapping mechanism to do the calculations?</p>
<python><dask><dask-distributed>
2024-04-22 06:15:21
1
341
Faizan
78,364,010
10,138,470
Replicating records in Pandas based on some condition and efficiently
<p>I have a pandas data frame with records like the below:</p> <pre><code>df = pd.DataFrame({ 'APPN': [1001, 1002, 1003, 1004, 1005, 1006], 'Applct_Id_1': ['A', 'B', 'C', 'D', None, 'F'], 'Applct_Id_2': [None, 'E', 'F', None, 'G', None], 'Applct_Id_3': ['W', 'Z', None, 'Y', None, None], 'Name': ['Alice', 'Bob', 'Charlie', 'David', 'Eve', 'Frank'], 'Age': [25, 30, 35, 40, 45, 50] }) </code></pre> <p>Ideally, all values for APPN are unique. However, there are different Applct_Ids like Applct_Id_1 etc. in each APPN. For example, 1001 has A (Applct_Id_1) and W (Applct_Id_3). Applct_Id_2 is None, so it is not of interest. What I want to do is replicate the record in the row with APPN 1001 based on Applct_Id_1 and Applct_Id_3. The idea I have is to create a new column called ID_Number and record the values of Applct_Id_1 and Applct_Id_3 for each APPN, like 1001 in this example. This will be followed by a copy of the row for this APPN. I acknowledge that this will be different for other APPNs. Therefore, the replication of records will only be for APPNs with more than 1 recorded Applct_Id in the dataset. In the end I want to achieve something like this for 1001 as an example.</p> <pre><code>new_df = pd.DataFrame({ 'APPN': [1001, 1001], 'ID_Number': ['A', 'W'], 'Name': ['Alice', 'Alice'], 'Age': [25, 25] }) </code></pre> <p>How can I do this in an efficient way in Pandas as I'll be dealing with about 400K records?</p>
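A vectorised sketch of the reshape described above (the "only APPNs with more than one Applct_Id" condition is left as an extra filter, e.g. `groupby('APPN').filter(lambda g: len(g) > 1)`, since the question leaves it slightly open): melt the three Applct_Id_* columns into one ID_Number column and drop the empties, which avoids any Python-level loop over 400K rows.

```python
import pandas as pd

df = pd.DataFrame({
    'APPN': [1001, 1002],
    'Applct_Id_1': ['A', 'B'],
    'Applct_Id_2': [None, 'E'],
    'Applct_Id_3': ['W', None],
    'Name': ['Alice', 'Bob'],
    'Age': [25, 30],
})

long_df = (df.melt(id_vars=['APPN', 'Name', 'Age'],
                   value_vars=['Applct_Id_1', 'Applct_Id_2', 'Applct_Id_3'],
                   value_name='ID_Number')
             .dropna(subset=['ID_Number'])   # keep only recorded ids
             .drop(columns='variable')       # which Applct_Id_* it came from
             .sort_values(['APPN', 'ID_Number'])
             .reset_index(drop=True))

print(long_df[long_df['APPN'] == 1001])      # two rows: A and W, both Alice/25
```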
<python><pandas>
2024-04-22 04:46:37
1
445
Hummer
78,363,959
4,264,017
numpy.linalg.LinAlgError: Matrix is singular to machine precision
<p>I'm trying to use the Bai-Perron code from the <a href="https://github.com/ceholden/pybreakpoints" rel="nofollow noreferrer">https://github.com/ceholden/pybreakpoints</a> repo.</p> <p>My Python version is 3.12.2.</p> <p>The code I wrote is similar to their test.</p> <pre class="lang-py prettyprint-override"><code> from pybreakpoints.baiperron import breakpoint import numpy as np import pandas as pd nile = pd.read_csv(&quot;tests/data/nile.csv&quot;) X = np.ones_like(nile) results = breakpoint(X, nile) </code></pre> <p>But I got errors.</p> <pre><code>mytest.py&quot;, line 9, in &lt;module&gt; results = breakpoint(X, nile) ^^^^^^^^^^^^^^^^^^^ \pybreakpoints\baiperron.py&quot;, line 76, in breakpoint ssr = recresid(X_[i:n, :], y_[i:n]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ \pybreakpoints\recresid_.py&quot;, line 134, in recresid rresid = _recresid(_X, _y, span)[span:] ^^^^^^^^^^^^^^^^^^^^^^^ \Python312\Lib\site-packages\numba\np\linalg.py&quot;, line 899, in _inv_err_handler raise np.linalg.LinAlgError( numpy.linalg.LinAlgError: Matrix is singular to machine precision. </code></pre> <p>Since it is the data from the repo, the test should at least be able to run. I suspect this is just a data format problem, but I'm new to Python and not sure how to debug it.</p>
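Not the package's fix, but a numpy sketch of what this error class usually means here: `np.ones_like(nile)` has the same shape as the whole DataFrame, so if `nile.csv` has two columns the design matrix is two identical constant columns — rank 1 — and inverting X'X fails. A single intercept column of shape `(len(nile), 1)` avoids that (an assumption about the CSV layout, not verified against the repo).

```python
import numpy as np

X = np.ones((100, 2))                 # two identical constant columns
print(np.linalg.matrix_rank(X))       # 1 — so X'X is singular

try:
    np.linalg.inv(X.T @ X)
except np.linalg.LinAlgError as err:
    print("inversion fails:", err)

X_ok = np.ones((100, 1))              # a single intercept column
print(np.linalg.inv(X_ok.T @ X_ok))   # [[0.01]] — 1x1 and invertible
```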
<python><numpy><linear-algebra>
2024-04-22 04:28:31
0
2,987
Anurat Chapanond
78,363,851
5,942,100
dynamically mapping an excel spreadsheet using python and pandas
<p>I have a large excel spreadsheet that I need to read data from certain rows, columns and cells and then output into a different dataframe format. How would I capture the data in specific cells while also ensuring the data can be captured when the spreadsheet is changed? Meaning more columns or rows could be added, but I need to continuously capture this data. Could you provide the code using python and pandas and using loops to dynamically capture this data. Again, not all cells will be used and only certain rows and columns will be used. Here is an example.</p> <p>Logic</p> <p><strong>Display the count of the column name for a given quarter and ID. In this case: q1.22. I created new columns called: date and TYPE</strong></p> <p>Here is the excel spreadsheet:</p> <p><strong>Data</strong></p> <pre><code> q1.22 ID type1 OFFICE nontype1 Customer NY 1 3 1 2 CA 1 33 1 0 TOTALS 2 36 2 1 data = { '0': ['id', 'NY', 'CA', 'TOTALS'], 'q1.22': ['type1', '1', '1', '2'], '0_2': ['OFFICE', '3', '33', '36'], '0_3': ['nontype1', '1', '1', '2'], '0_4': ['Customer', '2', '0', '1'] } </code></pre> <p><strong>Desired</strong></p> <pre><code>ID date TYPE NY q1.22 type1 NY q1.22 nontype1 NY q1.22 Customer NY q1.22 Customer CA q1.22 type1 CA q1.22 nontype1 </code></pre> <p><strong>Doing</strong></p> <pre><code># Define the row indices for both ranges start_row, end_row = 0, 3 # Rows 1 to 4 (0-based index) # Define the column indices for the first range (A to C) start_col_range1, end_col_range1 = 0, 2 # Columns A to C (0-based index) # Define the column indices for the second range (E to F) start_col_range2, end_col_range2 = 4, 5 # Columns E to F (0-based index) # Create an empty list to store the captured data captured_data = [] # Loop through rows and columns within the first range (A to C) for row in range(start_row, end_row + 1): row_label = df.iloc[row, 0] # Assuming the ID column is in the first column for col in range(start_col_range1, end_col_range1 + 1): col_label = 
df.columns[col] value = df.iloc[row, col] captured_data.append({'ID': row_label, 'date': df.iloc[0, 0], 'TYPE': col_label}) # Loop through rows and columns within the second range (E to F) for row in range(start_row, end_row + 1): row_label = df.iloc[row, 0] # Assuming the ID column is in the first column for col in range(start_col_range2, end_col_range2 + 1): col_label = df.columns[col] value = df.iloc[row, col] captured_data.append({'ID': row_label, 'date': df.iloc[0, 0], 'TYPE': col_label}) # Convert the captured data into a DataFrame output_df = pd.DataFrame(captured_data) </code></pre> <p>However, this is the output:</p> <pre><code>ID date TYPE 0 id id Unnamed: 0 1 id id q1.22 2 NY id Unnamed: 0 3 NY id q1.22 4 CA id Unnamed: 0 5 CA id q1.22 6 TOTALS id Unnamed: 0 7 TOTALS id q1.22 8 id id Unnamed: 3 9 id id Unnamed: 4 10 NY id Unnamed: 3 11 NY id Unnamed: 4 12 CA id Unnamed: 3 13 CA id Unnamed: 4 14 TOTALS id Unnamed: 3 15 TOTALS id Unnamed: 4 </code></pre> <p>Any suggestion is appreciated</p>
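A sketch of the capture loop driven by the data itself rather than hard-coded column ranges (two assumptions are baked in, flagged in comments: the OFFICE column is excluded because it does not appear in the desired output, and each count expands into that many output rows, which is how NY gets Customer twice):

```python
import pandas as pd

# The flattened sheet from the post: row 0 carries the real type labels,
# the last row is TOTALS, and the quarter lives in the header row.
data = {
    '0':     ['id', 'NY', 'CA', 'TOTALS'],
    'q1.22': ['type1', '1', '1', '2'],
    '0_2':   ['OFFICE', '3', '33', '36'],
    '0_3':   ['nontype1', '1', '1', '2'],
    '0_4':   ['Customer', '2', '0', '1'],
}
df = pd.DataFrame(data)

quarter = df.columns[1]        # 'q1.22' — the date comes from the header row
type_row = df.iloc[0]          # type1 / OFFICE / nontype1 / Customer
body = df.iloc[1:-1]           # data rows only: drop the label row and TOTALS

records = []
for _, row in body.iterrows():
    for col in df.columns[1:]:
        label = type_row[col]
        if label == 'OFFICE':              # assumption: OFFICE is not a TYPE
            continue
        records.extend(
            {'ID': row['0'], 'date': quarter, 'TYPE': label}
            for _ in range(int(row[col]))  # one output row per counted unit
        )

out = pd.DataFrame(records)
print(out)
```

Because the loop reads the label row and the column list from the frame, added rows or type columns are picked up without changing the code.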
<python><pandas><loops>
2024-04-22 03:34:24
0
4,428
Lynn
78,363,438
5,790,653
.replace() method not replacing within for loop but works manually
<p>This is my code:</p> <pre class="lang-py prettyprint-override"><code>numbers = {'1' : '۱', '2' : '۲', '3' : '۳', '4' : '۴', '5' : '۵', '6' : '۶', '7' : '۷', '8' : '۸', '9' : '۹', '0' : '۰', ',': '٬', '.': '.'} pricing = ['12,412,424,214', '124,124,214', '243,363', '3,363,463', '6,789,543', '124,689'] </code></pre> <p>I'm going to replace each character with its equivalent in the <code>numbers</code>:</p> <pre><code>for n in numbers: for price in pricing: new = price.replace(n, numbers[n]) </code></pre> <p>But when I run, it doesn't replace any characters, but if I run like this, it replaces:</p> <pre><code>new = price.replace('8', '۸') </code></pre> <p>What's my issue?</p> <p><strong>Edit1</strong></p> <pre><code>&gt;&gt;&gt; numbers = {'1' : '۱', '2' : '۲', '3' : '۳', '4' : '۴', '5' : '۵', '6' : '۶', '7' : '۷', '8' : '۸', '9' : '۹', '0' : '۰', ',': '٬', '.': '.'} &gt;&gt;&gt; pricing = ['12,412,424,214', '124,124,214', '243,363', '3,363,463', '6,789,543', '124,689'] &gt;&gt;&gt; new_list = [] &gt;&gt;&gt; for n in numbers: ... for price in pricing: ... new = price.replace(n, numbers[n]) ... new_list.append(new) ... 
&gt;&gt;&gt; new_list ['۱2,4۱2,424,2۱4', '۱24,۱24,2۱4', '243,363', '3,363,463', '6,789,543', '۱24,689', '1۲,41۲,4۲4,۲14', '1۲4,1۲4,۲14', '۲43,363', '3,363,463', '6,789,543', '1۲4,689', '12,412,424,214', '124,124,214', '24۳,۳6۳', '۳,۳6۳,46۳', '6,789,54۳', '124,689', '12,۴12,۴2۴,21۴', '12۴,12۴,21۴', '2۴3,363', '3,363,۴63', '6,789,5۴3', '12۴,689', '12,412,424,214', '124,124,214', '243,363', '3,363,463', '6,789,۵43', '124,689', '12,412,424,214', '124,124,214', '243,3۶3', '3,3۶3,4۶3', '۶,789,543', '124,۶89', '12,412,424,214', '124,124,214', '243,363', '3,363,463', '6,۷89,543', '124,689', '12,412,424,214', '124,124,214', '243,363', '3,363,463', '6,7۸9,543', '124,6۸9', '12,412,424,214', '124,124,214', '243,363', '3,363,463', '6,78۹,543', '124,68۹', '12,412,424,214', '124,124,214', '243,363', '3,363,463', '6,789,543', '124,689', '12٬412٬424٬214', '124٬124٬214', '243٬363', '3٬363٬463', '6٬789٬543', '124٬689', '12,412,424,214', '124,124,214', '243,363', '3,363,463', '6,789,543', '124,689'] </code></pre>
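As a side note on the overall task (a sketch): for per-character substitutions like this, `str.translate` with a table built by `str.maketrans` applies *all* mappings in a single pass per string, which also sidesteps the overwrite problem in the loop above:

```python
numbers = {'1': '۱', '2': '۲', '3': '۳', '4': '۴', '5': '۵',
           '6': '۶', '7': '۷', '8': '۸', '9': '۹', '0': '۰',
           ',': '٬', '.': '.'}
pricing = ['12,412,424,214', '124,124,214', '243,363',
           '3,363,463', '6,789,543', '124,689']

table = str.maketrans(numbers)             # one combined character mapping
converted = [p.translate(table) for p in pricing]
print(converted[2])                        # ۲۴۳٬۳۶۳
```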
<python>
2024-04-21 23:26:20
1
4,175
Saeed
78,363,340
3,843,029
Is it possible to define a subclass in python that creates a new atomic attribute?
<p>I'll call an &quot;atomic&quot; attribute of a python class to be one from which all other user defined attributes derive. For example, the attribute <code>a</code> in the class below is atomic to <code>A</code> because all user defined attributes (decorated with <code>@property</code>) derive from it.</p> <pre class="lang-py prettyprint-override"><code>class A: def __init__(self, a): self.a = a @property def b(self): return self.a + 1 @property def c(self): return self.a - 1 </code></pre> <p>This is useful to me because I want <code>a</code> to be the only mutable attribute, and if I change <code>a</code> all other attributes will update.</p> <p>Now suppose I would like to define a new class <code>Z</code> that inherits from <code>A</code>, but has a new atomic unit <code>g</code>. That is, <code>Z</code> will only be initialized some new argument <code>g</code> from which the original atomic attribute <code>a</code> will derive in some way. For example, my first attempt would be:</p> <pre class="lang-py prettyprint-override"><code>class Z(A): def __init__(self, g): self.g = g super().__init__(a=g**2) @property def a(self): return self.g**2 </code></pre> <p>Here I declared that <code>a = g**2</code> and would like to establish <code>g</code> as my new atomic attribute for instances of <code>Z</code>. I attempted to do that by over-writing <code>self.a</code> and forcing it to be <code>self.g**2</code>. However, this throws an error when creating an instance of <code>Z</code>:</p> <pre><code> self.a = a AttributeError: can't set attribute </code></pre> <p>Presumably because the <code>@property</code> attribute <code>a</code> has already been defined for <code>Z</code> and can't be set again. 
My solution which I'm not completely happy with is the following:</p> <pre class="lang-py prettyprint-override"><code>class A: def __init__(self, a): self._a = a @property def a(self): return self._a @property def b(self): return self.a + 1 @property def c(self): return self.a - 1 class Z(A): def __init__(self, g): self.g = g super().__init__(a=g**2) @property def a(self): return self.g**2 </code></pre> <p>Here I made my original class <code>A</code> atomic to <code>_a</code> and then just defined <code>a</code> to be <code>_a</code>. Then in <code>Z</code> I over-write the definition of <code>a</code>. This works as expected, i.e. if I instantiate an instance of <code>Z</code> and change <code>g</code>, all downstream attributes (<code>b</code> and <code>c</code>) update appropriately.</p> <p>However, I'm not happy with this because in every instance of <code>Z</code> there is this lingering <code>_a</code> attribute which never gets updated if I change <code>g</code>. For example:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; MyZ = Z(4) &gt;&gt;&gt; MyZ.b 17 # 4**2 +1 &gt;&gt;&gt; MyZ.g = 5 &gt;&gt;&gt; MyZ.a, MyZ.b (25,26) # Updated, good &gt;&gt;&gt; MyZ._a 16 # outdated value... (4**2) </code></pre> <p>Is there a better way to do this? Or should I not be trying to do this?</p>
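One variant worth considering (a sketch, not the only design): give `A`'s `a` a property *setter*, and simply don't call `super().__init__` from `Z` — then no stale `_a` is ever stored on `Z` instances, while `A` keeps a mutable atomic attribute.

```python
class A:
    def __init__(self, a):
        self.a = a              # routed through the property setter below

    @property
    def a(self):
        return self._a

    @a.setter
    def a(self, value):
        self._a = value

    @property
    def b(self):
        return self.a + 1

    @property
    def c(self):
        return self.a - 1


class Z(A):
    def __init__(self, g):
        self.g = g              # note: no super().__init__, so no _a at all

    @property
    def a(self):                # read-only override: derived from g
        return self.g ** 2
```

A consequence of the override is that `z.a = ...` now raises `AttributeError` on `Z` instances (no setter), which matches the intent that `g` is the only mutable atomic attribute there.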
<python><oop><subclassing>
2024-04-21 22:32:56
0
570
gdavtor
78,363,045
8,713,442
Return null when multiple values has same mode
<p>Sharing some sample code. For column <code>b</code>, as you can see, both values {1, 2} have the same frequency {2}, so mode returns both values, but I want null instead. That logically means it is unable to find one unique value that is most occurring.</p> <pre><code>import polars as pl if __name__ == '__main__': df = pl.DataFrame( { &quot;a&quot;: [1, 1, 2, 3], &quot;b&quot;: [1, 1, 2, 2],}) # print(df) print(df.select(pl.col(&quot;b&quot;).mode())) </code></pre>
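For reference, the intended tie-breaking logic can be stated in plain Python (a sketch of the semantics, not the polars API): compare the top two frequencies and return null when they are equal.

```python
from collections import Counter

def unique_mode(values):
    """Return the single most frequent value, or None when the top
    frequency is shared by more than one value (no unique mode)."""
    counts = Counter(values).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]

print(unique_mode([1, 1, 2, 3]))   # 1
print(unique_mode([1, 1, 2, 2]))   # None
```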
<python><python-polars>
2024-04-21 20:14:07
1
464
pbh
78,362,710
1,082,349
Manually convert Stata running month format to datetime
<p>Stata has several date formats: running day, running datetime, number of months since epoch, and so on.</p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_stata.html" rel="nofollow noreferrer">Pandas can automatically convert these to datetime when importing a .dta file</a>. Stata 18 now provides support for directly loading data from a Stata session into pandas using the command <a href="https://www.stata.com/python/pystata18/stata.html#pystata.stata.pdataframe_from_data" rel="nofollow noreferrer">pystata.stata.pdataframe_from_data</a>. This command allows keeping value labels when importing, but it apparently does not provide any support for date variables.</p> <p>As a result, I have a pandas.dataframe where the month is the running month since epoch (%tm in Stata format). Is there a convenient way to convert this (and any other Stata-originated date format) to datetime?</p>
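Stata's monthly format (%tm) counts months since 1960m1, so the conversion is plain arithmetic. A hedged sketch in stdlib Python (the function name is mine):

```python
import datetime

STATA_EPOCH_YEAR = 1960  # Stata's monthly clock starts at 1960m1


def stata_month_to_date(m: int) -> datetime.date:
    """Convert a Stata %tm value (months since 1960m1) to the first of that month."""
    year, month = divmod(m, 12)
    return datetime.date(STATA_EPOCH_YEAR + year, month + 1, 1)


stata_month_to_date(0)    # 1960-01-01
stata_month_to_date(771)  # 771 months later: 2024-04-01
```

In pandas the same idea can likely be written as `pd.Period('1960-01', freq='M') + m`, followed by `.to_timestamp()`; the other Stata formats differ only in the unit (days, milliseconds) counted from the same 1960 epoch.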
<python><pandas><stata>
2024-04-21 18:24:31
1
16,698
FooBar
78,362,395
2,179,994
Replacing multiple overlapping substrings in a string
<p>I have a string in an HTML document.</p> <pre><code>astring = &quot;R=500 mm, φ=180°, Z=599 mm von TL Boden oben. Unterliegende Schale: Boden oben.&quot; </code></pre> <p>I want to replace substrings of <code>astring</code> to make links. For this I have a list of dicts like</p> <pre><code>lst = [{'id': 'coordinate_systems', 'linktext': 'TL Boden oben'}, {'id': 'PartID_1', 'linktext': 'Boden oben'}] </code></pre> <p>with the <code>linktext</code> field being the values to be found in <code>astring</code>. I want to add the href tags by simple substitution, something along the lines of</p> <pre><code>astring.replace(linktext, '&lt;a href=&quot;#{}&quot;&gt;{}&lt;/a&gt;'.format(_id, linktext)) </code></pre> <p>by iterating over <code>lst</code> and doing the substitutions in each line. The naive implementation is a double loop, that is, each line of the HTML document is checked against each element of <code>lst</code> and substitutions are made.</p> <p>This yields incorrect results, e.g. the substring 'TL Boden oben' is replaced first</p> <pre><code>'TL Boden oben' -&gt; '&lt;a href=&quot;#coordinate_systems&quot;&gt;TL Boden oben&lt;/a&gt;' </code></pre> <p>which is OK, but then the 'Boden oben' inside is replaced again, yielding</p> <pre><code>'Boden oben' -&gt; '&lt;a href=&quot;#coordinate_systems&quot;&gt;TL &lt;a href=&quot;#PartID_1&quot;&gt;Boden oben&lt;/a&gt;&lt;/a&gt;' </code></pre> <p>which is not OK anymore; I need 'TL Boden oben' and 'Boden oben' dealt with separately:</p> <pre><code>&quot;R=500 mm, φ=180°, Z=599 mm von &lt;a href=&quot;#coordinate_systems&quot;&gt;TL Boden oben&lt;/a&gt;. Unterliegende Schale: &lt;a href=&quot;#PartID_1&quot;&gt;Boden oben&lt;/a&gt;.&quot; </code></pre> <p>The <code>linktext</code> fields are user-defined, so arbitrary. These may be distinct or overlapping, that is, I cannot simply do the substitutions the naive way, as shown before. 
I have no a priori knowledge <em>how</em> the <code>linktext</code> fields overlap.</p> <p>I found and implemented a solution that checks if <code>astring</code> contains more than one <code>linktext</code>. If so, it simply splits <code>astring</code> into sentences and does the substitution on a sentence level; this is OK as it is very unlikely that overlapping linktexts will be in the same sentence (see example above).</p> <p>But it is not the robust solution I'm after.</p> <p>How could this problem be attacked?</p> <p>EDIT:</p> <p>Thanks to the <a href="https://stackoverflow.com/a/78363483/2179994">answer</a> by @Nick, the outer loop was eliminated and replacements are done by regex substitutions. The inner loop is kept as not all occurrences of the links are to be substituted, e.g. an id tag should not link to itself.</p> <p>Lines where no substitution should occur contain the substring 'nolink'. My strategy is to loop over the lines and do the substitution if 'nolink' is not a substring of the line, otherwise replace 'nolink' with ''.</p> <p>Part of the original content (it is generated using jinja2), before substitution:</p> <pre><code>'&lt;table class=&quot;table w-auto table-hover table-sm&quot;&gt;\n' '&lt;thead&gt;\n' '&lt;tr&gt;\n' '&lt;th colspan=&quot;2&quot; scope=&quot;col&quot; id=&quot;PartID_1&quot;&gt;Boden obennolink&lt;/th&gt;\n' '&lt;/tr&gt;\n' '&lt;/thead&gt;\n' '&lt;tbody&gt;\n' '&lt;tr&gt;\n' '&lt;td&gt;Halbkugelbodennolink&lt;/td&gt;\n' '&lt;td&gt;Position Z = 6000.0 mm. Bordhöhe 50.0 mm. Wanddicke s = 15 mm MW. 
Werkstoff 1.4541, Blech.&lt;/td&gt;\n' '&lt;/tr&gt;\n' '&lt;/tbody&gt;\n' '&lt;/table&gt;\n' '&lt;table class=&quot;table w-auto table-hover table-sm&quot;&gt;\n' '&lt;thead&gt;\n' '&lt;tr&gt;\n' '&lt;th colspan=&quot;2&quot; scope=&quot;col&quot; id=&quot;PartID_14&quot;&gt;N02nolink&lt;/th&gt;\n' '&lt;/tr&gt;\n' '&lt;/thead&gt;\n' '&lt;tbody&gt;\n' '&lt;tr&gt;\n' '&lt;td&gt;Positionnolink&lt;/td&gt;\n' '&lt;td&gt;R=500 mm, φ=180°, Z=599 mm von TL Boden oben. Unterliegende Schale: Boden oben. Ausrichtung: normal.&lt;/td&gt;\n' '&lt;/tr&gt;\n' '&lt;tr&gt;\n' '&lt;td&gt;Zargenolink&lt;/td&gt;\n' '&lt;td&gt;168.3 mm x 5.60 mm NW (5.20 mm MW). Werkstoff 1.4571, Blech.&lt;/td&gt;\n' '&lt;/tr&gt;\n' '&lt;/tbody&gt;\n' '&lt;/table&gt;\n' </code></pre>
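A common way to make overlapping replacements safe is a single pass with one compiled alternation, longest `linktext` first, so `'TL Boden oben'` is consumed before its substring `'Boden oben'` can match. A sketch using only the data from the question:

```python
import re

lst = [{'id': 'coordinate_systems', 'linktext': 'TL Boden oben'},
       {'id': 'PartID_1', 'linktext': 'Boden oben'}]

# Map each linktext to its id, then build ONE alternation, longest text first,
# so 'TL Boden oben' is tried before its substring 'Boden oben'.
by_text = {d['linktext']: d['id'] for d in lst}
pattern = re.compile('|'.join(
    re.escape(text) for text in sorted(by_text, key=len, reverse=True)))


def linkify(line):
    # A single sub() pass never revisits text it has already replaced,
    # which is exactly what the naive per-linktext loop gets wrong.
    return pattern.sub(
        lambda m: '<a href="#{}">{}</a>'.format(by_text[m.group(0)], m.group(0)),
        line)


linkify("R=500 mm, φ=180°, Z=599 mm von TL Boden oben. Unterliegende Schale: Boden oben.")
```

The per-line 'nolink' check from the EDIT can stay as-is around the `linkify` call; only the inner per-linktext loop is replaced by the single regex pass.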
<python><string><replace>
2024-04-21 16:48:10
1
2,074
jake77
78,362,390
51,816
How to extract the audio stream from an mp4 video file?
<p>Basically I am using this Python code, but when I put the extracted audio stream below the video stream with the same audio, they don't line up.</p> <p>One interesting detail: when I used the video editor itself to save the project's entire audio while I had only the video stream, the MP3 format also had a delay/lag, but when I used WAV format it matched the video's audio track 100%. So I thought changing the code to export WAV format would do the same, but no.</p> <pre><code>from pydub.utils import mediainfo from moviepy.editor import VideoFileClip def extractAudioFromVideoFile(inputFile, outputFile, startTimeInSeconds, endTimeInSeconds): clip = VideoFileClip(inputFile) # Ensure the end time does not exceed the video's duration if endTimeInSeconds &gt; clip.duration: endTimeInSeconds = clip.duration seg = clip.subclip(startTimeInSeconds, endTimeInSeconds).audio bitrate = mediainfo(inputFile)['bit_rate'] seg.write_audiofile(outputFile, codec=&quot;libmp3lame&quot;, bitrate=bitrate) </code></pre> <p><a href="https://i.sstatic.net/83WyO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/83WyO.png" alt="enter image description here" /></a></p>
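One hedged explanation for the offset: MP3 encoders such as LAME prepend priming samples, a small encoder delay that players don't always compensate for, while WAV and bit-for-bit stream copies carry no such delay. If re-encoding isn't required, ffmpeg can copy the original audio stream untouched; the sketch below only builds the command so it can be inspected (file names are placeholders):

```python
import subprocess


def extract_audio_cmd(input_file, output_file):
    """Build an ffmpeg command that extracts the audio stream bit-for-bit.

    '-vn' drops the video; '-c:a copy' avoids re-encoding entirely, so no
    encoder delay can be introduced the way an MP3 re-encode can.
    """
    return ["ffmpeg", "-i", input_file, "-vn", "-c:a", "copy", output_file]


cmd = extract_audio_cmd("in.mp4", "out.m4a")  # container must suit the codec (AAC -> .m4a)
# subprocess.run(cmd, check=True)  # uncomment when ffmpeg is on the PATH
```

For a WAV instead of a stream copy, `-c:a pcm_s16le` with a `.wav` output is the usual substitution, which matches the observation that the editor's WAV export lined up.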
<python><video><ffmpeg><wav><moviepy>
2024-04-21 16:46:06
0
333,709
Joan Venge
78,362,161
6,494,707
Adding the difference value of stripplot groups in seaborn
<p>I have used <a href="https://seaborn.pydata.org/archive/0.11/generated/seaborn.stripplot.html" rel="nofollow noreferrer">this example</a> of seaborn's <code>stripplot</code>:</p> <pre><code>ax = sns.stripplot(x=&quot;day&quot;, y=&quot;total_bill&quot;, hue=&quot;smoker&quot;, data=tips, palette=&quot;Set2&quot;, dodge=True) </code></pre> <p>My data is shown as follows:</p> <p><a href="https://i.sstatic.net/Yq9ui.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yq9ui.png" alt="enter image description here" /></a></p> <p>I have calculated the minimum and maximum among all three groups (12, 13, and 14) for each color separately, because I want to show the differences between each color among the three groups. I want to show their difference value on the strip plot in the corresponding color. Is there any way to do this, or to add a vertical line beside the stripplot that shows the difference between the minimum and maximum value of each color?</p> <p>In the picture, I marked the minimum and maximum manually with arrows.</p> <p>For example, the min and max values are as follows:</p> <pre><code> {'min_320': 29.977999999084474, 'max_320': 35.66800000610352} {'min_384': 33.4459999987793, 'max_384': 36.849999999389645} {'min_448': 34.49400000915527, 'max_448': 37.29800001159668} </code></pre>
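The difference values themselves are just `max - min` per group; computed once, they can then be drawn next to the plot (for instance with matplotlib's `ax.vlines` for the range and `ax.annotate` for the number, in each group's color). A sketch of the computation using the values from the question:

```python
# min/max per group, taken from the values in the question
groups = {
    '320': {'min': 29.977999999084474, 'max': 35.66800000610352},
    '384': {'min': 33.4459999987793, 'max': 36.849999999389645},
    '448': {'min': 34.49400000915527, 'max': 37.29800001159668},
}

# the value to display is simply the per-group range
diffs = {name: g['max'] - g['min'] for name, g in groups.items()}
# diffs['320'] is about 5.69, diffs['384'] about 3.40, diffs['448'] about 2.80
```

On the axes, something along the lines of `ax.vlines(x, g['min'], g['max'], color=...)` followed by `ax.annotate(f"{diffs[name]:.2f}", ...)` at the midpoint would render each range beside its group (exact x offsets depend on the dodge positions seaborn chose).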
<python><matplotlib><plot><seaborn>
2024-04-21 15:40:44
1
2,236
S.EB
78,362,147
1,070,092
pyqtgraph plot with x-axis in date units
<p>I try to plot a chart with dates in the x-axis. But the units displayed are in hours and minutes.</p> <pre><code>import sys import datetime import pyqtgraph as pg from PySide6.QtWidgets import QApplication, QMainWindow class MainWindow(QMainWindow): def __init__(self): super().__init__() self.plotWidget = pg.PlotWidget() self.setCentralWidget(self.plotWidget) self.scatter = pg.ScatterPlotItem() self.plotWidget.addItem(self.scatter) self.plotWidget.setBackground('w') dateAxis = pg.DateAxisItem() self.plotWidget.setAxisItems({'bottom': dateAxis}) self.add_points() def add_points(self): x_values = [datetime.date(2024, 1, 1), datetime.date( 2024, 3, 1), datetime.date(2024, 4, 21)] points = [ {'pos': (x_values[0].toordinal(), 1)}, {'pos': (x_values[1].toordinal(), 2)}, {'pos': (x_values[2].toordinal(), 3)}] self.scatter.addPoints(points) if __name__ == '__main__': app = QApplication(sys.argv) window = MainWindow() window.show() sys.exit(app.exec()) </code></pre> <p><a href="https://i.sstatic.net/6yszv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6yszv.png" alt="enter image description here" /></a></p> <p>How can I force the x-Axis unit with dates? eg 01.01.2024</p> <p>Thanks for hints</p> <p>Vik</p>
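A hedged reading of the symptom: `pg.DateAxisItem` labels its axis assuming the x values are Unix timestamps (seconds since 1970-01-01), whereas `date.toordinal()` yields days since year 1, so the ~111-day span between the points is read as ~111 seconds, hence the hour/minute ticks. A sketch of the conversion (the helper name is mine):

```python
import datetime


def to_timestamp(d: datetime.date) -> float:
    """Seconds since the Unix epoch, the unit pg.DateAxisItem labels with."""
    return datetime.datetime(d.year, d.month, d.day,
                             tzinfo=datetime.timezone.utc).timestamp()


xs = [to_timestamp(datetime.date(2024, 1, 1)),
      to_timestamp(datetime.date(2024, 3, 1)),
      to_timestamp(datetime.date(2024, 4, 21))]
# use xs as the x coordinates passed to scatter.addPoints(...)
```

With timestamps in this unit, `DateAxisItem` should pick a date-like tick format on its own once the visible range spans days or months.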
<python><pyqtgraph>
2024-04-21 15:35:19
1
345
Vik
78,362,091
7,496,406
How to start jupyter notebook in vscode using jupyter extension with workspace specified
<p>I am currently using the following <code>settings.json</code> file in my <code>.vscode</code> folder</p> <pre><code>{ &quot;terminal.integrated.env.windows&quot;: { &quot;PYTHONPATH&quot;: &quot;${workspaceFolder}&quot; } } </code></pre> <p>This allows me to run <code>jupyter notebook</code> from the command line with a parent folder for Python on the path, so that I can run packages in that folder that have not been installed through pip (they are still under development, so it does not make sense to pip install them).</p> <p>How do I update the <code>path</code> for the <code>notebook</code> when I use the Jupyter extension to create or open a notebook?</p> <p>Currently, I run <code>jupyter notebook</code> and then use the command palette to create a new notebook that I then connect to the existing Jupyter server. I could also call <code>os.chdir</code> at the beginning of my code, but I would rather not. I am looking for a simpler workflow than what I am currently doing.</p>
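One possible simplification (hedged: the setting names below are worth verifying against the Python and Jupyter extensions' documentation): the Jupyter extension reads a `.env` file when starting a kernel (the `python.envFile` setting, which defaults to `${workspaceFolder}/.env`), so `PYTHONPATH` can be supplied there rather than only in the integrated terminal, and `jupyter.notebookFileRoot` controls the working directory notebooks run in:

```json
// .vscode/settings.json  (setting names are assumptions to verify)
{
    "terminal.integrated.env.windows": {
        "PYTHONPATH": "${workspaceFolder}"
    },
    "jupyter.notebookFileRoot": "${workspaceFolder}",
    "python.envFile": "${workspaceFolder}/.env"
}
```

paired with a one-line `.env` file in the workspace root such as `PYTHONPATH=.` (relative `PYTHONPATH` entries resolve against the notebook's working directory). With that, notebooks created through the command palette should import the in-development packages without a separately launched `jupyter notebook` server.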
<python><visual-studio-code><jupyter-notebook>
2024-04-21 15:17:28
1
1,371
Jrakru56
78,361,743
6,638,903
Getting tabular data into a dataframe from a PHP webpage
<p>This webpage (<a href="https://www.nseindia.com/market-data/top-gainers-losers" rel="nofollow noreferrer">https://www.nseindia.com/market-data/top-gainers-losers</a>) has 2 tables ('gainers' and 'losers').</p> <p>I want code to read the contents of the webpage and load these 2 tables into 2 separate dataframes. How do I achieve this?</p>
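For a static page, pandas can do both tables in one call, since `pd.read_html` returns one DataFrame per `<table>` it finds. A caveat: nseindia.com renders these tables with JavaScript and rejects plain requests without browser-like headers, so in practice the site's JSON endpoints or a browser-automation tool may be needed; the sketch below shows the two-tables-to-two-dataframes mechanics on a stand-in HTML snippet:

```python
from io import StringIO

import pandas as pd

# Stand-in for the fetched page source: one <table> per table on the page
html = """
<table><tr><th>Symbol</th><th>Change</th></tr>
<tr><td>AAA</td><td>5.2</td></tr></table>
<table><tr><th>Symbol</th><th>Change</th></tr>
<tr><td>BBB</td><td>-4.1</td></tr></table>
"""

# read_html returns a list with one DataFrame per <table> found
gainers, losers = pd.read_html(StringIO(html))
```

Replacing the stand-in string with the real page source (fetched with browser-like headers) keeps the rest unchanged, provided the tables are present in the delivered HTML rather than built client-side.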
<python><python-3.x><dataframe>
2024-04-21 13:33:36
1
825
Tanmoy
78,361,630
3,477,339
Stable-Baselines3 TD3, reset() method "too many values to unpack" error
<p>The environment is <code>python 3.10</code> with <code>stable-baselines3 2.3.0</code>, and I'm trying the TD3 algorithm.</p> <p>I keep getting the same error whatever I do.</p> <p>As far as I know, the reset method should return something matching the defined observation space.</p> <p>The environment I made has a reset method like below</p> <pre><code>def reset(self, seed=0): self.current_index = 0 self.current_cash = self.start_cash self.done = False self.current_time = self.start_time # compute the initial observation state initial_state = self.get_state() # dict return initial_state </code></pre> <p>It's nothing complicated, and defining the env and model also seems fine:</p> <pre><code>from stable_baselines3.common.torch_layers import BaseFeaturesExtractor from stable_baselines3 import TD3 class CustomFeatureExtractor(BaseFeaturesExtractor): def __init__(self, observation_space, features_dim=5): super(CustomFeatureExtractor, self).__init__(observation_space, features_dim) self.model_alpha = ModelAlpha() def forward(self, observations): prices = observations['prices'] position = observations['position'] quantity = observations['quantity'] pnr = observations['pnr'] return self.model_alpha(prices, torch.cat([position, quantity, pnr])) # set up the environment and the model env = MarketEnvironment(candles, '2020-07-01 00:00:00', '2023-12-31 23:59:00') # your environment setup policy_kwargs = dict( features_extractor_class=CustomFeatureExtractor, features_extractor_kwargs=dict(features_dim=5) ) model = TD3(&quot;MultiInputPolicy&quot;, env, policy_kwargs=policy_kwargs, batch_size=128, verbose=1) </code></pre> <p>The Jupyter prompt says:</p> <p>Using cpu device Wrapping the env with a <code>Monitor</code> wrapper Wrapping the env in a DummyVecEnv.</p> <p>and it runs fine until this code:</p> <pre><code>model.learn(total_timesteps=1, log_interval=10, progress_bar=True) </code></pre> <p>It keeps saying the following again and again, no matter what I've done:</p> <pre><code>File 
~\.conda\envs\mlbase-py3.10\lib\site-packages\stable_baselines3\common\off_policy_algorithm.py:297, in OffPolicyAlgorithm._setup_learn(self, total_timesteps, callback, reset_num_timesteps, tb_log_name, progress_bar) 290 if ( 291 self.action_noise is not None 292 and self.env.num_envs &gt; 1 293 and not isinstance(self.action_noise, VectorizedActionNoise) 294 ): 295 self.action_noise = VectorizedActionNoise(self.action_noise, self.env.num_envs) --&gt; 297 return super()._setup_learn( 298 total_timesteps, 299 callback, 300 reset_num_timesteps, 301 tb_log_name, 302 progress_bar, 303 ) File ~\.conda\envs\mlbase-py3.10\lib\site-packages\stable_baselines3\common\base_class.py:425, in BaseAlgorithm._setup_learn(self, total_timesteps, callback, reset_num_timesteps, tb_log_name, progress_bar) 423 if reset_num_timesteps or self._last_obs is None: 424 assert self.env is not None --&gt; 425 self._last_obs = self.env.reset() # type: ignore[assignment] 426 self._last_episode_starts = np.ones((self.env.num_envs,), dtype=bool) 427 # Retrieve unnormalized observation for saving into the buffer File ~\.conda\envs\mlbase-py3.10\lib\site-packages\stable_baselines3\common\vec_env\dummy_vec_env.py:77, in DummyVecEnv.reset(self) 75 for env_idx in range(self.num_envs): 76 maybe_options = {&quot;options&quot;: self._options[env_idx]} if self._options[env_idx] else {} ---&gt; 77 obs, self.reset_infos[env_idx] = self.envs[env_idx].reset(seed=self._seeds[env_idx], **maybe_options) 78 self._save_obs(env_idx, obs) 79 # Seeds and options are only used once ValueError: too many values to unpack (expected 2) </code></pre> <p>What I know is that the reset() method of this error is in an abstract class named VecEnv</p> <p>How to resolve this?</p>
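Stable-Baselines3 2.x follows the Gymnasium API, in which `reset()` accepts `seed` and `options` keyword arguments and returns an `(observation, info)` 2-tuple; a `reset()` that returns only the observation produces exactly this `too many values to unpack (expected 2)` inside `DummyVecEnv.reset`. A minimal sketch of the fix (the class is a bare stand-in for the real environment):

```python
class MarketEnvironment:  # minimal stand-in for the real environment
    def get_state(self):
        return {"prices": [1.0], "position": 0}

    def reset(self, *, seed=None, options=None):
        # Gymnasium API: reset() returns (observation, info), not just obs
        self.current_index = 0
        info = {}  # auxiliary diagnostics; an empty dict is fine
        return self.get_state(), info


obs, info = MarketEnvironment().reset(seed=0)  # now unpacks into two values
```

Under the same API, `step()` analogously has to return the 5-tuple `(obs, reward, terminated, truncated, info)`, so that method likely needs the matching change as well.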
<python><machine-learning><stablebaseline3>
2024-04-21 12:58:27
1
497
GatesPlan
78,361,556
3,244,776
Converting a PEM file to an encrypted private key file in PBES1 format in Python
<p>I am writing a program where I am trying to convert the certs, which is of pem format into <code>PBES1</code> private key (the one with <code>-----BEGIN ENCRYPTED PRIVATE KEY-----</code> and <code>-----END ENCRYPTED PRIVATE KEY-----</code> header and footer.)</p> <p>Below is the code I wrote. I am specific about the encryption algorithm used here for compatibility reasons. I was initially trying to use <code>utf-8</code> encoding, but it gave me</p> <pre><code>UnicodeDecodeError: 'utf-8' codec can't decode »(,¾æ0x82 in position 1: invalid start byte </code></pre> <p>hence I changed it to <code>latin-1</code>, but please correct me if I am wrong here.</p> <pre><code>from cryptography.hazmat.primitives import hashes, serialization from cryptography.hazmat.primitives.serialization import PrivateFormat, pkcs12 from cryptography import x509 from cryptography.hazmat.backends import default_backend def get_CN(crt_subject: x509.Name) -&gt; str: return next(attr.value for attr in crt_subject if attr.oid == x509.NameOID.COMMON_NAME) def pkcs12_pbes1(password: str, pkey, clcert, cacerts: list[x509.Certificate]) -&gt; bytes: encryption_algorithm = ( PrivateFormat.PKCS12.encryption_builder() .kdf_rounds(30) .key_cert_algorithm(pkcs12.PBES.PBESv1SHA1And3KeyTripleDESCBC) .hmac_hash(hashes.SHA256()) .build(password.encode(&quot;latin-1&quot;)) ) return pkcs12.serialize_key_and_certificates( name=get_CN(clcert.subject).encode(&quot;latin-1&quot;), key=pkey, cert=clcert, cas=cacerts, encryption_algorithm=encryption_algorithm, ) def load_ec_private_key_from_pem(file_path): try: with open(file_path, 'rb') as pem_file: pem_data = pem_file.read() private_key = serialization.load_pem_private_key( pem_data, password=None, # Assuming the private key is not password-protected backend=default_backend() ) return private_key except Exception as e: print(&quot;Error:&quot;, e) return None def read_pem_file(file_path): try: with open(file_path, 'rb') as pem_file: pem_data = pem_file.read() 
p_cert = x509.load_pem_x509_certificate(pem_data) return p_cert except FileNotFoundError: print(f&quot;Error: File '{file_path}' not found.&quot;) return None p_cert = read_pem_file('test_cert.pem') p_key = load_ec_private_key_from_pem('test_key.pem') p_ca = read_pem_file('test_ca.pem') pbes1_pem = pkcs12_pbes1(&quot;somepasswd&quot;, p_key, p_cert, [p_ca]) print(pbes1_pem.decode('latin-1')) </code></pre> <p>Upon testing, I am able to read my test cert, key and ca from the file and the <code>pbes1_pem</code> seems to be a binary array in encrypted form. However, I fail to write it to another file of encrypted private key <code>pem</code> format like below</p> <pre><code>-----BEGIN ENCRYPTED PRIVATE KEY----- data -----END ENCRYPTED PRIVATE KEY----- </code></pre> <p>I was going through the <code>hazmat</code> documentation and I couldn't find a helper function that could convert the encrypted binary array into the above mentioned specified format. Is this the part where I need to base64 encode the data myself and add it within those header and footer manually, or is there a helper function that I'm missing? Any help is appreciated.</p> <p>Thanks.</p>
<python><ssl><encryption><cryptography><pem>
2024-04-21 12:31:27
0
3,185
nohup