QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,553,399 | 16,988,223 | flet python can't get the value of a dynamic dropdown | <p>I have a list of items for my dropdown created with <a href="https://flet.dev" rel="nofollow noreferrer">flet</a></p>
<pre><code>dropdown_options = ["np.int_", "np.str_", "np.float_"]
</code></pre>
<p>This is not the problem; the problem comes when I'm trying to get the value. But first let me share the code where I'm adding the items:</p>
<pre><code>def on_dialog_file_picker_result(e: ft.FilePickerResultEvent):
    global dropdown_options
    if file_picker.result != None and file_picker.result.files != None:
        columnsList = []
        columnsNames = []
</code></pre>
<pre><code>for f in file_picker.result.files:
    print(f"File: {f.name}")
    print(f"Path: {f.path}")
    # Manage csv
    dataframe = dd.read_csv(f.path, low_memory=False, dtype=str, encoding='latin-1')
    # Get columns names
    for colName in dataframe.columns:
        # save the column name in the list
        columnsNames.append(colName)
        # Add Column to the list for the row
        columnsList.append(ft.Text(value=colName, size=12, color=ft.colors.BLACK))
        # Add Dropdown datatype
        dtypes = ft.Dropdown(width=100,
                             label="dtype",
                             options=[ft.dropdown.Option(option) for option in dropdown_options],
                             on_change=lambda e, col=colName: dropdown_changed(e, col))
        columnsList.append(dtypes)
    row = Row(spacing=10, controls=list(columnsList))
    page.add(row)
    page.update()
</code></pre>
<p>Exactly this is the code:</p>
<pre><code># Add Dropdown datatype
dtypes = ft.Dropdown(width=100,
                     label="dtype",
                     options=[ft.dropdown.Option(option) for option in dropdown_options],
                     on_change=lambda e, col=colName: dropdown_changed(e, col))
columnsList.append(dtypes)
</code></pre>
<p>As we can see there, I'm adding these items dynamically inside a for loop.</p>
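<p>(A side note on that pattern: the <code>col=colName</code> default argument is what makes each lambda remember its own column. A minimal, flet-free sketch of why the default is needed:)</p>

```python
# default args bind the *current* loop value at definition time, while a plain
# closure would look the variable up later and see only its final value
callbacks_late = [lambda: col for col in ['a', 'b', 'c']]
callbacks_bound = [lambda col=col: col for col in ['a', 'b', 'c']]

print([f() for f in callbacks_late])   # ['c', 'c', 'c']
print([f() for f in callbacks_bound])  # ['a', 'b', 'c']
```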
<p>And this is the code on the on_change event for the dropdown:</p>
<pre><code>def dropdown_changed(e, column_name):
    selected_option = e.value
    selected_value = dropdown_options[selected_option]
    print(f"Dropdown for column {column_name} changed to {selected_value}")
    # Perform any additional actions based on the selected value
    page.update()
</code></pre>
<p>The dropdown items are added without problem after I pick a CSV file using the file picker; the problem is how I'm trying to get the value of the dropdown.</p>
<p>I'm getting this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\fredd\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1009, in _bootstrap_inner
self.run()
File "C:\Users\fredd\AppData\Local\Programs\Python\Python310\lib\threading.py", line 946, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\fredd\PycharmProjects\bigCsvImporter\app.py", line 67, in <lambda>
on_change=lambda e, col=colName: dropdown_changed(e, col))
File "C:\Users\fredd\PycharmProjects\bigCsvImporter\app.py", line 37, in dropdown_changed
selected_option = e.value
AttributeError: 'ControlEvent' object has no attribute 'value'
</code></pre>
<p>This error happens after trying to select an item of the dropdown:</p>
<p><a href="https://i.sstatic.net/4hcLl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4hcLl.png" alt="enter image description here" /></a></p>
<p>I would like to fix this problem. I need this value to set the datatype for every column of my dataframe, because by default all the columns are set as "str".</p>
<p>Thanks in advance.</p>
| <python><flet> | 2023-11-26 20:35:13 | 0 | 429 | FreddicMatters |
77,553,300 | 11,154,841 | Anaconda: "ModuleNotFoundError: No module named 'gower'". Module not available in conda, but only in pip. Same error even after installing with pip | <p>Consider:</p>
<pre class="lang-none prettyprint-override"><code>pip install gower
Defaulting to user installation because normal site-packages is not writeable
Looking in links: /usr/share/pip-wheels
Collecting gower
Using cached gower-0.1.2-py3-none-any.whl (5.2 kB)
Requirement already satisfied: scipy in /opt/conda/envs/anaconda-2022.05-py39/lib/python3.9/site-packages (from gower) (1.7.3)
Requirement already satisfied: numpy in /opt/conda/envs/anaconda-2022.05-py39/lib/python3.9/site-packages (from gower) (1.21.5)
Installing collected packages: gower
Successfully installed gower-0.1.2
Note: you may need to restart the kernel to use updated packages.
</code></pre>
<p>Thus, it is installed, but it can only be installed with pip, not with conda, and the module cannot be found:</p>
<pre class="lang-py prettyprint-override"><code>import gower
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[3], line 1
----> 1 import gower
ModuleNotFoundError: No module named 'gower'
</code></pre>
<p>I also tried installing it with conda then, but it is not in the conda package manager:</p>
<pre class="lang-none prettyprint-override"><code>conda install gower
Channels:
- defaults
- conda-forge
- pytorch
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- gower
Current channels:
- defaults
- https://conda.anaconda.org/conda-forge/linux-64
- https://conda.anaconda.org/conda-forge/noarch
- https://conda.anaconda.org/pytorch/linux-64
- https://conda.anaconda.org/pytorch/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
Note: you may need to restart the kernel to use updated packages.
</code></pre>
<p>I am in the same virtual environment all the time during the tests, one that is offered online on Anaconda.com with "anaconda-2022.05-py39". The error is the same when switching to "anaconda-panel-2023.05-py310".</p>
<p>What can I do to get <code>import gower</code> to run?</p>
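<p>A small diagnostic sketch (not a fix by itself): the "Defaulting to user installation" line in the pip output suggests pip may be writing to a different interpreter's site-packages than the one the notebook kernel runs. Comparing the paths below usually reveals the mismatch; running <code>python -m pip install gower</code> with the kernel's own interpreter is then the usual remedy:</p>

```python
# print where this interpreter lives and where it looks for packages;
# if pip installed gower somewhere not listed here, the import must fail
import sys
import sysconfig

print(sys.executable)                    # the interpreter running this kernel
print(sysconfig.get_paths()['purelib'])  # its site-packages directory
print([p for p in sys.path if 'site-packages' in p])
```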
| <python><python-3.x><pip><anaconda> | 2023-11-26 20:03:48 | 1 | 9,916 | questionto42 |
77,553,299 | 5,594,008 | Wagtail, change allowed extensions for WagtailImageField | <p>I'm trying to add extra extensions to <code>wagtail.images.fields.WagtailImageField</code>.</p>
<p>How can it be done? I've tried <a href="https://docs.wagtail.org/en/stable/reference/settings.html#wagtailimages-extensions" rel="nofollow noreferrer">https://docs.wagtail.org/en/stable/reference/settings.html#wagtailimages-extensions</a>, but it seems this is not the correct option.</p>
<p>P.S. Based on the comments, it seems the problem is that the Wagtail version in my application is 4.</p>
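<p>For reference, a sketch of how the linked setting would be used in Django settings — assuming <code>WAGTAILIMAGES_EXTENSIONS</code> works as documented; on older Wagtail releases (such as the version-4 install mentioned in the P.S.) the setting may simply not exist yet, which would explain it appearing to have no effect:</p>

```python
# settings.py — hypothetical values; the list replaces (not extends) the default
WAGTAILIMAGES_EXTENSIONS = ["gif", "jpg", "jpeg", "png", "webp", "svg"]
```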
| <python><django><wagtail> | 2023-11-26 20:03:12 | 0 | 2,352 | Headmaster |
77,553,257 | 1,470,127 | Select behavior different between pyspark 2.4.8 and 3.3.2 | <p>Recently, one of our jobs failed after we upgraded the cluster from version 2.4.8 to 3.3.2 of Apache Spark.</p>
<p>Below is an example to reproduce the issue.</p>
<p>Here I am creating two datasets with 3 columns each. Two columns are common between them colA and colB.</p>
<pre><code>from pyspark.sql.dataframe import DataFrame
df_current_data = [("abc.com","SA1","1"), \
("def.com","SA2","1"), \
("ee.com","SA3","1")
]
df_current_data_cols = ["colA", "colB", "2023-10-10"]
curr_df = spark.createDataFrame(data=df_current_data, schema = df_current_data_cols)
df_historical_data = [("zzz.com","SAz","1"), \
("yyy.com","SA2","y"), \
("ee.com","SA3","1")
]
df_historical_data_cols = ["colA", "colB", "2022-09-09"]
his_df = spark.createDataFrame(data=df_historical_data, schema = df_historical_data_cols)
</code></pre>
<p>Here is a function, where I perform a full outer join between the two datasets.
Notice the "select" after the join. The goal is to get the following columns "colA", "colB", "2023-10-10", "2022-09-09". The code inside select statement will return ["colA", "colB", "colA", "colB", "2023-10-10", "2022-09-09"]. Till this point, both spark versions are behaving the same.</p>
<pre><code>def append_to_existing_output(df_current_data: DataFrame, df_historical_data: DataFrame):
    joined_df_with_duplicate_columns = (df_current_data.alias("left")
                                        .join(df_historical_data.alias("right"), ["colA", "colB"],
                                              how="full_outer")
                                        .select(["colA", "colB", "left.*"] +
                                                df_historical_data.drop(*df_current_data.columns).columns))
    return joined_df_with_duplicate_columns
</code></pre>
<p>Here, I call the function and pass the dataframes</p>
<pre><code>append_to_existing_output(curr_df, his_df).show()
</code></pre>
<p>Here is the interesting part:</p>
<p>The below picture is from Spark 2.4.8 showing the output and plan. Notice that it auto removed the duplicate colA and colB. It also did it correctly since it did not remove the coalesce columns after the join but rather the normal colA.</p>
<p><a href="https://i.sstatic.net/ltNVe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ltNVe.png" alt="enter image description here" /></a></p>
<p>The below picture is from Spark 3.3.2. I run the same code but the output contains the extra columns and the plan also shows that.</p>
<p><a href="https://i.sstatic.net/eCG2V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eCG2V.png" alt="enter image description here" /></a></p>
<p>I understand that doing left.* was not the best idea.</p>
<p>The questions I have: in which Spark PR was this logic changed, and why? Was the previous version of Spark running a distinct on the column list (in specific cases), and now not? Which Spark version shows the correct behavior?</p>
| <python><apache-spark><pyspark> | 2023-11-26 19:53:57 | 0 | 4,059 | Behroz Sikander |
77,553,231 | 10,082,534 | Plotting a cube on top of a plane, given all the vertices of the cube | <p>I am trying to plot a cube on top of a plane using matplotlib.</p>
<p>I am doing a project on UAV (unmanned aerial vehicle) landing, and there are tests I have to visualize. I have a landing plane already plotted, as shown below.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
xs = np.linspace(-10, 10, 100)
ys = np.linspace(-10, 20, 100)
X, Y = np.meshgrid(xs, ys)
Z = 0 / X
fig = plt.figure(figsize=(25,25))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z)
ax.set_ylim([-40, 40])
ax.set_zlim([0, 10])
ax.set_xlim([-10, 10])
# Hide grid lines
# ax.grid(False)
plt.xlabel("X axis")
plt.ylabel("Y axis")
# Hide axes ticks
# ax.set_xticks([])
# ax.set_yticks([])
# ax.set_zticks([])
# plt.savefig("foo.png")
plt.show()
</code></pre>
<p>and the output is the following image.</p>
<p><a href="https://i.sstatic.net/QA2GN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QA2GN.png" alt="a landing strip for the UAV" /></a></p>
<p>Now I want to plot a cube inside the figure. I have a function that generates all the vertex points of a cube, but I am running into errors trying to plot it. The vertices are generated in the order top-northwest, top-northeast, top-southeast, top-southwest; the same applies for the bottom.</p>
<p>Here is the function.</p>
<pre><code>
def createVertices(c3, stepSize):
    hs = 0.5 * stepSize
    a = [c3[0]-hs,c3[1]+hs,c3[2]+hs]
    b = [c3[0]+hs,c3[1]+hs,c3[2]+hs]
    c = [c3[0]+hs,c3[1]-hs,c3[2]+hs]
    d = [c3[0]-hs,c3[1]-hs,c3[2]+hs]
    e = [c3[0]-hs,c3[1]+hs,c3[2]-hs]
    f = [c3[0]+hs,c3[1]+hs,c3[2]-hs]
    g = [c3[0]+hs,c3[1]-hs,c3[2]-hs]
    h = [c3[0]-hs,c3[1]-hs,c3[2]-hs]
    return [a,b,c,d,e,f,g,h]
</code></pre>
<p>I have to do this dynamically as I have over 100 testcases. Any help will be much appreciated.</p>
<p>ADDITIONAL INFORMATION:</p>
<p>Jared suggested <a href="https://stackoverflow.com/questions/33540109/plot-surfaces-on-a-cube">almost similar question</a> but I run into an error <code> AttributeError: 'int' object has no attribute 'ndim'</code> coming from <code>ax.plot_surface(X,Y,1, alpha=0.5)</code>.</p>
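<p>For what it's worth, a sketch of how the eight vertices from <code>createVertices</code> could be grouped into six quadrilateral faces and drawn with <code>Poly3DCollection</code> — the <code>cube_faces</code> helper, the face ordering, and the output filename are my own assumptions, derived from the vertex order described above:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

def createVertices(c3, stepSize):
    hs = 0.5 * stepSize
    a = [c3[0]-hs, c3[1]+hs, c3[2]+hs]
    b = [c3[0]+hs, c3[1]+hs, c3[2]+hs]
    c = [c3[0]+hs, c3[1]-hs, c3[2]+hs]
    d = [c3[0]-hs, c3[1]-hs, c3[2]+hs]
    e = [c3[0]-hs, c3[1]+hs, c3[2]-hs]
    f = [c3[0]+hs, c3[1]+hs, c3[2]-hs]
    g = [c3[0]+hs, c3[1]-hs, c3[2]-hs]
    h = [c3[0]-hs, c3[1]-hs, c3[2]-hs]
    return [a, b, c, d, e, f, g, h]

def cube_faces(verts):
    a, b, c, d, e, f, g, h = verts
    # six quads: top, bottom, and the four sides, corners in walk-around order
    return [[a, b, c, d], [e, f, g, h],
            [a, b, f, e], [d, c, g, h],
            [a, d, h, e], [b, c, g, f]]

verts = createVertices([0, 0, 2], 2)   # cube of side 2 centred at (0, 0, 2)
faces = cube_faces(verts)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.add_collection3d(Poly3DCollection(faces, facecolor='tab:blue',
                                     edgecolor='k', alpha=0.6))
ax.set_xlim([-10, 10]); ax.set_ylim([-40, 40]); ax.set_zlim([0, 10])
fig.savefig("cube.png")
```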
| <python><numpy><matplotlib><jupyter-notebook><numpy-ndarray> | 2023-11-26 19:48:45 | 2 | 325 | Wanja Wilson |
77,552,951 | 15,494,335 | How to call functions (in both directions) across processes in Python using MultiProcessing? | <p>I have two separate python processes, one is not spawned by the other. The server listens on localhost:50000 as such:</p>
<pre><code>from multiprocessing.managers import BaseManager
def my_func(arg=None): print(arg)
class FuncManager(BaseManager): pass
FuncManager.register('get_func', callable=lambda:my_func)
m = FuncManager(address=('', 50000), authkey=b'abracadabra')
s = m.get_server()
s.serve_forever()
</code></pre>
<pre><code>from multiprocessing.managers import BaseManager
class FuncManager(BaseManager): pass
FuncManager.register('get_func')
m = FuncManager(address=('', 50000), authkey=b'abracadabra')
m.connect()
func = m.get_func()
func("foo")
</code></pre>
<p>But it throws an exception, that func isn't callable, even though it reports its type as a function. Is there a way in Python for two separate processes to be able to pass and trigger callbacks from one another (ideally with arguments and return values)? If not, what is the simplest additional package or workaround needed to achieve this?</p>
<p>I did manage to call functions defined in another process while passing arguments and getting return values, but the functions have to be defined in a class, and it works only for the client calling the server. I can't get it to work the other way around, or use arbitrary functions (such as passing them as arguments). Having to predefine them in the class seems too much of a constraint, because I don't see how several clients can then connect using the same class/socket address and still call the server such that the server knows which client is calling, and such that the server can call any specific client at any moment without first being called by that client.</p>
<p>I wouldn't mind using IPC/RPC such that clients connect to a socket, and then functions are called through the socket. But each side needs to know (through exceptions/events not through polling of socket), if/when the other process has terminated, and be able to dynamically un-/register one or more callbacks with the other side in a simple pythonic way. Is there a simple/reliable package in python to do this?</p>
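<p>For comparison, a minimal sketch of calls dispatched by <em>name</em> over <code>multiprocessing.connection</code> — live function objects generally cannot be shipped between unrelated processes, but a name-plus-arguments tuple can. A thread stands in here for the second process, and the dispatch table is made up for illustration:</p>

```python
# RPC-by-name sketch: the "server" exposes a dict of callables; the "client"
# sends (name, args) tuples and receives pickled return values back
import threading
from multiprocessing.connection import Listener, Client

listener = Listener(('localhost', 0), authkey=b'abracadabra')
address = listener.address  # OS-assigned port, so no hardcoded 50000

def serve():
    funcs = {'echo': lambda x: x, 'add': lambda a, b: a + b}
    with listener.accept() as conn:
        while True:
            msg = conn.recv()
            if msg is None:          # sentinel: peer is done
                break
            name, args = msg
            conn.send(funcs[name](*args))

peer = threading.Thread(target=serve, daemon=True)
peer.start()

with Client(address, authkey=b'abracadabra') as conn:
    conn.send(('add', (2, 3)))
    result = conn.recv()
    conn.send(None)
peer.join()
listener.close()
print(result)  # 5
```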
| <python><multiprocessing><python-multiprocessing><ipc> | 2023-11-26 18:28:43 | 0 | 707 | mo FEAR |
77,552,941 | 10,227,815 | cuDF not working on 'apply' function for string return type | <p>I'm getting the below error while calling the <code>apply</code> function on cuDF:
<em>AttributeError: 'CPUDispatcher' object has no attribute '__closure__'</em></p>
<pre><code>File /opt/conda/lib/python3.10/site-packages/cudf/utils/cudautils.py:78, in make_cache_key(udf, sig)
76 names = udf.__code__.co_names
---> 78 if udf.__closure__ is not None:
79 cvars = tuple(x.cell_contents for x in udf.__closure__)
AttributeError: 'CPUDispatcher' object has no attribute '__closure__'
</code></pre>
<p>Which results in the below error as well:
<em>ValueError: user defined function compilation failed.</em></p>
<pre><code>File /opt/conda/lib/python3.10/site-packages/cudf/core/indexed_frame.py:2288, in IndexedFrame._apply(self, func, kernel_getter, *args, **kwargs)
2284 kernel, retty = _compile_or_get(
2285 self, func, args, kernel_getter=kernel_getter
2286 )
2287 except Exception as e:
-> 2288 raise ValueError(
2289 "user defined function compilation failed."
2290 ) from e
2292 # Mask and data column preallocated
2293 ans_col = _return_arr_from_dtype(retty, len(self))
ValueError: user defined function compilation failed.
</code></pre>
<p><strong>Sample code to reproduce:</strong></p>
<pre><code>import numba, cudf
print(numba.__version__) # 0.58.1
print(cudf.__version__)  # 23.08.00

@numba.njit
def f(row):
    st = row['str_col']
    scale = row['scale']
    if len(st) == 0:
        return 'a' + str(scale) + 'x'
    elif st.startswith('a'):
        return 'b'
    elif 'example' in st:
        return 'c'
    else:
        return 'd'

df1 = cudf.DataFrame({
    'str_col': ['', 'abc', 'some_example'],
    'scale': [1, 2, 3]
})
df1['abc'] = df1.apply(f, axis=1)
df1.head(100)
</code></pre>
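<p>For reference: the <code>__closure__</code> traceback above shows cuDF inspecting the UDF's Python code, which an already-compiled <code>numba.njit</code> <code>CPUDispatcher</code> no longer exposes — cuDF compiles the function itself, so it expects a plain, undecorated Python function. A CPU-side sketch of the same row-wise logic with plain pandas (undecorated <code>f</code>):</p>

```python
import pandas as pd

def f(row):
    st = row['str_col']
    scale = row['scale']
    if len(st) == 0:
        return 'a' + str(scale) + 'x'
    elif st.startswith('a'):
        return 'b'
    elif 'example' in st:
        return 'c'
    else:
        return 'd'

df1 = pd.DataFrame({'str_col': ['', 'abc', 'some_example'],
                    'scale': [1, 2, 3]})
df1['abc'] = df1.apply(f, axis=1)
print(df1['abc'].tolist())  # ['a1x', 'b', 'c']
```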
| <python><pandas><dataframe><dask-dataframe><cudf> | 2023-11-26 18:23:48 | 1 | 303 | SHM |
77,552,916 | 7,422,128 | ModuleNotFoundError: No module named pydantic_core._pydantic_core | <p>I am using a venv virtual environment in Python. The pydantic-core module version 2.10.1 is present, but I am still getting the weird error mentioned below.</p>
<pre><code>ModuleNotFoundError: No module named 'pydantic_core._pydantic_core'
</code></pre>
<p>The strange part is that it goes away when I uninstall the pydantic-core module and then reinstall it in the same virtual environment.
Any clues as to what is happening here?</p>
| <python><fastapi><pydantic> | 2023-11-26 18:15:42 | 0 | 932 | user7422128 |
77,552,883 | 1,326,330 | How to compute truncated backpropagation through time (BPTT) for RNN cell in PyTorch | <p>For simplicity, I have a sequence of N input data items (like words) and an RNN cell. I want to compute truncated backpropagation through time (BPTT) over a sliding window of K words within the loop:</p>
<pre><code>optimizer.zero_grad()
h = torch.zeros(hidden_size)
for i in range(N):
    out, h = rnn_cell.forward(data[i], h)
    if i > K:
        loss += compute_loss(out, target)
loss.backward()
optimizer.step()
</code></pre>
<p>but obviously it will compute gradient over all previous steps. I tried also this approach:</p>
<pre><code>h = torch.zeros(hidden_size)
for i in range(N):
    optimizer.zero_grad()
    out, h = rnn_cell.forward(data[i], h.detach())
    loss += compute_loss(out, target)
    loss.backward(retain_graph=True)
    optimizer.step()
</code></pre>
<p>but it will compute the gradient only for the last step. I tried also to maintain previous hidden states only for K steps in <code>deque(maxlen=K)</code> because I thought that when the reference to <code>h</code> state is discarded from the list it will be also removed from the graph:</p>
<pre><code>optimizer.zero_grad()
h = torch.zeros(hidden_size)
last_h = deque(maxlen=10)
for i in range(N):
    last_h.append(h)
    out, h = rnn_cell.forward(data[i], h)
    if i > K:
        optimizer.zero_grad()
        loss += compute_loss(out, target)
        loss.backward(retain_graph=True)
        optimizer.step()
</code></pre>
<p>but I doubt that any approach here works as I intended. As a very naive workaround I can do this:</p>
<pre><code>h = torch.zeros(hidden_size)
optimizer.zero_grad()
for i in range(0, N, K):
    h = h.detach()
    optimizer.zero_grad()
    for j in range(i, min(i + K, N)):
        out, h = rnn_cell.forward(data[j], h)
        loss += compute_loss(out, target)
    loss.backward()
</code></pre>
<p>but it requires computing each step K times. Alternatively, I can also detach <code>h</code> every K steps, but this way the gradient will be inaccurate:</p>
<pre><code>h = torch.zeros(hidden_size)
optimizer.zero_grad()
for i in range(0, N, K):
    out, h = rnn_cell.forward(data[j], h)
    if i % K == 0 and i > 0:
        optimizer.zero_grad()
        h = h.detach()
    loss += compute_loss(out, target)
    loss.backward()
    optimizer.step()
</code></pre>
<p>If you have any idea how to implement such a sliding gradient window better, I would be very glad for your help.</p>
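<p>A sketch of the detach-every-K pattern with the window loss rebuilt from zero in each chunk — this computes every step exactly once and needs no <code>retain_graph</code>; the sizes and the linear readout standing in for <code>compute_loss</code> are made-up toy values:</p>

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N, K, in_size, hidden_size = 12, 4, 3, 5
rnn_cell = nn.RNNCell(in_size, hidden_size)
readout = nn.Linear(hidden_size, 1)   # toy loss head standing in for compute_loss
optimizer = torch.optim.SGD(
    list(rnn_cell.parameters()) + list(readout.parameters()), lr=0.01)
data = torch.randn(N, in_size)
target = torch.zeros(1)

h = torch.zeros(1, hidden_size)
for i in range(0, N, K):
    h = h.detach()              # cut the graph: gradients reach back at most K steps
    optimizer.zero_grad()
    loss = torch.zeros(())      # window loss rebuilt each chunk -> no retain_graph
    for j in range(i, min(i + K, N)):
        h = rnn_cell(data[j].unsqueeze(0), h)
        loss = loss + ((readout(h) - target) ** 2).mean()
    loss.backward()             # backprop through this K-step window only
    optimizer.step()
print(float(loss))
```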
| <python><pytorch><backpropagation><back-propagation-through-time> | 2023-11-26 18:06:18 | 1 | 991 | Ziemo |
77,552,861 | 1,341,773 | Multiple large requests using aiohttp | <p>I need to send a large number (approximately 50) of HTTP POST requests, each with a size of around 5 MB. I use the asyncio.gather function to wait for the responses from these requests. However, the <a href="https://github.com/aio-libs/aiohttp/blob/master/aiohttp/client.py#L383" rel="nofollow noreferrer">JSON serialization process</a> takes approximately 30-40 milliseconds and blocks the EventLoop.</p>
<p>This causes all requests to be sent to the server simultaneously. For instance, if there are 50 requests and the first request is available at time T, it is only sent at (T + (50 * 30) = T + 1500ms). Additionally, at T+1500 ms, all 50 requests are sent to the server. I want Aiohttp to send requests as soon as they become available.</p>
<p>Can I bypass the JSON serialization step to achieve this?</p>
<p>Also, <a href="https://github.com/aio-libs/aiohttp/blob/master/aiohttp/connector.py#L1106" rel="nofollow noreferrer">this line</a> is where aiohttp is waiting for the event loop. Our APIs are latency-sensitive. Is there any way to speed this up, or any other alternatives?</p>
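<p>One sketch of keeping the loop responsive: run the CPU-bound <code>json.dumps</code> in a thread via <code>run_in_executor</code>, then hand aiohttp the pre-encoded body as <code>data=</code> with an explicit <code>Content-Type: application/json</code> header instead of <code>json=</code>, so the session does not serialize again (the payload shapes below are made up):</p>

```python
# serialize off the event loop so requests can be dispatched as each body
# becomes ready instead of queueing behind 30-40 ms dumps() calls
import asyncio
import json

async def encode_body(payload):
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, json.dumps, payload)

async def main():
    payloads = [{'req': i, 'blob': 'x' * 10} for i in range(3)]
    return await asyncio.gather(*(encode_body(p) for p in payloads))

bodies = asyncio.run(main())
print(len(bodies))  # 3
```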
| <python><asynchronous><async-await><python-asyncio><aiohttp> | 2023-11-26 18:00:54 | 2 | 1,139 | Bala |
77,552,436 | 10,101,636 | Replace placeholder in string stored in a variable | <p>I have a variable <code>mystr</code> that will contain strings like</p>
<pre><code>name = 'John'
mystr = 'src/edu/{name}/obj.txt'
</code></pre>
<p>I have to replace <code>{name}</code> in <code>mystr</code> with the actual value, which is stored in another variable, i.e. <code>name</code>.</p>
<p>However, the catch is that the value of <code>mystr</code> is not hardcoded. I have shown it here only for demonstration purposes. Actually, it gets assigned from another function, which means the value could be anything. But no matter what the value is, <code>{name}</code> needs to be replaced with the actual value.
For example, in the first iteration -</p>
<p><code>mystr = 'src/edu/{name}/obj.txt'</code>,</p>
<p>in second iteration -</p>
<p><code>mystr = 'code/new/{name}/obj2.txt'</code>, and so on.</p>
<p>So the solution has to be generic: no matter what string gets assigned to <code>mystr</code>, it should replace the <code>{name}</code> placeholder.</p>
<pre><code>If mystr = 'src/edu/{name}/obj2.txt'
</code></pre>
<p>then output :</p>
<pre><code>'src/edu/john/obj2.txt'
</code></pre>
<p>If <code>mystr = 'code/new/{name}/obj2.txt'</code></p>
<p>then output:</p>
<pre><code>'code/new/john/obj2.txt'
</code></pre>
<p>I have tried f-strings, but they only work on hard-coded string literals.</p>
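<p>For reference, a sketch of the generic substitution being asked about: because <code>mystr</code> is a plain string (not an f-string literal), <code>str.format</code> can fill the <code>{name}</code> placeholder whenever the value becomes known, whatever the surrounding template is:</p>

```python
# str.format performs the substitution at call time, unlike f-strings,
# which are evaluated where the literal is written
name = 'John'

for mystr in ['src/edu/{name}/obj.txt', 'code/new/{name}/obj2.txt']:
    print(mystr.format(name=name))
# src/edu/John/obj.txt
# code/new/John/obj2.txt
```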
| <python> | 2023-11-26 16:06:10 | 3 | 403 | Matthew |
77,552,428 | 14,529,779 | Python function to find the highest frequency number | <p>I'm working on a Python function that is supposed to return the number with the highest frequency in an array. In cases where two numbers or more have the same frequency, it should return the larger number.</p>
<p>However, I'm facing an issue with my current implementation.</p>
<pre class="lang-py prettyprint-override"><code>def highest_rank(arr):
    count_num = {}
    for i in arr:
        if i not in count_num:
            count_num[i] = 0
        else:
            count_num[i] = arr.count(i)
    return max(count_num, key=count_num.get)
</code></pre>
<p>In this example, <code>[9, 48, 1, 8, 44, 45, 32]</code>, I expect the function to return <code>48</code>, but it returns <code>9</code>.</p>
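<p>(For comparison, a sketch of the tie-breaking the function needs — rank candidates by frequency first and by the number itself second; <code>collections.Counter</code> stands in for the manual counting:)</p>

```python
from collections import Counter

def highest_rank(arr):
    counts = Counter(arr)
    # max over (frequency, value) pairs, so equal frequencies
    # fall back to the larger number
    return max(counts, key=lambda n: (counts[n], n))

print(highest_rank([9, 48, 1, 8, 44, 45, 32]))  # 48
```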
| <python><list><dictionary><for-loop> | 2023-11-26 16:04:12 | 8 | 9,636 | TAHER El Mehdi |
77,552,393 | 2,733,436 | Using clear() vs copy() in a hashset | <p>I was <a href="https://www.lintcode.com/problem/663/" rel="nofollow noreferrer">solving this problem</a> on lintcode</p>
<p>I came up with the following solution first, and it failed some test cases:</p>
<pre><code>from typing import List

class Solution:
    def wallsAndGates(self, rooms: List[List[int]]) -> None:
        """
        Do not return anything, modify rooms in-place instead.
        """
        ROWS, COLS = len(rooms), len(rooms[0])
        visited = set()

        def dfs(row, col, distance):
            if row not in range(ROWS) or col not in range(COLS):
                return
            if (row, col) in visited:
                return
            if rooms[row][col] == -1:
                return
            if distance > rooms[row][col]:
                return
            visited.add((row, col))
            rooms[row][col] = min(distance, rooms[row][col])
            dfs(row + 1, col, distance + 1)
            dfs(row - 1, col, distance + 1)
            dfs(row, col + 1, distance + 1)
            dfs(row, col - 1, distance + 1)

        for row in range(ROWS):
            for col in range(COLS):
                if rooms[row][col] == 0:
                    visited.clear()  # Clear the visited set before each traversal
                    dfs(row, col, 0)
</code></pre>
<p>Then I looked around to see why that was, and I saw a suggestion that passing a different copy of the hashset would resolve it. So I did the following and passed all test cases.</p>
<pre><code>class Solution:
    def wallsAndGates(self, rooms: List[List[int]]) -> None:
        """
        Do not return anything, modify rooms in-place instead.
        """
        ROWS, COLS = len(rooms), len(rooms[0])

        def dfs(row, col, distance, visited):
            if row not in range(ROWS) or col not in range(COLS):
                return
            if (row, col) in visited:
                return
            if rooms[row][col] == -1:
                return
            if distance > rooms[row][col]:
                return
            visited.add((row, col))
            rooms[row][col] = min(distance, rooms[row][col])
            dfs(row + 1, col, distance + 1, visited.copy())
            dfs(row - 1, col, distance + 1, visited.copy())
            dfs(row, col + 1, distance + 1, visited.copy())
            dfs(row, col - 1, distance + 1, visited.copy())

        for row in range(ROWS):
            for col in range(COLS):
                if rooms[row][col] == 0:
                    dfs(row, col, 0, set())
</code></pre>
<p>I am confused as to why passing visited.copy() solves the problem. I know that when we pass visited.copy() we are passing a copy and not a reference to the hashset. That makes sense, but in my original code I am doing visited.clear(), so does that not achieve the same thing? It would be really helpful if someone could explain this to me, as I am lost.</p>
<p>I would get it if the dfs calls were happening in parallel, but they are not, so why do I need to pass visited.copy()?</p>
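<p>(A minimal illustration of the aliasing difference in question: <code>clear()</code> empties the one shared set that every recursive frame keeps mutating, while <code>copy()</code> hands each call its own snapshot, so mutations in one branch never leak into a sibling branch:)</p>

```python
# every caller that receives `shared` sees every later mutation of it;
# a caller that receives `shared.copy()` works on an independent snapshot
shared = set()

def visit(node, visited):
    visited.add(node)
    return visited

a = visit('x', shared)
b = visit('y', shared)           # same object as `a` — both are now {'x', 'y'}
c = visit('z', shared.copy())    # independent snapshot

print(a is b, shared == {'x', 'y'}, c == {'x', 'y', 'z'})  # True True True
```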
| <python><algorithm><data-structures><depth-first-search><hashset> | 2023-11-26 15:52:18 | 1 | 1,658 | user1010101 |
77,552,328 | 6,610,407 | How to type-hint a variable whose type is any subclass of a generic base class? | <p>I have two abstract base classes that are linked, and should be subclassed together. For the sake of a minimal example, let's say its some class <code>TobeProcessed</code>, and a another class <code>Processor</code> that performs some processing on instances of the <code>TobeProcessed</code> class. I made the <code>Processor</code> <code>Generic</code> with the type of the <code>TobeProcessed</code> class as type-argument.</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
from typing import Generic, TypeVar

class TobeProcessed(ABC):
    pass

TobeProcessedType = TypeVar("TobeProcessedType", bound=TobeProcessed)

class Processor(ABC, Generic[TobeProcessedType]):
    @abstractmethod
    def process(self, to_be_processed: TobeProcessedType) -> None:
        pass
</code></pre>
<p>Now I have some concrete implementations of both classes:</p>
<pre class="lang-py prettyprint-override"><code>class TobeProcessedConcrete(TobeProcessed):
    pass

class ProcessorConcrete(Processor[TobeProcessedConcrete]):
    def process(self, to_be_processed: TobeProcessedConcrete) -> None:
        return None
</code></pre>
<p>Finally, I have a "wrapper" class which has an attribute <code>processor</code> which is an instance of any subclass of the <code>Processor</code> class.</p>
<pre class="lang-py prettyprint-override"><code>class WrapperClass:
    processor: Processor

    def __init__(self, processor: Processor) -> None:
        self.processor = processor

processor = ProcessorConcrete()
wrapper = WrapperClass(processor=processor)
</code></pre>
<p>If I check this with <code>mypy</code> with <code>--disallow-any-generics</code> (or <code>--strict</code>), I get two errors for <code>WrapperClass</code> because I omitted the type parameter for <code>Processor</code>, which makes sense. However, if I replace <code>Processor</code> with <code>Processor[TobeProcessed]</code>, I get an error for the line <code>wrapper = WrapperClass(processor=processor)</code>:</p>
<p><code>Argument "processor" to "WrapperClass" has incompatible type "ProcessorConcrete"; expected "Processor[TobeProcessed]"</code>.</p>
<p>Is there a way to do this without errors, and without making <code>mypy</code> less strict?</p>
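<p>(One direction that usually satisfies strict mypy — a sketch, not the only option — is to make the wrapper generic in the same type variable, so the concrete processor binds it:)</p>

```python
from abc import ABC, abstractmethod
from typing import Generic, TypeVar

class TobeProcessed(ABC):
    pass

TobeProcessedType = TypeVar("TobeProcessedType", bound=TobeProcessed)

class Processor(ABC, Generic[TobeProcessedType]):
    @abstractmethod
    def process(self, to_be_processed: TobeProcessedType) -> None: ...

class TobeProcessedConcrete(TobeProcessed):
    pass

class ProcessorConcrete(Processor[TobeProcessedConcrete]):
    def process(self, to_be_processed: TobeProcessedConcrete) -> None:
        return None

class WrapperClass(Generic[TobeProcessedType]):
    # the wrapper now carries the type parameter instead of erasing it
    processor: Processor[TobeProcessedType]

    def __init__(self, processor: Processor[TobeProcessedType]) -> None:
        self.processor = processor

# mypy can infer WrapperClass[TobeProcessedConcrete] from the argument
wrapper = WrapperClass(processor=ProcessorConcrete())
print(type(wrapper.processor).__name__)  # ProcessorConcrete
```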
| <python><generics><mypy><python-typing><abstract-base-class> | 2023-11-26 15:33:52 | 1 | 475 | MaartenB |
77,552,270 | 1,492,613 | how can I group rows by col1 but sort the rows by col2? | <p>for example:</p>
<pre><code>df = pd.DataFrame({'col1': ['A', 'B', 'A', 'B', 'C'],
'col2': [3, 1, 2, 4, 3],
'col3': [10, 20, 30, 40, 50]})
</code></pre>
<p>I want it like the following:</p>
<pre><code>  col1  col2  col3
1    B     1    20
3    B     4    40
0    A     3    10
2    A     2    30
4    C     3    50
</code></pre>
<p><code>df.sort_values(['col1', 'col2'])</code></p>
<pre><code>  col1  col2  col3
2    A     2    30
0    A     3    10
1    B     1    20
3    B     4    40
4    C     3    50
</code></pre>
<p><code>df.sort_values(['col2', 'col1'])</code></p>
<pre><code>  col1  col2  col3
1    B     1    20
2    A     2    30
0    A     3    10
4    C     3    50
3    B     4    40
</code></pre>
<p>The only way I found is very ugly:</p>
<pre><code>df['min_col2'] = df.groupby('col1')['col2'].transform('min')
df.sort_values("min_col2").drop("min_col2", axis="columns")
</code></pre>
<p>What is the canonical way to do this?</p>
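<p>(For comparison, a more compact variant of the same idea — sorting by each group's minimum <code>col2</code> through <code>sort_values</code>' <code>key=</code> callable, available in pandas ≥ 1.1; <code>kind='mergesort'</code> is a stable sort, so the original row order inside each group is preserved, as in the desired output:)</p>

```python
import pandas as pd

df = pd.DataFrame({'col1': ['A', 'B', 'A', 'B', 'C'],
                   'col2': [3, 1, 2, 4, 3],
                   'col3': [10, 20, 30, 40, 50]})

group_min = df.groupby('col1')['col2'].min()      # A -> 2, B -> 1, C -> 3
out = df.sort_values('col1', key=lambda s: s.map(group_min), kind='mergesort')
print(out.index.tolist())  # [1, 3, 0, 2, 4]
```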
| <python><pandas> | 2023-11-26 15:20:08 | 1 | 8,402 | Wang |
77,552,198 | 8,030,746 | XPath - how to select div that has a specific class, but only if it has a child div with a specific class inside iframe? | <p>I'm scraping <a href="https://bbrauncareers-bbraun.icims.com/jobs/search?ss=1&searchRelation=keyword_all" rel="nofollow noreferrer">this page</a> with Python and Selenium. Specifically, I'm trying to scrape all the job search results (divs with job information), and as you can see, they're in a div element with a class <code>row</code>. However, because they don't have a class specific for them alone, but just have a generic class <code>row</code>, I can't just get them by that alone.</p>
<p>This is what I tried, in an attempt to get an element with a class <code>row</code>, which has a child with a class <code>header</code>. But I think I'm using the contains wrong, and I have no idea how to fix it:</p>
<pre><code>wait.until(EC.visibility_of_element_located((By.XPATH, "(//div[contains(@class,'row') and (contains(@class, 'header'))])")))
</code></pre>
<p>When I use the code above, I get the TimeoutException error, so I'm assuming it's looking for a div with both of those classes and failing to find it. Which is not what I had in mind.</p>
<p>How do I adjust the <code>contains</code> above to get what I need? Is it even possible to use this approach? Thanks!</p>
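<p>For reference, a sketch of the predicate form this calls for — select the outer div by its class only when it has a descendant div with the other class; lxml stands in here for the browser's XPath engine, and the sample HTML is made up:</p>

```python
from lxml import html

doc = html.fromstring(
    "<html><body>"
    "<div class='row'><div class='header'>job info</div></div>"
    "<div class='row'><span>no header here</span></div>"
    "</body></html>"
)

# the second bracket pair is a predicate on descendants, not a second class test
rows = doc.xpath("//div[contains(@class, 'row')][.//div[contains(@class, 'header')]]")
print(len(rows))  # 1
```

<p>With Selenium the same expression would go into <code>By.XPATH</code>; and if the results really sit inside an iframe, <code>driver.switch_to.frame(...)</code> has to happen first, since XPath cannot cross iframe boundaries.</p>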
| <python><selenium-webdriver><xpath> | 2023-11-26 14:59:47 | 1 | 851 | hemoglobin |
77,552,178 | 16,674,436 | NLP preprocessing text in Data Frame, what is the correct order? | <p>I'm trying to preprocess a data frame with two columns, called "title" and "body". Each cell contains a string.</p>
<p>Based on this <a href="https://towardsdatascience.com/elegant-text-pre-processing-with-nltk-in-sklearn-pipeline-d6fe18b91eb8?gi=163784b49fe7" rel="nofollow noreferrer">article</a>, I tried to reproduce the preprocessing. However, there is clearly something I am not getting right: the order in which to process the steps so that each function receives the type it expects. I keep getting errors like <code>'list' object has no attribute ...</code> or <code>'str' object has no attribute ...</code>, and so on.</p>
<p>Here is what I have done:</p>
<pre><code>def lemmatize_pos_tagged_text(text, lemmatizer, post_tag_dict):
sentences = nltk.sent_tokenize(text)
new_sentences = []
for sentence in sentences:
sentence = sentence.lower()
new_sentence_words = []
pos_tuples = nltk.pos_tag(nltk.word_tokenize(sentence))
for word_idx, word in enumerate(nltk.word_tokenize(sentence)):
nltk_word_pos = pos_tuples[word_idx][1]
wordnet_word_pos = post_tag_dict.get(nltk_word_pos[0].upper(), None)
if wordnet_word_pos is not None:
new_word = lemmatizer.lemmatize(word, wordnet_word_pos)
else:
new_word = lemmatizer.lemmatize(word)
new_sentence_words.append(new_word)
new_sentence = " ".join(new_sentence_words)
new_sentences.append(new_sentence)
return " ".join(new_sentences)
def processing_steps(df):
lemmatizer = WordNetLemmatizer()
pos_tag_dict = {"J": wordnet.ADJ, "N": wordnet.NOUN, "V": wordnet.VERB, "R": wordnet.ADV}
local_stopwords = set(stopwords.words('english'))
additional_stopwords = ["http", "u", "get", "like", "let", "nan"]
    words_to_keep = ["i'", " i ", "me", "my", "we", "our", "us"]
local_stopwords.update(additional_stopwords)
    for word in words_to_keep:
        if word in local_stopwords:
            local_stopwords.remove(word)
for column in df.columns:
# Tokenization
df[column] = df[column].apply(lambda x: word_tokenize(x))
# Lowercasing each word within the list
df[column] = df[column].apply(lambda x: [word.lower() for word in x])
# Removing stopwords
df[column] = df[column].apply(lambda tokens: [word for word in tokens if word.isalpha() and word not in local_stopwords])
# Replace diacritics
df[column] = df[column].apply(lambda x: unidecode(x, errors="preserve"))
# Expand contractions
df[column] = df[column].apply(lambda x: " ".join([contractions.fix(expanded_word) for expanded_word in x.split()]))
# Remove numbers
df[column] = df[column].apply(lambda x: re.sub(r'\d+', '', x))
# Typos correction
df[column] = df[column].apply(lambda x: str(TextBlob(x).correct()))
# Remove punctuation except period
df[column] = df[column].apply(lambda x: re.sub('[%s]' % re.escape(string.punctuation.replace('.', '')), '' , x))
# Remove double space
df[column] = df[column].apply(lambda x: re.sub(' +', ' ', x))
# Lemmatization
df[column] = df[column].apply(lambda x: lemmatize_pos_tagged_text(x, lemmatizer, pos_tag_dict))
return df
</code></pre>
<p>As an example, that’s the error message I get with the current state of the function. But keep in mind that whenever I try to change things, like commenting out the part for splitting, I would get another error of <code>type</code>, or <code>attribute</code>. So the question really is: <strong>what’s the proper order? How to handle the fact that different functions need different types for processing the same element?</strong></p>
<pre><code> 49
50 # Expand contractions
---> 51 df[column] = df[column].apply(lambda x: " ".join([contractions.fix(expanded_word) for expanded_word in x.split()]))
52
53 # Remove numbers
AttributeError: 'list' object has no attribute 'split'
</code></pre>
<p>Any conceptual explanation is very welcome!</p>
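<p>For illustration, one ordering that avoids the type mismatches is to keep every step string-in/string-out and tokenize only once, as the very last step. A minimal sketch of that idea (not the full pipeline above, and the sample sentence is made up):</p>

```python
import re
import string

def preprocess(text):
    # every step below takes a string and returns a string ...
    text = text.lower()
    text = re.sub(r"\d+", "", text)  # remove numbers
    # replace punctuation with spaces
    text = re.sub("[%s]" % re.escape(string.punctuation), " ", text)
    text = re.sub(" +", " ", text).strip()  # collapse double spaces
    # ... and tokenization happens exactly once, at the end
    return text.split()

print(preprocess("Hello, World 42!!"))  # ['hello', 'world']
```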
| <python><pandas><nlp><nltk><data-preprocessing> | 2023-11-26 14:53:29 | 1 | 341 | Louis |
77,552,176 | 217,844 | aiohttp: ValueError: Newline or carriage return character detected in HTTP status message or header. This is a potential security issue | <p>I am using <a href="https://pypi.org/project/gql" rel="nofollow noreferrer"><code>gql</code></a> to run a query against a GraphQL API. I get this error:</p>
<pre><code> File "<path to poetry venv>/lib/python3.10/site-packages/aiohttp/http_writer.py", line 129, in write_headers
buf = _serialize_headers(status_line, headers)
File "aiohttp/_http_writer.pyx", line 132, in aiohttp._http_writer._serialize_headers
File "aiohttp/_http_writer.pyx", line 116, in aiohttp._http_writer._safe_header
ValueError: Newline or carriage return character detected in HTTP status message or header. This is a potential security issue.
</code></pre>
<p>From looking at <a href="https://stackoverflow.com/a/63500729">this SO answer</a> and <a href="https://github.com/aio-libs/aiohttp/issues/4818" rel="nofollow noreferrer">this GitHub issue</a>, I get a rough idea of the general problem.</p>
<p>However, I don't even set any headers myself, I just run something like</p>
<pre class="lang-py prettyprint-override"><code>from gql import Client as gql_client, gql
expr_ = '''mutation myMutation($var: Type) {
nameOfMyGraphQLMutation(var: $var) {
... (fields to return) ...
}
}'''
expr = gql(expr_)
client = gql_client(...)
client.execute(expr, ...)
</code></pre>
<p>and from the looks of it, <code>gql</code> seems to make use of <a href="https://docs.aiohttp.org/en/stable/" rel="nofollow noreferrer"><code>aiohttp</code></a> internally.</p>
<p>I tried to hack the <code>aiohttp</code> python code in my venv to display the problematic headers to get an idea of what the root cause might be, but AFAICT, there is compiled code at play there (files like e.g. <code>_http_writer.cpython-310-darwin.so</code>), so local changes won't be picked up.</p>
<p>Also, from looking at Google, I seem to be the only dev with this issue (which typically is a sign that I myself am the root cause...)</p>
<p>Does anyone have an idea how to fix this ?</p>
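<p>Since the serializer is compiled, one practical way to hunt for the root cause is to check the strings fed into the request (URL, auth tokens, GraphQL variables) for stray newlines; tokens read from files or environment variables often carry a trailing <code>\n</code>. A small sketch with a hypothetical headers dict:</p>

```python
# hypothetical header values; the point is the scan, not the names
headers = {
    "Authorization": "Bearer abc123\n",  # e.g. read from a file with a trailing newline
    "Accept": "application/json",
}

# flag any key or value containing a carriage return or newline
bad = {k: v for k, v in headers.items()
       if "\r" in k or "\n" in k or "\r" in v or "\n" in v}
print(bad)  # {'Authorization': 'Bearer abc123\n'}
```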
| <python><graphql><aiohttp> | 2023-11-26 14:53:13 | 1 | 9,959 | ssc |
77,551,865 | 4,557,607 | How to extend Keras GPT2 model (MoE example) | <p>I was playing around with the <code>Keras GPT2</code> model - in an attempt to make a Mixture of Experts and achieve agi.</p>
<p>Link to Keras docs: <a href="https://keras.io/api/keras_nlp/models/gpt2/" rel="nofollow noreferrer">https://keras.io/api/keras_nlp/models/gpt2/</a></p>
<p><strong>Final Edit</strong></p>
<p>Got it to work properly. The code below works. Feel free to leave any feedback or improvements. I feel the agi.</p>
<p>Some thoughts - the gating network does not need time distributed as dense layers now support 3d tensors. However, I have no idea how big this network should be for a base gpt2 model with 2, 4, etc. experts.</p>
<p>Also, it seems like this implementation does not return choices per query. Maybe that wasn't a thing when it was implemented.</p>
<p>Lots of issues I think were happening because I was low on memory on top of all the bugs.</p>
<p><strong>Edit 2</strong></p>
<p>Running this in Colab gives another clue. I don't understand why the loss expects values between [0,768]. The token id values are 0 to max vocab.</p>
<pre><code>Received a label value of 50256 which is outside the valid range of [0, 768). Label values: 31373 11 703 389 345 30 50256 0 0 0 0...
</code></pre>
<p>The problem here was that I called the <code>backbone</code> model in the <code>gpt</code> layer instead of <code>GPT2CausalLM</code>. The first must be used for something else.</p>
<p><strong>Edit 1</strong></p>
<p>My general question is - what is the best way to chain or extend Keras GPT model i.e.: to implement a bigger model such as MoE.</p>
<p><strong>Here is the updated and working code</strong>:</p>
<pre><code>import tensorflow as tf
import keras_nlp
def create_gating_network(sequence_length, num_experts, feature_dim=768):
inputs = tf.keras.layers.Input(shape=(sequence_length, feature_dim))
x = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(64, activation="relu"))(
inputs
)
outputs = tf.keras.layers.TimeDistributed(
tf.keras.layers.Dense(num_experts, activation="softmax")
)(x)
gating_model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
return gating_model
def moe_function(args):
expert_outputs, gating_coefficients = args
weighted_experts = expert_outputs * gating_coefficients
intermediate_sum = tf.reduce_sum(weighted_experts, axis=2)
weighted_sum = tf.reduce_sum(intermediate_sum, axis=2)
return weighted_sum
class ExpertGPT2Layer(tf.keras.layers.Layer):
def __init__(self, name="gpt2_base_en", sequence_length=128, **kwargs):
super(ExpertGPT2Layer, self).__init__(**kwargs)
self.sequence_length = sequence_length
self.preprocessor = keras_nlp.models.GPT2CausalLMPreprocessor.from_preset(
name, sequence_length=sequence_length
)
self.gpt2_model = keras_nlp.models.GPT2CausalLM.from_preset(
name,
preprocessor=self.preprocessor,
)
def call(self, inputs, training=False):
preprocess = self.preprocessor(inputs)
outputs = self.gpt2_model(preprocess[0], training=True)
return outputs
class CustomGPT2Model(tf.keras.Model):
def __init__(
self,
gating_network,
name="gpt2_base_en",
sequence_length=128,
feature_dim=768,
num_experts=4,
**kwargs
):
super(CustomGPT2Model, self).__init__(**kwargs)
self.sequence_length = sequence_length
self.feature_dim = feature_dim
self.num_experts = num_experts
self.tokenizer = keras_nlp.models.GPT2Tokenizer.from_preset(name)
self.preprocessor = keras_nlp.models.GPT2CausalLMPreprocessor.from_preset(
name, sequence_length=sequence_length
)
self.expert_layers = [
ExpertGPT2Layer(sequence_length=sequence_length, name=name)
for _ in range(num_experts)
]
self.gating_network = gating_network
def apply_expert(self, expert, inputs, training):
result = expert(inputs, training=training)
return result
def build(self, input_shape):
inputs = tf.keras.layers.Input(
shape=input_shape, dtype=tf.string, name="text-input"
)
# Preprocessor returns x, y, w
# https://github.com/keras-team/keras-nlp/blob/v0.6.2/keras_nlp/models/gpt2/gpt2_causal_lm_preprocessor.py#L127
x, labels, w = self.preprocessor(inputs)
time_dim_token_ids = tf.expand_dims(x["token_ids"], axis=-1)
replicated_token_ids = tf.tile(time_dim_token_ids, [1, 1, self.feature_dim])
# Compute expert predictions
expert_outputs = [
self.apply_expert(expert, inputs, training=True)
for expert in self.expert_layers
]
stacked_expert_outputs = tf.stack(expert_outputs, axis=1)
# Compute gating coefficients
gating_coefficients = self.gating_network(replicated_token_ids)
expanded_gating_coefficients = tf.expand_dims(
tf.expand_dims(gating_coefficients, axis=-1), axis=-1
)
moe_output = moe_function(
[stacked_expert_outputs, expanded_gating_coefficients]
)
self.model = tf.keras.Model(inputs=inputs, outputs=[moe_output, labels])
super(CustomGPT2Model, self).build(input_shape)
def call(self, inputs, training=False):
return self.model(inputs, training)
@tf.function
def train_step(self, data):
x = data
with tf.GradientTape() as tape:
y_pred, y_true = self.model(x, training=True)
loss = self.compiled_loss(y_true, y_pred, regularization_losses=self.losses)
gradients = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
self.compiled_metrics.update_state(y_true, y_pred)
return {m.name: m.result() for m in self.metrics}
def main():
text = ["hello, how are you?", "I am good"]
batch_size = 1
num_experts = 2
sequence_length = 64
dataset = tf.data.Dataset.from_tensor_slices(text)
dataset = dataset.batch(batch_size).prefetch(tf.data.AUTOTUNE)
gating_network = create_gating_network(sequence_length, num_experts)
moe_model = CustomGPT2Model(
gating_network, sequence_length=sequence_length, num_experts=num_experts
)
moe_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(2e-5),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)
moe_model.build(input_shape=(1,))
moe_model.summary()
moe_model.fit(dataset, epochs=3, verbose=1)
if __name__ == "__main__":
main()
</code></pre>
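<p>As a side note, the mixing step itself is independent of Keras: the per-token weighted sum over experts can be sketched in plain NumPy with a single <code>einsum</code> (the shapes here are made up for illustration):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
batch, n_experts, seq, vocab = 2, 3, 4, 10

# per-expert logits and per-token gating probabilities (softmax over experts)
expert_out = rng.normal(size=(batch, n_experts, seq, vocab))
gate_logits = rng.normal(size=(batch, seq, n_experts))
gates = np.exp(gate_logits) / np.exp(gate_logits).sum(-1, keepdims=True)

# weight each expert's output by its gate and sum over the expert axis
mixed = np.einsum("besv,bse->bsv", expert_out, gates)
print(mixed.shape)  # (2, 4, 10)
```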
| <python><tensorflow><keras><large-language-model><gpt-2> | 2023-11-26 13:24:56 | 0 | 1,020 | Edv Beq |
77,551,852 | 3,099,733 | How to use poetry to exclude some dependencies when installing via `mypackage[core]`? | <p>I want to release a Python package to PyPI, named <code>mypackage</code> as an example. This package has a lot of dependencies, but users that only use parts of the API could skip some niche and large packages which may take a lot of time to build and install.</p>
<p>I learned that there is a feature named <code>extras</code> that allows users to install extra packages by adding a tag to the <code>pip install</code> command, for example <code>pip install package[all]</code>.</p>
<p>But what I am looking for is kind of the <strong>opposite</strong> of <code>extras</code>. I hope that when a user runs <code>pip install mypackage</code>, all dependencies will be installed. But if a user runs <code>pip install mypackage[core]</code>, then a minimal set of dependencies should be installed.</p>
<p>I am using <code>poetry</code> but I have no idea how to implement this. It would be appreciated to provide some examples.</p>
| <python><python-packaging><python-poetry> | 2023-11-26 13:21:56 | 0 | 1,959 | link89 |
77,551,816 | 13,118,338 | How to properly close gather tasks if some of them take too long | <p>I have code that looks like</p>
<pre><code>async def watch_task1():
while not stop:
await client.ws.get_data()
async def watch_task2():
while not stop:
await client.ws.get_news()
async def stop_after():
global stop
await client.sleep(60)
stop = True
async def main():
tasks = [
watch_task1(),
watch_task2(),
stop_after(),
]
try:
await gather(*tasks, return_exceptions=True)
except Exception as e:
print(e)
</code></pre>
<p>My problem here is that <code>client.ws.get_data()</code> and <code>client.ws.get_news()</code> do not receive messages often, so it can take 24h before the <code>await client.ws.method</code> gets a message and the program stops, while I want it to stop after 60 seconds max, whether the tasks have finished or not.</p>
<p>How could I do this?</p>
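<p>For reference, the general mechanism for this is to cancel the long-running tasks rather than wait for them; <code>asyncio.wait_for</code> does exactly that when the timeout elapses. A self-contained sketch (the sleeping watcher just stands in for a websocket that rarely delivers messages):</p>

```python
import asyncio

async def watcher(results):
    while True:
        await asyncio.sleep(3600)  # pretend to wait for a rare websocket message
        results.append("msg")

async def main(timeout=0.1):
    results = []
    tasks = asyncio.gather(watcher(results), watcher(results))
    try:
        # wait_for cancels the gathered tasks once the timeout elapses
        await asyncio.wait_for(tasks, timeout=timeout)
    except asyncio.TimeoutError:
        pass
    return results

collected = asyncio.run(main())
print(len(collected))  # 0: cancelled before any message arrived
```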
| <python><python-asyncio> | 2023-11-26 13:13:15 | 1 | 481 | Nicolas Rey |
77,551,788 | 9,038,562 | `torch_geometric.nn.attention` not found in PyTorch Geometric Reference | <p>I was following a tutorial in the PyG repository, <a href="https://github.com/pyg-team/pytorch_geometric/blob/master/examples/graph_gps.py" rel="nofollow noreferrer">GraphGPS</a>.</p>
<p>In the example there is a line:</p>
<pre><code>from torch_geometric.nn.attention import PerformerAttention
</code></pre>
<p>Which caused an error:</p>
<pre><code>Cannot find reference 'attention' in '__init__.py'
</code></pre>
<p>I browsed PyG's <a href="https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GPSConv.html#torch_geometric.nn.conv.GPSConv" rel="nofollow noreferrer">documentation</a> again, on the left in package reference, I couldn't find <code>torch_geometric.nn.attention</code>:</p>
<p><a href="https://i.sstatic.net/o98Bx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o98Bx.png" alt="enter image description here" /></a></p>
<p>How do I resolve this problem? I have PyG 2.3.1 installed, and I had no issue running other layers like GCN. Do I need to install additional packages?</p>
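<p>The symptoms suggest that <code>torch_geometric.nn.attention</code> simply does not exist in PyG 2.3.1 and appears to have been added in a newer release, so upgrading is the likely fix; a guarded import makes the failure explicit either way:</p>

```python
try:
    from torch_geometric.nn.attention import PerformerAttention
    HAS_PYG_ATTENTION = True
except ImportError:
    # e.g. PyG 2.3.1, which predates torch_geometric.nn.attention
    PerformerAttention = None
    HAS_PYG_ATTENTION = False

print(HAS_PYG_ATTENTION)
```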
| <python><pytorch-geometric> | 2023-11-26 13:07:26 | 1 | 639 | Tianjian Qin |
77,551,587 | 6,357,916 | Unable to fetch list from angular website using beautiful soup | <p>I am trying to parse the NPTEL website (<a href="https://nptel.ac.in/courses" rel="nofollow noreferrer">url</a>) to fetch all the courses listed on it.</p>
<p><a href="https://i.sstatic.net/fHeuq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fHeuq.png" alt="enter image description here" /></a>
Each course corresponds to an <code>app-course-card</code> tag. The DOM looks like this:
<a href="https://i.sstatic.net/qrQrr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qrQrr.png" alt="enter image description here" /></a></p>
<p>I am unable to get hold of these tags with beautiful soup. I tried following:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
def extract_text_from_webpage(url):
response = requests.get(url)
if response.status_code == 200:
soup = BeautifulSoup(response.text, 'html.parser')
for li in soup.select("app-course-card"):
print(li.get_text())
else:
print(f"Failed to retrieve the webpage. Status code: {response.status_code}")
return None
url = "https://nptel.ac.in/courses"
webpage_text = extract_text_from_webpage(url)
</code></pre>
<p>The print statement never executed since the list was always empty: no <code>app-course-card</code> element is captured by <code>soup.select()</code>.</p>
<p>I suspected that the Angular data takes some time to be fetched, so I tried Selenium + Beautiful Soup with a delay added. Still, the webpage returned no list:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
chrome_options = Options()
chrome_options.add_argument('--headless') # Run Chrome in headless mode (without opening a browser window)
driver = webdriver.Chrome(options=chrome_options)
def extract_text_from_angular_webpage(url):
driver.get(url)
driver.implicitly_wait(10)
page_source = driver.page_source
soup = BeautifulSoup(page_source, 'html.parser')
text = soup.get_text()
return text
angular_url = url = "https://nptel.ac.in/courses"
webpage_text = extract_text_from_angular_webpage(angular_url)
if webpage_text:
print(webpage_text)
driver.quit()
</code></pre>
<p>What I am missing here?</p>
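<p>The underlying issue can be illustrated without the network: the HTML a server sends for an Angular app is just an empty shell, and the cards only exist after JavaScript runs in a browser. A stdlib-only sketch (the shell HTML below is an assumption about what the server returns):</p>

```python
from html.parser import HTMLParser

server_html = "<html><body><app-root></app-root></body></html>"

class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(server_html)

# the Angular mount point is there, the client-rendered cards are not
print("app-root" in collector.tags, "app-course-card" in collector.tags)  # True False
```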
| <python><selenium-webdriver><web-scraping><beautifulsoup> | 2023-11-26 12:06:31 | 2 | 3,029 | MsA |
77,551,514 | 462,794 | Simple way to create matrix of random numbers of x,y size where the sum of each col and rows is equal to z | <p>Hi, for the game of quantum werewolf I would like to generate a matrix of x,y size where the sum of each row and each column is equal to Z</p>
<p>and x = y</p>
<p>Here is what I would like to obtain:</p>
<p>Example: x=3, y=3, z=1</p>
<pre><code>[0.1,0.2,0.7] = 1
[0.5,0.3,0.2] = 1
[0.4,0.5,0.1] = 1
1 1 1
</code></pre>
<p>Here is what I tried, but this code is incorrect:</p>
<pre><code>import numpy as np
def generate_matrix(x, y, z):
matrix = np.random.rand(x, y)
row_sums = matrix.sum(axis=1, keepdims=True)
col_sums = matrix.sum(axis=0, keepdims=True)
matrix = matrix / row_sums * z
# Verify that the sum of each row is equal to Z
assert np.allclose(matrix.sum(axis=1), z), "Row sums are not equal to Z"
# Verify that the sum of each column is equal to Z
assert np.allclose(matrix.sum(axis=0), z), "Column sums are not equal to Z"
return matrix.round(2)
x = 3
y = 3
z = 1
result_matrix = generate_matrix(x, y, z)
print(result_matrix)
</code></pre>
<p>i've looked on this question :</p>
<p><a href="https://stackoverflow.com/questions/15451958/simple-way-to-create-matrix-of-random-numbers">Simple way to create matrix of random numbers</a></p>
<p>regards</p>
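<p>One standard technique for this (not used in the attempt above) is Sinkhorn-style normalization: alternately rescale rows and columns until both sums converge to Z. A sketch:</p>

```python
import numpy as np

def doubly_stochastic(n, z, iters=500, seed=0):
    # alternate row and column rescaling; for a strictly positive square
    # matrix this converges to row and column sums of z
    rng = np.random.default_rng(seed)
    m = rng.random((n, n)) + 0.1  # keep entries strictly positive
    for _ in range(iters):
        m *= z / m.sum(axis=1, keepdims=True)  # fix row sums
        m *= z / m.sum(axis=0, keepdims=True)  # fix column sums
    return m

m = doubly_stochastic(3, 1.0)
print(m.round(2))  # rows and columns each sum to ~1
```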
| <python> | 2023-11-26 11:44:31 | 1 | 1,244 | Bussiere |
77,551,174 | 532,819 | ctypes function definition with paramflags and output parameters - how to retrieve the original return value of the function? | <p>The python <a href="https://docs.python.org/3/library/ctypes.html" rel="nofollow noreferrer">documentation</a> for ctypes show an example of using the prototype definition and paramflags with the windows function GetWindowRect.</p>
<p>Since GetWindowRect has signature</p>
<pre><code>BOOL GetWindowRect(
[in] HWND hWnd,
[out] LPRECT lpRect
);
</code></pre>
<p>we see that there is an <strong>input</strong> parameter hWnd, an <strong>output</strong> parameter lpRect, as well as a <strong>return value</strong> of the function as BOOL.</p>
<p>The documentation provides this snippet</p>
<pre><code>from ctypes import POINTER, WINFUNCTYPE, windll, WinError
from ctypes.wintypes import BOOL, HWND, RECT
prototype = WINFUNCTYPE(BOOL, HWND, POINTER(RECT))
paramflags = (1, "hwnd"), (2, "lprect")
GetWindowRect = prototype(("GetWindowRect", windll.user32), paramflags)
</code></pre>
<p>to get an usable function with input hWnd and output lpRect. Quoting from the doc:</p>
<blockquote>
<p>Functions with output parameters will automatically return the output parameter value if there is a single one, or a tuple containing the output parameter values when there are more than one, so the GetWindowRect function now returns a RECT instance, when called.</p>
</blockquote>
<p>This is all well and good, but how do I get the original return value of the function (in this case a BOOL)? It seems to get lost with this approach.</p>
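<p>One place the raw return value is still visible is the <code>errcheck</code> hook: it receives the C function's result before the output-parameter post-processing rewrites what the call returns. A portable sketch of the hook, using <code>libm</code> as a stand-in for <code>user32</code> (the Windows parts will not run elsewhere):</p>

```python
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
sqrt = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double)(("sqrt", libm))

seen = {}

def errcheck(result, func, args):
    # 'result' is the raw C return value, available here even when
    # output-parameter processing later rewrites what the call returns
    seen["raw"] = result
    return result

sqrt.errcheck = errcheck
print(sqrt(9.0), seen["raw"])  # 3.0 3.0
```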
| <python><windows><ctypes> | 2023-11-26 09:48:56 | 4 | 687 | pygri |
77,551,120 | 832,490 | How to use PublisherServiceAsyncClient | <p>I am trying to use AsyncPublisherClient from pubsublite</p>
<pre><code>pip install google-cloud-pubsublite==1.8.3
</code></pre>
<pre><code>from google.cloud import pubsublite_v1
pubsub = pubsublite_v1.AsyncPublisherClient()
</code></pre>
<blockquote>
<p>error: Module has no attribute "AsyncPublisherClient"</p>
</blockquote>
<p>The documentation is very scarce and I couldn't even find this class in the virtualenv directory, just its interface.</p>
<h2>How do I use this library?</h2>
<p>EDIT: It looks like the correct class is <code>PublisherServiceAsyncClient</code></p>
<hr />
<p>EDIT2:</p>
<pre><code>from google.cloud.pubsublite_v1.types.publisher import PublishRequest
from google.cloud.pubsublite_v1.types import PubSubMessage
from google.cloud.pubsublite_v1 import PublisherServiceAsyncClient
pubsub = PublisherServiceAsyncClient()
message = PubSubMessage(data=json.dumps(payload).encode("utf-8"))
request = PublishRequest(topic=os.environ["TOPIC"], messages=[message])
async def request_generator():
yield request
await pubsub.publish(requests=request_generator())
</code></pre>
<blockquote>
<p>ValueError: Unknown field for PublishRequest: topic</p>
</blockquote>
| <python><google-cloud-platform><google-cloud-pubsub> | 2023-11-26 09:31:31 | 0 | 1,009 | Rodrigo |
77,550,969 | 3,861,775 | Weighted sum of pytrees in JAX | <p>I have a pytree represented by a list of lists holding parameter tuples. The sub-lists all have the same structure (see example).</p>
<p>Now I would like to create a weighted sum so that the resulting pytree has the same structure as one of the sub-lists. The weights for each sub-list are stored in a separate array / list.</p>
<p>So far I have the following code that seems to work, but it requires several steps and a for-loop that I would like to avoid for performance reasons.</p>
<pre><code>import jax
import jax.numpy as jnp
list_1 = [
[jnp.asarray([[1, 2], [3, 4]]), jnp.asarray([2, 3])],
[jnp.asarray([[1, 2], [3, 4]]), jnp.asarray([2, 3])],
]
list_2 = [
[jnp.asarray([[2, 3], [3, 4]]), jnp.asarray([5, 3])],
[jnp.asarray([[2, 3], [3, 4]]), jnp.asarray([5, 3])],
]
list_3 = [
[jnp.asarray([[7, 1], [4, 4]]), jnp.asarray([6, 2])],
[jnp.asarray([[6, 4], [3, 7]]), jnp.asarray([7, 3])],
]
weights = [1, 2, 3]
pytree = [list_1, list_2, list_3]
weighted_pytree = [jax.tree_map(lambda tree: weight * tree, tree) for weight, tree in zip(weights, pytree)]
reduced = jax.tree_util.tree_map(lambda *args: sum(args), *weighted_pytree)
</code></pre>
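<p><code>jax.tree_util.tree_map</code> accepts several trees at once, so the weighting and the reduction can be fused into one call such as <code>tree_map(lambda *leaves: sum(w * l for w, l in zip(weights, leaves)), *pytree)</code>. The same pattern with a minimal stand-in for <code>tree_map</code>, so the sketch runs without JAX installed:</p>

```python
import numpy as np

def tree_map(fn, *trees):
    # minimal stand-in for jax.tree_util.tree_map on nested lists of arrays
    if isinstance(trees[0], list):
        return [tree_map(fn, *subtrees) for subtrees in zip(*trees)]
    return fn(*trees)

weights = [1, 2, 3]
pytree = [
    [[np.array([1.0, 2.0])], [np.array([3.0])]],
    [[np.array([10.0, 20.0])], [np.array([30.0])]],
    [[np.array([100.0, 200.0])], [np.array([300.0])]],
]

# weight and sum corresponding leaves of all three trees in a single pass
reduced = tree_map(lambda *leaves: sum(w * l for w, l in zip(weights, leaves)),
                   *pytree)
print(reduced)  # [[array([321., 642.])], [array([963.])]]
```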
| <python><jax> | 2023-11-26 08:30:44 | 1 | 3,656 | Gilfoyle |
77,550,858 | 12,395,277 | Python:interp2d get the same and wrong value | <p>I have a 3*3 table, and my expectation is to use <code>interp2d</code> to interpolate it and then predict a bigger table, maybe 5*5 or 10*10, to get more results and then show them in <code>plot_surface</code></p>
<ol>
<li>This is a simple 3*3 table for testing, and the X, Y, Z relationship:</li>
</ol>
<pre><code>x = np.array([1, 2,3])        #---X,Y,Z relationship------
y = np.array([0.05, 0.5,1])   #(1, 0.05, -1.0)(1, 0.5, -0.5)(1, 1.0, 2.0)
z = np.array([-1, -0.5,2,     #(2, 0.05, -2.0)(2, 0.5, 1.5)(2, 1.0, 3.5)
              -2, 1.5,3.5,    #(3, 0.05, -1.5)(3, 0.5, 2.5)(3, 1.0, 5.0)
              -1.5,2.5,5])
</code></pre>
<ol start="2">
<li>To achieve this relationship I set:</li>
</ol>
<pre><code>X,Y=np.meshgrid(x,y,indexing='ij')
Z=z.reshape(len(x),len(y))
</code></pre>
<ol start="3">
<li>Interpolating a 5*5 table based on the current data</li>
</ol>
<pre><code>#interp2d Z value
f2 = interp2d(x,y,Z,kind='linear')
x_new=np.linspace(0.01,0.02,5)
y_new=np.linspace(0.002,0.004,5)
X_new,Y_new=np.meshgrid(x_new,y_new,indexing='ij')
z_new=f2(x_new,y_new)
Z_new=z_new.reshape(len(x_new),len(y_new))
print(z_new)
</code></pre>
<p>Now at this step I get wrong interpolated Z values: they are all the same and not as expected</p>
<pre><code> # [-1. -1. -1. -1. -1.]
# [-1. -1. -1. -1. -1.]
# [-1. -1. -1. -1. -1.]
# [-1. -1. -1. -1. -1.]]
</code></pre>
<p><strong>So finally the 3D surface becomes a flat picture.<br />
I am not sure what is wrong in the script or the <code>interp2d</code> function.<br />
How can I fix the script?</strong></p>
<p>This is my full script:</p>
<pre><code>from scipy.interpolate import interp1d,interp2d,griddata
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import matplotlib.pyplot as plt
x = np.array([1, 2,3])
y = np.array([0.05, 0.5,1])
z = np.array([-1, -0.5,2,\
-2, 1.5,3.5,
-1.5,2.5,5])
fig = plt.figure()
ax=Axes3D(fig)
ax = fig.add_subplot(projection='3d')
X,Y=np.meshgrid(x,y,indexing='ij')
Z=z.reshape(len(x),len(y))
#interp2d Z value
f2 = interp2d(x,y,Z,kind='linear')
x_new=np.linspace(0.01,0.02,5)
y_new=np.linspace(0.002,0.004,5)
X_new,Y_new=np.meshgrid(x_new,y_new,indexing='ij')
z_new=f2(x_new,y_new)
Z_new=z_new.reshape(len(x_new),len(y_new))
print(z_new) #---->not as expected [[-1. -1. -1. -1. -1.]
# [-1. -1. -1. -1. -1.]
# [-1. -1. -1. -1. -1.]
# [-1. -1. -1. -1. -1.]
# [-1. -1. -1. -1. -1.]]
#This is for check X,Y,Z value
def Check():
n,j=0,0
print("----X,Y,Z-----")
for i in zip(X.flat,Y.flat,Z.flat): #----X, Y, Z - ----
print(i, end=" ") #(1, 0.05, -1.0)(1, 0.5, -0.5)(1, 1.0, 2.0
n += 1 #(2, 0.05, -2.0)(2, 0.5, 1.5)(2, 1.0, 3.5)
if n % int(len(x))==0: #(3, 0.05, -1.5)(3, 0.5, 2.5)(3, 1.0, 5.0)
print()
print("----X_new,Y_new,Z_new-----")
for i in zip(X_new.flat,Y_new.flat,Z_new.flat):
print(i, end=" ")
j += 1
if j % int(len(x_new))==0:
print()
Check()
ax.plot_surface(X, Y, Z,linewidth=0,antialiased=True,cmap="cividis",rstride=1,cstride=1)
ax.plot_surface(X_new, Y_new, Z_new, linewidth=0, antialiased=True, cmap=cm.winter, rstride=1, cstride=1)
plt.show()
</code></pre>
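<p>Note that <code>x_new</code> and <code>y_new</code> above lie far outside the data ranges [1, 3] and [0.05, 1], which is consistent with the constant output. For comparison, a sketch with <code>RegularGridInterpolator</code> (the replacement for the deprecated <code>interp2d</code> in recent SciPy), querying points inside the data range:</p>

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.05, 0.5, 1.0])
Z = np.array([[-1.0, -0.5, 2.0],
              [-2.0, 1.5, 3.5],
              [-1.5, 2.5, 5.0]])

interp = RegularGridInterpolator((x, y), Z)

# query points must lie inside [1, 3] x [0.05, 1]
x_new = np.linspace(1, 3, 5)
y_new = np.linspace(0.05, 1, 5)
X_new, Y_new = np.meshgrid(x_new, y_new, indexing="ij")
pts = np.stack([X_new.ravel(), Y_new.ravel()], axis=-1)
Z_new = interp(pts).reshape(5, 5)
print(Z_new[0, 0], Z_new[-1, -1])  # -1.0 5.0, matching the data corners
```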
| <python><numpy><matplotlib><scipy><interpolation> | 2023-11-26 07:42:48 | 1 | 491 | M_Sea |
77,550,486 | 7,077,532 | Convert Time in hh:mm:ss Format to Total Minutes in pandas | <p>I have the following sample input table below. The <code>Time</code> column is in the format of <code>hh:mm:ss</code> or hours, minutes, and seconds.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>Jim</td>
<td>1:33:04</td>
</tr>
<tr>
<td>Chrissy</td>
<td>0:06:39</td>
</tr>
<tr>
<td>Billy</td>
<td>10:00:02</td>
</tr>
</tbody>
</table>
</div>
<p>The code to create the above table is:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({'Name':["Jim","Chrissy","Billy"], 'Time':['1:33:04', '0:06:39', '10:00:02']})
</code></pre>
<p>I want to create a new column called "_timemin" that converts the Time column to minutes. For example, 10:00:02 would be equal to 600.03 minutes.</p>
<p>I tried to apply the following code but it didn't work:</p>
<pre class="lang-py prettyprint-override"><code>df['_timemin'] = df['Time'].str.split(':').apply(lambda x: (int(x[0])*60) + int(x[1])) + int(x[2]/60)
</code></pre>
<p>... the above code produces the error:</p>
<pre class="lang-none prettyprint-override"><code>NameError: name 'x' is not defined
</code></pre>
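<p>For reference, one way to sidestep the manual splitting entirely is to let pandas parse the <code>hh:mm:ss</code> strings as timedeltas:</p>

```python
import pandas as pd

df = pd.DataFrame({'Name': ["Jim", "Chrissy", "Billy"],
                   'Time': ['1:33:04', '0:06:39', '10:00:02']})

# parse hh:mm:ss into timedeltas, then convert total seconds to minutes
df['_timemin'] = pd.to_timedelta(df['Time']).dt.total_seconds() / 60
print(df['_timemin'].round(2).tolist())  # [93.07, 6.65, 600.03]
```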
| <python><pandas><time><type-conversion><calculated-columns> | 2023-11-26 04:47:57 | 2 | 5,244 | PineNuts0 |
77,550,426 | 9,404,261 | What is the fastest way to map numpy array of unsigned integer {0,1} to float {1,-1} | <p>I have a numpy array of <code>np.uint64</code> holding only <code>0</code> or <code>1</code> values, and I have to map <code>0</code> to <code>np.float64(1.0)</code>, and <code>1</code> to <code>np.float64(-1.0)</code>.</p>
<p>Since the interpreter doesn't know that it only has to convert <code>0</code> and <code>1</code>, it uses a costly general algorithm, so I thought of using an array with the results and the <code>uint64</code> as an index into the array, avoiding any conversion, but it is even slower.</p>
<pre><code>import numpy as np
import timeit
random_bit = np.random.randint(0, 2, size=(10000), dtype=np.uint64)
def np_cast(random_bit):
vectorized_result = 1.0 - 2.0 * np.float64(random_bit)
return vectorized_result
def product(random_bit):
mapped_result = 1.0 - 2.0 * random_bit
return mapped_result
np_one_minus_one = np.array([1.0, -1.0]).astype(np.float64)
def _array(random_bit):
mapped_result = np_one_minus_one[random_bit]
return mapped_result
one = np.float64(1)
minus_two = np.float64(-2)
def astype(random_bit):
mapped_result = one + minus_two * random_bit.astype(np.float64)
return mapped_result
function_list = [np_cast, product, _array, astype]
print("start benchmark")
for function in function_list:
_time = timeit.timeit(lambda: function(random_bit), number=100000)
print(f"{function.__name__}: {_time:.3f} seconds")
</code></pre>
<p>I get these times:</p>
<pre><code>np_cast: 178.604 seconds
product: 172.939 seconds
_array: 239.305 seconds
astype: 186.031 seconds
</code></pre>
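<p>For completeness, another candidate worth benchmarking alongside the above is <code>np.where</code>, which selects between the two constants directly; it produces the same values as the arithmetic mapping:</p>

```python
import numpy as np

bits = np.random.default_rng(0).integers(0, 2, size=10_000, dtype=np.uint64)

# two equivalent mappings of {0, 1} -> {1.0, -1.0}
arithmetic = 1.0 - 2.0 * bits.astype(np.float64)
selected = np.where(bits == 0, 1.0, -1.0)

print(arithmetic.dtype, np.array_equal(arithmetic, selected))  # float64 True
```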
| <python><numpy><casting> | 2023-11-26 04:18:11 | 1 | 609 | tutizeri |
77,550,421 | 308,827 | Use different colors for different parts of a line in seaborn lineplot | <p>Related to the accepted answer of this question: <a href="https://stackoverflow.com/questions/77471818/exclude-subplots-without-any-data-and-left-align-the-rest-in-relplot">Exclude subplots without any data and left-align the rest in relplot</a></p>
<p>Is there a way to use the seaborn lineplot such that values in certain ranges are painted in one color and values in other ranges in other colors? I want to use green to paint <code>Z-Score CEI</code> values between -1 and 1, and red to paint any <code>Z-Score CEI</code> values < -1 or > 1. Essentially the same line will have different colors based on its value at each point. The x-axis will be the <code>Stage</code> values.</p>
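<p>One lightweight approach to sketch is plotting the same series twice with complementary NaN masks, since seaborn/matplotlib leave gaps at NaNs: one call in green for the in-range copy and one in red for the out-of-range copy. The data below is made up, and segments crossing the threshold need a shared boundary point to avoid gaps:</p>

```python
import numpy as np

z = np.array([0.2, 0.8, 1.4, 0.9, -1.5, -0.3])  # hypothetical Z-Score CEI values

inside = np.where(np.abs(z) <= 1, z, np.nan)   # plot this copy in green
outside = np.where(np.abs(z) > 1, z, np.nan)   # plot this copy in red

print(inside, outside)
```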
| <python><pandas><seaborn> | 2023-11-26 04:17:02 | 1 | 22,341 | user308827 |
77,550,265 | 313,768 | Redundant prototype parameter specification in ctypes | <p>The <a href="https://docs.python.org/3/library/ctypes.html#ctypes-function-prototypes" rel="nofollow noreferrer">ctypes function <code>prototype</code> specification</a> says</p>
<blockquote>
<p>The first item is an integer containing a combination of direction flags for the parameter: [...]</p>
<p>4: Input parameter which defaults to the integer zero.</p>
</blockquote>
<p>and shortly after,</p>
<blockquote>
<p>The optional third item is the default value for this parameter.</p>
</blockquote>
<p>Why does "type 4" exist when the exact same thing, seemingly, can be specified by writing <code>0</code> in the third tuple element? Are they indeed equivalent? Why would one be preferred over the other?</p>
<p>In fact, there's some evidence that they aren't equivalent: if I define <a href="https://learn.microsoft.com/en-us/windows/desktop/api/wlanapi/nf-wlanapi-wlanregisternotification" rel="nofollow noreferrer">WlanRegisterNotification</a> like</p>
<pre><code>proto = ctypes.WINFUNCTYPE(
ctypes.wintypes.DWORD,
ctypes.wintypes.HANDLE,
ctypes.wintypes.DWORD,
ctypes.wintypes.BOOL,
WLAN_NOTIFICATION_CALLBACK,
ctypes.wintypes.LPVOID,
ctypes.wintypes.LPVOID,
ctypes.POINTER(ctypes.wintypes.DWORD),
)
fun = proto(
('WlanRegisterNotification', wlanapi),
(
(IN, 'hClientHandle'),
(IN, 'dwNotifSource'),
(IN, 'bIgnoreDuplicate'),
(IN | DEFAULT_ZERO, 'funcCallback'),
(IN | DEFAULT_ZERO, 'pCallbackContext'),
(IN | DEFAULT_ZERO, 'pReserved'),
(OUT, 'pdwPrevNotifSource'),
),
)
</code></pre>
<p>It behaves nonsensically when passed values for the first four parameters:</p>
<pre class="lang-none prettyprint-override"><code>TypeError: call takes exactly 3 arguments (4 given)
</code></pre>
<p>Why should it assume that there are exactly three arguments, when arguments four, five and six are given defaults (but should accept explicit values)? Removing <code>DEFAULT_ZERO</code> and adding <code>, None</code> solves the problem, but is not a satisfying answer.</p>
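<p>The paramflags machinery itself is portable, so the explicit-default form can be sketched against <code>libm</code>'s <code>pow</code> (a stand-in, since the Windows-specific parts above will not run elsewhere). A third tuple item supplies the default value, which "type 4" merely hard-codes to zero:</p>

```python
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
proto = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double, ctypes.c_double)

# flag 1 = input parameter; the optional third tuple item is an explicit default
paramflags = ((1, "base"), (1, "exponent", 2.0))
pow_ = proto(("pow", libm), paramflags)

print(pow_(3.0), pow_(2.0, 10.0))  # 9.0 1024.0
```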
| <python><ctypes> | 2023-11-26 02:44:26 | 1 | 16,660 | Reinderien |
77,550,257 | 5,837,992 | Loading File into DuckDB using Python Fails Due to "\N" Values Used to Represent Nulls | <p>I'm trying to load a CSV into Python, but the file keeps failing because one of the fields has a '\N' to represent null values in a field that is Integer. I can't figure out how to deal with this - I'd like to convert it on the way in.</p>
<p>It would be great if I could ignore the error and insert the rest of the records into the table, but that doesn't seem to be a thing.</p>
<p>Any help would be much appreciated</p>
<p>So the following code</p>
<pre><code>con.sql("INSERT INTO getNBBOtimes SELECT * FROM read_csv_auto('G:/temp/timeexport.csv')")
</code></pre>
<p>results in the following error</p>
<pre><code>InvalidInputException Traceback (most recent call last)
<timed eval> in <module>
InvalidInputException: Invalid Input Error: Could not convert string '\N' to INT64 in column "column3", at line 856438.
Parser options:
file=G:/temp/timeexport.csv
delimiter=',' (auto detected)
quote='"' (auto detected)
escape='"' (auto detected)
header=0 (auto detected)
sample_size=20480
ignore_errors=0
all_varchar=0.
Consider either increasing the sample size (SAMPLE_SIZE=X [X rows] or SAMPLE_SIZE=-1 [all rows]), or skipping column conversion (ALL_VARCHAR=1)
</code></pre>
<p>I figured I would try to handle the error on the way in, but nothing seems to work</p>
<pre><code>con.sql("CREATE TABLE test1 as SELECT NULLIF(column1,'\\N') , NULLIF(column2,'\\N'),NULLIF(column3,'\\N'),NULLIF(column4,'\\N'),NULLIF(column2,'\\N') FROM read_csv_auto('G:/temp/timeexport.csv')")
</code></pre>
<p>returns the following error:</p>
<pre><code>SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 46-47: malformed \N character escape
</code></pre>
<p>I tried this</p>
<pre><code>con.sql("CREATE TABLE test1 as SELECT NULLIF(column1,repr('\\N')) , NULLIF(column2,repr('\\N')),NULLIF(column3,repr('\\N')),NULLIF(column4,repr('\\N')),NULLIF(column2,repr('\\N')) FROM read_csv_auto('G:/temp/timeexport.csv')")
</code></pre>
<p>and got this error</p>
<pre><code>CatalogException: Catalog Error: Scalar Function with name repr does not exist!
Did you mean "exp"?
</code></pre>
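<p>For reference, DuckDB's CSV reader accepts a <code>nullstr</code> option naming the string to treat as NULL, and a Python raw string avoids the <code>unicodeescape</code> problem entirely. A sketch (the table and path are taken from the question; <code>con</code> is assumed to be an existing DuckDB connection, so the call itself is left commented out):</p>

```python
# nullstr tells DuckDB's CSV reader which string to read as NULL; the raw
# string (r"...") keeps Python from treating \N as an escape sequence
query = r"INSERT INTO getNBBOtimes SELECT * FROM read_csv_auto('G:/temp/timeexport.csv', nullstr='\N')"

# con.sql(query)   # would run against an existing DuckDB connection named con
print(query)
```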
| <python><csv><data-transform><duckdb> | 2023-11-26 02:37:36 | 1 | 1,980 | Stumbling Through Data Science |
77,550,203 | 13,148,680 | I am getting this error with asdf when running 'python' in my terminal on Mac | <p>I go to my terminal on my Mac and run the 'python' command, but receive the error:</p>
<pre><code>/Users/mattlaszcz/.asdf/shims/python: line 3:
/opt/homebrew/Cellar/asdf/0.11.0/libexec/bin/asdf: No such file or directory
/Users/mattlaszcz/.asdf/shims/python: line 3: exec:
/opt/homebrew/Cellar/asdf/0.11.0/libexec/bin/asdf: cannot execute: No such file or directory
</code></pre>
<p>Why would I be missing this file or directory?</p>
| <python><macos><homebrew><asdf> | 2023-11-26 02:06:54 | 1 | 419 | Matt Laszcz |
77,550,161 | 3,156,300 | QGraphicsItem only visible when parent selected | <p>I'm trying to mimic path editing similar to what you would see in Photoshop, which interacts this way...</p>
<ol>
<li>You select the Path and its Points become visible</li>
<li>Users can click and drag any Point item of the Path, to adjust the Path</li>
<li>When users click and drag the Path directly it moves the Path</li>
<li>When the Path is deselected the Points become hidden again</li>
</ol>
<p>Where I'm having issues:</p>
<ol>
<li>Making the Points hidden when the Path is deselected, but keeping them visible while a Point of the selected path is being edited</li>
</ol>
<p>Here is what i have:</p>
<p><a href="https://i.sstatic.net/EMXMs.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EMXMs.gif" alt="enter image description here" /></a></p>
<p>Here is a reference to something I'm trying to match:</p>
<p><a href="https://i.sstatic.net/xKEhR.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xKEhR.gif" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>import sys
import math
import random
from PySide2 import QtWidgets, QtGui, QtCore
# SETTINGS
handle_size = 16
handle_color = QtGui.QColor(40,130,230)
handle_radius = 8
class AnnotationPointItem(QtWidgets.QGraphicsEllipseItem):
def __init__(self, positionFlag=0, pos=QtCore.QPointF(), parent=None):
super(AnnotationPointItem, self).__init__(-handle_radius, -handle_radius, 2*handle_radius, 2*handle_radius, parent)
self.setFlags(QtWidgets.QGraphicsItem.ItemIsMovable | QtWidgets.QGraphicsItem.ItemIsSelectable | QtWidgets.QGraphicsItem.ItemSendsScenePositionChanges)
self.setPen(QtGui.QPen(handle_color, 4, QtCore.Qt.SolidLine))
self.setBrush(QtGui.QBrush(QtGui.QColor('white')))
self.positionFlag = positionFlag
def paint(self, painter, option, widget=None):
# Remove the selection outline
# if self.isSelected():
# option.state &= ~QtWidgets.QStyle.State_Selected
super(AnnotationPointItem, self).paint(painter, option, widget)
# def mousePressEvent(self, event):
# # Handle the event, but don't propagate to the parent
# # event.accept()
# print('clicked....')
# return super(AnnotationPointItem, self).mousePressEvent(event)
def itemChange(self, change, value):
# print(change, self.isSelected())
if change == QtWidgets.QGraphicsItem.ItemPositionChange:
# print('ItemPositionChange')
pass
elif change == QtWidgets.QGraphicsItem.ItemPositionHasChanged:
# print('ItemPositionHasChanged')
parent = self.parentItem()
if parent:
# Get the position of the cursor in the view's coordinates
if self.positionFlag == 0:
parent.setPoints(start=self.pos())
elif self.positionFlag == 1:
parent.setPoints(end=self.pos())
elif change == QtWidgets.QGraphicsItem.ItemSelectedChange:
pass
return super(AnnotationPointItem, self).itemChange(change, value)
class AnnotationPathItem(QtWidgets.QGraphicsLineItem):
def __init__(self,
start=QtCore.QPointF(),
end=QtCore.QPointF(),
color=QtCore.Qt.green,
thickness=10,
parent=None):
super(AnnotationPathItem, self).__init__(start.x(), start.y(), end.x(), end.y(), parent)
self._color = color
self._thickness = thickness
self.setFlags(QtWidgets.QGraphicsItem.ItemIsMovable | QtWidgets.QGraphicsItem.ItemIsSelectable)
self.setPen(QtGui.QPen(self._color, self._thickness, QtCore.Qt.SolidLine))
# child items
self.startPointItem = AnnotationPointItem(positionFlag=0, parent=self)
self.startPointItem.hide()
self.startPointItem.setPos(self.line().p1())
self.endPointItem = AnnotationPointItem(positionFlag=1, parent=self)
self.endPointItem.hide()
self.endPointItem.setPos(self.line().p2())
def itemChange(self, change, value):
if change == QtWidgets.QGraphicsItem.ItemSelectedChange:
self.selectionChanged(value)
return super(AnnotationPathItem, self).itemChange(change, value)
def selectionChanged(self, selected):
# Implement what you want to do when the selection changes
print(self.startPointItem.isSelected(), self.endPointItem.isSelected())
if selected or self.startPointItem.isSelected() or self.endPointItem.isSelected():
self.startPointItem.show()
self.endPointItem.show()
# else:
# self.startPointItem.hide()
# self.endPointItem.hide()
def paint(self, painter, option, widget=None):
# Remove the selection outline
if self.isSelected():
option.state &= ~QtWidgets.QStyle.State_Selected
super(AnnotationPathItem, self).paint(painter, option, widget)
def setPoints(self, start=None, end=None):
currentLine = self.line()
if start != None:
currentLine.setP1(start)
if end != None:
currentLine.setP2(end)
self.setLine(currentLine)
class MainWindow(QtWidgets.QWidget):
def __init__(self):
super(MainWindow, self).__init__()
self.resize(1200,1200)
self.scene = QtWidgets.QGraphicsScene(self)
self.scene.setBackgroundBrush(QtGui.QColor(40,40,40))
self.view = QtWidgets.QGraphicsView(self)
self.view.setSceneRect(-4000, -4000, 8000, 8000)
self.view.setRenderHints(QtGui.QPainter.Antialiasing | QtGui.QPainter.SmoothPixmapTransform)
self.view.setMouseTracking(True)
self.view.setScene(self.scene)
self.addButton = QtWidgets.QPushButton("Add Annotation", self)
self.addButton.clicked.connect(self.add_annotation)
layout = QtWidgets.QVBoxLayout(self)
layout.addWidget(self.view)
layout.addWidget(self.addButton)
self.setLayout(layout)
# samples
item = AnnotationPathItem(QtCore.QPointF(-70, -150), QtCore.QPointF(150, -350))
self.scene.addItem(item)
def add_annotation(self):
r = random.randint(0,255)
g = random.randint(0,255)
b = random.randint(0,255)
color = QtGui.QColor(r,g,b)
startPos = QtCore.QPointF(random.randint(-200,200), random.randint(-200,200))
endPos = QtCore.QPointF(random.randint(-200,200), random.randint(-200,200))
item = AnnotationPathItem(startPos, endPos, color)
self.scene.addItem(item)
if __name__ == '__main__':
app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec_()
</code></pre>
| <python><pyside2><qgraphicsitem> | 2023-11-26 01:42:48 | 2 | 6,178 | JokerMartini |
77,550,133 | 7,077,532 | Python Dataframe: If Value in Column Equals String AND Value in Second Column is Null, Replace the Null Cell With Another String | <p>I have the following sample pandas DataFrame:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Type</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>sesame</td>
</tr>
<tr>
<td>B</td>
<td></td>
</tr>
<tr>
<td>A</td>
<td></td>
</tr>
<tr>
<td>C</td>
<td>tea</td>
</tr>
<tr>
<td>A</td>
<td>bun</td>
</tr>
</tbody>
</table>
</div>
<p>The code to create the table is below:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Type':["A","B","A", "C", "A"],'Value':["sesame", None, None, "tea", "bun"]})
</code></pre>
<p>I want to do the following:</p>
<ul>
<li>If Type column equals "A" AND Value column is null, then replace the null with "custard"</li>
<li>If Type column equals "B" AND Value column is null, then replace null with "peanuts"</li>
<li>Otherwise leave Value column as is</li>
</ul>
<p>My desired output table is below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Type</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>sesame</td>
</tr>
<tr>
<td>B</td>
<td>peanuts</td>
</tr>
<tr>
<td>A</td>
<td>custard</td>
</tr>
<tr>
<td>C</td>
<td>tea</td>
</tr>
<tr>
<td>A</td>
<td>bun</td>
</tr>
</tbody>
</table>
</div>
<p>I can't even seem to figure out the first bullet point. I tried the following code:</p>
<pre><code>df.loc[df['Type'] == 'A', ['Value']].fillna(value='custard', axis=1)
</code></pre>
<p>But it produces the wrong output:
<a href="https://i.sstatic.net/FHiUb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FHiUb.png" alt="enter image description here" /></a></p>
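<p>The snippet above returns a filled <em>copy</em> of the selection instead of writing back into <code>df</code>, which is why the output looks wrong. One common pattern (a sketch, not the only option) is boolean-mask assignment with <code>.loc</code>, combining both conditions:</p>

```python
import pandas as pd

df = pd.DataFrame({'Type': ["A", "B", "A", "C", "A"],
                   'Value': ["sesame", None, None, "tea", "bun"]})

# write back into df only where BOTH conditions hold
df.loc[(df['Type'] == 'A') & (df['Value'].isna()), 'Value'] = 'custard'
df.loc[(df['Type'] == 'B') & (df['Value'].isna()), 'Value'] = 'peanuts'

print(df['Value'].tolist())   # ['sesame', 'peanuts', 'custard', 'tea', 'bun']
```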
| <python><filter><replace><conditional-statements><null> | 2023-11-26 01:25:51 | 1 | 5,244 | PineNuts0 |
77,549,958 | 12,158,757 | How to use a dropdown widget to highlight selected categorical variable in stacked bar chart? | <p>I am learning <code>matplotlib</code> and <code>ipywidgets</code> and am attempting to design an interactive bar chart in which the selected category can be highlighted.</p>
<h1>Data Example</h1>
<p>Assuming I have a dataframe:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plot
data = {"Production":[10000, 12000, 14000],
"Sales":[9000, 10500, 12000]}
index = ["2017", "2018", "2019"]
df = pd.DataFrame(data=data, index=index)
df.plot.bar(stacked=True,rot=15, title="Annual Production Vs Annual Sales")
</code></pre>
<p>The resulting stacked bar chart looks like below:</p>
<p><a href="https://i.sstatic.net/m6aAo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m6aAo.png" alt="enter image description here" /></a></p>
<h1>What I am after</h1>
<p>If we select <code>production</code> in the dropdown list, the blue bars will be highlighted by adding a box (or frame) surrounding them. The same should happen to <code>Sales</code> if it is selected.</p>
<h1>Question</h1>
<p>I am not sure if <code>ipywidgets</code> and <code>matplotlib</code> are enough to implement this feature, or whether we need another package. If it is possible with those two packages, could anyone share some clues? Thanks!</p>
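<p>For the highlighting itself, matplotlib alone should be enough: each column drawn by <code>df.plot.bar</code> ends up as one <code>BarContainer</code> in <code>ax.containers</code>, so a dropdown callback only needs to restyle the edges of the chosen container. A headless sketch of that step (the widget wiring, e.g. via <code>ipywidgets.interact</code>, is left out, and the red frame is just one possible style):</p>

```python
import matplotlib
matplotlib.use("Agg")            # headless backend, just for this sketch
import pandas as pd

data = {"Production": [10000, 12000, 14000], "Sales": [9000, 10500, 12000]}
df = pd.DataFrame(data=data, index=["2017", "2018", "2019"])
ax = df.plot.bar(stacked=True, rot=15, title="Annual Production Vs Annual Sales")

def highlight(column):
    # ax.containers holds one BarContainer per plotted column, in column order
    for i, bars in enumerate(ax.containers):
        selected = df.columns[i] == column
        for bar in bars:
            bar.set_edgecolor("red" if selected else "none")
            bar.set_linewidth(3 if selected else 0)
    ax.figure.canvas.draw_idle()   # refresh after a dropdown change

highlight("Production")   # a Dropdown's callback would call this with its value
```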
| <python><pandas><matplotlib><ipywidgets><matplotlib-widget> | 2023-11-25 23:44:51 | 1 | 105,741 | ThomasIsCoding |
77,549,857 | 9,148,880 | Iterate over a list of lists, assert multiple conditions and render when true in Jinja2 | <p>I have the following variables to use in my Jinja template:</p>
<pre class="lang-py prettyprint-override"><code>list_python_version = [3, 9]
all_python_version = [
[3, 8],
[3, 9],
[3, 10],
[3, 11],
[3, 12]
]
</code></pre>
<p>Is there a way to use a combination of Jinja filters and tests so it iterates over <code>all_python_version</code>, checks that both the first and second elements of each list are greater than or equal to the corresponding elements of <code>list_python_version</code>, and, when the conditions are met, generates the <code>MAJOR.MINOR</code> version string and joins all valid versions into a single string?</p>
<p>This way, considering the variables above, the rendered template should give <code>3.9, 3.10, 3.11, 3.12</code>?</p>
<p>I have tried the following expression:</p>
<pre><code>{{
all_python_version |
select('[0] >= minimal_python_version[0] and [1] >= minimal_python_version[1]') |
join('.') |
join(', ')
}}
</code></pre>
<p>But it will fail since the <code>select</code> filter asks for a function, and so far I have not found in Jinja's documentation any hint as to how we can use conditionals to filter values inside an expression.</p>
<p>Alternatively, the solution could encompass a <code>for</code> loop in Jinja, but I need the string to be rendered in one line, and if we do:</p>
<pre><code>{% for version in all_python_version %}
{% if version[0] >= list_python_version[0] and version[1] >= list_python_version[1] %}
{{ version[0] ~ '.' ~ version[1] ~ ',' }}
{% endif %}
{% endfor %}
</code></pre>
<p>Each version will render on its own line, with a trailing <code>,</code> also being rendered.</p>
<p>Is there a way to get the versions rendered on a single line in pure Jinja and its plugins?</p>
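<p>Pure Jinja can do this with an inline <code>if</code> on the <code>for</code> loop plus whitespace control; <code>loop.last</code> then refers to the filtered sequence, so no trailing comma is rendered. A sketch driven from Python (variable names follow the question):</p>

```python
from jinja2 import Template

template = Template(
    "{%- for v in all_python_version"
    " if v[0] >= list_python_version[0] and v[1] >= list_python_version[1] -%}"
    "{{ v[0] }}.{{ v[1] }}{{ ', ' if not loop.last else '' }}"
    "{%- endfor -%}"
)

rendered = template.render(
    all_python_version=[[3, 8], [3, 9], [3, 10], [3, 11], [3, 12]],
    list_python_version=[3, 9],
)
print(rendered)  # 3.9, 3.10, 3.11, 3.12
```

<p>Note that this elementwise comparison mirrors the condition in the question, so a hypothetical version like <code>[4, 0]</code> would be excluded; comparing the lists directly (<code>v &gt;= list_python_version</code>, which Jinja compares lexicographically like Python) would handle that case.</p>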
| <python><jinja2> | 2023-11-25 22:57:01 | 2 | 647 | manoelpqueiroz |
77,549,759 | 6,502,500 | ValueError: y contains previously unseen labels when LabelEncoding MultiIndex dataframe | <p>So currently I have an 18 x 80 pandas DataFrame whose columns are grouped under two labels "TEAM ONE" and "TEAM TWO". All the categorical columns are listed in an array. Because the data is not numerical (the columns contain strings), I need to encode them using the LabelEncoder.</p>
<pre><code>categoricals = ['TRAITS', 'UNIT_1', 'UNIT_2', 'UNIT_3', 'AUGMENT_1', 'AUGMENT_2', 'AUGMENT_3']
# Creating a label encoder dictionary for each categorical column
le = {col: LabelEncoder() for col in categoricals}
# example iteration
# d = train_data[('TEAM ONE', 'TRAITS')] = le['TRAITS'].fit_transform(train_data[('TEAM ONE', 'TRAITS')].astype(str))
for col in train_data.columns.levels[0]:
for sub_col in train_data.columns.levels[1]:
if sub_col in le:
train_data[(col, sub_col)] = le[sub_col].fit_transform(train_data[(col, sub_col)].astype(str))
test_data[(col, sub_col)] = le[sub_col].transform(test_data[(col, sub_col)].astype(str))
</code></pre>
<p>The <code>train_data</code> line in the <code>if</code> statement passes just fine, but when it gets to <code>test_data</code> it breaks, raising the ValueError "y contains previously unseen labels". Unlike OneHotEncoder, LabelEncoder doesn't have a <code>handle_unknown</code> parameter. There's also <code>try/except ValueError</code>, but this seems impractical. Is there a way to make the encoders aware of these values before transforming <code>test_data</code>?</p>
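<p>Since LabelEncoder is a plain ID mapping rather than a fitted model, one workaround (a sketch with made-up toy values; whether it is acceptable depends on how the encodings are used downstream) is to fit each encoder on the union of the train and test values:</p>

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# toy stand-ins for one categorical column from each split
train_col = pd.Series(["knight", "mage", "rogue"])
test_col = pd.Series(["mage", "druid"])       # 'druid' never appears in train

le = LabelEncoder()
# fit on the union of both splits so every label is known to the encoder
le.fit(pd.concat([train_col, test_col]).astype(str))

encoded_test = le.transform(test_col.astype(str))
print(encoded_test.tolist())
```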
| <python><pandas> | 2023-11-25 22:17:04 | 1 | 1,902 | Mangohero1 |
77,549,564 | 4,581,085 | from sklearn.metrics can't import PredictionErrorDisplay | <p>I have scikit-learn 1.3.2 installed, and I try to import PredictionErrorDisplay, but get an error that I'm not able to resolve:</p>
<pre><code>from sklearn.metrics import PredictionErrorDisplay
</code></pre>
<pre><code>ImportError: cannot import name 'PredictionErrorDisplay' from 'sklearn.metrics'
</code></pre>
<p>Other functions have been running fine.</p>
<p>This is a relatively fresh PC and I haven't installed many packages, so I don't think it's dependency hell.</p>
| <python><scikit-learn> | 2023-11-25 20:51:44 | 1 | 985 | Alex F |
77,549,521 | 898,160 | Running sphinx-build leads to AttributeError: 'Values' object has no attribute 'link_base' | <p>I have a Django 4.2 project and wish to run Sphinx to generate the docs. When I run</p>
<pre><code>sphinx-build -b html docs_source/ docs/
</code></pre>
<p>I got the following error:</p>
<blockquote>
<p>Exception occurred:</p>
<p>File
"C:\ProgramData\Anaconda3\envs\django_3_8\Lib\site-packages\django\contrib\admindocs\utils.py",
line 116, in _role</p>
<pre><code>inliner.document.settings.link_base,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
</code></pre>
<p>AttributeError: 'Values' object has no attribute 'link_base'</p>
<p>The full traceback has been saved in
C:\Users\user\AppData\Local\Temp\sphinx-err-r_cbc4k5.log, if you want
to report the issue to the developers.</p>
</blockquote>
<p>The list of the installed packages is as follows:</p>
<pre class="lang-none prettyprint-override"><code>Package Version
----------------------------- ---------
alabaster 0.7.13
amqp 5.1.1
anyio 3.5.0
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
asgiref 3.7.2
asttokens 2.0.5
attrs 22.1.0
autopep8 1.6.0
Babel 2.13.1
backcall 0.2.0
beautifulsoup4 4.12.2
billiard 3.6.4.0
bleach 4.1.0
celery 5.1.2
certifi 2022.12.7
cffi 1.15.1
charset-normalizer 3.3.2
click 7.1.2
click-didyoumean 0.0.3
click-plugins 1.1.1
click-repl 0.2.0
colorama 0.4.6
comm 0.1.2
coverage 7.2.2
cryptography 39.0.1
debugpy 1.5.1
decorator 5.1.1
defusedxml 0.7.1
diagrams 0.23.3
Django 4.2.7
django-debug-toolbar 4.0.0
django-nose 1.4.6
django-postgres-extra 2.0.8
djangorestframework 3.14.0
djangorestframework-simplejwt 4.4.0
docutils 0.20.1
drf-extra-fields 3.5.0
drf-spectacular 0.26.2
entrypoints 0.4
et-xmlfile 1.1.0
executing 0.8.3
fastjsonschema 2.16.2
filetype 1.2.0
graphviz 0.20.1
idna 3.4
imagesize 1.4.1
inflection 0.5.1
ipykernel 6.19.2
ipython 8.18.0
ipython-genutils 0.2.0
jedi 0.18.1
Jinja2 3.1.2
jsonschema 4.17.3
jupyter_client 8.1.0
jupyter_core 5.3.0
jupyter-server 1.23.4
jupyterlab-pygments 0.1.2
kombu 5.3.1
lxml 4.9.2
Mako 1.3.0
Markdown 3.5.1
MarkupSafe 2.1.1
matplotlib-inline 0.1.6
mistune 0.8.4
nbclassic 0.5.5
nbclient 0.5.13
nbconvert 6.5.4
nbformat 5.7.0
nest-asyncio 1.5.6
nose 1.3.7
notebook 6.5.4
notebook_shim 0.2.2
openpyxl 3.0.10
packaging 23.0
pandocfilters 1.5.0
parso 0.8.3
pdoc3 0.10.0
pickleshare 0.7.5
Pillow 9.4.0
pip 23.0.1
platformdirs 2.5.2
prometheus-client 0.14.1
prompt-toolkit 3.0.36
psutil 5.9.0
psycopg2 2.9.3
pure-eval 0.2.2
pycodestyle 2.10.0
pycparser 2.21
Pygments 2.15.1
PyJWT 2.4.0
pyOpenSSL 23.1.1
pyrsistent 0.18.0
python-dateutil 2.8.2
pytz 2022.7
pywin32 305.1
pywinpty 2.0.10
PyYAML 6.0
pyzmq 25.0.2
redis 3.5.3
requests 2.31.0
scout-apm 2.26.1
Send2Trash 1.8.0
setuptools 66.0.0
six 1.16.0
sniffio 1.2.0
snowballstemmer 2.2.0
soupsieve 2.4
Sphinx 7.2.6
sphinx-autodoc-typehints 1.25.2
sphinxcontrib-applehelp 1.0.7
sphinxcontrib-devhelp 1.0.5
sphinxcontrib-htmlhelp 2.0.4
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.6
sphinxcontrib-serializinghtml 1.1.9
sqlparse 0.4.3
stack-data 0.2.0
tblib 3.0.0
terminado 0.17.1
tinycss2 1.2.1
toml 0.10.2
tornado 6.2
traitlets 5.7.1
typed-ast 1.5.4
typing_extensions 4.5.0
tzdata 2021.1
uritemplate 4.1.1
urllib3 1.26.15
urllib3-secure-extra 0.1.0
vine 5.0.0
wcwidth 0.2.5
webencodings 0.5.1
websocket-client 0.58.0
wheel 0.38.4
wrapt 1.15.0
</code></pre>
<p>Any hint?</p>
<p>I tried again today with a fresh environment, Python, Django and Sphinx, and got the same error message. I cannot paste the full Sphinx traceback here since this editor does not allow it.</p>
| <python><django><python-sphinx> | 2023-11-25 20:38:01 | 1 | 801 | pittnerf |
77,549,397 | 9,975,452 | "Full rank" error when estimating OLS with statsmodels | <p>I have historical data for crop yield, annual temperature and annual precipitation for a given region. My goal is to estimate the following linear model:
<a href="https://i.sstatic.net/L3hbo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L3hbo.png" alt="enter image description here" /></a></p>
<p>In which y is the annual crop yield, t stands for time (year), tmp for temperature (annual average) and p for precipitation (annual sum). The squared terms capture the influence of extreme values.</p>
<p>My code is:</p>
<pre><code>import pandas as pd
import statsmodels.formula.api as smf
df = pd.read_csv('https://raw.githubusercontent.com/kevinkuranyi/data/main/crop_yield.csv')
model = smf.ols(formula = 'y_banana ~ year+year2+tmp+tmp2+pre+pre2+tmp_pre+tmp2_pre2',
data=df, missing='drop').fit(cov_type='HAC', cov_kwds={'maxlags': 2})
model.summary()
</code></pre>
<p>By running this, I'm getting the following error message:</p>
<pre><code>/usr/local/lib/python3.10/dist-packages/statsmodels/base/model.py:1888: ValueWarning: covariance of constraints does not have full rank. The number of constraints is 8, but rank is 5
warnings.warn('covariance of constraints does not have full '
</code></pre>
<p>I suspected multicollinearity, but no matter which variable I omit, as long as I include more than 4 variables (even without the interaction or squared terms that could be linear combinations) I get this error.
I included several combinations as examples in this <a href="https://colab.research.google.com/drive/1y7fZ2ZzmPoVzVjoblPZbsBwfwt3lHDo0?usp=sharing" rel="nofollow noreferrer">Colab notebook</a>.</p>
<p>What could be the problem?</p>
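<p>One possibility worth checking (an assumption about your data, not a certain diagnosis): when a regressor's standard deviation is small relative to its mean, as is typical of annual temperature or precipitation, its square is almost an exact linear function of it, so tmp/tmp2, pre/pre2 and the interaction terms are nearly collinear and the design matrix becomes numerically rank-deficient. A quick check on synthetic data shaped like yours:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
tmp = rng.normal(25.0, 1.0, 50)      # mean much larger than std, like temperature
pre = rng.normal(1200.0, 10.0, 50)   # likewise for annual precipitation

# x**2 = m**2 + 2*m*(x - m) + (x - m)**2 is almost linear in x when std << mean
print(np.corrcoef(tmp, tmp**2)[0, 1])

X = np.column_stack([np.ones(50), tmp, tmp**2, pre, pre**2,
                     tmp * pre, (tmp**2) * (pre**2)])
print(np.linalg.matrix_rank(X), "of", X.shape[1], "columns")
```

<p>If that is what's happening, centering the variables before squaring (e.g. building the squared and interaction terms from <code>tmp - tmp.mean()</code>) usually restores full rank while fitting the same model space.</p>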
| <python><statistics><regression><linear-regression><modeling> | 2023-11-25 19:53:44 | 1 | 470 | Oalvinegro |
77,549,390 | 1,519,417 | pytest does not see the webdriver from the different file | <p>I have a simple test file setup_test.py that works perfectly in the common package</p>
<pre><code>import time
from selenium import webdriver
driver = None
def setup_function():
print("Setup")
global driver
options = webdriver.ChromeOptions()
options.add_experimental_option('excludeSwitches', ['enable-logging'])
driver = webdriver.Chrome(options)
driver.maximize_window()
def teardown_function():
print("Teardown")
if driver:
driver.quit()
def test_me1():
driver.get("https://www.selenium.dev/")
print("Inside t1")
time.sleep(3)
def test_me2():
driver.get("https://www.selenium.dev/")
print("Inside t2")
time.sleep(3)
</code></pre>
<p>Then I moved the following part to the file pre_test.py in the same package</p>
<pre><code>from selenium import webdriver
driver = None
def setup_function():
print("Setup")
global driver
options = webdriver.ChromeOptions()
options.add_experimental_option('excludeSwitches', ['enable-logging'])
driver = webdriver.Chrome(options)
driver.maximize_window()
def teardown_function():
print("Teardown")
if driver:
driver.quit()
</code></pre>
<p>Now the test file contains only</p>
<pre><code>import time
from .pre_test import driver
def test_me1():
driver.get("https://www.selenium.dev/")
print("Inside t1")
time.sleep(3)
def test_me2():
driver.get("https://www.selenium.dev/")
print("Inside t2")
time.sleep(3)
</code></pre>
<p>However, when I run it, I get error message</p>
<p>def test_me1():</p>
<blockquote>
<pre><code> driver.get("https://www.selenium.dev/")
</code></pre>
</blockquote>
<p>E AttributeError: 'NoneType' object has no attribute 'get'</p>
<p>setup_test.py:27: AttributeError</p>
<p>that indicates that setup_test.py does not see the driver, which is None.</p>
<p>Please, help. What is wrong?</p>
| <python><selenium-webdriver><pytest> | 2023-11-25 19:52:37 | 0 | 668 | Vladimir |
77,549,259 | 19,512,611 | Cython Error with installation of psychopy package on Apple Silicon | <p>I am attempting to use the <code>psychopy</code> package (version 2023.2.3), and it appears to install some <code>psychtoolbox</code> files alongside itself. On running <code>import psychtoolbox as ptb</code> on Apple Silicon, I get:</p>
<pre><code>ImportError Traceback (most recent call last)
Cell In[57], [line 1](vscode-notebook-cell:?execution_count=57&line=1)
----> [1](vscode-notebook-cell:?execution_count=57&line=1) import psychtoolbox as ptb
File [~/miniforge3/envs/goodfeeling/lib/python3.10/site-packages/psychtoolbox/__init__.py:28](https://file+.vscode-resource.vscode-cdn.net/Users/davidcsuka/Documents/nanoGPT/~/miniforge3/envs/myenv/lib/python3.10/site-packages/psychtoolbox/__init__.py:28)
[26](file:///Users/davidcsuka/miniforge3/envs/myenv/lib/python3.10/site-packages/psychtoolbox/__init__.py?line=25) from .WaitSecs import WaitSecs
[27](file:///Users/davidcsuka/miniforge3/envs/myenv/lib/python3.10/site-packages/psychtoolbox/__init__.py?line=26) from .GetSecs import GetSecs
---> [28](file:///Users/davidcsuka/miniforge3/envs/myenv/lib/python3.10/site-packages/psychtoolbox/__init__.py?line=27) from .PsychHID import PsychHID
[29](file:///Users/davidcsuka/miniforge3/envs/myenv/lib/python3.10/site-packages/psychtoolbox/__init__.py?line=28) from .IOPort import IOPort
[30](file:///Users/davidcsuka/miniforge3/envs/myenv/lib/python3.10/site-packages/psychtoolbox/__init__.py?line=29) if is_64bits:
ImportError: dlopen(/Users/davidcsuka/miniforge3/envs/myenv/lib/python3.10/site-packages/psychtoolbox/PsychHID.cpython-310-darwin.so, 0x0002): symbol not found in flat namespace '_AllocateHIDObjectFromIOHIDDeviceRef'
</code></pre>
<p>The same thing happens with Python 3.9, and with Python 3.8 (the recommended version) the <code>psychopy</code> installation fails in pip due to:</p>
<pre><code>File "/private/var/folders/9x/x950xm6j3ws6cm5rcv8w34kh0000gn/T/pip-build-env-a0oj0w5k/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 295, in _get_build_requires
self.run_setup()
File "/private/var/folders/9x/x950xm6j3ws6cm5rcv8w34kh0000gn/T/pip-build-env-a0oj0w5k/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 928, in <module>
File "<string>", line 923, in get_cython_extfiles
File "/private/var/folders/9x/x950xm6j3ws6cm5rcv8w34kh0000gn/T/pip-build-env-a0oj0w5k/overlay/lib/python3.8/site-packages/Cython/Build/Dependencies.py", line 1154, in cythonize
cythonize_one(*args)
File "/private/var/folders/9x/x950xm6j3ws6cm5rcv8w34kh0000gn/T/pip-build-env-a0oj0w5k/overlay/lib/python3.8/site-packages/Cython/Build/Dependencies.py", line 1321, in cythonize_one
raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: tables/utilsextension.pyx
[end of output]
</code></pre>
<p>Do I perhaps need to use some different version of Cython, or are various C compilers incompatible with each other? Or maybe this pertains to my operating system?</p>
| <python><pip><cython><apple-silicon><cythonize> | 2023-11-25 19:09:18 | 0 | 3,007 | dcsuka |
77,549,170 | 2,173,773 | PyQt: How to position QToolTip over widget? | <p>When I click "Button1" (see screenshot below) the tooltip appears over "Button2" and not over "Button1" as I wanted:</p>
<p><a href="https://i.sstatic.net/MwOr1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MwOr1.png" alt="enter image description here" /></a></p>
<hr />
<p><strong>t.py</strong></p>
<pre><code>import logging
import sys
# from PyQt6.QtCore import Qt
from PyQt6.QtWidgets import (
QApplication, QMainWindow, QPushButton, QToolTip, QVBoxLayout, QWidget
)
class MainWindow(QMainWindow):
def __init__(self, app: QApplication):
super().__init__()
self.app = app
self.resize(330, 200)
self.setWindowTitle("Testing")
layout = QVBoxLayout()
button1 = QPushButton("Button1")
layout.addWidget(button1)
def button1_clicked() -> None:
self.button_clicked(button1, "Button1")
button1.clicked.connect(button1_clicked)
button2 = QPushButton("Button2")
layout.addWidget(button2)
def button2_clicked() -> None:
self.button_clicked(button2, "Button2")
button2.clicked.connect(button2_clicked)
widget = QWidget()
widget.setLayout(layout)
self.setCentralWidget(widget)
def button_clicked(self, button: QPushButton, txt: str) -> None:
point = button.mapToGlobal(button.pos())
size = button.size()
logging.info(f"{txt} pos: {point}")
logging.info(f"{txt} size: {size}")
QToolTip.showText(point, txt, msecShowTime=4000)
def main():
logging.basicConfig(level=logging.INFO)
app = QApplication(sys.argv)
window = MainWindow(app)
window.show()
app.exec()
if __name__ == '__main__':
main()
</code></pre>
<p>Why does this happen? How to fix it?</p>
| <python><pyqt> | 2023-11-25 18:43:59 | 0 | 40,918 | Håkon Hægland |
77,549,151 | 9,315,690 | How do I ensure that a dependency only gets installed on Windows with Poetry? | <p>I'm developing a Python application using Poetry as project management tool, and I have a dependency which I only want to install on Windows. <a href="https://peps.python.org/pep-0425/" rel="nofollow noreferrer">PEP 425</a> says the following on the matter:</p>
<blockquote>
<p>The platform tag is simply distutils.util.get_platform() with all hyphens - and periods . replaced with underscore _.</p>
</blockquote>
<p>If I run <code>distutils.util.get_platform()</code> on a 64-bit Windows host, I get this:</p>
<pre><code>>>> import distutils.util
>>> distutils.util.get_platform()
'win-amd64'
</code></pre>
<p>So far so good. I now have the platform tag for 64-bit Windows. The issue is that I don't want to specify 64-bit Windows specifically, but rather Windows in general, as the dependency should be installed on all variants of Windows. For Linux, this tag is just "linux", and macOS is "darwin", but I can't seem to find any examples for Windows.</p>
<p>What value should I pass to <code>$ poetry add --platform</code> to ensure that a dependency only gets installed on Windows? Or, is there a better way to do it?</p>
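<p>For what it's worth, on Windows <code>sys.platform</code> is <code>"win32"</code> for both 32- and 64-bit interpreters, so an environment marker is usually simpler than a wheel platform tag. A sketch of the resulting <code>pyproject.toml</code> entry (here <code>pywin32</code> and its version are placeholders for whatever dependency you actually need):</p>

```toml
[tool.poetry.dependencies]
# installed only when the interpreter reports sys.platform == "win32"
pywin32 = { version = "^306", markers = "sys_platform == 'win32'" }
```

<p>Since Poetry's platform check compares against <code>sys.platform</code>, <code>win32</code> (rather than <code>win-amd64</code>) is likely the value to pass to <code>poetry add --platform</code> as well.</p>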
| <python><windows><python-poetry><package-management> | 2023-11-25 18:37:12 | 1 | 3,887 | Newbyte |
77,549,111 | 13,135,901 | numpy replace repeating values with 0 | <p>I have two arrays that look like this:</p>
<pre><code>arr1 = [0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 1 1 1 1 1 1 0 1 0 1 0 0 1
1 0 0 0 0 0 1 1 0 1 0 0 0 0 0 0 0 0 0 1 1 0 0 1 0 0 1 0 0 0 1 1 1 1 1 0 0
1 1 1 1 0 0 0 1 0 1 1 0 1 1 0 0 0 0 0 1 1 1 1 0 1 0]
arr2 = [0 0 0 0 1 1 1 0 1 1 1 0 1 0 0 1 0 1 1 1 0 1 1 0 0 0 0 0 0 0 0 1 0 0 0 1 0
0 0 0 1 1 1 0 0 0 0 0 0 1 0 1 1 0 1 1 0 0 0 1 0 0 1 0 1 1 1 0 0 1 0 0 0 1
0 0 0 1 1 1 1 0 0 1 0 1 0 0 1 1 1 1 1 0 0 0 1 1 0 0]
</code></pre>
<ol>
<li>What is the fastest way to compare both arrays and, wherever both have a "1" in the same position, figure out which array has the closest "0" looking backwards and replace that array's "1" with "0"?</li>
<li>Replace all "1"s in an array that are followed by a "1" with "0".</li>
</ol>
<p>I solved it using iteration, but I am sure there's an easy and much faster solution for this in <code>numpy</code> or <code>pandas</code>, which I am just only beginning to learn.</p>
<p>Here's an ugly example solution to the first problem using iteration:</p>
<pre><code> df = pd.DataFrame({"A": arr1, "B": arr2, })
df2 = df[(df.A > 0) & (df.B > 0)]
i = 1
for idx in df2.index:
while df.loc[idx, 'A'] == 1 and df.loc[idx, 'B'] == 1:
try:
if df.loc[idx - i, 'A'] > 0 or df.loc[idx - i, 'B'] > 0:
df.loc[idx, 'A'] = df.loc[idx - i, 'A']
df.loc[idx, 'B'] = df.loc[idx - i, 'B']
else:
i += 1
except KeyError:
df.loc[idx, 'A'] = 0
df.loc[idx, 'B'] = 0
</code></pre>
<p>And here's a solution to the second one:</p>
<pre><code> df2 = df[(df.A > 0)].A
for idx in df2.index:
if df.loc[idx + 1, 'A'] > 0:
df.loc[idx, 'A'] = 0
df2 = df[(df.B > 0)].B
for idx in df2.index:
if df.loc[idx + 1, 'B'] > 0:
df.loc[idx, 'B'] = 0
</code></pre>
<p>Now do that pandas voodoo and make it all a one-liner.</p>
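<p>For the second problem at least, a shifted comparison vectorizes the loop. Because the mask is computed on the unmodified array before any assignment, it reads each right-hand neighbour exactly like the iterative version does (a sketch on a short array):</p>

```python
import numpy as np

a = np.array([0, 1, 1, 1, 0, 1, 0, 1, 1])

# positions whose right-hand neighbour is also 1, computed on the original array
mask = (a[:-1] == 1) & (a[1:] == 1)
a[:-1][mask] = 0   # zero out every 1 that is followed by a 1

print(a.tolist())  # [0, 0, 0, 1, 0, 1, 0, 0, 1]
```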
| <python><pandas><numpy> | 2023-11-25 18:29:16 | 4 | 491 | Viktor |
77,549,014 | 3,842,845 | How to convert Unix (LF format) to Windows (CR-LF) using Python | <p>I am trying to convert the text below (Unix LF format) - let's say "Unix.csv":</p>
<pre><code>Admissions
"Not Started: 1","Sent: 0","Completed: 0"
,,,,
Division,Community,"Resident Name",Date,"Document Status","Last Update"
,"Test","Name",2023-11-22,"Not Started",
,,,,
,,,,
Readmissions
"Not Started: 1","Sent: 0","Completed: 0"
,,,,
Division,Community,"Resident Name",Date,"Document Status","Last Update"
,"Test","Name","2023-11-22 7:29 AM","Not Started",
,,,,
,,,,
Discharges
,,,,
Division,Community,"Resident Name",Date
,"Test","Name","2023-11-22 1:23 AM"
,"Test","Name","2023-11-22 7:49 AM"
,"Test","Name","2023-11-22 1:12 AM"
</code></pre>
<p>To this (Windows CR-LF format) - let's say "Windows.csv" - using a <strong>Python 3</strong> script:</p>
<pre><code>Admissions,,,,,
Not Started: 1,Sent: 0,Completed: 0,,,
,,,,,
Division,Resident Name,Date,Document Status,Last Update
,Test,Name,11/22/2023,Not Started,
,,,,,
,,,,,
Readmissions,,,,,
Not Started: 1,Sent: 0,Completed: 0,,,
,,,,,
Division,Community,Resident Name,Date,Document Status,Last Update
,Test,Name,11/22/2023 7:29,Not Started,
,,,,,
,,,,,
Discharges,,,,,
,,,,,
Division,Community,Resident Name,Date,,
,Test,Name,11/22/2023 1:23,,
,Test,Name,11/22/2023 7:49,,
,Test,Name,11/22/2023 1:12,,
</code></pre>
<p>Update: I tried this code:</p>
<pre><code>#!/bin/python3
with open('Unix.csv', 'r') as file:
toWrite = file.read().replace('\n', '\r\n')
with open('Convert.csv', 'w') as file:
file.write(toWrite)
</code></pre>
<p>This is the result:</p>
<pre><code>Admissions
"Not Started: 1","Sent: 0","Completed: 0"
,,,,
Division,Community,"Resident Name",Date,"Document Status","Last
Update"
,"Test","name",2023-11-22,"Not
Started",
,,,,
,,,,
Readmissions
"Not Started: 1","Sent: 0","Completed: 0"
,,,,
Division,Community,"Resident Name",Date,"Document Status","Last
Update"
,"Test","Name","2023-11-22 7:29 AM","Not Started",
,,,,
,,,,
Discharges
,,,,
Division,Community,"Resident Name",Date
,"Test","Name","2023-11-22 1:23 AM"
,"Test","Name","2023-11-22
7:49 AM"
,"Test","Name","2023-11-22 1:12 AM"
</code></pre>
<p>This is another code that I tried:</p>
<p>I am not sure if this is what @furas meant. I tried, but it still would not work.</p>
<pre><code>#!/bin/python3
with open('Test.csv', newline="\r\n") as file:
toWrite = file.read()
with open('convert.csv', 'w') as file:
file.write(toWrite)
</code></pre>
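<p>A self-contained sketch of one way to do this (file names are stand-ins): opening both files with <code>newline=''</code> disables Python's own newline translation, so the replacement is applied exactly once and there is no risk of producing <code>\r\r\n</code>.</p>

```python
# Create a small LF-only sample so the sketch is self-contained.
with open('Unix.csv', 'w', newline='') as f:
    f.write('Admissions\n"Not Started: 1","Sent: 0","Completed: 0"\n')

# newline='' means read/write bytes-for-bytes as far as line endings go.
with open('Unix.csv', 'r', newline='') as src:
    text = src.read()

# Normalize any existing CRLF first, then convert every LF to CRLF.
text = text.replace('\r\n', '\n').replace('\n', '\r\n')

with open('Windows.csv', 'w', newline='') as dst:
    dst.write(text)

with open('Windows.csv', 'rb') as f:
    data = f.read()
print(data)
```

<p>Reading back in binary mode (<code>'rb'</code>) is a quick way to verify that every line really ends in <code>\r\n</code>.</p>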
| <python><python-3.x> | 2023-11-25 18:03:48 | 1 | 1,324 | Java |
77,548,853 | 5,821,028 | FFT for non power-of-2 input length | <p>I am writing a Fast Fourier Transform (FFT) in Python and facing problems with input data lengths that are not powers of 2. I came across recommendations to pad such inputs with zeros to reach the nearest power of 2, but I found the results differ significantly from standard implementations (NumPy).</p>
<p>Here's my implementation of the FFT:</p>
<pre><code>def myfft(x):
N = len(x)
if N <= 1:
return x
evens = myfft(x[0::2])
odds = myfft(x[1::2])
t = np.exp(-2j * np.pi * np.arange(N) / N)
f = np.concatenate([evens + t[:N//2] * odds,
evens + t[N//2:] * odds])
return f
</code></pre>
<p>This gives the same result as numpy.fft.fft() when the input length is a power of 2, but gives a different result for zero-padded input.</p>
<pre><code>output1 = np.fft.fft([1,2,3,4,5,6,7])
output2 = myfft([1,2,3,4,5,6,7,0])
output1: [28 -3.5+7.3j -3.5+2.8j -3.5+0.8j -3.5-0.8j -3.5-2.8j -3.5-7.3j ]
output2: [28 -9.7+4j -4-4j 1.7-4j 4 1.7+4j -4+4j -9.7-4j]
</code></pre>
<p>Is zero-padding a good approach for handling non power-of-2 input lengths in FFT? If so, what is the correct way to implement it?</p>
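<p>A quick sanity check (using a lightly adapted copy of the <code>myfft</code> above): zero-padding computes the DFT of a <em>different</em>, length-8 signal, so it should agree with NumPy's FFT of the padded input, not with the length-7 DFT of the original.</p>

```python
import numpy as np

def myfft(x):
    N = len(x)
    if N <= 1:
        return np.asarray(x, dtype=complex)
    evens = myfft(x[0::2])
    odds = myfft(x[1::2])
    t = np.exp(-2j * np.pi * np.arange(N) / N)
    return np.concatenate([evens + t[:N//2] * odds,
                           evens + t[N//2:] * odds])

x = [1, 2, 3, 4, 5, 6, 7]
padded = x + [0]

# The padded transform agrees with NumPy's FFT of the *padded* signal...
print(np.allclose(myfft(padded), np.fft.fft(padded)))      # True

# ...but is a genuinely different quantity from the 7-point DFT.
print(np.allclose(np.fft.fft(padded)[:7], np.fft.fft(x)))  # False
```

<p>So the implementation is not wrong; the two outputs are simply transforms of different signals.</p>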
| <python><numpy><fft> | 2023-11-25 17:20:48 | 0 | 1,125 | Jihyun |
77,548,778 | 200,783 | How can I efficiently generate all N-bit values with M set bits, and the corresponding bit-reversed values? | <p>I'm using the following Python code to generate all values with <code>popcount</code> set bits among <code>bits</code> total bits:</p>
<pre><code>def trailing_zeros(v):
return (v & -v).bit_length() - 1
def bit_permutations(popcount, bits):
if popcount < 0 or popcount > bits:
pass
elif popcount == 0:
yield 0
elif popcount == bits:
yield (1 << bits) - 1
else:
v = (1 << popcount) - 1
while v < (1 << bits):
yield v
t = v | (v - 1)
v = (t + 1) | (((~t & -~t) - 1) >> (trailing_zeros(v) + 1))
</code></pre>
<p>For example, <code>bit_permutations(3, 5)</code> yields <code>0b00111, 0b01011, 0b01101, 0b01110, 0b10011, 0b10101, 0b10110, 0b11001, 0b11010, 0b11100</code>.</p>
<p>For each of the above values, I also need the value with the order of bits reversed, e.g. <code>0b11100, 0b11010, 0b10110, 0b01110, 0b11001, 0b10101, 0b01101, 0b10011, 0b01011, 0b00111</code>. Currently I'm calling a separate <code>reverse</code> function each time a value is generated by <code>bit_permutations</code>:</p>
<pre><code>def reverse(v, bits):
assert bits <= 16
v = ((v >> 1) & 0x5555) | ((v & 0x5555) << 1);
v = ((v >> 2) & 0x3333) | ((v & 0x3333) << 2);
v = ((v >> 4) & 0x0F0F) | ((v & 0x0F0F) << 4);
v = ((v >> 8) & 0x00FF) | ((v & 0x00FF) << 8);
return v >> (16 - bits)
</code></pre>
<p>However that seems inefficient. Is it possible to modify <code>bit_permutations</code> so that it directly generates each permutation along with its reverse, rather than using a separate, less-efficient <code>reverse</code> function?</p>
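<p>For reference, here is the current two-function baseline as a self-contained sketch, pairing each value with its reverse; this is the behavior any fused generator would need to reproduce.</p>

```python
def trailing_zeros(v):
    return (v & -v).bit_length() - 1

def bit_permutations(popcount, bits):
    """Yield all `bits`-bit values with exactly `popcount` set bits (Gosper's hack)."""
    if popcount < 0 or popcount > bits:
        pass
    elif popcount == 0:
        yield 0
    elif popcount == bits:
        yield (1 << bits) - 1
    else:
        v = (1 << popcount) - 1
        while v < (1 << bits):
            yield v
            t = v | (v - 1)
            v = (t + 1) | (((~t & -~t) - 1) >> (trailing_zeros(v) + 1))

def reverse(v, bits):
    """Reverse the low `bits` bits of v (bits <= 16)."""
    assert bits <= 16
    v = ((v >> 1) & 0x5555) | ((v & 0x5555) << 1)
    v = ((v >> 2) & 0x3333) | ((v & 0x3333) << 2)
    v = ((v >> 4) & 0x0F0F) | ((v & 0x0F0F) << 4)
    v = ((v >> 8) & 0x00FF) | ((v & 0x00FF) << 8)
    return v >> (16 - bits)

pairs = [(v, reverse(v, 5)) for v in bit_permutations(3, 5)]
print(pairs[0])  # (7, 28), i.e. (0b00111, 0b11100)
```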
| <python><algorithm><binary><bit-manipulation> | 2023-11-25 17:03:07 | 3 | 14,493 | user200783 |
77,548,723 | 2,636,579 | FFmpeg to rotate video if necessary, and overlay png | <p>I am trying to write a script that can accept videos of arbitrary orientation, use FFmpeg to detect the orientation, then overlay a logo in the correct way in the bottom right corner of the screen.</p>
<p>This script is working for landscape mode:</p>
<pre><code>import subprocess
from moviepy.editor import VideoFileClip, CompositeVideoClip, ImageClip
def is_video_portrait(video_path):
video_clip = VideoFileClip(video_path)
return video_clip.size[1] > video_clip.size[0] # True if portrait
def transpose_video(video_path, output_path):
subprocess.run(['/opt/homebrew/bin/ffmpeg', '-i', video_path, '-vf', 'transpose=2', '-c:a', 'copy', output_path], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
def overlay_logo(video_clip, image_path):
logo_clip = ImageClip(image_path).set_duration(video_clip.duration)
logo_position = (video_clip.size[0] - logo_clip.size[0], video_clip.size[1] - logo_clip.size[1]) # Bottom right
return CompositeVideoClip([video_clip, logo_clip.set_position(logo_position)], size=video_clip.size)
def overlay_image_on_video(video_path, image_path, output_path):
if is_video_portrait(video_path):
# Process for portrait videos
transposed_path = "transposed_" + video_path
transpose_video(video_path, transposed_path)
video_clip = VideoFileClip(transposed_path)
final_clip = overlay_logo(video_clip, image_path)
temp_output_path = "temp_" + output_path
final_clip.write_videofile(temp_output_path, codec='libx264', audio_codec='aac')
transpose_video(temp_output_path, output_path)
subprocess.call(['rm', transposed_path, temp_output_path]) # Remove temporary files
else:
# Process for landscape videos
video_clip = VideoFileClip(video_path)
final_clip = overlay_logo(video_clip, image_path)
final_clip.write_videofile(output_path, codec='libx264', audio_codec='aac')
# Example usage
overlay_image_on_video("cat.mov", "logo.png", "output_video.mp4")
</code></pre>
<p>Example output:</p>
<p><a href="https://i.sstatic.net/xsvY2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xsvY2.jpg" alt="enter image description here" /></a></p>
<p>But when I give it a video shot in portrait mode - screenshot example:
<a href="https://i.sstatic.net/D7zMD.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D7zMD.jpg" alt="enter image description here" /></a></p>
<p>It gives me an output that is flipped to landscape -- and thus distorted!</p>
<p><a href="https://i.sstatic.net/B7UIX.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B7UIX.jpg" alt="enter image description here" /></a></p>
<p>I want to detect the orientation of the video, if it is shot in portrait, to rotate the logo appropriately (e.g. -90 degrees) and stick it in the top right corner.</p>
<p>How can I achieve this?</p>
<p>EDIT:</p>
<p>Tried this and got the same result:</p>
<pre><code>import subprocess
from moviepy.editor import VideoFileClip, CompositeVideoClip, ImageClip
def get_orientation(size):
return "Portrait" if size[1] > size[0] else "Landscape"
def overlay_logo(video_clip, image_path):
logo_clip = ImageClip(image_path).set_duration(video_clip.duration)
logo_position = (video_clip.size[0] - logo_clip.size[0], video_clip.size[1] - logo_clip.size[1]) # Bottom right
print(f"Placing logo at position: {logo_position}")
return CompositeVideoClip([video_clip, logo_clip.set_position(logo_position)], size=video_clip.size)
def overlay_image_on_video(video_path, image_path, output_path):
video_clip = VideoFileClip(video_path)
print(f"Original video '{video_path}' size: {video_clip.size}, Orientation: {get_orientation(video_clip.size)}")
# Check if video is rotated and resize if necessary
if video_clip.rotation == 90:
video_clip = video_clip.resize(video_clip.size[::-1])
video_clip.rotation = 0
print(f"Video '{video_path}' was rotated. Adjusted size: {video_clip.size}, Orientation: {get_orientation(video_clip.size)}")
final_clip = overlay_logo(video_clip, image_path)
final_clip.write_videofile(output_path, codec='libx264', audio_codec='aac')
# Example usage
overlay_image_on_video("cat.mov", "logo.png", "output_video.mp4")
</code></pre>
<p>Edit2:
Tried this as well with the same result:</p>
<pre><code>import subprocess
from moviepy.editor import VideoFileClip, CompositeVideoClip, ImageClip
def get_orientation(size):
return "Portrait" if size[1] > size[0] else "Landscape"
def overlay_logo(video_clip, image_path):
logo_clip = ImageClip(image_path).set_duration(video_clip.duration)
logo_position = (video_clip.size[0] - logo_clip.size[0], video_clip.size[1] - logo_clip.size[1]) # Bottom right
print(f"Placing logo at position: {logo_position}")
return CompositeVideoClip([video_clip, logo_clip.set_position(logo_position)], size=video_clip.size)
def overlay_image_on_video(video_path, image_path, output_path):
video_clip = VideoFileClip(video_path)
original_orientation = get_orientation(video_clip.size)
print(f"Original video '{video_path}' size: {video_clip.size}, Orientation: {original_orientation}")
# Check if video is rotated and resize if necessary
if video_clip.rotation == 90:
video_clip = video_clip.resize(video_clip.size[::-1])
video_clip.rotation = 0
adjusted_orientation = get_orientation(video_clip.size)
print(f"Video '{video_path}' was rotated. Adjusted size: {video_clip.size}, Orientation: {adjusted_orientation}")
final_clip = overlay_logo(video_clip, image_path)
final_clip.write_videofile(output_path, codec='libx264', audio_codec='aac')
# Example usage
overlay_image_on_video("cat2.mov", "logo.png", "output_video.mp4")
</code></pre>
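<p>Independent of moviepy/FFmpeg, the placement logic itself can be isolated and unit-tested. A sketch (the top-right / -90° convention for portrait is my reading of the question, not an established API):</p>

```python
def logo_layout(video_size, logo_size):
    """Return ((x, y), rotation_degrees) for the logo overlay.

    Landscape: bottom-right, no rotation. Portrait: top-right, rotated
    -90 degrees, which swaps the logo's bounding-box width and height.
    """
    vw, vh = video_size
    lw, lh = logo_size
    if vh > vw:  # portrait
        # After a -90 deg rotation the bounding box is (lh x lw),
        # so the x offset uses the logo's height.
        return (vw - lh, 0), -90
    return (vw - lw, vh - lh), 0

print(logo_layout((1920, 1080), (200, 80)))  # ((1720, 1000), 0)
print(logo_layout((1080, 1920), (200, 80)))  # ((1000, 0), -90)
```

<p>The returned position and rotation could then be fed into <code>ImageClip.set_position(...)</code> and <code>ImageClip.rotate(...)</code> respectively.</p>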
| <python><ffmpeg><moviepy> | 2023-11-25 16:47:39 | 2 | 1,034 | reallymemorable |
77,548,718 | 1,012,010 | Passing dictionary permutations to a function as inputs | <p>I have a function like the following:</p>
<pre><code>def function_name(a, b, c):
# Do some stuff with a, b, and c
print(result)
</code></pre>
<p>I've generated several dictionaries like these:</p>
<pre><code>dict1 = {25: 1015, 36: 1089, 41: 1138}
dict2 = {12: 2031, 25: 2403, 31: 2802}
dict3 = {12: 3492, 28: 3902, 40: 7843}
</code></pre>
<p>I can generate all possible permutations of length 3 of these dictionaries, but I can't seem to feed them into my function as inputs. I can print the combinations like this:</p>
<pre><code>print([x for x in itertools.permutations(['dict1', 'dict2', 'dict3'], 3)])
</code></pre>
<p>which correctly generates:</p>
<pre><code>[('dict1', 'dict2', 'dict3'), ('dict1', 'dict3', 'dict2'), ('dict2', 'dict1', 'dict3'), ('dict2', 'dict3', 'dict1'), ('dict3', 'dict1', 'dict2'), ('dict3', 'dict2', 'dict1')]
</code></pre>
<p>But when I try to feed each group from the permutation result as a, b, and c in my function by using:</p>
<pre><code>data = [x for x in itertools.permutations([dict1, dict2, dict3], 3)]
function_name(data)
</code></pre>
<p>I get this:</p>
<pre><code>TypeError: function_name() missing 2 required positional arguments: 'b', and 'c'
</code></pre>
<p>I also tried to define the function to accept **data as an input, but that results in this:</p>
<pre><code>function_name(**data)
TypeError: __main__.function_name() argument after ** must be a mapping, not list
</code></pre>
<p>How can I pass the permutations of my dictionary to my function as inputs?</p>
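<p>A sketch of the usual idiom (assuming <code>function_name</code> takes exactly three positional arguments): unpack each permutation tuple with the <code>*</code> operator, one call per tuple, instead of passing the whole list at once.</p>

```python
import itertools

def function_name(a, b, c):
    # Placeholder body standing in for the real computation:
    # here it just concatenates the three dictionaries' keys.
    return list(a) + list(b) + list(c)

dict1 = {25: 1015, 36: 1089, 41: 1138}
dict2 = {12: 2031, 25: 2403, 31: 2802}
dict3 = {12: 3492, 28: 3902, 40: 7843}

# Each `combo` is a 3-tuple of dicts; *combo spreads it into a, b, c.
results = [function_name(*combo)
           for combo in itertools.permutations([dict1, dict2, dict3], 3)]
print(len(results))  # 6
```

<p>The earlier <code>**data</code> attempt fails because <code>**</code> expects a mapping of keyword arguments, while <code>*</code> is the operator that spreads a positional sequence.</p>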
| <python><permutation> | 2023-11-25 16:46:26 | 2 | 730 | Alligator |
77,548,545 | 1,186,417 | Adding subttiles to Plotly Dash video player in Python | <p>I am trying to add subtitles to my Plotly Dash video player, i.e. VTT captions overlay, in Python. I cannot find any examples or instruction on this.</p>
<pre><code>from dash import Dash, dcc, html, Input, Output, State
import dash_player
</code></pre>
<p>And in a html Div somewhere:</p>
<pre><code>dash_player.DashPlayer(
id='vid1',
controls = True,
url = "http://86.47.173.33:1935/playlist.m3u8",
width='100%',
style={'margin-top':'20px','margin-bottom':'20px'},
playing= True,
muted= True
)
</code></pre>
<p>The DashPlayer object has no methods to handle a subtitle track in the documentation. Perhaps this is something that could be handled in CSS?</p>
<p>It would also help to find some React player examples to adapt.</p>
| <python><plotly-dash><video-player> | 2023-11-25 15:56:00 | 1 | 851 | Jace999 |
77,548,448 | 3,130,747 | How to Dynamically Generate (raw) SQL Queries with Multiple Parameters Using Jinja2 and SQLAlchemy in Python | <p>I'm using jinja2 and SQLAlchemy to dynamically generate queries in python, specifically with a postgres db. I'm trying to workout how to render the raw SQL that will be executed by the database.</p>
<p>I'd like to be able to do this for jinja templates generally - so using a regex replace on <code>:age</code> or whatever wouldn't be suitable. I also don't want to have to parse <code>logs</code> output from the terminal, I'd like the result as a string in my python session which I can then write to a file using <code>with open(...</code> if needed.</p>
<h2>Error:</h2>
<p>This is the error that I'm getting:</p>
<pre><code> raise exc.CompileError(
sqlalchemy.exc.CompileError: No literal value renderer is available for literal value "['New York', 'Boston']" with datatype NULL
</code></pre>
<h2>Expected output from example below</h2>
<p>I'd like to see the following output from the above:</p>
<pre><code>WHERE 1 = 1
AND age = 30
AND array_column @> ARRAY['New York', 'Boston']
</code></pre>
<h2>Jinja template</h2>
<pre><code>-- query.sql.j2
SELECT * FROM tbl_with_array
WHERE 1 = 1
{% if age %}
AND age = :age
{% endif %}
{% if template_value %}
AND array_column @> :template_value::text[]
{% endif %}
</code></pre>
<h2>python code</h2>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.sql import text
import sqlalchemy
from sqlalchemy import create_engine, Table, Column, Integer, String, ARRAY, MetaData
from sqlalchemy.orm import sessionmaker
import jinja2
# (replace with your actual credentials and database details)
engine = create_engine('postgresql+psycopg://user:password@localhost/db', echo=True)
metadata = MetaData()
# Setup example data
tbl_with_array = Table('tbl_with_array', metadata,
Column('id', Integer, primary_key=True),
Column('age', Integer),
Column('array_column', ARRAY(String))
)
metadata.create_all(engine)
with engine.connect() as conn:
conn.execute(tbl_with_array.insert(), [
{'age': 30, 'array_column': ['New York', 'Boston']},
{'age': 25, 'array_column': ['London', 'Manchester']},
{'age': 35, 'array_column': ['San Francisco', 'Los Angeles']}
])
conn.commit()
Session = sessionmaker(bind=engine)
session = Session()
# Jinja2 template
template_env = jinja2.Environment(loader=jinja2.FileSystemLoader(searchpath="."))
template = template_env.get_template('query.sql.j2')
params = {
'age': 30,
'template_value': ['New York', 'Boston']
}
# Trying to render the jinja template with the given parameters here.
rendered_query = template.render(age=params.get('age'), template_value=params.get('template_value'))
stmt = text(rendered_query).bindparams(age=params['age'], template_value=params['template_value'])
compiled_stmt = stmt.compile(compile_kwargs={"literal_binds": True})
raw_sql_query = str(compiled_stmt)
result = session.execute(stmt)
for row in result:
print(row)
print(raw_sql_query)
</code></pre>
<h2>package versions</h2>
<pre><code>jinja2 3.1.2
psycopg 3.1.13
sqlalchemy 2.0.23
</code></pre>
| <python><sqlalchemy><jinja2> | 2023-11-25 15:28:37 | 0 | 4,944 | baxx |
77,548,545 | 703,462 | TatSu: square brackets are being ignored in the grammar | <p><a href="https://github.com/neogeny/TatSu" rel="nofollow noreferrer">TatSu</a> tends to ignore the square bracket characters, be it <code>[</code>, <code>]</code>, or a mix of the two, yet recognizes them at other times for some reason. I will show an example below; I'm experimenting with TatSu 5.10.1, Python 3.11.6, and Linux 6.5.7, in case any of that is related.</p>
<p>I aim to render a subset of Markdown, but I'll start with a simplified grammar to discuss the issue.</p>
<p>(I'm using a unit separator as a rare character since other ways to disable whitespace handling were more confusing. If there's a more straightforward and reliable way to tell TatSu to recognize the whitespace as characters it should treat as a part of the text, that'll be useful to know, too.)</p>
<pre><code>@@grammar::Markdown
@@whitespace :: /[␟]/
start = pieces $ ;
text = text:/[a-z]+/ ;
pieces = {text}*
;
</code></pre>
<p>This test code leads TatSu to ignore the <code>[]</code> and not fail with an error.
If I set the markdown_str as something else, like () or {}, TatSu will fail.
Individual square brackets, [ or ], won't lead to an exception.</p>
<pre class="lang-py prettyprint-override"><code>import tatsu
with open("./grammar.txt", "r") as grammar_file:
grammar = grammar_file.read()
class MarkdownSemantics:
def pieces(self, ast):
return ''.join(ast)
parser = tatsu.compile(grammar)
markdown_str = "[]"
ast = parser.parse(markdown_str, semantics=MarkdownSemantics())
print(ast)
</code></pre>
<p>I expect this to be a bug, as I don't see what's so special about the square bracket characters. They are not defined as part of the whitespace to be ignored, and other, similar characters (such as parentheses or braces) are not ignored.</p>
<p>At the same time, I am told <a href="https://github.com/neogeny/TatSu/issues/330#issuecomment-1826019908" rel="nofollow noreferrer">here</a> that it's about learning parsing principles. Is my EBNF above allowing <code>[</code> or <code>]</code> to pass?</p>
| <python><parsing><tatsu> | 2023-11-25 15:25:50 | 2 | 446 | ISE |
77,548,225 | 3,861,775 | Reduce list of lists in JAX | <p>I have a list holding many lists of the same structure (usually there are many more than two sub-lists inside the list; the example shows two lists for the sake of simplicity). I would like to create the sum or product over all sub-lists so that the resulting list has the same structure as one of the sub-lists. So far I tried the following using the <code>tree_reduce</code> method, but I get errors that I don't understand.</p>
<p>I could need some guidance on how to use tree_reduce() in such a case.</p>
<pre><code>import jax
import jax.numpy as jnp
list_1 = [
[jnp.asarray([1]), jnp.asarray([2, 3])],
[jnp.asarray([4]), jnp.asarray([5, 6])],
]
list_2 = [
[jnp.asarray([7]), jnp.asarray([8, 9])],
[jnp.asarray([10]), jnp.asarray([11, 12])],
]
list_of_lists = [list_1, list_2]
reduced = jax.tree_util.tree_reduce(lambda x, y: x + y, list_of_lists, 0, is_leaf=True)
# Expected
# reduced = [
# [jnp.asarray([8]), jnp.asarray([10, 12])],
# [jnp.asarray([14]), jnp.asarray([16, 18])],
# ]
</code></pre>
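<p>A sketch using <code>tree_map</code> instead of <code>tree_reduce</code> (note also that <code>is_leaf</code> must be a callable, not <code>True</code>): passing all sub-lists at once combines corresponding leaves while preserving the common structure.</p>

```python
import jax
import jax.numpy as jnp

list_1 = [[jnp.asarray([1]), jnp.asarray([2, 3])],
          [jnp.asarray([4]), jnp.asarray([5, 6])]]
list_2 = [[jnp.asarray([7]), jnp.asarray([8, 9])],
          [jnp.asarray([10]), jnp.asarray([11, 12])]]
list_of_lists = [list_1, list_2]

# The mapped function receives one leaf per sub-list; sum() adds them,
# and the output keeps the structure of a single sub-list.
reduced = jax.tree_util.tree_map(lambda *leaves: sum(leaves), *list_of_lists)
print(reduced)
```

<p>For a product, replace <code>sum(leaves)</code> with e.g. <code>functools.reduce(jnp.multiply, leaves)</code>.</p>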
| <python><jax> | 2023-11-25 14:24:36 | 1 | 3,656 | Gilfoyle |
77,548,143 | 5,082,187 | Type annotations in pypy give an error but work in python3 | <p>Please compare the following two programs:</p>
<pre><code>#!/usr/bin/env pypy
i: float = 5.0
</code></pre>
<p>and this:</p>
<pre><code>#!/usr/bin/env python3
i: float = 5.0
</code></pre>
<p>The first one fails:</p>
<pre class="lang-none prettyprint-override"><code> File "./pypy_test.py", line 3
i: float = 5.0
^
SyntaxError: invalid syntax
</code></pre>
<p>The second one just runs. I thought pypy and Python were fully compatible. What could be going on?</p>
<p>The installation of pypy on my Ubuntu is just a few minutes old. I am running Python 3.10.12.</p>
<pre class="lang-none prettyprint-override"><code>2023_11_25 14:57:08 maot@hunsn:~ $ pypy --version
Python 2.7.18 (7.3.9+dfsg-1, Apr 01 2022, 21:40:34)
[PyPy 7.3.9 with GCC 11.2.0]
2023_11_25 14:57:11 maot@hunsn:~ $
</code></pre>
| <python><ubuntu><python-typing><pypy> | 2023-11-25 14:00:22 | 1 | 428 | TradingDerivatives.eu |
77,548,098 | 22,466,650 | How to aggregate unique combinations and reorder them in a specific way? | <p>I have this dataframe:</p>
<pre><code>df = pd.DataFrame({'CLASS': ['A', 'B', 'A'],
'MEMBERS': ['foo & bar', 'bar & luz', 'baz']})
print(df)
# CLASS MEMBERS
# 0 A foo & bar
# 1 B bar & luz
# 2 A baz
</code></pre>
<p>First, I want to group on the column <code>CLASS</code> and combine the unique values of the column <code>MEMBERS</code>. And secondly, I need the unique combinations to be in a specific order : <code>['foo', 'bar', 'baz', 'luz']</code>.</p>
<p>I was able to do the first one :</p>
<pre><code>df.groupby('CLASS')['MEMBERS'].agg(lambda s: " & ".join(set(' & '.join(s).split(' & '))))
# CLASS
# A foo & baz & bar
# B luz & bar
# Name: MEMBERS, dtype: object
</code></pre>
<p>Can you show me how to achieve the ordering?</p>
<p>My expected output is this :</p>
<pre><code># CLASS
# A foo & bar & baz
# B bar & luz
# Name: MEMBERS, dtype: object
</code></pre>
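<p>One way to get the ordering (a sketch on the sample data above, assuming every member appears in the desired-order list): sort the de-duplicated members with a rank lookup built from that list.</p>

```python
import pandas as pd

df = pd.DataFrame({'CLASS': ['A', 'B', 'A'],
                   'MEMBERS': ['foo & bar', 'bar & luz', 'baz']})

order = ['foo', 'bar', 'baz', 'luz']
rank = {name: i for i, name in enumerate(order)}

# Same aggregation as before, but sort the unique members by their rank
# in `order` before joining them back together.
out = df.groupby('CLASS')['MEMBERS'].agg(
    lambda s: ' & '.join(sorted(set(' & '.join(s).split(' & ')),
                                key=rank.get)))
print(out)
```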
| <python><pandas><group-by> | 2023-11-25 13:47:55 | 2 | 1,085 | VERBOSE |
77,547,941 | 14,790,056 | Complex iterations using multiple conditions (fee distribution if certain conditions are met) | <p>I have two large dataframes, on the order of a million and 100K rows.</p>
<p>I have dataframe named <code>test_swaps</code>. This dataframe shows all trades, and fees accrued from each trade within <code>POOL_ADDRESS</code>.</p>
<pre><code> BLOCK_TIMESTAMP LIQUIDITY POOL_ADDRESS PRICE_0_1 FEES
0 2021-12-02 03:42:56+00:00 7.770303e+21 0x360b9726186c0f62cc719450685ce70280774dc8 215.174737 163.787000
1 2021-12-02 04:02:18+00:00 7.770303e+21 0x360b9726186c0f62cc719450685ce70280774dc8 209.796784 223.138500
2 2021-12-02 04:03:52+00:00 7.770303e+21 0x360b9726186c0f62cc719450685ce70280774dc8 206.188879 199.961300
3 2021-12-02 06:55:37+00:00 8.165560e+19 0xfaa318479b7755b2dbfdd34dc306cb28b420ad12 203.100999 0.044125
4 2021-12-02 04:09:03+00:00 7.770303e+21 0x360b9726186c0f62cc719450685ce70280774dc8 204.329947 22.049300
...
</code></pre>
<p>and another dataframe <code>test_actions</code> which shows liquidity positions by NF_TOKEN_ID within <code>POOL_ADDRESS</code>. the data frame looks like this</p>
<pre><code> BLOCK_TIMESTAMP NF_TOKEN_ID LIQUIDITY_cumsum POOL_ADDRESS PRICE_LOWER_0_1 PRICE_UPPER_0_1
0 2021-05-05 19:31:28+00:00 374.0 2.629662e+20 0xfaa318479b7755b2dbfdd34dc306cb28b420ad12 79.820552 81.189032
1 2021-05-05 21:12:56+00:00 374.0 0.000000e+00 0xfaa318479b7755b2dbfdd34dc306cb28b420ad12 79.820552 81.189032
2 2021-05-05 20:03:50+00:00 539.0 7.412937e+23 0x360b9726186c0f62cc719450685ce70280774dc8 0.012037 0.012781
3 2021-05-05 20:13:27+00:00 539.0 0.000000e+00 0x360b9726186c0f62cc719450685ce70280774dc8 0.012037 0.012781
4 2021-05-05 20:29:05+00:00 636.0 4.235670e+19 0x360b9726186c0f62cc719450685ce70280774dc8 66.672329 95.561691
</code></pre>
<p>My goal is to have a final output that looks like NF_TOKEN_ID, POOL_ADDRESS, FEES_ACCRUED, which shows how much each liquidity position has accrued fees.</p>
<p>A liquidity position <code>NF_TOKEN_ID</code> only accrues fees if certain conditions are met:</p>
<ol>
<li><code>POOL_ADDRESS</code> of the liquidity position and the trade should be the same</li>
<li>the position is within range (<code>test_actions['PRICE_LOWER_0_1']</code> <= <code>test_swaps['PRICE_0_1']</code> <=<code>test_actions['PRICE_UPPER_0_1']</code>,</li>
<li>the position is active at the time of trade. Currently <code>test_actions</code> is sorted such that the first date is when the position became active. The position may still be active or may be inactive depending on the last value of <code>LIQUIDITY_cumsum</code> (0 means all the liquidity is withdrawn, so inactive. For instance, <code>NF_TOKEN_ID</code> <code>374</code> above, the position was active between <code>2021-05-05 19:31:28+00:00</code> and <code>2021-05-05 21:12:56+00:00</code>. <code>NF_TOKEN_ID</code> <code>636</code> has always been active bc liquidity was never fully withdrawn.</li>
</ol>
<p>Fees from <code>test_swaps</code> need to be distributed to all liquidity positions that meet these conditions within each <code>POOL_ADDRESS</code>. Fee distribution is determined by the liquidity share for each position over the total liquidity within each pool: <code>test_actions['LIQUIDITY_cumsum']</code>/<code>test_swaps['LIQUIDITY']</code>.</p>
<p>I want my final output to look like NF_TOKEN_ID, POOL_ADDRESS, FEES_ACCRUED.</p>
<p>I have very large dataframes, so I am hoping to make the running time as efficient as possible. Let me know if something is unclear and you need more explanation. I have been trying, but have not been successful; I would be very happy if someone could help me!</p>
<p>This is what i have done so far but it gives an error msg and takes a long time..:</p>
<pre><code># Convert timestamps to pandas datetime for efficient comparison
test_swaps['BLOCK_TIMESTAMP'] = pd.to_datetime(test_swaps['BLOCK_TIMESTAMP'])
test_actions['BLOCK_TIMESTAMP'] = pd.to_datetime(test_actions['BLOCK_TIMESTAMP'])
def get_active_positions(swap_time, nf_token_id, pool_address):
# Filter for the specific NF_TOKEN_ID and POOL_ADDRESS
positions = test_actions[(test_actions['NF_TOKEN_ID'] == nf_token_id) &
(test_actions['POOL_ADDRESS'] == pool_address)]
# Find the latest position start time that is before the swap time
latest_start_time = positions[positions['BLOCK_TIMESTAMP'] <= swap_time]['BLOCK_TIMESTAMP'].max()
# Check if there is any matching record
if pd.isna(latest_start_time):
return False
# Check if the latest position is active
latest_position = positions[positions['BLOCK_TIMESTAMP'] == latest_start_time]
if not latest_position.empty and latest_position['LIQUIDITY_cumsum'].iloc[-1] > 0:
return True
return False
# Initialize a dictionary to store fees accrued for each NF_TOKEN_ID and POOL_ADDRESS
fees_accrued = {}
# Iterate over the test_swaps dataframe
for index, swap in test_swaps.iterrows():
# Check each position in test_actions
for _, position in test_actions.iterrows():
# Check if the swap price is within the range and if the position is active
if (position['PRICE_LOWER_0_1'] <= swap['PRICE_0_1'] <= position['PRICE_UPPER_0_1'] and
get_active_positions(swap['BLOCK_TIMESTAMP'], position['NF_TOKEN_ID'], position['POOL_ADDRESS'])):
# Calculate liquidity share
total_liquidity = test_swaps[test_swaps['POOL_ADDRESS'] == position['POOL_ADDRESS']]['LIQUIDITY'].sum()
liquidity_share = position['LIQUIDITY_cumsum'] / total_liquidity
# Accrue fees
fees = liquidity_share * swap['FEES']
key = (position['NF_TOKEN_ID'], position['POOL_ADDRESS'])
fees_accrued[key] = fees_accrued.get(key, 0) + fees
# Convert the accrued fees dictionary to a DataFrame
result_df = pd.DataFrame(fees_accrued.items(), columns=['NF_TOKEN_ID_POOL_ADDRESS', 'FEES_ACCRUED'])
result_df[['NF_TOKEN_ID', 'POOL_ADDRESS']] = pd.DataFrame(result_df['NF_TOKEN_ID_POOL_ADDRESS'].tolist(), index=result_df.index)
# Final output
final_output = result_df[['NF_TOKEN_ID', 'POOL_ADDRESS', 'FEES_ACCRUED']]
</code></pre>
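<p>As a direction for speeding this up (a sketch on toy data with the question's column names, deliberately ignoring the time-window/activity condition, which would need an additional as-of-style filter): merge on <code>POOL_ADDRESS</code>, filter the price range vectorially, then group-sum, instead of nested <code>iterrows</code>.</p>

```python
import pandas as pd

swaps = pd.DataFrame({
    'POOL_ADDRESS': ['p1', 'p1'],
    'PRICE_0_1': [100.0, 210.0],
    'LIQUIDITY': [1000.0, 1000.0],
    'FEES': [10.0, 20.0],
})
positions = pd.DataFrame({
    'NF_TOKEN_ID': [1, 2],
    'POOL_ADDRESS': ['p1', 'p1'],
    'PRICE_LOWER_0_1': [50.0, 150.0],
    'PRICE_UPPER_0_1': [150.0, 250.0],
    'LIQUIDITY_cumsum': [400.0, 600.0],
})

# One row per (swap, position) pair that shares a pool.
pairs = swaps.merge(positions, on='POOL_ADDRESS')

# Keep only in-range positions, then distribute fees by liquidity share.
in_range = ((pairs['PRICE_LOWER_0_1'] <= pairs['PRICE_0_1']) &
            (pairs['PRICE_0_1'] <= pairs['PRICE_UPPER_0_1']))
pairs = pairs.loc[in_range].copy()
pairs['FEES_ACCRUED'] = (pairs['FEES'] * pairs['LIQUIDITY_cumsum']
                         / pairs['LIQUIDITY'])

result = (pairs.groupby(['NF_TOKEN_ID', 'POOL_ADDRESS'], as_index=False)
               ['FEES_ACCRUED'].sum())
print(result)
```

<p>The merge can blow up memory for very large inputs, so in practice one would chunk by pool; but the per-chunk work stays vectorized.</p>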
| <python><pandas><dataframe> | 2023-11-25 12:54:05 | 0 | 654 | Olive |
77,547,512 | 16,220,410 | how to use two text display options in a banner using flet in python? | <p>I am trying to create an app that merges one or more PDF files in Python using Flet. My code below shows an empty banner display text when I run it; I tested the two if statements inside the <code>merge_pdfs</code> function, but it always displays an empty <code>status_banner</code>.</p>
<pre><code>status_banner = ''
def main(page: ft.page):
def close_banner(e):
page.banner.open = False
page.update()
def show_banner(e):
page.banner.open = True
page.update()
def merge_pdfs(e: ft.FilePickerResultEvent):
# get file name and password from the corresponding textfields
merge_file_name = textField_name.value
file_password = textField_password1.value
# show warning when no filename is provided
if not merge_file_name or merge_file_name == ' ':
status_banner = "Please check the file name entered."
show_banner(e)
return None
# show warning if less than 2 files selected
if not e.files or len(e.files) < 2:
status_banner = "Please select at least 2 files."
show_banner(e)
return None
pick_files_dialog = ft.FilePicker(on_result=merge_pdfs)
page.overlay.append(pick_files_dialog)
...
# banner for when there is error in file name or file selection
page.banner = ft.Banner(
bgcolor=ft.colors.RED_500,
leading=ft.Icon(ft.icons.WARNING_AMBER_ROUNDED,
color=ft.colors.AMBER, size=40),
content=ft.Text(status_banner),
actions=[ft.TextButton("Dismiss", on_click=close_banner)])
</code></pre>
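<p>The empty banner is largely a plain Python scoping issue, independent of Flet: assigning to <code>status_banner</code> inside <code>merge_pdfs</code> creates a <em>local</em> variable, and the <code>ft.Text</code> was built once from the (still empty) module-level string anyway. A minimal illustration with no Flet dependency (the class is a stand-in for <code>ft.Text</code>):</p>

```python
message = ''

class FakeText:
    """Stand-in for ft.Text: holds a mutable .value like a Flet control."""
    def __init__(self, value):
        self.value = value

banner_text = FakeText(message)  # snapshot of '' taken at construction time

def set_status(new_message):
    # Without `global message`, `message = ...` would only bind a local name.
    global message
    message = new_message
    # Mutating the control itself is what actually changes what is shown.
    banner_text.value = new_message

set_status("Please select at least 2 files.")
print(banner_text.value)
```

<p>In the Flet app the equivalent fix is to keep a reference to the banner's <code>ft.Text</code> control, set its <code>.value</code> before calling <code>show_banner</code>, and then <code>page.update()</code>.</p>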
| <python><flet> | 2023-11-25 10:34:24 | 1 | 1,277 | k1dr0ck |
77,547,215 | 2,101,808 | How to get fastapi type conversion str->bool? | <p>I have an external service. It calls my API with GET parameters:</p>
<pre><code>class Misc(BaseModel):
# - whether to pop-up checkbox ("true" or "false")
popup: Optional[str] = None
# - whether an advertisement is pending to be displayed ("yes" or "no")
advertPending: Optional[str] = None
</code></pre>
<p>I want to convert <code>"true"</code> or <code>"false"</code>, <code>"yes"</code> or <code>"no"</code>, <code>"on"</code> or <code>"off"</code>, <code>"1"</code> or <code>"0"</code>, <code>"y"</code> or <code>"n"</code> to bool. How to do that?</p>
<pre><code>class Misc(BaseModel):
# - whether to pop-up checkbox ("true" or "false")
popup: bool = False
# - whether an advertisement is pending to be displayed ("yes" or "no")
advertPending: bool = False
</code></pre>
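<p>Independent of the pydantic version, the mapping itself can live in a small helper that you then plug into a validator (e.g. <code>@validator(..., pre=True)</code> in pydantic v1 or <code>@field_validator(..., mode="before")</code> in v2); note that pydantic's own bool coercion may already accept many of these string forms.</p>

```python
TRUTHY = {'true', 'yes', 'on', '1', 'y'}
FALSY = {'false', 'no', 'off', '0', 'n'}

def str_to_bool(value, default=False):
    """Map common string flags to bool; pass booleans through unchanged."""
    if isinstance(value, bool):
        return value
    if value is None:
        return default
    s = str(value).strip().lower()
    if s in TRUTHY:
        return True
    if s in FALSY:
        return False
    raise ValueError(f"cannot interpret {value!r} as a boolean")

print(str_to_bool('yes'), str_to_bool('off'), str_to_bool(None))
# True False False
```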
| <python><fastapi><python-typing><pydantic> | 2023-11-25 08:53:20 | 2 | 3,614 | eri |
77,547,205 | 3,834,483 | Not able to get precise float values from fragment shader output | <p>I am using OpenGL on desktop (via PyOpenGL) for an image processing operation, doing some floating-point math per pixel in the fragment shader. Afterwards, when I read the pixels using <code>glReadPixels</code>, the buffer values are not as expected.</p>
<p>Pasted below is the portion of the relevant code:</p>
<pre><code>vertex_src = """
#version 330 core
in vec3 a_position;
in vec2 vTexcoords;
out vec2 fTexcoords;
void main() {
gl_Position = vec4(a_position, 1.0);
fTexcoords = vTexcoords;
}
"""
fragment_src = """
#version 330 core
out vec4 out_color;
in vec2 fTexcoords;
void main() {
vec4 tempcolor = vec4(0.0);
float ran = 0.003921568627451;
for(int i = 0;i < 100;i++)
tempcolor = tempcolor + ran*ran;
out_color = tempcolor;
}
"""
# Routine calls for OpenGL setup...
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glDrawElements(GL_TRIANGLES, len(vertices) * 4, GL_UNSIGNED_SHORT,None)
buffer = glReadPixels(0, 0, 1280, 720, GL_RGB, GL_FLOAT,None)
print(buffer[1][1])
</code></pre>
<p>The print statement prints all 0's</p>
<pre><code>[0. 0. 0.]
</code></pre>
<p>If I do the same operation in Python as below</p>
<pre><code>import numpy as np
tempcolor = np.array([0.],dtype='float32')
ran = 0.003921568627451
for i in range(100):
tempcolor = tempcolor + ran * ran;
out_color = tempcolor
print(out_color)
</code></pre>
<p>I get the expected output</p>
<pre><code>[0.00153787 0.00153787 0.00153787]
</code></pre>
<p>Is this something to do with the precision of the fragment shader's output? I was hoping all the operations in the fragment shader are done in <code>float32</code> precision and that the output would also be <code>float32</code>.</p>
<p>Just to add: in the fragment shader, if I add the statement below after the for loop, I get some non-zero output:</p>
<pre><code>tempcolor += 0.002383698627451;
</code></pre>
<p>Output:</p>
<pre><code>[0.00392157 0.00392157 0.00392157]
</code></pre>
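<p>Both observations are consistent with the shader result being written into an 8-bit-per-channel default framebuffer (an assumption about the context, not something the code above proves): the math inside the shader can still be float32, but the result is quantized to 1/255 steps on write, and <code>glReadPixels</code> with <code>GL_FLOAT</code> only converts the stored 8-bit value back to float. The arithmetic checks out:</p>

```python
ran = 0.003921568627451          # 1/255, as in the shader
value = 100 * ran * ran          # what the shader loop accumulates
print(value)                     # ~0.00153787

# 8-bit quantization: round to the nearest multiple of 1/255.
print(round(value * 255) / 255)  # 0.0 -> explains the all-zero read-back

bumped = value + 0.002383698627451
print(round(bumped * 255) / 255) # ~0.00392157 -> matches the observed output
```

<p>If that is the cause, the usual remedy is to render into a floating-point render target (e.g. an FBO with a <code>GL_RGBA32F</code> texture) before reading back.</p>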
| <python><opengl><floating-point><fragment-shader><pyopengl> | 2023-11-25 08:48:18 | 1 | 472 | bsguru |
77,547,154 | 5,594,439 | Python Code to Check Multiple Files on Gitlab repository by group ID | <p>I want to check multiple files in a GitLab repository with the Python script below, to see whether or not each file exists.</p>
<p>The problem is that a file exists in the repository, but the result says the file is not found.</p>
<pre><code>import requests
from urllib.parse import quote_plus
import json
import os
# Set your GitLab API URL, private token, and group ID as environment variables
gitlab_api_url = os.getenv("GITLAB_API_URL", "https://gitlab.com/api/v4")
private_token = os.getenv("GITLAB_PRIVATE_TOKEN", "xxx")
group_id = os.getenv("GITLAB_GROUP_ID", "xxx")
def get_project_ids(api_url, private_token, group_id):
endpoint = f"{api_url}/groups/{group_id}/projects"
headers = {"PRIVATE-TOKEN": private_token}
response = requests.get(endpoint, headers=headers)
if response.status_code == 200:
projects = response.json()
return [(project['id'], project['name']) for project in projects]
else:
print(f"Failed to retrieve projects for group {group_id}. Status code: {response.status_code}")
return []
def check_files_in_project(api_url, private_token, group_id, project_id, project_name, filenames, output_filename):
# Specify the private token for authentication
headers = {"PRIVATE-TOKEN": private_token}
# Create a dictionary to store the output information
output_data = {
"group_id": group_id,
"project_id": project_id,
"project_name": project_name,
"files": []
}
for filename in filenames:
# Encode special characters in the filename and construct the URL
encoded_filename = quote_plus(filename)
endpoint = f"{api_url}/projects/{project_id}/repository/tree?recursive=1&path={encoded_filename}&ref=dev"
print(f"\nChecking for file: {filename} in project {project_name}")
print(f"Constructed URL: {endpoint}")
# Make the API request to get the repository tree
response = requests.get(endpoint, headers=headers)
# Check if the request was successful
if response.status_code == 200:
# Parse the JSON response
repository_tree = response.json()
# Check if the specified file exists in any folder
file_found = any(filename == item.get("name", "") for item in repository_tree)
print(f"File {filename} found in project {project_name}: {file_found}")
# Append file information to output_data
output_data["files"].append({"filename": filename, "file_found": file_found})
else:
print(f"Failed to retrieve repository tree for project {project_name}. Status code: {response.status_code}")
# Append output data to a single JSON file after checking each file
with open(output_filename, 'a') as json_file:
json.dump(output_data, json_file, indent=2)
json_file.write('\n') # Add newline to separate entries
# Get the list of project IDs and names in the group
projects_info = get_project_ids(gitlab_api_url, private_token, group_id)
# Specify the list of filenames you want to check
filenames_to_check = ["serverless.yaml", "serverless.yml"]
# Single output file for all projects
output_filename = "output_all_projects.json"
# Open the file with an initial '[' to start a JSON array
with open(output_filename, 'w') as json_file:
json_file.write('[')
# Iterate over each project ID and check for the files in any folder
for project_id, project_name in projects_info:
check_files_in_project(gitlab_api_url, private_token, group_id, project_id, project_name, filenames_to_check, output_filename)
# Close the file with a ']' to close the JSON array
with open(output_filename, 'a') as json_file:
json_file.write(']')
</code></pre>
<p>This the results on JSON file</p>
<pre><code>[{
"group_id": "gid",
"project_id": xxx,
"project_name": "name",
"files": []
}
{
"group_id": "gid",
"project_id": xxx,
"project_name": "name",
"files": []
}
{
"group_id": "gid",
"project_id": xxx,
"project_name": "names",
"files": []
}
</code></pre>
<p>note:</p>
<ul>
<li><code>xxx</code> is a placeholder for my personal data.</li>
</ul>
<p>Your help is very valuable to me, thank you.</p>
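One thing to double-check (this is an assumption about the cause): the tree endpoint's <code>path</code> parameter filters by directory, not by filename, so passing <code>serverless.yml</code> as <code>path</code> can return an empty list even when the file exists. A sketch that instead lists the whole tree recursively and matches client-side:

```python
# Hypothetical helper: given a recursive tree listing (the list of dicts
# returned by /projects/:id/repository/tree?recursive=true), report whether
# a file with the given name exists anywhere in the repository.
def file_in_tree(tree_items, filename):
    return any(
        item.get("type") == "blob" and item.get("name") == filename
        for item in tree_items
    )

# sample data shaped like the API response (made up for illustration)
sample_tree = [
    {"name": "deploy", "type": "tree", "path": "deploy"},
    {"name": "serverless.yml", "type": "blob", "path": "deploy/serverless.yml"},
]
print(file_in_tree(sample_tree, "serverless.yml"))   # True
print(file_in_tree(sample_tree, "serverless.yaml"))  # False
```

The real request would also need pagination (`per_page`/`page`), since large trees are returned in pages.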
| <python><gitlab><repository> | 2023-11-25 08:26:19 | 1 | 4,539 | Yuday |
77,547,060 | 13,000,229 | Fail to start Flask + Connexion + Swagger | <h3>Problem</h3>
<p>I initiated a Flask app (+ Connexion and Swagger UI) and tried to open <a href="http://127.0.0.1:5000/api/ui" rel="nofollow noreferrer">http://127.0.0.1:5000/api/ui</a>. The browser showed <code>starlette.exceptions.HTTPException: 404: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</code></p>
<h3>Setup</h3>
<pre><code>% pip install "connexion[flask, swagger-ui]"
% export FLASK_APP="app"
(Prepare files)
% flask run --debug
(Access http://127.0.0.1:5000/api/ui)
</code></pre>
<p>Result</p>
<ul>
<li>Python 3.12.0</li>
<li>connexion 3.0.2</li>
<li>Flask 3.0.0</li>
<li>swagger_ui_bundle 1.1.0</li>
<li>Werkzeug 3.0.1</li>
</ul>
<h3>Files</h3>
<p>Directory structure</p>
<pre><code>app/
__init__.py
openapi.yaml
hello.py
</code></pre>
<p><code>__init__.py</code></p>
<pre><code>from connexion import FlaskApp
from flask.app import Flask
from pathlib import Path
BASE_DIR = Path(__file__).parent.resolve()
def create_app() -> Flask:
flask_app: FlaskApp = FlaskApp(__name__)
app: Flask = flask_app.app
flask_app.add_api("openapi.yaml")
return app
</code></pre>
<p><code>openapi.yaml</code></p>
<pre><code>openapi: 3.0.3
info:
title: "test"
description: "test"
version: "1.0.0"
servers:
- url: "/api"
paths:
/hello:
get:
summary: "hello"
description: "hello"
operationId: "hello.say_hello"
responses:
200:
description: "OK"
content:
text/plain:
schema:
type: string
example: "hello"
</code></pre>
<p><code>hello.py</code></p>
<pre><code>def say_hello() -> str:
return 'Hello, world!'
</code></pre>
<h3>Error message</h3>
<p>Based on these settings, I believe I can see Swagger UI at <a href="http://127.0.0.1:5000/api/ui" rel="nofollow noreferrer">http://127.0.0.1:5000/api/ui</a>. However, I faced the error message below.</p>
<pre><code>Traceback (most recent call last):
File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 867, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 841, in dispatch_request
self.raise_routing_exception(req)
File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 450, in raise_routing_exception
raise request.routing_exception # type: ignore
File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/ctx.py", line 353, in match_request
result = self.url_adapter.match(return_rule=True) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/werkzeug/routing/map.py", line 624, in match
raise NotFound() from None
werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 1478, in __call__
return self.wsgi_app(environ, start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 1458, in wsgi_app
response = self.handle_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 1455, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 869, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 759, in handle_user_exception
return self.ensure_sync(handler)(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/connexion/apps/flask.py", line 245, in _http_exception
raise starlette.exceptions.HTTPException(exc.code, detail=exc.description)
starlette.exceptions.HTTPException: 404: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
</code></pre>
| <python><flask><swagger><werkzeug><connexion> | 2023-11-25 07:51:34 | 1 | 1,883 | dmjy |
77,547,004 | 10,216,028 | How to reduce the space between the x-ticks in matplotlib? | <p>I already checked <a href="https://stackoverflow.com/questions/44863375/how-to-change-spacing-between-ticks">this</a> but the answers did not help.</p>
<p>I have the following code that generates 4 plots:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
group_x = [5, 0, 1, 16, 3, 1, 2, 0, 8, 2, 11, 5]
group_y = [8, 10, 6, 0, 0, 5, 12, 0, 14, 2, 1, 7]
group_w = [1, 3, 13, 9, 0, 15, 6, 2, 3, 7, 1, 0]
group_z = [0, 1, 2, 0, 10, 12, 12, 0, 8, 12, 4, 6]
ranges = ['a', '(a, b)', '(b, c)', '(c, d)', '(d, e)', '(e, f)',
'(f, g)', '(g, h)', '(h, i)', '(i, j)', '(j, k)', 'k']
colors = ['red', 'blue', 'green', 'purple']
participants = ['Group X', 'Group Y', 'Group W', 'Group Z']
fig, axes = plt.subplots(2, 2, figsize=(12, 5))
c = 0
for i, ax_row in enumerate(axes):
for j, ax in enumerate(ax_row):
data = [group_x, group_y, group_w, group_z][c]
ax.text(.5, .9, f'{participants[i]}',
horizontalalignment='center',
transform=ax.transAxes)
bars = ax.bar(ranges, data, color=colors[c], width=0.2)
ax.set_yticks(np.arange(0, 21, 5))
ax.set_yticklabels(range(0, 21, 5), fontsize=9)
ax.set_xticks(np.arange(12))
ax.set_xticklabels(ranges, rotation=30, ha='right', fontsize=8.5)
ax.set_xlabel('Characters')
ax.set_ylabel('Frequency', fontsize=8)
for bar, score in zip(bars, data):
ax.text(bar.get_x() + bar.get_width() / 2, bar.get_height(), str(score), ha='center', va='bottom')
c += 1
plt.tight_layout()
plt.savefig('stackover.png', dpi=600, bbox_inches='tight')
plt.show()
</code></pre>
<p>The output is:</p>
<p><a href="https://i.sstatic.net/4JoDG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4JoDG.png" alt="enter image description here" /></a></p>
<p>I want to reduce the space between the x-axis ticks so that the bars and their corresponding x-ticks will be very close next to each other. Also, I want the <code>a</code> bar to be very close to the y-axis and reduce the space between the <code>k</code> bar and the right edge of the corresponding frame. How do I change my code to have the mentioned effects?</p>
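A couple of knobs that should help (a sketch, not a drop-in for the full script): widening the bars toward `width=1.0` closes the gaps between adjacent bars, and `ax.margins(x=...)` shrinks the padding between the outermost bars and the axes spines:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# width=0.8 instead of 0.2 makes the bars nearly touch their neighbors
ax.bar(range(12), [5, 0, 1, 16, 3, 1, 2, 0, 8, 2, 11, 5], width=0.8)
ax.margins(x=0.01)  # near-zero horizontal margin: first/last bars hug the spines
fig.tight_layout()
```

Applied inside the loop over the four axes, this should pull the `a` bar close to the y-axis and shrink the gap after the `k` bar.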
| <python><matplotlib> | 2023-11-25 07:29:18 | 2 | 455 | Coder |
77,546,937 | 8,248,194 | Rolling function from pandas to polars | <p>I have this function in pandas:</p>
<pre class="lang-py prettyprint-override"><code>def rolling_pd(
dataf: pd.DataFrame,
groupby_cols: Union[str, list],
column: str,
function: str = "mean",
rolling_periods: int = 1,
shift_periods: int = 1,
*args,
**kwargs,
) -> pd.Series:
return dataf.groupby(groupby_cols)[column].transform(
lambda d: (
d.shift(shift_periods)
.rolling(rolling_periods, min_periods=1)
.agg(function, *args, **kwargs)
)
)
</code></pre>
<p>I want to do the same thing with polars, but I haven't managed to, since I don't see an equivalent rolling method. Can you help me with this translation?</p>
| <python><pandas><python-polars> | 2023-11-25 06:59:15 | 1 | 2,581 | David Masip |
77,546,864 | 13,000,229 | (Connexion 3.0.2) ModuleNotFoundError: Please install connexion using the 'flask' extra | <h3>Problem</h3>
<p>I use connexion with Flask. Today I upgraded connexion from 2.14.2 to 3.0.2 and now see <code>ModuleNotFoundError: Please install connexion using the 'flask' extra</code>.</p>
<p><a href="https://connexion.readthedocs.io/en/latest/quickstart.html" rel="noreferrer">https://connexion.readthedocs.io/en/latest/quickstart.html</a></p>
<p>I checked the official documentation, which says "To leverage the FlaskApp, make sure you install connexion using the flask extra."</p>
<h3>Question</h3>
<p>How can I install connexion using the flask extra?<br />
The documentation says the command is <code>pip install connexion[<extra>]</code>, but I see an error message "no matches found: connexion[flask]".</p>
<pre><code>% pip install connexion[flask]
zsh: no matches found: connexion[flask]
</code></pre>
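The `no matches found` message comes from zsh rather than pip (an assumption, but it is zsh's characteristic glob error): unquoted square brackets are treated as a glob pattern, so the extras syntax never reaches pip. Quoting or escaping the argument avoids the expansion:

```shell
# quote the requirement so zsh does not glob-expand the brackets
pip install "connexion[flask]"
# or escape the brackets instead
pip install connexion\[flask\]
```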
<h3>Environment</h3>
<ul>
<li>Python 3.12.0</li>
<li>Flask 3.0.0</li>
<li>Connexion 3.0.2</li>
</ul>
| <python><flask><pip><connexion> | 2023-11-25 06:23:38 | 1 | 1,883 | dmjy |
77,546,793 | 1,084,174 | ERROR: No matching distribution found for tensorflow==2.5 | <p>I am trying to run the tutorial in <a href="https://colab.research.google.com/github/googlecodelabs/odml-pathways/blob/main/audio_classification/colab/model_maker_audio_colab.ipynb#scrollTo=wbMc4vHjaYdQ" rel="nofollow noreferrer">google colab</a></p>
<pre><code>!pip install tflite-model-maker tensorflow==2.5
</code></pre>
<p>Getting the error for cell-2:</p>
<pre><code>Collecting tflite-model-maker
Downloading tflite_model_maker-0.4.2-py3-none-any.whl (577 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 577.3/577.3 kB 7.5 MB/s eta 0:00:00
ERROR: Could not find a version that satisfies the requirement tensorflow==2.5 (from versions: 2.8.0rc0, 2.8.0rc1, 2.8.0, 2.8.1, 2.8.2, 2.8.3, 2.8.4, 2.9.0rc0, 2.9.0rc1, 2.9.0rc2, 2.9.0, 2.9.1, 2.9.2, 2.9.3, 2.10.0rc0, 2.10.0rc1, 2.10.0rc2, 2.10.0rc3, 2.10.0, 2.10.1, 2.11.0rc0, 2.11.0rc1, 2.11.0rc2, 2.11.0, 2.11.1, 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, 2.15.0rc0, 2.15.0rc1, 2.15.0)
ERROR: No matching distribution found for tensorflow==2.5
</code></pre>
<p>What's wrong and how to resolve it?</p>
| <python><tensorflow><google-colaboratory> | 2023-11-25 05:49:44 | 1 | 40,671 | Sazzad Hissain Khan |
77,546,592 | 1,492,613 | how can I merge 2 tables with outer join but without duplicating the missing row? | <p>For example, I have A=20 twice in df2 but only once in df1, and A=10 twice in df1 but only once in df2.
In the merge result, the single A=20 row from df1 and the single A=10 row from df2 get duplicated.
But I just want those cells to be NaN.
I expect a result like the following:</p>
<pre><code> A B_x C_x B_y C_y
0 10 0 2 4.0 7.0
1 10 4 7 NaN NaN
2 20 5 8 5.0 8.0
3 20 NaN NaN 6.0 9.0
4 30 6 9 NaN NaN
</code></pre>
<pre><code>data1 = {'A': [10, 10, 20, 30],
'B': [0, 4, 5, 6],
'C': [2, 7, 8, 9]}
data2 = {'A': [10, 20, 20],
'B': [4, 5, 6],
'C': [7, 8, 9]}
import pandas as pd
df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
print(df2)
display(pd.merge(df1, df2, how="outer", on="A"))
</code></pre>
<p>result:</p>
<pre><code> A B_x C_x B_y C_y
0 10 0 2 4.0 7.0
1 10 4 7 4.0 7.0
2 20 5 8 5.0 8.0
3 20 5 8 6.0 9.0
4 30 6 9 NaN NaN
</code></pre>
<p>how can I achieve this?</p>
<p><a href="https://stackoverflow.com/questions/74373425/merging-two-data-frames-based-on-a-common-column-with-repeated-values">Merging two data frames based on a common column with repeated values</a> does not ask the same question as mine. I do not know why anyone would refer to that question; apart from both using merge(), there is nothing similar. In that question the OP wants to <strong>change the shape</strong> of the merge result, so it is basically a groupby or pivot after or before the merge. I <strong>did not</strong> ask for that at all.</p>
<p>Please do not close my question by referring to that question. At least read my example before you rush to any harmful action.</p>
<p>It is super clear what I want: I do not want the outer merge to duplicate the value on one side when the other side has more rows, but I <strong>do want to keep the shape</strong>. I simply want to put NaN into the duplicated cells.</p>
<p>I can already do that by something like <code>merged.loc[merged.duplicated(["B_x", "C_x"]), ["B_x", "C_x"]] = None</code></p>
<p>But this is very clumsy when there are many columns.</p>
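A less clumsy way to get exactly that shape (a sketch using a helper column; the name `n` is arbitrary) is to number the duplicates within each key group on both sides with `groupby(...).cumcount()` and merge on the key plus that counter, so the i-th A=10 row of df1 only ever pairs with the i-th A=10 row of df2:

```python
import pandas as pd

data1 = {'A': [10, 10, 20, 30], 'B': [0, 4, 5, 6], 'C': [2, 7, 8, 9]}
data2 = {'A': [10, 20, 20], 'B': [4, 5, 6], 'C': [7, 8, 9]}
df1, df2 = pd.DataFrame(data1), pd.DataFrame(data2)

# number repeated keys 0, 1, 2, ... on each side
left = df1.assign(n=df1.groupby('A').cumcount())
right = df2.assign(n=df2.groupby('A').cumcount())

merged = (pd.merge(left, right, how='outer', on=['A', 'n'])
            .sort_values(['A', 'n'])
            .drop(columns='n')
            .reset_index(drop=True))
print(merged)
```

On the example data this produces the table in the question, with NaN in the cells where one side runs out of duplicates, and it scales to any number of columns without per-column fixups.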
| <python><pandas><dataframe> | 2023-11-25 03:53:30 | 0 | 8,402 | Wang |
77,546,452 | 22,932,995 | Kivy, Python: Layout with nested rules produces no output of child widgets | <p>I have a simple App that is supposed to use a ScreenManager for different screens. When I create one of my Screens and make it the app.root widget, everything works fine. But as soon as I put my Screen as a child inside the ScreenManager, it produces no output, as if the widgets were not there at all.</p>
<p>testScreenManager.py:</p>
<pre><code>from kivy.app import App
from kivy.uix.textinput import TextInput
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button
from kivy.graphics.vertex_instructions import Rectangle
from kivy.uix.screenmanager import ScreenManager, Screen
class MyScreenManager(ScreenManager):
pass
class myScreen(Screen):
pass
class myNavBoxes(BoxLayout):
pass
class myApp(App):
def build(self):
return MyScreenManager()
if __name__ == "__main__":
myApp().run()
</code></pre>
<p>my.kv:</p>
<pre><code><MyScreenManager>:
canvas.before:
Rectangle:
pos: self.pos
size: self.size
myScreen:
<myScreen>:
BoxLayout:
orientation: 'vertical'
size_hint: 0.8, 0.3
pos_hint: {'x':0.1, 'y':0.5}
TextInput:
id: username
hint_text: 'Username'
font_size: '20sp'
myNavBoxes:
<myNavBoxes>:
orientation: 'horizontal'
size_hint: 1, .2
pos_hint: {'x':.0, 'y':.0}
Button:
id: navButtonLogin
text: 'Login'
font_size: '15sp'
size_hint_y: None
</code></pre>
<p>This produces a blank white rectangle with no widgets inside it.</p>
<p>I tried a lot of things. One main concern is this: if I don't use custom widgets (which inherit directly from normal kivy classes) and use the base kivy classes instead, the code produces the expected output:</p>
<pre><code><MyScreenManager>:
canvas.before:
Rectangle:
pos: self.pos
size: self.size
Screen:
BoxLayout:
orientation: 'vertical'
size_hint: 0.8, 0.3
pos_hint: {'x':0.1, 'y':0.5}
TextInput:
id: username
hint_text: 'Username'
font_size: '20sp'
myNavBoxes:
<myNavBoxes>:
orientation: 'horizontal'
size_hint: 1, .2
pos_hint: {'x':.0, 'y':.0}
Button:
id: navButtonLogin
text: 'Login'
font_size: '15sp'
size_hint_y: None
</code></pre>
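One likely culprit to check (an assumption about the cause, but it matches the symptom that the base kivy classes work): in kv language, a child entry whose name begins with a lowercase letter is parsed as a property assignment rather than a widget, so `myScreen:` and `myNavBoxes:` may silently add nothing. Renaming the Python classes to start with a capital letter, and matching that in the kv file, would look like:

```
<MyScreenManager>:
    MyScreen:

<MyScreen>:
    # ... existing BoxLayout / TextInput rules ...
    MyNavBoxes:

<MyNavBoxes>:
    # ... existing Button rule ...
```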
| <python><kivy><kivy-language> | 2023-11-25 02:32:08 | 1 | 303 | Wamseln |
77,546,285 | 11,672,868 | pygame mask overlapping shapes of different color? | <p>How is it possible to create masks of differently colored shapes on a surface and check for overlap collision?</p>
<p>suppose I have a green circle and a red polygon on a surface. I want to know when they end up colliding while moving.</p>
<p>I tried something like this:</p>
<pre class="lang-py prettyprint-override"><code>circle_color = (0,255,0)
polygon_color = (255,0,0)
circle_mask = pygame.mask.from_threshold(SURFACE, circle_color)
polygon_mask = pygame.mask.from_threshold(SURFACE, polygon_color)
if circle_mask.overlap(polygon_mask, offset=(0,0)):
print("collision")
</code></pre>
<p>but collision doesn't get detected, I guess I'm missing something. what's a way to do this?</p>
<p>EDIT: I'm trying to get masks of shapes created with the pygame.draw() functions, not sprites or assets. Also, I want pixel-perfect collision because my polygons are not rectangles.</p>
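One subtlety when both shapes live on the same surface (an assumption about the setup): a given pixel can only hold one color, so two masks thresholded from the same surface can never both be set at the same pixel, and `overlap` with offset (0, 0) will never fire. Drawing each shape on its own transparent surface and masking those instead is a common pattern — a sketch with made-up sizes and positions:

```python
import pygame

# each shape gets its own per-pixel-alpha surface (sizes/positions illustrative)
surf_circle = pygame.Surface((100, 100), pygame.SRCALPHA)
surf_poly = pygame.Surface((100, 100), pygame.SRCALPHA)
pygame.draw.circle(surf_circle, (0, 255, 0), (50, 50), 30)
pygame.draw.polygon(surf_poly, (255, 0, 0), [(40, 40), (90, 40), (65, 90)])

# from_surface sets a bit wherever alpha exceeds the threshold -> pixel perfect
mask_circle = pygame.mask.from_surface(surf_circle)
mask_poly = pygame.mask.from_surface(surf_poly)

# offset = the other surface's position relative to this one; (0, 0) here
# because both sketch surfaces share the same origin
if mask_circle.overlap(mask_poly, (0, 0)):
    print("collision")
```

When the shapes move, only their surface positions change, and the offset passed to `overlap` becomes the difference of those positions.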
| <python><pygame><collision> | 2023-11-25 00:52:54 | 1 | 308 | K-FLOW |
77,546,065 | 1,492,613 | how to let the logical and operator (&) treat the NaN value depending on the other side's value? | <p>In pandas or numpy I want something like the following: True & NaN == True, False & False == False, NaN & NaN == NaN</p>
<p>What is the most efficient way to do this? So far I have to do it as:</p>
<pre><code>(a.fillna(True) & b.fillna(True)).where(~(a.isna() & b.isna()), None)
</code></pre>
<p>example:</p>
<pre><code>from itertools import product
a = pd.DataFrame((product([True, False, None], [True, False, None])))
display(a)
display((a[0].fillna(True) & a[1].fillna(True)).where(~(a[0].isna() & a[1].isna()), None))
</code></pre>
<p>out put is:</p>
<pre><code> 0 1
0 True True
1 True False
2 True None
3 False True
4 False False
5 False None
6 None True
7 None False
8 None None
0 True
1 False
2 True
3 False
4 False
5 False
6 True
7 False
8 None
dtype: object
</code></pre>
<p>I have 2 cases: A. most rows have NaN, and B. only a few rows have NaN.
I wonder what the best way to do this is in each of these 2 cases.</p>
<p><strong>performance</strong></p>
<pre><code>b = a.sample(int(1e5), weights=[1,1,1,1,1,1,1,1,0.01], ignore_index=True, replace=True)
c = a.sample(int(1e5), weights=[1,1,1,1,1,1,1,1,80], ignore_index=True, replace=True)
display(b.isna().all(axis="columns").sum())
# 117 full NaN row
display(c.isna().all(axis="columns").sum())
# 90879 full NaN rows
import timeit
timeit.timeit(lambda: b.all(1).mask(b.isna().all(1)), number=100)
# 2.4s
timeit.timeit(lambda: c.all(1).mask(c.isna().all(1)), number=100)
# 1.6s
timeit.timeit(lambda: b.stack().groupby(level=0).all().reindex(b.index), number=100)
#3.3s
timeit.timeit(lambda: c.stack().groupby(level=0).all().reindex(c.index), number=100)
#0.9s
</code></pre>
<p>So yes, as expected: the stack method first drops all NaN before computing, and is thus way faster in the mostly-NaN situation.</p>
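Spelling out the row-wise variant from the timing section end-to-end (assuming None/NaN are the only missing markers): `DataFrame.all` skips missing values, so it only remains to blank out the rows where every column is missing:

```python
from itertools import product
import pandas as pd

a = pd.DataFrame(list(product([True, False, None], [True, False, None])))

# all() skips missing values row-wise; mask() then re-inserts NaN for the
# rows where every column was missing
result = a.all(axis=1).mask(a.isna().all(axis=1))
print(result.tolist()[:3])  # [True, False, True]
```

This reproduces the output of the fillna/where version on the 9-row example.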
| <python><pandas><numpy> | 2023-11-24 23:24:48 | 1 | 8,402 | Wang |
77,545,935 | 136,598 | GAE: Submit task from one service to be executed by another service | <p>I have a GAE app with a Python3 service (the default) and a NodeJS18 service.</p>
<p>I currently submit App Engine tasks (not HTTP target tasks) from the Python3 service and those tasks are executed by that same Python3 service. I have a <code>default</code> task queue where those tasks are submitted.</p>
<p>Now, I would like to submit a task from the Python3 service and have it executed by the NodeJS18 service.</p>
<p>I'm using the <code>google.cloud.tasks_v2</code> Python client to submit tasks. You can select the queue to submit the task to, but I don't see a way to specify which GAE service should process the task. Is it possible to specify the service to process the task?</p>
<p>If I instead submit the task from the NodeJS18 service to the same default queue, will it then be executed by the NodeJS18 service?</p>
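For what it's worth, the Cloud Tasks Task message carries an `app_engine_routing` field inside `app_engine_http_request` that can name the target service (a queue-level routing override, if configured, takes precedence). A sketch of the dict form such a task might take — the service name is hypothetical:

```python
# Dict-form task body as could be passed to tasks_v2 CloudTasksClient.create_task
# (the service name "nodejs18" is a placeholder for the real service).
task = {
    "app_engine_http_request": {
        "http_method": "POST",
        "relative_uri": "/tasks/handle",
        "app_engine_routing": {"service": "nodejs18"},
    }
}
print(task["app_engine_http_request"]["app_engine_routing"]["service"])  # nodejs18
```

With no routing specified, tasks default to the service/version of the app that the queue targets, which would explain the current behavior of tasks running on the submitting service.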
| <python><node.js><google-cloud-platform><google-app-engine> | 2023-11-24 22:38:09 | 2 | 16,643 | minou |
77,545,900 | 14,818,993 | Hyperlink in colab markdown that opens within colab editor? | <p>There is a file in my Google Drive <code>/content/gdrive/MyDrive/Models.yaml</code>. When I write this file path in markdown in colab cell, it's clickable, and will open the file within the colab editor, and I can make and save changes in real-time. Like this:</p>
<p><a href="https://i.sstatic.net/vKLrd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vKLrd.png" alt="enter image description here" /></a></p>
<p>This is fine. But writing this way:</p>
<p><code>#@markdown - Edit /content/gdrive/MyDrive/INITIAL_MODELS.yaml file to make changes.</code></p>
<p>looks messy to me. Is there any way to shorten it like <code>[this](/content/gdrive/MyDrive/INITIAL_MODELS.yaml)</code>:</p>
<p>So, it becomes:</p>
<p><code>#@markdown - Edit this file to make changes</code></p>
<p>For me, writing this way won't work. It will open in new tab, and will give <code>403</code> google error.</p>
<p>Need Help.</p>
| <python><markdown><google-colaboratory> | 2023-11-24 22:23:57 | 0 | 308 | Huzaifa Arshad |
77,545,826 | 1,525,423 | Cast Array of Bytes to UInt8Array | <p>I have a pyarrow <a href="https://arrow.apache.org/docs/python/generated/pyarrow.BinaryArray.html" rel="nofollow noreferrer">BinaryArray</a> with exactly one byte at each location. These bytes are not UTF8, but "actual" binary data.</p>
<p>I'd like to cast this into a <a href="https://arrow.apache.org/docs/python/generated/pyarrow.UInt8Array.html" rel="nofollow noreferrer">UInt8Array</a>, but that seems difficult.</p>
<ul>
<li><code>array_of_bytes.cast(pa.uint8())</code> fails with <code>ArrowInvalid: Failed to parse string: '�' as a scalar of type uint8</code></li>
<li><a href="https://arrow.apache.org/docs/python/generated/pyarrow.UInt8Array.html" rel="nofollow noreferrer"><code>UInt8Array</code></a> does not (seem to) have a constructor usable on its own</li>
<li>I have taken a look at a<a href="https://arrow.apache.org/docs/python/generated/pyarrow.FixedSizeBinaryArray.html" rel="nofollow noreferrer"><code>FixedSizeBinaryArray</code></a>, but that also does not seem to help</li>
</ul>
<p>This Python workaround does the job, but is incredibly slow:</p>
<pre class="lang-py prettyprint-override"><code>pa.array([int.from_bytes(scalar.as_py()) for scalar in array_of_bytes], pa.uint8())
</code></pre>
<p>Is there a way to achieve this directly in pyarrow?</p>
<hr />
<p><strong>EDIT:</strong> this is a minimal, reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>import pyarrow as pa
array_of_bytes = pa.array([bytes([i]) for i in range(256)], pa.binary())
array_of_bytes.cast(pa.uint8()) # this fails
</code></pre>
| <python><pyarrow> | 2023-11-24 22:00:05 | 1 | 3,991 | Finwood |
77,545,743 | 1,543,042 | Python classes in multiple files - do all methods need to be imported into class definition? | <p>I've started splitting a class across several files/folders and have found that there are functions that are only needed to organize code and would never be called by anything higher up in the class hierarchy (much less by the user). Is there some way I can import these into just the functions they're needed in, rather than needing to import them into the class definition?</p>
<p>In my testing I'm getting a bit annoyed at repeatedly hitting <code>AttributeError: 'MyClass' object has no attribute 'my_low_level_function'</code>.</p>
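Yes — helpers that are not part of the class's interface do not need to be methods at all. A common pattern (a sketch; the names are made up) is to keep them as plain module-level functions next to the methods that use them and call them directly, so they never become attributes of the class:

```python
# mymodule.py (hypothetical): the helper is a plain function, not a method
def _scale(x):
    """Low-level helper; the leading underscore marks it module-private."""
    return x * 2

class MyClass:
    def compute(self, x):
        # call the helper as a function -- no self._scale, so no AttributeError
        return _scale(x) + 1

print(MyClass().compute(3))  # 7
```

When the methods live in a separate file, the helper can be imported at the top of that file (say, `from ._helpers import _scale` — a hypothetical layout) without ever being attached to the class.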
| <python><class><organization> | 2023-11-24 21:31:27 | 0 | 3,432 | user1543042 |
77,545,643 | 4,348,400 | Why am I getting duplicate node labels from Ciw? | <p>Here is a working example:</p>
<pre class="lang-py prettyprint-override"><code>import ciw
import matplotlib.pyplot as plt
import pandas as pd
max_time = 20
# Routing Matrix
R = [[0]*7 for i in range(7)]
R[0][1] = 1
R[1][0] = 1/3
R[1][2] = 1/3
R[1][3] = 1/3
R[2][1] = 1
R[3][1] = 1/2
R[3][5] = 1/2
R[5][3] = 1/3
R[5][4] = 1/3
R[5][6] = 1/3
R[6][5] = 1/2
# Arrival dists
arr_dists = [ciw.dists.Exponential(1/10)] * 3 + [None] * 4
N = ciw.create_network(
arrival_distributions=arr_dists,
service_distributions=[ciw.dists.Exponential(1)] * 7,
routing=R,
number_of_servers=[1]*7,
queue_capacities=[1]*7
)
ciw.seed(2018)
Q = ciw.Simulation(N)
Q.simulate_until_max_time(max_time)
slow_df = pd.DataFrame(
Q.get_all_records()
)
slow_df.plot.bar(x='node', y='time_blocked'); plt.show()
</code></pre>
<p>I get this plot, which seems to have duplicate node labels:</p>
<p><a href="https://i.sstatic.net/TnEAo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TnEAo.png" alt="enter image description here" /></a></p>
<p>And yet checking <code>slow_df.node.apply(type).unique()</code> gives only <code>array([<class 'int'>], dtype=object)</code>, so I am not sure what would make one integer <code>3</code> distinct from another, since they are value objects.</p>
<p>Similarly:</p>
<pre class="lang-py prettyprint-override"><code>>>> slow_df.node.unique()
array([3, 2, 1, 4, 6, 5])
</code></pre>
<p>Why am I getting repeat nodes in my plot?</p>
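A side note on the plot itself (not Ciw-specific): `DataFrame.plot.bar` draws one bar per row, not one per unique x value, so repeated node numbers each get their own tick. Aggregating before plotting collapses them — a sketch on made-up records:

```python
import pandas as pd

# stand-in for Q.get_all_records(): several records can share a node
records = pd.DataFrame({"node": [3, 2, 3, 1], "time_blocked": [0.5, 0.25, 0.25, 0.0]})

per_node = records.groupby("node")["time_blocked"].sum()
print(per_node.to_dict())  # {1: 0.0, 2: 0.25, 3: 0.75}
# per_node.plot.bar() would now show each node label exactly once
```

Whether `sum` is the right aggregation (versus mean or max) depends on what "time blocked per node" should mean for the simulation.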
| <python><pandas> | 2023-11-24 21:01:17 | 1 | 1,394 | Galen |
77,545,524 | 5,530,152 | How to set the file path of a remote jupyter notebook in VSCode? | <p>Is it possible to set the working directory of a <strong>remote</strong> Jupyter <code>.ipynb</code> notebook? This problem is similar to <a href="https://stackoverflow.com/questions/55491046/how-to-set-the-running-file-path-of-jupyter-in-vscode">How to set the running file path of jupyter in VScode?</a>, but the solution provided in that question looks like it applies <strong>only</strong> to local instances of Jupyter, <strong>not remote instances</strong>.</p>
<p>When connecting to a remote server and kernel from VSCode, the working directory where the remote jupyter session was started is used as the working directory.</p>
<p>This means that if the notebook instance is launched in <code>/home/foouser/src/</code>, relative imports inside each project folder will not behave as expected. Inside <code>project2</code>, <code>main.ipynb</code> will not be able to execute <code>from library import resource</code>. This will result in a <code>ModuleNotFoundError</code>.</p>
<pre><code>.
└── /home/foouser/src/
├── project1/
│ └── main.ipynb
├── project2/
│ ├── library/
│   │   └── resource.py
│ └── main.ipynb
└── project3/
├── ham/
│ └── spam.py
└── main.ipynb
</code></pre>
<h2>My Environment</h2>
<ul>
<li>VSCode 1.84.2 running on Mac Os</li>
<li>Jupyter Lab 4.0.9 running on Raspberry Pi OS (Debian Bookworm)</li>
<li>Python 3.11 running on the Pi</li>
<li>VSCode Jupyter Extension v2023.10.1100000000</li>
</ul>
<h2>Steps to Reproduce</h2>
<ol>
<li>Launch Jupyter on a raspberry pi in <code>/home/foouser/src/</code> - <code>$ jupyter lab --ip=192.168.1.99 --no-browser</code></li>
<li>Connect to raspberry pi from VS Code over SSH</li>
<li>Browse to an <code>.ipynb</code> file in <code>/home/foouser/src/project3/main.ipynb</code> and open</li>
<li>Set kernel to remote kernel by pasting URI to Jupyter server into VSCode</li>
<li>Try to run a cell with a relative import <code>from ham import spam</code> --> results in <code>ModuleNotFoundError</code></li>
</ol>
<h2>What I've Tried</h2>
<p><strong>Launching the notebook instance from <code>/home/foouser/src/project2</code></strong></p>
<p>This solves the problem of the relative imports in project 2, but makes it really hard to work on project1 or project3. I have to either start multiple instances of jupyter in each project directory, or stop one instance and start another. This is not at all ideal as the remote host is a raspberry pi with limited resources; everything kind of grinds to a halt.</p>
<p><strong>Setting <code>"jupyter.notebookFileRoot": "${fileDirname}"</code> in <code>.vscode/settings.json</code></strong></p>
<p><a href="https://stackoverflow.com/questions/55491046/how-to-set-the-running-file-path-of-jupyter-in-vscode">This solution</a> looks tantalizing, but after much faffing around, it appears that this may be a red-herring. I came across <a href="https://github.com/microsoft/vscode-jupyter/blob/85dcde7742a45e37eddc47d41de76ae728883aa3/package.nls.json#L177" rel="nofollow noreferrer">this comment</a> that states that the variable is only for <em>local</em> instances.</p>
<p><strike><strong>Creating a helper function in a python file to detect the working dir</strong></strike></p>
<p><strong>EDIT:</strong> This doesn't work reliably</p>
<p>This is a <a href="https://stackoverflow.com/a/73673295/5530152"><strong>fugly</strong> hack</a>, but it works. I'm sure this will bite me in the butt later.</p>
<p>dir_helper.py</p>
<pre class="lang-py prettyprint-override"><code>import os
def get_local_folder():
return os.path.dirname(os.path.realpath(__file__))
</code></pre>
<p>Then running the following cell:</p>
<pre class="lang-py prettyprint-override"><code>from dir_helper import get_local_folder
import os
os.chdir(get_local_folder())
</code></pre>
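A lighter-weight alternative to the chdir-based hack (still a workaround, and the path is an assumption): append the project folder to `sys.path` at the top of each notebook, which fixes the imports without touching the working directory:

```python
import sys

# hypothetical absolute path of this notebook's project folder on the remote host
project_dir = "/home/foouser/src/project2"
if project_dir not in sys.path:
    sys.path.insert(0, project_dir)

# `from library import resource` would now resolve regardless of the kernel's cwd
```

This keeps a single Jupyter instance usable across all three projects, at the cost of one hard-coded path per notebook.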
| <python><visual-studio-code><jupyter-notebook><jupyter> | 2023-11-24 20:26:10 | 3 | 933 | Aaron Ciuffo |
77,545,381 | 11,611,246 | I cannot import a Python package despite it being on the right path | <p>I wrote a Python package and uploaded it to <a href="https://pypi.org/project/graphab4py/" rel="nofollow noreferrer">PyPI</a> (note: I have been working on Python 3.11.3 and set Python >=3.11 as a requirement to make sure the package is compatible).</p>
<p>Subsequently, I installed it via <code>pip</code> by executing <code>pip install graphab4py</code>. However, I could not import it in a Python script (which is not the case with other packages I installed via <code>pip</code>). To be sure, I set up a virtual environment and installed the package there. Within the virtual environment, I started a Python session and tried to import the package, again with no success.</p>
<p>I stumbled upon some suggestions on how to list available packages and I tried the following in Python in my virtual environment:</p>
<pre><code>for dist in __import__('pkg_resources').working_set:
print(dist.project_name.replace('Python', ''))
</code></pre>
<p>output:</p>
<pre><code>setuptools
pip
graphab4py
</code></pre>
<p>It clearly lists the package there. So I tried <code>import graphab4py</code> again, and, once more, I got</p>
<pre><code>ModuleNotFoundError: No module named 'graphab4py'
</code></pre>
<p>So I suspect there might be something wrong with my package. Does anybody have a clue what the issue might be?</p>
<p><strong>Edit</strong></p>
<p>I found a hint that there might be an issue with the installation: where my packages are located, there is a folder <code>graphab4py-1.0.1.dist-info</code> but no folder named <code>graphab4py</code>. There was no error during the installation, though. The package code can also be viewed on <a href="https://github.com/ManuelPopp/graphab4py" rel="nofollow noreferrer">GitHub</a>.</p>
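For reference, a hedged way to tell "distribution recorded by pip" apart from "module actually importable", using only the standard library (the package name matches the question; the same check works for any package):

```python
import importlib.util

# find_spec returns None when Python cannot locate an importable
# module, even if pip lists a distribution of the same name --
# exactly the symptom when only the .dist-info folder got installed.
for name in ("os", "graphab4py"):
    spec = importlib.util.find_spec(name)
    print(name, "importable" if spec else "NOT importable")
```

If <code>find_spec</code> returns <code>None</code> while pip lists the distribution, the built wheel most likely shipped no package directory, which points at the package-discovery configuration of the build backend rather than at the environment.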
| <python><pip><setuptools><python-module><pypi> | 2023-11-24 19:48:51 | 1 | 1,215 | Manuel Popp |
77,545,359 | 9,415,280 | Can't train and sim my tensorflow model with original dataset (tf.data.dataset) but it works after I split it!? Same processing on both? | <p>I built a dataset for training a two-headed neural network: the first head uses an LSTM and the second a simple perceptron.</p>
<p>I process my dataset in two ways: one version is split into train and test sets, and the second version is left unsplit so that I can train, test, and finally simulate over the complete data.</p>
<p>Here is my code to do that:</p>
<pre><code># function to split the initial dataset into train and test sets:
def is_test(x, _):
return x % int(self.val_split * 100) == 0
def is_train(x, y):
return not is_test(x, y)
recover = lambda x, y: y
full_dataset
# Split the dataset for training.
test_set = full_dataset.enumerate().filter(is_test).map(recover)
# Split the dataset for testing/validation.
trainning_set = full_dataset.enumerate().filter(is_train).map(recover)
test_set = test_set.batch(batch_size).cache().prefetch(2)
trainning_set = trainning_set.batch(batch_size).cache().prefetch(2)
full_dataset = full_dataset.batch(batch_size).cache().prefetch(2)
</code></pre>
<p>making a check on each dataset:</p>
<pre><code>full_dataset:
<_PrefetchDataset element_spec=({'input1': TensorSpec(shape=(None, None, 3), dtype=tf.float32, name=None), 'input2': TensorSpec(shape=(None, 13), dtype=tf.float32, name=None)}, TensorSpec(shape=(None,), dtype=tf.float32, name=None))>
test_set:
<_PrefetchDataset element_spec=({'input1': TensorSpec(shape=(None, None, 3), dtype=tf.float32, name=None), 'input2': TensorSpec(shape=(None, 13), dtype=tf.float32, name=None)}, TensorSpec(shape=(None,), dtype=tf.float32, name=None))>
trainning_set:
<_PrefetchDataset element_spec=({'input1': TensorSpec(shape=(None, None, 3), dtype=tf.float32, name=None), 'input2': TensorSpec(shape=(None, 13), dtype=tf.float32, name=None)}, TensorSpec(shape=(None,), dtype=tf.float32, name=None))>
</code></pre>
<p>Now, why does training my model with the split sets work fine,</p>
<pre><code>model.fit(trainning_set, validation_data=data.test_set)
</code></pre>
<p>while training my model with all the data doesn't work and produces NaN?</p>
<pre><code>model.fit(full_dataset)
Epoch 1/5
160/160 - 2s - loss: nan - nash_sutcliffe: nan - 2s/epoch - 12ms/step
Epoch 2/5
160/160 - 0s - loss: nan - nash_sutcliffe: nan - 319ms/epoch - 2ms/step
...
</code></pre>
<p>I did some searching and testing but can't find what is different between these two versions of the dataset, and why one works and not the other.</p>
<p>Here are samples of my test_set and full_dataset before batching. As you can see, they are the same, except that in test_set the values of input1 are more rounded (?!) but still float32:</p>
<pre><code>for inputs, targets in test_set.take(1):
print("Feature:", inputs)
print("Label:", targets)
Feature: {'input1': <tf.Tensor: shape=(5, 3), dtype=float32, numpy=
array([[ 0. , 16.12, 0. ],
[ 0. , 17.42, 0.57],
[ 0. , 11.36, 13.97],
[ 0. , 10.55, 0.96],
[ 0. , 11.56, 0.24]], dtype=float32)>, 'input2': <tf.Tensor: shape=(13,), dtype=float32, numpy=
array([1.4391040e+02, 5.4850894e+03, 8.7901926e+00, 3.6657768e+01,
5.4554661e+01, 9.5567673e+01, 2.0000000e+00, 5.8438915e+01,
2.0383540e+03, 6.7381866e+01, 5.6437737e+01, 4.7759323e+00,
0.0000000e+00], dtype=float32)>}
Label: tf.Tensor(0.91, shape=(), dtype=float32)
for inputs, targets in full_dataset.take(1):
print("Feature:", inputs)
print("Label:", targets)
Feature: {'input1': <tf.Tensor: shape=(5, 3), dtype=float32, numpy=
array([[0.000e+00, 9.860e+00, 0.000e+00],
[0.000e+00, 1.308e+01, 0.000e+00],
[0.000e+00, 1.433e+01, 1.000e-02],
[0.000e+00, 1.630e+01, 0.000e+00],
[0.000e+00, 1.644e+01, 0.000e+00]], dtype=float32)>, 'input2': <tf.Tensor: shape=(13,), dtype=float32, numpy=
array([1.4391040e+02, 5.4850894e+03, 8.7901926e+00, 3.6657768e+01,
5.4554661e+01, 9.5567673e+01, 2.0000000e+00, 5.8438915e+01,
2.0383540e+03, 6.7381866e+01, 5.6437737e+01, 4.7759323e+00,
0.0000000e+00], dtype=float32)>}
Label: tf.Tensor(0.79, shape=(), dtype=float32)
</code></pre>
| <python><tensorflow><tensorflow2.0><tensorflow-datasets><tf.data.dataset> | 2023-11-24 19:41:57 | 1 | 451 | Jonathan Roy |
77,545,352 | 5,924,264 | Dividing multiple columns by another column gives NaN | <pre><code>In [24]: df = pd.DataFrame({"a": [1], "b": [2], "c": [3]})
In [25]: df[["b", "c"]] / df["a"]
Out[25]:
b c 0
0 NaN NaN NaN
</code></pre>
<p>I was expecting this to work, but it seems I do not understand what is going on behind the scenes. Why does this give <code>NaN</code>?</p>
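For context, a sketch of the alignment at play (assuming the intent is element-wise division of each column by column <code>a</code>): plain <code>/</code> aligns the Series' index labels against the frame's column labels, and since none match, every cell becomes NaN; <code>.div(..., axis=0)</code> aligns along the row index instead.

```python
import pandas as pd

df = pd.DataFrame({"a": [1], "b": [2], "c": [3]})

# Plain `/` aligns the Series' index (0) against the frame's
# columns ("b", "c"); no labels match, so everything is NaN.
misaligned = df[["b", "c"]] / df["a"]
assert misaligned.isna().all().all()

# .div with axis=0 aligns the Series against the row index instead.
result = df[["b", "c"]].div(df["a"], axis=0)
print(result)
```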
| <python><pandas><dataframe> | 2023-11-24 19:39:46 | 2 | 2,502 | roulette01 |
77,545,270 | 5,924,264 | How to divide column in dataframe by values in a dict according to some key? | <p>I have a dataframe <code>df</code> with the columns <code>delta</code> and <code>integer_id</code>. I have a dict <code>d</code> that maps <code>integer_id</code> to some floating point value. I want to divide each row's <code>delta</code> in <code>df</code> by the corresponding value from <code>d</code> for the <code>integer_id</code>, and if the row's <code>integer_id</code> doesn't exist in the dict, leave <code>delta</code> unchanged.</p>
<p>Here's an example:</p>
<pre><code>df = pd.DataFrame({
"integer_id": [1, 2, 3],
"delta": [10, 20, 30]
})
d = {1: 0.5, 3: 0.25}
</code></pre>
<p>The result should be</p>
<pre><code>df = pd.DataFrame({
"integer_id": [1, 2, 3],
"delta": [20, 20, 120] # note the middle element is unchanged
})
</code></pre>
<p>I tried <code>df["delta"] /= df.integer_id.map(d)</code>, but this will return <code>NaN</code> for the second row because <code>d</code> doesn't have the corresponding key. But something like</p>
<pre><code>df["delta"] /= df.integer_id.map(lambda x: d.get(x, 1))
</code></pre>
<p>will get what I need, but I'm wondering what other approaches there are for this case?</p>
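One close variant of the <code>.map</code> idea that stays vectorized and gives the same result as the lambda version: missing keys map to NaN, and <code>fillna(1)</code> turns those into a no-op divisor.

```python
import pandas as pd

df = pd.DataFrame({"integer_id": [1, 2, 3], "delta": [10, 20, 30]})
d = {1: 0.5, 3: 0.25}

# .map returns NaN for ids absent from d; fillna(1) makes those
# rows divide by 1, i.e. leaves delta unchanged.
df["delta"] /= df["integer_id"].map(d).fillna(1)
print(df["delta"].tolist())  # [20.0, 20.0, 120.0]
```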
| <python><pandas><dataframe> | 2023-11-24 19:16:38 | 4 | 2,502 | roulette01 |
77,545,240 | 569,313 | Sagemaker Studio Notebook - How to access OS Environ | <p>I am trying to set (at the terminal or in the notebook) an environment variable. I am running the following in the terminal or (with %%bash in the cell) notebook cell:</p>
<pre><code>echo "export MY_KEY='BLAH-BLAH123'" >> ~/.zshrc
source ~/.zshrc
</code></pre>
<p>or</p>
<pre><code>echo "export MY_KEY='BLAH-BLAH123'" >> ~/.bash_profile
source ~/.bash_profile
</code></pre>
<p>But when i run:</p>
<pre><code>import os
os.environ["MY_KEY"]
</code></pre>
<p>The key is not found. What am I doing wrong? It must be something to do with the image versus the terminal?</p>
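A likely explanation, offered as an assumption about the setup: the Jupyter kernel is an already-running process that never re-reads <code>~/.zshrc</code> or <code>~/.bash_profile</code>, and a <code>%%bash</code> cell runs in a child shell whose exports disappear when the cell ends. Setting the variable in the kernel process itself side-steps that:

```python
import os

# Set the variable in the kernel process; child processes spawned
# from the notebook inherit it, but it does not persist across
# kernel restarts.
os.environ["MY_KEY"] = "BLAH-BLAH123"
assert os.environ["MY_KEY"] == "BLAH-BLAH123"
```

For values that must survive restarts, something like a SageMaker lifecycle configuration is the usual place, though that is outside this sketch.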
| <python><bash><amazon-sagemaker> | 2023-11-24 19:09:29 | 1 | 1,832 | B_Miner |
77,544,933 | 4,019,495 | Pandas, numpy: return multiple columns in `np.select`? | <p>I have the following DataFrame.</p>
<pre><code> A B
0 1.0 4
1 2.0 5
2 NaN 6
</code></pre>
<p>I want to turn it into the following dataframe.</p>
<pre><code> A B val val_source
0 1.0 4 1.0 A
1 2.0 5 2.0 A
2 NaN 6 6.0 B
</code></pre>
<p>What's the easiest way to do this? I envision an <code>np.select</code> that returns multiple columns, something like</p>
<pre><code>conds = [df['A'].notna(), True]
choices = [df[['A']].assign(val_source='A'), df[['B']].assign(val_source='B')]
df[['val', 'val_source']] = np.select(conds, choices)
</code></pre>
<p>but this results in an error. I'm forced to do two separate <code>np.select</code> statements, even though they share the same <code>conds</code>.</p>
<pre><code>conds = [df['A'].notna(), True]
_choices_val_src = [
(df['A'], 'A'),
(df['B'], 'B'),
]
choices_val, choices_src = zip(*_choices_val_src)
df['val'] = np.select(conds, choices_val, default=np.nan)
df['val_source'] = np.select(conds, choices_src, default=np.nan)
</code></pre>
<p>Is there a cleaner way of doing this?</p>
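With only two branches keyed on the same condition, one hedged alternative skips <code>np.select</code> entirely:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1.0, 2.0, np.nan], "B": [4, 5, 6]})

# fillna picks A where present, else falls back to B;
# np.where labels which column each value came from.
df["val"] = df["A"].fillna(df["B"])
df["val_source"] = np.where(df["A"].notna(), "A", "B")
print(df)
```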
| <python><pandas><numpy> | 2023-11-24 18:01:19 | 2 | 835 | extremeaxe5 |
77,544,923 | 3,370,561 | Aggregate column with list of string with intersection of the elements with Polars | <p>I'm trying to aggregate some rows in my dataframe with a <code>list[str]</code> column. For each <code>id</code> I need the intersection of all the lists in the group. Not sure if I'm just overthinking it but I can't provide a solution right now. Any help please?</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(
{"id": [1,1,2,2,3,3],
"values": [["A", "B"], ["B", "C"], ["A", "B"], ["B", "C"], ["A", "B"], ["B", "C"]]
}
)
</code></pre>
<p>Expected output</p>
<pre><code>shape: (3, 2)
┌─────┬───────────┐
│ idx ┆ values │
│ --- ┆ --- │
│ i64 ┆ list[str] │
╞═════╪═══════════╡
│ 1 ┆ ["B"] │
│ 2 ┆ ["B"] │
│ 3 ┆ ["B"] │
└─────┴───────────┘
</code></pre>
<p>I've tried some stuff without success</p>
<pre class="lang-py prettyprint-override"><code>df.group_by("id").agg(
pl.reduce(function=lambda acc, x: acc.list.set_intersection(x),
exprs=pl.col("values"))
)
# shape: (3, 2)
# ┌─────┬──────────────────────────┐
# │ id ┆ values │
# │ --- ┆ --- │
# │ i64 ┆ list[list[str]] │
# ╞═════╪══════════════════════════╡
# │ 1 ┆ [["A", "B"], ["B", "C"]] │
# │ 3 ┆ [["A", "B"], ["B", "C"]] │
# │ 2 ┆ [["A", "B"], ["B", "C"]] │
# └─────┴──────────────────────────┘
</code></pre>
<p>Another one</p>
<pre class="lang-py prettyprint-override"><code>df.group_by("id").agg(
pl.reduce(function=lambda acc, x: acc.list.set_intersection(x),
exprs=pl.col("values").explode())
)
# shape: (3, 2)
# ┌─────┬──────────────────────┐
# │ id ┆ values │
# │ --- ┆ --- │
# │ i64 ┆ list[str] │
# ╞═════╪══════════════════════╡
# │ 3 ┆ ["A", "B", "B", "C"] │
# │ 1 ┆ ["A", "B", "B", "C"] │
# │ 2 ┆ ["A", "B", "B", "C"] │
# └─────┴──────────────────────┘
</code></pre>
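For reference, the semantics being asked for, sketched in plain Python rather than Polars (set intersection of all lists within each id group):

```python
from collections import defaultdict

rows = {"id": [1, 1, 2, 2, 3, 3],
        "values": [["A", "B"], ["B", "C"], ["A", "B"],
                   ["B", "C"], ["A", "B"], ["B", "C"]]}

# Collect each group's lists as sets, then intersect them all.
groups = defaultdict(list)
for i, vals in zip(rows["id"], rows["values"]):
    groups[i].append(set(vals))

out = {i: sorted(set.intersection(*sets)) for i, sets in groups.items()}
print(out)  # {1: ['B'], 2: ['B'], 3: ['B']}
```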
| <python><dataframe><list><aggregation><python-polars> | 2023-11-24 17:59:42 | 3 | 374 | 29antonioac |
77,544,843 | 3,290,799 | Iterate through rows in data frame with n rows and n columns and count the frequency of combinations 1:n | <p>Iterate through rows in data frame with <code>n</code> rows and 6 columns and count the frequency of combinations <code>1:n</code> within each row.</p>
<p>Non-working template code:</p>
<pre><code>import pandas as pd
import itertools
from collections import Counter
# create sample data
df = pd.DataFrame([
[2, 10, 18, 31, 41],
[12, 27, 28, 39, 42]
])
def get_combinations(row)
all_combinations[]
for i in range(1, len(df)+1):
result = list(itertools.combinations(df, i))
return all_combinations
# get all posssible combinations of values in a row
all_rows = df.apply(get_combinations, 1).values
all_rows_flatten = list(itertools.chain.from_iterable(all_rows))
# use Counter to count how many there are of each combination
count_combinations = Counter(all_rows_flatten)
print(all_combinations["count_combinations"])
</code></pre>
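A working version of the template above might look like this (row values are treated as ordered, so each combination is a tuple in row order; whether that matches the intended semantics is an assumption):

```python
import itertools
from collections import Counter

import pandas as pd

df = pd.DataFrame([
    [2, 10, 18, 31, 41],
    [12, 27, 28, 39, 42],
])

def get_combinations(row):
    # All combinations of 1..len(row) values taken from this row.
    values = row.tolist()
    combos = []
    for i in range(1, len(values) + 1):
        combos.extend(itertools.combinations(values, i))
    return combos

all_rows = df.apply(get_combinations, axis=1)
flat = itertools.chain.from_iterable(all_rows)
count_combinations = Counter(flat)
print(count_combinations.most_common(3))
```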
| <python><pandas><collections><combinations><python-itertools> | 2023-11-24 17:42:33 | 2 | 867 | RTrain3k |
77,544,756 | 8,930,149 | Django with mypy: How to resolve incompatible types error due to redefined field for custom `User` model class that extends "AbstractUser"? | <p>I have an existing Django project which uses a custom <code>User</code> model class that extends <code>AbstractUser</code>. For various important reasons, we need to redefine the <code>email</code> field as follows:</p>
<pre class="lang-py prettyprint-override"><code>class User(AbstractUser):
...
email = models.EmailField(db_index=True, blank=True, null=True, unique=True)
...
</code></pre>
<p>Typing checks via mypy have been recently added. However, when I perform the mypy check, I get the following error:</p>
<blockquote>
<p>error: Incompatible types in assignment (expression has type
"EmailField[str | int | Combinable | None, str | None]", base class
"AbstractUser" defined the type as "EmailField[str | int | Combinable,
str]") [assignment]</p>
</blockquote>
<p>How can I make it so that mypy allows this type reassignment? I don't wish to just use <code># type: ignore</code> because I wish to use its type protections.</p>
<p>For context, if I do use <code># type: ignore</code>, then I get dozens of instances of the following mypy error instead from all over my codebase:</p>
<blockquote>
<p>error: Cannot determine type of "email" [has-type]</p>
</blockquote>
<p>Here are details of my setup:</p>
<pre><code>python version: 3.10.5
django version: 3.2.19
mypy version: 1.6.1
django-stubs[compatible-mypy] version: 4.2.6
django-stubs-ext version: 4.2.5
typing-extensions version: 4.8.0
</code></pre>
| <python><django><mypy><django-stubs> | 2023-11-24 17:21:50 | 2 | 643 | sunw |
77,544,523 | 8,037,521 | pip: install packages from Python subprojects with pyproject.toml | <p>I am trying to find some best practice for the following situation: I have a Python project which has several Python subprojects. Each subproject contain <code>pyproject.toml</code> specifying the dependencies.</p>
<p>Is there some way to automatically link the dependencies of these different <code>pyproject.toml</code> files to the parent repository? Or should I manually specify a parent <code>pyproject.toml</code> where I would have to always copy the up-to-date subprojects' dependencies (not ideal)?</p>
<p>I guess it would be also possible to write a bash script that would go to the subprojects' directories one-by-one and run <code>pip install</code> for respective <code>pyproject.toml</code> but, again, does not seem like most elegant solution to me.</p>
<p>Note that I am asking solution without utilization of Poetry/Conda. I used to use Poetry for this but I am curious if it is manageable without this tool. Unfortunately, all solutions I managed to google about <code>pyproject.toml</code> in such more complex projects, assume usage of Poetry.</p>
<p>UPD:</p>
<p>In an ideal situation, I would like to create local wheels or, potentially, upload the packages to a private repository. For now I will be using the solution I've described (with <code>pip</code>), but the goal is to make it more suitable for packaging.</p>
| <python><pyproject.toml> | 2023-11-24 16:30:37 | 1 | 1,277 | Valeria |
77,544,500 | 3,729,397 | How can I install a package into a nix shell from source for debugging using flakes, equivalent to python setup.py develop | <p>Is there a way to include a package into a nix shell from source, ie the path linking not to a copy of the code in the nix store but to the actual source code, for debugging purposes?</p>
<p>I.e. I'm searching for a nix flake equivalent to running</p>
<pre><code>python setup.py develop
</code></pre>
<p>in a conda environment instead of install.</p>
<p>The closest I found is using a flake.nix like</p>
<pre><code>{
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
outputs = { self, nixpkgs }:
let
pkgs = import nixpkgs { system = "x86_64-linux"; };
python_library = pkgs.python3Packages.buildPythonPackage {
pname = "python_library";
version = "0.1";
src = /home/user_name/python_library;
doCheck = false;
};
in {
defaultPackage.x86_64-linux = pkgs.buildEnv {
name = "test-env";
paths = [
(pkgs.python3.withPackages(ps: [
ps.numpy
ps.scipy
ps.matplotlib
python_library
]))
pkgs.spyder
];
};
};
}
</code></pre>
<p>Where <code>python_library</code> contains new code I wish to debug, e.g.</p>
<pre><code>class Hello:
def __init__(self):
print("Hello!!! Hello!")
</code></pre>
<p>which I test via</p>
<pre><code>>>> from python_library.test import Hello
>>> a=Hello()
Hello!!! Hello!
</code></pre>
<p>For changes in the code to take effect, I need to run</p>
<pre><code>nix build
nix shell
</code></pre>
<p>which rebuilds the nix environment with the updated code, as long as the flake is not inside a git repository. However this rebuilding process can get quite annoying in more complex cases if there are a lot of bugs to fix and when always having to switch folder between the nix flake and the code to execute, so I was wondering if there is a way to write the flake in such a way that the nix store directly links to the source code, instead of copying it over, as is possible with anaconda?
None of the <code>nix develop</code> commands appeared to do anything like that for me.</p>
| <python><nix><nixos> | 2023-11-24 16:26:36 | 0 | 969 | Vera |
77,544,487 | 8,340,222 | How do I overwrite a BigQuery table (data and schema) from PySpark? | <p>I am trying to write a PySpark <code>DataFrame</code> to a BigQuery table. The schema for this table may change between job executions (columns may be added or omitted). So, I would like to overwrite this table each execution.</p>
<p>An example:</p>
<pre><code>df = spark.createDataFrame(data=[(1, "A")],schema=["col1","col2"])
df.write.format("bigquery")\
.option("temporaryGcsBucket","temporary-bucket-name")\
.mode("overwrite")\
.save(path="dataset_name.table_name")
</code></pre>
<p>When <code>dataset_name.table_name</code> doesn't already exist, the above works great and generates:
<a href="https://i.sstatic.net/4plBL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4plBL.png" alt="enter image description here" /></a></p>
<p>However, subsequent jobs may be as below:</p>
<pre><code>df.withColumnRenamed("col1", "col3").write.format("bigquery")\
.option("writeDisposition", "WRITE_TRUNCATE")\
.option("temporaryGcsBucket","temporary-bucket-name")\
.mode("overwrite")\
.save(path="dataset_name.table_name")
</code></pre>
<p>The above job does not generate what I want. I get no <code>col3</code> and <code>col1</code> still appears:
<a href="https://i.sstatic.net/8uZ2K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8uZ2K.png" alt="enter image description here" /></a></p>
<p>Even more disturbing, I get no error message.</p>
<p>So, what options should I specify so that the result in BigQuery is just <code>col2</code> and <code>col3</code> with appropriate data?</p>
<p>Basically, I want to mimic the SQL statement <code>CREATE OR REPLACE TABLE</code> from PySpark.</p>
| <python><pyspark><google-bigquery> | 2023-11-24 16:24:09 | 1 | 415 | Paco |
77,544,416 | 3,534,782 | Using a keras generator for streaming training data yields a strange tensor size mismatch error -- tensor flow code is too opaque to debug the issue | <p>I am training a neural network in TensorFlow, and because I was running out of memory when trying to load my whole training set (input images and "ground truth" images), I am trying to stream data using a generator so that only a few images are loaded at a time. My code takes each image and subdivides it into a set of many images. This is the code for the generator class I am using, based on a tutorial I found online:</p>
<pre><code>class DataGenerator(keras.utils.all_utils.Sequence):
'Generates data for Keras'
def __init__(self,
channel,
pairs,
prediction_size=200,
input_normalizing_function_name='standardize',
label="",
batch_size=1):
'Initialization'
self.channel = channel
self.prediction_size = prediction_size
self.batch_size = batch_size
self.pairs = pairs
self.id_list = list(self.pairs.keys())
self.input_normalizing_function_name = input_normalizing_function_name
self.label = label
self.on_epoch_end()
def __len__(self):
'Denotes the number of batches per epoch'
return int(np.floor(len(self.id_list) / self.batch_size))
def __getitem__(self, index):
'Generate one batch of data'
# Generate indexes of the batch
indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
print("{} Indexes is {}".format(self.label, indexes))
# Find list of IDs
subset_pair_id_list = [self.id_list[k] for k in indexes]
print("\t{} subset_pair_id_list is {}".format(self.label, subset_pair_id_list))
# Generate data
normalized_input_frames, normalized_gt_frames = self.__data_generation(subset_pair_id_list)
print("in __getitem, returning data batch")
return normalized_input_frames, normalized_gt_frames
def on_epoch_end(self):
'Updates indexes after each epoch'
self.indexes = list(range(len(self.id_list)))
def __data_generation(self, subset_pair_id_list):
'subdivides each image into an array of multiple images'
# Initialization
normalized_input_frames, normalized_gt_frames = get_normalized_input_and_gt_dataframes(
channel = self.channel,
pairs_for_training = self.pairs,
pair_ids=subset_pair_id_list,
input_normalizing_function_name = self.input_normalizing_function_name,
prediction_size=self.prediction_size
)
print("\t\t\t~~~In data generation: input shape: {}, gt shape: {}".format(normalized_input_frames.shape, normalized_gt_frames.shape))
        return normalized_input_frames, normalized_gt_frames
</code></pre>
<p>I am using this generator for a set of data that is used for training, and then also using another instance of it for validation, for example:</p>
<pre><code>training_data_generator = DataGenerator(
pairs=pairs_for_training,
prediction_size=prediction_size,
input_normalizing_function_name=input_normalizing_function_name,
batch_size=batch_size,
channel=channel,
label="training generator"
)
</code></pre>
<p>Then I start training, which I am running with model.fit:</p>
<pre><code> callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=patience, restore_best_weights=True)
learning_rate = 0.0001
opt = tf.keras.optimizers.Adam(learning_rate)
l = tf.keras.losses.MeanSquaredError()
print("Compiling model...")
model.compile(loss=l, optimizer=opt)
print('\tTraining model...')
with tf.device('/device:GPU:0'):
model_history = model.fit(
training_data_generator,
validation_data=validation_data_generator,
epochs=eps,
callbacks=[callback]
)
</code></pre>
<p>This is the last bit of the print outputs before the failure:</p>
<pre><code>Epoch 1/1000
training generator Indexes is [0]
training generator subset_pair_id_list is ['A']
Loading batch of 1 pairs...
['A']
num data is 1
~~~In data generation: input shape: (5, 100, 100, 1), gt shape: (5, 100, 100, 1)
in __getitem, returning data batch
</code></pre>
<p>This step fails, however, with a strange error about tensor size mismatches, which is due to my use of the generator (it didn't happen before without the generators):</p>
<pre><code> File "/root/micromamba/envs/training/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: All dimensions except 3 must match. Input 1 has shape [5 25 25 32] and doesn't match input 0 with shape [5 24 24 64].
[[node gradient_tape/model/concatenate/ConcatOffset (defined at /bin/train.py:633) ]] [Op:__inference_train_function_1982]
</code></pre>
<p>I tried using breakpoints to delve into the tensor flow code and figure out why it is generating these tensors but couldn't find the function that was actually making them, and couldn't get to the bottom of what's going on. You can see that each returned set of input and ground truth data has shape (5, 100, 100, 1), so I don't know where the 25, 24, 32, and 64 values would be coming from in that error message. What might be going on here? I was under the assumption that each batch was returned and used for training, then thrown out before the next batch was fetched by the generator, but it seems like some sort of concatenation operation is being attempted based on the error message.</p>
| <python><tensorflow><keras><streaming><generator> | 2023-11-24 16:08:04 | 1 | 419 | ekofman |
77,544,407 | 15,341,457 | Scrapy response returns an empty array | <p>I'm crawling this <a href="https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=hp" rel="nofollow noreferrer">page</a> with scrapy and I'm trying to extract all the rows of the main table.</p>
<p>The following <em>XPath</em> expression should give me the wanted result:</p>
<pre><code>//div[@id='TableWithRules']//tbody/tr
</code></pre>
<p>Testing with the Scrapy shell made me notice that this expression returns an empty array:</p>
<pre><code>#This response is empty: []
response.xpath("//div[@id='TableWithRules']//tbody").extract()
#This one is not:
response.xpath("//div[@id='TableWithRules']//thead").extract()
</code></pre>
<p>I guess the website owner tries to limit scraping of the table data, but is there any way to find a workaround?</p>
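One common cause, offered as an assumption rather than a verified fact about this site: the <code>&lt;tbody&gt;</code> shown in browser dev tools is inserted by the browser while building the DOM and is often absent from the raw HTML that Scrapy downloads. A stdlib sketch of that mismatch on a well-formed sample table:

```python
import xml.etree.ElementTree as ET

# Raw HTML tables often omit <tbody>; dev tools show one because
# the browser inserts it while constructing the DOM.
raw = "<table><thead><tr><th>h</th></tr></thead><tr><td>x</td></tr></table>"
table = ET.fromstring(raw)

assert table.find("thead") is not None   # present in the source
assert table.find("tbody") is None       # only exists in the browser DOM
print([td.text for td in table.iter("td")])  # ['x']
```

If that is the case here, an expression that doesn't insist on <code>tbody</code>, e.g. <code>//div[@id='TableWithRules']//table//tr</code>, should match the rows.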
| <python><shell><web-scraping><xpath><scrapy> | 2023-11-24 16:06:36 | 2 | 332 | Rodolfo |
77,544,292 | 5,995,696 | How to properly store and load my own embeddings in a Redis vector DB | <p>Here is simple code using Redis and embeddings, but it's not clear how I can build and load my own embeddings and then pull them from Redis for use in search:</p>
<pre><code>from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.redis import Redis
embeddings = OpenAIEmbeddings()  # instantiate the class, not the class itself
metadata = [
{
"user": "john",
"age": 18,
"job": "engineer",
"credit_score": "high"
}
]
texts = ["foo", "foo", "foo", "bar", "bar"]
rds = Redis.from_texts(
texts,
embeddings,
metadata,
redis_url="redis://localhost:6379",
index_name="users",
)
results = rds.similarity_search("foo")
print(results[0].page_content)
</code></pre>
<p>But I want to load text from e.g. a text file, create embeddings, and load them into Redis for later use. Something like this:</p>
<pre><code>from openai import OpenAI
client = OpenAI()
def get_embedding(text, model="text-embedding-ada-002"):
text = text.replace("\n", " ")
return client.embeddings.create(input = [text], model=model).data[0].embedding
</code></pre>
<p>Does anyone have a good example implementing this approach? I am also wondering about TTL for embeddings in Redis.</p>
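Independent of LangChain, an embedding is just a vector, and Redis vector fields store vectors as raw little-endian float32 byte strings; a stdlib sketch of that round-trip (on a real client the blob would go into a hash field via <code>hset</code>, and <code>expire</code> on the key is where a TTL would attach; key and field names here are hypothetical):

```python
import struct

embedding = [0.12, -0.5, 0.33, 0.99]  # stand-in for a real model output

# Pack to the little-endian float32 blob Redis vector fields expect,
# then unpack to verify the round-trip.
blob = struct.pack(f"<{len(embedding)}f", *embedding)
restored = list(struct.unpack(f"<{len(blob) // 4}f", blob))

# float32 loses a little precision relative to Python floats.
assert all(abs(a - b) < 1e-6 for a, b in zip(embedding, restored))
print(len(blob))  # 16 bytes: 4 floats * 4 bytes each
```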
| <python><openai-api><langchain><large-language-model> | 2023-11-24 15:44:25 | 2 | 1,711 | John Glabb |
77,544,226 | 1,221,661 | Add multiple rows to MultiIndex DataFrame at once | <p>Imagine that we have an experiment with multiple subjects and multiple trials per subject, and each trial produces an unknown number of "events" for two measurement methods A and B. Now we want to store that in a DataFrame using MultiIndex:</p>
<pre class="lang-py prettyprint-override"><code>tuples = [
('s1', 't1', 0, 1, 11), ('s1', 't2', 0, 4, 14), ('s1', 't2', 1, 5, 15), ('s2', 't1', 0, 6, 16),
('s2', 't1', 1, 7, 17), ('s2', 't2', 0, 8, 18), ('s2', 't3', 0, 9, 19),
]
df= pd.DataFrame.from_records(tuples, index=['subject', 'trial', 'event'],
columns=['subject', 'trial', 'event', 'A', 'B'])
print(df)
</code></pre>
<p>The DataFrame looks like this. Note how we have different numbers of trials and events for each subject.</p>
<pre><code> A B
subject trial event
s1 t1 0 1 11
t2 0 4 14
1 5 15
s2 t1 0 6 16
1 7 17
t2 0 8 18
t3 0 9 19
</code></pre>
<p>Now suppose that we want to add the events for subject 3, trial 1, method A:</p>
<pre class="lang-py prettyprint-override"><code>events = [5,6,7] # List of unknown length, generated by some algorithm
for i, event in enumerate(events):
df.loc[('s3', 't1', i), 'A'] = events[i]
print(df)
</code></pre>
<p>This works, but adding things to a data structure in a loop is always a bad idea, because it creates a lot of unnecessary copies of the whole dataframe. Imagine doing this for thousands of events and hundreds of trials...</p>
<pre><code> A B
subject trial event
s1 t1 0 1.0 11.0
t2 0 4.0 14.0
1 5.0 15.0
s2 t1 0 6.0 16.0
1 7.0 17.0
t2 0 8.0 18.0
t3 0 9.0 19.0
s3 t1 0 5.0 NaN
1 6.0 NaN
2 7.0 NaN
</code></pre>
<p>So there must be a better method... however, I couldn't find one. Here's what I tried so far, but none of the approaches really worked. Granted, I didn't really expect most of them to work. In the end I got frustrated and just tried random combinations of syntax, because I found the <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html" rel="nofollow noreferrer">"MultiIndex / advanced indexing"</a> documentation a bit hard to understand.</p>
<pre><code>df.loc[('s3','t2'), 'A'] = events # --> KeyError: ('s3', 't2')
df.loc[('s3','t2', slice(None)), 'A'] = events # --> ValueError: Must have equal len keys and value when setting with an iterable
df.loc[('s3','t2', [0,1,2]), 'A'] = events # --> KeyError: ('s3', 't2', [0, 1, 2])
df.loc[('s4','t2', [0,1,2]), 'A'] = events # --> KeyError: 's4'
df.loc['s3','t2',:] = events # KeyError: ('s3', 't2', slice(None, None, None))
df['s3', 't2', :] = events # ValueError: Length of values (3) does not match length of index (10)
df['s3', 't2'] = events # ValueError: Length of values (3) does not match length of index (10)
df[('s3','t2'):] = events # ValueError: could not broadcast input array from shape (3,) into shape (0,2)
</code></pre>
<p>Or is this just the wrong tool for the job? In the end, I want to compare A and B for some subsets of data, so Pandas' flexible querying should come in handy.</p>
<p>In the past, I did most of this using lists and dicts of RecordClass instances containing numpy arrays, and selected data using list comprehensions.</p>
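One pattern that avoids the row-by-row <code>.loc</code> writes, sketched under the assumption that all events for a trial are known at once: build the new rows as their own small frame with a matching MultiIndex and append them in a single <code>concat</code> (note the index sort order is not restored here).

```python
import pandas as pd

tuples = [
    ('s1', 't1', 0, 1, 11), ('s1', 't2', 0, 4, 14), ('s1', 't2', 1, 5, 15),
    ('s2', 't1', 0, 6, 16), ('s2', 't1', 1, 7, 17), ('s2', 't2', 0, 8, 18),
    ('s2', 't3', 0, 9, 19),
]
df = pd.DataFrame.from_records(tuples, index=['subject', 'trial', 'event'],
                               columns=['subject', 'trial', 'event', 'A', 'B'])

events = [5, 6, 7]  # list of unknown length, produced by some algorithm

# One small frame for all new rows, then a single append.
new_rows = pd.DataFrame(
    {'A': events},
    index=pd.MultiIndex.from_tuples(
        [('s3', 't1', i) for i in range(len(events))],
        names=['subject', 'trial', 'event']))
df = pd.concat([df, new_rows])
print(df.loc[('s3', 't1'), 'A'].tolist())  # [5, 6, 7]
```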
| <python><pandas><dataframe><multi-index> | 2023-11-24 15:30:34 | 1 | 1,439 | Fritz |
77,544,207 | 6,272,006 | Repricing Bonds used for Bootstrapping in QuantLib-Python | <p>I have data containing bond information, and I managed to use it to bootstrap a zero curve. To sanity-check that the zero curve is correct, I would like to reprice the bonds with the bootstrapped curve and get back the quoted prices. I am getting the following error in the repricing part, and I would appreciate any help:</p>
<pre><code>Traceback (most recent call last):
File "/Users/Library/CloudStorage/OneDrive-Personal/QuantLib Software/Valuations/IRS using Bond Bootstrapping/IRS using Bond Bootstrapping 2.py", line 369, in <module>
bondEngine = ql.DiscountingBondEngine(curve)
File "/usr/local/lib/python3.9/site-packages/QuantLib/QuantLib.py", line 25290, in __init__
_QuantLib.DiscountingBondEngine_swiginit(self, _QuantLib.new_DiscountingBondEngine(discountCurve))
TypeError: in method 'new_DiscountingBondEngine', argument 1 of type 'Handle< YieldTermStructure > const &'
</code></pre>
<p>Find below the code that I am running:</p>
<pre><code># Importing Libraries:
# The code imports necessary libraries:
# pandas for data manipulation, matplotlib.pyplot for plotting, and QuantLib (ql) for quantitative finance calculations.
import pandas as pd
import matplotlib.pyplot as plt
# Use the QuantLib or ORE Libraries
import QuantLib as ql
# Setting Evaluation Date:
# Sets the evaluation date
today = ql.Date(21, ql.November, 2023)
ql.Settings.instance().evaluationDate = today
# Calendar and Day Count:
# Creates a calendar object and specifies the day-count convention (Actual/365 Fixed)
calendar = ql.NullCalendar()
day_count = ql.Actual365Fixed()
# Settlement Days:
zero_coupon_settlement_days = 4
coupon_bond_settlement_days = 3
# Face Value
faceAmount = 100
data = [
('11-09-2023', '11-12-2023', 0, 99.524, zero_coupon_settlement_days),
('11-09-2023', '11-03-2024', 0, 96.539, zero_coupon_settlement_days),
('11-09-2023', '10-06-2024', 0, 93.552, zero_coupon_settlement_days),
('11-09-2023', '09-09-2024', 0, 89.510, zero_coupon_settlement_days),
('22-08-2022', '22-08-2024', 9.0, 96.406933, coupon_bond_settlement_days),
('27-06-2022', '27-06-2025', 10.0, 88.567570, coupon_bond_settlement_days),
('27-06-2022', '27-06-2027', 11.0, 71.363073, coupon_bond_settlement_days),
('22-08-2022', '22-08-2029', 12.0, 62.911623, coupon_bond_settlement_days),
('27-06-2022', '27-06-2032', 13.0, 55.976845, coupon_bond_settlement_days),
('22-08-2022', '22-08-2037', 14.0, 52.656596, coupon_bond_settlement_days)]
helpers = []
for issue_date, maturity, coupon, price, settlement_days in data:
    price = ql.QuoteHandle(ql.SimpleQuote(price))
    issue_date = ql.Date(issue_date, '%d-%m-%Y')
    maturity = ql.Date(maturity, '%d-%m-%Y')
    schedule = ql.MakeSchedule(issue_date, maturity, ql.Period(ql.Semiannual))
    helper = ql.FixedRateBondHelper(price, settlement_days, faceAmount, schedule,
                                    [coupon / 100], day_count, False)
    helpers.append(helper)
curve = ql.PiecewiseCubicZero(today, helpers, day_count)
# Enable Extrapolation:
# This line enables extrapolation for the yield curve, allowing it to provide
# rates beyond the observed data points, which can be useful for pricing or
# risk-management purposes.
curve.enableExtrapolation()
# Zero Rate, Forward Rate and Discount Factor Calculation:
# Calculates and prints the zero rate, forward rate and discount factor at a
# specific future date (May 28, 2024) using the constructed yield curve.
date = ql.Date(28, ql.May, 2024)
zero_rate = curve.zeroRate(date, day_count, ql.Annual).rate()
forward_rate = curve.forwardRate(date, date + ql.Period(1, ql.Years), day_count, ql.Annual).rate()
discount_rate = curve.discount(date)
print("Zero rate as at 28.05.2024: " + str(round(zero_rate*100, 4)) + "%")
print("Forward rate as at 28.05.2024: " + str(round(forward_rate*100, 4)) + "%")
print("Discount factor as at 28.05.2024: " + str(round(discount_rate, 4)))
# Print the Zero Rates, Forward Rates and Discount Factors at node dates
# print(pd.DataFrame(curve.nodes()))
node_data = {'Date': [],
             'Zero Rates': [],
             'Forward Rates': [],
             'Discount Factors': []}
for dt in curve.dates():
    node_data['Date'].append(dt)
    node_data['Zero Rates'].append(curve.zeroRate(dt, day_count, ql.Annual).rate())
    node_data['Forward Rates'].append(curve.forwardRate(dt, dt + ql.Period(1, ql.Years), day_count, ql.Annual).rate())
    node_data['Discount Factors'].append(curve.discount(dt))
node_dataframe = pd.DataFrame(node_data)
print(node_dataframe)
node_dataframe.to_excel('NodeRates.xlsx')
# Printing Yearly Zero Rates:
# Calculates and prints the zero rate at yearly steps
# using the constructed yield curve.
maturity_date = calendar.advance(today, ql.Period(1, ql.Years))
current_date = today
while current_date <= maturity_date:
    zero_rate = curve.zeroRate(current_date, day_count, ql.Annual).rate()
    print(f"Date: {current_date}, Zero Rate: {zero_rate}")
    current_date = calendar.advance(current_date, ql.Period(1, ql.Years))
# Creating Curve Data for Plotting:
# Creates lists of curve dates and zero rates for plotting,
# calculating the zero rate for each year up to 15 years from the current date.
curve_dates = [today + ql.Period(i, ql.Years) for i in range(15)]
curve_zero_rates = [curve.zeroRate(date, day_count, ql.Annual).rate()
                    for date in curve_dates]
# Converting ql.Date to Numerical Values: (years from today)
# Converts the curve dates (ql.Date objects) to numerical values representing years from the current
# date. This is done to prepare the data for plotting on the x-axis.
numeric_dates = [(date - today) / 365 for date in curve_dates]
# Plotting:
# Creates a plot showing the zero rates over time.
# The x-axis represents the years from the current date, and the y-axis represents the rates.
# The plot is labeled, grid lines are added, and the visualization is displayed using matplotlib.
plt.figure(figsize=(10, 6))
plt.plot(numeric_dates, curve_zero_rates, marker='', linestyle='-', color='b', label='Zero Rates')
plt.title('Zero Rates')
plt.xlabel('Years from Today')
plt.ylabel('Rate')
plt.legend()
plt.grid(True)
plt.xticks(rotation=0)
plt.tight_layout()
plt.show()
tenors = ['3M', '6M', '9M', '1Y', '2Y', '3Y', '5Y', '7Y', '10Y', '15Y']
# Print the Zero Rates, Forward Rates, and Discount Factors at Instrument maturity dates
node_data = {'Maturity Date': [],
             'Tenors': [],
             'Zero Rates': [],
             'Forward Rates': [],
             'Discount Factors': []}
for tenor in tenors:
    maturity_date = calendar.advance(today, ql.Period(tenor), ql.ModifiedFollowing)  # Calculate the maturity date
    node_data['Maturity Date'].append(maturity_date)
    node_data['Tenors'].append(tenor)
    node_data['Zero Rates'].append(curve.zeroRate(maturity_date, day_count, ql.Annual).rate())
    node_data['Forward Rates'].append(curve.forwardRate(maturity_date, maturity_date + ql.Period(0, ql.Years), day_count, ql.Annual).rate())
    node_data['Discount Factors'].append(curve.discount(maturity_date))
node_dataframe = pd.DataFrame(node_data)
print(node_dataframe)
node_dataframe.to_excel('NodeRates.xlsx')
# Create a DataFrame to store bond results
bond_results = {'Issue Date': [],
                'Maturity Date': [],
                'Coupon Rate': [],
                'Price': [],
                'Settlement Days': [],
                'Yield': [],
                'Clean Price': [],
                'Dirty Price': []}
# Calculate bond prices and yields
for issue_date, maturity, coupon, price, settlement_days in data:
    price = ql.QuoteHandle(ql.SimpleQuote(price))
    issue_date = ql.Date(issue_date, '%d-%m-%Y')
    maturity = ql.Date(maturity, '%d-%m-%Y')
    schedule = ql.MakeSchedule(issue_date, maturity, ql.Period(ql.Semiannual))
    bondEngine = ql.DiscountingBondEngine(curve)
    bond = ql.FixedRateBond(settlement_days, faceAmount, schedule, [coupon / 100], day_count)
    bond.setPricingEngine(bondEngine)
    # Calculate bond yield, clean price, and dirty price
    bondYield = bond.bondYield()
    bondCleanPrice = bond.cleanPrice()
    bondDirtyPrice = bond.dirtyPrice()
    # Append the results to the DataFrame
    bond_results['Issue Date'].append(issue_date)
    bond_results['Maturity Date'].append(maturity)
    bond_results['Coupon Rate'].append(coupon)
    bond_results['Price'].append(price.value())
    bond_results['Settlement Days'].append(settlement_days)
    bond_results['Yield'].append(bondYield)
    bond_results['Clean Price'].append(bondCleanPrice)
    bond_results['Dirty Price'].append(bondDirtyPrice)
# Create a DataFrame from the bond results
bond_results_df = pd.DataFrame(bond_results)
# Print the results
print(bond_results_df)
</code></pre>
| <python><finance><quantitative-finance><quantlib> | 2023-11-24 15:26:54 | 1 | 303 | ccc |
77,544,146 | 18,904,265 | Is there a way to get a warning in Powershell if my virtual environment is not activated? | <p>Is there some kind of tool that warns me if I want to use pip when no virtual environment is activated? The only thing installed in my global python is pipx and I want to keep it that way.</p>
<p>I am using PowerShell in the Windows Terminal App, so it would need to be compatible with that.</p>
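<p>For the detection itself, Python can tell whether a virtual environment is active by comparing <code>sys.prefix</code> with <code>sys.base_prefix</code>; a PowerShell prompt hook or a pip wrapper could call a guard like this (function names are illustrative, not from any tool):</p>

```python
# Sketch: detect an active virtual environment from within Python.
# Inside a venv, sys.prefix differs from sys.base_prefix.
import sys

def venv_active() -> bool:
    """True when running inside a virtual environment."""
    return sys.prefix != sys.base_prefix

def pip_guard_message() -> str:
    # What a wrapper could print before letting `pip install` proceed
    if venv_active():
        return "OK: virtual environment active"
    return "WARNING: no virtual environment is active!"

print(pip_guard_message())
```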
| <python><powershell> | 2023-11-24 15:17:00 | 2 | 465 | Jan |
77,544,094 | 1,188,318 | fuzzy matching address data | <p>I have the task of matching two lists of names and addresses - represented in two database tables in PostgreSQL.</p>
<p>Address data can be strings like <em>Otto-Johannsen-Straße 7</em> or even <em>Otto-Johannsen-Str. 7 Wohnung oben</em>, which have to match <em>Otto-Johannsen-Str. 7</em>. Names can be things like <em>Antje's Hus</em>, which should match with some probability to <em>Antje</em>, or <em>Haus am Meer</em>, which should match <em>Hotel Haus am Meer</em>.</p>
<p>So it is about fuzzy matching - but soundex() and even levenshtein() won't help much, because it's parts of strings that have to be taken into account. With levenshtein(), for instance, the best match for <em>Abendsonne</em> was <em>Undine</em> - but the better match for my data would have been <em>Hotel Abendsonne</em>.</p>
<p>I envision to have some probability measure with matches - so my result should be a list of matches with probabilities ideally.</p>
<p>How should I approach this problem - which matching algorithms should I use?
And is this a task I would do directly in PostgreSQL - or is it maybe a better approach to use Python?</p>
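<p>One standard-library starting point for the probability-style scores described above: combine <code>difflib</code>'s full-string ratio with the best per-token ratio, so a query matching one token of a longer candidate still scores highly. This is a sketch, not a full solution - for database-side matching, PostgreSQL's trigram similarity (<code>pg_trgm</code>) would scale better:</p>

```python
# Sketch: token-aware fuzzy score in pure Python.
# Edit distance alone can rank "Undine" above "Hotel Abendsonne";
# taking the best per-token ratio guards against that.
from difflib import SequenceMatcher

def fuzzy_score(query: str, candidate: str) -> float:
    q, c = query.lower(), candidate.lower()
    full = SequenceMatcher(None, q, c).ratio()
    # best score of the query against any single token of the candidate
    token_best = max(
        (SequenceMatcher(None, q, tok).ratio() for tok in c.split()),
        default=0.0,
    )
    return max(full, token_best)

candidates = ["Undine", "Hotel Abendsonne"]
ranked = sorted(candidates, key=lambda c: fuzzy_score("Abendsonne", c), reverse=True)
print(ranked)  # "Hotel Abendsonne" ranks first: its token "abendsonne" matches exactly
```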
| <python><postgresql> | 2023-11-24 15:07:32 | 2 | 3,749 | sers |
77,543,506 | 2,473,382 | Pydantic use alias and initial name of a field interchangeably | <p>If I create a Pydantic model with a field having an alias, I would like to be allowed to use the initial name or the alias interchangeably. This is possible when creating an object (thanks to <code>populate_by_name=True</code>), but not when using the object.</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, ConfigDict, Field
class Resource(BaseModel):
name: str = Field(alias="identifier")
model_config = ConfigDict(populate_by_name=True)
r1 = Resource(name="a name") # works
r2 = Resource(identifier="a name") # works thanks to populate_by_name=True
print(r1.name) # works
print(r2.identifier) # AttributeError: 'Resource' object has no attribute 'identifier'
</code></pre>
<p>Is this at all possible, and if yes, how?</p>
<p>The alternative would be to have a <code>@computed_field</code> (<code>identifier</code>) which would just return the attribute <code>name</code> and no alias. This is less clean semantically.</p>
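<p>The read side of that alternative can be sketched without any alias at all: expose <code>identifier</code> as a read-only property that forwards to <code>name</code>. Shown here with a plain dataclass so the mechanism is clear; on a pydantic v2 model the same <code>@property</code> (or a <code>@computed_field</code>) would make <code>r2.identifier</code> resolve:</p>

```python
# Sketch: the alias as a read-only view on the canonical attribute.
# A dataclass stands in for the pydantic model to show the mechanism.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str

    @property
    def identifier(self) -> str:
        return self.name  # both spellings now work for reads

r = Resource(name="a name")
print(r.name, r.identifier)
```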
| <python><pydantic><pydantic-v2> | 2023-11-24 13:30:40 | 1 | 3,081 | Guillaume |
77,543,426 | 7,089,108 | Model and parameters in symfit package via loop | <p>I would like to add more and more model equations, and thus also parameters, via a loop. I tried it, but so far it has only led to errors.</p>
<pre><code>x, y_1, y_2 = variables('x, y_1, y_2')
a = Parameter('a', min=0.0)
b = Parameter('b')
d = Parameter('d')
c_1, c_2 = parameters('c_1, c_2')
#a, b, c_1, c_2, d = parameters('a, b, c_1, c_2, d')
if 1:
    model = Model({
        y_1: a * exp(-x) + c_1 + b * x/(x**2 + d**2),
        y_2: a * exp(-x) + c_2 + b * x/(x**2 + (d - 1)**2),
    })
</code></pre>
<p>Ideally, I would like to have something like this, but so far I only got errors:</p>
<pre><code>for i in range(1, 6):
equations[f'y_{i+2}'] = a * exp(-x) + parameters(f'c_{i+2}') + b * x / (x**2 + (d - i)**2)
model = Model(equations)
</code></pre>
<p>Any ideas? Or is there another python package which supports a loop on a Model?</p>
<p>Edit:</p>
<pre><code># Create variables
x = variables('x')
ys = variables(','.join(f'y_{i}' for i in range(1, 3)))
# Create parameters
a = Parameter('a', min=0.0)
b, d = parameters('b, d')
cs = parameters(','.join(f'c_{i}' for i in range(1, 3)))
# Create model dictionary
model_dict = {
    y: a * exp(-2 * 0.3 * x) + c + b * x/(x**2 + d**2)
    for y, c in zip(ys, cs)
}
</code></pre>
<p>This code for example gives the following error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[41], line 18
15 cs = parameters(','.join(f'c_{i}' for i in range(1, 3)))
17 # Create model dictionary
---> 18 model_dict = {
19 y: a * exp(-2 * 0.3 * x) + c + b * x/(x**2 + d**2)
20 for y, c in zip(ys, cs)
21 }
23 xdata = np.arange(0,1256,pixel_size_nm)
25 ydata = [ima0_avg.data[i, :] for i in range(2, len(ima0_avg.data))]
Cell In[41], line 19, in <dictcomp>(.0)
15 cs = parameters(','.join(f'c_{i}' for i in range(1, 3)))
17 # Create model dictionary
18 model_dict = {
---> 19 y: a * exp(-2* 0.3 * x) + c + b * x/(x**2 + d**2)
20 for y, c in zip(ys, cs)
21 }
23 xdata = np.arange(0,1256,pixel_size_nm)
25 ydata = [ima0_avg.data[i, :] for i in range(2, len(ima0_avg.data))]
TypeError: can't multiply sequence by non-int of type 'float'
</code></pre>
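<p>A note on the traceback: <code>variables('x')</code> returns a tuple even for a single name, so <code>x = variables('x')</code> binds <code>x</code> to a 1-tuple, and <code>0.3 * x</code> then multiplies a sequence by a float - exactly the <code>TypeError</code> shown. Unpacking with <code>x, = variables('x')</code> should fix it. The mechanism is plain Python tuple behavior, demonstrated here with a stand-in for <code>symfit.variables</code>:</p>

```python
# Sketch: why `x = variables('x')` breaks -- variables() returns a tuple.
def variables(names):  # stand-in for symfit.variables, which also returns a tuple
    return tuple(n.strip() for n in names.split(','))

x = variables('x')  # x is ('x',), a 1-tuple
try:
    0.3 * x  # float * tuple
except TypeError as e:
    print("reproduced:", e)  # can't multiply sequence by non-int of type 'float'

x, = variables('x')  # unpack the single element instead
print(x)  # now a single object, usable in arithmetic expressions
```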
| <python><curve-fitting><scipy-optimize><model-fitting><symfit> | 2023-11-24 13:14:53 | 1 | 433 | cerv21 |
77,543,326 | 7,658,051 | Cannot handle properly csv.writer escapechar and quoting parameters | <p>I am developing a python script which,
given:</p>
<ul>
<li>the path of a csv file</li>
<li>a list of names of some of the columns of the csv file</li>
<li>a string to-be-replaced (A)</li>
<li>a string to-replace-with (B)</li>
</ul>
<p>should:<br>
substitute string A with string B in the cells of the indicated columns.</p>
<p>However, I am having trouble writing the changed rows to the output csv, because csv.writer encloses the modified lines in double quotes, which I don't want, and I don't understand how to handle the quoting.</p>
<p>Note: I cannot use pandas.</p>
<h2>example:</h2>
<p>source file:</p>
<pre><code>code1;code2;money1;code3;type_payment;money2
74;1;185.04;10;AMEXCO;36.08
74;1;8.06;11;MASTERCARD;538.30
74;1;892.46;12;VISA;185.04
74;1;75.10;15;MAESTRO;8.06
74;1;63.92;16;BANCOMAT;892.46
</code></pre>
<p>desired output:</p>
<pre><code>code1;code2;money1;code3;type_payment;money2
74;1;185,04;10;AMEXCO;36,08
74;1;8,06;11;MASTERCARD;538,30
74;1;892,46;12;VISA;185,04
74;1;75,10;15;MAESTRO;8,06
74;1;63,92;16;BANCOMAT;892,46
</code></pre>
<p>actual output:</p>
<pre><code>code1;code2;money1;code3;type_payment;money2
"74;1;185,04;10;AMEXCO;36,08"
"74;1;8,06;11;MASTERCARD;538,30"
"74;1;892,46;12;VISA;185,04"
"74;1;75,10;15;MAESTRO;8,06"
"74;1;63,92;16;BANCOMAT;892,46"
</code></pre>
<p>my script:</p>
<pre><code>result = {}
csv_file_path = 'myreport.csv'
columns_to_process = ['money1', 'money2']
string_to_be_replaced = "."
string_to_replace_with = ","
mydelimiter = ";"
#--------------------------
# specific import for csv
import csv, io
# import operator
import shutil
# specific import to manage errors
import os, traceback
# define custom errors
class DelimiterManagementError(Exception):
    """Raised if a row split into single cells has a length different from the length of the header,
    assuming the delimiter does not appear inside the cells of the header."""
# check file existence
if not os.path.isfile(csv_file_path):
    raise IOError("csv_file_path is not valid or does not exist: {}".format(csv_file_path))
# check the delimiter existence
with open(csv_file_path, 'r') as csvfile:
    first_line = csvfile.readline()
    # print("first_line", first_line)
    if mydelimiter not in first_line:
        delimiter_warning_message = "No delimiter found in file first line."
        result['warning_messages'].append(delimiter_warning_message)
# count the lines in the source file
NOL = sum(1 for _ in io.open(csv_file_path, "r"))
# print("NOL:", NOL)
# if NOL = 0 -> void file
# if NOL = 1 -> header-only file
if NOL > 0:

    # just get the columns names, then close the file
    #-----------------------------------------------------
    with open(csv_file_path, 'r') as csvfile:
        columnslist = csv.DictReader(csvfile, delimiter=mydelimiter)
        list_of_dictcolumns = []
        # loop to iterate through the rows of csv
        for row in columnslist:
            # adding the first row
            list_of_dictcolumns.append(row)
            # breaking the loop after the
            # first iteration itself
            break

    # transform the column names into a list
    first_dictcolumn = list_of_dictcolumns[0]
    list_of_column_names = list(first_dictcolumn.keys())
    number_of_columns = len(list_of_column_names)

    # check columns existence
    #------------------------
    column_existence = [(column_name in list_of_column_names) for column_name in columns_to_process]
    if not all(column_existence):
        raise ValueError("File {} does not contain all the columns given in input for processing:\nFile columns names: {}\nInput columns names: {}".format(csv_file_path, list_of_column_names, columns_to_process))

    # determine the indexes of the columns to process
    indexes_of_columns_to_process = [i for i, column_name in enumerate(list_of_column_names) if column_name in columns_to_process]
    print("indexes_of_columns_to_process: ", indexes_of_columns_to_process)

    # build the path of a to-be-generated duplicate file to be used as output
    inputcsv_absname, inputcsv_extension = os.path.splitext(csv_file_path)
    csv_output_file_path = inputcsv_absname + '__output' + inputcsv_extension

    # define the processing function
    def replace_string_in_columns(input_csv, output_csv, indexes_of_columns_to_process, string_to_be_replaced, string_to_replace_with):
        number_of_replacements = 0
        with open(input_csv, 'r', newline='') as infile, open(output_csv, 'w', newline='') as outfile:
            reader = csv.reader(infile)
            writer = csv.writer(outfile)
            # writer = csv.writer(outfile, delimiter=mydelimiter, escapechar='\\', quoting=csv.QUOTE_NONE)
            row_index = 0
            for row in reader:
                for col_index in indexes_of_columns_to_process:
                    # break the processing when empty lines at the end of the file are reached
                    if len(row) == 0:
                        break
                    # get the single cell and analyze it
                    #-------------------------------------
                    list_of_cells = row[0].split(mydelimiter)
                    # WARNING: in case the inspected cell contains the delimiter, the split will return more columns.
                    if len(list_of_cells) != number_of_columns:
                        raise DelimiterManagementError("In row {}: {}, the number of split cells is {}, but the number of columns (in header) is {}.".format(row_index, row, len(list_of_cells), number_of_columns))
                    cell = list_of_cells[col_index]
                    columns_before = list_of_cells[:col_index]
                    columns_after = list_of_cells[(col_index + 1):]
                    print("col_index: ", col_index)
                    print("row: ", row)
                    # print("list_of_cells: ", list_of_cells)
                    print("cell: ", cell)
                    # print("columns_before: ", columns_before)
                    # print("columns_after: ", columns_after)
                    if string_to_be_replaced in cell and row_index != 0:
                        # do the substitution in the cell
                        cell = cell.replace(string_to_be_replaced, string_to_replace_with)
                        number_of_replacements = number_of_replacements + 1
                        print("number_of_replacements: ", number_of_replacements)
                        # sew the row up again
                        list_of_cells_replaced = columns_before + [cell] + columns_after
                        string_of_cells_replaced = mydelimiter.join(list_of_cells_replaced)
                        row_of_cells_replaced = [string_of_cells_replaced]
                        row = row_of_cells_replaced
                        print("substitution done: ", cell)
                        print("list_of_cells_replaced: ", list_of_cells_replaced)
                        print("string_of_cells_replaced: ", string_of_cells_replaced)
                # write / copy the row in the new file
                writer.writerow(row)
                print("written row: ", row, "index: ", row_index)
                row_index = row_index + 1
        return number_of_replacements

    # launch the function
    result['number_of_modified_cells'] = replace_string_in_columns(csv_file_path, csv_output_file_path, indexes_of_columns_to_process, string_to_be_replaced, string_to_replace_with)

    # replace the old csv with the new one
    shutil.copyfile(csv_output_file_path, csv_file_path)
    os.remove(csv_output_file_path)

    if result['number_of_modified_cells'] > 0:
        result['changed'] = True
    else:
        result['changed'] = False

else:
    result['changed'] = False

result['source_csv_number_of_raw_lines'] = NOL
result['source_csv_number_of_lines'] = NOL - 1

print("result:\n\n", result)
</code></pre>
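<p>The quoting likely comes from the reader, not the writer: <code>csv.reader(infile)</code> uses the default comma delimiter, so each semicolon-separated line arrives as a single field; after the replacement that field contains commas, which the comma-delimited writer's <code>QUOTE_MINIMAL</code> behavior then quotes. Passing <code>delimiter=mydelimiter</code> to both reader and writer lets <code>csv</code> do the splitting, and both the manual split/join and the quotes disappear. A minimal sketch of the idea (in-memory files, illustrative column name):</p>

```python
# Sketch: let csv handle the ';' delimiter on both the read and write side.
import csv
import io

source = "code1;code2;money1\n74;1;185.04\n74;1;8.06\n"
infile, outfile = io.StringIO(source), io.StringIO()

reader = csv.reader(infile, delimiter=';')
writer = csv.writer(outfile, delimiter=';')

header = next(reader)
writer.writerow(header)
money_idx = header.index('money1')
for row in reader:
    # each cell arrives as its own field, so no manual split/join is needed
    row[money_idx] = row[money_idx].replace('.', ',')
    writer.writerow(row)

print(outfile.getvalue())  # cells contain no ';', so nothing gets quoted
```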
| <python><csv><quoting><writer><csvwriter> | 2023-11-24 12:57:42 | 1 | 4,389 | Tms91 |