QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName
|---|---|---|---|---|---|---|---|---|
75,530,801
| 1,580,469
|
Efficient calculation of all one polynomial modulus
|
<p>The all one polynomial modulus is</p>
<p><code>sum(x**i for i in range(n)) % m</code></p>
<p>In my case <code>1 < x < m</code>, <code>0 < n < m</code>, and <em>all</em> are integers. The sum can be simplified to <code>(x**n - 1) / (x - 1)</code>. Due to numerical issues, integer division needs to be used for larger <code>n</code>. So</p>
<p><code>(x**n - 1) // (x - 1) % m</code></p>
<p>is equivalent. The problem is that the power can become quite huge; although Python handles this nicely, it gets slow. Python's three-argument <code>pow(x, n, m)</code> function can efficiently compute the power and the modulus in one step, but it's unclear how to make use of it here. The key is probably a transformation using modular arithmetic.</p>
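<p>A minimal sketch of one such transformation (untested here): raise <code>x</code> to the power <code>n</code> modulo the enlarged modulus <code>(x - 1) * m</code>. Because <code>x - 1</code> divides both <code>x**n - 1</code> and <code>(x - 1) * m</code>, the integer division stays exact and the result is still correct modulo <code>m</code>:</p>
<pre><code>def all_one_poly_mod(x, n, m):
    # pow keeps the intermediate numbers bounded by (x - 1) * m
    r = pow(x, n, (x - 1) * m)
    # exact division, then reduce modulo m
    return (r - 1) // (x - 1) % m
</code></pre>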
|
<python><modulo>
|
2023-02-22 09:41:53
| 0
| 396
|
Christian
|
75,530,640
| 10,829,044
|
Pandas - split one row value and merge with multiple rows
|
<p>I have two dataframes like as below</p>
<pre><code>import pandas as pd

proj_df = pd.DataFrame({'reg_id':[1,2,3,4,5,6,7],
'partner': ['ABC_123','ABC_123','ABC_123','ABC_123','ABC_123','ABC_123','ABC_123'],
'part_no':['P123','P123','P123','P123','P123','P123','P123'],
'cust_info':['Apple','Apple','Apple','Apple','Apple','Apple','Tesla'],
'qty_1st_year':[100,100,600,150,50,0,10]})
order_df = pd.DataFrame({'partner': ['ABC_123','ABC_123','JKL_123','MNO_123'],
'part_no':['P123','P123','Q123','P567'],
'cust_info':['Apple','Hyundai','REON','Renault'],
'order_qty':[1000,600,50,0]})
</code></pre>
<p>I would like to do the below</p>
<p>a) Merge two dataframes based on <code>partner,part_no,cust_info</code></p>
<p>b) split the <code>order_qty</code> column from <code>order_df</code> and assign the appropriate portion to a new column called <code>assigned_qty</code></p>
<p>c) The appropriate portion is determined by the percentage distribution of <code>qty_1st_year</code>. Meaning, you divide the individual <code>qty_1st_year</code> value by the total sum of <code>qty_1st_year</code> for each group of <code>partner, part_no and cust_info</code>.</p>
<p>So, I tried the below</p>
<pre><code>sum_df = proj_df.groupby(['partner','part_no','cust_info'])['qty_1st_year'].sum().reset_index()
sum_df.columns = ['partner','part_no','cust_info','total_qty_all_project']
t1=proj_df.merge(order_df,on=['partner','part_no','cust_info'],how='left')
t2 = t1.merge(sum_df,on=['partner','part_no','cust_info'],how='left')
t2['pct_value'] = (t2['qty_1st_year']/t2['total_qty_all_project'])*100
proj_df['assigned_value'] = (t2['order_qty']*t2['pct_value'])/100
</code></pre>
<p>While this seems to work fine, I would like to know whether there is a better and more elegant way to do this task.</p>
<p>I expect my output to be like as below</p>
<p><a href="https://i.sstatic.net/7mMzz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7mMzz.png" alt="enter image description here" /></a></p>
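<p>A shorter sketch of the same idea (untested), using <code>groupby(...).transform('sum')</code> so the group total is broadcast back to each row without a separate merge:</p>
<pre><code>t1 = proj_df.merge(order_df, on=['partner', 'part_no', 'cust_info'], how='left')
group_total = t1.groupby(['partner', 'part_no', 'cust_info'])['qty_1st_year'].transform('sum')
# each row gets its share of the order, proportional to qty_1st_year within its group
t1['assigned_qty'] = t1['order_qty'] * t1['qty_1st_year'] / group_total
</code></pre>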
|
<python><pandas><list><dataframe><group-by>
|
2023-02-22 09:30:01
| 1
| 7,793
|
The Great
|
75,530,586
| 598,599
|
Can environment markers be used for determining the Databricks Runtime
|
<p>We are creating a generic library in Python used by different (LTS) <a href="https://docs.databricks.com/release-notes/runtime/releases.html" rel="nofollow noreferrer">runtimes on Databricks</a> that is used in highly automated release pipelines. So we do not have exact control over which libraries and dependencies are already installed and which ones need to be installed. Therefore I'd like a requirements file that can install packages conditionally. Normally that would be done by environment markers described in <a href="https://peps.python.org/pep-0508/" rel="nofollow noreferrer">PEP 0508</a>.</p>
<p>Unfortunately none of these markers can directly tell me which Databricks Runtime we are running on. Are there environment markers that can be used to derive the Databricks Runtime or are there other ways to retrieve the DB Runtime and install packages based on this runtime?</p>
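<p>One hedged possibility, outside of PEP 508 markers: Databricks clusters set a <code>DATABRICKS_RUNTIME_VERSION</code> environment variable, so a small bootstrap step could branch on it instead of the requirements file (untested sketch; <code>some-package</code> and the version threshold are placeholders):</p>
<pre><code>import os
import subprocess
import sys

runtime = os.environ.get("DATABRICKS_RUNTIME_VERSION", "")  # e.g. "12.2"
major = int(runtime.split(".")[0]) if runtime else None

if major is not None and major < 12:
    subprocess.check_call([sys.executable, "-m", "pip", "install", "some-package"])
</code></pre>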
|
<python><databricks>
|
2023-02-22 09:25:53
| 1
| 7,490
|
AutomatedChaos
|
75,530,375
| 6,212,718
|
Polars vs. Pandas: size and speed difference
|
<p>I have a <code>parquet</code> file (~1.5 GB) which I want to process with <code>polars</code>. The resulting dataframe has 250k rows and 10 columns. One column has large chunks of texts in it.</p>
<p>I have just started using polars, because I heard many good things about it. One of which is that it is significantly faster than pandas.</p>
<p><strong>Here is my issue / question:<br />
The preprocessing of the dataframe is rather slow, so I started comparing to <code>pandas</code>. Am I doing something wrong or is polars for this particular use case just slower? If so: is there a way to speed this up?</strong></p>
<p>Here is my code in <code>polars</code></p>
<pre><code>import polars as pl
df = (pl.scan_parquet("folder/myfile.parquet")
.filter((pl.col("type")=="Urteil") | (pl.col("type")=="Beschluss"))
.collect()
)
df.head()
</code></pre>
<p>The entire code takes roughly <strong>1 minute</strong> whereas just the filtering part takes around <strong>13 seconds</strong>.</p>
<p>My code in <code>pandas</code>:</p>
<pre><code>import pandas as pd
df = (pd.read_parquet("folder/myfile.parquet")
.query("type == 'Urteil' | type == 'Beschluss'") )
df.head()
</code></pre>
<p>The entire code also takes roughly <strong>1 minute</strong> whereas just the querying part takes <strong><1 second</strong>.</p>
<p>The dataframe has the following types for the 10 columns:</p>
<ul>
<li>i64</li>
<li>str</li>
<li>struct[7]</li>
<li>str (for all remaining)</li>
</ul>
<p>As mentioned: a column "<code>content</code>" stores large texts (1 to 20 pages of text) which I need to preprocess and then store differently, I guess.</p>
<blockquote>
<p><strong>EDIT</strong>: removed the size part of the original post as the comparison was not like for like and does not appear to be related to my question.</p>
</blockquote>
|
<python><pandas><dataframe><python-polars>
|
2023-02-22 09:08:26
| 2
| 1,489
|
FredMaster
|
75,530,186
| 8,680,909
|
Dataframe groupby condition with used column in groupby
|
<p>I want to use "group by" on two columns A and B. Column A is a categorical variable and column B's type is 'datetime64[ns]'. The value column C is a float.
I want to get a result of summed C values, but only the Cs whose date in column B is on or before the date of the current row within the group.</p>
<p>Is there any way to solve this problem using aggregation or an applied lambda method?
Merging or joining the dataset with the condition "column B (dataset) >= column B (copied dataset)" could be one solution, however in my case the dataset becomes too large.</p>
<p>Help ;-)</p>
<p>=example dataset=</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Column A</th>
<th>Column B</th>
<th>Column C</th>
</tr>
</thead>
<tbody>
<tr>
<td>category1</td>
<td>2022-01-02</td>
<td>1.0</td>
</tr>
<tr>
<td>category1</td>
<td>2022-03-04</td>
<td>2.0</td>
</tr>
<tr>
<td>category1</td>
<td>2022-07-10</td>
<td>3.0</td>
</tr>
<tr>
<td>category1</td>
<td>2022-08-15</td>
<td>4.0</td>
</tr>
<tr>
<td>category2</td>
<td>2022-03-04</td>
<td>5.0</td>
</tr>
<tr>
<td>category2</td>
<td>2022-07-10</td>
<td>6.0</td>
</tr>
</tbody>
</table>
</div>
<p>=expected result=</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Column A</th>
<th>Column B</th>
<th>Result</th>
</tr>
</thead>
<tbody>
<tr>
<td>category1</td>
<td>2022-01-02</td>
<td>1.0</td>
</tr>
<tr>
<td>category1</td>
<td>2022-03-04</td>
<td>3.0</td>
</tr>
<tr>
<td>category1</td>
<td>2022-07-10</td>
<td>6.0</td>
</tr>
<tr>
<td>category1</td>
<td>2022-08-15</td>
<td>10.0</td>
</tr>
<tr>
<td>category2</td>
<td>2022-03-04</td>
<td>5.0</td>
</tr>
<tr>
<td>category2</td>
<td>2022-07-10</td>
<td>11.0</td>
</tr>
</tbody>
</table>
</div>
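<p>A minimal sketch of one possible approach (untested): because the expected result is just a running total of C within each A group in date order, a sort plus a grouped cumulative sum should reproduce it without any self-join:</p>
<pre><code>out = (df.sort_values(['Column A', 'Column B'])
         .assign(Result=lambda d: d.groupby('Column A')['Column C'].cumsum()))
</code></pre>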
|
<python><dataframe><group-by>
|
2023-02-22 08:51:33
| 1
| 461
|
johanna
|
75,530,056
| 4,377,095
|
in Pandas, append a new column of a row each time it has a duplicate ID
|
<p>assuming we have this table</p>
<pre><code>df = pd.DataFrame({'ID': [1, 2, 1, 4, 2, 6, 1], 'Name': ['led', 'peter', 'james', 'ellie', 'make', 'levi', 'kent'],
'food': ['apples', 'oranges', 'banana', 'carrots', 'carrots', 'mango', 'banana'],
'color': ['red', 'blue', 'pink', 'red', 'red', 'purple', 'orange']})
+----+-------+---------+--------+
| id | name | food | color |
+----+-------+---------+--------+
| 1 | led | apples | red |
| 2 | peter | oranges | blue |
| 1 | james | banana | pink |
| 4 | ellie | carrots | red |
| 2 | mako | carrots | red |
| 6 | levi | mango | purple |
| 1 | kent | banana | orange |
+----+-------+---------+--------+
</code></pre>
<p>The goal here is to group by id but keep appending new columns as long as duplicates are found.
The output would be like this:</p>
<pre><code>+----+-------+---------+--------+-------+---------+--------+-------+--------+--------+
| id | name | food | color | name2 | food2 | color2 | name3 | food3 | color3 |
+----+-------+---------+--------+-------+---------+--------+-------+--------+--------+
| 1 | led | apples | red | james | banana | pink | kent | banana | orange |
| 2 | peter | oranges | blue | mako | carrots | red | | | |
| 4 | ellie | carrots | red | | | | | | |
| 6 | levi | mango | purple | | | | | | |
+----+-------+---------+--------+-------+---------+--------+-------+--------+--------+
</code></pre>
<p>There is existing logic for this, but it gets messed up when some of the columns of the duplicate are missing.</p>
<pre><code>df = pd.DataFrame({'ID': [1, 2, 1, 4, 2, 6, 1], 'Name': ['led', 'peter', np.nan, np.nan, 'make', 'levi', 'kent'],
'food': [np.nan, 'oranges', 'banana', 'carrots', 'carrots', 'mango', 'banana'],
'color': ['red', 'blue', 'pink', 'red', np.nan, 'purple', 'orange']})
transformed_df = df.set_index('ID').stack().droplevel(1)
counter = transformed_df.groupby('ID').cumcount().to_numpy()
transformed_df.index = [transformed_df, counter]
transformed_df = transformed_df.unstack().add_prefix('Col').reset_index()
</code></pre>
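<p>For what it's worth, <code>stack()</code> drops NaN values by default, which is probably why the existing logic misaligns when a duplicate has missing columns. A hedged sketch that avoids <code>stack()</code> by pivoting on a per-ID counter instead (the column names come out as Name, food, color, Name2, ... rather than exactly as above):</p>
<pre><code>wide = (df.assign(n=df.groupby('ID').cumcount())
          .pivot(index='ID', columns='n'))
wide = wide.sort_index(axis=1, level=1)
wide.columns = [f"{col}{'' if n == 0 else n + 1}" for col, n in wide.columns]
wide = wide.reset_index()
</code></pre>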
|
<python><pandas>
|
2023-02-22 08:39:51
| 1
| 537
|
Led
|
75,529,967
| 9,426,931
|
Getting future contracts market cap data using python
|
<p>I'm trying to calculate my portfolio's risk and I need to calculate the market cap. It seems the yahoo finance <code>info</code> attribute no longer provides market cap. What should I do?</p>
|
<python><yfinance><pandas-datareader>
|
2023-02-22 08:30:03
| 1
| 564
|
Ali Sadeghi Aghili
|
75,529,760
| 7,583,699
|
How to merge sub-matrices of high-dimensional matrices under the condition of ensuring the relative position of sub-matrices?
|
<p>Say I have a tensor x with shape [z, d, d], which represents a series of image frames, just like video data. Let pz = z**0.5 and let x = x.view(pz, pz, d, d). Then we get a grid of images with grid size pz*pz, where each image has a shape of [d, d]. Now, I want to get a matrix or tensor with shape [1, 1, pz*d, pz*d], and I MUST ensure all elements keep the same relative position as in the original images.</p>
<p>For an example:</p>
<pre><code> x = [[[ 0, 1],
[ 2, 3]],
[[ 4, 5],
[ 6, 7]],
[[ 8, 9],
[10, 11]],
[[12, 13],
[14, 15]]]
</code></pre>
<p>which represents a series of images with shape [2,2] and z = 4.
I want to get a tensor like:</p>
<pre><code>tensor([[ 0, 1, 4, 5],
[ 2, 3, 6, 7],
[ 8, 9, 12, 13],
[10, 11, 14, 15]])
</code></pre>
<p>I can use x = x.view(1, 1, 4, 4) to get one with the same shape, but it looks like this:</p>
<pre><code>tensor([[[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]]]])
</code></pre>
<p>which I don't want.</p>
<p>And furthermore, what if x has more dimensions, like [b, c, z, d, d]? How to deal with that?</p>
<p>Any suggestion will be helpful.</p>
<p>I have a solution for the three-dimensional situation. If x.shape = [z, d, d], then the code below will work. But it does not work for higher-dimensional tensors. A nested loop would work, but is too heavy.
My solution for the three-dimensional situation:</p>
<pre><code>import torch

d = 2
z = 4
b, c = 1, 1
x = torch.arange(z*d*d).view(z, d, d)
# x = torch.tensor([[[ 1, 2],
# [ 4, 6]],
#
# [[ 8, 10],
# [12, 14]],
#
# [[16, 18],
# [20, 22]],
#
# [[24, 26],
# [28, 30]],
#
# [[32, 34],
# [36, 38]],
#
# [[40, 42],
# [44, 46]],
#
# [[48, 50],
# [52, 54]],
#
# [[56, 58],
# [60, 62]],
#
# [[64, 66],
# [68, 70]]])
# make z-index planes to a grid layout
grid_side_len = int(z**0.5)
grid_x = x.view(grid_side_len, grid_side_len, d, d)
# for all rows of crops, horizontally stack them together
plane = []
for i in range(grid_x.shape[0]):
cat_crops = torch.hstack([crop for crop in grid_x[i]])
plane.append(cat_crops)
plane = torch.vstack([p for p in plane])
print("3D crop to 2D crop plane:")
print(x)
print(plane)
print(plane.shape)
print("2D crop plane to 3D crop:")
# group all rows
split = torch.chunk(plane, plane.shape[1]//d, dim=0)
spat_flatten = torch.cat([torch.cat(torch.chunk(p, p.shape[1]//d, dim=1), dim=0) for p in split], dim=0)
crops = [t[None,:,:] for t in torch.chunk(spat_flatten, spat_flatten.shape[0]//d, dim=0)]
spat_crops = torch.cat(crops, dim=0)
print(spat_crops)
print(spat_crops.shape)
</code></pre>
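<p>A much shorter sketch of the same rearrangement (checked against the 2x2 example above): swap the "grid column" axis with the "row inside each image" axis via <code>permute</code>, then flatten. Extra leading dimensions just shift the permutation indices:</p>
<pre><code>import torch

z, d = 4, 2
pz = int(z ** 0.5)
x = torch.arange(z * d * d).view(z, d, d)

# [z, d, d] -> [pz, pz, d, d] -> [pz, d, pz, d] -> [pz*d, pz*d]
plane = x.view(pz, pz, d, d).permute(0, 2, 1, 3).reshape(pz * d, pz * d)

# the same idea with leading batch/channel dimensions [b, c, z, d, d]:
# x.view(b, c, pz, pz, d, d).permute(0, 1, 2, 4, 3, 5).reshape(b, c, pz * d, pz * d)
</code></pre>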
|
<python><machine-learning><matrix><deep-learning><pytorch>
|
2023-02-22 08:05:48
| 2
| 303
|
Skipper
|
75,529,554
| 1,581,090
|
How to properly define a windows foldername to be used with python?
|
<p>I have the following small code snippet in Python 3.10 on Windows 10 PowerShell:</p>
<pre><code>import os

win_folder = b"C:\Program Files (x86)\STMicroelectronics\STM32Cube\STM32CubeProgrammer\bin"
os.chdir(win_folder)
</code></pre>
<p>but when running this code I always get an error</p>
<pre><code>FileNotFoundError: [WinError 3] The system cannot find the path specified: b'C:\\Program Files (x86)\\STMicroelectronics\\STM32Cube\\STM32CubeProgrammer\x08in'
</code></pre>
<p>I also tried unicode string, byte string, with and without escaping the slash in "\b" and also the spaces:</p>
<pre><code>win_folder = "C:\Program\ Files\ (x86)\STMicroelectronics\STM32Cube\STM32CubeProgrammer\\bin"
</code></pre>
<p>But still no success.
Is there a way to automatically convert the string</p>
<pre><code>myfolder = "C:\Program Files (x86)\STMicroelectronics\STM32Cube\STM32CubeProgrammer\bin"
</code></pre>
<p>into a valid filename to be used within python? Or a way to define it properly?</p>
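<p>A minimal sketch of one way to write it (the error message shows <code>\x08in</code>, i.e. <code>\b</code> was interpreted as a backspace character; a raw string avoids that):</p>
<pre><code>import os
from pathlib import Path

# r"..." keeps the backslashes literal, so \b stays "\b" instead of a backspace
win_folder = Path(r"C:\Program Files (x86)\STMicroelectronics\STM32Cube\STM32CubeProgrammer\bin")
os.chdir(win_folder)
</code></pre>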
|
<python><windows><encoding>
|
2023-02-22 07:45:06
| 2
| 45,023
|
Alex
|
75,529,522
| 20,770,190
|
Why can't I use flask.request.headers as a type hint?
|
<p>I'm trying to use <code>flask.request.headers</code> as a type hint for an input parameter of a method, but the following error is raised:</p>
<pre><code>...
raise RuntimeError(unbound_message) from None
RuntimeError: Working outside of request context.
This typically means that you attempted to use functionality that needed
an active HTTP request. Consult the documentation on testing for
information about how to avoid this problem.
</code></pre>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>def events_webhook(header: flask.request.headers) -> None:
"""Webhook for Google Calendar events."""
print(header)
</code></pre>
<p>I called the above code using this:</p>
<pre class="lang-py prettyprint-override"><code>@api.route('/calendar/event/webhook', methods=["POST"])
def gcalendar_event_webhook():
try:
if flask.request.method == "POST":
return RestHandler.handle_success(
gutils.events_webhook(request.headers), # here
request=request, log_payload=False
)
else:
return RestHandler.handle_exception(
exception="Request method is not POST.", request=request
)
except Exception as exc:
return RestHandler.handle_exception(exc, request)
</code></pre>
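<p>For context, <code>flask.request.headers</code> is an object, not a class, and evaluating it in the annotation happens at import time, outside any request context, which is what the error complains about. A minimal sketch of annotating with the underlying class instead (assuming the goal is just the annotation):</p>
<pre><code>from werkzeug.datastructures import Headers

def events_webhook(header: Headers) -> None:
    """Webhook for Google Calendar events."""
    print(header)
</code></pre>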
|
<python><flask>
|
2023-02-22 07:41:30
| 1
| 301
|
Benjamin Geoffrey
|
75,529,345
| 2,170,269
|
Is pyright incorrectly generalising types?
|
<p>I'm playing with some code that statically checks dimensions on linear algebra. I have managed to encode dimensions in a way that <code>mypy</code> is happy to check, but <code>pyright</code> isn't quite as happy about it.</p>
<p>A simplified version looks like this:</p>
<pre class="lang-py prettyprint-override"><code>
from __future__ import annotations
from typing import (
Generic,
Literal as L,
TypeVar,
overload,
assert_type
)
_D1 = TypeVar("_D1")
_D2 = TypeVar("_D2")
_D3 = TypeVar("_D3")
# TypeVarTuple is an experimental feature; this is a work-around
class Shape:
"""Class that works as a tag to indicate that we are specifying a shape."""
class Shape1D(Shape, Generic[_D1]):
"""Class that works as a tag to indicate that we are specifying a shape."""
class Shape2D(Shape, Generic[_D1,_D2]):
"""Class that works as a tag to indicate that we are specifying a shape."""
_Shape = TypeVar("_Shape", bound=Shape)
Scalar = int | float
class Array(Generic[_Shape]):
@overload # Adding witht the same shape
def __add__(self: Array[_Shape], other: Array[_Shape]) -> Array[_Shape]:
return Any # type: ignore
@overload # Adding with a scalar
def __add__(self: Array[_Shape], other: Scalar) -> Array[_Shape]:
return Any # type: ignore
def __add__(self, other) -> Array:
return self # Dummy implementation
# Adding with a scalar
def __radd__(self: Array[_Shape], other: Scalar) -> Array[_Shape]:
return Any # type: ignore
@overload # matrix-matrix multiplication
def __matmul__(
self: Array[Shape2D[_D1,_D2]], other: Array[Shape2D[_D2,_D3]]
) -> Array[Shape2D[_D1,_D3]]: ...
@overload # matrix-vector multiplication
def __matmul__(
self: Array[Shape2D[_D1,_D2]], other: Array[Shape1D[_D2]]
) -> Array[Shape1D[_D1]]: ...
@overload # vector-matrix multiplication (one could argue for a different shape)
def __matmul__(
self: Array[Shape1D[_D1]], other: Array[Shape2D[_D1,_D2]]
) -> Array[Shape1D[_D2]]: ...
def __matmul__(self, other) -> Array:
return self # Dummy implementation
A = Array[Shape2D[L[3],L[4]]]()
B = Array[Shape2D[L[4],L[5]]]()
x = Array[Shape1D[L[4]]]()
y = Array[Shape1D[L[3]]]()
reveal_type(A + 1.0) ; assert_type(A + 1.0, Array[Shape2D[L[3],L[4]]])
reveal_type(1.0 + A) ; assert_type(1.0 + A, Array[Shape2D[L[3],L[4]]])
reveal_type(A + A) ; assert_type(A + A, Array[Shape2D[L[3],L[4]]])
reveal_type(A @ x) ; assert_type(A @ x, Array[Shape1D[L[3]]])
reveal_type(y @ A) ; assert_type(y @ A, Array[Shape1D[L[4]]])
reveal_type(A @ B) ; assert_type((A @ B), Array[Shape2D[L[3],L[5]]])
</code></pre>
<p>Mypy playground <a href="https://mypy-play.net/?mypy=1.0.0&python=3.11&gist=d0c6b2b2cb668a284094958c6c141684" rel="nofollow noreferrer">here</a>
Pyright playground <a href="https://pyright-playground.decorator-factory.su/?gzip=H4sIAMa89WMC_7VVwW6bQBC98xUj5wIqsWo7uThKFdJIvfiWqJfIQhsY7FVhFy1LYqR-fGd3jWMwRE0rcwDtvNk37w2z4GVKFhDHWa1rhXEMvCil0sCEkJppLkXleTZHNyUXmxb3PaDrBwpUPAntYsU1KpYDq2DlIk9NiT-ZCsGu5CuqXLLUYayqUOmYWNELPM-LH2Zw2-7wJ7ScBBScd4NzG1x0gwsKehdt5KkucwRekQXAXUn6ChSadGXIjMcb0FtCTQK8SfXrkilZp8JLcpIEj1tW4tIqnEwm321Mb5m2qZXxxkCzDWgJXKQ8YRr3OAJTCFWJCc8a0ykGlSGbEs8x-ezBt8-w7d4zeV0HZ605H6oZUjvPUZfepS3WeUk2MgnhRdYivbWrwHtMWM4U5XGh4TdkNB2atjvhkVKs8Q967ZZW7l07S3ABUZoaBW9cbzXJI1msQCfH5qaY0XyzNI1jv8I8WzrmljEESZtULxrA5bduxBU2l0IaIwGRaKi6md8l8I2QCj-SZjpk3X5Kk2vQv4vp19kTvxOeEJksYnqoi6IxZz1He3rMh8Cm2tvB2YApdV5XJz0umFZ8d-keUNS55mVuZrVV7GQRThgJO9Q4Vrg_JIdT0Z-Kd3xO-GK9tixHDroElLCE6XQ6JvYVEy3VWcXOrNgRoRacjYl06oY7Cr4UCIms85Q-AZsaISMjDFKeZahoUtzBCz5nptUz2vW90XEz89bMUOH_n3w3-hF9qbrKVs-Ldbh6vlqv137g3Q_gVwa_dviui88cbqFmAFo4yFP4iiy3f0o_gi8wm34N4Ob4B9qGw3F5QYeGcmlH1KfZh_-aJnIkMKCGSOAjmh7PHexOPVGwx3LoS1dHQ6knZmxwaP-IgPs-wT460o1ry_IHDaSArT8JAAA%3D" rel="nofollow noreferrer">here</a></p>
<p>The addition works fine with both checkers, but the matrix multiplication doesn't.</p>
<p>When doing <code>A @ B</code>, for example, <code>mypy</code> sees that <code>A</code>'s first dimension is <code>_D1=Literal[3]</code> and that <code>B</code>'s second dimension is <code>_D3=Literal[5]</code>, so it gives the multiplication the type <code>Array[Shape2D[Literal[3],Literal[5]]</code>.</p>
<p>With <code>pyright</code>, on the other hand, the type is inferred to be <code>Array[Shape2D[int,int]]</code>. The <code>Shape2D</code> is correct, but the <code>Literal[-]</code> types got turned into <code>int</code>.</p>
<p>While it is true that the <code>Literal[-]</code> types here <em>are</em> integers, <code>Literal[3] != int</code>, so what gives?</p>
<p>If I don't use <code>Literal[-]</code>, both checkers are happy to infer the dimensions of a matrix-multiplication. They will both take the first and third dimensions and make a matrix out of them.</p>
<pre class="lang-py prettyprint-override"><code>class N: pass
class M: pass
class K: pass
AA = Array[Shape2D[N,M]]()
BB = Array[Shape2D[M,K]]()
reveal_type(AA + AA); assert_type(AA + AA, Array[Shape2D[N,M]])
reveal_type(AA @ BB); assert_type(AA @ BB, Array[Shape2D[N,K]])
</code></pre>
<p>When it is the (abstract) types <code>N</code>, <code>M</code>, and <code>K</code> that I use for dimensions, the inferred dimensions are correct.</p>
<p>Either I don't understand how <code>Literal</code> works, or <code>pyright</code> doesn't. I'm not willing to bet which is which, but is there a way to fix it?</p>
|
<python><python-typing><mypy><pyright>
|
2023-02-22 07:21:20
| 0
| 1,844
|
Thomas Mailund
|
75,529,195
| 12,242,085
|
How to remove all special characters in column names by regex but keep ":" in Pandas Python?
|
<p>I have Pandas DataFrame in Python like below:</p>
<pre><code>COL1 | XX:\x84Â\x82Ă\x82Â\ | \x84Â\x82Ă\PPx82Â\
-------|----------------------|--------------------
111 | ABC | X
222 | CCC | Y
333 | DDD | XX
</code></pre>
<p>My code and current output:</p>
<p>By using the below code I am able to delete any special characters from column names and return a list of columns with converted names. But it also replaces ":" with "_".</p>
<pre><code>import re
new_names = {col: re.sub(r'[^A-Za-z0-9_]+', '', col) for col in df.columns}
new_n_list = list(new_names.values())
[COL1, XX_A, PP]
</code></pre>
<p><strong>My question:</strong></p>
<p>How can I modify my code so that it works as it does now, but does not convert ":" to "_"?</p>
<p><strong>Desire output:</strong></p>
<pre><code> [COL1, XX:A, PP]
</code></pre>
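<p>A minimal sketch (untested): adding <code>:</code> to the allowed character class keeps colons while still stripping the other special characters:</p>
<pre><code>import re

new_names = {col: re.sub(r'[^A-Za-z0-9_:]+', '', col) for col in df.columns}
new_n_list = list(new_names.values())
</code></pre>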
|
<python><pandas><regex><dataframe>
|
2023-02-22 06:58:26
| 1
| 2,350
|
dingaro
|
75,529,181
| 18,756,733
|
Plotly subplot titles not showing correctly
|
<p>I created Plotly subplots figure using for loop. Here is the code and link to the HTML file(download to view the file):</p>
<pre><code>import math
import plotly.graph_objects as go
from plotly.subplots import make_subplots

fig=make_subplots(rows=6,cols=2,subplot_titles=sorted(df['Region'].unique()))
for i,region in enumerate(sorted(df['Region'].unique()),start=1):
data=df.query('Region==@region').sort_values(by='Population',ascending=False)
mytrace=go.Bar(x=data['Country'],y=data['Population'],name=region)
fig.add_trace(mytrace,row=math.ceil(i/2),col=math.ceil(i%2)+1)
fig.update_layout(width=1500,height=2250)
</code></pre>
<p>Link: <a href="https://raw.githubusercontent.com/beridzeg45/IBSU/main/Strategic%20Management/plotly.html" rel="nofollow noreferrer">https://raw.githubusercontent.com/beridzeg45/IBSU/main/Strategic%20Management/plotly.html</a></p>
<p>As can be seen, the titles are not applied correctly to the subplots. For example, the Baltic countries subplot has the title 'Asia' and the Asian countries subplot has the title 'Baltics'.
How can I fix it?</p>
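<p>For reference, a hedged sketch of what seems to be going on: with <code>start=1</code>, <code>math.ceil(i%2)+1</code> puts the first trace in column 2 and the second in column 1, while <code>make_subplots</code> assigns <code>subplot_titles</code> row by row starting at column 1, so traces and titles end up swapped. A 0-based, row-major mapping keeps them aligned:</p>
<pre><code>for i, region in enumerate(sorted(df['Region'].unique())):          # start at 0
    data = df.query('Region==@region').sort_values(by='Population', ascending=False)
    mytrace = go.Bar(x=data['Country'], y=data['Population'], name=region)
    fig.add_trace(mytrace, row=i // 2 + 1, col=i % 2 + 1)            # row-major order
</code></pre>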
|
<python><plotly><subplot>
|
2023-02-22 06:56:33
| 1
| 426
|
beridzeg45
|
75,529,064
| 8,229,534
|
How to load multiple partition parquet files from GCS into pandas dataframe?
|
<p>I am trying to read multiple parquet files stored as partitions from google cloud storage and read them as 1 single pandas data frame. As an example, here is the folder structure at <code>gs://path/to/storage/folder/</code></p>
<p><a href="https://i.sstatic.net/OGeU3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OGeU3.png" alt="enter image description here" /></a></p>
<p>And inside each of the <code>event_date=*</code>, there are multiple parquet files</p>
<p>So the directory structure is something like this -</p>
<pre><code>--gs://path/to/storage/folder/
---event_date=2023-01-01/
---abc.parquet
---def.parquet
---event_date=2023-01-02/
---ghi.parquet
---jkl.parquet
</code></pre>
<p>I want to load this into a pandas data frame, and I used the below code</p>
<pre><code>import pandas as pd
import gcsfs
from pyarrow import parquet
url = "gs://path/to/storage/folder/event_date=*/*"
fs = gcsfs.GCSFileSystem()
files = ["gs://" + path for path in fs.glob(url)]
print(files)
data = parquet.ParquetDataset(files, filesystem=fs)
multiple_dates_df = data.read().to_pandas()
print(multiple_dates_df.shape)
</code></pre>
<p>But I get below error -</p>
<pre><code>OSError: Passed non-file path: gs://path/to/storage/folder/event_date=2023-01-01/abc.parquet
</code></pre>
<p>How do I fix this?</p>
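<p>A hedged note and sketch: the legacy <code>ParquetDataset</code> expects bucket-relative paths when a filesystem object is passed, so the <code>gs://</code> prefix trips it up. Assuming <code>gcsfs</code> and <code>pyarrow</code> are installed, pandas can also be pointed at the partitioned folder directly, with <code>event_date</code> coming back as a column:</p>
<pre><code>import pandas as pd

multiple_dates_df = pd.read_parquet("gs://path/to/storage/folder/")
print(multiple_dates_df.shape)
</code></pre>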
|
<python><pandas><google-cloud-storage>
|
2023-02-22 06:41:15
| 1
| 1,973
|
Regressor
|
75,528,849
| 5,924,264
|
ImportError: cannot import name when using Jenkins CI but can't reproduce locally
|
<pre><code>ImportError: cannot import name CustomUtils
</code></pre>
<p>The import line is:</p>
<pre><code>from root.src.custom import Customutils
</code></pre>
<p>This is the error I get at my work's codebase for a pull request that I submitted. However, this error seems to only show up when using our unit test suite on Jenkins with bazel.</p>
<p>When I run the same test locally, I cannot reproduce this error, and there are no circular dependency issues. So I'm perplexed as to what could be going on in the unit tests on Jenkins when it works fine locally.</p>
<p>I'm also not well versed with Python so I'm not sure if there's something obvious I'm missing.</p>
|
<python><unit-testing><jenkins><continuous-integration><bazel>
|
2023-02-22 06:10:57
| 0
| 2,502
|
roulette01
|
75,528,839
| 12,492,675
|
How to fetch and filter data retrieved from Yahooquery?
|
<p>I am trying to retrieve the quarterly earnings of some companies using YahooQuery and I've been able to fetch the data for the whole year but I want to filter this data to show only the specific quarter's results. Furthermore I want to export this data to an Excel sheet and save it there for all the companies.</p>
<pre><code>import yahooquery
ticker = yahooquery.Ticker('AAPL',asynchronous=True)
ticker.earnings
</code></pre>
<p>Below is the output of the code and the highlighted box denotes the quarterly result that I was trying to filter.
<a href="https://i.sstatic.net/UGSQH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UGSQH.png" alt="enter image description here" /></a></p>
|
<python><stock><yahoo-finance>
|
2023-02-22 06:09:35
| 1
| 1,326
|
Simran Aswani
|
75,528,741
| 4,616,611
|
Create new label column based on dictionary key value mapping in a DataFrame
|
<p>Given a dictionary <code>d={1: [0.01, 0.02], 2:[1, 2], 3:[10, 20]}</code> and a column in a data frame that contains all values in <code>d</code>, e.g. <code>df['x'] = [0.01, 0.02, 1, 2, 10, 20]</code>, I'm looking to create a new column in my data frame that maps each value in col <code>x</code> to its corresponding key according to dictionary <code>d</code>. The output is a creation of a new column, <code>df['x_label'] = [1, 1, 2, 2, 3, 3]</code>. I'm currently doing this by brute force, which is taking a lot of time. Is there a simpler, optimized solution for this?</p>
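<p>A minimal sketch (untested): invert the dictionary once into a value-to-key lookup and then use <code>Series.map</code>, which is vectorised:</p>
<pre><code>d = {1: [0.01, 0.02], 2: [1, 2], 3: [10, 20]}
lookup = {v: k for k, vals in d.items() for v in vals}   # value -> key
df['x_label'] = df['x'].map(lookup)
</code></pre>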
|
<python><pandas><dataframe><numpy>
|
2023-02-22 05:53:04
| 2
| 1,669
|
Teodorico Levoff
|
75,528,577
| 5,928,682
|
Multiline string substitution in Python
|
<p>I am trying this out:</p>
<pre><code>account = "11111111111"
tenant_name= "Demo"
project_name= "demo"
sns_access_policy = """{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "0",
"Effect": "Allow",
"Principal": {
"Service": "codestar-notifications.amazonaws.com"
},
"Action": "sns:Publish",
"Resource": "arn:aws:sns:eu-west-1:{account_id}:{tenant_name}-{project_name}-pipeline-monitoring"
}
]
}"""
sns_access_policy = sns_access_policy.replace("{account_id}",account).replace("{tenant_name}",tenant_name).replace("{project_name}",project_name)
</code></pre>
<p>This does the job, but I am looking for something like <code>f{string_substitution}</code>:</p>
<pre><code>sns_access_policy = f"""{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "0",
"Effect": "Allow",
"Principal": {
"Service": "codestar-notifications.amazonaws.com"
},
"Action": "sns:Publish",
"Resource": "arn:aws:sns:eu-west-1:{account}:{tenant_name}-{project_name}-pipeline-monitoring"
}
]
}"""
</code></pre>
<p>I am getting the following error:</p>
<pre><code>SyntaxError: f-string: expressions nested too deeply
</code></pre>
<p>Is there any other way than using <code>.replace()</code> for this?</p>
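<p>One possible sketch (untested): since the target text is JSON anyway, building the policy as a dict and serialising it sidesteps the brace-escaping problem entirely; only the ARN string needs an f-string:</p>
<pre><code>import json

account = "11111111111"
tenant_name = "Demo"
project_name = "demo"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "0",
        "Effect": "Allow",
        "Principal": {"Service": "codestar-notifications.amazonaws.com"},
        "Action": "sns:Publish",
        "Resource": f"arn:aws:sns:eu-west-1:{account}:{tenant_name}-{project_name}-pipeline-monitoring",
    }],
}
sns_access_policy = json.dumps(policy, indent=4)
</code></pre>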
|
<python>
|
2023-02-22 05:24:56
| 1
| 677
|
Sumanth Shetty
|
75,528,573
| 14,012,607
|
Time complexity of a combination function
|
<p>I have this function that creates pairs from a list of numbers. We know that there will be a total of n choose 2 iterations every time. So does that make the time complexity O(nC2)?
or is it O(n^2)?</p>
<p>If it is O(n^2) why is it O(n^2)? The function does not iterate that many times and it never will.</p>
<pre><code>def find_pairs(nums):
pairs = []
for i in range(len(nums)):
current = nums[i]
for n in nums[i+1:]:
pairs.append((current, n))
return pairs
</code></pre>
|
<python><python-3.x><big-o><combinations>
|
2023-02-22 05:24:18
| 2
| 318
|
Salty Sodiums
|
75,528,465
| 14,808,637
|
How to Print os.environ["CUDA_VISIBLE_DEVICES"] = "1"
|
<p>My machine has three GPUs with indexes <code>0, 1, and 2.</code> To assign my program to a specific GPU, I used the lines below:</p>
<pre><code>import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
</code></pre>
<p>How can I print the GPU index on which my program is executing?</p>
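<p>A minimal sketch (assuming PyTorch is the consumer, per the tags): after the mask is set, the visible device is reindexed from 0, so both views can be printed:</p>
<pre><code>import os
import torch

print(os.environ.get("CUDA_VISIBLE_DEVICES"))      # the mask set above, e.g. "1"
print(torch.cuda.current_device())                 # index among the visible devices (0 here)
print(torch.cuda.get_device_name(torch.cuda.current_device()))
</code></pre>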
|
<python><python-3.x><pytorch>
|
2023-02-22 05:04:56
| 0
| 774
|
Ahmad
|
75,528,463
| 10,339,757
|
Apply expanding window on subsections of dataframe
|
<p>I have a dataframe like this</p>
<pre><code> key1 day feat
0 a 1 None
1 a 2 A
2 a 3 None
3 a 4 A
4 b 1 A
5 b 2 None
6 b 3 None
7 b 4 A
</code></pre>
<p>I would like to apply an expanding window with the count function over the feat column, but apply the expanding window per sub-category based on the key1 column.</p>
<p>E.g. I want my resultant df to be</p>
<pre><code> key1 day feat count
0 a 1 None 0
1 a 2 A 1
2 a 3 None 1
3 a 4 A 2
4 b 1 A 1
5 b 2 None 1
6 b 3 None 1
7 b 4 A 2
</code></pre>
<p>So in this case I would be grouping by key1 and then applying the expanding window to the sub-groups so that the count resets for each group.
Note that in my actual problem there are two keys I need to group by, not just one.</p>
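<p>A hedged sketch (assuming the missing entries are real <code>None</code>/NaN values rather than the string "None"): an expanding count of non-null values per group is the same as a grouped cumulative sum over a not-null flag:</p>
<pre><code>df['count'] = df['feat'].notna().astype(int).groupby(df['key1']).cumsum()
# with two grouping keys:
# df['count'] = df['feat'].notna().astype(int).groupby([df['key1'], df['key2']]).cumsum()
</code></pre>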
|
<python><python-3.x><pandas><windowing>
|
2023-02-22 05:04:04
| 1
| 371
|
thefrollickingnerd
|
75,528,445
| 2,369,000
|
Idiomatic way to get array of column values in a pandas dataframe
|
<p>I have a dataframe where I want to get a single array of all of the values in the 'a' column, which is part of a multi-index dataframe. The code below works, but it is hard to read, write, and think about. Is there a more idiomatic way to express the same idea?</p>
<pre><code>import numpy as np
import pandas as pd
x = pd.DataFrame({'a': [1, 2, 3], 'b': [1, 2, 3]})
y = pd.DataFrame({'a': [11, 12, 13], 'b': [21, 22, 23]})
df = pd.concat({'x': x, 'y': y}, axis=1)
x = np.concatenate(df.loc[:, (slice(None), 'a')].values)
</code></pre>
<pre><code>df:
x y
a b a b
0 1 1 11 21
1 2 2 12 22
2 3 3 13 23
x:
[ 1 11 2 12 3 13]
</code></pre>
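<p>One shorter sketch (untested) using a cross-section on the second column level, which selects every 'a' column and then flattens row by row, matching the output above:</p>
<pre><code>x = df.xs('a', axis=1, level=1).to_numpy().ravel()
</code></pre>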
|
<python><pandas><multi-index>
|
2023-02-22 05:02:20
| 2
| 461
|
HAL
|
75,528,441
| 1,870,832
|
Trouble parsing interview transcript (Q&As) where questioner name is sometimes redacted
|
<p>I have the following python script I wrote as a reproducible example of my current pdf-parsing hangup. It:</p>
<ul>
<li>downloads a pdf transcript from the web (Cassidy Hutchinson's 9/14/2022 interview transcript with the J6C)</li>
<li>reads/OCRs that pdf to text</li>
<li>attempts to split that text into the series of Q&A passages from the interview</li>
<li>runs a series of tests I wrote based on my manual read of the transcript</li>
</ul>
<p>running the python code below generates the following output:</p>
<pre><code>~/askliz main !1 ?21 python stack_overflow_q_example.py ✔ docenv Py 22:41:00
Test for passage0 passed.
Test for passage1 passed.
Test for passage7 passed.
Test for passage8 passed.
Traceback (most recent call last):
File "/home/max/askliz/stack_overflow_q_example.py", line 91, in <module>
assert nltk.edit_distance(passages[10][:len(actual_passage10_start)], actual_passage10_start) <= ACCEPTABLE_TEXT_DISCREPANCY, e_msg
AssertionError: Failed on passage 10
</code></pre>
<p><strong>Your mission, should you choose to accept it</strong>: get this passage10 test to pass without breaking one of the previous tests. I'm hoping there's a clever regex or other modification in <code>extract_q_a_locations</code> below that will do the trick, but I'm open to any solution that passes all these tests, as I chose these test passages deliberately.</p>
<p><strong>A little background on this transcript text</strong>, in case it's not as fun reading to you as it is to me: Sometimes a passage starts with a "Q" or "A", and sometimes it starts with a name (e.g. "Ms. Cheney."). The test that's failing, for passage 10, is where a question is asked by a staff member whose name is then redacted. The only way I've managed to get that test to pass has inadvertently broken one of the other tests, because not all redactions indicate the start of a question. (Note: in the pdf/ocr library I'm using, pdfplumber, redacted text usually shows up as just a bunch of extra spaces).</p>
<p>Code below:</p>
<pre><code>import nltk
import re
import requests
import pdfplumber
def extract_q_a_locations(examination_text:str)->list:
# (when parsed by pdfplumber) every Q/A starts with a newline, then spaces,
# then a line number and more spaces
prefix_regex = '\n\s+\d+\s+'
# sometimes what comes next is a 'Q' or 'A' and more spaces
qa_regex = '[QA]\s+'
# other times what comes next is the name of a congressperson or lawyer for the witness
speaker_regex = "(?:(?:Mr\.|Ms\.) \w+\.|-\s+)"
# the combined regex I've been using is looking for the prefix then QA or Speaker regex
pattern = f"{prefix_regex}(?:{speaker_regex}|{qa_regex})"
delims = list(re.finditer(pattern, text))
return delims
def get_q_a_passages(qa_delimiters, text):
q_a_list = []
for delim, next_delim in zip(qa_delimiters[:-1], qa_delimiters[1:]):
# prefix is either 'Q', 'A', or the name of the speaker
prefix = text[delim.span()[0]:delim.span()[1]].strip().split()[-1]
# the text chunk is the actual dialogue text. everything from current delim to next one
text_chunk = text[delim.span()[1]:next_delim.span()[0]]
# now we want to remove some of the extra cruft from layout=True OCR in pdfplumber
text_chunk = re.sub("\n\s+\d+\s+", " ", text_chunk) # remove line numbers
text_chunk = " ".join(text_chunk.split()) # remove extra whitespace
q_a_list.append(f"{prefix} {text_chunk}")
return q_a_list
if __name__ == "__main__":
# download pdf
PDF_URL = "https://www.govinfo.gov/content/pkg/GPO-J6-TRANSCRIPT-CTRL0000928888/pdf/GPO-J6-TRANSCRIPT-CTRL0000928888.pdf"
FILENAME = "interview_transcript_stackoverflow.pdf"
response = requests.get(PDF_URL)
with open(FILENAME, "wb") as f:
f.write(response.content)
# read pdf as text
with pdfplumber.open(FILENAME) as pdf:
text = "".join([p.extract_text(layout=True) for p in pdf.pages])
# I care about the Q&A transcript, which starts after the "EXAMINATION" header
startidx = text.find("EXAMINATION")
text = text[startidx:]
# extract Q&A passages
passage_locations = extract_q_a_locations(text)
passages = get_q_a_passages(passage_locations, text)
# TESTS
ACCEPTABLE_TEXT_DISCREPANCY = 2
# The tests below all pass already.
actual_passage0_start = "Q So I do first want to bring up exhibit"
assert nltk.edit_distance(passages[0][:len(actual_passage0_start)], actual_passage0_start) <= ACCEPTABLE_TEXT_DISCREPANCY
print("Test for passage0 passed.")
actual_passage1 = "A This is correct."
assert nltk.edit_distance(passages[1][:len(actual_passage1)], actual_passage1) <= ACCEPTABLE_TEXT_DISCREPANCY
print("Test for passage1 passed.")
# (Note: for the next two passages/texts, prefix/questioner is captured as "Cheney" &
# "Jordan", not "Ms. Cheney" & "Mr. Jordan". I'm fine with either way.
actual_passage7_start = "Cheney. And we also, just as"
assert nltk.edit_distance(passages[7][:len(actual_passage7_start)], actual_passage7_start) <= ACCEPTABLE_TEXT_DISCREPANCY
print("Test for passage7 passed.")
actual_passage8_start = "Jordan. They are pro bono"
assert nltk.edit_distance(passages[8][:len(actual_passage8_start)], actual_passage8_start) <= ACCEPTABLE_TEXT_DISCREPANCY
print("Test for passage8 passed.")
# HERE'S MY PROBLEM.
# This test fails because my regex fails to capture the question which starts with the
# redacted name of the staff/questioner. The only way I've managed to get this test to
# pass has also broken at least one of the tests above.
actual_passage10_start = " So at this point, as we discussed earlier, I'm going to"
e_msg = "Failed on passage 10"
assert nltk.edit_distance(passages[10][:len(actual_passage10_start)], actual_passage10_start) <= ACCEPTABLE_TEXT_DISCREPANCY, e_msg
</code></pre>
|
<python><regex><pdf><nlp><pdfplumber>
|
2023-02-22 05:01:08
| 1
| 9,136
|
Max Power
|
75,528,386
| 13,874,745
|
How can I make the output of annotate() the same as print() in matplotlib?
|
<p>I found that when the <strong>text contains a lot of spaces</strong>, the output of <code>print()</code> will be different from the output of <code>plt.annotate()</code></p>
<p>My question is : <strong>How can I make the output of annotate() the same as print() in matplotlib?</strong></p>
<p>I use the following code to compare the output of <code>print()</code> with the output of <code>plt.annotate()</code>:</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
import json
with open("epoch_3350-20230218172431.json", "r") as source:
log_dict = json.load(source)
print(log_dict["model_structure"])
plt.figure(figsize=(25, 10))
plt.annotate(text=f"{log_dict.get('model_structure').__str__()}",
xy=(0.08, 0.5), bbox={'facecolor': 'green', 'alpha': 0.4, 'pad': 5},
fontsize=14, xycoords='axes fraction', va='center')
plt.show()
plt.close()
</code></pre>
<p>output of <code>print()</code> :</p>
<pre><code>MTSCorrAD(
(gin_convs): ModuleList(
(0): GINConv(nn=Sequential(
(0): Linear(in_features=1, out_features=3, bias=True)
(1): BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Linear(in_features=3, out_features=3, bias=True)
(4): ReLU()
))
)
(gru1): GRU(3, 8)
(lin1): Linear(in_features=8, out_features=3, bias=True)
)
====================================================================================================
+-------------------------+-----------------------------+-----------------+----------+
| Layer | Input Shape | Output Shape | #Param |
|-------------------------+-----------------------------+-----------------+----------|
| MTSCorrAD | [792, 1], [2, 52272], [792] | [3] | 363 |
| ├─(gin_convs)ModuleList | -- | -- | 24 |
| │ └─(0)GINConv | [792, 1], [2, 52272] | [792, 3] | 24 |
| ├─(gru1)GRU | [12, 3] | [12, 8], [1, 8] | 312 |
| ├─(lin1)Linear | [8] | [3] | 27 |
+-------------------------+-----------------------------+-----------------+----------+
</code></pre>
<p>output of <code>plt.annotate()</code> :
<a href="https://i.sstatic.net/8wOlP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8wOlP.png" alt="enter image description here" /></a></p>
<p>As the results show, the output of <code>print()</code> is different from the output of <code>plt.annotate()</code>.</p>
<p>PS.</p>
<ul>
<li>I put the file in google drive: <a href="https://drive.google.com/drive/folders/1_KMwCzf1diwS4gGNdSSxG7bnemqQkFxI?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/drive/folders/1_KMwCzf1diwS4gGNdSSxG7bnemqQkFxI?usp=sharing</a></li>
</ul>
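<p>A hedged guess at the cause: <code>print()</code> goes to a terminal with a fixed-width font, while <code>annotate()</code> renders with a proportional font, so the aligned columns drift. A minimal sketch of forcing a monospaced font in the annotation:</p>
<pre><code>plt.annotate(text=log_dict["model_structure"],
             xy=(0.08, 0.5), bbox={'facecolor': 'green', 'alpha': 0.4, 'pad': 5},
             fontsize=14, xycoords='axes fraction', va='center',
             fontfamily='monospace')   # fixed-width glyphs keep the table columns aligned
</code></pre>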
|
<python><matplotlib>
|
2023-02-22 04:49:22
| 1
| 451
|
theabc50111
|
75,528,349
| 5,976,033
|
How to connect an Azure Function in tenant1 with an Azure SQL database in tenant2?
|
<p>In <code>tenant1</code> Azure Function, there is a Python function to connect to an Azure SQL database:</p>
<pre><code>import aioodbc
import logging
async def create_db_connection(SERVER_NAME: str, DATABASE_NAME: str, USERNAME: str, PASSWORD: str) -> aioodbc.Connection:
CONNECTION_STRING = (
'Driver={ODBC Driver 17 for SQL Server};'
f'Server=tcp:{SERVER_NAME}.database.windows.net,1433;'
f'Database={DATABASE_NAME};Uid={USERNAME};Pwd={PASSWORD};'
'Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;'
)
conn = await retry(aioodbc.connect, dsn=CONNECTION_STRING, autocommit=True)
logging.info(f'##### Azure SQL Server connection successfully created')
return conn
</code></pre>
<p>Is it possible to use <code>tenant2</code>'s <code>SERVER_NAME</code>, <code>DATABASE_NAME</code>, <code>USERNAME</code> and <code>PASSWORD</code> to connect to the database?</p>
<p>Or is there more to it than that?</p>
|
<python><azure-functions><azure-sql-database>
|
2023-02-22 04:41:55
| 1
| 4,456
|
SeaDude
|
75,528,344
| 1,532,974
|
How to mount a custom adapter by domain (wildcarding subdomains) with python requests?
|
<p>This is a <a href="https://stackoverflow.com/questions/74991540/python-requests-set-proxy-only-for-specific-domain">similar question</a> but not quite the same.</p>
<p>There are some situations in which I need to supply default values for certain arguments like <code>verify</code> (e.g. a custom CA/SSL chain) for specific domains</p>
<pre><code>from requests import Session
from requests.adapters import HTTPAdapter
class MyHTTPAdapter(HTTPAdapter): ...
s = Session()
s.mount("http://www.example.org", MyHTTPAdapter())
</code></pre>
<p>so that not only would <code>www.example.org</code> be handled, but also other subdomains (including redirects) that we cannot determine ahead of time, due to lots of redirects. In this specific situation, we have an s3-like server which returns a lot of 307's to get to the right server that has the file we want to download. Imagine calling a <code>GET</code> to <code>download.example.org</code> which provides a <code>307</code> to <code>p06636710s64948.example.org</code> for us to download the file from, however we need the custom CA chain applied to the redirect too.</p>
<p>We don't have a way to deterministically know all the possible servers that could be redirected to (in some cases, a virtual machine is rapidly spun up just to serve this request that got made and comes with a dynamically generated name). How can we mount an adapter in such a way to automatically set <code>verify</code> without asking users to do it themselves via</p>
<pre><code>s.get('.....', verify='/path/to/ca/chain')
</code></pre>
<p>at the top-level which propagates all the way through?</p>
<p>The best option I can think of is to do something similar to what's mentioned in this <a href="https://github.com/psf/requests/issues/2155#issuecomment-50771010" rel="nofollow noreferrer">github comment</a> and use <code>functools.partial</code> to override <code>s.get</code> to check the URL and change <code>verify</code> as needed before passing it through.</p>
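<p>Another hedged sketch (not an official requests feature, just one way to express it): mount a custom adapter on the scheme and have it decide <code>verify</code> per request, based on the target host's domain suffix. <code>DomainCAAdapter</code> is a hypothetical helper name:</p>
<pre><code>from urllib.parse import urlparse
from requests import Session
from requests.adapters import HTTPAdapter

class DomainCAAdapter(HTTPAdapter):
    """Use a specific CA bundle for every host under a given domain suffix."""
    def __init__(self, suffix, ca_bundle, **kwargs):
        self.suffix = suffix
        self.ca_bundle = ca_bundle
        super().__init__(**kwargs)

    def send(self, request, **kwargs):
        host = urlparse(request.url).hostname or ""
        if host.endswith(self.suffix):
            kwargs["verify"] = self.ca_bundle
        return super().send(request, **kwargs)

s = Session()
# mounted on the scheme, so redirect targets under *.example.org are covered too
s.mount("https://", DomainCAAdapter(".example.org", "/path/to/ca/chain"))
</code></pre>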
|
<python><python-requests>
|
2023-02-22 04:41:23
| 1
| 621
|
kratsg
|
75,528,184
| 13,142,245
|
How to use class methods in Multiprocessing Pool in Python?
|
<p>I present a simplistic toy problem, but the underlying question is: how do I use class methods with a multiprocessing Pool in Python?</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Pool

class Var:
def __init__(self,val):
self.val = val
def increment(self):
self.val +=1
arr = [Var(1) for i in range(1000)]
def func(x):
x.increment()
with Pool() as pool:
results = pool.map(func, arr)
</code></pre>
<p>The results returned are an array of <code>None</code> values. I expect this, as <code>func</code> returned nothing. However, <code>arr[0]</code> is still set to 1. It did not increment. Sure, I could make the method return the new value. But that's not the solution that I'm looking for. The objects should be updated.</p>
<p>At the end of the day, I need to parallelize work on objects. Is there some other way that this can be accomplished in Python?</p>
<p><strong>Edit</strong>: Following recommendations from comments below, I understand that due to IPC design, there's no way to automatically update the objects in the list, arr, from within the pool.map call. So I think the best approach is to update arr following the pool execution</p>
<pre class="lang-py prettyprint-override"><code>class Var:
def __init__(self,val):
self.val = val
def increment(self):
self.val +=1
def update(self,val):
self.val = val
arr = [Var(1) for i in range(1000)]
def func(x):
x.increment()
return x.val
with Pool() as pool:
results = pool.map(func, arr)
for idx, val in enumerate(results):
arr[idx].update(val)
</code></pre>
<p>I don't think that there is a way around the for loop, executing each update sequentially. However, if func was CPU bound and there were enough elems in arr, this design could still offload a lot of the computational burden to <code>pool.map</code> and simply do updates, each O(1), sequentially in the for loop at the end.</p>
|
<python><oop><multiprocessing>
|
2023-02-22 04:07:39
| 1
| 1,238
|
jbuddy_13
|
75,528,043
| 3,713,236
|
Reordering categorical variables using a specified ordering?
|
<p>I have a <code>X_train</code> dataframe. One of the columns <code>locale</code> has the unique values: <code>['Regional', 'Local', 'National'].</code></p>
<p>I am trying to make this column into an Ordered Categorical variable, with the correct order being from smallest to largest: <code>['Local', 'Regional', 'National'] = [0, 1, 2]</code></p>
<p>However, it is not working. Yes I saw the other threads about similar problems as mine, but those solutions are not working. I'm using <code>factorize</code>, but open to customizing the order of <code>LabelEncoder</code> too if that option exists now.</p>
<p>This is my code:</p>
<pre><code>print(X_train['locale'][:10])
cat = pd.Categorical(X_train['locale'], categories = ['Local', 'Regional', 'National'])
codes, uniques = pd.factorize(cat)
print(codes[:10])
</code></pre>
<p>Output: (should be 2's if it is all national)</p>
<p><a href="https://i.sstatic.net/L4DLx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L4DLx.png" alt="enter image description here" /></a></p>
<p>X_train dataframe:</p>
<pre><code>{'id': {0: 0, 1: 1, 2: 2, 3: 3, 4: 4},
'date': {0: Timestamp('2013-01-01 00:00:00'),
1: Timestamp('2013-01-01 00:00:00'),
2: Timestamp('2013-01-01 00:00:00'),
3: Timestamp('2013-01-01 00:00:00'),
4: Timestamp('2013-01-01 00:00:00')},
'store_nbr': {0: '1', 1: '1', 2: '1', 3: '1', 4: '1'},
'family': {0: 'AUTOMOTIVE',
1: 'BABY CARE',
2: 'BEAUTY',
3: 'BEVERAGES',
4: 'BOOKS'},
'sales': {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0},
'onpromotion': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
'city': {0: 'Quito', 1: 'Quito', 2: 'Quito', 3: 'Quito', 4: 'Quito'},
'state': {0: 'Pichincha',
1: 'Pichincha',
2: 'Pichincha',
3: 'Pichincha',
4: 'Pichincha'},
'store_type': {0: 'D', 1: 'D', 2: 'D', 3: 'D', 4: 'D'},
'cluster': {0: '13', 1: '13', 2: '13', 3: '13', 4: '13'},
'dcoilwtico': {0: nan, 1: nan, 2: nan, 3: nan, 4: nan},
'transactions': {0: nan, 1: nan, 2: nan, 3: nan, 4: nan},
'holiday_type': {0: 'Holiday',
1: 'Holiday',
2: 'Holiday',
3: 'Holiday',
4: 'Holiday'},
'locale': {0: 'National',
1: 'National',
2: 'National',
3: 'National',
4: 'National'},
'locale_name': {0: 'Ecuador',
1: 'Ecuador',
2: 'Ecuador',
3: 'Ecuador',
4: 'Ecuador'},
'description': {0: 'Primer dia del ano',
1: 'Primer dia del ano',
2: 'Primer dia del ano',
3: 'Primer dia del ano',
4: 'Primer dia del ano'},
'transferred': {0: False, 1: False, 2: False, 3: False, 4: False},
'year': {0: '2013', 1: '2013', 2: '2013', 3: '2013', 4: '2013'},
'month': {0: '1', 1: '1', 2: '1', 3: '1', 4: '1'},
'week': {0: '1', 1: '1', 2: '1', 3: '1', 4: '1'},
'quarter': {0: '1', 1: '1', 2: '1', 3: '1', 4: '1'},
'day_of_week': {0: 'Tuesday',
1: 'Tuesday',
2: 'Tuesday',
3: 'Tuesday',
4: 'Tuesday'}}
</code></pre>
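<p>A minimal sketch of one fix (untested): <code>factorize</code> numbers values in order of appearance and ignores the category order, whereas the <code>.codes</code> of an ordered <code>Categorical</code> respect the order given in <code>categories</code>:</p>
<pre><code>cat = pd.Categorical(X_train['locale'],
                     categories=['Local', 'Regional', 'National'],
                     ordered=True)
codes = cat.codes   # Local -> 0, Regional -> 1, National -> 2
print(codes[:10])
</code></pre>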
|
<python><pandas><dataframe><categorical-data><label-encoding>
|
2023-02-22 03:43:24
| 1
| 9,075
|
Katsu
|
75,528,024
| 1,658,617
|
How do I compile a Python zipapp as optimized?
|
<p>Assume the following code under <code>src/app.py</code>:</p>
<pre><code>def main():
assert False
if __name__ == "__main__":
main()
</code></pre>
<p>Running this using <code>python -O src/app.py</code> will work fine as the assertions are disabled.</p>
<p>How can I package a <a href="https://docs.python.org/3/library/zipapp.html" rel="nofollow noreferrer">zipapp</a> (<code>python -m zipapp src -m "app:main"</code>) such that when it is double-clicked or run, it'll automatically run as optimized?</p>
<p>I've tried changing the extension to <code>.pyo</code> and it still resulted in an <code>AssertionError</code>.</p>
|
<python><zipapp>
|
2023-02-22 03:38:37
| 1
| 27,490
|
Bharel
|
75,527,966
| 1,171,550
|
ruamel.yaml loses anchor and skips aliases after the first alias on the same level when dumping
|
<p>Given this YAML:</p>
<pre class="lang-yaml prettyprint-override"><code>---
sp-database: &sp-database
DATABASE_NAME: blah
DATABASE_PORT: 5432
DATABASE_SCHEMA: public
DATABASE_USERNAME: foo
DATABASE_DRIVER: bar
DATABASE_TYPE: pg
rabbit: &rabbit
RABBIT_PORT: 5672
RABBIT_USERNAME: foo
sp-env: &sp-env
<<: *sp-database
<<: *rabbit
REDIS_PORT: 6379
</code></pre>
<p>when I read this code in and dump it out:</p>
<pre class="lang-py prettyprint-override"><code>def blah(self):
values_file = './src/values.yaml'
with open(values_file, 'r') as stream:
data = self.yaml.load(stream)
values_file='./src/values1.yaml'
with open(values_file, 'w') as file:
self.yaml.indent(sequence=4, offset=2)
self.yaml.dump(data, file)
</code></pre>
<p>The closest solution I found was this:
<a href="https://stackoverflow.com/questions/73806159/how-to-generate-multiple-yaml-anchors-and-references-in-python">How to generate multiple YAML anchors and references in Python?</a></p>
<p>in which I did change the alias usage to this:</p>
<pre class="lang-yaml prettyprint-override"><code>sp-env: &sp-env
<<: [ *sp-database, *rabbit ]
REDIS_PORT: 6379
</code></pre>
<p>and it works but I want to figure out why it's not working with the sequential aliases, not the array subscripted aliases.</p>
|
<python><python-3.x><yaml><ruamel.yaml>
|
2023-02-22 03:23:27
| 1
| 1,428
|
Recur
|
75,527,955
| 2,998,077
|
GroupBy to count number of overlap substrings in rows
|
<p>Below is a dataframe in which I want to compare the sub-strings of consecutive rows within each GroupBy group.</p>
<p>For example, in Project_A, it's to compare Project_A's first row ['William', 'Oliver', 'Elijah', 'Liam'] with Project_A's second row [ 'James', 'Mason', 'Elijah', 'Oliver']</p>
<p>Ideal result as:</p>
<p><a href="https://i.sstatic.net/RwWqt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RwWqt.png" alt="enter image description here" /></a></p>
<p>I've tried to convert the rows to lists and then compare them, but was unsuccessful.</p>
<pre><code>import pandas as pd
from io import StringIO
import numpy as np
csvfile = StringIO(
"""
Project Members
Project_A 'William', 'Oliver', 'Elijah', 'Liam'
Project_A 'James', 'Mason', 'Elijah', 'Oliver'
Project_A 'Noah', 'Benjamin', 'Mason', 'William'
Project_A 'Liam', 'Oliver', 'Lucas', 'Elijah'
Project_B 'Oliver', 'Elijah', 'Lucas', 'Liam'
Project_B 'Elijah', 'Benjamin', 'Oliver', 'Liam'
Project_B 'Lucas', 'William', 'James', 'Liam'
Project_C 'Lucas', 'Oliver', 'Mason', 'Elijah'
Project_C 'Mason', 'Elijah', 'William', 'Lucas'
Project_C 'Elijah', 'Oliver', 'Lucas', 'Benjamin'
""")
df = pd.read_csv(csvfile, sep = '\t', engine='python')
df['Overlaps'] = df.groupby('Project').apply(lambda group: len(set(group['Members'].tolist()) & set(group['Members'].shift(1).tolist()))).tolist()
</code></pre>
<p>What's the right way to do so?</p>
|
<python><pandas><dataframe>
|
2023-02-22 03:20:00
| 2
| 9,496
|
Mark K
|
75,527,938
| 7,984,318
|
matplotlib in vs studio code can't update plot
|
<p>I'm using matplotlib in vs studio code:</p>
<pre><code>import time

from matplotlib import pyplot as plt
for i in [1,2,3]:
plt.figure(figsize=(15, 6))
plt.cla()
env.render_all()
plt.show()
time.sleep(5)
</code></pre>
<p>It pops out an individual window beside the VS Code window and only shows the first plot of the loop; then the process is stuck until I manually close that plot window, after which the process goes on and the second loop's plot pops out.</p>
<p>I have tired:</p>
<pre><code>from matplotlib import pyplot as plt
for i in [1,2,3]:
plt.close()
plt.close(2)
plt.close(plot1)
plt.close('all')
plt.figure(figsize=(15, 6))
plt.cla()
env.render_all()
plt.show()
time.sleep(5)
</code></pre>
<p>And none of this works for me. I want to keep showing the old plot until, 5 seconds later, the new plot comes and automatically replaces the old one.</p>
<p>Can anyone help?</p>
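<p>A minimal sketch of the usual non-blocking pattern (untested; <code>env</code> comes from the original snippet): <code>plt.show(block=False)</code> returns immediately and <code>plt.pause(5)</code> keeps the window responsive for 5 seconds before the next figure replaces it:</p>
<pre><code>from matplotlib import pyplot as plt

for i in [1, 2, 3]:
    plt.figure(figsize=(15, 6))
    env.render_all()
    plt.show(block=False)   # do not stop the script here
    plt.pause(5)            # keep the GUI event loop running for 5 seconds
    plt.close()             # close this figure so the next one replaces it
</code></pre>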
|
<python><matplotlib><visual-studio-code><plot>
|
2023-02-22 03:16:24
| 1
| 4,094
|
William
|
75,527,887
| 13,142,245
|
How to use Multiprocessing Pool on a dictionary input in Python?
|
<p>My goal is to use map-reduce, aka a multiprocessing pool, on a Python dictionary. I would like it to map key, value pairs to different cores and then aggregate the result as a dictionary.</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing.pool import Pool
elems = {i:i for i in range(1_000_000)}
def func(x):
return (x, elems[x]**2)
with Pool() as pool:
results = pool.map(func, elems.keys())
results = {a:b for a,b in results}
</code></pre>
<p>This is a bit of a hacky solution but is there a more Pythonic way to receive a dictionary input and produce a dictionary output using a multiprocessing pool in Python?</p>
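<p>One slightly more direct sketch (untested): feed the <code>items()</code> to the pool so the workers do not need the module-level dict, and rebuild the mapping with <code>dict()</code> in one step:</p>
<pre><code>from multiprocessing.pool import Pool

elems = {i: i for i in range(1_000_000)}

def func(item):
    key, value = item
    return key, value ** 2

if __name__ == "__main__":
    with Pool() as pool:
        results = dict(pool.imap_unordered(func, elems.items(), chunksize=10_000))
</code></pre>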
|
<python><dictionary><multiprocessing>
|
2023-02-22 03:05:33
| 1
| 1,238
|
jbuddy_13
|
75,527,883
| 2,882,380
|
How to add " and " around a number
|
<p>I use Python to create a text file, which is fed into a calculation APP. Due to the setup of the calculation APP (which I cannot change), numbers must be wrapped in double quotes.</p>
<p>For example, when I open an existing text file used in the calculation APP in Notepad, I can see <code>"1"</code>. However, when I write from Python using <code>to_csv</code>, the number is not wrapped in double quotes. I tried the following, but it gives me <code>"""1"""</code> instead of <code>"1"</code>. How can I get to the desired format in this case, please?</p>
<pre><code>data['field_1'] = data['field_1'].astype(str)
data['field_1'] = '"' + data['field_1'] + '"'
data.to_csv("output.txt", index=False)
</code></pre>
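<p>A minimal sketch (untested): instead of adding the quotes by hand (which <code>to_csv</code> then quotes again, producing <code>"""1"""</code>), let <code>to_csv</code> do the quoting. <code>csv.QUOTE_ALL</code> wraps every field, so adjust if only some columns should be quoted:</p>
<pre><code>import csv

data.to_csv("output.txt", index=False, quoting=csv.QUOTE_ALL)
</code></pre>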
|
<python><pandas><csv><double-quotes>
|
2023-02-22 03:04:49
| 1
| 1,231
|
LaTeXFan
|
75,527,813
| 14,311,637
|
Difficulty in installing rasterio
|
<p>I need to clip a raster file using a shape file. I know the code of doing that using rasterio. But I cannot install rasterio.</p>
<p>I tried:</p>
<pre><code>conda install rasterio gdal=2 -y
</code></pre>
<p>but it has been showing "installing" for 3 hours.</p>
<p>Is there any other package that can be used for the same purpose?</p>
|
<python><windows>
|
2023-02-22 02:50:41
| 0
| 377
|
Jui Sen
|
75,527,763
| 12,319,746
|
Deploying python script in Azure
|
<p>I have a Python script which makes around 10,000 API calls and sends the data from these calls to an SQL DB. It also needs to read/write 2 text files, and this programme needs to run continuously, forever. Is there a way to deploy this in Azure in anything other than a Virtual Machine without causing a significant increase in complexity and management?</p>
<p>I have read about function apps, durable functions, app service, etc. However, my code is not a web app; it takes around 4 hours to run once and needs to run in a loop. Based on the information about these services, they aren't supposed to be used for a job like this. Is there any other alternative in Azure?</p>
|
<python><azure>
|
2023-02-22 02:39:33
| 1
| 2,247
|
Abhishek Rai
|
75,527,619
| 1,187,968
|
Python/Experta Rule engine not running as expect
|
<p>I have the following code, and the Experta engine gives unexpected output: <code>None</code>. Is the <code>P</code> predicate function applied correctly?</p>
<pre><code>from experta import Fact, KnowledgeEngine, L, AS, Rule, P, AND
class QuoteFact(Fact):
    """Info about the traffic light."""
    pass

class GoodSymbolFact(Fact):
    pass

class _RuleEngine(KnowledgeEngine):
    @Rule(
        QuoteFact(P(lambda x: float(x.price) > 1.00))
    )
    def accept(self, quote_fact):
        good_symbol_fact = GoodSymbolFact(symbol=quote_fact.symbol)
        self.declare(good_symbol_fact)
_engine = _RuleEngine()
_engine.reset()
_engine.declare(QuoteFact(price=100.00, symbol="AAPL"))
result = _engine.run()
print(result)
</code></pre>
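<p>For reference, my (possibly wrong, untested) understanding from the experta docs is that <code>P</code> is applied to a named field and <code>AS</code> binds the matched fact to the parameter name, roughly like this:</p>
<pre><code># untested sketch of how I believe the rule is meant to be written
@Rule(AS.quote_fact << QuoteFact(price=P(lambda p: float(p) > 1.00)))
def accept(self, quote_fact):
    self.declare(GoodSymbolFact(symbol=quote_fact["symbol"]))
</code></pre>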
|
<python><rule-engine><experta>
|
2023-02-22 02:00:50
| 1
| 8,146
|
user1187968
|
75,527,554
| 1,293,778
|
Error adding item to Azure Storage Queue from Python
|
<p>Have a (so far) very simple Azure Function in Python. Trying to use Azure Queues for the first time. Modeling off of <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-queue-output?tabs=in-process%2Cextensionv5&pivots=programming-language-python" rel="nofollow noreferrer">the tutorial here</a>. When I try to run this I get the following error:</p>
<blockquote>
<p>System.Private.CoreLib: Exception while executing function:
Functions.applyNewSurvey123. System.Private.CoreLib: Result: Failure
Exception: TypeError: unable to encode outgoing TypedData: unsupported
type "<class
'azure_functions_worker.bindings.generic.GenericBinding'>" for Python
type "list" Stack: File "C:\Program Files\Microsoft\Azure Functions
Core
Tools\workers\python\3.9/WINDOWS/X64\azure_functions_worker\dispatcher.py",
line 425, in _handle__invocation_request
param_binding = bindings.to_outgoing_param_binding( File "C:\Program Files\Microsoft\Azure Functions Core
Tools\workers\python\3.9/WINDOWS/X64\azure_functions_worker\bindings\meta.py",
line 160, in to_outgoing_param_binding
datum = get_datum(binding, obj, pytype) File "C:\Program Files\Microsoft\Azure Functions Core
Tools\workers\python\3.9/WINDOWS/X64\azure_functions_worker\bindings\meta.py",
line 108, in get_datum
raise TypeError( .</p>
</blockquote>
<p>My code:</p>
<pre><code>import logging
import azure.functions as func
def main(req: func.HttpRequest,
         msg: func.Out[str]) -> func.HttpResponse:
    jsonString = req.get_json()
    try:
        msg.set(jsonString)
        return func.HttpResponse("HTTP triggered successfully.", status_code=200)
    except Exception as e:
        logging.error(f"Error: {e}")
        return func.HttpResponse("NOPE", status_code=500)
</code></pre>
<p>I've got an Azure Storage Queue set up and function.json seems to be set up correctly. If I set msg to just some plain text, it gives the same error. If I set a logging.error message after msg.set(), I do see the message. Any ideas what I'm doing wrong here?</p>
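<p>One thing I plan to test next (not yet confirmed to be the cause) is serializing the parsed body back to a string before handing it to the output binding, since <code>msg</code> is declared as <code>func.Out[str]</code> but <code>req.get_json()</code> returns a Python object (dict or list):</p>
<pre><code>import json
import logging
import azure.functions as func

def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    payload = req.get_json()          # dict or list, not a str
    try:
        msg.set(json.dumps(payload))  # hand the binding an actual string
        return func.HttpResponse("HTTP triggered successfully.", status_code=200)
    except Exception as e:
        logging.error(f"Error: {e}")
        return func.HttpResponse("NOPE", status_code=500)
</code></pre>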
|
<python><azure><azure-storage-queues>
|
2023-02-22 01:46:46
| 1
| 361
|
Roger Asbury
|
75,527,408
| 411,094
|
Call Github Actions in CookieCutter template
|
<p>I am building a template repo that uses <a href="https://github.com/cookiecutter/cookiecutter" rel="nofollow noreferrer">CookieCutter</a>. The template itself will contain github action files that execute the tests in the template.</p>
<p>When I make changes to the template itself, in the PR, I want to instantiate the template with the default options and then call the github action files in the template to test the template itself.</p>
<p>What I have working now is I instantiate the template and then use Tox and Pytest to run the tests directly. This works but I would like to call the template action files in order to test the CICD of the template itself.</p>
<p>Is this possible?</p>
<p>TIA!</p>
|
<python><github><github-actions><cookiecutter>
|
2023-02-22 01:13:00
| 0
| 1,271
|
josh
|
75,527,284
| 5,526,682
|
pandas filter rows based on row 1 column B equals row 2 column A and so forth
|
<p>I have a problem that I'm trying to figure out how to accomplish.
I have a dataframe with multiple columns containing names and hrs.</p>
<pre><code>d = {'ID': [1, 2,3,4,5,6], 'uName': ['Mark', 'Joe', 'Patty', 'Mary', 'Ted', 'Sam'], 'sName': ['Patty','Mary', 'Sam','Sally','Tony','Bob'], 'hrs': [20, 16,35,18,15,21], 'dep': ['A', 'J', 'K','I','P','U']}
df = pd.DataFrame(data=d)
</code></pre>
<p>I want to select a row - in this example I'll select row 1 (Mark). I then want to take its sName and select the row whose uName equals that sName, so here I would use Patty and select row 3. I would then take Sam and select row 6, since Sam is the uName in row 6, and because no row has Bob as its uName, I would stop there.</p>
<pre><code>ID uName sName hrs dep
1 Mark Patty 20 A
2 Joe Mary 16 J
3 Patty Sam 35 K
4 Mary Sally 18 I
5 Ted Tony 15 P
6 Sam Bob 21 U
</code></pre>
<p>so my new df would be</p>
<pre><code>ID uName sName hrs dep
1 Mark Patty 20 A
3 Patty Sam 35 K
6 Sam Bob 21 U
</code></pre>
<p>Almost thinking networkx may be a good solution here, but not sure. Figured I would see if anyone knows how in pandas to do this. I am only using a few examples here, but my real data has around 90k rows</p>
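<p>To illustrate the kind of traversal I mean, a naive sketch (untested on the real data, and assuming uName values are unique) would follow the chain with a plain index lookup - I suspect this is too slow for 90k rows, which is why I'm asking:</p>
<pre><code>import pandas as pd

d = {'ID': [1, 2, 3, 4, 5, 6],
     'uName': ['Mark', 'Joe', 'Patty', 'Mary', 'Ted', 'Sam'],
     'sName': ['Patty', 'Mary', 'Sam', 'Sally', 'Tony', 'Bob'],
     'hrs': [20, 16, 35, 18, 15, 21],
     'dep': ['A', 'J', 'K', 'I', 'P', 'U']}
df = pd.DataFrame(data=d)

# index by uName so each link in the chain is a fast lookup
by_uname = df.set_index('uName')

rows = []
current = 'Mark'                      # starting person
while current in by_uname.index:
    row = by_uname.loc[current]
    rows.append(row)
    current = row['sName']            # next link in the chain

result = pd.DataFrame(rows).reset_index().rename(columns={'index': 'uName'})
print(result)                         # Mark -> Patty -> Sam
</code></pre>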
|
<python><pandas><networkx>
|
2023-02-22 00:46:24
| 2
| 463
|
Messak
|
75,527,221
| 13,178,155
|
How to pass a variable through Javascript to PyQt5 Python script?
|
<p>I'm currently building a little GUI using PyQt5 and some html/js scripts that have already been built by someone else for a previous project, and wondering if anyone might be able to help me.</p>
<p>I'm comfortable working in python, but my limited knowledge of html, js and jquery are really making this confusing.</p>
<p>I'm in a situation where I need to be able to take information recorded by the javascript/html and pass it back through to my python script so that I can use it to create a variable for output later on.</p>
<p>I've created a reproducible version below. Currently I'm able to know that gender is being assigned a value when I click, using the buttons. But is there a way that I can pass the value of gender back through to the python script so that I am able to use it there too?</p>
<p>I want to be able to access it in successive parts of the app so that I can create a profile for the user, etc.</p>
<p>pyqtwebtest1.html</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.6.3/jquery.min.js"></script>
<script>
$(document).ready(function(){
$("#new_butt").click(function(){
$("#para").hide();
var gender=$ (this).attr('value');
window.alert(gender)
});
$("#other_butt").click(function(){
$("#para").show();
var gender=$ (this).attr('value');
window.alert(gender)
});
});
</script>
</head>
<body>
<h1>My First Heading</h1>
<p id="para">My first paragraph.</p>
<button id ="new_butt" for="Gender_1" value=1> Click me please!</button>
<button id ="other_butt" for="Gender_2" value=25> Click me please!</button>
</body>
</html>
</code></pre>
<p>pyqtwebtest.py</p>
<pre><code>from PyQt5.QtWidgets import QApplication, QMainWindow
from PyQt5 import QtWebEngineWidgets
from pathlib import Path
import sys
class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        # Create Web View
        view = QtWebEngineWidgets.QWebEngineView()
        html = Path('stack overflow_pass_test/pyqtwebtest1.html').read_text(encoding="utf8")
        view.setHtml(html)
        self.setCentralWidget(view)
app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
</code></pre>
<p>I'm wanting to bring all of this information from the javascript/html back through to the python app because I'm much more comfortable working in python, and the successive parts of the app I'm building out really don't rely on the javascript elements. It's just this small part I'm trying to solve.</p>
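<p>From what I've read so far (untested), the usual bridge in Qt WebEngine is <code>QWebChannel</code>: a Python <code>QObject</code> with slots is registered on the page and the JavaScript side calls into it after loading <code>qwebchannel.js</code>. The class name <code>Backend</code> and the slot <code>set_gender</code> below are placeholders I made up; this is only a rough Python-side sketch:</p>
<pre><code>from PyQt5.QtCore import QObject, pyqtSlot
from PyQt5.QtWebChannel import QWebChannel

class Backend(QObject):
    @pyqtSlot(str)
    def set_gender(self, gender):
        # called from JavaScript; store the value so the rest of the app can use it
        print(f"gender received from JS: {gender}")

# inside MainWindow.__init__, after creating `view`:
# self.backend = Backend()
# self.channel = QWebChannel()
# self.channel.registerObject("backend", self.backend)
# view.page().setWebChannel(self.channel)
</code></pre>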
|
<javascript><python><html><jquery><pyqt5>
|
2023-02-22 00:33:12
| 0
| 417
|
Jaimee-lee Lincoln
|
75,527,195
| 2,334,092
|
Pandas : Cannot subtract dataframes
|
<p>I have 2 simple data frames</p>
<p>df1</p>
<pre><code> colA colB
0 e5b1b9fc-ade9-4501-a66b-ef2ecd57483e.d9967f258... 2ZWR52QYZ86H
1 8d127d82-cfa4-421f-9081-cf35132b8248.f0865b3b9... 61RPLMR5BFFT
2 005c8e84-98b4-402d-a24e-6a63edad0598.16b6f0f9f... 7L256IQTB1M1
3 d87f6dfd-1c55-4ce5-9b84-e80b6aa958d8.3f0901c7f... 3H9SLNATBJ01
4 cf89c9dd-004e-40e7-8120-3397ce5fd97e.f571bc175... 4Z8RT5VZNOQ8
5 9eebc606-e8d0-40e3-9ba5-6d3e1b77bc64.0dc42d528... 1DEOAHZL2JFC
6 7112aef1-5fa0-4459-aa1b-15cba2f96ec5.6a9ecb28d... 2CIISYGAAV69
7 e30d901c-34e6-4974-9b9e-1fe206ed6fca.701f1358e... 2NLLJ70RXKW2
8 13677989-8979-4422-a471-7fda22ea4f6d.e00051e45... 6P60G721DVHK
</code></pre>
<p>df2</p>
<pre><code>0 e5b1b9fc-ade9-4501-a66b-ef2ecd57483e.d9967f258... 2ZWR52QYZ86H
1 8d127d82-cfa4-421f-9081-cf35132b8248.f0865b3b9... 61RPLMR5BFFT
2 005c8e84-98b4-402d-a24e-6a63edad0598.16b6f0f9f... 7L256IQTB1M1
3 d87f6dfd-1c55-4ce5-9b84-e80b6aa958d8.3f0901c7f... 3H9SLNATBJ01
4 cf89c9dd-004e-40e7-8120-3397ce5fd97e.f571bc175... 4Z8RT5VZNOQ8
</code></pre>
<p>Now I want to isolate the rows in df1 that are not in df2.</p>
<p>So I tried</p>
<pre><code>df1.subtract(df2)
</code></pre>
<p>but i get</p>
<pre><code>result[mask] = op(xrav[mask], yrav[mask])
TypeError: unsupported operand type(s) for -: 'str' and 'str'
</code></pre>
<p>What am I doing wrong?</p>
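<p>In case it clarifies what I'm after: I essentially want an anti-join. The pattern I've seen suggested (not yet tried on my data, and assuming both frames share the colA/colB column names) uses a merge with an indicator:</p>
<pre><code># keep only the rows of df1 whose (colA, colB) pair does not appear in df2
only_in_df1 = (
    df1.merge(df2, on=["colA", "colB"], how="left", indicator=True)
       .query('_merge == "left_only"')
       .drop(columns="_merge")
)
</code></pre>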
|
<python><pandas><dataframe><subtraction>
|
2023-02-22 00:27:32
| 1
| 8,038
|
AbtPst
|
75,527,171
| 962,891
|
How to route I/O to console AND logfile using clipspy
|
<p>I am using <a href="https://clipspy.readthedocs.io/en/latest/" rel="nofollow noreferrer">clipspy v1.0.0</a> and Python 3.10 on Ubuntu 22.0.4 LTS.</p>
<p>I am trying to get commands from CLIPS rules (e.g. <code>print t 'WARN: Listen up, maggots...'</code>) to print to the console by calling my registered function, which will then parse the message and recognise that it is a warning, so use the logging module to write a warning level message to the log file.</p>
<p>This is what I have so far:</p>
<h3>CLIPS rule file (example.clp)</h3>
<pre><code>(defrule welcome
(name "Tyler Durden")
=>
(printout t "INFO: Gentlemen, welcome to Fight Club. The first rule of Fight Club is: you do not talk about Fight Club!" crlf))
</code></pre>
<h3>Python program (example.py)</h3>
<pre><code>import logging
import logging.handlers
import re
import clips
log_format = '%(asctime)s - %(levelname)s - %(message)s'
logging.basicConfig(level=logging.INFO, format=log_format)
logger = logging.getLogger('CLIPS')
log_level = logging.INFO
log_filename = 'expert'
handler = logging.handlers.TimedRotatingFileHandler(f"{log_filename}.log", when="midnight", interval=1)
handler.setLevel(log_level)
formatter = logging.Formatter(log_format)
handler.setFormatter(formatter)
# add a suffix which you want
handler.suffix = "%Y%m%d"
#need to change the extMatch variable to match the suffix for it
handler.extMatch = re.compile(r"^\d{8}$")
# finally add handler to logger
logger.addHandler(handler)
def my_print(msg):
    # Placeholder code, not parsing message or using logger for now ...
    print(f"CLIPS: {msg}")

try:
    env = clips.Environment()
    router = clips.LoggingRouter()
    env.add_router(router)
    env.define_function(my_print)
    env.load("example1.clp")
    env.assert_string('(name "Tyler Durden")')
    # env.reset()
    env.run()
    # while True:
    #     env.run()
except clips.CLIPSError as err:
    print(f"CLIPS error: {err}")
except KeyboardInterrupt:
    print("Stopping...")
finally:
    env.clear()
</code></pre>
<h3>Bash</h3>
<pre><code>me@yourbox$ python example.py
2023-02-21 23:58:20,860 - INFO - INFO: Gentlemen, welcome to Fight Club. The first rule of Fight Club is: you do not talk about Fight Club!
</code></pre>
<p>A log file is created, but nothing is written to it. Also, it seems stdout is simply being routed to Python's stdout instead of calling my function.</p>
<p>How do I fix the code above, so that when a <code>(printout t)</code> statement is encountered in a CLIPS program, it simultaneously prints to the console and writes to the log using the correct (i.e. specified) log level?</p>
|
<python><clips><clipspy>
|
2023-02-22 00:23:13
| 1
| 68,926
|
Homunculus Reticulli
|
75,527,129
| 16,462,878
|
Substitute (scalar) multiples of a sub-expression
|
<p>I need to identify and replace <strong>multiple</strong> of a sub-expression from an expression:</p>
<p>Eliminate <code>1*x - 3*exp(x)</code> from <code>x**2 - 3*x + 6*exp(x)</code> to give either <code>x**2 - x</code> or <code>x**2 - 3*exp(x)</code>.</p>
<p>I haven't yet found an elegant and correct way to approach the problem.
Among my several attempts, I tried a recursive application of <a href="https://docs.sympy.org/latest/modules/core.html?highlight=extract#sympy.core.expr.Expr.extract_additively" rel="nofollow noreferrer"><code>extract_additively</code></a>, but it's not universal and may fail!</p>
<p>What is the right way to "eliminate" multiple of an expression?</p>
<pre><code>import sympy as sp
import IPython.display as disp
x = sp.Symbol('x')
yp = x**2 - 3*x + 6*sp.exp(x)
# testing sub-expressions
yhs = [1*x - 3*sp.exp(x), # OK
5*x - 3*sp.exp(x), # FAIL: coefficient greater than the original
- 7*sp.exp(x), # FAIL: coefficient greater than the original
]
# recursive extraction
for yh in yhs:
    yp_mod = yp.expand()
    while yp_mod.extract_additively(yh):
        yp_mod = yp_mod.extract_additively(yh)
    while yp_mod.extract_additively(-yh):  # test for opposite sign
        yp_mod = yp_mod.extract_additively(-yh)
    disp.display(yp_mod)
#x**2 - x
#x**2 - 3*x + 6*exp(x)
#x**2 - 3*x + 6*exp(x)
</code></pre>
|
<python><sympy><substitution>
|
2023-02-22 00:14:19
| 0
| 5,264
|
cards
|
75,527,103
| 20,898,396
|
Calling functions on arrays moved to GPU with Numba
|
<p>I didn't think we could print anything from the GPU since calling <code>print</code> inside a <code>@cuda.jit</code> function doesn't work, but then I tried calling <code>A.shape</code> to see what would happen.</p>
<pre><code>import numpy as np
from numba import cuda
A = np.random.randn(1000, 1000)
A_gpu = cuda.to_device(A)
</code></pre>
<pre><code>A_gpu.shape
</code></pre>
<pre><code>(1000, 1000)
</code></pre>
<pre><code>A_gpu[0][0]
</code></pre>
<pre><code>0.4253498653987585
</code></pre>
<pre><code>A_gpu.T
</code></pre>
<pre><code><numba.cuda.cudadrv.devicearray.DeviceNDArray at 0x7f5de810ffa0>
</code></pre>
<p>For something to be printed to the console, do the numbers need to be copied to the CPU first?</p>
<pre><code>%timeit A.T
%timeit A_gpu.T
%timeit A.shape
%timeit A_gpu.shape
%timeit A[0][0]
%timeit A_gpu[0][0]
</code></pre>
<pre><code>132 ns ± 18.9 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
159 ms ± 29.3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
*76 ns* ± 2.37 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
*47.8 ns* ± 8.81 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
376 ns ± 146 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
161 µs ± 25.7 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
</code></pre>
<p>Calling <code>A.shape</code> is faster in the GPU for some reason, but the other functions are slower. However, it might be the case that accessing elements <code>A[i, j]</code> inside a <code>@cuda.jit</code> is optimized and isn't slower.</p>
<p>I am implementing a <a href="https://numba.readthedocs.io/en/stable/cuda/examples.html#matrix-multiplication" rel="nofollow noreferrer">CUDA kernel for matrix multiplication</a>, with the intention of using it for back propagation in neural networks, meaning <code>dL_dX = np.dot(dL_dY, self.weights.T)</code> will be performed very often.</p>
<p>If I need to transpose a matrix, I was wondering if it's bad practice to transpose from the GPU <code>matrix_multiplication_gpu[blocks_per_grid, threads_per_block](A_gpu, B_gpu.T)</code> and whether it would be better to transpose the matrix in the CPU first, and then move/"cache" the result to GPU <code>cuda.to_device(A.T)</code>. Interestingly, moving the array to the GPU <code>%timeit cuda.to_device(A.T)</code> is much faster <code>2.41 ms ± 145 µs</code> than transposing the array within the GPU.</p>
|
<python><numpy><cuda><numba>
|
2023-02-22 00:08:16
| 1
| 927
|
BPDev
|
75,527,087
| 9,105,621
|
group plotly y-axis labels, but keep individual data lines
|
<p>I have a flask function that creates a timeline chart. I want to group the y-axis labels, but keep the individual data lines. How can I achieve this?</p>
<p>Here is my function:</p>
<pre><code>def drawpipelinechart(ta='All'):
table = pipeline_table().pipe(calc_fpfv_countdown)
cols = [
'ID',
'PROTOCOL',
'Quarter_Year',
'Supplier_Selected',
'TA',
'PRODUCT'
]
table = table[cols]
table['End'] = pd.to_datetime(table['Quarter_Year'].str.split(
',').str[-1] + '-' + table['Quarter_Year'].str.split(',').str[0].str[-1] + '-01')
table['Start'] = table['End'] - pd.offsets.MonthBegin(3)
table['link'] = table.apply(lambda x: f'<a href="/vo/editpipelinerecord/{x["ID"]}" target="_blank" class="nav-link" style="cursor: pointer;" target="_blank"></a>', axis=1)
now = pd.Timestamp.now()
end_date = now + pd.offsets.MonthEnd(1) + pd.offsets.DateOffset(months=18)
table = table[(table['End'] > now) & (table['End'] <= end_date)]
table = table[table['TA']==ta] if ta != 'All' else table
table.loc[:, 'Supplier_Selected'] = table['Supplier_Selected'].fillna('Open')
fig = px.timeline(table, x_start="Start", x_end="End", y="PROTOCOL", text='PROTOCOL',
color="Supplier_Selected", hover_name="PROTOCOL", template="simple_white",
labels={"Supplier_Selected": "Supplier"}, custom_data=['link'],
)
fig.update_yaxes(autorange="reversed")\
.update_xaxes(
showgrid=True,
gridwidth=.5,
gridcolor='lightgray',
tickformat='%Y-Q%q',
dtick='M3'
)\
.update_layout(
margin=dict(t=100),
xaxis=dict(side='top'),
height=200+len(table['PROTOCOL'].unique())*40,
autosize=True,
yaxis_title=None,
)\
.update_traces(textposition='inside',
)
# Add a horizontal line at each y-coordinate
for y in table['PROTOCOL'].unique():
fig.add_hline(y=y, line_dash="dot", line_width=1, line_color="gray")
graphJSON = json.dumps(fig, cls=plotly.utils.PlotlyJSONEncoder)
return graphJSON
</code></pre>
<p>This currently gives this:</p>
<p><a href="https://i.sstatic.net/xXNJL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xXNJL.png" alt="enter image description here" /></a></p>
<p>I want to group the y-axis labels by the 'product' data column</p>
<p>Here is my desired result:</p>
<p><a href="https://i.sstatic.net/mWKT7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mWKT7.png" alt="enter image description here" /></a></p>
<p>Update:</p>
<p>I have added a mapping to convert the 'protocols' to 'product' on the y-axis. Now I just have to figure out how to group them.</p>
<p>I added this:</p>
<pre><code>yaxis=dict(
title=None,
tickmode='array',
ticktext=table['PRODUCT'],
tickvals=table['PROTOCOL'].unique()
)
</code></pre>
<p>the code is now producing this:</p>
<p><a href="https://i.sstatic.net/6amnF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6amnF.png" alt="enter image description here" /></a></p>
<p>any thoughts on how to group the y-axis labels from here?</p>
|
<python><flask><plotly>
|
2023-02-22 00:04:44
| 1
| 556
|
Mike Mann
|
75,527,077
| 5,651,575
|
How to filter Pandas dataframe based on grouped multiple column conditions?
|
<p>Given the following dataframe:</p>
<pre><code>currency index product price
AUD A STOCK $10.00
AUD A BOND $10.00
AUD B OPTION $11.00
AUD B STOCK $12.00
USD A STOCK $14.00
USD A BOND $11.00
USD A OPTION $19.00
USD B BOND $12.00
</code></pre>
<p>For a given currency and given index, if that index & currency contains options, filter out stock and bond rows.</p>
<p>Therefore the expected output will be:</p>
<pre><code>currency index product price
AUD A STOCK $10.00
AUD A BOND $10.00
AUD B OPTION $11.00
USD A OPTION $19.00
USD B BOND $12.00
</code></pre>
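<p>A sketch of the logic I have in mind (untested, assuming the frame is called df) uses a group-wise flag for whether an OPTION exists in each currency/index group:</p>
<pre><code># True for every row whose (currency, index) group contains at least one OPTION
has_option = (
    df.groupby(["currency", "index"])["product"]
      .transform(lambda s: s.eq("OPTION").any())
)

# keep everything in groups without options, otherwise keep only the OPTION rows
result = df[~has_option | df["product"].eq("OPTION")]
</code></pre>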
|
<python><pandas><dataframe>
|
2023-02-22 00:02:53
| 1
| 617
|
youngdev
|
75,527,054
| 2,334,092
|
Python3 : How to spawn jobs in parallel
|
<p>I am pretty new to multithreading and would like to explore. I have a JSON file that provides some config. Based on this, I need to kick off some processing. Here is the config:</p>
<pre><code>{
"job1":{
"param1":"val1",
"param2":"val2"
},
"job2":{
"param3":"val3",
"param4":"val4"
}
}
</code></pre>
<p>and here is the python snippet</p>
<pre><code>config_file = open('config.json')
config_data = json.load(config_file)

for job_name, job_atts in metric_data.items():
    perform_job(job_name, job_atts)
</code></pre>
<p>So in this way, I can finish the jobs one by one.</p>
<p>Is there a way to run/kick off these jobs in parallel? Note that these jobs are completely independent of each other and do not need to be performed in a sequence.</p>
<p>How can I achieve parallel runs via Python?</p>
<p><strong>Update</strong></p>
<pre><code>Here is what i tried
>>> from multiprocessing import Pool
>>>
>>> config_data = json.loads(''' {
... "job1":{
... "param1":"val1",
... "param2":"val2"
... },
... "job2":{
... "param3":"val3",
... "param4":"val4"
... }
... }''')
>>> def perform_job(job_name,job_atts):
... print(job_name)
... print(job_atts)
...
>>> args = [(name, attrs)
... for name, attrs in config_data.items()]
>>>
>>> with Pool() as pool:
... pool.starmap(perform_job, args)
...
Process SpawnPoolWorker-27:
Process SpawnPoolWorker-24:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/pool.py", line 114, in worker
task = get()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/queues.py", line 368, in get
return _ForkingPickler.loads(res)
AttributeError: Can't get attribute 'perform_job' on <module '__main__' (built-in)>
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/pool.py", line 114, in worker
task = get()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/queues.py", line 368, in get
return _ForkingPickler.loads(res)
AttributeError: Can't get attribute 'perform_job' on <module '__main__' (built-in)>
</code></pre>
<p>But I am still getting the error.</p>
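<p>One thing I suspect (not yet verified) is that this fails because I ran it in the interactive interpreter, where spawned workers cannot re-import <code>perform_job</code>. As a standalone script with a <code>__main__</code> guard, the sketch I intend to try looks like this:</p>
<pre><code>import json
from multiprocessing import Pool

def perform_job(job_name, job_atts):
    print(job_name)
    print(job_atts)

def main():
    with open("config.json") as config_file:
        config_data = json.load(config_file)
    args = list(config_data.items())      # [(job_name, job_atts), ...]
    with Pool() as pool:
        pool.starmap(perform_job, args)

if __name__ == "__main__":
    main()
</code></pre>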
|
<python><python-3.x><multithreading><python-multithreading>
|
2023-02-21 23:58:43
| 2
| 8,038
|
AbtPst
|
75,527,045
| 3,406,100
|
Issues connecting to GEE from R and connecting to Chrome using RSelenium
|
<p>I recently tried configuring my R environment to connect to GEE directly from my desktop. I have certain consistent issues and don't know why. I could connect to a website through Rselenium before I started tampering with stuff to get rgee to connect. What could be wrong and how do I fix it? Even when I set the correct gcs credentials, it still doesn't find it.
The path to python on my system is defined this way when I call it through the reticulate library:</p>
<pre><code>library(reticulate)
library(rgee)
Sys.which("python")
"C:\\Users\\Myname\\AppData\\Local\\MICROS~1\\WINDOW~1\\python.exe"
use_python(Sys.which("python3"))
Error in use_python(Sys.which("python3")) :
Specified version of python 'C:\Users\Myname\AppData\Local\MICROS~1\WINDOW~1\python3.exe' does not exist.
</code></pre>
<p>Most tutorials online show a Python path with a different structure. Now, even when I try to initialize, I have issues:</p>
<pre><code>ee_Initialize(user = 'myname@gmail.com', drive = TRUE, gcs = T)
── rgee 1.1.5 ────────────────────────────────────────────────── earthengine-api 0.1.339 ──
✔ user: myname@gmail.com
✔ Google Drive credentials:Auto-refreshing stale OAuth token.
✔ Google Drive credentials: FOUND
✔ GCS credentials: NOT FOUND
✔ Initializing Google Earth Engine:Fetching credentials using gcloud
Error: Exception: gcloud failed. Please check for any errors above.
*Possible fixes: If you loaded a page with a "redirect_uri_mismatch" error, run earthengine authenticate with the --quiet flag; if the error page says "invalid_request", be sure to run the entire gcloud auth command that is shown.
More information: https://developers.google.com/earth-engine/guides/python_install
</code></pre>
<p>How do I set my python path to something like this (which is written in most rgee tutorials):</p>
<pre><code> /usr/local/bin/python3
</code></pre>
<p>P.S. I have 3 different versions of Python installed (including version 3.10) on my system for ArcGIS Pro and Desktop. I also have the Anaconda that came with the ESRI environment (I am scared of tampering with it).
Also, how do I fix the new problems with connecting to other websites through RSelenium?</p>
<pre><code>library(stringr)
library(RSelenium)
library(dplyr)
rd <- rsDriver(chromever = "110.0.5481.77",browser = "chrome", port = 9515L)
remDr <- rd$client
remDr$open()
[1] "Connecting to remote server"
Error in checkError(res) :
Undefined error in httr call. httr output: Failed to connect to localhost port 4800: Connection refused
</code></pre>
<p>I saw a suggestion to use Docker to connect and then scrape behind an iframe but will it work on Windows 10? If yes, how do I use it?</p>
|
<python><r><python-3.x><rselenium><rgee>
|
2023-02-21 23:56:57
| 0
| 619
|
Joke O.
|
75,526,957
| 12,461,032
|
Seaborn boxplot legend ignoring colors
|
<p>I have a boxplot in Seaborn/Matplotlib for multiple categorical data:</p>
<p><a href="https://i.sstatic.net/frqIY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/frqIY.png" alt="plot" /></a></p>
<p>The problem is that the legend does not match the plot colors.</p>
<pre><code>data = pd.DataFrame.from_dict(data)
print(data.head())
model_names = ['T5B', 'NatGen']
dfl = pd.melt(data, id_vars='metric', value_vars= ['T5B', 'NatGen'])
sns.boxplot(x= 'metric' , y='value', data=dfl, showfliers=False, color='tomato', hue='variable')
plt.legend(bbox_to_anchor=(1.04,0.5), loc="center left", borderaxespad=0, labels = model_names)
plt.show()
</code></pre>
<p>P.S:</p>
<pre><code>print(df.head())
</code></pre>
<p>Would yield:</p>
<pre><code> metric variable value
0 syntax T5B 0.071429
1 syntax T5B 0.086957
2 syntax T5B 0.090909
3 syntax T5B 0.071429
4 syntax T5B 0.125000
</code></pre>
|
<python><matplotlib><seaborn><legend><boxplot>
|
2023-02-21 23:42:02
| 0
| 472
|
m0ss
|
75,526,939
| 13,142,245
|
How to pass an object in Python with the self keyword?
|
<p>Is it possible to pass an object in Python such that the only reference to the object is the self keyword?</p>
<pre class="lang-py prettyprint-override"><code>class A:
    def __init__(self):
        self.val = 'A'

    def wave(self, friend):
        friend.make_friend(self)

class B:
    def __init__(self):
        val = 'B'

    def make_friend(self, friend):
        self.friend = friend

a = A
b = B
a.wave(friend=B)

>>>
----> 1 a.wave(friend=B)
TypeError: A.wave() missing 1 required positional argument: 'self'
</code></pre>
<p>From this error, it does not appear to be possible. See class A method wave. This method intends to send A to B. But I can't reference A within A because it's simply an instance of the class. Thus, I want to pass it using the self keyword.</p>
|
<python>
|
2023-02-21 23:37:04
| 4
| 1,238
|
jbuddy_13
|
75,526,917
| 687,739
|
Traversing a directory with Python's glob
|
<p>I need to match all <code>*.csv.gz</code> files in a bunch of directories using <code>glob</code>.</p>
<p>My directory structure looks like this:</p>
<pre><code>data
/nasdaq100
/2015
/20150102
/A.csv.gz
...
/Z.csv.gz
...
/20161201
...
/2016
...
acquisition/
/script.py
</code></pre>
<p>In <code>script.py</code> I want to traverse all the subdirectories under <code>nasdaq100</code> and process the <code>*.csv.gz</code> files.</p>
<p>I'm coming up with an empty list when using this:</p>
<pre><code>from pathlib import Path
DATA_PATH = Path('../data')
path = DATA_PATH / "nasdaq100"
list(path.glob('*/**/*.csv.gz'))
</code></pre>
<p>Since <code>path</code> is <code>data/nasdaq100</code> I thought matching the current directory and sub directories with <code>*/**/*.csv.gz</code> would work.</p>
<p>What am I doing wrong?</p>
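<p>For completeness, the two variants I'm about to compare (results not yet confirmed; the relative <code>../data</code> path is assumed to be correct from where the script runs) are a recursive glob pattern and <code>rglob</code>:</p>
<pre><code>from pathlib import Path

DATA_PATH = Path("../data")
path = DATA_PATH / "nasdaq100"

files_a = list(path.glob("**/*.csv.gz"))   # '**' matches any number of directory levels
files_b = list(path.rglob("*.csv.gz"))     # shorthand for the same recursive search
print(len(files_a), len(files_b))
</code></pre>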
|
<python><glob>
|
2023-02-21 23:31:59
| 0
| 15,646
|
Jason Strimpel
|
75,526,789
| 12,369,606
|
Pathlib: change end of file name
|
<p>I am trying to iterate through a directory, do something to each of the files and then save a new file with a different name similar to the question <a href="https://stackoverflow.com/questions/68972637/using-pathlib-instead-of-os-for-identifying-file-and-changing">here</a>, but I am looking for a solution using pathlib. I am just not sure how to add the desired ending to the end of the file name</p>
<pre><code>movie_dir = pathlib.Path("/movies")
save_dir = pathlib.Path("/corrected_movies")
for dir in movie_dir.iterdir():
    for movie_path in dir.iterdir():
        save_path = save_dir / movie_path  # want to add _corrected.avi to end of file name
</code></pre>
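<p>To make the intent concrete, the kind of thing I'm trying to express (untested) is building the new name from the stem of the original file:</p>
<pre><code>import pathlib

movie_dir = pathlib.Path("/movies")
save_dir = pathlib.Path("/corrected_movies")

for sub_dir in movie_dir.iterdir():
    for movie_path in sub_dir.iterdir():
        # e.g. "clip.mp4" -> "/corrected_movies/clip_corrected.avi"
        save_path = save_dir / f"{movie_path.stem}_corrected.avi"
</code></pre>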
|
<python><pathlib>
|
2023-02-21 23:09:50
| 2
| 504
|
keenan
|
75,526,750
| 2,474,876
|
nlp translation preserving emojis
|
<p>Is there a way to prevent NLP translations from dropping emojis (or specific out of vocabulary tokens for that matter) when using the <a href="https://huggingface.co/Helsinki-NLP/opus-mt-ROMANCE-en" rel="nofollow noreferrer">huggingface--Helsinki-NLP--opus-mt-ROMANCE-en</a> models? Desired behavior:</p>
<p>"<code>Bonjour ma France 🇫🇷</code>"@fr --> "<code>Hello my France 🇫🇷</code>"@en</p>
<p>I can see the pre-trained default tokenizer has the emoji in its vocabulary, but loses it before decoding. Sample code:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import *
# setup
engine = 'pt'
resource = 'huggingface--Helsinki-NLP--opus-mt-ROMANCE-en'
nlp = pipeline(
task="translation",
model=MarianMTModel.from_pretrained(resource),
tokenizer=AutoTokenizer.from_pretrained(resource),
framework=engine
)
# infer
translated = nlp.tokenizer.batch_decode(
skip_special_tokens=True,
sequences=nlp.model.generate(
**nlp.tokenizer(
text=["Bonjour ma France 🇫🇷"],
return_tensors=engine
)
)
)
# results
print(translated) # ['Hello, my France.']
print("🇫🇷" in nlp.tokenizer.get_vocab()) # True
</code></pre>
|
<python><translation><emoji><huggingface>
|
2023-02-21 23:03:52
| 0
| 417
|
eliangius
|
75,526,647
| 10,500,424
|
Plotly Cytoscape: remove edge functionality to increase callback performance
|
<p>I have a performance issue with Plotly Cytoscape. I understand that nodes and edges can be bound to callbacks to provide functionality (such as changing color, displaying labels, etc.).</p>
<p>Lag in callback performance in the documentation's <a href="https://dash.plotly.com/cytoscape/events" rel="nofollow noreferrer">examples</a> is unnoticeable, but the number of nodes and edges there is small. Once the number of nodes and edges increases, the drop in callback performance is obvious, and the hit is especially noticeable when edges are present. In my use case, I would like to display edges but have no functionality bound to them; I simply need the edges to show the relationships between nodes. Is this possible?</p>
<p>Please try out the code snippet below, which was modified from the documentation to add thousands of nodes and edges. If you do not plot the edges, you will notice an increase in callback performance. I am wondering how I can keep the edges while maintaining high callback performance.</p>
<pre class="lang-py prettyprint-override"><code>import json
import random
from dash import Dash, html, Input, Output
import dash_cytoscape as cyto
random.seed(888)
app = Dash(__name__)
# Generate a large number of random nodes and edges
NUM_NODES = 3_000
rand_nodes = [(str(random.randint(0, NUM_NODES)), random.randint(20, 50), -random.randint(50, 150)) for _ in range(NUM_NODES)]
node_ids = list(zip(*rand_nodes))[0]
edge_list = zip(node_ids, node_ids[::-1])
styles = {
'pre': {
'border': 'thin lightgrey solid',
'overflowX': 'scroll'
}
}
nodes = [
{
'data': {'id': short},
'position': {'x': 20*lat, 'y': -20*long}
}
for short, long, lat in rand_nodes
]
edges = [
{'data': {'source': source, 'target': target}}
for source, target in edge_list
]
default_stylesheet = [
{
'selector': 'node',
'style': {
'background-color': '#BFD7B5',
'label': 'data(label)'
}
}
]
app.layout = html.Div([
cyto.Cytoscape(
id='cytoscape-event-callbacks-1',
layout={'name': 'preset'},
## Toggle below to display nodes or nodes + edges
elements=edges+nodes,
# elements=nodes,
stylesheet=default_stylesheet,
style={'width': '100%', 'height': '450px'}
),
html.Pre(id='cytoscape-tapNodeData-json', style=styles['pre'])
])
@app.callback(Output('cytoscape-tapNodeData-json', 'children'),
Input('cytoscape-event-callbacks-1', 'tapNodeData'))
def displayTapNodeData(data):
return json.dumps(data, indent=2)
if __name__ == '__main__':
app.run_server(debug=True)
</code></pre>
|
<javascript><python><graph><plotly><cytoscape>
|
2023-02-21 22:48:24
| 0
| 1,856
|
irahorecka
|
75,526,623
| 312,089
|
Django - How to filter a Many-to-Many-to-Many relantionship
|
<p>I have 3 models (<code>User</code> is Django's built in user-model):</p>
<pre><code>class User(AbstractUser):
pass
class Listing(models.Model):
name = models.CharField('Listing Name', max_length=64)
description = models.TextField()
owner = models.ForeignKey(User, related_name="listings", on_delete=models.CASCADE)
active = models.BooleanField('Active', default=True)
class Bid(models.Model):
listing = models.ForeignKey(Listing, related_name="listing_bids", on_delete=models.CASCADE)
bidder = models.ForeignKey(User, related_name="user_bids", on_delete=models.CASCADE)
amount = models.DecimalField(max_digits=7, decimal_places=2)
class Meta:
unique_together = ('listing', 'bidder',)
</code></pre>
<p>Basically a <code>Listing</code> can have multiple <code>Bids</code> from different <code>Users</code></p>
<p>Now I would like to get the <code>Listing</code> row and it's corresponding <code>Bid</code> row where <code>Listing = x</code> and <code>User = y</code></p>
<p>I am <strong>able</strong> to get a queryset returned when I do this:</p>
<pre><code>Listing.objects.get(pk=8).listing_bids.all()
</code></pre>
<p><strong>But</strong> now I'd like to filter that queryset further based on the <code>bidder</code> where <code>user=xx</code> (or <code>username = xx</code>).</p>
<p>I should then only get one row back (that is if the <code>Listing</code> and <code>User</code> exist on the <code>Bid</code> table) or <code>None</code>should be returned.</p>
<p>It seems so simple to do but I can't figure it out - besides "looping" through all the rows myself.</p>
|
<python><django><django-models><many-to-many>
|
2023-02-21 22:45:04
| 0
| 677
|
Wavesailor
|
75,526,595
| 10,743,830
|
apply a function on a dataframe in a rolling fashion using values from two columns and previous rows
|
<p>Let's say I have the following dataframe (which in reality is much bigger hence the method should be fast):</p>
<pre><code>df = pd.DataFrame({"distance1": [101, 102, 103], "distance2":[12, 33, 44]})
distance1 distance2
0 12 101
1 33 102
2 44 103
</code></pre>
<p>Now I want to apply following function on this dataframe</p>
<pre><code>def distance(x):
    return np.sqrt(np.power(x.loc[n, "distance1"] - x.loc[n-1, "distance1"], 2)
                   + np.power(x.loc[n, "distance2"] - x.loc[n-1, "distance2"], 2))

data["dist"] = data.apply(distance, axis=1)
</code></pre>
<p>Essentially I want to calculate the Euclidean distance between consecutive rows using the distance1 and distance2 columns, where n is the current row and n-1 is the previous row in the dataframe.</p>
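<p>To be explicit about the target computation, a vectorised sketch of what I think the rolling distance should produce (untested on the full data) is:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({"distance1": [101, 102, 103], "distance2": [12, 33, 44]})

# difference between each row and the previous row, then the Euclidean norm
df["dist"] = np.sqrt(df["distance1"].diff() ** 2 + df["distance2"].diff() ** 2)
print(df)  # the first row is NaN because it has no previous row
</code></pre>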
|
<python><pandas><numpy><apply>
|
2023-02-21 22:41:46
| 2
| 352
|
Noah Weber
|
75,526,285
| 743,188
|
python ABC: forbid extra methods on derived classes?
|
<p>Is there a way to prohibit classes derived from an abc.ABC from adding extra methods when they implement the ABC?</p>
<p>Like so:</p>
<pre><code>from abc import ABC, abstractmethod

class Foo(ABC):
    """base class for all Foos"""
    def __init__(self, sec_type: str, metric: str) -> None:
        ...

    def some_base_method(...)

    @abstractmethod
    def m1(self, ....)

    @abstractmethod
    def m2(self, ....)
</code></pre>
<p>Legal:</p>
<pre><code>class Bar(Foo):
    def some_base_method(...  # i can override this too if i want to
    def m1(...
    def m2(...
</code></pre>
<p>but make any other method illegal:</p>
<pre><code>class NoBueno(Foo): # SHOULD RAISE
    def m1(...
    def m2(...
    def m3(....
</code></pre>
<p>Not that this is a common use case; I just want to know if this is possible (sort of the opposite of <code>abstractmethod</code>).</p>
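<p>The closest mechanism I've come across so far is <code>__init_subclass__</code>, which can inspect a subclass as it is defined and reject unexpected attributes. This is only an untested sketch of that idea, not something I know to be the standard approach:</p>
<pre><code>from abc import ABC, abstractmethod

class Foo(ABC):
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # names already defined somewhere up the MRO are allowed (overrides)
        allowed = {name for base in cls.__mro__[1:] for name in vars(base)}
        extra = {
            name for name, value in vars(cls).items()
            if callable(value) and not name.startswith("__") and name not in allowed
        }
        if extra:
            raise TypeError(f"{cls.__name__} defines unexpected methods: {extra}")

    @abstractmethod
    def m1(self): ...

    @abstractmethod
    def m2(self): ...

class Bar(Foo):          # fine: only implements/overrides known methods
    def m1(self): ...
    def m2(self): ...

class NoBueno(Foo):      # raises TypeError at class-definition time because of m3
    def m1(self): ...
    def m2(self): ...
    def m3(self): ...
</code></pre>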
|
<python><abstract-class>
|
2023-02-21 21:57:37
| 0
| 13,802
|
Tommy
|
75,526,276
| 11,036,109
|
Pandas Dataframe: Get and Edit all values in a column containing substring
|
<p>Lets say I have a dataframe, called stores, like this one:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>country</th>
<th>store_name</th>
</tr>
</thead>
<tbody>
<tr>
<td>FR</td>
<td>my new tmp</td>
</tr>
<tr>
<td>ES</td>
<td>this Tmp is new</td>
</tr>
<tr>
<td>FR</td>
<td>walmart</td>
</tr>
<tr>
<td>ES</td>
<td>Target</td>
</tr>
<tr>
<td>FR</td>
<td>TMP</td>
</tr>
</tbody>
</table>
</div>
<p>and another dataframe, called replacements, like this one:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>country</th>
<th>original</th>
<th>replacement</th>
</tr>
</thead>
<tbody>
<tr>
<td>ES</td>
<td>TMP</td>
<td>STORE</td>
</tr>
<tr>
<td>FR</td>
<td>TMP</td>
<td>STORE</td>
</tr>
<tr>
<td>FR</td>
<td>WALMART</td>
<td>IGNORE</td>
</tr>
</tbody>
</table>
</div>
<p>How would you go about getting and updating all values in the store_name column of the first dataframe according to the "rules" of the second one, when the substring in the original column is found (ignoring lower/upper case)?</p>
<p>For this example i'd like to get a new dataframe like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>country</th>
<th>store_name</th>
</tr>
</thead>
<tbody>
<tr>
<td>FR</td>
<td>my new STORE</td>
</tr>
<tr>
<td>ES</td>
<td>this STORE is new</td>
</tr>
<tr>
<td>FR</td>
<td>IGNORE</td>
</tr>
<tr>
<td>ES</td>
<td>Target</td>
</tr>
<tr>
<td>FR</td>
<td>STORE</td>
</tr>
</tbody>
</table>
</div>
<p>I was thinking something like iterating the second dataframe and apply the change to the first one, like this:</p>
<pre><code>for index, row in replacements.iterrows():
    stores['store_name'] = stores['store_name'].str.upper().replace(row["original"].upper(), row["replacement"])
</code></pre>
<p>It kind of works, but it's doing some weird things like not changing some strings. Also, I'm not sure if this is the optimal way of doing this. Any suggestions?</p>
<p>Reproducible inputs:</p>
<pre><code>data = [['FR', 'my new tmp'], ['ES', 'this Tmp is new'], ['FR', 'walmart'], ['ES', 'Target'], ['FR', 'TMP']]
df1 = pd.DataFrame(data, columns=['country', 'store_name'])
data = [['ES', 'TMP','STORE'], ['FR', 'TMP','STORE'], ['FR', 'WALMART','IGNORE']]
df2 = pd.DataFrame(data, columns=['country', 'store_name','replacement'])
</code></pre>
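<p>One direction I'm experimenting with (untested on the full data, and like my loop above it ignores the country column) is doing the substring replacement case-insensitively via regex, so the rest of each store name keeps its original casing:</p>
<pre><code>import re

result = df1.copy()
for _, row in df2.iterrows():
    # df2's "original" value is in its store_name column in the reproducible input
    pattern = re.escape(row["store_name"])
    result["store_name"] = result["store_name"].str.replace(
        pattern, row["replacement"], case=False, regex=True
    )
print(result)
</code></pre>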
|
<python><pandas><dataframe>
|
2023-02-21 21:56:10
| 2
| 411
|
Alain
|
75,526,264
| 1,977,508
|
Using drag and drop files or file picker with CustomTkinter
|
<p>I have recently decided to start learning Python and, while doing several small projects as a hands-on approach, I discovered the <code>customtkinter</code> library (<a href="https://github.com/TomSchimansky/CustomTkinter" rel="nofollow noreferrer">https://github.com/TomSchimansky/CustomTkinter</a>) for more modern-looking GUI development with Python.</p>
<p>I wanted to do something which either requires a drag-and-drop component for files or a file picker dialogue, which is seemingly <em>somewhat</em> present for the original <code>tkinter</code> library with the <code>tkinterdnd2</code> module, but it doesn't seem to be directly mentioned in the documentation for the <code>customtkinter</code> library wrapper.</p>
<p><strong>Does anyone know how to use drag-and-drop for files with <code>customtkinter</code> specifically?</strong></p>
<p>If there is no direct wrapper with <code>customtkinter</code>, is there a way to apply the styles of <code>customtkinter</code> to the <code>tkinderdnd2</code> module? When using it like this, obviously it just uses the default <code>tkinter</code> style:</p>
<pre><code>from tkinter import TOP, Entry, Label, StringVar
from tkinterdnd2 import *
def get_path(event):
    pathLabel.configure(text = event.data)
root = TkinterDnD.Tk()
root.geometry("350x100")
root.title("Get file path")
nameVar = StringVar()
entryWidget = Entry(root)
entryWidget.pack(side=TOP, padx=5, pady=5)
pathLabel = Label(root, text="Drag and drop file in the entry box")
pathLabel.pack(side=TOP)
entryWidget.drop_target_register(DND_ALL)
entryWidget.dnd_bind("<<Drop>>", get_path)
root.mainloop()
</code></pre>
<p><a href="https://i.sstatic.net/qYtlj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qYtlj.png" alt="enter image description here" /></a></p>
|
<python><tkinter><drag-and-drop><customtkinter>
|
2023-02-21 21:54:35
| 2
| 539
|
Furious Gamer
|
75,526,261
| 3,798,713
|
How to download files from GCS cloud bucket which has special characters in their file names using python client api?
|
<p>I am using Windows and I am able to download GCS files which don't have special characters.
I used the following <a href="https://stackoverflow.com/questions/51203883/download-multiple-file-from-google-cloud-storage-using-python">link</a> to download the files.
However, I am unable to download files which have special characters. (Pipe symbol)
Example filename: BRX_23022023|00000.csv</p>
<p>I have added the code. Files are downloaded as per the folder structure mentioned in the bucketname and foldername variables. However, if the filename has special characters, it simply creates the folder structure, but it won't download the files. C:\adt-dev-outputs\archive\2023\02</p>
<pre><code>import os
from google.cloud import storage
from os import makedirs
cred_json_file_path = 'C:/Users/config.json'
client = storage.Client.from_service_account_json(cred_json_file_path)
def download_blob(bucket: storage.Bucket, remotefile: str, localpath: str='.'):
"""downloads from remotepath to localpath"""
localrelativepath = '/'.join(remotefile.split('/')[:-1])
totalpath = f'{localpath}/{localrelativepath}'
filename = f'{localpath}/{remotefile}'
makedirs(totalpath, exist_ok=True)
print(f'Current file details:\n remote file: {remotefile}\n local file: {filename}\n')
blob = storage.Blob(remotefile, bucket)
blob.download_to_filename(filename, client=client)
def download_blob_list(bucketname: str, bloblist: list, localpath: str='.'):
"""downloads a list of blobs to localpath"""
bucket = storage.Bucket(client, name=bucketname)
for blob in bloblist:
download_blob(bucket, blob, localpath)
def list_blobs(bucketname: str, remotepath: str=None, filetypes: list=[]) -> list:
"""returns a list of blobs filtered by remotepath and filetypes
remotepath and filetypes are optional"""
result = []
blobs = list(client.list_blobs(bucketname, prefix=remotepath))
for blob in blobs:
name = str(blob.name)
# skip "folder" names
if not name.endswith('/'):
# do we need to filter file types?
if len(filetypes) > 0:
for filetype in filetypes:
if name.endswith(filetype):
result.append(name)
else:
result.append(name)
return result
bucketname = 'adt-dev-outputs'
foldername = 'archive/2023/02'
filetypes = ['.csv'] # list of extentions to return
bloblist = list_blobs(bucketname, remotepath=foldername, filetypes=filetypes)
download_blob_list(bucketname, bloblist, localpath=bucketname)
</code></pre>
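<p>My current suspicion (not yet confirmed) is that the blob itself is fine but the local write fails because <code>|</code> is not a legal character in Windows file names. So I'm considering sanitising only the local name while keeping the remote blob name untouched; <code>safe_local_name</code> below is just a helper name I made up for this sketch:</p>
<pre><code>import re

def safe_local_name(remotefile: str) -> str:
    # replace characters Windows does not allow in file names: \ / : * ? " < > |
    return re.sub(r'[\\/:*?"<>|]', "_", remotefile.split("/")[-1])
</code></pre>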
|
<python><google-cloud-platform><google-cloud-storage><google-api-python-client>
|
2023-02-21 21:54:05
| 1
| 446
|
Suj
|
75,526,218
| 1,857,373
|
X not have valid feature names, reshape warning, X is 2D, Ya is 1D
|
<p><strong>PROBLEM</strong>
The issue is a warning about invalid feature names in X: "UserWarning: X does not have valid feature names, but LinearRegression was fitted with feature names".</p>
<p>X is 2D, Ya is 1D response variable.</p>
<p>And ValueError on</p>
<pre><code>ValueError Traceback (most recent call last)
Cell In[445], line 5
3 lm = linear_model.LinearRegression()
4 lm.fit(X, Ya)
----> 5 res = lm.predict(Ya)
...
ValueError: Expected 2D array, got 1D array instead:
array=[0.24107763 0.20358284 0.26190807 ... 0.321622 0.14890293 0.15636717].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
</code></pre>
<p><strong>CODE for linear model</strong></p>
<p>This code executes with ValueError</p>
<pre><code>from sklearn import linear_model
lm = linear_model.LinearRegression()
lm.fit(X, Ya)
res = lm.predict(Ya)
</code></pre>
<p><strong>CODE SHAPE for Y response variables with reshape(-1,1)</strong></p>
<p>This code runs before LinearRegression</p>
<pre><code>Y = train_data['SalePrice']
X = train_data.drop(['Id'], axis=1).values
Ya = Y.values
Ya.reshape(-1, 1)
print('Y shape:', Ya.shape)
Ya.head
</code></pre>
<p><strong>Data and object shape</strong></p>
<pre><code>Y shape: (1460,)
array([0.24107763, 0.20358284, 0.26190807, ..., 0.321622 , 0.14890293,
0.15636717])
</code></pre>
<p><strong>CODE SHAPE for X features</strong></p>
<pre><code>print('X shape:', X.shape)
X
</code></pre>
<p><strong>Data</strong></p>
<pre><code>
array([[0.15068493, 0.0334198 , 0.66666667, ..., 0. , 1. ,
0. ],
[0.20205479, 0.03879502, 0.55555556, ..., 0. , 1. ,
0. ],
[0.1609589 , 0.04650728, 0.66666667, ..., 0. , 1. ,
0. ],
...,
[0.15410959, 0.03618687, 0.66666667, ..., 0. , 1. ,
0. ],
[0.1609589 , 0.03934189, 0.44444444, ..., 0. , 1. ,
0. ],
[0.18493151, 0.04037019, 0.44444444, ..., 0. , 1. ,
0. ]])
</code></pre>
<p>Original train_data values</p>
<pre><code>LotFrontage LotArea OverallQual OverallCond YearBuilt YearRemodAdd MasVnrArea ExterQual ExterCond BsmtQual ... SaleType_ConLw SaleType_New SaleType_Oth SaleType_WD SaleCondition_Abnorml SaleCondition_AdjLand SaleCondition_Alloca SaleCondition_Family SaleCondition_Normal SaleCondition_Partial
0 0.150685 0.033420 0.666667 0.500 0.949275 0.883333 0.12250 0.666667 0.5 0.8 ... 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0 0.0
1 0.202055 0.038795 0.555556 0.875 0.753623 0.433333 0.00000 0.333333 0.5 0.8 ... 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0 0.0
2 0.160959 0.046507 0.666667 0.500 0.934783 0.866667 0.10125 0.666667 0.5 0.8 ... 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0 0.0
3 0.133562 0.038561 0.666667 0.500 0.311594 0.333333 0.00000 0.333333 0.5 0.6 ... 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0
4 0.215753 0.060576 0.777778 0.500 0.927536 0.833333 0.21875 0.666667 0.5 0.8 ... 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0 0.0
5 rows × 249 columns
</code></pre>
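<p>Two details I'm double-checking on my side (I haven't re-run this yet, so please correct me if this is not the actual cause): <code>reshape</code> returns a new array rather than modifying Ya in place, and <code>predict</code> presumably needs the feature matrix X rather than the target:</p>
<pre><code>from sklearn import linear_model

Ya = Ya.reshape(-1, 1)        # reshape is not in-place; the result must be reassigned

lm = linear_model.LinearRegression()
lm.fit(X, Ya)
res = lm.predict(X)           # predict on the features, not on the target
</code></pre>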
|
<python><arrays><linear-regression>
|
2023-02-21 21:47:16
| 0
| 449
|
Data Science Analytics Manager
|
75,526,202
| 4,629,916
|
Can I use python Template to recursively use a dictionary as its own template to replace values in the source dictionary
|
<p>I would like to use a dictionary as its own template.</p>
<p>My attempt is below. Actually, I used AI to get that function, but it was unable to generalize.</p>
<pre><code>from string import Template

def recursive_template(d, **kwargs):
    d.update(kwargs)
    for k, v in d.items():
        if isinstance(v, str):
            try:
                template = Template(v)
                d[k] = template.substitute(**kwargs)
            except KeyError:
                pass
        elif isinstance(v, dict):  # added condition to handle nested dictionaries
            d[k] = recursive_template(v, **kwargs)
    return d

config_dict = {
    'dataPath': '/home/work/Users/',
    'dataLoc': '${dataPath}inst/extdata/'
}
config_dict2 = recursive_template(d=config_dict)
print(config_dict2)
</code></pre>
<p>Wanted</p>
<pre><code>{'dataPath': '/home/work/Users/',
'dataLoc': '/home/work/Users/inst/extdata/'}
</code></pre>
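<p>The direction I'm currently exploring (untested; <code>self_template</code> is just a name I made up) is to substitute using the dictionary itself as the mapping and repeat until nothing changes:</p>
<pre><code>from string import Template

def self_template(d, max_passes=10):
    # repeatedly substitute values using the dict itself as the mapping,
    # so entries can reference other entries such as ${dataPath}
    for _ in range(max_passes):
        new = {
            k: Template(v).safe_substitute(d) if isinstance(v, str) else v
            for k, v in d.items()
        }
        if new == d:
            break
        d = new
    return d

config_dict = {
    'dataPath': '/home/work/Users/',
    'dataLoc': '${dataPath}inst/extdata/'
}
print(self_template(config_dict))
# {'dataPath': '/home/work/Users/', 'dataLoc': '/home/work/Users/inst/extdata/'}
</code></pre>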
|
<python><templates>
|
2023-02-21 21:43:37
| 1
| 1,522
|
Harlan Nelson
|
75,526,165
| 2,167,717
|
How can I calculate session IDs from event data, assigning a new ID every time there was no event for 15 minutes?
|
<p>I have a huge file with events tracked a website. The data contains, among others, the <code>user_id</code> and the <code>time_stamp</code> of events (clicked on a link, viewed an image, etc.). Here is a simplified example:</p>
<pre><code>#%%
import pandas as pd
# make a dataframe
df = pd.DataFrame([['123', 1],
['123', 1],
['123', 19],
['234', 7],
['234', 28],
['234', 29]],
columns=['user_id', 'time_stamp'])
print(df)
</code></pre>
<p>What I would like to obtain is a <code>session_id</code> column, which is counting the sessions for each user. (Alternatively a string with the <code>user_id</code> and the <code>time_stamp</code> concatenated, but I assume counting is simpler?) I want it to look somewhat like this:</p>
<pre><code># make a dataframe
df = pd.DataFrame([['123', 1, 0],
['123', 1, 0],
['123', 19, 1],
['234', 7, 0],
['234', 28, 1],
['234', 29, 1]],
columns=['user_id', 'time_stamp', session_id])
print(df)
</code></pre>
<p>I read quite a lot, and tried even more, but I just can't figure out how to do it without a <code>for</code> loop. There is probably some <code>.shift(1)</code> involved and something with <code>.groupby()</code>? Any help is appreciated.</p>
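<p>For the simplified example above (where the threshold would be 15 "units" rather than minutes, and assuming the frame is already sorted by user_id and time_stamp), the kind of shift/groupby result I expect is sketched below - I just don't know whether this is the idiomatic way:</p>
<pre><code>import pandas as pd

df = pd.DataFrame([['123', 1], ['123', 1], ['123', 19],
                   ['234', 7], ['234', 28], ['234', 29]],
                  columns=['user_id', 'time_stamp'])

gap = df.groupby('user_id')['time_stamp'].diff()                 # time since previous event per user
df['session_id'] = (gap > 15).groupby(df['user_id']).cumsum().astype(int)
print(df)
</code></pre>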
|
<python><pandas><dataframe><group-by>
|
2023-02-21 21:39:43
| 1
| 365
|
Maxim Moloshenko
|
75,526,099
| 1,398,841
|
How to make a Django UniqueConstraint that checks fields in a position-independant way?
|
<p>I have a Django <code>Model</code> where I'd like to ensure that no duplicate <code>Edge</code> exists where the same pair of <code>Node</code>s appear as <code>node_a</code> or <code>node_b</code>, in either order.</p>
<pre class="lang-py prettyprint-override"><code>class Edge(models.Model):
    project = models.ForeignKey(Project, related_name="edges", on_delete=models.CASCADE)
    node_a = models.ForeignKey(Node, related_name="+", on_delete=models.CASCADE)
    node_b = models.ForeignKey(Node, related_name="+", on_delete=models.CASCADE)

    class Meta:
        constraints = [
            UniqueConstraint(
                fields=("project", "node_a", "node_b"), name="unique_edge"
            ),
        ]
</code></pre>
<p>This catches if an <code>Edge</code> is made with the same <code>(A, B)</code>, but not if the same <code>Node</code>s are put in reverse <code>(B, A)</code>.</p>
<p>The validation function I'm trying to port to <a href="https://docs.djangoproject.com/en/4.1/ref/models/constraints/" rel="nofollow noreferrer">constraints</a> was:</p>
<pre class="lang-py prettyprint-override"><code>    def validate_unique(self, *args: Any, **kwargs: Any) -> None:
        if self.project.edges.filter(
            Q(node_a=self.node_a_id, node_b=self.node_b_id)
            | Q(node_a=self.node_b_id, node_b=self.node_a_id)
        ).exists():
            raise ValidationError(
                f"Edges must be unique within a project: {self.node_a_id}|{self.node_b_id}"
            )
</code></pre>
<p>This validation function validates that the <code>Node</code>s are unique in both directions: <code>(A, B)</code> or <code>(B, A)</code>.</p>
<p>Is there a way to express this within a <a href="https://docs.djangoproject.com/en/4.1/ref/models/constraints/#uniqueconstraint" rel="nofollow noreferrer"><code>UniqueConstraint</code></a>?</p>
<p>Currently targeting Django 4.1.</p>
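<p>One idea I'm evaluating but have not tested yet is the expression form of <code>UniqueConstraint</code> (available since Django 4.0), normalising the pair with <code>Least</code>/<code>Greatest</code> so that <code>(A, B)</code> and <code>(B, A)</code> collide. I assume it also needs a database that supports functional unique indexes:</p>
<pre class="lang-py prettyprint-override"><code>from django.db.models import UniqueConstraint
from django.db.models.functions import Greatest, Least

class Edge(models.Model):
    # ... fields as above ...

    class Meta:
        constraints = [
            # order-independent uniqueness: index the smaller node id first
            UniqueConstraint(
                "project",
                Least("node_a", "node_b"),
                Greatest("node_a", "node_b"),
                name="unique_edge_unordered",
            ),
        ]
</code></pre>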
|
<python><django><django-models><django-validation><django-constraints>
|
2023-02-21 21:31:29
| 0
| 9,369
|
johnthagen
|
75,526,087
| 11,039,117
|
pybind11 cannot import templates
|
<p>I tried to use templates in pybind11 <a href="https://github.com/bast/pybind11-demo" rel="nofollow noreferrer">based on this demo</a>. Error</p>
<pre><code>>>> from example import add
>>> add(2,3)
5L
>>> from example import myadd
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name myadd
</code></pre>
<p>Why can't myadd be imported while add can? I can use basic functions with int, double, or void, but whenever I try to use more complex structures like templates or shared_ptr, this error occurs again (even if I just copy-paste examples from the documentation).</p>
<p>Source code</p>
<pre><code>#include <pybind11/pybind11.h>
#include <iostream>
#include <string>
int add(int i, int j) {
return i + j;
}
template <class T> //template <typename T> doesn't work as well
T myadd(T a, T b) {
return a+b;
}
namespace py = pybind11;
PYBIND11_MODULE(example, m) {
// optional module docstring
m.doc() = "pybind11 example plugin";
// define add function
m.def("add", &add, "A function which adds two numbers");
m.def("myadd", &myadd<int>);
m.def("myadd", &myadd<float>);
}
</code></pre>
|
<python><c++><pybind11>
|
2023-02-21 21:29:39
| 1
| 758
|
Vladislav Kogan
|
75,525,825
| 1,232,087
|
pyspark - select() function ignoring if statement
|
<p>Thanks to user @DerekO, <a href="https://stackoverflow.com/a/75487254/1232087">his</a> following example is correctly getting max lengths of only <code>varchar</code> columns. But when I use the same example with <code>df</code> loaded with a <code>csv</code> file it ignores the <code>if</code> statement and calculates the max lengths of all columns (including the ones that are integers, doubles etc.)</p>
<p><strong>Question</strong>: <strong>Without</strong> creating a custom schema, how can we improve EXAMPLE 2 below so that it displays the max lengths of only the varchar (string) columns?</p>
<p><strong>Example 1</strong>:</p>
<pre><code>from pyspark.sql.functions import col, length, max
from pyspark.sql.types import StringType
df = spark.createDataFrame(
[
(1, '2', '1'),
(1, '4', '82'),
(1, '2', '3'),
],
['col1','col2','col3']
)
df.select([
max(length(col(schema.name))).alias(f'{schema.name}_max_length')
for schema in df.schema
if schema.dataType == StringType()
])
+---------------+---------------+
|col2_max_length|col3_max_length|
+---------------+---------------+
| 1| 2|
+---------------+---------------+
</code></pre>
<p><strong>Example 2</strong>:</p>
<pre><code>from pyspark.sql.functions import col, length, max
from pyspark.sql.types import StringType
df = spark.read.option("delimiter", ',').option("header", 'true').option("escape", '"').option("inferSchema", 'true')\
.csv("abfss://myContainer@myStorageAccountName" + '.dfs.core.windows.net/' + myFile_path)
df = df.select([max(length(col(schema.name))).alias(f'{schema.name}')
for schema in df.schema
if schema.dataType == StringType()
])
display(df)
#The above code displays lengths of all columns even though `csv` file contains non-varchar columns, as well, as shown below:
for schema in df.schema:
print(schema.name+" , "+str(schema.dataType))
#Output: The csv has about 80 columns. For brevity I am displaying only the few here
Field , StringType
Field2 , StringType
Field3 , StringType
Field4 , IntegerType
Field5 , DoubleType
Field6 , LongType
Field7, StringType
Field8 , StringType
Field9 , DoubleType
.....
.....
</code></pre>
|
<python><azure><pyspark><azure-databricks><azure-storage-account>
|
2023-02-21 20:56:17
| 1
| 24,239
|
nam
|
75,525,731
| 4,070,660
|
Fix JSON with unescaped double quotes in Python
|
<p>Imagine these samples:</p>
<pre><code>["And he said "Hello world" and went away"]
</code></pre>
<p>Or:</p>
<pre><code>{"key": "value " with quote in the middle"}
</code></pre>
<p>Or</p>
<pre><code>["invalid " string", "valid string"]
</code></pre>
<p>Those are invalid JSON and it is pretty obvious how to fix them by escaping the quotes.
I am getting this JSON from a bugged system (which I don't control and cannot change), so I have to fix it on my side.
In theory it should be pretty simple - if you are within a string and you find a quote character, it should be immediately followed by either:</p>
<ul>
<li>comma and another quote <code>,"</code></li>
<li>end of array if in array <code>]</code></li>
<li>end of object if in object <code>}</code></li>
</ul>
<p>In all other cases, the quote can be considered part of the string and should be escaped (a rough sketch of this heuristic is included below).</p>
<p>Before I start implementing this.</p>
<ul>
<li>Are there any libraries that are handling this?</li>
<li>Any json parser libraries that can be easily customized to do this?</li>
<li>Are there any errors in my logic?</li>
</ul>
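<p>To make the intended logic concrete, here is a rough, untested sketch of that heuristic (<code>fix_unescaped_quotes</code> is a name I made up): a quote is treated as a real terminator only when the next non-space character is <code>,</code>, <code>]</code>, <code>}</code> or <code>:</code>, and is escaped otherwise. It inherits the limits of the heuristic itself, e.g. a genuine comma right after a stray quote would still fool it:</p>
<pre><code>import json

def fix_unescaped_quotes(text: str) -> str:
    out = []
    in_string = False
    i = 0
    while i < len(text):
        ch = text[i]
        if not in_string:
            if ch == '"':
                in_string = True
            out.append(ch)
        elif ch == '\\' and i + 1 < len(text):
            # keep existing escape sequences untouched
            out.append(ch)
            out.append(text[i + 1])
            i += 1
        elif ch == '"':
            # look at the next non-space character to decide if this quote terminates the string
            j = i + 1
            while j < len(text) and text[j] in ' \t\r\n':
                j += 1
            nxt = text[j] if j < len(text) else ''
            if nxt == '' or nxt in ',]}:':
                in_string = False
                out.append(ch)          # looks like a real string terminator
            else:
                out.append('\\"')       # stray quote inside the string: escape it
        else:
            out.append(ch)
        i += 1
    return ''.join(out)

print(json.loads(fix_unescaped_quotes('["And he said "Hello world" and went away"]')))
</code></pre>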
|
<python><json><parsing><jsonparser>
|
2023-02-21 20:45:25
| 2
| 1,512
|
K.H.
|
75,525,721
| 4,181,335
|
Determine Planet Conjunction Times per Year using Skyfield & Scipy
|
<p>Expanding on the solution provided here:
<a href="https://stackoverflow.com/questions/43024371/determine-coordinates-at-conjunction-times/48256511#48256511">Determine coordinates at conjunction times</a>
... I coded the following to give me all the conjunctions of 5 planets (and the sun) for any given year (within the ephemeris, of course), sorted by date.</p>
<p><strong>CORRECTED TEXT:</strong></p>
<p>My question is: <em>how can the results be improved?</em> The results below are encouraging, but ...</p>
<ul>
<li><p>Normally one can expect that a yearly scan requires 13 monthly 'starting points', e.g. 1st Jan 2019 to 1st Jan 2020 inclusive. However 14 'starting points' including 1st. Feb 2020 are required (see below). Why is this?</p>
</li>
<li><p>Using monthly search 'starting points' is an arbitrary choice. It appears to work well with slowly moving celestial objects, however Mercury dances around like a yo-yo and can cause multiple conjunctions within one month. Switching to weekly 'starting points' with <code>t = ts.utc(yy, 0, range(0, 58*7, 7))</code> does not appear to help. Why is this?</p>
</li>
<li><p>Comparing with USNO data looks good. However, to pick one discrepancy in 2020: although "<em>Feb 26 02h Mercury in inferior conjunction</em>" is detected, "<em>Jan 10 15h Mercury in superior conjunction</em>" is not. Why is this?</p>
</li>
</ul>
<p>Note: The <code>scipy.optimize.brentq</code> search can go either way - forwards or backwards - so it is normal to expect that incorrect (previous/next) years have to be filtered out of the results.</p>
<p>Run the following in Python:</p>
<pre><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# find conjunctions of 5 objects within a given year
# INITIALLY RUN: pip install scipy
import sys
import scipy.optimize
from skyfield.api import load, pi, tau
ts = load.timescale()
eph = load('de421.bsp')
sun = eph['sun']
earth = eph['earth']
mercury = eph['mercury']
venus = eph['venus']
jupiter = eph['jupiter barycenter']
saturn = eph['saturn barycenter']
mars = eph['mars']
objects = [sun, mercury, venus, mars, jupiter, saturn]
object_name = ['sun', 'mercury', 'venus', 'mars', 'jupiter', 'saturn']
conjunctions_per_year = []
def f(jd,j,k):
"Compute how far away in longitude the two celestial objects are."
t = ts.tt(jd=jd)
e = earth.at(t)
lat, lon, distance = e.observe(objects[j]).ecliptic_latlon()
sl = lon.radians
lat, lon, distance = e.observe(objects[k]).ecliptic_latlon()
vl = lon.radians
relative_lon = (vl - sl + pi) % tau - pi
return relative_lon
## MAIN ##
s = input("""
Enter year as 'YYYY': """)
okay = False
if len(s) == 4:
if s.isnumeric():
yy = int(s)
if 1900 <= yy <= 2050: okay = True
if not okay:
print("ERROR: Please pick a year between 1900 and 2050")
sys.exit(0)
# Process monthly starting points spanning the chosen year
t = ts.utc(yy, range(13))
print("Found conjunctions:") # let's assume we find some
# Where in the sky were the two celestial objects on those dates?
e = earth.at(t)
for j in range(len(objects)):
lat, lon, distance = e.observe(objects[j]).ecliptic_latlon()
sl = lon.radians
for k in range(j+1, len(objects)):
lat, lon, distance = e.observe(objects[k]).ecliptic_latlon()
vl = lon.radians
# Where was object A relative to object B? Compute their difference in
# longitude, wrapping the value into the range [-pi, pi) to avoid
# the discontinuity when one or the other object reaches 360 degrees
# and flips back to 0 degrees.
relative_lon = (vl - sl + pi) % tau - pi
# Find where object B passed from being ahead of object A to being behind:
conjunctions = (relative_lon >= 0)[:-1] & (relative_lon < 0)[1:]
# For each month that included a conjunction,
# ask SciPy exactly when the conjunction occurred.
for i in conjunctions.nonzero()[0]:
t0 = t[i]
t1 = t[i + 1]
#print("Starting search at", t0.utc_jpl())
jd_conjunction = scipy.optimize.brentq(f, t[i].tt, t[i+1].tt, args=(j,k))
# append result as tuple to a list
conjunctions_per_year.append((jd_conjunction, j, k))
conjunctions_per_year.sort() # sort tuples in-place by date
for jdt, j, k in conjunctions_per_year:
tt = ts.tt(jd=jdt)
#if int(tt.utc_strftime("%Y")) != yy: continue # filter out incorrect years
print(" {:7}-{:7}: {}".format(object_name[j], object_name[k], tt.utc_jpl()))
</code></pre>
<p>The output generated for 2019 with 14 monthly 'starting points' in line 49 (Jan 1 2019 to Feb 1 2020) is:</p>
<pre><code> Enter year as 'YYYY': 2019
Found conjunctions:
mercury-jupiter: A.D. 2018-Dec-21 17:37:00.2965 UTC
sun -saturn : A.D. 2019-Jan-02 05:49:31.2431 UTC
mercury-saturn : A.D. 2019-Jan-13 13:31:05.0947 UTC
venus -jupiter: A.D. 2019-Jan-22 12:25:50.9449 UTC
venus -saturn : A.D. 2019-Feb-18 10:51:54.0668 UTC
sun -mercury: A.D. 2019-Mar-15 01:47:38.1785 UTC
mercury-mars : A.D. 2019-Jun-18 16:04:25.1024 UTC
sun -mercury: A.D. 2019-Jul-21 12:34:05.4668 UTC
venus -mars : A.D. 2019-Aug-24 17:04:32.8511 UTC
sun -mars : A.D. 2019-Sep-02 10:42:14.4417 UTC
mercury-mars : A.D. 2019-Sep-03 15:39:54.5854 UTC
mercury-venus : A.D. 2019-Sep-13 15:10:34.7771 UTC
sun -mercury: A.D. 2019-Nov-11 15:21:41.6804 UTC
venus -jupiter: A.D. 2019-Nov-24 13:33:28.2480 UTC
venus -saturn : A.D. 2019-Dec-11 10:04:50.4542 UTC
sun -jupiter: A.D. 2019-Dec-27 18:25:26.4797 UTC
</code></pre>
<p>Furthermore, with the expected 13 monthly 'starting points' (in line 49) the December 2019 conjunctions are excluded:</p>
<pre><code> Enter year as 'YYYY': 2019
Found conjunctions:
mercury-jupiter: A.D. 2018-Dec-21 17:37:00.2965 UTC
sun -saturn : A.D. 2019-Jan-02 05:49:31.2431 UTC
mercury-saturn : A.D. 2019-Jan-13 13:31:05.0947 UTC
venus -jupiter: A.D. 2019-Jan-22 12:25:50.9449 UTC
venus -saturn : A.D. 2019-Feb-18 10:51:54.0668 UTC
sun -mercury: A.D. 2019-Mar-15 01:47:38.1785 UTC
mercury-mars : A.D. 2019-Jun-18 16:04:25.1024 UTC
sun -mercury: A.D. 2019-Jul-21 12:34:05.4668 UTC
venus -mars : A.D. 2019-Aug-24 17:04:32.8511 UTC
sun -mars : A.D. 2019-Sep-02 10:42:14.4417 UTC
mercury-mars : A.D. 2019-Sep-03 15:39:54.5854 UTC
mercury-venus : A.D. 2019-Sep-13 15:10:34.7771 UTC
sun -mercury: A.D. 2019-Nov-11 15:21:41.6804 UTC
venus -jupiter: A.D. 2019-Nov-24 13:33:28.2480 UTC
</code></pre>
<p>Note: Uncomment the penultimate line to filter out incorrect years ('2018' in the example above).</p>
|
<python><scipy-optimize><skyfield>
|
2023-02-21 20:44:04
| 1
| 343
|
Aendie
|
75,525,564
| 9,544,417
|
Attaching debugger to nemo python
|
<p>Under Linux Mint 21, I am trying to debug python script embedded in <code>nemo</code> (for example, like the <code>nemo-terminal</code> package which is written in Python). I followed instruction from <a href="https://github.com/microsoft/debugpy" rel="nofollow noreferrer">debugpy doc</a> and other SO answers <a href="https://stackoverflow.com/questions/69690653/remote-debugging-with-debugpy-works-from-code-but-not-from-command-line">here</a> and <a href="https://stackoverflow.com/questions/64013251/debugging-python-in-docker-container-using-debugpy-and-vs-code-results-in-timeou">here</a>.</p>
<p>My python script is located at <code>/usr/share/nemo-python/extensions/mytest.py</code>. I copied the <code>debugpy</code> module in the same directory (so that it is accessible as an import) then at the start of my script, I added</p>
<pre><code>import debugpy
debugpy.listen(('127.0.0.1', 5678))
debugpy.wait_for_client()
</code></pre>
<p>When launching nemo from a terminal, it hangs waiting for the connection.</p>
<p>Then from VSCode, I created a <code>launch.json</code></p>
<pre><code>"version": "0.2.0",
"configurations": [
{
"name": "Python: Remote Attach",
"type": "python",
"request": "attach",
"host": "127.0.0.1",
"port": 5678,
}
]
</code></pre>
<p>When I start debugging, I get an error <code>RuntimeError: Can't listen for client connections: [Errno 98] Address already in use</code> before even reaching the first breakpoint.</p>
<p>Full trace is</p>
<pre><code> File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/phil/.vscode/extensions/ms-python.python-2023.2.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/home/phil/.vscode/extensions/ms-python.python-2023.2.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/home/phil/.vscode/extensions/ms-python.python-2023.2.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/home/phil/.vscode/extensions/ms-python.python-2023.2.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/home/phil/.vscode/extensions/ms-python.python-2023.2.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/home/phil/.vscode/extensions/ms-python.python-2023.2.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/home/phil/Personnalisation/Nemo/nemo-python extensions/mytest.py", line 23, in <module>
debugpy.listen(('127.0.0.18', 5679))
File "/home/phil/.vscode/extensions/ms-python.python-2023.2.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/public_api.py", line 31, in wrapper
return wrapped(*args, **kwargs)
File "/home/phil/.vscode/extensions/ms-python.python-2023.2.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/api.py", line 143, in debug
log.reraise_exception("{0}() failed:", func.__name__, level="info")
File "/home/phil/.vscode/extensions/ms-python.python-2023.2.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/api.py", line 141, in debug
return func(address, settrace_kwargs, **kwargs)
File "/home/phil/.vscode/extensions/ms-python.python-2023.2.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/api.py", line 262, in listen
raise RuntimeError(str(endpoints["error"]))
RuntimeError: Can't listen for client connections: [Errno 98] Adresse déjà utilisée
</code></pre>
<p>How should I run a debugger for nemo's python extensions?</p>
|
<python><debugging><vscode-debugger><linux-mint>
|
2023-02-21 20:23:59
| 1
| 448
|
sayanel
|
75,525,375
| 10,007,302
|
Openpyxl causing Excel to repair and remove datavalidation when I try and add DV values to a cell using string formatting, but not if I specify values
|
<p>I have a dataframe in pandas that looks like the below with 3 columns. Possible Matches will have one name if the Match Score is 100 and multiple names if it's not. I'm trying to output this file to excel using openpyxl.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>Tracker_Name</th>
<th>Possible Matches</th>
<th>Match Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Alberta Investment Management Corporation</td>
<td>alberta investment management corporation</td>
<td>100</td>
</tr>
<tr>
<td>2</td>
<td>Acharya Capital</td>
<td>karya capital, hara capital, ara capital, a capital, Create New Database Entry</td>
<td>63.66124359</td>
</tr>
</tbody>
</table>
</div>
<p>I am trying to figure out how to make the possible matches output to Excel as data validation. My code is below; essentially, I'm attempting to check if the match score is less than 100, and if it's not, it outputs the pandas dataframe row to Excel as is.</p>
<p>If it's less than 100, then I'd like the second column to add a data validation where the list of possible values is limited to the names in the Possible Matches cell of my pandas dataframe.</p>
<pre><code> workbook = load_workbook(filename='output_test.xlsx.xlsm')
# Select a worksheet
worksheet = workbook['Sheet1']
for r, row in enumerate(dataframe_to_rows(df_excel, index=False, header=False), start=1):
if row[2] == 100:
for c, value in enumerate(row, start=1):
worksheet.cell(row=r, column=c, value=value)
else:
for c, value in enumerate(row, start=1):
if c == 2:
dv = DataValidation(type="list", formula1="{}".format(value))
worksheet.add_data_validation(dv)
dv.add(worksheet.cell(row=r, column=c))
else:
worksheet.cell(row=r, column=c, value=value)
# Save the changes to the Excel workbook
workbook.save('output_text.xlsx')
</code></pre>
<p>No matter what I do, every time I run this code and go to open the workbook, Excel will give me the following error:</p>
<p><a href="https://i.sstatic.net/42vZM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/42vZM.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/jICFy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jICFy.png" alt="enter image description here" /></a></p>
<p>However, if I change that line to use hard-coded values like the below, it works without any issues.</p>
<pre><code>dv = DataValidation(type="list", formula1='"dog, cat, mouse"')
</code></pre>
|
<python><excel><pandas><openpyxl>
|
2023-02-21 20:03:20
| 1
| 1,281
|
novawaly
|
75,525,363
| 12,131,472
|
parse a quite nested Json file with Pandas/Python, the json thing is now in one column of a dataframe
|
<p>I have retrieved the data I need in one dataframe; one of the columns has this list of dicts:</p>
<pre><code>[{'date': '2023-02-03T00:00:00', 'groups': [{'periodType': 'm',
'projections': [{'identifier': 'TD3BALMO', 'period': 'Feb 23', 'value': 54.621, 'validFrom': '2023-02-01', 'validTo': '2023-02-28', 'nextRolloverDate': '2023-02-28', 'archiveDate': '2023-02-03'},
{'identifier': 'TD3CURMON', 'period': 'Feb 23', 'value': 53.855, 'validFrom': '2023-02-01', 'validTo': '2023-02-28', 'nextRolloverDate': '2023-02-28', 'archiveDate': '2023-02-03'},
{'identifier': 'TD3+1_M', 'period': 'Mar 23', 'value': 55.387, 'validFrom': '2023-03-01', 'validTo': '2023-03-31', 'nextRolloverDate': '2023-02-28', 'archiveDate': '2023-02-03'},
{'identifier': 'TD3+2_M', 'period': 'Apr 23', 'value': 55.174, 'validFrom': '2023-04-01', 'validTo': '2023-04-28', 'nextRolloverDate': '2023-02-28', 'archiveDate': '2023-02-03'},
{'identifier': 'TD3+3_M', 'period': 'May 23', 'value': 55.748, 'validFrom': '2023-05-01', 'validTo': '2023-05-31', 'nextRolloverDate': '2023-02-28', 'archiveDate': '2023-02-03'},
{'identifier': 'TD3+4_M', 'period': 'Jun 23', 'value': 55.608, 'validFrom': '2023-06-01', 'validTo': '2023-06-30', 'nextRolloverDate': '2023-02-28', 'archiveDate': '2023-02-03'},
{'identifier': 'TD3+5_M', 'period': 'Jul 23', 'value': 52.548, 'validFrom': '2023-07-01', 'validTo': '2023-07-31', 'nextRolloverDate': '2023-02-28', 'archiveDate': '2023-02-03'}]},
{'periodType': 'q',
'projections': [{'identifier': 'TD3CURQ', 'period': 'Q1 23', 'value': 52.638, 'validFrom': '2023-01-01', 'validTo': '2023-03-31', 'nextRolloverDate': '2023-03-31', 'archiveDate': '2023-02-03'},
{'identifier': 'TD3+1Q', 'period': 'Q2 23', 'value': 55.51, 'validFrom': '2023-04-01', 'validTo': '2023-06-30', 'nextRolloverDate': '2023-03-31', 'archiveDate': '2023-02-03'},
{'identifier': 'TD3+2Q', 'period': 'Q3 23', 'value': 51.729, 'validFrom': '2023-07-01', 'validTo': '2023-09-29', 'nextRolloverDate': '2023-03-31', 'archiveDate': '2023-02-03'},
{'identifier': 'TD3+3Q', 'period': 'Q4 23', 'value': 62.63, 'validFrom': '2023-10-01', 'validTo': '2023-12-22', 'nextRolloverDate': '2023-03-31', 'archiveDate': '2023-02-03'}]
}
]
}]
</code></pre>
<p>What's the easiest way to convert it to the below? (Sorry, the numbers are not the same, but you get the idea.) I tried <code>json_normalize</code> but haven't found an efficient way to produce it; a rough sketch of what I've been trying is shown after the expected output. In fact I only need data from the first 3 columns: identifier, period, value.</p>
<pre><code>identifier period value ... validTo nextRolloverDate archiveDate
0 TD3BALMO Feb 23 68.464 ... 2023-02-28 2023-02-28 2023-02-21
1 TD3CURMON Feb 23 60.955 ... 2023-02-28 2023-02-28 2023-02-21
2 TD3+1_M Mar 23 67.128 ... 2023-03-31 2023-02-28 2023-02-21
3 TD3+2_M Apr 23 63.499 ... 2023-04-28 2023-02-28 2023-02-21
4 TD3+3_M May 23 59.734 ... 2023-05-31 2023-02-28 2023-02-21
</code></pre>
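<p>For reference, the kind of <code>json_normalize</code> call I have been experimenting with looks roughly like this (untested sketch, where <code>data</code> stands for the list shown above):</p>
<pre><code>import pandas as pd

# data = the list of dicts shown above (pulled out of the dataframe column)
df = pd.json_normalize(
    data,
    record_path=["groups", "projections"],    # descend into each group's projections
    meta=["date", ["groups", "periodType"]],  # keep the date and period type alongside
)
df = df[["identifier", "period", "value"]]    # the only three columns I really need
print(df.head())
</code></pre>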
|
<python><json><pandas><json-normalize>
|
2023-02-21 20:02:12
| 1
| 447
|
neutralname
|
75,525,305
| 5,924,264
|
AttributeError: 'module' object has no attribute while using absolute imports
|
<pre><code>
[2023-02-21T18:51:30.894Z] _ ERROR collecting path/to/file/tests/test.py _
[2023-02-21T18:51:30.894Z] path/to/file/tests/test.py:12: in <module>
[2023-02-21T18:51:30.894Z] import path.to.file.tests.utils as utils
[2023-02-21T18:51:30.894Z] path/to/file/tests/utils.py:9: in <module>
[2023-02-21T18:51:30.894Z] from path.to.file.full import run
[2023-02-21T18:51:30.894Z] path/to/file/run.py:39: in <module>
[2023-02-21T18:51:30.894Z] import path.to.file.compute
[2023-02-21T18:51:30.894Z] path/to/file/compute.py:33: in <module>
[2023-02-21T18:51:30.894Z] import path.to.file.sim as sim
[2023-02-21T18:51:30.894Z] path/to/file/sim.py:13: in <module>
[2023-02-21T18:51:30.894Z] import path.to.file.compute as compute
[2023-02-21T18:51:30.894Z] E AttributeError: 'module' object has no attribute 'compute'
</code></pre>
<p>I'm running into this error when running a unit test in my work's codebase. I can't easily run this unit test locally, so I'm having a hard time debugging it. I believe this error is typical of circular dependencies. There is a circular dependency right now that we plan to remove in the near future; however, I am using absolute imports here, so I'm wondering why the circular import still fails.</p>
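<p>For illustration, the import pattern boils down to something like this (module names simplified to match the traceback; this is not the real code):</p>
<pre><code># path/to/file/compute.py
import path.to.file.sim as sim          # starts importing sim while compute is still initialising

def run_computation():
    return sim.simulate()


# path/to/file/sim.py
import path.to.file.compute as compute  # compute is only partially imported at this point, so the
                                        # 'compute' attribute is not yet set on the parent package:
                                        # AttributeError: 'module' object has no attribute 'compute'

def simulate():
    return compute.run_computation()
</code></pre>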
<p>I checked some other unit tests that run locally and also import the circularly dependent modules, and those didn't produce the error.</p>
|
<python><circular-dependency>
|
2023-02-21 19:56:54
| 0
| 2,502
|
roulette01
|
75,525,293
| 8,406,122
|
Seeing the progress of my code in real time in jupyter
|
<p>I have this code which I am running on jupyter notebook</p>
<pre><code>with open('tracker.txt', 'w+') as p:
for i in range(1,100000000):
p.write("\nValue is: "+str(i) )
</code></pre>
<p>While this code is running, when I open the <code>tracker.txt</code> file it shows up blank, and the results only appear after the code has finished executing completely. But I want to see the results being written to the file in real time so that I can track the progress of the code. I am not able to work out how to achieve that. Any help will be great.</p>
|
<python><jupyter>
|
2023-02-21 19:55:45
| 1
| 377
|
Turing101
|
75,525,269
| 1,136,015
|
When kafka consumer connected to a broker by domain name, if the ip changed after broker crash, will the kafka consumer reconnect?
|
<p>My Kafka consumer is connected to a cluster of brokers. There is a domain name server in between, and each broker has a domain name associated with an IP address.
The problem is that the IPs are not static, and for one reason or another I have to restart the broker.
The consumers are configured to reconnect.
My question is: when the consumer tries to reconnect, will it resolve the domain name again to the new IP address, or will it use the previously resolved IP address?</p>
|
<python><kafka-python>
|
2023-02-21 19:53:07
| 2
| 927
|
sovon
|
75,525,102
| 15,923,186
|
SerDe problem with django rest framework and foreign key
|
<p><strong>Context:</strong></p>
<p>I have written a simple Django REST framework app as the BE and a React JS app as the FE. The db is SQLite and it's gonna stay that way, since there are only 2 simple models and the number of users will be quite limited as well.
For the sake of the example let's assume there is only one team currently, with <code>name="First"</code> and <code>id=1</code>.</p>
<p>Requirements:</p>
<ol>
<li>display in a table the list of players with team as a name, not its id.</li>
<li>add players from form</li>
</ol>
<p>Code:</p>
<p><strong>models.py</strong></p>
<pre class="lang-py prettyprint-override"><code>class Player(models.Model):
first_name = models.Charfield(max_length=50)
last_name = models.Charfield(max_length=50)
team = models.ForeignKey(Team, on_delete=models.SET_NULL, blank=True, null=True)
class Team(models.Model):
name = models.Charfield(max_length=50)
def __str__(self):
return f"{self.name}"
</code></pre>
<p><strong>views.py</strong></p>
<pre class="lang-py prettyprint-override"><code>class PlayersView(APIView):
def post(self, request):
serializer = PlayerSerializer(data=request.data)
if serializer.is_valid():
serializer.save()
            return Response(serializer.data, status=status.HTTP_201_CREATED)
else:
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>In order to meet the 1st requirement I've implemented the serializer like this:</p>
<p><strong>serializers.py</strong></p>
<pre class="lang-py prettyprint-override"><code>class PlayerSerializer(serializers.ModelSerializer):
team = serializers.CharField(source="team.name", read_only=True)
class Meta:
model = Player
fields = "__all__"
</code></pre>
<p>That worked fine, but I wasn't able to add the players to the database when processing the request from the FE.</p>
<p>The body of that POST request is:</p>
<p><strong>body</strong></p>
<pre><code>{
"first_name": "John",
"last_name": "Doe",
"team": 1
}
</code></pre>
<p>Looking on SO I found:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/17280007/retrieving-a-foreign-key-value-with-django-rest-framework-serializers">Retrieving a Foreign Key value with django-rest-framework serializers</a></li>
<li><a href="https://stackoverflow.com/questions/68743630/how-to-serialize-the-foreign-key-field-in-django-rest-framework">How to serialize the foreign key field in django rest framework</a></li>
<li><a href="https://stackoverflow.com/questions/53263981/drf-serializer-save-not-saving-to-database">DRF serializer.save() not saving to database</a></li>
</ul>
<p>I tried sending the request body as (the FE has full info on team, both name and id, that part is working fine):</p>
<p><strong>body</strong></p>
<pre><code>{
"first_name": "John",
"last_name": "Doe",
"team": "First"
}
</code></pre>
<p>But that was a dead end.</p>
<p>I played with the serializer and tried:
<strong>serializers.py</strong></p>
<pre class="lang-py prettyprint-override"><code>class PlayerSerializer(serializers.ModelSerializer):
team = serializers.PrimaryKeyRelatedField(source="team.name", queryset=Team.objects.all())
def create(self, validated_data):
validated_data["team"] = validated_data["team"]["name"] # Since the Player initializer expected team to be an instance of Team and not a str/name neither an int/id
return Player(**validated_data)
def to_representation(self, instance):
r ={
"id": instance.id,
"first_name": instance.first_name,
"last_name": instance.last_namem
"team": instance.team.name if instance.team else ""
}
return r
class Meta:
model = Player
fields = "__all__"
</code></pre>
<p>The above does not raise any exceptions, but it doesn't save the new entry into the db either. When I logged the <code>serializer.data</code> after validation, the <code>id</code> of the new item was <code>None</code>.
I'd appreciate it if somebody could either tell me where the mistake is and/or point me towards a good solution to the problem.</p>
|
<python><python-3.x><django><django-rest-framework><django-serializer>
|
2023-02-21 19:35:19
| 2
| 1,245
|
Gameplay
|
75,525,016
| 5,698,673
|
How to pytest monkeypatch a mixin class in python 3.x
|
<p>I need to monkeypatch a mixin class that is inherited by other classes, using pytest on Python 3.x. I have this example:</p>
<pre><code>class A:
@classmethod
def foo(cls) -> str:
# do some database operation
return "foo"
class B(A):
pass
</code></pre>
<p>I initially thought of monkeypatching the method directly by doing something like this:</p>
<pre><code>def test_B(monkeypatch):
def mock_b__foo() -> str:
return 'bar'
monkeypatch.setattr('B.foo', mock_b__foo)
</code></pre>
<p>Then I tried this:</p>
<pre><code>def test_B(monkeypatch):
class MockA:
@classmethod
def foo(cls) -> str:
# do some database operation
return "bar"
monkeypatch.setattr('A', MockA)
</code></pre>
<p>This does not work as intended: <code>B.foo</code> still comes from <code>A</code> instead of <code>MockA</code>, I guess because the import is only evaluated once. Is there any way of achieving this?</p>
|
<python><python-3.x><pytest>
|
2023-02-21 19:24:35
| 1
| 1,073
|
Moad Ennagi
|
75,524,792
| 4,382,391
|
Error when importing flask_socket-io : AttributeError: module 'collections' has no attribute 'MutableMapping'
|
<p>With a python file that is just:</p>
<pre><code>from flask_socketio import SocketIO
</code></pre>
<p>I get the error:</p>
<pre><code>> python .\app.py
Traceback (most recent call last):
File "C:\{...}\server\app.py", line 127, in <module>
from flask_socketio import SocketIO
File "C:\{...}\Python310\lib\site-packages\flask_socketio\__init__.py", line 9, in <module>
from socketio import socketio_manage # noqa: F401
File "C:\{...}\Python310\lib\site-packages\socketio\__init__.py", line 9, in <module>
from .zmq_manager import ZmqManager
File "C:\{...}\Python310\lib\site-packages\socketio\zmq_manager.py", line 5, in <module>
import eventlet.green.zmq as zmq
File "C:\{...}\Python310\lib\site-packages\eventlet\__init__.py", line 17, in <module>
from eventlet import convenience
File "C:\{...}\Python310\lib\site-packages\eventlet\convenience.py", line 7, in <module>
from eventlet.green import socket
File "C:\{...}\Python310\lib\site-packages\eventlet\green\socket.py", line 21, in <module>
from eventlet.support import greendns
File "C:\{...}\Python310\lib\site-packages\eventlet\support\greendns.py", line 79, in <module>
setattr(dns, pkg, import_patched('dns.' + pkg))
File "C:\{...}\Python310\lib\site-packages\eventlet\support\greendns.py", line 61, in import_patched
return patcher.import_patched(module_name, **modules)
File "C:\{...}\Python310\lib\site-packages\eventlet\patcher.py", line 132, in import_patched
return inject(
File "C:\{...}\Python310\lib\site-packages\eventlet\patcher.py", line 109, in inject
module = __import__(module_name, {}, {}, module_name.split('.')[:-1])
File "C:\{...}\Python310\lib\site-packages\dns\namedict.py", line 35, in <module>
class NameDict(collections.MutableMapping):
AttributeError: module 'collections' has no attribute 'MutableMapping'
</code></pre>
<p>I saw in other threads that <code>collections.MutableMapping</code> has moved to <code>collections.abc.MutableMapping</code>, and the old alias was removed in Python >= 3.10. I have tried the following solutions to no avail:</p>
<ol>
<li>Upgrade dependent packages:</li>
</ol>
<pre><code>pip install --upgrade pip
pip install --upgrade wheel
pip install --upgrade setuptools
pip install --upgrade requests
</code></pre>
<ol start="2">
<li>Reinstall flask-socketio</li>
</ol>
<pre><code>pip install flask-socketio
</code></pre>
<p>I tried to reinstall the eventlet dependency with upgrade:</p>
<pre><code>pip install --upgrade eventlet
Requirement already satisfied: eventlet in c:\...\python\python310\lib\site-packages (0.33.3)
Requirement already satisfied: greenlet>=0.3 in c:\...\python\python310\lib\site-packages (from eventlet) (2.0.2)
Requirement already satisfied: dnspython>=1.15.0 in c:\...\python\python310\lib\site-packages (from eventlet) (1.16.0)
Requirement already satisfied: six>=1.10.0 in c:\...\python\python310\lib\site-packages (from eventlet) (1.16.0)
</code></pre>
<p>None of these options work. How should I fix this error?</p>
|
<python><flask-socketio><eventlet>
|
2023-02-21 18:57:21
| 1
| 1,070
|
Null Salad
|
75,524,716
| 7,800,760
|
How to store Stanza Span in MongoDB collection?
|
<p>I am trying to add a list of dictionaries (whose name is <code>stanzanerlist</code>) like the following:</p>
<pre class="lang-python prettyprint-override"><code>stanzanerlist = [{
"text": "Harry Potter",
"type": "PER",
"start_char": 141,
"end_char": 153
}, {
"text": "Hogwarts",
"type": "LOC",
"start_char": 405,
"end_char": 413
}, {
"text": "JK Rowling",
"type": "PER",
"start_char": 505,
"end_char": 515
}]
</code></pre>
<p>as a field in a MongoDB document in a collection.</p>
<p>I am inserting the whole document as follows with <code>stanzanerlist</code> as the last item in <code>mongodocument</code>:</p>
<pre class="lang-python prettyprint-override"><code>mongodocument = {
"_id": urlid,
"source": sourcename,
"stanzadoc": stanzadoc.to_serialized(),
"stanzaver": stanzaver,
# "timestamp": datetime.now(tzinfo),
"timestamp": datetime.now(
tz=pytz.timezone(cfgdata["timezone"]["name"])
),
"stanzanerlist": stanzanerlist,
}
try:
mdbrc = mdbcoll.insert_one(
mongodocument
) # insert fails if URL/_ID already exists
return mdbrc
except pymongo.errors.DuplicateKeyError:
# manage the record update
print(f"Article {urlid} already exists!")
</code></pre>
<p>but while all other fields work well, the addition of <code>stanzanerlist</code> gives the following error:</p>
<pre class="lang-python prettyprint-override"><code>cannot encode object: {
"text": "Harry Potter",
"type": "PER",
"start_char": 141,
"end_char": 153
}, of type: <class 'stanza.models.common.doc.Span'>
</code></pre>
<p>and I'm not able to work out whether and how I could achieve that addition.</p>
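<p>One direction I am considering (untested, and I am not sure it is the idiomatic way) is converting each Span into a plain dict before building the document, using only the attributes that are visible in the printed output above:</p>
<pre class="lang-python prettyprint-override"><code># Build BSON-friendly dicts from the stanza Span objects.
serializable_ners = [
    {
        "text": span.text,
        "type": span.type,
        "start_char": span.start_char,
        "end_char": span.end_char,
    }
    for span in stanzanerlist
]

mongodocument["stanzanerlist"] = serializable_ners
</code></pre>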
|
<python><mongodb><pymongo><stanford-nlp>
|
2023-02-21 18:49:25
| 1
| 1,231
|
Robert Alexander
|
75,524,656
| 4,837,637
|
Flask Rest Api error SQLAlchemyAutoSchema
|
<p>I have deployed my Flask REST API app on Cloud Run, using this Dockerfile:</p>
<pre><code># Python image to use.
FROM python:3.10-alpine
# Set the working directory to /app
WORKDIR /app
# copy the requirements file used for dependencies
#COPY requirements.txt .
# Copy the rest of the working directory contents into the container at /app
COPY . .
#RUN export PYTHONPATH=/usr/bin/python
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
#RUN pip install -r requirements.txt
# Run app.py when the container launches
#ENTRYPOINT ["python", "app.py"]
CMD gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 app:app
</code></pre>
<p>When I try to call one of the app's REST endpoints, I receive this error in my log:</p>
<pre><code>"Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
worker.init_process()
File "/usr/local/lib/python3.10/site-packages/gunicorn/workers/gthread.py", line 92, in init_process
super().init_process()
File "/usr/local/lib/python3.10/site-packages/gunicorn/workers/base.py", line 134, in init_process
self.load_wsgi()
File "/usr/local/lib/python3.10/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/local/lib/python3.10/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/local/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
return self.load_wsgiapp()
File "/usr/local/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/local/lib/python3.10/site-packages/gunicorn/util.py", line 359, in import_app
mod = importlib.import_module(module)
File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/app/app.py", line 12, in <module>
from resources.user import UserRegister, UserLogin, User, TokenRefresh, UserLogout
File "/app/resources/user.py", line 13, in <module>
from schemas.user import UserSchema
File "/app/schemas/user.py", line 7, in <module>
class UserSchema(ma.SQLAlchemyAutoSchema):
File "/usr/local/lib/python3.10/site-packages/marshmallow/schema.py", line 121, in __new__
klass._declared_fields = mcs.get_declared_fields(
File "/usr/local/lib/python3.10/site-packages/marshmallow_sqlalchemy/schema.py", line 91, in get_declared_fields
fields.update(mcs.get_declared_sqla_fields(fields, converter, opts, dict_cls))
File "/usr/local/lib/python3.10/site-packages/marshmallow_sqlalchemy/schema.py", line 130, in get_declared_sqla_fields
converter.fields_for_model(
File "/usr/local/lib/python3.10/site-packages/marshmallow_sqlalchemy/convert.py", line 154, in fields_for_model
field = base_fields.get(key) or self.property2field(prop)
File "/usr/local/lib/python3.10/site-packages/marshmallow_sqlalchemy/convert.py", line 193, in property2field
field_class = field_class or self._get_field_class_for_property(prop)
File "/usr/local/lib/python3.10/site-packages/marshmallow_sqlalchemy/convert.py", line 275, in _get_field_class_for_property
column = _base_column(prop.columns[0])
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 1329, in __getattr__
return self._fallback_getattr(key)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 1298, in _fallback_getattr
raise AttributeError(key)
AttributeError: columns"
</code></pre>
<p>This is my app.py file:</p>
<pre><code>import os
from flask import Flask, jsonify
from flask_restful import Api
from flask_jwt_extended import JWTManager
from marshmallow import ValidationError
from datetime import timedelta
from db import db, getconn
from ma import ma
from blocklist import BLOCKLIST
from resources.user import UserRegister, UserLogin, User, TokenRefresh, UserLogout
from resources.confirmation import Confirmation, ConfirmationByUser
app = Flask(__name__)
### Cloud Sql Google Parameter ###
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql+pg8000://"
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {
"creator" : getconn,
"max_overflow": 2,
"pool_timeout": 30,
"pool_size": 5,
"pool_recycle": 1800
}
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
#app.config["SQLALCHEMY_DATABASE_URI"] = os.environ.get("DATABASE_URL")
#app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
app.config["PROPAGATE_EXCEPTIONS"] = True
#app.secret_key = os.environ.get(
# "APP_SECRET_KEY"
#) # could do app.config['JWT_SECRET_KEY'] if we prefer
### Flask JWT Configuration Key ###
app.config["JWT_SECRET_KEY"] = os.environ["JWT_KEY"]
app.config["JWT_ACCESS_TOKEN_EXPIRES"] = timedelta(hours=1)
db.init_app(app)
ma.init_app(app)
api = Api(app)
@app.errorhandler(ValidationError)
def handle_marshmallow_validation(err):
return jsonify(err.messages), 400
jwt = JWTManager(app)
# This method will check if a token is blocklisted, and will be called automatically when blocklist is enabled
@jwt.token_in_blocklist_loader
def check_if_token_in_blocklist(jwt_header, jwt_payload):
return jwt_payload["jti"] in BLOCKLIST
@app.route("/api")
def user_route():
return "Welcome user API !"
api.add_resource(UserRegister, "/register")
api.add_resource(User, "/user/<int:user_id>")
api.add_resource(UserLogin, "/login")
api.add_resource(TokenRefresh, "/refresh")
api.add_resource(UserLogout, "/logout")
api.add_resource(Confirmation, "/user_confirm/<string:confirmation_id>")
api.add_resource(ConfirmationByUser, "/confirmation/user/<int:user_id>")
if __name__ == "__main__":
#app.run(port=5000, debug=True)
server_port = os.environ.get('PORT', '8082')
app.run(debug=True, port=server_port, host='0.0.0.0')
</code></pre>
<p>This is the requirements.txt file:</p>
<pre><code>Flask==2.2.2
#Flask==2.1.3
requests==2.28.1
flask-sqlalchemy
ptvsd==4.3.2 # Required for debugging.
gunicorn
flask-smorest
python-dotenv
marshmallow
cloud-sql-python-connector
pg8000
flask-jwt-extended
passlib
datetime
uuid
requests
flask_restful
flask_marshmallow
marshmallow-sqlalchemy
Flask-SQLAlchemy
</code></pre>
<p>This is my user.py schema:</p>
<pre><code>from marshmallow import pre_dump
from ma import ma
from models.user import UserModel
class UserSchema(ma.SQLAlchemyAutoSchema):
class Meta:
model = UserModel
load_instance = True
load_only = ("password",)
dump_only = ("id", "confirmation")
@pre_dump
def _pre_dump(self, user: UserModel, **kwargs):
user.confirmation = [user.most_recent_confirmation]
return user
</code></pre>
<p>And this is my user model:</p>
<pre><code>from requests import Response
from flask import request, url_for
from db import db
from libs.mailgun import Mailgun
from models.confirmation import ConfirmationModel
class UserModel(db.Model):
__tablename__ = "users"
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(80), nullable=False, unique=True)
password = db.Column(db.String(80), nullable=False)
email = db.Column(db.String(80), nullable=False, unique=True)
created = db.Column(db.String(80), nullable=False, unique=True)
uuid_user = db.Column(db.String(200), nullable=False, unique=True)
confirmation = db.relationship(
"ConfirmationModel", lazy="dynamic", cascade="all, delete-orphan"
)
@property
def most_recent_confirmation(self) -> "ConfirmationModel":
# ordered by expiration time (in descending order)
return self.confirmation.order_by(db.desc(ConfirmationModel.expire_at)).first()
@classmethod
def find_by_username(cls, username: str) -> "UserModel":
return cls.query.filter_by(username=username).first()
@classmethod
def find_by_email(cls, email: str) -> "UserModel":
return cls.query.filter_by(email=email).first()
@classmethod
def find_by_id(cls, _id: int) -> "UserModel":
return cls.query.filter_by(id=_id).first()
def send_confirmation_email(self) -> Response:
subject = "Registration Confirmation"
link = request.url_root[:-1] + url_for(
"confirmation", confirmation_id=self.most_recent_confirmation.id
)
text = f"Please click the link to confirm your registration: {link}"
html = f"<html>Please click the link to confirm your registration: <a href={link}>link</a></html>"
return Mailgun.send_email([self.email], subject, text, html)
def save_to_db(self) -> None:
db.session.add(self)
db.session.commit()
def delete_from_db(self) -> None:
db.session.delete(self)
db.session.commit()
</code></pre>
<p>Previously the app ran on Python 3.7 and worked fine, but the requirements.txt was different because it didn't need the modules for GCP and the Cloud SQL connection.
Any idea what the cause is and how to fix this error?
Thanks</p>
|
<python><sqlalchemy><marshmallow><marshmallow-sqlalchemy>
|
2023-02-21 18:42:43
| 1
| 415
|
dev_
|
75,524,624
| 7,544,279
|
How to solve decimal.InvalidOperation: [<class 'decimal.InvalidOperation'>] occurred due to large input number?
|
<p>I am having an issue with handling large integers. I am trying to implement a threshold secret sharing scheme by following the article in the <a href="https://www.geeksforgeeks.org/implementing-shamirs-secret-sharing-scheme-in-python/" rel="nofollow noreferrer">link</a>. I have modified the driver code to the following so that the secret key is a really large number instead of some passphrase.</p>
<pre><code>import random

# generate_shares() and reconstruct_secret() are taken unchanged from the linked article
t, n = 3, 5
secret = 5238440126074724526414965738603764307730944402353374454036086258591065307597
print(f'Original Secret: {secret}')
shares = generate_shares(n, t, secret)
print(f'Shares: {", ".join(str(share) for share in shares)}')
pool = random.sample(shares, t)
print(f'Combining shares: {", ".join(str(share) for share in pool)}')
print(f'Reconstructed secret: {reconstruct_secret(pool) }')
</code></pre>
<p>I am getting the following error due to the line <code>return int(round(Decimal(sums), 0))</code> in the <code>reconstruct_secret</code> function:</p>
<pre><code>decimal.InvalidOperation: [<class 'decimal.InvalidOperation'>]
</code></pre>
<p>But it works just fine when I shorten my secret to <strong>5238440126074724526414965</strong>. I have no idea how to modify the <strong>reconstruct_secret</strong> function to work with a secret of this size. Any suggestions would be a great help.</p>
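<p>One experiment I thought of (untested, and I am not sure it addresses the root cause) is raising the <code>Decimal</code> context precision before that conversion, since the secret above has about 76 digits while the default precision is 28 significant digits:</p>
<pre><code>from decimal import Decimal, getcontext

getcontext().prec = 200  # well above the ~76 digits of the secret; the default is 28
# 'sums' is the intermediate value computed inside reconstruct_secret in the linked article
secret_int = int(round(Decimal(sums), 0))
</code></pre>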
|
<python><cryptography><integer><secret-key><largenumber>
|
2023-02-21 18:39:40
| 1
| 495
|
Purushotam Sangroula
|
75,524,171
| 550,705
|
Python >= 3.8 container on GCP (Vertex AI Workbench)
|
<p>Problem: Python 3.7 is outdated, yet the prebuilt containers for Vertex AI Workbench are all built with 3.7.</p>
<p>Is there a way to modify (i.e. rebuild) the base containers that work with Vertex AI Workbench, perhaps by setting an ARG to rebuild a working base container with Python >= 3.8?</p>
<p>Alternatively, does any code exist to demonstrate building an image compatible with Workbench?</p>
|
<python><google-cloud-platform><jupyter-lab><google-cloud-vertex-ai><google-dl-platform>
|
2023-02-21 17:53:28
| 2
| 763
|
Brian Bien
|
75,524,091
| 5,695,336
|
Catch error inside Firestore on_snapshot from the outer scope
|
<p>I'm writing a Python program that, when it catches any error, resets everything and restarts itself.</p>
<p>It goes like this</p>
<pre><code>async def main_loop():
while True:
try:
await main()
except:
stop_everything()
reset_everything()
await asyncio.sleep(60)
asyncio.run(main_loop())
</code></pre>
<p>Part of the main program is to watch a Firestore collection.</p>
<pre><code>def collection_changed(docs, changes, time):
# Error can possibly happen here.
raise RuntimeError("Something wrong.")
async def main():
col_ref.on_snapshot(collection_changed)
await some_forever_task()
</code></pre>
<p>The error in <code>collection_changed</code> will not be caught by the <code>try-except</code> block, because <code>on_snapshot</code> runs in the background, kind of like <code>asyncio.create_task</code>.</p>
<p>But in the case of <code>asyncio.create_task</code>, I can do <code>task = asyncio.create_task(...)</code> and then <code>await task</code>. This way, an error in the task will be caught.</p>
<p>I tried <code>watch = col_ref.on_snapshot(...)</code>, but I can't <code>await watch</code>.</p>
<p>So how can I catch an error that happens inside <code>on_snapshot</code> from the outer scope?</p>
|
<python><firebase><google-cloud-firestore><python-asyncio><try-except>
|
2023-02-21 17:44:53
| 1
| 2,017
|
Jeffrey Chen
|
75,524,034
| 2,366,887
|
Trying to run a python program but I keep getting file is not a package error
|
<p>I've got a Python program "LoginServiceBase.py" defined inside a directory called 'LoginService'. I've created an <code>__init__.py</code> file that looks like this:</p>
<pre><code> # Import views and other necessary modules
from .views import LoginView, SignupView
from .LoginServiceBase import settings
# Define package-level variables
APP_NAME = 'LoginServiceBase'
</code></pre>
<p>My LoginServiceBase.py uses Django. I've set up a class inside LoginServiceBase for my Django settings.</p>
<p>When I attempt to run python -m LoginServiceBase, I get the following error:</p>
<p>ModuleNotFoundError: No module named 'LoginServiceModule.settings'; 'LoginServiceModule' is not a package</p>
<p>The LoginView and SignupView classes are defined in the same directory and are contained in a file called 'views.py'.</p>
<p>Where is the name 'LoginServiceModule' coming from? I haven't defined it anywhere. Is this a required naming convention for Python packages?</p>
<p>Any help appreciated.</p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/bbrelin/src/repos/mednotes/src/Microservices/LoginService/LoginServiceBase.py", line 4, in <module>
application = get_wsgi_application()
File "/home/bbrelin/src/repos/mednotes/.venv/lib/python3.10/site-packages/django/core/wsgi.py", line 12, in get_wsgi_application
django.setup(set_prefix=False)
File "/home/bbrelin/src/repos/mednotes/.venv/lib/python3.10/site-packages/django/__init__.py", line 19, in setup
configure_logging(settings.LOGGING_CONFIG, settings.LOGGING)
File "/home/bbrelin/src/repos/mednotes/.venv/lib/python3.10/site-packages/django/conf/__init__.py", line 92, in __getattr__
self._setup(name)
File "/home/bbrelin/src/repos/mednotes/.venv/lib/python3.10/site-packages/django/conf/__init__.py", line 79, in _setup
self._wrapped = Settings(settings_module)
</code></pre>
|
<python><django>
|
2023-02-21 17:38:16
| 1
| 523
|
redmage123
|
75,524,026
| 1,050,187
|
Tensorflow 2.11 error: AttributeError: module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'register_load_context_function'
|
<p>I had to update TensorFlow to the currently latest version, 2.11. When importing it I get "AttributeError: module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'register_load_context_function'". I have also completely reinstalled a fresh Anaconda environment, downgraded Python to the version compatible with the latest TensorFlow, and then ran "pip3 install tensorflow==2.11". I got the same error. I have no other ideas.</p>
<p>The full error log is the following</p>
<pre><code>import tensorflow as tf
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_432\3752927832.py in <module>
----> 1 import tensorflow as tf
~\AppData\Roaming\Python\Python310\site-packages\tensorflow\__init__.py in <module>
467 if hasattr(_current_module, "keras"):
468 try:
--> 469 _keras._load()
470 except ImportError:
471 pass
~\AppData\Roaming\Python\Python310\site-packages\tensorflow\python\util\lazy_loader.py in _load(self)
39 """Load the module and insert it into the parent's globals."""
40 # Import the target module and insert it into the parent's namespace
---> 41 module = importlib.import_module(self.__name__)
42 self._parent_module_globals[self._local_name] = module
43
~\anaconda3\envs\mltrade2\lib\importlib\__init__.py in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
127
128
~\anaconda3\envs\mltrade2\lib\site-packages\keras\__init__.py in <module>
19 """
20 from keras import distribute
---> 21 from keras import models
22 from keras.engine.input_layer import Input
23 from keras.engine.sequential import Sequential
~\anaconda3\envs\mltrade2\lib\site-packages\keras\models\__init__.py in <module>
16
17
---> 18 from keras.engine.functional import Functional
19 from keras.engine.sequential import Sequential
20 from keras.engine.training import Model
~\anaconda3\envs\mltrade2\lib\site-packages\keras\engine\functional.py in <module>
32 from keras.engine import input_spec
33 from keras.engine import node as node_module
---> 34 from keras.engine import training as training_lib
35 from keras.engine import training_utils
36 from keras.saving.legacy import serialization
~\anaconda3\envs\mltrade2\lib\site-packages\keras\engine\training.py in <module>
43 from keras.saving.experimental import saving_lib
44 from keras.saving.legacy import hdf5_format
---> 45 from keras.saving.legacy import save
46 from keras.saving.legacy import saving_utils
47 from keras.saving.legacy import serialization
~\anaconda3\envs\mltrade2\lib\site-packages\keras\saving\legacy\save.py in <module>
22 from keras.saving.legacy import serialization
23 from keras.saving.legacy.saved_model import load as saved_model_load
---> 24 from keras.saving.legacy.saved_model import load_context
25 from keras.saving.legacy.saved_model import save as saved_model_save
26 from keras.utils import traceback_utils
~\anaconda3\envs\mltrade2\lib\site-packages\keras\saving\legacy\saved_model\load_context.py in <module>
66
67
---> 68 tf.__internal__.register_load_context_function(in_load_context)
AttributeError: module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'register_load_context_function'
</code></pre>
|
<python><tensorflow><keras>
|
2023-02-21 17:37:11
| 2
| 623
|
fede72bari
|
75,523,841
| 3,971,855
|
Aggregating the columns having list as values in pandas dataframe
|
<p>I have a dataframe with 3 columns, one of which holds a list of values.</p>
<p>I want to aggregate those lists into a single list when doing a groupby.</p>
<p>The DataFrame looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>C1</th>
<th>C2</th>
<th>C3</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>['A','B']</td>
<td>Hi</td>
</tr>
<tr>
<td>2</td>
<td>NaN</td>
<td>Po</td>
</tr>
<tr>
<td>1</td>
<td>['B','C']</td>
<td>Yo</td>
</tr>
<tr>
<td>2</td>
<td>['D','E']</td>
<td>Yup</td>
</tr>
</tbody>
</table>
</div>
<p>Now I want my dataframe to look like this</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>C1</th>
<th>C2</th>
<th>C3</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>['A','B','C']</td>
<td>['Hi','Yo']</td>
</tr>
<tr>
<td>2</td>
<td>['D','E']</td>
<td>['Po','Yup']</td>
</tr>
</tbody>
</table>
</div>
<p>I used the aggregating function with <code>list</code> passed as the parameter, but I am getting a result like this (the call is sketched after the table below):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>C1</th>
<th>C2</th>
<th>C3</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>[['A','B'],['B','C']]</td>
<td>['Hi','Yo']</td>
</tr>
<tr>
<td>2</td>
<td>['D','E']</td>
<td>['Po','Yup']</td>
</tr>
</tbody>
</table>
</div>
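<p>For reference, the aggregation call I used looks roughly like this (simplified):</p>
<pre><code>out = df.groupby("C1", as_index=False).agg(list)
print(out)
</code></pre>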
<p>Can anyone please help with how to get the desired result?</p>
|
<python><pandas><list><group-by>
|
2023-02-21 17:19:13
| 3
| 309
|
BrownBatman
|
75,523,824
| 7,154,462
|
Custom module not found error on google colab
|
<p>I am trying to run some PyTorch code in Google Colab. The GitHub repository is:</p>
<p><a href="https://github.com/BrightXiaoHan/FaceDetector" rel="nofollow noreferrer">GitHub - BrightXiaoHan/FaceDetector: A re-implementation of mtcnn. Joint training, tutorial and deployment together.</a></p>
<p>There is a custom module in the Google Colab notebook. To build the module I have run <code>python setup.py build_ext --inplace</code></p>
<p>My folder structure:</p>
<p>ls /content/drive/MyDrive/FaceDetector</p>
<pre><code>doc mtcnn output README.md scripts setup.py tests tutorial
</code></pre>
<p>I have also added in sys path</p>
<pre class="lang-py prettyprint-override"><code>import sys
sys.path.append('/content/drive/MyDrive/FaceDetector')
</code></pre>
<p>So when I try to import it</p>
<pre class="lang-py prettyprint-override"><code>import mtcnn
</code></pre>
<p>I am getting an error:</p>
<pre><code>ModuleNotFoundError                       Traceback (most recent call last)
[<ipython-input-61-eb80d650f81e>](https://localhost:8080/#) in <module>
----> 1 import mtcnn

2 frames

[/content/drive/MyDrive/FaceDetector/mtcnn/deploy/detect.py](https://localhost:8080/#) in <module>
      4 import time
      5 
----> 6 import mtcnn.utils.functional as func
      7 
      8 def _no_grad(func):

ModuleNotFoundError: No module named 'mtcnn.utils.functional'
</code></pre>
<p>Google Colab Notebook Link:
<a href="https://colab.research.google.com/drive/1KQRF-HmZA7EU13acnwIX0dFayRMIuQ-B?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1KQRF-HmZA7EU13acnwIX0dFayRMIuQ-B?usp=sharing</a></p>
|
<python><pytorch><google-colaboratory>
|
2023-02-21 17:17:22
| 1
| 898
|
desertSniper87
|
75,523,818
| 3,763,732
|
Pandas: Combining Product and Apply
|
<p>I have a df created from a spreadsheet containing mostly strings:</p>
<pre><code> # age sex employed educ marital race
0 1 35 to 44 years F Full time Some Col DIV White
1 2 65 to 74 years M Retired BA/BS SING White
2 3 45 to 54 years F Full time BA/BS MAR Hisp
</code></pre>
<p>I want to identify the most/least common combinations of values - perhaps an easy way is to calculate the frequency proportions in each column, and then look up the proportion for a given value and multiply all the proportions together (i.e. someone with a rare combination of values across these columns will have a very small number).</p>
<p>So I build a dict containing the frequencies:</p>
<pre><code>frequencies = {col_name: frame[col_name].value_counts(normalize=True).to_dict() for col_name in columns[1:]}
</code></pre>
<p>Which produces output like <code>'sex': {'F': 0.5666666666666667, 'M': 0.43333333333333335}</code></p>
<p>Now I know I need a function that will look up the frequency, and I sense I'll need to combine <code>apply()</code>-ing that function with the <code>product()</code> method, but I'm stumped about how to do that -- mostly because I'm not sure how to construct and apply the frequency lookup function. Roughly what I'm picturing is sketched below, but I can't tell whether it is the right construction.</p>
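<p>Here is that rough, untested sketch (the column names come from the <code>frequencies</code> dict built above):</p>
<pre><code># Map every value to its column frequency, then multiply across each row;
# a small product means a rare combination of values.
cols = list(frequencies)
freq_per_cell = frame[cols].apply(lambda s: s.map(frequencies[s.name]))
frame["rarity"] = freq_per_cell.prod(axis=1)

print(frame.nsmallest(5, "rarity"))  # the rarest combinations
</code></pre>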
|
<python><pandas>
|
2023-02-21 17:16:48
| 1
| 1,759
|
AutomaticStatic
|
75,523,779
| 4,178,189
|
How to get details of group selection in streamlit-aggrid
|
<p>I am using <a href="https://streamlit.io/" rel="nofollow noreferrer">streamlit</a> extension <a href="https://github.com/PablocFonseca/streamlit-aggrid" rel="nofollow noreferrer">streamlit-aggrid</a> and I have a selectable table with row groups. I am not able to gather all details of the rows selected when selecting a grouped row.</p>
<p>Here is a runnable <code>issue_example.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import streamlit as st
from st_aggrid import AgGrid, ColumnsAutoSizeMode
import pandas as pd
#_ Input data
df = pd.DataFrame({
'Category': ['Fruit', 'Fruit', 'Vegetable', 'Vegetable'],
'Items': ['Apple', 'Banana', 'Tomato', 'Carrots'],
'Price': [1.04, 1.15, 1.74, 1.5]})
#_ Ag Grid table
st.markdown('# Issue: how to get group selection?')
st.write("Try selecting an aggregate, and then an atomic record")
grid_options = {
"columnDefs": [
{"field": "Category", "rowGroup": True, "hide": True},
{"field": "Items"},
{"field": "Price"},
],
"rowSelection": "single",
}
#_ Playing with response
response = AgGrid(
df,
grid_options,
columns_auto_size_mode=ColumnsAutoSizeMode.FIT_ALL_COLUMNS_TO_VIEW,
)
if response['selected_rows']:
selection=response['selected_rows'][0]
st.write("Current selection is provided as a nested dictionary, requesting `['selected_rows'][0]` value of AgGrid response:")
st.write(selection)
if "Items" in selection:
st.markdown('#### Good!')
Category = selection['Category']
Item = selection['Items']
Price = selection['Price']
st.write(f"We know everything about current selection: you picked a `{Category}` called `{Item}`, with price `{Price}`!")
else:
st.markdown('#### Bad!')
nodeId = response['selected_rows'][0]['_selectedRowNodeInfo']['nodeId']
st.write(f"All we know is that a node with Id `{nodeId}` is selected.\n\r How do we know if you're looking for a `Fruit` or a `Vegetable`?")
</code></pre>
<p>When running the above with <code>streamlit run issue_example.py</code> and selecting the <code>Fruit</code> group row, the response of AgGrid is a dictionary that contains no information about the row details in the <code>Fruit</code> group. It does not even tell me that I selected <code>Fruit</code>. I need a way to know that I am selecting <code>Fruit</code> and that the selected rows inside are <code>Apple</code> and <code>Banana</code>.</p>
<p>See screenshot for the running streamlit app:</p>
<p><a href="https://i.sstatic.net/pHB0e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pHB0e.png" alt="streamlit app showing the described issue" /></a></p>
|
<python><ag-grid><streamlit>
|
2023-02-21 17:13:07
| 2
| 1,493
|
pietroppeter
|
75,523,579
| 2,473,382
|
Ignore a failed alembic migration
|
<p>For context: I am using a Postgres RDS which comes with infinite timestamps for some default roles. Alembic (or more precisely, psycopg) does not like that at all. My first revision updates some of those timestamps to something legit (i.e. year 9999).</p>
<p>The thing is that I test on Docker first, where those roles do not exist. This makes the migration fail, and I would like to recover cleanly (i.e. ignore it).</p>
<p>What I would like is:</p>
<pre><code>from psycopg.errors import UndefinedObject
try:
op.execute("alter role pgsqladmin valid until '9999-12-31 12:00:00 +0'")
except sa.exc.ProgrammingError as e:
if isinstance(e.__context__, UndefinedObject):
print("I don't care!")
        op.rollback()  # That's the part I do not know how to write
else:
raise
</code></pre>
<p>But I cannot find the relevant call to replace the invalid <code>op.rollback</code> with. Is it at all possible?</p>
<p>And yes, I am aware I could first check whether the role exists instead of try/excepting, or find other workarounds, but I would prefer this solution if possible.</p>
|
<python><sqlalchemy><alembic>
|
2023-02-21 16:54:56
| 0
| 3,081
|
Guillaume
|
75,523,547
| 12,300,981
|
Scipy Minimization Problem, how to find the global minima when there is multiple local minima
|
<p>I have a function g(x,y) with 2 adjustable variables that I am minimizing to fit a function f(a,b), with bounds 0 to inf.</p>
<p>This is done using the basic</p>
<pre><code>bnds=np.array([0,np.inf])
minimize(min_fun,args=(funx_output), x0=[0,0], bounds=bnds,method='L-BFGS-B')
</code></pre>
<p>The problem isn't the code or setup, but the chi2 landscape for this g(x,y).</p>
<p><a href="https://i.sstatic.net/G7xyV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G7xyV.png" alt="enter image description here" /></a></p>
<p>This is a contour map showing the chi2 between 1.25 and 1.3. I am trying to determine exactly where the global minimum is, but as you can see, there are multiple minima (these are all within 0.0000001 of one another, so very similar), yet a global minimum does exist (if I run a minimization with a grid search of different starting conditions, I get different outputs). I don't know exactly why this type of oscillating chi2 exists.</p>
<p>Does anyone know why my function would behave this way? Or what kind of minimization method would help in finding the global minimum in a situation like this?</p>
|
<python><scipy>
|
2023-02-21 16:52:25
| 0
| 623
|
samman
|
75,523,498
| 11,680,995
|
Python Polars: How to get the row count of a LazyFrame?
|
<p>The CSV file I have is 70 GB in size. I want to load the DF and count the number of rows, in lazy mode. What's the best way to do so?</p>
<p>As far as I can tell, there is no function like shape in lazy mode according to the documentation.
I found this <a href="https://stackoverflow.com/questions/41553467/how-can-i-get-the-row-count-of-a-csv-file">answer</a> which provides a solution not based on Polars, but I wonder if it is possible to do this in Polars as well.</p>
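<p>For context, this is the kind of lazy pipeline I have in mind (a sketch; the file name is a placeholder, and the idea is that <code>pl.count()</code> aggregates to a single row, so the whole 70 GB file never has to be materialised at once):</p>
<pre><code>import polars as pl

# Sketch: scan the CSV lazily and only collect the row count.
lf = pl.scan_csv("big_file.csv")
n_rows = lf.select(pl.count()).collect().item()
print(n_rows)
</code></pre>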
|
<python><dataframe><python-polars>
|
2023-02-21 16:48:11
| 1
| 343
|
roei shlezinger
|
75,523,415
| 6,552,836
|
Scipy Optimise with Mystic - constraint keeps getting violated
|
<p>I'm trying to optimize a 52x5 matrix to maximize a return value <code>y</code>. I first flatten the matrix into an array of 260 elements, then apply Scipy optimize and mystic. However, the <code>max_limit</code> constraint keeps getting violated. Why is that?</p>
<p>Please see the main part of the code below:</p>
<pre><code>import random

import scipy.optimize as so
import mystic as my
import mystic.symbolic as ms  # assumed imports behind the ms/my/so aliases used below

max_limit = 2000

def constraint_func():
    var_number = ['x'+str(i) for i in range(260)]
    constraint = ' + '.join(var_number) + f' <= {max_limit}'
    return constraint

eqns = ms.simplify(constraint_func(), all=True)
constraint = ms.generate_constraint(ms.generate_solvers(eqns), join=my.constraints.and_)

def objective_func(x):
    constraint_vars = constraint(x)
    y = -model.func(constraint_vars)  # model comes from earlier in my script
    return y

initial_matrix = [random.randint(0, 3) for i in range(260)]
output = so.minimize(objective_func, initial_matrix, method='SLSQP',
                     bounds=[(0, max_limit)]*260, tol=0.01,
                     options={'disp': True, 'maxiter': 100})
</code></pre>
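<p>One variation I am considering is to hand the budget constraint to SLSQP directly as an inequality instead of transforming <code>x</code> inside the objective. This is only a sketch of that idea, reusing <code>so</code>, <code>model</code>, <code>initial_matrix</code> and <code>max_limit</code> from above:</p>
<pre><code>import numpy as np

# Sketch: sum(x) <= max_limit expressed as an SLSQP inequality (fun(x) >= 0).
cons = [{'type': 'ineq', 'fun': lambda x: max_limit - np.sum(x)}]
output = so.minimize(lambda x: -model.func(x), initial_matrix,
                     method='SLSQP', bounds=[(0, max_limit)] * 260,
                     constraints=cons, tol=0.01,
                     options={'disp': True, 'maxiter': 100})
</code></pre>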
|
<python><optimization><scipy><scipy-optimize><mystic>
|
2023-02-21 16:40:05
| 1
| 439
|
star_it8293
|
75,523,336
| 301,302
|
Streaming redirected stdout output in real time
|
<p>I'm using the <a href="https://docs.python.org/3/library/code.html" rel="nofollow noreferrer">InteractiveInterpreter</a> to execute code that has delays in it (see below). I'd like to capture the output from <code>stdout</code> as the output is written. I'm currently using <code>contextlib</code> to redirect <code>stdout</code> along with <code>StringIO</code> to buffer the output:</p>
<pre class="lang-py prettyprint-override"><code>code = """import time
for i in range(3):
print(i)
time.sleep(1)"""
f = io.StringIO()
inter = InteractiveInterpreter()
with redirect_stdout(f):
inter.runcode(code)
out = f.getvalue()
print(out)
</code></pre>
<p>Naturally, since <code>runcode</code> runs synchronously, the output is only available after the code finishes executing. I'd like to capture the output as it becomes available and do something with it (this code is running in a gRPC environment, so yielding the output in real time).</p>
<p>My initial thought was to wrap the running of the code and reading the output in two separate <code>asyncio</code> coroutines and somehow have <code>stdout</code> write to a stream and have the other task read from that stream.</p>
<p>Is the async approach feasible/reasonable? Any other suggestions? Thanks.</p>
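<p>To make the idea concrete, here is a sketch of the async approach I have in mind: the interpreter runs in a thread-pool executor, <code>stdout</code> is redirected to a small file-like object that pushes every <code>write()</code> onto an <code>asyncio.Queue</code>, and a coroutine consumes the queue as chunks arrive. Note that <code>redirect_stdout</code> swaps <code>sys.stdout</code> process-wide, so this sketch assumes nothing else is printing at the same time:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import contextlib
from code import InteractiveInterpreter

class QueueWriter:
    """File-like object that forwards every write to an asyncio queue."""
    def __init__(self, queue, loop):
        self.queue = queue
        self.loop = loop

    def write(self, text):
        if text:
            # write() runs in the executor thread, so hand the chunk
            # to the event loop thread-safely.
            self.loop.call_soon_threadsafe(self.queue.put_nowait, text)

    def flush(self):
        pass

async def stream_exec(code):
    loop = asyncio.get_running_loop()
    queue = asyncio.Queue()
    writer = QueueWriter(queue, loop)

    def run():
        interp = InteractiveInterpreter()
        with contextlib.redirect_stdout(writer):
            interp.runcode(code)
        loop.call_soon_threadsafe(queue.put_nowait, None)  # sentinel: done

    fut = loop.run_in_executor(None, run)
    while True:
        chunk = await queue.get()
        if chunk is None:
            break
        yield chunk  # e.g. yield to the gRPC stream here
    await fut

async def main():
    code = "import time\nfor i in range(3):\n    print(i)\n    time.sleep(1)"
    async for chunk in stream_exec(code):
        print("got:", repr(chunk))

asyncio.run(main())
</code></pre>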
|
<python><python-3.x>
|
2023-02-21 16:34:29
| 1
| 2,958
|
naivedeveloper
|
75,523,328
| 850,781
|
pip does not see the latest pandas version
|
<p><a href="https://pandas.pydata.org/" rel="nofollow noreferrer">pandas</a> 1.5.3 was released on Jan 19, 2023.</p>
<p>However, <code>pip</code> seems to be unable to install it (same for 1.5.2 &c):</p>
<pre><code>$ pip3 install --user pandas==1.5.3
ERROR: Could not find a version that satisfies the requirement pandas==1.5.3 (from versions: 0.1, 0.2, 0.3.0, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.5.0, 0.6.0, 0.6.1, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.8.0, 0.8.1, 0.9.0, 0.9.1, 0.10.0, 0.10.1, 0.11.0, 0.12.0, 0.13.0, 0.13.1, 0.14.0, 0.14.1, 0.15.0, 0.15.1, 0.15.2, 0.16.0, 0.16.1, 0.16.2, 0.17.0, 0.17.1, 0.18.0, 0.18.1, 0.19.0, 0.19.1, 0.19.2, 0.20.0, 0.20.1, 0.20.2, 0.20.3, 0.21.0, 0.21.1, 0.22.0, 0.23.0, 0.23.1, 0.23.2, 0.23.3, 0.23.4, 0.24.0, 0.24.1, 0.24.2, 0.25.0, 0.25.1, 0.25.2, 0.25.3, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.2.0, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.3.0, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5)
ERROR: No matching distribution found for pandas==1.5.3
</code></pre>
<p>also,</p>
<pre><code>$ pip3 cache purge
ERROR: No matching packages
$ pip3 install --user pandas
Collecting pandas
Using cached pandas-1.3.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.3 MB)
Requirement already satisfied: numpy>=1.17.3; platform_machine != "aarch64" and platform_machine != "arm64" and python_version < "3.10" in /local/home/sdsg/.virtualenvs/myself/lib/python3.7/site-packages (from pandas) (1.21.6)
Requirement already satisfied: pytz>=2017.3 in /local/home/sdsg/.virtualenvs/myself/lib/python3.7/site-packages (from pandas) (2022.7.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /local/home/sdsg/.virtualenvs/myself/lib/python3.7/site-packages (from pandas) (2.8.2)
Requirement already satisfied: six>=1.5 in /local/home/sdsg/.virtualenvs/myself/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas) (1.16.0)
Installing collected packages: pandas
Successfully installed pandas-1.3.5
</code></pre>
<p>(note that <code>pip3 install --user sqlalchemy</code> installs <a href="https://www.sqlalchemy.org/" rel="nofollow noreferrer">sqlalchemy</a> v2.0.4 so it seems to be able to see newer versions of <em>some</em> packages).</p>
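<p>One thing I have started to suspect: the cached wheel above is tagged <code>cp37</code>, i.e. the virtualenv runs Python 3.7, and as far as I know pandas 1.4+ only publishes releases for Python 3.8+, which would explain why pip stops at 1.3.5. A quick check (sketch):</p>
<pre><code>import sys

# If this prints 3.7.x, pip will never offer pandas >= 1.4 for this environment.
print(sys.version_info)
</code></pre>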
|
<python><pandas><pip>
|
2023-02-21 16:34:02
| 1
| 60,468
|
sds
|
75,523,158
| 20,967,663
|
Sqlalchemy specify table size during create_all call
|
<p>Is there any way to specify the size of the tables in an Oracle database when I use <code>schema.create_all()</code>?</p>
<p>Generally speaking, I am looking for ways to increase the Oracle tablespace size from Python, and I am specifically using sqlalchemy to work with the database. I know I can use Oracle SQL commands (like ALTER) to alter the size of the tables in the Oracle database. But can I do something similar from sqlalchemy?</p>
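<p>One direction I am exploring is attaching Oracle-specific DDL to the table's <code>after_create</code> event, so it runs as part of <code>create_all()</code>. This is only a sketch: the storage clause is an illustrative placeholder, not a recommendation, and the connection URL is made up:</p>
<pre><code>from sqlalchemy import Column, DDL, Integer, MetaData, Table, create_engine, event

metadata = MetaData()
my_table = Table("my_table", metadata, Column("id", Integer, primary_key=True))

# Runs right after create_all() emits the CREATE TABLE for my_table.
event.listen(
    my_table,
    "after_create",
    DDL("ALTER TABLE my_table MOVE STORAGE (INITIAL 10M NEXT 10M)"),
)

engine = create_engine("oracle+cx_oracle://user:password@host:1521/?service_name=orcl")
metadata.create_all(engine)
</code></pre>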
|
<python><oracle-database><sqlalchemy>
|
2023-02-21 16:20:18
| 0
| 303
|
Osman Mamun
|
75,523,134
| 13,627,237
|
Passing param to Airflow DAG from another DAG with TriggerDagRunOperator
|
<p>I'm trying to pass a param to an Airflow DAG from another DAG with TriggerDagRunOperator; here is the code:</p>
<pre><code>from datetime import timedelta

from airflow.decorators import dag
from airflow.operators.trigger_dagrun import TriggerDagRunOperator

@dag(default_args=default_args, catchup=False,  # schedule_interval=DAG_SCHEDULE_INTERVAL,
     dagrun_timeout=timedelta(seconds=3600), tags=["tag1"], doc_md=DOC_MD, max_active_runs=1)
def parent_dag(date_start="", date_end=""):
    triggered_dag = TriggerDagRunOperator(
        task_id='triggered_dag',
        trigger_dag_id='triggered_dag',
        conf={"date_start": "{{date_start}}", "date_end": "{{date_start}}"}
    )
    triggered_dag

dag = parent_dag()
</code></pre>
<p>The params date_start and date_end in the parent DAG have an empty string default value, but they will normally be provided manually when triggering the parent DAG.</p>
<p>With above code sample I'm getting this error:</p>
<pre><code>jinja2.exceptions.UndefinedError: 'date_start' is undefined
</code></pre>
<p>Does anyone know where the issue is?</p>
<p>Thanks in advance.</p>
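<p>For what it is worth, the workaround I am experimenting with is to read the values from <code>dag_run.conf</code> in the template instead of referencing the function arguments directly, since the values arrive via the manual trigger config anyway (sketch, only the <code>conf</code> argument changes):</p>
<pre><code>triggered_dag = TriggerDagRunOperator(
    task_id='triggered_dag',
    trigger_dag_id='triggered_dag',
    conf={
        "date_start": "{{ dag_run.conf.get('date_start', '') }}",
        "date_end": "{{ dag_run.conf.get('date_end', '') }}",
    },
)
</code></pre>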
|
<python><airflow><jinja2>
|
2023-02-21 16:18:35
| 1
| 774
|
Cristian Ispan
|
75,523,072
| 12,193,952
|
Pandas - Find the lowest value in range defined by certain values?
|
<p>I struggle to find the lowest value within a range defined by values in another column. The range is always defined by two equal values in the <code>boo</code> column (1-1, 2-2), as shown in the image below. The values in the <code>boo</code> column are not known in advance (<em>so I cannot make a list and compare them</em>), because they are calculated a few steps earlier in the code.</p>
<p><a href="https://i.sstatic.net/PF7ka.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PF7ka.png" alt="Ranges explained" /></a></p>
<p>Dataframe example</p>
<pre class="lang-py prettyprint-override"><code> foo boo
15 36.377949
16 42.489706 1
17 41.223734
18 32.281779 0
19 22.888312 2
20 12.847996
21 6.876954
22 -23.872935 1
23 -31.858878
24 -39.404905 3
25 -47.724924 2
26 -4.8161051 3
</code></pre>
<p>The output is preferred as a new dataframe column</p>
<pre class="lang-py prettyprint-override"><code> foo boo min
15 36.377949
16 42.489706 1
17 41.223734
18 32.281779 0
19 22.888312 2
20 12.847996
21 6.876954
22 -23.872935 1 -23
23 -31.858878
24 -39.404905 3
25 -47.724924 2 -47
26 -4.8161051 3 -47
</code></pre>
<p>I know how to solve this using a basic <code>for</code> loop (without leveraging Pandas functions and speed), so I would like to keep this at the dataframe/Pandas/NumPy level, if possible.</p>
<p>Is there a way to do it using Pandas/NumPy? <em>Any comments, suggestions and help are appreciated!</em></p>
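<p>To illustrate the kind of solution I am hoping for, here is a sketch of one possible direction: locate the two positions of every <code>boo</code> marker once, then take each minimum over a contiguous NumPy slice. It still loops over the ranges in Python, and it assumes the blank <code>boo</code> cells are NaN and that a closed range has exactly two markers:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

df = df.reset_index(drop=True)
foo = df['foo'].to_numpy()
df['min'] = np.nan

# boo value -> Index of the rows where it occurs
groups = df.dropna(subset=['boo']).groupby('boo').groups
for boo_val, idx in groups.items():
    if len(idx) == 2:                       # closed range
        start, end = int(idx.min()), int(idx.max())
        df.loc[end, 'min'] = foo[start:end + 1].min()
</code></pre>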
<hr />
<h2>EDIT</h2>
<p>I have tried to implement both suggested methods (and they work for small data sets!), however the execution time is not good with larger datasets. I use dataframes with <strong>1.5 - 2.5 million rows</strong>, which would take "forever" (<em>judging by how the execution times increase</em>).</p>
<p><code>function1</code> is the one using <code>find_min_in_range</code> (from Pedro Rocha) and <code>function2</code> uses a <code>for</code> loop (from mozway).</p>
<ul>
<li>x axis is number of rows in dataframe</li>
<li>y axis is execution time in seconds</li>
<li>I have tested from <code>10000</code> to <code>200000</code> rows</li>
<li>while using my "usual" dataframe, none of the given solutions finished</li>
<li>using <code>func1 adjusted</code>, iterating the whole df took 4-5 minutes</li>
</ul>
<p><a href="https://i.sstatic.net/ZGzcH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZGzcH.png" alt="func1 vs func2 vs func1 adjusted" /></a></p>
<h2>EDIT2</h2>
<p>I've done another round of testing using the provided solutions. It seems like there is a significant improvement.</p>
<p>Below are test results with more metrics (number of ranges).</p>
<ul>
<li>it seems that Pedro's Option 1 and Option 3 are the best performers</li>
<li>number of ranges in our data samples seems to increase linearly</li>
<li>only Pedro's Option 3 was able to finish finding the min on our "whole" data sample [in <code>172</code> seconds for <code>1725410</code> rows and <code>204954</code> ranges], <em>but I am unsure whether it will work in our use case - because we are executing the script on 1 vCPU machines</em> - but I am going to try it ^^</li>
<li><a href="https://drive.google.com/file/d/18IHKoJ6pUV4DAVKcsgZmCdIgpKy8l2zm/view?usp=sharing" rel="nofollow noreferrer">here</a> is the sample with one of the largest numbers of ranges in our data, if you would like to use it (it can be loaded using the <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_parquet.html" rel="nofollow noreferrer">read_parquet()</a> method)</li>
<li>it seems like the <code>number of ranges</code> is always less than <code>25 %</code> of total <code>row count</code></li>
<li>ranges are always closed (<em>range 0 is the exception and the only range that is not closed</em>)</li>
</ul>
<p><strong>Lower number of ranges</strong>
<a href="https://i.sstatic.net/nlBXI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nlBXI.png" alt="lower number of ranges" /></a>
<strong>Higher number of ranges</strong>
<a href="https://i.sstatic.net/1CK5f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1CK5f.png" alt="higher number of ranges" /></a></p>
<p>However, I am unable to reproduce Pedro's Option 2. Code below (copy-pasted from the generator and Option 2).</p>
<pre class="lang-py prettyprint-override"><code>num_rows = 2000000
num_ranges = 10000
foo_values = [random.uniform(-100, 100) for i in range(num_rows)]
boo_values = [i for i in range(num_ranges)]
boo_values.extend([i for i in range(num_ranges)])
a = np.empty(num_rows-len(boo_values))
a[:] = np.nan
boo_values.extend(a)
random.shuffle(boo_values)
df = pd.DataFrame({"foo": foo_values, "boo": boo_values})
df["min"] = np.nan
idx = df[df.groupby('boo').cumcount() == 1].index
df.loc[idx,"min"] = df.loc[idx].apply(lambda row: df.loc[range(*df[df.boo == row["boo"]].index[[0,-1]]+[0,1]),"foo"].min(), axis=1)
</code></pre>
<p>Error</p>
<pre class="lang-py prettyprint-override"><code> File "/.../option2.py", line 96, in <lambda>
df.loc[idx,"min"] = df.loc[idx].apply(lambda row: df.loc[range(*df[df.boo == row["boo"]].index[[0,-1]]+[0,1]),"foo"].min(), axis=1)
File "/.../venv/lib/python3.10/site-packages/pandas/core/indexes/base.py", line 5069, in __getitem__
result = getitem(key)
IndexError: index 0 is out of bounds for axis 0 with size 0
</code></pre>
<p><em>Also, I am thinking about closing this question and creating a new one focused on performance enhancements, with the provided data sample and a more precise description.</em></p>
<h2>EDIT3</h2>
<p>I am sharing how I benchmark the different approaches.</p>
<ul>
<li>using a lighter version of <a href="https://drive.google.com/file/d/18IHKoJ6pUV4DAVKcsgZmCdIgpKy8l2zm/view?usp=sharing" rel="nofollow noreferrer">this</a> as input data (lower number of ranges - approx. 200k)</li>
<li>measuring execution time using <code>timeit</code></li>
<li>copy & paste the results from the terminal into GSheet and create a chart</li>
</ul>
<pre class="lang-py prettyprint-override"><code>from timeit import default_timer as timer
import pandas as pd
import numpy as np
# Skipped loading the dataframe, can be mocked by using read_parquet from given example file
for i in range(10000, 210000, 10000):
df = input_df[:i].copy()
start = timer()
# here is the code of one or more solution/s
...
f['x'] = round(timer() - start, 1)
times.append(f)
# Print out execution time
for t in times:
print(t['x'])
</code></pre>
|
<python><pandas><dataframe>
|
2023-02-21 16:12:52
| 3
| 873
|
FN_
|
75,522,916
| 11,027,207
|
sqlalchemy - automap_base dynamically generates mapped classes
|
<p>How can I make the code below work dynamically for a given list of tables?</p>
<pre><code>from sqlalchemy import create_engine,MetaData
from sqlalchemy.orm import Session
engine = create_engine('postgresql://admin:admin@192.168.1.113:5432/target')
session = Session(engine)
from sqlalchemy.ext.automap import automap_base
metadata = MetaData()
# list of tables...
tables = ['books']
metadata.reflect(engine, only=tables)
Base = automap_base(metadata=metadata)
Base.prepare()
## hardcode version
books = Base.classes.books
query = session.query(books).all()
for row in query:
    print(row.title)
</code></pre>
<p>So the question is: how can I dynamically extract all tables from Base.classes
and create the corresponding classes dynamically?</p>
<p>Pseudo code</p>
<pre><code>for table in tables:
    cls.<table> = Base.classes.<table>

## or maybe even better, iterate on Base.classes
for cls in Base.classes:
    cls.__name__ = cls
</code></pre>
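<p>To illustrate the goal, this is roughly what I am after (a sketch; I am assuming <code>Base.classes</code> exposes <code>keys()</code> and that each mapped class name matches its table name):</p>
<pre><code># Collect every reflected class into a dict keyed by table name ...
mapped = {name: getattr(Base.classes, name) for name in Base.classes.keys()}

books = mapped['books']
for row in session.query(books).all():
    print(row.title)

# ... or attach each class to the current module so it can be used by name.
import sys
this_module = sys.modules[__name__]
for name, cls in mapped.items():
    setattr(this_module, name, cls)
</code></pre>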
|
<python><sqlalchemy><orm>
|
2023-02-21 15:59:18
| 0
| 424
|
AviC
|
75,522,890
| 4,225,430
|
Python: Fail to produce diamond since unable to add space in the first line
|
<p>I'd like to produce a diamond with Python, but I failed because Python does not seem to allow spaces at the starting positions of the first line. I need your advice. My code:</p>
<pre><code>space = " "
n = int(input('Enter the number of upper-half column of diamond : '))
for i in range(n):
    print(space * (n-i-1) + "a" * (2+2*i))
for i in range(n):
    print(space * i + "a" * (2*n-2*i))
</code></pre>
<p>Output:</p>
<pre><code>Enter the number of upper-half column of diamond: 4
aa
aaaa
aaaaaa
aaaaaaaa
aaaaaaaa
aaaaaa
aaaa
aa
</code></pre>
<p>May I know how to fix it? And are there any better ways to write the code, such as combining the two for-loops into one?</p>
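<p>For the second part of my question, this is the kind of single-loop version I was imagining (a sketch; the row width is <code>2*min(k+1, 2*n-k)</code>, which grows over the upper half and shrinks over the lower half):</p>
<pre><code>space = " "
n = int(input('Enter the number of upper-half column of diamond : '))

for k in range(2 * n):
    width = 2 * min(k + 1, 2 * n - k)      # row width: grows, then shrinks
    print(space * (n - width // 2) + "a" * width)
</code></pre>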
<p>Many thanks.</p>
|
<python><for-loop>
|
2023-02-21 15:57:08
| 1
| 393
|
ronzenith
|