QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
75,522,656
| 6,239,971
|
Python - Pandas DataFrame manipulation
|
<p>I've got a DataFrame called <code>product</code> with a list of orders, products, and quantities for each product. Here's a screenshot:</p>
<p><a href="https://i.sstatic.net/P9Ofp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P9Ofp.png" alt="enter image description here" /></a></p>
<p>I need to make a new DataFrame that has a row for each product name and two more columns with the sum of products ordered (basically a sum on the column <code>quantity</code> per product) and the total sales for each product (sum on column <code>total</code> per product).</p>
<p>I made this function:</p>
<pre><code>products_unique = products['product_id'].unique()
names = [
    products.loc[products['product_id'] == elem]['name'].unique()
    for elem in products_unique
]
orders = [
    len(products.loc[products['product_id'] == elem])
    for elem in products_unique
]
totals = [
    products.loc[products['product_id'] == elem]['total'].sum()
    for elem in products_unique
]
chart_data = pd.DataFrame({
    'Prodotti': products_unique,
    'Nome': names,
    'Ordini': orders,
    'Totale': totals
})
</code></pre>
<p>Now, this code works for my purpose, but there is something I don't understand. When I run it, I get this:</p>
<p><a href="https://i.sstatic.net/bwMwn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bwMwn.png" alt="enter image description here" /></a></p>
<p>As you can see, values in the column <code>names</code> are of the type <code>list</code>. Why does this happen?</p>
<p>And moreover, is there a cleaner way to achieve what I'm building?</p>
<p>Thanks to everyone who is going to help me!</p>
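For reference, the `list`-valued cells come from `.unique()`, which returns an array per product, and the whole build can be collapsed into one `groupby`/`agg`. A sketch, with sample data standing in for the screenshot (the column names `product_id`, `name`, `quantity`, `total` are taken from the question text):

```python
import pandas as pd

# Stand-in for the DataFrame shown in the screenshot:
products = pd.DataFrame({
    'product_id': [1, 1, 2],
    'name': ['a', 'a', 'b'],
    'quantity': [2, 3, 1],
    'total': [10.0, 15.0, 7.0],
})

# One row per product. 'first' reduces the name to a scalar, unlike
# .unique(), which returns an array and so produces list-like cells.
chart_data = (
    products.groupby('product_id', as_index=False)
            .agg(Nome=('name', 'first'),
                 Ordini=('name', 'size'),
                 Quantita=('quantity', 'sum'),
                 Totale=('total', 'sum'))
)
```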
|
<python><pandas><dataframe><data-manipulation>
|
2023-02-21 15:36:00
| 2
| 454
|
Davide
|
75,522,641
| 3,069,498
|
Python: how is `None` evaluated in logical expressions, in relation to other boolean values?
|
<p>In Python 3...</p>
<ul>
<li>Sample 1:</li>
</ul>
<pre><code>s: str = None
b: bool = True
b = (s != None) and (s != "some-not-allowed-value")
print (b)
</code></pre>
<p>Displays <code>False</code> (<em>as it seems intuitive</em>)</p>
<hr />
<ul>
<li>Sample 2:</li>
</ul>
<pre><code>s: str = None
b: bool = True
b = (s) and (s != "some-not-allowed-value")
print (b)
</code></pre>
<p>Displays <code>None</code></p>
<p><strong>Why? How is the <code>s</code> expression evaluated?</strong></p>
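`and` does not return a bool; it returns one of its operands: `x and y` evaluates to `x` when `x` is falsy (short-circuit), otherwise to `y`. Since `None` is falsy, Sample 2 evaluates to `s` itself, i.e. `None`. A sketch:

```python
s = None

# `x and y` returns x when x is falsy, otherwise y:
result = s and (s != "some-not-allowed-value")   # short-circuits, returns s (None)

# Coercing to a real bool makes it behave like Sample 1:
b = bool(s) and (s != "some-not-allowed-value")
```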
|
<python><logical-operators><nonetype>
|
2023-02-21 15:34:24
| 1
| 832
|
Serban
|
75,522,460
| 7,425,726
|
insert record to fill missing time window
|
<p>I have a dataset of consecutive time periods corresponding to activities (drive, rest, charge, etc.). There is no record for the night, so the data is not continuous. I would like to add an extra record to fill this gap, such that the start time of each record always equals the end time of the previous record. What is the best way to insert these records automatically (for different vehicle IDs)? My data currently looks like this:</p>
<pre><code>import pandas as pd
from io import StringIO
csv = """
id,starttime,endtime
1,2022-09-19 17:05:00,2022-09-19 17:26:00
1,2022-09-19 17:26:00,2022-09-19 18:38:00
1,2022-09-19 18:38:00,2022-09-19 19:31:00
1,2022-09-19 19:31:00,2022-09-19 19:38:00
1,2022-09-19 19:38:00,2022-09-19 19:40:00
1,2022-09-19 19:40:00,2022-09-19 19:41:00
1,2022-09-20 07:06:00,2022-09-20 07:06:00
1,2022-09-20 07:06:00,2022-09-20 07:23:00
1,2022-09-20 07:23:00,2022-09-20 07:26:00
1,2022-09-20 07:26:00,2022-09-20 07:37:00
"""
df = pd.read_csv(StringIO(csv))
</code></pre>
<p>And I would like to add the extra record:</p>
<pre><code>1,2022-09-19 19:41:00,2022-09-20 07:06:00
</code></pre>
<p>(in the real case for multiple days and multiple id's)</p>
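One way to generate the gap rows (a sketch on a trimmed version of the data, with a second id added to show the per-vehicle grouping) is to compare each row's `starttime` with the previous row's `endtime` within each `id`:

```python
import pandas as pd
from io import StringIO

csv = """id,starttime,endtime
1,2022-09-19 19:40:00,2022-09-19 19:41:00
1,2022-09-20 07:06:00,2022-09-20 07:23:00
2,2022-09-19 10:00:00,2022-09-19 11:00:00
"""
df = pd.read_csv(StringIO(csv), parse_dates=['starttime', 'endtime'])

# End time of the previous record within each vehicle id:
prev_end = df.groupby('id')['endtime'].shift()

# Rows whose start doesn't match the previous end mark a gap to fill:
gaps = df[prev_end.notna() & (df['starttime'] > prev_end)].copy()
gaps['endtime'] = gaps['starttime']
gaps['starttime'] = prev_end[gaps.index]

out = (pd.concat([df, gaps])
         .sort_values(['id', 'starttime'])
         .reset_index(drop=True))
```

Any other columns (e.g. an activity label such as `'night'`) can be filled on `gaps` before concatenating.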
|
<python><pandas>
|
2023-02-21 15:20:21
| 2
| 1,734
|
pieterbons
|
75,522,458
| 11,971,785
|
Mixin using @extend_schema (drf_spectacular) using instance serializer
|
<p>I want to set the responses for <code>@extend_schema</code> dynamically.
The mixin is inherited by different subclasses, and their response can vary depending on their serializer. Therefore I would like the responses to be set dynamically. Is that possible?</p>
<pre><code>class Mixin:
    @extend_schema(
        request=None,
        responses={
            status.HTTP_200_OK: self.__class__.get_serializer(),  # <--- self obviously won't work
            status.HTTP_204_NO_CONTENT: None,
        }
    )
    @action(detail=True, methods=['post'], url_path='decline')
    def decline(self, request, **kwargs):
        # do something
<p>I tried to patch the viewset and write a function selecting the serializer dynamically, but it always boils down to the point that the responses are set in a static context, which made me wonder whether it is even possible to set them dynamically.</p>
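It can be done, but not with `self`: decorators run when the class body is evaluated, before any instance exists. One pattern is to (re-)apply the decorator per subclass in `__init_subclass__`, where the concrete serializer is known. A runnable sketch with a stand-in `extend_schema` that just records the `responses` mapping (the real drf_spectacular decorator would be applied the same way; `serializer_class` is an assumed attribute on each subclass):

```python
import functools

def extend_schema(responses):            # stand-in for drf_spectacular's decorator
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            return fn(*args, **kwargs)
        inner.schema_responses = responses
        return inner
    return wrap

class Mixin:
    serializer_class = None              # assumed to be set by each subclass

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Each subclass gets its own decorated copy of decline, built
        # with that subclass's serializer:
        cls.decline = extend_schema(
            responses={200: cls.serializer_class, 204: None}
        )(cls.decline)

    def decline(self, request, **kwargs):
        return 'declined'

class FooView(Mixin):
    serializer_class = 'FooSerializer'   # illustrative; a real Serializer class in DRF

class BarView(Mixin):
    serializer_class = 'BarSerializer'
```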
|
<python><django-rest-framework><drf-spectacular>
|
2023-02-21 15:20:10
| 0
| 9,265
|
Andreas
|
75,522,454
| 1,864,294
|
Drop rows from the end on condition
|
<p>For a series</p>
<pre><code>s = pd.Series([1, 0, 1, 0, 2, 0, 0, 0])
</code></pre>
<p>I would like to remove all rows with consecutive zeros at the end:</p>
<pre><code>pd.Series([1, 0, 1, 0, 2])
</code></pre>
<p>My current solution</p>
<pre><code>s.loc[s != s.shift()]
</code></pre>
<p>does not remove the last zero row, and dropping it manually feels wrong. :)</p>
<p>Any better ideas?</p>
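One option is to slice up to the label of the last non-zero value; reversing the non-zero mask lets `idxmax` find it (a sketch; note that an all-zero series would be returned unchanged by this trick, so guard for that case if it can occur):

```python
import pandas as pd

s = pd.Series([1, 0, 1, 0, 2, 0, 0, 0])

# s.ne(0)[::-1].idxmax() is the label of the last non-zero value;
# .loc slicing is inclusive, so everything after it is dropped.
trimmed = s.loc[:s.ne(0)[::-1].idxmax()]
```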
|
<python><pandas>
|
2023-02-21 15:19:46
| 4
| 20,605
|
Michael Dorner
|
75,522,432
| 1,700,890
|
Append list in dictionary through dictionary comprehension
|
<p>Here is my example:</p>
<pre><code>sample_dict = {'foo':[1], 'bar': [2]}
{el: sample_dict[el].append(3) for el in sample_dict.keys()}
</code></pre>
<p>it generates:</p>
<pre><code>{'foo': None, 'bar': None}
</code></pre>
<p>while I am expecting this:</p>
<pre><code>{'foo': [1,3], 'bar': [2,3]}
</code></pre>
<p>What am I missing?</p>
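`list.append` mutates the list in place and returns `None`, and the comprehension stores that return value, hence `{'foo': None, 'bar': None}`. Build new lists instead of relying on a side effect:

```python
sample_dict = {'foo': [1], 'bar': [2]}

# v + [3] creates a new list, so the comprehension's values are the
# extended lists, and the original dict is left untouched:
result = {k: v + [3] for k, v in sample_dict.items()}
```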
|
<python><append><dictionary-comprehension>
|
2023-02-21 15:17:25
| 1
| 7,802
|
user1700890
|
75,522,400
| 558,619
|
if type(X) == type(Y) boolean triggering as false when it's the same type
|
<p>I have a small bit of code in a class (<code>Delta_Viewer</code>) that looks like this:</p>
<pre><code>class Delta_Viewer():
    def __init__(self, df):
        self.df: pd.DataFrame = df
        self._columns = trade_types
    # init method, properties, etc up here

    def __sub__(self, other):
        if type(other) == Delta_Viewer:
            new_df = self.df.sub(other.df, fill_value=0)
            return Delta_Viewer(new_df)
        else:
            print(type(other), 'self:', type(self))
            print(id(other), 'self:', id(self))
            raise Exception(f'Delta Viewer can only be subtracted from another Delta_Viewer instance. passed: {type(other)}')

    def __rsub__(self, other):
        return self.__sub__(other)
</code></pre>
<p>What's confusing me is that when a separate part of my code tries to subtract one <code>Delta_Viewer</code> instance from another, my Exception gets raised. To try and debug, I added in the <code>print()</code> statements, which show that they are the same type.</p>
<pre><code><class 'Sitka.Portfolio.delta_gamma.Delta_Viewer'> self: <class 'Sitka.Portfolio.delta_gamma.Delta_Viewer'>
1681758064352 self: 1681751071136
</code></pre>
<p>I also tried the same if-statement with <code>isinstance()</code>, but that didn't work either.</p>
<p>PS. There are no class decorators. It's a fairly simple class that has 4 properties that return predefined functions on <code>self.df</code>.</p>
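A common cause of this symptom (an assumption, since the imports aren't shown) is the module being loaded twice, e.g. once as `Sitka.Portfolio.delta_gamma` and once via a different sys.path entry or a reload, so two distinct class objects exist with identical reprs. A stdlib demonstration that two loads of the same file yield classes that print identically but fail both `type(x) == Y` and `isinstance`:

```python
import importlib.util
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "mod.py")
    with open(path, "w") as f:
        f.write("class Delta_Viewer:\n    pass\n")

    def load(name):
        # Execute the file as a fresh module object each time:
        spec = importlib.util.spec_from_file_location(name, path)
        m = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(m)
        return m

    # Same module name both times, just like an accidental double import:
    a, b = load("sitka_demo"), load("sitka_demo")
```

If this is the cause, making every import use one canonical module path (or comparing `type(other).__qualname__` as a workaround) resolves it.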
|
<python>
|
2023-02-21 15:15:00
| 0
| 3,541
|
keynesiancross
|
75,522,319
| 4,916,174
|
SignalR with Python websockets return 503
|
<p>I have created simplest Hub in ASP.NET Core and pushed it into Azure containing below piece of code:</p>
<pre><code>public class TestHub : Hub
{
}

app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
    endpoints.MapHub<TestHub>("/testHub");
});
</code></pre>
<p>To validate hub is working and client has possibility to retrieve messages I have created console application using Microsoft.SignalR client library:</p>
<pre><code>var url = "{baseUrl}/testHub";
var connection = new HubConnectionBuilder()
    .WithUrl(url)
    .WithAutomaticReconnect()
    .Build();

connection.On("OpenDoors", () =>
{
    Console.Write("Message");
});

await connection.StartAsync();
</code></pre>
<p>With above solution I am able to retrieve messages from the SignalR deployed to Azure.</p>
<p>Now I am trying to do the same using Python and its websockets.</p>
<p>To achieve this I do negotiation:</p>
<pre><code>negotiation = requests.post('https://{url}/testHub/negotiate?negotiateVersion=0', verify=False).json()
</code></pre>
<p>which returns the correct values. I am now trying to do the handshake:</p>
<pre><code>connectionId = negotiation['connectionId']
async with websockets.connect('wss://{url}/testHub?id={connectionId}'):
</code></pre>
<p>There I retrieve an error:</p>
<blockquote>
<p>websockets.exceptions.InvalidStatusCode: server rejected WebSocket
connection: HTTP 503</p>
</blockquote>
<p>What am I missing?</p>
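Two things are worth checking. An HTTP 503 during the websocket upgrade often points at the hosting layer (e.g. the WebSockets setting being disabled on the Azure App Service) rather than at the hub itself. And even once the socket connects, SignalR expects a protocol handshake as the very first frame; the JSON hub protocol terminates every frame with the ASCII record separator 0x1e. A sketch of building that frame (the framing is per the SignalR hub protocol; the 503 itself must still be fixed at the hosting level):

```python
import json

RS = "\x1e"  # SignalR record separator: every frame ends with it

def frame(payload: dict) -> str:
    """Serialize one SignalR JSON-protocol frame."""
    return json.dumps(payload) + RS

# First message after the websocket opens:
handshake = frame({"protocol": "json", "version": 1})
# then: await ws.send(handshake); incoming data is split on RS
```

Note also that with `negotiateVersion=1` the value to append to the URL is returned as `connectionToken`, not `connectionId`, per the SignalR transport spec.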
|
<python><websocket><signalr>
|
2023-02-21 15:07:13
| 0
| 3,442
|
miechooy
|
75,522,265
| 1,673,492
|
Typehints on autospecced mock does not match original
|
<p>I would like an autospecced mock to give me the same type hints as I get from my original function, but I cannot make it work. Below is a minimal example of what I am trying to do:</p>
<pre><code>from __future__ import annotations
import typing
from unittest.mock import create_autospec

import pandas as pd

def func(a: int, c: pd.Timestamp, d: typing.Literal['e', 'f']) -> None:
    pass

func_annotations = func.__annotations__
mock_func = create_autospec(func)

type_hints = typing.get_type_hints(func)
# {'a': <class 'int'>, 'c': <class 'pandas._libs.tslibs.timestamps.Timestamp'>, 'return': <class 'NoneType'>}

# This I would like to be equal to type_hints
type_hints_mocked = typing.get_type_hints(mock_func)  # {}

# I noticed 'get_type_hints' uses __annotations__ and tried this hack, but to no avail
mock_func.__annotations__ = func_annotations
type_hints_mocked_2 = typing.get_type_hints(mock_func)  # NameError: name 'pd' is not defined
</code></pre>
<p>How come the mocked function doesn't give me the same type hints, and could I somehow get it to?</p>
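With `from __future__ import annotations`, annotations are stored as strings, and `get_type_hints` must evaluate them in the defining module's globals. The mock has no usable `__globals__`, hence the `NameError`. Passing `globalns` explicitly works; a sketch using `datetime` in place of `pandas` so it stays stdlib-only:

```python
from __future__ import annotations

import datetime
import typing
from unittest.mock import create_autospec

def func(a: int, c: datetime.datetime) -> None:
    pass

mock_func = create_autospec(func)
mock_func.__annotations__ = func.__annotations__

# Evaluate the string annotations in a namespace where the names resolve
# (builtins like `int` are found automatically):
hints = typing.get_type_hints(mock_func, globalns={'datetime': datetime})
```

With the real example, `globalns={'pd': pd, 'typing': typing}`, or `vars(sys.modules[func.__module__])`, plays the same role.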
|
<python><python-unittest><python-typing>
|
2023-02-21 15:02:30
| 0
| 331
|
ano
|
75,522,220
| 6,077,239
|
Got ValueError: exog is not 1d or 2d when fitting OLS with a corner case
|
<p>The following code causes an Exception to be raised.</p>
<pre><code>>>> import numpy as np
>>> import statsmodels.api as sm
>>> sm.OLS(np.array([np.nan]), sm.add_constant(np.array([[1]]), has_constant="add"), missing="drop").fit().params
ValueError: exog is not 1d or 2d
</code></pre>
<p>Is there any way to suppress the exception and make the expression above return <code>array([nan, nan])</code> instead?</p>
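Statsmodels raises here because `missing="drop"` removes the only row, leaving an empty design matrix, and there is no flag to turn that into NaN output. A pragmatic workaround (a sketch of a hypothetical helper, demonstrated with a stub standing in for the OLS call) is to catch the `ValueError` and fall back to NaNs:

```python
import math

def params_or_nan(fit_fn, n_params):
    """Call fit_fn(); on ValueError (e.g. 'exog is not 1d or 2d'),
    return a NaN vector of the expected length instead."""
    try:
        return fit_fn()
    except ValueError:
        return [math.nan] * n_params

def broken_fit():
    # stand-in for: sm.OLS(..., missing="drop").fit().params
    raise ValueError("exog is not 1d or 2d")

params = params_or_nan(broken_fit, 2)
```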
|
<python><statsmodels>
|
2023-02-21 14:58:26
| 0
| 1,153
|
lebesgue
|
75,522,116
| 3,491,031
|
Weisfeiler-Lehman Networkx function for different k-levels
|
<p>I am trying to use the Weisfeiler-Lehman (WL) algorithm implemented in the networkx library to check whether two graphs are isomorphic.</p>
<p>My graphs are the following:</p>
<pre class="lang-python prettyprint-override"><code>import networkx as nx
g1 = nx.Graph()
g1.add_edges_from([(1, 2), (1, 4), (2, 4), (2, 5), (3, 5), (3, 6), (5, 6)])
g2 = nx.Graph()
g2.add_edges_from([(1, 2), (1, 4), (2, 3), (2, 5), (3, 6), (4, 5), (5, 6)])
g1_hash = nx.weisfeiler_lehman_graph_hash(g1)
g2_hash = nx.weisfeiler_lehman_graph_hash(g2)
# g1_hash and g2_hash are equal and they should not be since graphs are not isomorphic
# there is another implementation in networkx to check if two graphs
# are isomorphic and there it produces the correct answer which is False
nx.is_isomorphic(g1, g2)
</code></pre>
<p>They are not isomorphic. However, going to higher levels of the WL algorithm should produce different hashes for these graphs. I don't know how to do that and I couldn't find anything in the networkx documentation.</p>
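`weisfeiler_lehman_graph_hash` accepts an `iterations` parameter (default 3) controlling how many refinement rounds of 1-dimensional WL are run; networkx does not implement k-WL. For this particular pair, every node's neighbourhood refines identically, so 1-WL cannot separate the graphs at any iteration count, which is why the exact test `is_isomorphic` disagrees with the hash. A sketch:

```python
import networkx as nx

g1 = nx.Graph([(1, 2), (1, 4), (2, 4), (2, 5), (3, 5), (3, 6), (5, 6)])
g2 = nx.Graph([(1, 2), (1, 4), (2, 3), (2, 5), (3, 6), (4, 5), (5, 6)])

# More refinement rounds of the same 1-WL test (still collides here):
h1 = nx.weisfeiler_lehman_graph_hash(g1, iterations=5)
h2 = nx.weisfeiler_lehman_graph_hash(g2, iterations=5)

# Per-node hashes after each iteration give a finer (still 1-WL) signature:
sub1 = nx.weisfeiler_lehman_subgraph_hashes(g1, iterations=5)
```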
|
<python><networkx><graph-theory><isomorphism>
|
2023-02-21 14:48:26
| 1
| 574
|
jAdex
|
75,522,008
| 1,666,623
|
SSL Error in Python Requests run from GitHub Actions
|
<p>I am trying to periodically download data in Python from the parliamentary website <a href="https://www.psp.cz/" rel="nofollow noreferrer">https://www.psp.cz/</a> using GitHub Actions
(for example, this data file: <a href="https://www.psp.cz/eknih/cdrom/opendata/poslanci.zip" rel="nofollow noreferrer">https://www.psp.cz/eknih/cdrom/opendata/poslanci.zip</a>).</p>
<p>I am able to download it fine on my local computer, but it does not work from GitHub Actions, which returns the error: <code>SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:997)')</code></p>
<p>My basic code in <code>downloader.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import requests
url = "https://www.psp.cz/eknih/cdrom/opendata/poslanci.zip"
r = requests.get(url)
</code></pre>
<p>My code in <code>YML</code> file for Github Actions:</p>
<pre class="lang-yaml prettyprint-override"><code># ... set when to run the Actions here

jobs:
  scheduled:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repo
        uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - uses: actions/cache@v3
        name: Configure pip caching
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-
      - name: Install Python dependencies
        run: |
          pip install -r requirements.txt
      - name: Download data
        run: python downloader.py
      # ... do something with the data here
      - name: Commit and push if it changed
        run: |-
          git config user.name "Automated"
          git config user.email "actions@users.noreply.github.com"
          git add -A
          timestamp=$(date +%FT%T%Z)
          git commit -m "Latest data: ${timestamp}" || exit 0
          git push
</code></pre>
<p>And <code>requirements.txt</code> is simply:</p>
<pre><code>requests
</code></pre>
<p>I've tried several variants, but with no success:</p>
<pre class="lang-py prettyprint-override"><code>r = requests.get(url)
r = requests.get(url, verify=False)
r = requests.get(url, verify=True)
# files obtained from Firefox's data/info about `psp.cz` website:
r = requests.get(url, verify='psp-cz.pem')
r = requests.get(url, verify='psp-cz-chain.pem')
# all 5 ends with: requests.exceptions.SSLError: HTTPSConnectionPool(host='www.psp.cz', port=443): Max retries exceeded with url: /eknih/cdrom/opendata/poslanci.zip (Caused by SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:997)')))
</code></pre>
<p>Any idea how to set it up correctly? Thank you.</p>
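The error comes from OpenSSL 3 (shipped on `ubuntu-latest`) refusing legacy TLS renegotiation, which this server apparently still requires; certificates are not the issue, so no `verify=` variant will help. One workaround (a sketch; `OP_LEGACY_SERVER_CONNECT` re-enables the legacy behaviour) is a custom SSL context, shown stdlib-only with `urllib`:

```python
import ssl
import urllib.request

ctx = ssl.create_default_context()
# 0x4 is OpenSSL's SSL_OP_LEGACY_SERVER_CONNECT; Python names it
# ssl.OP_LEGACY_SERVER_CONNECT from 3.12, the raw flag works earlier too.
ctx.options |= getattr(ssl, "OP_LEGACY_SERVER_CONNECT", 0x4)

# r = urllib.request.urlopen(
#     "https://www.psp.cz/eknih/cdrom/opendata/poslanci.zip", context=ctx)
```

To keep using `requests`, the same context can be mounted on a session via a custom `HTTPAdapter` whose `init_poolmanager` passes `ssl_context=ctx`.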
|
<python><ssl><python-requests><ssl-certificate><github-actions>
|
2023-02-21 14:39:53
| 0
| 1,419
|
Michal Skop
|
75,521,871
| 16,844,801
|
VScode Jupyter Notebook crash in cell
|
<p>I get this error when I run sklearn to train on a very large dataset. If the dataset is small it works, but above a certain size the kernel crashes.</p>
<p>Error:</p>
<pre><code>info 16:24:11.630: Process Execution: > ~/miniconda3/envs/auto-sklearn/bin/python -m pip list
> ~/miniconda3/envs/auto-sklearn/bin/python -m pip list
info 16:24:11.712: Process Execution: > ~/miniconda3/envs/auto-sklearn/bin/python -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
> ~/miniconda3/envs/auto-sklearn/bin/python -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
info 16:24:11.910: Process Execution: > ~/miniconda3/envs/auto-sklearn/bin/python -m ipykernel_launcher --ip=127.0.0.1 --stdin=9011 --control=9009 --hb=9008 --Session.signature_scheme="hmac-sha256" --Session.key=b"39e77f25-eae0-4712-8a1a-628305c2ff03" --shell=9010 --transport="tcp" --iopub=9012 --f=/home/baraa/.local/share/jupyter/runtime/kernel-v2-261323AOvTtMclSsgz.json
> ~/miniconda3/envs/auto-sklearn/bin/python -m ipykernel_launcher --ip=127.0.0.1 --stdin=9011 --control=9009 --hb=9008 --Session.signature_scheme="hmac-sha256" --Session.key=b"39e77f25-eae0-4712-8a1a-628305c2ff03" --shell=9010 --transport="tcp" --iopub=9012 --f=/home/baraa/.local/share/jupyter/runtime/kernel-v2-261323AOvTtMclSsgz.json
info 16:24:11.910: Process Execution: cwd: ~/Documents/Python/Testing/Search
cwd: ~/Documents/Python/Testing/Search
info 16:24:12.237: ipykernel version & path 6.21.2, ~/miniconda3/envs/auto-sklearn/lib/python3.8/site-packages/ipykernel/__init__.py for /home/baraa/miniconda3/envs/auto-sklearn/bin/python
info 16:24:13.131: Started Kernel auto-sklearn (Python 3.8.16) (pid: 263555)
info 16:24:13.182: Process Execution: > ~/miniconda3/envs/auto-sklearn/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2023.1.2010391206/pythonFiles/printJupyterDataDir.py
> ~/miniconda3/envs/auto-sklearn/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2023.1.2010391206/pythonFiles/printJupyterDataDir.py
error 16:25:00.281: Disposing session as kernel process died ExitCode: undefined, Reason: /home/baraa/miniconda3/envs/auto-sklearn/lib/python3.8/site-packages/traitlets/traitlets.py:2548: FutureWarning: Supporting extra quotes around strings is deprecated in traitlets 5.0. You can use 'hmac-sha256' instead of '"hmac-sha256"' if you require traitlets >=5.
warn(
/home/baraa/miniconda3/envs/auto-sklearn/lib/python3.8/site-packages/traitlets/traitlets.py:2499: FutureWarning: Supporting extra quotes around Bytes is deprecated in traitlets 5.0. Use '39e77f25-eae0-4712-8a1a-628305c2ff03' instead of 'b"39e77f25-eae0-4712-8a1a-628305c2ff03"'.
warn(
info 16:25:00.284: Dispose Kernel process 263555.
error 16:25:00.284: Raw kernel process exited code: undefined
error 16:25:00.301: Error in waiting for cell to complete [Error: Canceled future for execute_request message before replies were done
at t.KernelShellFutureHandler.dispose (/home/baraa/.vscode/extensions/ms-toolsai.jupyter-2023.1.2010391206/out/extension.node.js:2:33213)
at /home/baraa/.vscode/extensions/ms-toolsai.jupyter-2023.1.2010391206/out/extension.node.js:2:52265
at Map.forEach (<anonymous>)
at y._clearKernelState (/home/baraa/.vscode/extensions/ms-toolsai.jupyter-2023.1.2010391206/out/extension.node.js:2:52250)
at y.dispose (/home/baraa/.vscode/extensions/ms-toolsai.jupyter-2023.1.2010391206/out/extension.node.js:2:45732)
at /home/baraa/.vscode/extensions/ms-toolsai.jupyter-2023.1.2010391206/out/extension.node.js:17:139244
at Z (/home/baraa/.vscode/extensions/ms-toolsai.jupyter-2023.1.2010391206/out/extension.node.js:2:1608939)
at Kp.dispose (/home/baraa/.vscode/extensions/ms-toolsai.jupyter-2023.1.2010391206/out/extension.node.js:17:139221)
at qp.dispose (/home/baraa/.vscode/extensions/ms-toolsai.jupyter-2023.1.2010391206/out/extension.node.js:17:146518)
at process.processTicksAndRejections (node:internal/process/task_queues:96:5)]
warn 16:25:00.301: Cell completed with errors {
message: 'Canceled future for execute_request message before replies were done'
}
info 16:25:00.302: Cancel all remaining cells true || Error || undefined
</code></pre>
<p>I tried reinstalling ipykernel, downgrading traitlets and pyzmq to 19.0.2, updating the Python version, reinstalling miniconda, and choosing a different environment... still to no avail.</p>
|
<python><python-3.x><visual-studio-code><scikit-learn><jupyter-notebook>
|
2023-02-21 14:28:27
| 2
| 434
|
Baraa Zaid
|
75,521,662
| 15,915,737
|
Upsert Pandas Dataframe into Snowflake Table
|
<p>I'm upserting data into a Snowflake table by creating a temp table (from my DataFrame) and then merging it into my target table. Is there a more efficient way of achieving this, such as merging the DataFrame directly into the Snowflake table without a temp table?</p>
<p>I will be doing this on several tables, each with a few thousand rows.</p>
<p>My Code:</p>
<pre><code>import os

import pandas as pd
from sqlalchemy import create_engine
from snowflake.connector.pandas_tools import pd_writer

engine = create_engine(
    'snowflake://{user}:{password}@{account_identifier}/{database_name}/{schema_name}?warehouse={warehouse_name}&role={role_name}'.format(
        user='user',
        password=os.environ['SNOWFLAKE_PASSWORD'],
        account_identifier='account_identifier',
        database_name='DB_NAME',
        schema_name='SCHEMA_NAME',
        warehouse_name='WH',
        role_name='ADMIN',
    )
)
conn = engine.connect()

temp_table_name = 'source_table'
df = pd.DataFrame({'id': [1, 2, 3], 'description': ['a', 'b', 'c']})

# create temp table in snowflake
res_sql = df.to_sql(temp_table_name.lower(), engine, if_exists='replace',
                    index=False, method=pd_writer, schema='SCHEMA_NAME')

# merge the temp table into the existing table
conn.execute(
    '''
    MERGE INTO target_table USING source_table
        ON target_table.id = source_table.id
    WHEN MATCHED THEN
        UPDATE SET target_table.description = source_table.description
    WHEN NOT MATCHED THEN
        INSERT (ID, description) VALUES (source_table.id, source_table.description);
    '''
)

# drop the temp table
conn.execute("DROP TABLE IF EXISTS DB_NAME.SCHEMA_NAME.source_table")
</code></pre>
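A staging table plus `MERGE` is essentially what Snowflake requires for upserts (there is no direct DataFrame merge), but the per-table boilerplate can be factored out. A sketch of a helper that generates the `MERGE` statement from a key and column list (a hypothetical helper, not a snowflake-connector API):

```python
def build_merge(target: str, source: str, key: str, columns: list) -> str:
    """Generate a Snowflake MERGE upserting `source` into `target` on `key`."""
    non_key = [c for c in columns if c != key]
    set_clause = ", ".join(f"{target}.{c} = {source}.{c}" for c in non_key)
    insert_cols = ", ".join(columns)
    insert_vals = ", ".join(f"{source}.{c}" for c in columns)
    return (
        f"MERGE INTO {target} USING {source} "
        f"ON {target}.{key} = {source}.{key} "
        f"WHEN MATCHED THEN UPDATE SET {set_clause} "
        f"WHEN NOT MATCHED THEN INSERT ({insert_cols}) VALUES ({insert_vals});"
    )

sql = build_merge("target_table", "source_table", "id", ["id", "description"])
```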
|
<python><dataframe><snowflake-cloud-data-platform><upsert>
|
2023-02-21 14:11:00
| 1
| 418
|
user15915737
|
75,521,643
| 5,651,960
|
How to scrape multiple tables from HTML files in Python?
|
<p>OBJECTIVE:
Trying to scrape the contents of multiple HTML files and put them into a CSV file.</p>
<p>Looking for the unbolded items to become column headers and for the bolded information to fit into a row of the CSV file.</p>
<p>So far, I've been able to effectively get tables 1 and 2 (there are 5 total).</p>
<p>I believe I'm having difficulty scraping the data once there is more than one field in a single "row" (see table 3 below as an example of how it varies relative to tables 1 & 2).</p>
<p>The first line of html for table 3 looks like this:</p>
<p><a href="https://i.sstatic.net/Rkr2b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Rkr2b.png" alt="enter image description here" /></a></p>
<p>Notice multiple td's and th's in the tr.</p>
<p><strong>Example HTML:</strong>
<a href="https://i.sstatic.net/ZFbBh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZFbBh.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/fOT4e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fOT4e.png" alt="enter image description here" /></a></p>
<p><strong>CSV Formatting:</strong>
<a href="https://i.sstatic.net/wuare.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wuare.png" alt="enter image description here" /></a></p>
<p><strong>Page HTML:</strong></p>
<pre><code><div class="table-responsive" id="section-1">
<h2>1.&nbsp;Identification de l'unité d'évaluation</h2>
<table class="table table-borderless table-condensed">
<tbody><tr>
<td width="50%">Adresse&nbsp;:</td>
<th width="50%">325 Chemin de la Pointe-Sud</th>
</tr>
<tr>
<td>Arrondissement&nbsp;:</td>
<th>Arrondissement de Verdun</th>
</tr>
<tr>
<td>Numéro de lot&nbsp;:</td>
<th>
4070799
</th>
</tr>
<tr>
<td>Numéro de matricule&nbsp;:</td>
<th>0034-33-9422-7-000-0000</th>
</tr>
<tr>
<td>Utilisation prédominante&nbsp;:</td>
<th>Maison pour personnes retraitées autonomes</th>
</tr>
<tr>
<td>Numéro d'unité de voisinage&nbsp;:</td>
<th>4882</th>
</tr>
<tr>
<td>Numéro de compte foncier&nbsp;:</td>
<th>28 - F01810500</th>
</tr>
</tbody></table>
</div>
<div class="table-responsive" id="section-2">
<h2>2.&nbsp;Propriétaire</h2>
<table class="table table-borderless table-condensed">
<tbody><tr>
<td width="50%">Nom&nbsp;:</td>
<th width="50%">9427686 CANADA INC.</th>
</tr>
<tr>
<td>Statut aux fins d'imposition scolaire&nbsp;:</td>
<th>Personne morale</th>
</tr>
<tr>
<td>Adresse postale&nbsp;:</td>
<th>2400 BOUL DANIEL-JOHNSON , LAVAL QUEBEC, H7T 3A4</th>
</tr>
<tr>
<td>Date d'inscription au rôle&nbsp;:</td>
<th>
2015-10-02</th>
</tr>
</tbody></table>
</div>
<div class="table-responsive" id="section-3">
<h2>3.&nbsp;Caractéristiques de l'unité d'évaluation</h2>
<table class="table table-borderless table-condensed">
<tbody><tr>
<th colspan="3" width="50%"><h3>Caractéristiques du terrain</h3></th>
<th colspan="3" width="50%"><h3>Caractéristiques du bâtiment principal</h3></th>
</tr>
<tr>
<td>Mesure frontale&nbsp;:</td>
<th class="text-right">
</th>
<td>&nbsp;</td>
<td>Nombre d'étages&nbsp;:</td>
<th class="text-right">10</th>
<td>&nbsp;</td>
</tr>
<tr>
<td>Superficie&nbsp;:</td>
<th class="text-right">
14&nbsp;196,30&nbsp;m<sup>2</sup>
</th>
<td>&nbsp;</td>
<td>Année de construction&nbsp;:</td>
<th class="text-right">
2009
</th>
<td>&nbsp;</td>
</tr>
<tr>
<td>
</td>
<th class="text-right"></th>
<td>&nbsp;</td>
<td>Aire d'étages&nbsp;:</td>
<th class="text-right">
28&nbsp;585,20&nbsp;m<sup>2</sup>
</th>
<td>&nbsp;</td>
</tr>
<tr>
<th colspan="3" width="50%">
</th>
<td>Genre de construction&nbsp;:</td>
<th class="text-right"></th>
<td>&nbsp;</td>
</tr>
<tr>
<td>
</td>
<th class="text-right">
</th>
<td>&nbsp;</td>
<td>Lien physique&nbsp;:</td>
<th class="text-right"></th>
<td>&nbsp;</td>
</tr>
<tr>
<td>
</td>
<th class="text-right">
</th>
<td>&nbsp;</td>
<td>Nombre de logements&nbsp;: </td>
<th class="text-right">247</th>
<td>&nbsp;</td>
</tr>
<tr>
<td>
&nbsp;
</td>
<th class="text-right">
</th>
<td>&nbsp;</td>
<td>Nombre de locaux non résidentiels&nbsp;:</td>
<th class="text-right">6</th>
<td>&nbsp;</td>
</tr>
<tr>
<td>&nbsp;</td>
<th>&nbsp;</th>
<td>&nbsp;</td>
<td>Nombre de chambres locatives&nbsp;:</td>
<th class="text-right">56</th>
<td>&nbsp;</td>
</tr>
</tbody></table>
</div>
<div class="table-responsive" id="section-4">
<h2>4.&nbsp;Valeurs au rôle d'évaluation</h2>
<table class="table table-borderless table-condensed">
<tbody><tr>
<th colspan="3" width="50%"><h3>Rôle courant</h3></th>
<th colspan="3" width="50%"><h3>Rôle antérieur</h3></th>
</tr>
<tr>
<td width="35%">Date de référence au marché&nbsp;:</td>
<th width="15%" class="text-right">2021-07-01</th>
<td></td>
<td width="35%">Date de référence au marché&nbsp;:</td>
<th width="15%" class="text-right">2018-07-01</th>
<td></td>
</tr>
<tr>
<td>Valeur du terrain&nbsp;:</td>
<th class="text-right">
13&nbsp;429&nbsp;700&nbsp;$
</th>
<td></td>
<td>Valeur de l'immeuble au rôle antérieur&nbsp;:</td>
<th class="text-right">
32&nbsp;070&nbsp;000&nbsp;$
</th>
<td></td>
</tr>
<tr>
<td>Valeur du bâtiment&nbsp;:</td>
<th class="text-right">
18&nbsp;990&nbsp;300&nbsp;$
</th>
<td></td>
</tr>
<tr>
<td>Valeur de l'immeuble&nbsp;:</td>
<th class="text-right">
32&nbsp;420&nbsp;000&nbsp;$
</th>
<td></td>
</tr>
</tbody></table>
</div>
<div class="table-responsive" id="section-5A">
<h2>5.&nbsp;Répartition fiscale</h2>
<!-- div class="table-responsive"-->
<table id="repartitionTable2" class="table table-borderless table-condensed">
<tbody><tr>
<td colspan="2" width="40%">Catégorie et classe d'immeuble à des fins d'application<br>des taux de taxation&nbsp;:</td>
<th class="text-lefht" colspan="2" width="60%">Non résidentielle classe 1A, Six logements et plus</th>
</tr>
</tbody></table>
<br>
<table id="repartitionTable2" class="table table-borderless table-condensed">
<tbody>
<tr>
<td width="20%">Valeur imposable de l'immeuble&nbsp;:</td>
<th width="20%" class="text-right">
32&nbsp;420&nbsp;000&nbsp;$
</th>
<td width="30%">Valeur non imposable de l'immeuble&nbsp;:</td>
<th width="30%" class="text-right">
0&nbsp;$
</th>
</tr>
</tbody>
</table>
<!-- /div-->
</div>
<div class="table-responsive" id="section-5">
</div>
</code></pre>
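Since each label sits in a `<td>` and its value in the following `<th>`, one robust approach is to walk the cells in document order and pair each non-empty `<td>` label with the next `<th>`, which handles rows containing several label/value pairs (table 3) the same as single-pair rows (tables 1 and 2). A stdlib-only sketch; the question uses BeautifulSoup, where the same pairing logic applies to `soup.find_all(['td', 'th'])`:

```python
from html.parser import HTMLParser

class LabelValueParser(HTMLParser):
    """Pair each <td> label with the <th> value that follows it,
    in document order, regardless of how many pairs share a <tr>."""

    def __init__(self):
        super().__init__()
        self.pairs = []
        self._cell = None    # 'td'/'th' while inside a cell
        self._buf = []
        self._label = None   # pending td text waiting for its th

    def handle_starttag(self, tag, attrs):
        if tag in ("td", "th"):
            self._cell = tag
            self._buf = []

    def handle_data(self, data):
        if self._cell:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._cell == tag:
            # Collapse whitespace (including &nbsp;) and strip trailing colons:
            text = " ".join("".join(self._buf).split())
            if tag == "td" and text:
                self._label = text.rstrip(" :")
            elif tag == "th" and self._label is not None:
                self.pairs.append((self._label, text))
                self._label = None
            self._cell = None

parser = LabelValueParser()
parser.feed('<tr><td>Adresse&nbsp;:</td><th>325 Chemin</th>'
            '<td>Nombre&nbsp;:</td><th class="text-right">10</th></tr>')
```

Header cells (`<th><h3>...</h3></th>` with no preceding label) are skipped automatically because no `<td>` label is pending; the collected `pairs` can then be pivoted into one CSV row per file.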
|
<python><html><dataframe><web-scraping><beautifulsoup>
|
2023-02-21 14:09:21
| 1
| 949
|
RageAgainstheMachine
|
75,521,556
| 5,398,197
|
Python socketIO callback is lost: `Unknown callback received, ignoring.`
|
<p>I have a Flask-SocketIO server that connects with a number of python-socketio clients. I want to know which clients are online. To find out, I am sending a <code>ping</code> event from the server with a callback function to process the response.</p>
<p>The structure of the server code is as follows:</p>
<pre><code>class Server:
    def ping(self):
        while True:
            self.socketio.emit(
                'ping', namespace='/some_namespace', room='some_room',
                callback=self.pong_response
            )

    def pong_response(self, client_id):
        # goal: updates the client's status as online in a database
        print(client_id)
</code></pre>
<p>In the client application, we have:</p>
<pre><code>from socketio import ClientNamespace

class SocketClient(ClientNamespace):
    def on_ping(self) -> int:
        return <my_id>
</code></pre>
<p>Now, when I run the server and client application locally, this works fine: the client responds to the server and I can print the ID sent by the client app.</p>
<p>However, when the server app is deployed (in our case to an Azure App Service), the server logs print:</p>
<pre><code>socketio.server - WARNING - Unknown callback received, ignoring.
</code></pre>
<p>I have no clue why this works locally but not when deployed to Azure. Also, is this a good approach to track which clients are online?</p>
<hr />
<p>Possibly relevant details:</p>
<ul>
<li>server is on <code>Flask-SocketIO==5.1.1</code></li>
<li>clients are on <code>python-socketio==5.5.0</code></li>
<li>Python 3.7</li>
<li>On the server, the ping/pong is running in a separate thread</li>
</ul>
<hr />
<p><strong>Update</strong>:
It seems the message queue (RabbitMQ) is causing the issue. When defining our socketIO connection we use:</p>
<pre><code>socketio = SocketIO(
    self.app,
    async_mode='gevent_uwsgi',
    message_queue=msg_queue,
)
</code></pre>
<p>It works when <code>message_queue=None</code> is passed in. However, this obviously does not fix our issue, since we want to run multiple server instances. I have yet to figure out why this doesn't work and how to fix it.</p>
|
<python><azure-web-app-service><flask-socketio><python-socketio>
|
2023-02-21 14:02:25
| 1
| 328
|
Bart
|
75,521,500
| 1,616,785
|
Better way to identify chunks where data is available in zarr
|
<p>I have a zarr store of weather data at a 1-hour time interval for the year 2022, so 8760 chunks. But there is data only for random days. How do I check for which of the hours 0 to 8760 data is available? The store is defined with <code>"fill_value": "NaN"</code>.</p>
<p>I am iterating over each hour and checking whether everything is NaN as below (using <code>xarray</code>) to identify if there is data or not, but it is a very time-consuming process.</p>
<pre><code>hours = 8760
for hour in range(hours):
    if not np.isnan(np.array(xarrds['temperature'][hour])).all():
        print(f"data available in hour: {hour}")
</code></pre>
<p>Is there a better way to check the data availability?</p>
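Two vectorized alternatives: compute the all-NaN mask for every hour in one call (sketched below with NumPy on a small stand-in array; with xarray the same idea is `xarrds['temperature'].isnull().all(dim=(...))` over the spatial dims), or, since the store has a fill value, list which chunk keys actually exist in the zarr store, which avoids reading array data at all:

```python
import numpy as np

# Stand-in for the (hour, y, x) temperature array, mostly empty:
arr = np.full((5, 2, 2), np.nan)
arr[1] = 1.0          # hour 1 fully populated
arr[3, 0, 0] = 2.0    # hour 3 partially populated

# One vectorized pass instead of a per-hour Python loop:
has_data = ~np.isnan(arr).all(axis=(1, 2))
hours_with_data = np.flatnonzero(has_data)
```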
|
<python><python-xarray><zarr>
|
2023-02-21 13:58:11
| 1
| 1,401
|
sjd
|
75,521,488
| 3,265,791
|
Docker and tensorflow - image size explodes with tensorflow
|
<p>I am trying to add tensorflow to my conda environment by adding the following dependencies to my .yml environment file:</p>
<pre><code>name: py310
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pandas
</code></pre>
<h3>Additional tensorflow dependencies</h3>
<pre><code>  - pip
  - cudatoolkit=11.2
  - cudnn=8.1.0
  - pip:
      - keras-tuner
      - tensorflow>2.0
</code></pre>
<p>The image size is 3-4 GB larger with tensorflow and its dependencies. Is this what I should expect, or can I somehow reduce the size of this installation (for example, I don't need GPU support)?</p>
<p>The base image is: <code>FROM condaforge/mambaforge</code></p>
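If GPU support isn't needed, the biggest wins are dropping `cudatoolkit`/`cudnn` entirely and installing the CPU-only wheel, which is several GB smaller. A sketch of the trimmed environment file (`tensorflow-cpu` is the official CPU-only pip package):

```yaml
name: py310
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pandas
  - pip
  - pip:
      - keras-tuner
      - tensorflow-cpu>2.0
```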
|
<python><docker><tensorflow><pip><conda>
|
2023-02-21 13:57:13
| 0
| 639
|
MMCM_
|
75,521,428
| 3,265,791
|
Docker and conda - how does "clean --all -y" work?
|
<p>I am a bit confused by the conda command <code>conda clean --all -y</code> inside a Docker script.
Generally, the idea is to shrink the final Docker image. <code>conda clean --all -y</code> should delete downloaded tarballs, and indeed the Docker log shows:</p>
<pre><code>Will remove 430 (853.4 MB) tarball(s).
</code></pre>
<p>However, the final image size is identical whether or not I include <code>conda clean --all -y</code>. Do I additionally need to explicitly delete files with <code>rm -rf</code>, or how else can the identical size be explained?</p>
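Each Dockerfile `RUN` produces an immutable layer; deleting files in a later `RUN` only masks them, while the bytes stay in the earlier layer, which is why the size doesn't change. Cleaning must happen in the same `RUN` that downloads the tarballs (a sketch; the environment file name is illustrative):

```dockerfile
FROM condaforge/mambaforge

COPY environment.yml .

# Install and clean in ONE layer, so the tarballs never land in any layer:
RUN mamba env create -f environment.yml && \
    conda clean --all -y
```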
|
<python><docker><conda>
|
2023-02-21 13:51:31
| 1
| 639
|
MMCM_
|
75,521,330
| 6,213,883
|
Why does Python seem to behave differently between a call with pytest and a "manual" call?
|
<h2>Context</h2>
<p>I want to test the behavior of a singleton class when used in multiprocessing environment because it has been brought to my attention that it does not work properly. It seems the same object is being used in two different processes.</p>
<h2>Minimal example</h2>
<ul>
<li>Python 3.8.3</li>
<li>Windows 10 enterprise X64</li>
<li>pytest 6.1.2</li>
</ul>
<h3>singleton.py</h3>
<pre class="lang-py prettyprint-override"><code>from threading import Lock


class SingletonMeta(type):
    _instances = {}
    _lock = Lock()

    def __call__(cls, *args, **kwargs):
        with cls._lock:
            if cls not in cls._instances:
                cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]
</code></pre>
<h3>manual_test.py</h3>
<pre class="lang-py prettyprint-override"><code>from singleton import SingletonMeta
from multiprocessing import Pool


class Test(metaclass=SingletonMeta):
    def __init__(self, value):
        self.value = value


def do(value):
    t = Test(value)
    return t.value, id(t)


if __name__ == "__main__":
    with Pool(2) as pool:
        values = [1, 2]
        results = pool.map(do, values)
    results.sort()
    print("results: ", results)
</code></pre>
<h3>test_multiprocessing.py</h3>
<pre class="lang-py prettyprint-override"><code>from singleton import SingletonMeta
import pytest
from multiprocessing import Pool


class Singleton(metaclass=SingletonMeta):
    def __init__(self, value):
        self.value = value


def do(value):
    t = Singleton(value=value)
    return t.value, id(t)


def test_multiprocess():
    # given
    value1 = 1
    value2 = 2

    # when
    with Pool(2) as pool:
        results = pool.map(do, [value1, value2])
    results.sort()
    print("results", results)

    # then
    assert results[0][0] == value1
    assert results[1][0] == value2
</code></pre>
<h2>manual_test.py output</h2>
<pre><code>$ python manual_test.py
results: [(1, 2076407268784), (1, 2076407268784)]
</code></pre>
<p>As you can see, the <code>value</code> and the object's id are the same for both processes.</p>
<h2>pytest output</h2>
<p>note: truncated to remove noise</p>
<pre><code>$ pytest -rA -v -p no:faulthandler
results [(1, 2356600174768), (2, 2732965816496)]
</code></pre>
<p>As you can see, both the values and the object's id are different.</p>
<h2>Problem</h2>
<p>Given that these two programs have almost exactly the same code, I was expecting the same behavior from both:</p>
<ul>
<li>The first value of the set should be the same</li>
<li>The id of the objects should be the same</li>
</ul>
<p>However, this is only the case when calling manual_test.py, not with the pytest utility. My final goal is to have my class work in multiprocessing and test it in my library, hence I would like to know:</p>
<ul>
<li>why is pytest behaving differently? (or the other way around; I am not sure which one is the "correct" behavior)</li>
<li>if the pytest behavior is "incorrect" (or at least unexpected), how can I fix it?</li>
</ul>
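<p>A quick way to narrow this down is to check which start method each run actually uses and whether the worker processes are distinct. A minimal, framework-free sketch (on Windows the method should be <code>spawn</code>; a difference between the two runs here would point at how pytest sets up the interpreter rather than at the singleton itself):</p>

```python
import multiprocessing as mp
import os


def report(value):
    # return the value together with the worker's process id
    return value, os.getpid()


if __name__ == "__main__":
    print("start method:", mp.get_start_method())
    with mp.Pool(2) as pool:
        results = pool.map(report, [1, 2])
    print("results:", results)
```

If the two workers report different pids but the manual run still shows identical object ids, the ids were produced in different processes and merely collided, which is another thing worth ruling out since <code>id()</code> values are only unique within one process.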
|
<python><pytest><python-3.8>
|
2023-02-21 13:42:48
| 1
| 3,040
|
Itération 122442
|
75,521,238
| 1,436,812
|
Matplotlib figure as SVG with Shiny for Python
|
<p>The axis labels and titles in the figure in the app below appear unsharp to me. I assume it's because the figure is rendered as a PNG, so rendering it as an SVG should fix the issue. However, I'm not sure how to do that. Any pointers?</p>
<pre><code>from shiny import *
import matplotlib.pyplot as plt

app_ui = ui.page_fluid(
    ui.output_plot("dens_plot"),
    ui.input_slider(id="n", label="slider", min=10, max=50, value=10)
)


def server(input, output, session):
    @output
    @render.plot
    def dens_plot():
        xs = list(range(input.n() + 1))
        ys = [1] * len(xs)
        fig, ax = plt.subplots()
        ax.stem(xs, ys, markerfmt=" ")
        ax.set_xlabel("X title")
        ax.set_ylabel("Y title")
        return fig


app = App(app_ui, server)
</code></pre>
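<p>Independent of Shiny, matplotlib can render a figure to SVG markup in memory. One hedged approach is to return that markup through <code>ui.output_ui</code>/<code>ui.HTML</code> instead of <code>output_plot</code>; the Shiny wiring is an assumption, only the SVG generation below is shown runnable:</p>

```python
import io

import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt


def figure_as_svg():
    fig, ax = plt.subplots()
    ax.stem(range(11), [1] * 11, markerfmt=" ")
    ax.set_xlabel("X title")
    ax.set_ylabel("Y title")
    buf = io.StringIO()
    fig.savefig(buf, format="svg")  # vector output: text stays sharp at any zoom
    plt.close(fig)
    return buf.getvalue()


svg_text = figure_as_svg()
```

In the app, a renderer could then return <code>ui.HTML(svg_text)</code> for a <code>ui.output_ui("dens_plot")</code> slot.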
|
<python><py-shiny>
|
2023-02-21 13:33:57
| 1
| 641
|
Stefan Hansen
|
75,520,939
| 8,681,882
|
Optimised way to format an 2D float array to a list of list of dictionaries
|
<p>I'm making a web app using FastAPI.</p>
<p>I have an endpoint named <code>/distances</code> which outputs an array of float <code>distances</code> computed from a string input <code>word</code>, and then formats it into a list of dictionaries so it is readable as a JSON API response:</p>
<pre><code>@app.get("/distances")
async def distances(request: Request, word):
    # batch processing
    distances = await process_batch(word)
    # formatting step
    formatted_distances = [{'d': f'{d:.5f}'} for d in distances]
    return {"response": {
        "distances": formatted_distances}}
</code></pre>
<p>I'm using a batch processing function to process multiple words at the same time</p>
<pre><code>@serve.batch(max_batch_size=MAX_BATCH_SIZE,
             batch_wait_timeout_s=BATCH_WAIT_TIMEOUT)
async def process_batch(words):
    ...
    return distances_list
</code></pre>
<p>The problem is that the formatting step takes a lot of time and creates a bottleneck when traffic is high.</p>
<p>Is there a faster way to process my distances array to avoid this bottleneck? Maybe directly in my batch processing function?</p>
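<p>If the distances come out of the batch step as a NumPy array, the per-element f-string is usually the dominant cost. A hedged sketch of two cheaper options: return rounded floats and let JSON carry numbers, or, if strings are truly required, let NumPy format the whole array at once:</p>

```python
import numpy as np

distances = np.random.rand(100_000)

# Option 1: skip string formatting entirely; round once, keep floats.
formatted = [{"d": d} for d in np.round(distances, 5).tolist()]

# Option 2: if the API must return strings, vectorize the formatting.
as_strings = np.char.mod("%.5f", distances)
formatted_str = [{"d": s} for s in as_strings.tolist()]
```

Option 1 also shrinks the payload, since JSON numbers are cheaper to serialize than quoted strings.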
|
<python><arrays><fastapi>
|
2023-02-21 13:06:48
| 0
| 337
|
Noa Be
|
75,520,831
| 7,677,894
|
Is LabelEncoder ordered in sklearn?
|
<p><code>LabelEncoder</code> is used to generate labels for PyTorch projects, with code like:</p>
<pre><code>from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(annotation['instance_ids'])
annotation['labels'] = list(map(int,label_encoder.transform(annotation['instance_ids'])))
</code></pre>
<p>The question is:</p>
<ol>
<li>Are the generated labels strictly the same across different runs? More specifically, will <code>instance_id_1</code> be mapped to <code>label_1</code> at all times?</li>
<li>What ordering rule is used to generate the labels?</li>
</ol>
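<p>As far as I can tell from scikit-learn's behavior, <code>LabelEncoder.fit</code> derives its classes from the sorted unique values (via <code>numpy.unique</code>), so the mapping is deterministic across runs as long as the set of ids is the same; adding or removing an id can shift the labels of everything that sorts after it. The sorting rule can be demonstrated with NumPy alone:</p>

```python
import numpy as np

ids = np.array(["b12", "a07", "b12", "c99", "a07"])

# np.unique returns the classes in sorted order and, with
# return_inverse=True, the integer label for every element --
# the same mapping LabelEncoder produces.
classes, labels = np.unique(ids, return_inverse=True)
print(classes)  # ['a07' 'b12' 'c99']
print(labels)   # [1 0 1 2 0]
```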
|
<python><scikit-learn>
|
2023-02-21 12:55:58
| 1
| 983
|
Ink
|
75,520,797
| 1,697,288
|
SQLAlchemy MSSQL bulk upSert
|
<p>I've been trying various methods to bulk-upsert into an Azure SQL (MSSQL) database using SQLAlchemy 2.0. The source table is fairly large (2M records) and I need to bulk-upsert 100,000 records, most of which won't already be there.</p>
<p><strong>NOTE</strong> This will run as an Azure function so if there is a better way I'm open to this</p>
<pre><code>class issues(Base):
    __tablename__ = "issues"
    id = mapped_column('id', String(36), primary_key=True)
    created = mapped_column('created', DateTime())
    updated = mapped_column('updated', DateTime())
    status = mapped_column('status', String(50))
    severity = mapped_column('severity', String(10))
    control_id = mapped_column('control_id', String(36))
    entity_id = mapped_column('entity_id', String(36))
</code></pre>
<p>Example data</p>
<pre><code>issueList = {
    issues("1234", datetime.now(), datetime.now(), "Test", "Low8", "con123", "ent123"),
    issues("5678", datetime.now(), datetime.now(), "Test", "Low9", "con123", "ent123"),
}
</code></pre>
<p>Currently I'm doing <code>session.merge(issue)</code> but it's slow and doesn't support bulk inserts, I've looked at <a href="https://stackoverflow.com/a/69968892/1697288">https://stackoverflow.com/a/69968892/1697288</a> but have been getting errors as I was passing:</p>
<pre><code>issueList = {
    "1234": { id: "1234", "created": datetime.now(), "updated": datetime.now, "status": "Test", "severity": "Low16", "control_id": "con123", "entity_id": "ent123" },
    "5678": { id: "5678", "created": datetime.now(), "updated": datetime.now, "status": "Test", "severity": "Low9", "control_id": "con123", "entity_id": "ent123" },
}

upsert_data(session, issueList, "issues", "id")
</code></pre>
<p>It seems to be expecting a model, not text, for the 3rd parameter, so I wasn't sure what to send.</p>
<p>Any suggestions for a fast approach would be great. Only this application will be inserting data, so locking the db isn't an issue as long as the lock is cleared on error.</p>
<p>Thanks.</p>
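<p>One common pattern for MSSQL is to bulk-insert the batch into a staging table and then run a single <code>MERGE</code> into the target. The statement below is a hedged sketch built as text (table and column names are taken from the model above; <code>issues_staging</code> is an assumed staging-table name, and this has not been tested against a live Azure SQL instance):</p>

```python
# Build the MERGE statement as text; with SQLAlchemy you would run it via
# session.execute(text(merge_sql)) after bulk-inserting the 100k rows into
# the staging table (e.g. with session.execute(insert(staging), rows)).
cols = ["created", "updated", "status", "severity", "control_id", "entity_id"]
update_set = ", ".join(f"target.{c} = source.{c}" for c in cols)
insert_cols = ", ".join(["id"] + cols)
insert_vals = ", ".join(f"source.{c}" for c in ["id"] + cols)

merge_sql = f"""
MERGE issues AS target
USING issues_staging AS source
    ON target.id = source.id
WHEN MATCHED THEN
    UPDATE SET {update_set}
WHEN NOT MATCHED THEN
    INSERT ({insert_cols}) VALUES ({insert_vals});
"""
print(merge_sql)
```

A set-based MERGE avoids the per-row round trips that make <code>session.merge()</code> slow.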
|
<python><sql-server><sqlalchemy><azure-functions><azure-sql-database>
|
2023-02-21 12:52:04
| 1
| 463
|
trevrobwhite
|
75,520,480
| 4,576,519
|
Append model checkpoints to existing file in PyTorch
|
<p>In PyTorch, it is possible to save model checkpoints as follows:</p>
<pre class="lang-py prettyprint-override"><code>import torch

# Create a model
model = torch.nn.Sequential(
    torch.nn.Linear(1, 50),
    torch.nn.Tanh(),
    torch.nn.Linear(50, 1)
)

# ... some training here

# Save checkpoint
torch.save(model.state_dict(), 'checkpoint.pt')
</code></pre>
<p>During my training procedure, I save a checkpoint every 100 epochs or so. Currently this results in a folder with many files, e.g.</p>
<pre><code>checkpoint0.pt
checkpoint100.pt
checkpoint200.pt
</code></pre>
<p>I was wondering if it was possible to <em>append</em> checkpoints to an existing file, so I don't clutter my disk with small files but instead have only a single file called <code>checkpoints.pt</code>. I currently have implemented this as follows:</p>
<pre><code>import torch

# Create a model
model = torch.nn.Sequential(
    torch.nn.Linear(1, 50),
    torch.nn.Tanh(),
    torch.nn.Linear(50, 1)
)

# ... some training here

# Save 1st checkpoint
data = {'0': model.state_dict()}
torch.save(data, 'checkpoints.pt')

# ... some training here

# Save 2nd checkpoint
data = torch.load('checkpoints.pt')
data['100'] = model.state_dict()
torch.save(data, 'checkpoints.pt')

print(torch.load('checkpoints.pt'))
</code></pre>
<p>But the problem is that it requires loading the existing file into memory before appending a new checkpoint, which is memory-intensive, especially considering that I have hundreds of checkpoints. Is there a way to do this (or something similar) without having to load the existing checkpoints back into memory?</p>
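<p>One append-friendly layout is a single zip archive opened in append mode, with each checkpoint as its own member: nothing already in the archive needs to be read back. The sketch below uses <code>pickle</code> on plain dicts so it runs without PyTorch; since <code>torch.save</code> accepts a file-like object, the same pattern should apply by swapping <code>pickle.dump</code>/<code>pickle.loads</code> for <code>torch.save</code>/<code>torch.load</code>:</p>

```python
import io
import pickle
import zipfile


def append_checkpoint(archive_path, tag, state):
    # serialize to memory (with torch: torch.save(state, buf))
    buf = io.BytesIO()
    pickle.dump(state, buf)
    # 'a' mode appends a new member without rewriting existing ones
    with zipfile.ZipFile(archive_path, "a") as zf:
        zf.writestr(f"checkpoint_{tag}.pkl", buf.getvalue())


def load_checkpoint(archive_path, tag):
    # only the requested member is read back into memory
    with zipfile.ZipFile(archive_path, "r") as zf:
        return pickle.loads(zf.read(f"checkpoint_{tag}.pkl"))


append_checkpoint("checkpoints.zip", 0, {"w": [0.1, 0.2]})
append_checkpoint("checkpoints.zip", 100, {"w": [0.3, 0.4]})
print(load_checkpoint("checkpoints.zip", 100))
```

Loading stays cheap too, since a single checkpoint can be extracted without touching the others.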
|
<python><file><pytorch><save><checkpointing>
|
2023-02-21 12:23:16
| 1
| 6,829
|
Thomas Wagenaar
|
75,520,180
| 6,108,107
|
calculate zscores using groupby and highlighting using style
|
<p>I want to highlight possible outliers in a dataframe based on zscore grouped by location. I have used <a href="https://stackoverflow.com/questions/70003657/highlight-outliers-using-zscore-in-pandas">this answer</a> to identify and score values but I can't seem to implement the .groupby correctly. For example <code>df.groupby('Loc').style.apply(highlight_outliers,color='red', threshold_val=1.5, axis=None)</code> gives 'AttributeError: 'DataFrameGroupBy' object has no attribute 'style'</p>
<p>Dummy data:</p>
<pre><code>import numpy as np
import pandas as pd
array=[['Site 1',750.0, 1.1e-09, 'daljk', 6.0],
['Site 1',890.0, 1e-09, 'djfh', 8.0],
['Site 1',1720.0, 1e-09, 'dkhf', 4.0],
['Site 1',999.0, 1e-09, 'dkafh', 10.0],
['Site 1',890.0, 1e-09, 'dkajfh', 0.0005],
['Site 1',909.0, 1e-09, 'jkdafh', 6.0],
['Site 1',1002.0, 1e-09, 'dlfakh', np.nan],
['Site 1',990.0, 1e-09, 'ldkj', 3.0],
['Site 1',0.0001, 1e-09, 'dlkfj', 10.0],
['Site 2',7500.0, 1.1e-09, 'daljk', 6.0],
['Site 2',890.0, 1e-09, 'djfh', 8.0],
['Site 2',1720.0, 1e-09, 'dkhf', 4.0],
['Site 2',1, 1e-09, 'dkafh', 10.0],
['Site 2',890.0, 1e-09, 'dkajfh', 0.0005],
['Site 2',909.0, 1e-09, 'jkdafh', 6.0],
['Site 2',1002.0, 1e-09, 'dlfakh', np.nan],
['Site 2',990.0, 1e-09, 'ldkj', 3.0],
['Site 2',0.0001, 1e-09, 'dlkfj', 10.0]]
df = pd.DataFrame(array, columns = ['Loc','A','B','C','D'])
df
</code></pre>
<p>Code not working:</p>
<pre><code>from scipy import stats

css_colours = {"red": 'red'}

def highlight_outliers(x, color, threshold_val):
    color = css_colours[color]
    # extract numeric columns
    c = x.select_dtypes([np.number]).columns
    # create df of numeric cols
    df2 = pd.DataFrame(x, columns=c)
    # calculate zscores
    df2 = df2.apply(stats.zscore, nan_policy='omit').abs()
    # boolean mask of values greater than threshold value
    mask = (df2[c].apply(pd.to_numeric, errors='coerce').fillna(-np.Inf).replace(0, -np.Inf).values &lt; threshold_val)
    # create blank df of numeric cols
    df1 = pd.DataFrame('', index=x.index, columns=c)
    # style locations which exceed threshold (fill orange) based on mask
    df1 = (df1.where(mask, 'background-color:{}'.format(color)).reindex(columns=x.columns, fill_value=''))
    return df1

df.groupby(['Loc']).style.apply(highlight_outliers, color='red', threshold_val=1.5, axis=None)
</code></pre>
<p>I think I need to use 'transform', but I can't see how to apply it in this case. Creating a boolean mask is more intuitive to me (I'm a novice in Python).
Expected output:</p>
<p><a href="https://i.sstatic.net/eKn0G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eKn0G.png" alt="output of values highlighted based on zscore" /></a></p>
|
<python><pandas>
|
2023-02-21 11:54:21
| 1
| 578
|
flashliquid
|
75,520,141
| 19,130,803
|
file write with cancel using python
|
<p>I am writing data to a new file. I read the data from a Flask request (a file upload), and while writing I provide an option to cancel the write; for that I use a separate process and an event, passing the required arguments.</p>
<p><strong>Read file</strong></p>
<pre><code>file = request.files.get("file")

# method-1:
contents = file.stream.read()
# method-2:
contents = file.stream.readlines()

# e.g. file is train.csv (10 MB)
</code></pre>
<p><strong>Write file</strong></p>
<pre><code>def __init__(self, filename: str, contents: Any, read_length: int) -> bool:
    file: Path = DIRPATH / self.filename

    # method-1:
    with open(file, "wb") as fp:
        write_length = fp.write(self.contents)
        if self.event.wait(0.4):
            break
</code></pre>
<p>Result for method-1: the entire file is written in one go and my cancel option becomes useless, but the writing is very fast and takes only a few seconds.</p>
<pre><code># method-2:
with open(file, "wb") as fp:
    for line in self.contents:
        cnt = fp.write(line)
        write_length += cnt
        if self.event.wait(0.4):
            break
        else:
            continue
</code></pre>
<p>Result for method-2: the file is written line by line and I can cancel the writing successfully, but the writing speed is significantly slower and takes several minutes.</p>
<p>Is there a way to write larger chunks to the file between checks of the event, making writing faster, while still using read() or readlines()?</p>
|
<python>
|
2023-02-21 11:51:11
| 1
| 962
|
winter
|
75,520,116
| 13,314,132
|
How do I use TextClassifier to load a previously generated model?
|
<p>I have used <code>arcgis</code>'s <code>learn.text</code> module to import <code>TextClassifier</code> in order to create a machine-learning model. Now I want to use the same model in <code>Streamlit</code> to create an interface for reuse and to display the predictions.</p>
<p>Code for streamlit-app:</p>
<pre><code>import streamlit as st
import os
from arcgis.learn.text import TextClassifier, SequenceToSequence
import pickle

with st.sidebar:
    st.image('https://www.attomdata.com/wp-content/uploads/2021/05/ATTOM-main-full-1000.jpg')
    st.title("AutoAttom")
    st.info("This project application will help in text classification and sequence to sequence labelling")

# Text Classifier Section
st.title("Text Classifier")
user_input = st.text_input(""" """)
if user_input:
    model_folder = "models/text-classifier"
    print(os.listdir(model_folder))
    model_path = os.path.join(model_folder, 'text-classifier.pth')
    model = TextClassifier.load(model_path, name_or_path=model_path)
    st.write(model.predict(user_input))
</code></pre>
<p>Now, whenever I am running this code I am getting the following error:</p>
<pre><code>NotADirectoryError: [WinError 267] The directory name is invalid: 'models\\text-classifier\\text-classifier.pth'
Traceback:
File "d:\python projects\attom\text2seq\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "D:\Python Projects\ATTOM\app.py", line 18, in <module>
model = TextClassifier.load(model_path, name_or_path=model_path)
File "d:\python projects\attom\text2seq\lib\site-packages\arcgis\learn\text\_text_classifier.py", line 294, in load
name_or_path = str(_get_emd_path(name_or_path))
File "d:\python projects\attom\text2seq\lib\site-packages\arcgis\learn\_utils\common.py", line 460, in _get_emd_path
list_files = get_files(emd_path, extensions=['.emd'])
File "d:\python projects\attom\text2seq\lib\site-packages\fastai\data_block.py", line 44, in get_files
f = [o.name for o in os.scandir(path) if o.is_file()]
</code></pre>
<p>Now, I have checked my folder structure numerous times. And it is correct as follows:</p>
<pre><code>D:\PYTHON PROJECTS\ATTOM\MODELS\TEXT-CLASSIFIER
│ model_metrics.html
│ text-classifier.dlpk
│ text-classifier.emd
│ text-classifier.pth
│
└───ModelCharacteristics
loss_graph.png
sample_results.html
</code></pre>
<p>How do I reuse the model that I have generated?</p>
|
<python><arcgis><text-classification><streamlit>
|
2023-02-21 11:49:12
| 1
| 655
|
Daremitsu
|
75,519,932
| 2,160,256
|
Azure Function Python Model 2 in Docker container
|
<p>I am failing to get a minimal working example running with the following setup:</p>
<ul>
<li>azure function in docker container</li>
<li>python as language, specifically the <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python?tabs=asgi%2Capplication-level&pivots=python-mode-decorators#programming-model" rel="noreferrer">"new Python programming model V2"</a></li>
</ul>
<p>I followed the instructions from <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image?tabs=in-process%2Cbash%2Cazure-cli&pivots=programming-language-python#create-supporting-azure-resources-for-your-function" rel="noreferrer">here</a> but added the V2 flag, specifically:</p>
<pre><code> # init directory
func init --worker-runtime python --docker -m V2
# build docker image
docker build -t foo .
# run functions locally
docker run -p 80:80 foo
</code></pre>
<p>Whatever I tried, the runtime seems to not pick up the auto generated http trigger function</p>
<pre class="lang-py prettyprint-override"><code># function_app.py (autogenerated by func init ...)
import azure.functions as func

app = func.FunctionApp()

@app.function_name(name="HttpTrigger1")
@app.route(route="hello")  # HTTP Trigger
def test_function(req: func.HttpRequest) -> func.HttpResponse:
    return func.HttpResponse("HttpTrigger1 function processed a request!!!")
</code></pre>
<p>I think the relevant part of the logs is:</p>
<pre><code>info: Host.Startup[327]
1 functions found
info: Host.Startup[315]
0 functions loaded
info: Host.Startup[0]
Generating 0 job function(s)
warn: Host.Startup[0]
No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
info: Microsoft.Azure.WebJobs.Script.WebHost.WebScriptHostHttpRoutesManager[0]
Initializing function HTTP routes
No HTTP routes mapped
</code></pre>
<p>because when I use the "programming model V1", then the <code>Microsoft.Azure.WebJobs.Script.WebHost.WebScriptHostHttpRoutesManager</code> actually prints some info about the mapped routes.</p>
<p>How can I fix this? Is this not supported at the moment?</p>
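<p>At the time of writing, the Python V2 programming model requires the worker-indexing feature flag to be enabled, which a plain <code>func init</code> Dockerfile may not set; without it the host discovers the function file but loads 0 functions, exactly as in the log above. A hedged sketch of the fix:</p>

```dockerfile
# In the Dockerfile (or as an application setting on the Function App):
ENV AzureWebJobsFeatureFlags=EnableWorkerIndexing

# For local runs outside Docker, the same flag goes in local.settings.json:
#   "Values": { "AzureWebJobsFeatureFlags": "EnableWorkerIndexing", ... }
```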
|
<python><azure><azure-functions>
|
2023-02-21 11:33:24
| 1
| 860
|
Marti Nito
|
75,519,912
| 16,169,533
|
Flask (Working outside of application context) with RabiitMQ
|
<p>I'm trying to build a consumer on a Flask app, and when I try to get data from another app (Django) and create it in the Flask database, I get this error.</p>
<pre><code>RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
the current application. To solve this, set up an application context
with app.app_context(). See the documentation for more information.
</code></pre>
<p>My consumer.py is located in the same folder as main.py.</p>
<p>consumer.py</p>
<pre><code># amqps://yildtheb:ZEATuey0rMa34bFIZ7NSmcUIQIhu4JFH@hawk.rmq.cloudamqp.com/yildtheb
import pika, json

from main import Product, db

params = pika.URLParameters('amqps://yildtheb:ZEATuey0rMa34bFIZ7NSmcUIQIhu4JFH@hawk.rmq.cloudamqp.com/yildtheb')

connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue='main')

def callback(ch, method, properties, body):
    print('Message recieved')
    data = json.loads(body)
    print(data)
    print(properties.content_type)
    if properties.content_type == 'product created':
        print('product created')
        product = Product(id=data['id'], title=data['name'], image=data['image'])
        db.session.add(product)
        db.session.commit()
    elif properties.content_type == 'product updated':
        product = Product.query.get(data['id'])
        product.title = data['name']
        product.image = data['image']
        db.session.commit()
    elif properties.content_type == 'product deleted':
        product = Product.query.get(data)
        db.session.delete(product)
        db.session.commit()

channel.basic_consume(queue='main', on_message_callback=callback, auto_ack=True)

print('started consuming....')
channel.start_consuming()
channel.close()
</code></pre>
<p>How can I solve this error?</p>
<p>Thanks in advance.</p>
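<p>Since the consumer runs outside any HTTP request, database work needs an application context pushed manually. A minimal runnable sketch of the shape (only the context handling is shown; the real callback would import <code>app</code> from <code>main</code> and wrap its <code>db.session</code> work the same way):</p>

```python
from flask import Flask

app = Flask(__name__)  # real code: from main import app

def handle_message(data):
    # everything that touches db.session goes inside the context
    with app.app_context():
        # db.session.add(Product(...)); db.session.commit()
        return f"processed {data['id']}"

print(handle_message({"id": 1}))  # processed 1
```

In the pika <code>callback</code>, the <code>with app.app_context():</code> block would wrap each of the created/updated/deleted branches.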
|
<python><flask>
|
2023-02-21 11:32:07
| 1
| 424
|
Yussef Raouf Abdelmisih
|
75,519,754
| 8,176,763
|
object str cannot be used in await expression in psycopg3
|
<p>I have a function that copies a csv file to a database. I'm trying to do that asynchronously:</p>
<pre><code>import psycopg
from config import config
from pathlib import WindowsPath
from psycopg import sql
import asyncio


async def main():
    conn = await psycopg.AsyncConnection.connect(f'postgresql://{config.USER_PG}:{config.PASS_PG}@{config.HOST_PG}:{config.PORT_PG}/{config.DATABASE_PG}')
    p = WindowsPath(r'.\data\product_version.csv')
    async with conn:
        if p.exists():
            with p.open(encoding='utf-8-sig') as f:
                columns = list(next(f).strip().lower().split(','))
                async with conn.cursor() as cur:
                    await cur.execute(sql.SQL("TRUNCATE TABLE {} RESTART IDENTITY CASCADE").format(sql.Identifier('product_version_map')))
                    async with cur.copy(sql.SQL("COPY {} ({}) FROM STDIN WITH CSV").format(sql.Identifier('product_version_map'), sql.SQL(', ').join(map(sql.Identifier, columns)))) as copy:
                        while data := await f.read():
                            await copy.write(data)
        else:
            print(f'You need the product_version file')


if __name__ == '__main__':
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
    asyncio.run(main())
</code></pre>
<p>But I'm getting this error:</p>
<pre><code>psycopg.errors.QueryCanceled: COPY from stdin failed: error from Python: TypeError - object str can't be used in 'await' expression
</code></pre>
<p>Here <code>f</code> is a regular (synchronous) text-file object, so <code>f.read()</code> returns a <code>str</code>. The error comes from this line:</p>
<pre><code>while data := await f.read():
</code></pre>
<p>This is documentation I'm referring to when building this code:</p>
<p><a href="https://www.psycopg.org/psycopg3/docs/basic/copy.html#asynchronous-copy-support" rel="nofollow noreferrer">https://www.psycopg.org/psycopg3/docs/basic/copy.html#asynchronous-copy-support</a></p>
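<p>Since <code>p.open()</code> returns a synchronous file, its <code>read()</code> result is a plain <code>str</code> and must not be awaited; only the psycopg <code>copy.write()</code> call is a coroutine. A sketch of the corrected loop, reading in fixed-size chunks, with the psycopg parts left as comments so the snippet runs standalone:</p>

```python
import io

# stand-in for p.open(encoding='utf-8-sig'); in the real code this is the CSV file
f = io.StringIO("a,b\n1,2\n3,4\n")
columns = next(f).strip().lower().split(",")

chunks = []
# no `await` on the synchronous read; only copy.write(data) is awaited
while data := f.read(8192):
    chunks.append(data)          # real code: await copy.write(data)

print(columns, "".join(chunks))
```

An alternative is a truly async file via a library such as <code>aiofiles</code>, in which case <code>await f.read(...)</code> would be correct; mixing the two is what produces the error above.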
|
<python><asynchronous><python-asyncio><psycopg2><psycopg3>
|
2023-02-21 11:18:29
| 1
| 2,459
|
moth
|
75,519,704
| 16,981,638
|
how to remove list brackets from the web scraped data
|
<p>I'm trying to scrape some data from a website, but the final result has the output data in lists. How can I extract the data without those list brackets?</p>
<p><a href="https://i.sstatic.net/Mod2q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mod2q.png" alt="enter image description here" /></a></p>
<p><strong>The Original Code:-</strong></p>
<pre><code>user_input = 'ios-phones'  # input('Please Enter Your Favorite Item:- ')
try:
    data_list = []
    for i in range(1, 30):
        url = f'https://www.jumia.com.eg/{user_input}/?page={i}#catalog-listing'
        page = requests.get(url).content
        soup = BeautifulSoup(page, 'lxml')
        phones = soup.find('div', class_='-paxs row _no-g _4cl-3cm-shs')
        phones_info = phones.find_all('article', class_=True)
        for i in phones_info:
            try:
                title = i.select('.name')[0].text.strip()
                current_price = i.select('.prc')[0].text
                old_price = i.find('div', class_='old')
                rating = i.find('div', class_='stars')
            except:
                pass
            row = {'Phone Title': title, 'Current Price': current_price, 'Old Price': old_price, 'Rating': rating}
            data_list.append(row)
except:
    pass

df = pd.DataFrame(data_list)
df
</code></pre>
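<p>The bracketed values most likely come from storing whole <code>Tag</code> objects (the <code>find(...)</code> results for <code>old_price</code> and <code>rating</code>) rather than their text. Extracting <code>.get_text()</code> with a <code>None</code> check gives plain strings; a runnable sketch on a tiny HTML fragment (class names copied from the question, the markup itself is invented for the demo):</p>

```python
from bs4 import BeautifulSoup

html = """
<article>
  <div class="name">Phone X</div>
  <div class="prc">EGP 999</div>
  <div class="old">EGP 1,299</div>
</article>
"""
item = BeautifulSoup(html, "html.parser").find("article")

def text_or_none(tag):
    # Tag -> plain string; missing element -> None (no brackets in the DataFrame)
    return tag.get_text(strip=True) if tag is not None else None

row = {
    "Phone Title": text_or_none(item.find("div", class_="name")),
    "Old Price": text_or_none(item.find("div", class_="old")),
    "Rating": text_or_none(item.find("div", class_="stars")),
}
print(row)
```

Applying <code>text_or_none</code> to <code>old_price</code> and <code>rating</code> in the original loop should make every DataFrame cell a plain string (or <code>None</code>).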
|
<python><web-scraping><beautifulsoup>
|
2023-02-21 11:13:48
| 2
| 303
|
Mahmoud Badr
|
75,519,693
| 15,481,917
|
Django: get model's fields name
|
<p>I am trying to create a table in React that uses information from a Django backend as its table definition.</p>
<p>I would like to fetch the table's columns from the API, so I tried updating the model:</p>
<pre><code>class Activity(models.Model):
    aid = models.AutoField(primary_key=True)
    uid = models.ForeignKey(USER_MODEL, default=1, verbose_name="Category", on_delete=models.SET_DEFAULT,
                            db_column="uid")
    rid = models.IntegerField()
    action = models.TextField(max_length=254, blank=True, null=True)
    time = models.TextField(max_length=254, blank=True, null=True)
    table = models.TextField(max_length=254, blank=True, null=True)

    class Meta:
        managed = False
        db_table = "activity"

    @property
    def fields(self):
        return [f.name for f in self._meta.fields]
</code></pre>
<p>I am trying to use <code>fields()</code> as the method to get all the model's "column" names, so that I can place them in the API's response, but it does not work.</p>
<p>How can I get a django model's field names from the model's meta?</p>
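<p>One likely culprit, independent of Django: <code>fields</code> is declared with <code>@property</code>, so it must be accessed as an attribute, not called. Writing <code>instance.fields()</code> tries to call the returned list. A framework-free demonstration of the pitfall:</p>

```python
class Activity:
    @property
    def fields(self):
        # Django equivalent: [f.name for f in self._meta.fields]
        return ["aid", "uid", "rid", "action", "time", "table"]

a = Activity()
print(a.fields)      # works: a plain list, no parentheses
try:
    a.fields()       # the property already returned a list...
except TypeError as e:
    print(e)         # 'list' object is not callable
```

On the class itself, without an instance, the field names are also reachable via <code>Activity._meta.get_fields()</code> in Django.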
|
<python><django>
|
2023-02-21 11:13:12
| 2
| 584
|
Orl13
|
75,519,552
| 4,684,861
|
How to convert Pyspark Dataframe to Pandas on Spark Dataframe?
|
<p>I have an intermediate PySpark DataFrame that I want to convert to a pandas-on-Spark DataFrame (not just <code>toPandas()</code>).
I have gone through the official docs and found that <code>pandas_api()</code> does the job for me. But when I try to use it, it says <code>AttributeError: 'DataFrame' object has no attribute 'pandas_api'</code>.</p>
<pre><code> type(df)
Out[115]: pyspark.sql.dataframe.DataFrame
df.pandas_api()
AttributeError: 'DataFrame' object has no attribute 'pandas_api'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<command-2644367454426097> in <module>
----> 1 df.pandas_api()
/[REDACTED]/spark/python/pyspark/sql/dataframe.py in __getattr__(self, name)
1798 """
1799 if name not in self.columns:
-> 1800 raise AttributeError(
1801 "'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
1802 jc = self._jdf.apply(name)
AttributeError: 'DataFrame' object has no attribute 'pandas_api'
</code></pre>
<p>Pyspark version: 3.2.1</p>
|
<python><apache-spark><pyspark><databricks>
|
2023-02-21 10:59:19
| 1
| 11,241
|
Mohamed Thasin ah
|
75,519,501
| 12,752,172
|
How to pass checkbox data into main window function in customtkinter in python?
|
<p>I'm creating a Python app with <code>customtkinter</code>. I need to open another window when the user clicks a button, and on that window there are 2 options to select and submit. After clicking the submit button, I need to close that window, pass the selected values back to the main window, and display them in the log text box.</p>
<p><strong>This is what I tried;</strong></p>
<pre><code>import customtkinter

customtkinter.set_appearance_mode("System")  # Modes: "System" (standard), "Dark", "Light"
customtkinter.set_default_color_theme("blue")  # Themes: "blue" (standard), "green", "dark-blue"


class App(customtkinter.CTk):
    def __init__(self):
        super().__init__()

        # configure window
        self.title("User activity checker")
        self.geometry(f"{1100}x{580}")
        self.minsize(600, 400)

        # configure grid layout (4x4)
        self.grid_columnconfigure(1, weight=1)
        self.grid_columnconfigure((2, 3), weight=0)
        self.grid_rowconfigure((0, 1, 2), weight=1)

        # create sidebar frame with widgets
        self.sidebar_frame = customtkinter.CTkFrame(self, width=140, corner_radius=0)
        self.sidebar_frame.grid(row=0, column=0, rowspan=5, sticky="nsew")
        self.sidebar_frame.grid_rowconfigure(5, weight=1)
        self.logo_label = customtkinter.CTkLabel(self.sidebar_frame, text="User Activity",
                                                 font=customtkinter.CTkFont(size=20, weight="bold"))
        self.logo_label.grid(row=0, column=0, padx=20, pady=(20, 10))
        self.sidebar_button_1 = customtkinter.CTkButton(self.sidebar_frame, command=self.click_more_buttons,
                                                        text="Click More Buttons")
        self.sidebar_button_1.grid(row=1, column=0, padx=20, pady=10)
        self.sidebar_button_2 = customtkinter.CTkButton(self.sidebar_frame, command=self.take_user_input,
                                                        text="Take User input")
        self.sidebar_button_2.grid(row=2, column=0, padx=20, pady=10)
        self.sidebar_button_3 = customtkinter.CTkButton(self.sidebar_frame, command=self.check_user_input,
                                                        text="Check user input")
        self.sidebar_button_3.grid(row=3, column=0, padx=20, pady=10)
        self.sidebar_button_4 = customtkinter.CTkButton(self.sidebar_frame, command=self.exit,
                                                        text="Exit")
        self.sidebar_button_4.grid(row=4, column=0, padx=20, pady=10)
        self.appearance_mode_label = customtkinter.CTkLabel(self.sidebar_frame, text="Appearance Mode:", anchor="w")
        self.appearance_mode_label.grid(row=6, column=0, padx=20, pady=(10, 0))
        self.appearance_mode_optionemenu = customtkinter.CTkOptionMenu(self.sidebar_frame,
                                                                       values=["Light", "Dark", "System"],
                                                                       command=self.change_appearance_mode_event)
        self.appearance_mode_optionemenu.grid(row=7, column=0, padx=20, pady=(10, 10))

        # set textbox
        self.log_label = customtkinter.CTkLabel(self, text="Logs", font=customtkinter.CTkFont(size=16, weight="bold"))
        self.log_label.grid(row=0, column=1, columnspan=2, sticky="nsew")
        self.textbox = customtkinter.CTkTextbox(self, height=400)
        self.textbox.grid(row=1, column=1, rowspan=1, columnspan=2, padx=20, pady=(20, 0), sticky="nsew")
        self.combobox = customtkinter.CTkComboBox(master=self, values=["Sample text 1", "Text 2"])
        self.combobox.grid(row=2, column=1, padx=20, pady=20, sticky="ew")
        self.button = customtkinter.CTkButton(master=self, command=self.button_callback, text="Insert Text")
        self.button.grid(row=2, column=2, padx=20, pady=20, sticky="ew")

        # set default values
        self.appearance_mode_optionemenu.set("Dark")

    def change_appearance_mode_event(self, new_appearance_mode: str):
        self.textbox.insert("insert", "change the appearance mode \n")
        customtkinter.set_appearance_mode(new_appearance_mode)

    def button_callback(self):
        self.textbox.insert("insert", self.combobox.get() + "\n")

    def click_more_buttons(self):
        self.textbox.insert("insert", "Click button 1 \n")
        w = customtkinter.CTk()
        w.geometry("300x300")
        w.minsize(300, 300)
        w.title('Select Option')
        l1 = customtkinter.CTkLabel(master=w, text="Select option to process", font=('Century Gothic', 20))
        l1.pack(padx=20, pady=20)
        checkbox_1 = customtkinter.CTkCheckBox(master=w, text="Select option 1")
        checkbox_1.pack(padx=20, pady=20)
        checkbox_2 = customtkinter.CTkCheckBox(master=w, text="Select option 2")
        checkbox_2.pack(padx=20, pady=20)
        button3 = customtkinter.CTkButton(master=w, text="Submit")
        button3.pack(padx=20, pady=20)
        w.mainloop()

    def take_user_input(self):
        self.textbox.insert("insert", "Click button 2 \n")

    def check_user_input(self):
        self.textbox.insert("insert", "Click button 3 \n")

    def exit(self):
        self.textbox.insert("insert", "Click button 4 \n")


if __name__ == "__main__":
    app = App()
    app.mainloop()
</code></pre>
<p>Now I need to pass the selected values back and process further details with them. And if I am going about building the GUI the wrong way, please correct me; I'm new to Python and need to solve this problem.</p>
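<p>Framework aside, the usual shape of the fix has two parts: the second window should be a <code>CTkToplevel</code> (a second <code>CTk()</code> root with its own <code>mainloop()</code> tends to misbehave), and the dialog should receive a callback from the main window and invoke it on submit with the checkbox values. The data-flow part can be sketched without any GUI (class and option names here are placeholders):</p>

```python
class MainWindow:
    def __init__(self):
        self.log = []

    def click_more_buttons(self):
        # customtkinter: OptionDialog would be a CTkToplevel(self)
        OptionDialog(on_submit=self.on_options_selected)

    def on_options_selected(self, options):
        # customtkinter: self.textbox.insert("insert", f"{options}\n")
        self.log.append(options)


class OptionDialog:
    def __init__(self, on_submit):
        self.on_submit = on_submit
        # simulate the user ticking option 1 and pressing Submit;
        # in the GUI, submit() is the Submit button's command and the
        # values come from checkbox_1.get() / checkbox_2.get()
        self.submit({"option1": True, "option2": False})

    def submit(self, options):
        self.on_submit(options)  # hand the selection back to the main window
        # customtkinter: self.destroy() to close the dialog


main = MainWindow()
main.click_more_buttons()
print(main.log)
```

Because the callback belongs to the main window, no <code>mainloop()</code> call is needed in the dialog, and the main window decides what to do with the values.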
|
<python><tkinter><customtkinter>
|
2023-02-21 10:54:43
| 1
| 469
|
Sidath
|
75,519,060
| 773,718
|
Unable to save binary images data from a numpy array to a video file using openCV Python
|
<p>I'm reading a binary video file that has 54 frames of depth image data that was taken by the Kinect sensor.</p>
<p>I'm able to read the data frame by frame and show it by following the code below:</p>
<pre><code># Load the depth map data from the binary file
with open(dataset_path + filename, "rb") as f:
    i, = struct.unpack('i', f.read(4))  # frame count
    w, = struct.unpack('i', f.read(4))  # width
    h, = struct.unpack('i', f.read(4))  # height
    for _ in range(i):
        bytesread = f.read(320*240*4)
        depth_data = np.frombuffer(bytesread, dtype=np.uint32)
        depth_map = depth_data.reshape((240, 320))
        plt.imshow(depth_map, cmap='gray')
        plt.show()
</code></pre>
<p>However, when I try to save frames together to construct a video, it does not work. The code below generates the video file without errors, but I can't open it.</p>
<pre><code>with open(dataset_path + filename, "rb") as f:
    i, = struct.unpack('i', f.read(4))  # frame count
    w, = struct.unpack('i', f.read(4))  # width
    h, = struct.unpack('i', f.read(4))  # height
    fourcc = cv2.VideoWriter_fourcc('m', 'p', '4', 'v')
    out = cv2.VideoWriter("vid.mov", fourcc, 15, (w, h))
    for _ in range(i):
        bytesread = f.read(w*h*4)
        depth_data = np.frombuffer(bytesread, dtype=np.uint32)
        depth_map = depth_data.reshape((240, 320))
        out.write(depth_map.astype(np.uint8))
    out.release()
</code></pre>
<p>I've also tried different codecs without luck!</p>
|
<python><numpy><opencv><dataset>
|
2023-02-21 10:18:38
| 1
| 4,682
|
MSaudi
|
75,518,754
| 188,331
|
Train Tokenizer with HuggingFace dataset
|
<p>I'm trying to train the Tokenizer with HuggingFace <a href="https://huggingface.co/datasets/wiki_split" rel="nofollow noreferrer">wiki_split datasets</a>. According to the Tokenizers' <a href="https://github.com/huggingface/tokenizers" rel="nofollow noreferrer">documentation at GitHub</a>, I can train the Tokenizer with the following codes:</p>
<pre class="lang-py prettyprint-override"><code>from tokenizers import Tokenizer
from tokenizers.models import BPE
tokenizer = Tokenizer(BPE())
# You can customize how pre-tokenization (e.g., splitting into words) is done:
from tokenizers.pre_tokenizers import Whitespace
tokenizer.pre_tokenizer = Whitespace()
# Then training your tokenizer on a set of files just takes two lines of codes:
from tokenizers.trainers import BpeTrainer
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw", "wiki.valid.raw", "wiki.test.raw"], trainer=trainer)
# Once your tokenizer is trained, encode any text with just one line:
output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
print(output.tokens)
# ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?"]
</code></pre>
<p>However, the example is to load from three files: <code>wiki.train.raw</code>, <code>wiki.valid.raw</code> and <code>wiki.test.raw</code>. In my case, I am loading from <code>wiki_split</code> dataset. My code is as follow:</p>
<pre class="lang-py prettyprint-override"><code>from tokenizers.trainers import BpeTrainer
def iterator_wiki(dataset):
    for txt in dataset:
        if type(txt) != float:
            yield txt

trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train_from_iterator(iterator_wiki(wiki_train), trainer=trainer)
</code></pre>
<p>The <code>tokenizer.train_from_iterator()</code> only accepts 1 dataset split, how can I use the validation and test split here?</p>
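<p>A minimal sketch of one possible approach: chain the splits into a single iterator and pass that to the trainer, since it only needs one iterable of strings. Plain lists stand in for the actual <code>wiki_split</code> splits here.</p>

```python
from itertools import chain

# Stand-ins for the wiki_split train/validation/test splits (assumption:
# each split yields plain strings, as the iterator in the question does).
wiki_train = ["Some sentence .", "Another sentence ."]
wiki_valid = ["A validation sentence ."]
wiki_test = ["A test sentence ."]

def iterator_wiki(*datasets):
    # yield every non-NaN text from every split, one after the other
    for txt in chain(*datasets):
        if not isinstance(txt, float):
            yield txt

all_texts = list(iterator_wiki(wiki_train, wiki_valid, wiki_test))
print(len(all_texts))  # 4
# The combined iterator would then be passed as:
# tokenizer.train_from_iterator(iterator_wiki(wiki_train, wiki_valid, wiki_test), trainer=trainer)
```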
|
<python><huggingface-tokenizers>
|
2023-02-21 09:50:42
| 1
| 54,395
|
Raptor
|
75,518,559
| 12,860,141
|
How Do I unnest following structure of json into interpretable table using python?
|
<p>I have a dataframe <code>df</code> with a column called <code>test_col</code> that contains JSON structures as shown below. As you can see, the <code>lineItemPromotions</code> object has nested JSONs in it, and can contain 0-10 items. Unnesting should create a new row for each <code>id</code> under <code>lineItemPromotions</code>. How do I unnest this structure correctly?</p>
<pre><code>{'provider': 'ABC',
 'discountCodes_out': [],
 'discounts_out': [],
 'lineItemPromotions': [{'id': '1',
                         'discountCodes': [],
                         'discounts': [{'rule': 'Bundle Discount',
                                        'name': 'Bundle Discount',
                                        'ruleId': '',
                                        'campaignId': '419f9a2f-0342-41c0-ac79-419d1023aaa9',
                                        'centAmount': 1733550}],
                         'perUnitPromotionsShares': [1733550]},
                        {'id': '2',
                         'discountCodes': [],
                         'discounts': [{'rule': 'Bundle Discount',
                                        'name': 'Bundle Discount',
                                        'ruleId': '',
                                        'campaignId': '419f9a2f-0342-41c0-ac79-419d1023aaa9',
                                        'centAmount': 119438}],
                         'perUnitPromotionsShares': [119438, 119438]}]}
</code></pre>
<p>I tried the following code, but it does not work correctly: it gives me a nested item which I then have to unnest again. Sorry that I have to paste a picture to show the process and its results.</p>
<p><a href="https://i.sstatic.net/oNvCB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oNvCB.png" alt="enter image description here" /></a></p>
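<p>For reference, a pure-Python sketch of the flattening as I understand it: one output row per <code>id</code> under <code>lineItemPromotions</code> (and per discount), with top-level keys repeated. The flat column names chosen here are assumptions.</p>

```python
# A trimmed copy of the record from the question.
record = {
    'provider': 'ABC',
    'lineItemPromotions': [
        {'id': '1',
         'discounts': [{'rule': 'Bundle Discount', 'centAmount': 1733550}],
         'perUnitPromotionsShares': [1733550]},
        {'id': '2',
         'discounts': [{'rule': 'Bundle Discount', 'centAmount': 119438}],
         'perUnitPromotionsShares': [119438, 119438]},
    ],
}

rows = []
for item in record['lineItemPromotions']:
    for discount in item['discounts']:
        # one flat row per (line item, discount) pair; top-level keys repeated
        rows.append({
            'provider': record['provider'],
            'lineItemPromotions.id': item['id'],
            'discounts.rule': discount['rule'],
            'discounts.centAmount': discount['centAmount'],
        })

print(rows)
```

<p>In pandas, something like <code>pd.json_normalize(record, record_path=["lineItemPromotions", "discounts"], meta=["provider", ["lineItemPromotions", "id"]])</code> should express the same idea in one call.</p>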
|
<python><json><pandas>
|
2023-02-21 09:31:28
| 2
| 493
|
dan
|
75,518,463
| 8,099,689
|
What does "symbolically traceable" mean for a function?
|
<p>Specifically, I am referring to Python and PyTorch, but I guess the term has a meaning more general than Python/PyTorch.</p>
<p>The PyTorch <a href="https://pytorch.org/docs/stable/generated/torch._assert.html" rel="nofollow noreferrer">docs</a> state for the function <code>torch._assert</code>:</p>
<blockquote>
<p>A wrapper around Python’s assert which is symbolically traceable.</p>
</blockquote>
<ol>
<li>What does that mean?</li>
<li>Why would I want to use this version of assert and not Python's builtin <code>assert</code>?</li>
</ol>
|
<python><pytorch><assert>
|
2023-02-21 09:22:43
| 1
| 366
|
joba2ca
|
75,518,435
| 6,111,772
|
pysimplegui: why does a working layout fail in a Column / Frame?
|
<p>A working layout loses part of its information when used in a 'Column' or 'Frame'.
Minimized source:</p>
<pre><code>import PySimpleGUI as sg
lo = [
[sg.T("Line 1")],
[sg.T("Aa"),sg.T("Bb")],
[
[sg.T("1 "),sg.T("2")], # (*)
[sg.T("3 "),sg.T("4")], # (*)
[sg.T("5 "),sg.T("6")], # (*)
]
]
# (1)
layout=lo
# (2) layout=[[sg.Column(lo),sg.T("TEST")]]
# (3) layout=[[sg.Frame("Test",lo),sg.T("TEST")]]
window = sg.Window('W', layout)
while True:
    event, values = window.read()
    if event in (sg.WIN_CLOSED, "Cancel"):
        break
window.close()
</code></pre>
<p>Using the Layout alone (1) I get the following window:</p>
<p><a href="https://i.sstatic.net/c3BQc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c3BQc.png" alt="enter image description here" /></a></p>
<p>activating lines (2) or (3) instead, the (*) marked lines are marked as errors and are omitted from the window:</p>
<p><a href="https://i.sstatic.net/CJizd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CJizd.png" alt="enter image description here" /></a></p>
<p>For demonstration a "TEST" string was added; the problem is the same without it.
Any idea what's wrong? Thanks for your help!</p>
|
<python><pysimplegui><col>
|
2023-02-21 09:20:47
| 2
| 441
|
peets
|
75,518,353
| 11,449
|
Python logging output doesn't show unless I call another logger first
|
<p>I have some behaviour I can't explain wrt the Python logging library, and none of the docs nor tutorials I've read match with what I'm seeing (well they probably do, I guess I'm just missing some crucial detail somewhere). Consider first what does work as expected:</p>
<pre><code>import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logging.debug("Test 1")
logging.info("Test 2")
logging.warning("Test 3")
logger.debug("Test 4")
logger.info("Test 5")
</code></pre>
<p>Output is</p>
<pre><code>$ e:\Python311\python.exe testlogger.py
ERROR:root:Test 3
DEBUG:__main__:Test 4
INFO:__main__:Test 5
</code></pre>
<p>The default minimum log level for the 'module level' logger (so to call it) is logging.WARNING, so "Test 1" & "Test 2" are not shown, but "Test 3" is; and the instantiated logger is configured to log from level logging.DEBUG onwards so "Test 4" and "Test 5" are shown, too. So far so good.</p>
<p>But now I take out the calls to the 'module level' logger like so:</p>
<pre><code>import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
#logging.debug("Test 1")
#logging.info("Test 2")
#logging.warning("Test 3")
logger.debug("Test 4")
logger.info("Test 5")
</code></pre>
<p>and now the output from this program is empty, i.e. nothing is output at all. Why aren't "Test 4" and "Test 5" shown? How and why are calls to the logging functions of my instantiated logger influenced by earlier calls to the 'module level' logger? Thanks.</p>
<p>(The above is Python 3.11 on Windows, but I tried on online-python.com too and got the same result, it uses 3.8.5 on I presume Linux)</p>
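<p>A sketch of what I believe is going on: the first call to the module-level <code>logging.debug/info/warning</code> functions implicitly calls <code>logging.basicConfig()</code>, which attaches a handler to the root logger; without that, the named logger has no handler anywhere to propagate to. Attaching a handler explicitly makes the output appear regardless (the <code>StringIO</code> stream is just for demonstration; a plain <code>StreamHandler()</code> would write to stderr):</p>

```python
import io
import logging

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)

# Attach a handler explicitly instead of relying on the side effect of a
# module-level logging.warning() call configuring the root logger.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))
logger.addHandler(handler)

logger.debug("Test 4")
logger.info("Test 5")
print(stream.getvalue())
```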
|
<python><logging><python-logging>
|
2023-02-21 09:12:46
| 1
| 19,644
|
Roel
|
75,518,345
| 3,909,896
|
Catching exit codes from submodules in python
|
<p>I have a function that allows me to run <code>az cli</code> commands from within python. However, whenever I get a non-zero exit code, the entire process is being shut down, including my python job. This happens for instance when I try to look up a user that does not exist.</p>
<p>I tried to wrap the function call with a <code>try-except</code> block, but it does not work, the job still exits on its own. How can I catch exit-code <code>3</code> (missing resource according to the <a href="https://github.com/Azure/azure-cli" rel="nofollow noreferrer">documentation</a>) and continue after trying to run the az-cli command?</p>
<pre><code>import os
from azure.cli.core import get_default_cli

def az_cli(args_str):
    args = args_str.split()
    cli = get_default_cli()
    exit_code = cli.invoke(args, out_file=open(os.devnull, 'w'))
    print("exit_code", exit_code)
    if cli.result.result:
        return cli.result.result
    elif cli.result.error:
        return cli.result.error
    return True

try:
    user_id = "some-id-129-x1201-312"
    response = az_cli(f"ad user show --id {user_id}")
    print("response", response)
except Exception as e:
    print(e.args)
</code></pre>
|
<python><azure><azure-cli>
|
2023-02-21 09:12:02
| 1
| 3,013
|
Cribber
|
75,518,324
| 6,343,313
|
Parallelising a select query in Python's sqlite does not seem to improve performance
|
<p>After reading <a href="https://stackoverflow.com/questions/24298023">this</a> post, I have been trying to compare parallelized with non-parallelized queries in SQLite. I use Python's sqlite3 library to create a database containing a table called randNums with two columns, an id and a val, where val is a random number between 0 and 1. I then select all rows with val greater than a half, twice, so as to compare the run times of the parallelized and unparallelized versions; however, they take the same amount of time. I'd like to know whether I am using the keyword 'PARALLEL' incorrectly, or whether I first need to enable parallelization with a Python command.</p>
<p>Finally, I'd also like to know how parallelization differs for different databases, for example, mysql and postgresql.</p>
<pre class="lang-py prettyprint-override"><code>import sqlite3
from time import time
con = sqlite3.connect('mydatabase.db')
cursorObj = con.cursor()
cmd = ['PRAGMA table_info(randNums);']
cmd += ['SELECT count (*) from randNums where val>.5;']
cmd += ['SELECT count /*+ PARALLEL(4) */ (*) from randNums where val>.5;']
for c in cmd:
    t = time()
    cursorObj.execute(c)
    print('command: %s' % c)
    print(cursorObj.fetchall())
    print('run time in seconds: %.3f\n\n' % (time()-t))
</code></pre>
<p>Running a Python script containing the above code results in the following output:</p>
<pre class="lang-none prettyprint-override"><code>command: PRAGMA table_info(randNums);
[(0, 'id', 'INTEGER', 0, None, 1), (1, 'val', 'REAL', 0, None, 0)]
run time in seconds: 0.000
command: SELECT count (*) from randNums where val>.5;
[(49996009,)]
run time in seconds: 3.604
command: SELECT count /*+ PARALLEL(4) */ (*) from randNums where val>.5;
[(49996009,)]
run time in seconds: 3.598
</code></pre>
<p>I first generated the database with the following code:</p>
<pre><code>import sqlite3
from random import uniform as rand
con = sqlite3.connect('mydatabase.db')
cursorObj = con.cursor()
cursorObj.execute("CREATE TABLE IF NOT EXISTS randNums(id integer PRIMARY KEY, val real)")
try:
    for i in range(10**8):
        if i % 10**5 == 0: print(i)
        cursorObj.execute("INSERT INTO randNums VALUES(%d, '%f')" % (i, rand(0, 1)))
except:
    print('database is already filled with data')
    pass
con.commit()
</code></pre>
|
<python><sqlite><parallel-processing>
|
2023-02-21 09:09:42
| 2
| 1,580
|
Mathew
|
75,518,264
| 6,734,243
|
how to add extra space before closing tag in bs4 prettify?
|
<p>I'm using bs4 to parse HTML output in my test suite and compare it with existing HTML files.
prettify and prettier seem to behave differently in how they handle closing tags.</p>
<p>Using the following inputs:</p>
<pre class="lang-py prettyprint-override"><code>from bs4 import BeautifulSoup, formatter
fmt = formatter.HTMLFormatter(indent=2)
str_ = '<video preload="auto"><source src="_static/video.mkv" type=""/></video>'
html = BeautifulSoup(str_, "html.parser")
html.prettify(formatter=fmt)
</code></pre>
<p>I get the following output:</p>
<pre class="lang-html prettyprint-override"><code><video preload="auto">
<source src="_static/video.mp4" type="video/mp4"/>
</video>
</code></pre>
<p>But to fully respect the standards I should have an extra space before the closing tag (after the attributes of "source"). Is that possible?</p>
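<p>Since the formatter output is a plain string, one workaround (a post-processing sketch, not a BeautifulSoup option I know of) is to insert the space afterwards with a regex:</p>

```python
import re

html = '<video preload="auto">\n <source src="_static/video.mp4" type="video/mp4"/>\n</video>'

# insert a space before any "/>" that does not already have one
pretty = re.sub(r'(?<!\s)/>', ' />', html)
print(pretty)
```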
|
<python><beautifulsoup>
|
2023-02-21 09:04:18
| 1
| 2,670
|
Pierrick Rambaud
|
75,518,142
| 8,176,763
|
openpyxl returns function instead of value evaluation
|
<p>I am reading from a file like this in <code>openpyxl</code>:</p>
<pre><code>import openpyxl
from pathlib import WindowsPath

p = WindowsPath(r'.\data\product_version.xlsx')
if p.exists():
    wb = openpyxl.load_workbook(p)
    ws = wb.active
    for row in ws.iter_rows(values_only=True):
        print(row)
</code></pre>
<p>This should return values only, and indeed it does for some columns, while others that are basically Excel formulas return the formula as the value.</p>
<pre><code>('WTX84', 'TX', '=MID(A338,1,3)', 'Yes', 'Yes', 258979, 255980)
('WTX90', 'TX', '=MID(A339,1,3)', 'Yes', 'Yes', 258979, 11572174)
</code></pre>
<p>I want the evaluated result of <code>=MID(A338,1,3)</code> (the cell's value), not the formula string.</p>
|
<python><openpyxl>
|
2023-02-21 08:52:12
| 0
| 2,459
|
moth
|
75,518,069
| 258,483
|
How to make several libraries under same parent path in Python
|
<p>Suppose I have <code>library1</code></p>
<pre><code>.
└── mycompany/
    ├── __init__.py
    └── library1/
        ├── __init__.py
        ├── file1.py
        └── file2.py
</code></pre>
<p>And <code>library2</code>:</p>
<pre><code>.
└── mycompany/
    ├── __init__.py
    └── library2/
        ├── __init__.py
        ├── code1.py
        └── code2.py
</code></pre>
<p>They both have parent directory of <code>mycompany</code> and own <code>__init__.py</code> file inside.</p>
<p>Will this work? How can I install these libraries? Won't <code>__init__</code> files conflict?</p>
|
<python><python-packaging>
|
2023-02-21 08:44:45
| 0
| 51,780
|
Dims
|
75,518,012
| 14,427,209
|
How to check if there are digits in a string, and spell them out in Python?
|
<p>I am quite new to this programming world. I am wondering whether we can spell out an integer found in a string given as input. For example, I am using the line below to take an input and convert it to lowercase.</p>
<pre><code>request = input('You: ').lower().strip()
</code></pre>
<p>What I want now is, if someone enters "300 years", it should be converted to "three hundred years", and then saved in the variable. Is it possible?</p>
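<p>A minimal sketch of that substitution for whole numbers up to 999 (the function names are made up; a library such as <code>num2words</code> would cover the general case):</p>

```python
import re

ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight",
        "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
        "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def spell(n: int) -> str:
    """Spell out 0..999 in English (larger numbers not handled in this sketch)."""
    if n < 20:
        return ONES[n]
    if n < 100:
        return TENS[n // 10] + ("-" + ONES[n % 10] if n % 10 else "")
    rest = n % 100
    return ONES[n // 100] + " hundred" + (" " + spell(rest) if rest else "")

def spell_out(text: str) -> str:
    # replace every run of digits with its spelled-out form
    return re.sub(r"\d+", lambda m: spell(int(m.group())), text)

print(spell_out("300 years"))  # three hundred years
```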
|
<python><python-3.x><string><function><integer>
|
2023-02-21 08:38:19
| 1
| 317
|
TECH FREEKS
|
75,517,967
| 13,803,549
|
Delete select menu after interaction Discord.py
|
<p>I have several buttons that each bring up a different select menu. I have been trying to figure out a way to delete a select menu after a selection is made so when the user clicks on the next buttons it stays clean with only the current menu showing.</p>
<p>I tried attaching a "submit" button to the menu's view that deletes the message but I get this error:</p>
<p>AttributeError: 'Button' object has no attribute 'message'</p>
<p>I also tried to edit the message after the interaction to add 'delete_after' but that didn't work either.</p>
<p>The menu is ephemeral and through research I see that you can't delete ephemeral messages due to the bot not seeing it. Is there any way around this?</p>
<pre class="lang-py prettyprint-override"><code>
class selectMenu(discord.ui.Select):
    def __init__(self):
        super().__init__(placeholder=placeholder, options=players, min_values=1,
                         max_values=1)

    async def callback(self, interaction: discord.Interaction):
        await interaction.response.defer()


class selectMenuView(discord.ui.View):
    def __init__(self):
        super().__init__(timeout=None)
        self.add_item(selectMenu())

    # Tried this
    @discord.ui.button(label="Cancel", style=discord.ButtonStyle.red,
                       row=3, disabled=False, emoji="✖️")
    async def cancel_callback(self, button: discord.ui.Button,
                              interaction: discord.Interaction):
        await interaction.message.delete()
</code></pre>
|
<python><discord><discord.py>
|
2023-02-21 08:34:09
| 1
| 526
|
Ryan Thomas
|
75,517,910
| 9,644,712
|
Invalid projection when opening geopandas
|
<p>I need to do some spatial operations in geopandas. I created a new conda environment and installed geopandas with <code>conda install --channel conda-forge geopandas</code>. When I run the following simple code:</p>
<pre><code>import geopandas as gpd
from shapely.geometry import Point
gdf = gpd.GeoDataFrame([Point(1,1)])
gdf.set_geometry(0).set_crs(epsg=3857)
</code></pre>
<p>I get the following error message :</p>
<pre><code>CRSError: Invalid projection: EPSG:3857: (Internal Proj Error: proj_create: no database context specified)
</code></pre>
<p>I tried to google the issue. There are several posts, yet I could not find the right solution. It seems that there is a problem with pyproj database. That's what I understood so far.</p>
<p>Any solutions?</p>
<p>Thanks in advance!</p>
|
<python><geopandas><pyproj>
|
2023-02-21 08:27:19
| 1
| 453
|
Avto Abashishvili
|
75,517,843
| 13,088,678
|
median in each subgroup
|
<p>I have a csv file having city name and price. Need to find the median of price for each group of city.</p>
<p>Input: from a csv file</p>
<pre><code>London, 10
London, 25
London, 30
Brasov, 50
Brasov, 60
</code></pre>
<p>Expected:</p>
<pre><code>London, 25
Brasov, 55
</code></pre>
<p>Solution using Pandas:</p>
<pre><code>import pandas as pd

costs = []
with open("path", "r") as f:
    for line in f:
        costs.append(line.split(","))

# with the above, the data looks like below; we could store it in any format:
# costs = [["London", 10], ["London", 25], ["London", 30],
#          ["Brasov", 50], ["Brasov", 60]]
df = pd.DataFrame(costs, columns=["city", "price"])
res = df.groupby("city")["price"].median()
print(res)
</code></pre>
<p>I have tried using Pandas above and it works fine. Is there any other approach in Python without using Pandas? I'm trying to tweak the approach below to get results for each city group, but could not find a way to fit it to groups. Please advise.</p>
<pre><code>mid = len(data) // 2
if len(data) % 2 != 0:  # if odd number
    return data[mid]
else:  # if even number
    return (data[mid] + data[mid-1]) / 2
</code></pre>
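<p>A pandas-free sketch using only the standard library: <code>csv</code> for parsing and <code>statistics.median</code> instead of the manual middle-element logic (an in-memory string stands in for the file):</p>

```python
import csv
import io
from collections import defaultdict
from statistics import median

# an in-memory string stands in for the csv file from the question
data = io.StringIO("London, 10\nLondon, 25\nLondon, 30\nBrasov, 50\nBrasov, 60\n")

# group prices by city, then take the median of each group
groups = defaultdict(list)
for city, price in csv.reader(data):
    groups[city.strip()].append(float(price))

results = {city: median(prices) for city, prices in groups.items()}
print(results)  # {'London': 25.0, 'Brasov': 55.0}
```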
|
<python><python-3.x>
|
2023-02-21 08:20:04
| 1
| 407
|
Matthew
|
75,517,671
| 1,221,704
|
HTTP ERROR 500 when running Azure functions
|
<p>I am trying to run my python application as Azure functions. It runs perfectly fine when I use the default/original code.</p>
<p>However, I am having problems when I add a new class. When I run the new code locally, it runs correctly (through "Run and Debug" as well as http://localhost:7071/api/myapp).</p>
<p>However, when I deploy it, I am getting HTTP 500 error (myapp.azurewebsites.net can't currently handle this request.)</p>
<p>I have the following in <code>requirements.txt</code>:</p>
<pre><code>azure-functions
u8darts==0.19.0
statsforecast==0.6.0
</code></pre>
<p>My new class is in a new file, and I deploy via Deploy to -> function app.</p>
<p>Can someone please help? Thanks!</p>
|
<python><azure><azure-functions><serverless>
|
2023-02-21 08:00:08
| 1
| 311
|
Groot
|
75,517,667
| 6,751,456
|
django: support nested or flat values for a single field using a single serializer
|
<p>I have a <code>DateRangeSerializer</code> serializer that validates a payload.</p>
<pre><code>import rest_framework.serializers as serializer
from django.conf import settings


class ValueNestedSerializer(serializer.Serializer):
    lower = serializer.DateTimeField(input_formats=settings.DATETIME_INPUT_FORMATS, required=True)
    upper = serializer.DateTimeField(input_formats=settings.DATETIME_INPUT_FORMATS, required=True)


class DateRangeSerializer(serializer.Serializer):
    attribute = serializer.CharField(default="UPLOAD_TIME", allow_null=True)
    operator = serializer.CharField(default="between_dates")
    value = ValueNestedSerializer(required=True)  # <---- this could be set to `False` to address the issue
    # lower = serializer.DateTimeField()
    # upper = serializer.DateTimeField()
    timezone = serializer.CharField(default="UTC")
    timezoneOffset = serializer.IntegerField(default=0)
</code></pre>
<p>The respective payload:</p>
<pre><code>"date_range": {
    "attribute": "date_range",
    "operator": "between_dates",
    "value": {
        "lower": "2023-01-06T00:00:00Z",
        "upper": "2023-02-05T23:59:59Z"
    }
}
</code></pre>
<p>Here <code>value</code> field is <code>nested</code>. But there are few implementations where <code>lower</code> and <code>upper</code> are <code>flat</code> values and not nested.</p>
<p>Like:</p>
<pre><code>"date_range": {
    "lower": "2023-01-21T00:00:00Z",
    "upper": "2023-02-21T23:59:59Z"
}
</code></pre>
<p>Now, I could set <code>value</code> to <code>required=False</code> and add <code>lower</code>/<code>upper</code> as flat fields, as I've mentioned in the comments above. But I want to enforce it more "properly".</p>
<p>Is there any other way where I can handle both payloads for <code>nested</code> and <code>flat</code> lower-upper values?</p>
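<p>Framework aside, the normalization itself can be a small pre-processing step; a sketch (the function name is made up, and in DRF this logic would typically live in an overridden <code>to_internal_value()</code> on <code>DateRangeSerializer</code> before calling <code>super()</code>):</p>

```python
def normalize_date_range(payload):
    """Lift flat 'lower'/'upper' keys into the nested {'value': {...}} shape."""
    if "value" in payload or not {"lower", "upper"} <= payload.keys():
        return payload  # already nested, or not a flat date-range payload
    rest = {k: v for k, v in payload.items() if k not in ("lower", "upper")}
    return {**rest, "value": {"lower": payload["lower"], "upper": payload["upper"]}}

nested = {"attribute": "date_range", "operator": "between_dates",
          "value": {"lower": "2023-01-06T00:00:00Z", "upper": "2023-02-05T23:59:59Z"}}
flat = {"lower": "2023-01-21T00:00:00Z", "upper": "2023-02-21T23:59:59Z"}

print(normalize_date_range(nested) == nested)        # nested payload is unchanged
print(normalize_date_range(flat)["value"]["lower"])  # flat payload is lifted
```

<p>This way a single serializer with the nested <code>value</code> field validates both shapes, and the defaults still apply.</p>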
|
<python><json><django><django-serializer>
|
2023-02-21 07:59:34
| 1
| 4,161
|
Azima
|
75,517,520
| 20,102,061
|
'pyrcc5' is not recognized as an internal or external command, operable program or batch file
|
<p>I am working from my laptop here at school. I've moved files from my PC at home to my laptop, and when I try to run things I get this error:</p>
<pre><code>'pyrcc5' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>Code:
<a href="https://i.sstatic.net/ToSj9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ToSj9.png" alt="enter image description here" /></a></p>
|
<python><pyqt5>
|
2023-02-21 07:41:39
| 1
| 402
|
David
|
75,517,410
| 3,520,404
|
Extract text based on annots from PDF using Python and PyPDF2
|
<p>I am trying to read the below PDF programmatically using Python to extract useful information.</p>
<p><a href="https://i.sstatic.net/JLe5e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JLe5e.png" alt="enter image description here" /></a></p>
<p>Here, the "attachments" are basically links that point to specific pages inside the same PDF. I came to know that these are called "annots" and there is a way to extract them using <code>PyPDF2</code> library.</p>
<p>My end goal is to read the whole PDF attachment by attachment where each attachment could span across multiple pages. I've tried below:</p>
<pre><code># creating a pdf reader object
pdfReader: PdfReader = PdfReader(pdfFileObj)

# Read annots from pdf
start = 0
end = 2
while start < end:
    try:
        for annot in pdfReader.pages[start]["/Annots"]:
            print(annot.getObject())  # (1)
            print("")
    except:
        # there are no annotations on this page
        pass
    start += 1
</code></pre>
<p>I was hoping that <code>annot.getObject()</code> or something like <code>annot.extract_text()</code> would give me the full content of the relevant pages, but it does not. Going through the <code>annot</code> object doesn't reveal any useful information.</p>
|
<python><pdf><pypdf>
|
2023-02-21 07:29:55
| 1
| 14,416
|
saran3h
|
75,517,337
| 100,265
|
What is * in return function in python?
|
<p>What is * in return (last line of code)?</p>
<pre><code>def func_list():
    arr = [x for x in range(1, 6, 2)]
    arr1 = arr
    arr1.append(10)
    return *arr,

print(func_list())
</code></pre>
<p>After running this code, the return value is:</p>
<blockquote>
<p>(1, 3, 5, 10)</p>
</blockquote>
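<p>The same construct can be reproduced outside the function: the trailing comma makes the expression a tuple literal, and <code>*</code> unpacks the list's elements into it, so <code>return *arr,</code> is equivalent to <code>return tuple(arr)</code>:</p>

```python
arr = [1, 3, 5, 10]

t = *arr,            # unpack arr into a new tuple; the trailing comma makes it a tuple
print(t)             # (1, 3, 5, 10)
print(t == tuple(arr))  # True
```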
|
<python><return>
|
2023-02-21 07:20:37
| 2
| 1,495
|
mSafdel
|
75,517,264
| 3,520,404
|
Check if two sentences contain any matching word using Python
|
<p>I'm trying to simply check whether two sentences have any similar words.</p>
<p>Here's an example:</p>
<pre><code>string_one = "Author: James Oliver"
string_two = "James Oliver has written this beautiful article which says...."
</code></pre>
<p>In this case, these two sentences match the criteria as they contain some common words.</p>
<p>I've tried a bunch of solutions and none seems to work properly. The two sentences could have a fairly large number of words, so splitting them into lists and finding the intersection would be really inefficient, I think.</p>
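<p>For what it's worth, splitting into <em>sets</em> and intersecting is linear in the total number of words, so it scales fine; a sketch (punctuation handling is deliberately omitted here):</p>

```python
string_one = "Author: James Oliver"
string_two = "James Oliver has written this beautiful article which says...."

# building the sets and intersecting is O(len(one) + len(two)) overall
words_one = set(string_one.lower().split())
words_two = set(string_two.lower().split())

common = words_one & words_two
print(common)        # {'james', 'oliver'} (set order may vary)
print(bool(common))  # True: the sentences share at least one word
```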
|
<python><pypdf>
|
2023-02-21 07:12:56
| 2
| 14,416
|
saran3h
|
75,517,239
| 13,049,379
|
scipy.spatial.transform.Rotation giving apparantly wrong results
|
<p>For the below code,</p>
<pre><code>from scipy.spatial.transform import Rotation
for theta in [0, 10, 80, 90, 100, 135, 170, 180, 185, 300]:
    r = Rotation.from_euler('zxy', [theta, theta, theta], degrees=True)
    print(f"for theta = {theta} we get {r.as_euler('zxy', degrees=True)}")
</code></pre>
<p>the output is:</p>
<pre><code>for theta = 0 we get [0. 0. 0.]
for theta = 10 we get [10. 10. 10.]
for theta = 80 we get [80. 80. 80.]
/path/to/file.py:52: UserWarning: Gimbal lock detected. Setting third angle to zero since it is not possible to uniquely determine all angles.
print(f"for theta = {theta} we get {r.as_euler('zxy', degrees=True)}")
for theta = 90 we get [-0. 89.99999879 0. ]
for theta = 100 we get [-80. 80. -80.]
for theta = 135 we get [-45. 45. -45.]
for theta = 170 we get [-10. 10. -10.]
for theta = 180 we get [-7.01670930e-15 1.27222187e-14 -7.01670930e-15]
for theta = 185 we get [ 5. -5. 5.]
for theta = 300 we get [-60. -60. -60.]
</code></pre>
<p>In the above output the result for <code>theta=0,10,80</code> is consistent, but I cannot understand the results for the other values of <code>theta</code> (<code>theta=90</code> can be ignored). Hopefully this is not a bug, and there is a mathematical explanation of how scipy calculates this.</p>
<p>Lastly, what I ultimately want is Python code that converts a rotation matrix to Euler angles; that would solve my problem, and the scipy built-in function was just one way to do it.</p>
|
<python><rotation>
|
2023-02-21 07:09:51
| 1
| 1,433
|
Mohit Lamba
|
75,517,186
| 8,176,763
|
python relative import fix
|
<p>I have a folder structure like this:</p>
<pre><code>backend/
    stage/
        stage.py
    config.py
</code></pre>
<p>I am trying to import <code>config.py</code> into <code>stage.py</code> script with a relative import:</p>
<pre><code>from ..backend import config
</code></pre>
<p>The directory from which I run my script in the shell is <code>stage</code>, and I get this error:</p>
<pre><code>(env) PS C:\Users\45291029\Documents\evergreen\project\backend\stage> & c:/Users/45291029/Documents/evergreen/project/fastapi/env/Scripts/python.exe c:/Users/45291029/Documents/evergreen/project/backend/stage/stage.py
Traceback (most recent call last):
File "c:\Users\45291029\Documents\evergreen\project\backend\stage\stage.py", line 4, in <module>
from ..backend import config
ImportError: attempted relative import with no known parent package
</code></pre>
<p>What am I doing wrong here? I thought that by using two dots I go up one directory, like on Unix systems. If it helps, I am using the VS Code IDE and I have this settings.json:</p>
<pre><code>"python.terminal.executeInFileDir": true,
</code></pre>
<p>I also use <code>python3.9</code></p>
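<p>A layout-dependent sketch of the usual fix (directory and file contents below are stand-ins): run the script <em>as a module from the project root</em>, so the relative import has a parent package to resolve against. Note that since <code>config.py</code> sits directly inside <code>backend/</code>, the relative form from <code>backend/stage/stage.py</code> would be <code>from .. import config</code>, not <code>from ..backend import config</code> (which would look for a <code>backend</code> package <em>inside</em> <code>backend</code>):</p>

```shell
# recreate the layout from the question (stand-in contents)
mkdir -p relimport_demo/backend/stage
touch relimport_demo/backend/__init__.py relimport_demo/backend/stage/__init__.py
printf 'VALUE = 42\n' > relimport_demo/backend/config.py
printf 'from .. import config\nprint(config.VALUE)\n' > relimport_demo/backend/stage/stage.py

# run from the directory *containing* the package, as a module:
cd relimport_demo
python3 -m backend.stage.stage
```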
|
<python>
|
2023-02-21 07:03:05
| 3
| 2,459
|
moth
|
75,517,089
| 813,970
|
python pandas - group by two columns and find average
|
<p>I have a dataframe like this</p>
<pre><code>TxnId  TxnDate     TxnCount
233    2023-02-01  2
533    2023-02-01  1
433    2023-02-01  4
233    2023-02-02  3
533    2023-02-02  5
233    2023-02-03  3
533    2023-02-03  5
433    2023-02-03  2
</code></pre>
<p>I want to compute, for every TxnId, the average TxnCount over at most the last 3 days, and have it in a separate column.</p>
<p>Let's say today = 2023-02-04. I would need the average TxnCount for each TxnId back to 2023-02-01. My expected result is:</p>
<pre><code>TxnId  TxnDate     TxnCount  AVG
233    2023-02-01  2         2
533    2023-02-01  1         1
433    2023-02-01  4         4
233    2023-02-02  3         2.5   [(3+2)/2]
533    2023-02-02  5         3     [(5+1)/2]
233    2023-02-03  3         2.66  [(3+3+2)/3]
533    2023-02-03  5         3.66  [(5+5+1)/3]
433    2023-02-03  2         3     [(2+4)/2]  TxnId present for only two days
<p>Could you please help me achieve this in Python?</p>
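<p>Pandas aside, the windowing implied by the expected output is "mean of at most the last 3 observations per TxnId" (treating last-3-rows as last-3-days is an assumption that holds only with one row per day, sorted by date); a plain-Python sketch:</p>

```python
from collections import defaultdict

# rows assumed sorted by TxnDate, one observation per (TxnId, day)
rows = [("233", "2023-02-01", 2), ("533", "2023-02-01", 1), ("433", "2023-02-01", 4),
        ("233", "2023-02-02", 3), ("533", "2023-02-02", 5),
        ("233", "2023-02-03", 3), ("533", "2023-02-03", 5), ("433", "2023-02-03", 2)]

seen = defaultdict(list)
out = []
for txn_id, date, count in rows:
    seen[txn_id].append(count)
    window = seen[txn_id][-3:]  # at most the last 3 observations for this TxnId
    out.append((txn_id, date, count, sum(window) / len(window)))

for row in out:
    print(row)
```

<p>In pandas the same idea would be something like <code>df.groupby('TxnId')['TxnCount'].transform(lambda s: s.rolling(3, min_periods=1).mean())</code>, under the same one-row-per-day assumption.</p>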
|
<python><pandas>
|
2023-02-21 06:49:12
| 2
| 628
|
KurinchiMalar
|
75,516,954
| 3,853,711
|
How do I access multiple python/shelve database at the same time?
|
<p>I'm building a simple program that concurrently saves data to different shelve databases with multithreading, but an error occurs when two threads invoke <code>shelve.open()</code> (for different files):</p>
<pre><code>import threading
import shelve
import time


def parallel_shelve(idx):
    print("thread {}: start".format(idx))
    with shelve.open("cache_shelve_{}".format(idx)) as db:
        time.sleep(4)
        db["0"] = 0
        db.close()
    print("thread {}: done".format(idx))


if __name__ == "__main__":
    threads = []
    for idx in range(2):
        threads += [threading.Thread(target=parallel_shelve, args=(idx,))]
    for idx in range(len(threads)):
        threads[idx].start()
    for idx in range(len(threads)):
        threads[idx].join()
</code></pre>
<p>Full log:</p>
<pre><code>$ python multi_database.py
thread 0: start
thread 1: start
Exception in thread Thread-1:
Traceback (most recent call last):
File "/home/blahblah/anaconda3/lib/python3.9/threading.py", line 980, in _bootstrap_inner
Exception in thread Thread-2:
Traceback (most recent call last):
File "/home/blahblah/anaconda3/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/home/blahblah/anaconda3/lib/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "/home/blahblah/Desktop/multi_database.py", line 8, in parallel_shelve
with shelve.open("cache_shelve_{}".format(idx)) as db:
File "/home/blahblah/anaconda3/lib/python3.9/shelve.py", line 243, in open
self.run()
File "/home/blahblah/anaconda3/lib/python3.9/threading.py", line 917, in run
return DbfilenameShelf(filename, flag, protocol, writeback)
File "/home/blahblah/anaconda3/lib/python3.9/shelve.py", line 227, in __init__
Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback)
File "/home/blahblah/anaconda3/lib/python3.9/dbm/__init__.py", line 95, in open
return mod.open(file, flag, mode)
AttributeError: module 'dbm.gnu' has no attribute 'open'
self._target(*self._args, **self._kwargs)
File "/home/blahblah/Desktop/multi_database.py", line 8, in parallel_shelve
with shelve.open("cache_shelve_{}".format(idx)) as db:
File "/home/blahblah/anaconda3/lib/python3.9/shelve.py", line 243, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
File "/home/blahblah/anaconda3/lib/python3.9/shelve.py", line 227, in __init__
Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback)
File "/home/blahblah/anaconda3/lib/python3.9/dbm/__init__.py", line 95, in open
return mod.open(file, flag, mode)
AttributeError: module 'dbm.gnu' has no attribute 'open'
</code></pre>
<pre><code>$ python --version
Python 3.9.13
</code></pre>
<p>How do I fix it to access different shelve files at the same time?</p>
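<p>The traceback suggests the failure happens inside <code>dbm</code>'s lazy backend lookup when two threads hit <code>shelve.open()</code> at the same instant, so one workaround sketch is to serialize just the open calls with a lock; the concurrent writes to <em>different</em> files afterwards are unaffected (file names and the temp dir below are stand-ins):</p>

```python
import os
import shelve
import tempfile
import threading

open_lock = threading.Lock()
tmpdir = tempfile.mkdtemp()

def parallel_shelve(idx):
    path = os.path.join(tmpdir, "cache_shelve_{}".format(idx))
    with open_lock:              # serialize only shelve.open()
        db = shelve.open(path)
    try:
        db["0"] = idx            # concurrent writes to *different* files
    finally:
        db.close()

threads = [threading.Thread(target=parallel_shelve, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# read everything back to confirm both files were written
readback = []
for i in range(2):
    with shelve.open(os.path.join(tmpdir, "cache_shelve_{}".format(i))) as db:
        readback.append(db["0"])
print(readback)  # [0, 1]
```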
|
<python><python-3.x><shelve>
|
2023-02-21 06:29:07
| 2
| 5,555
|
Rahn
|
75,516,824
| 663,413
|
How to calculate percentages using Pandas groupby
|
<p>I have 3 users: s1, who has 10 dollars; s2, who has 10 and 20 dollars; and s3, who has 20, 20 and 30 dollars. I want to calculate the percentage of users who had 10, 20 and 30 dollars. Is my interpretation correct here?</p>
<p>input</p>
<pre><code>import pandas as pd
df1 = (pd.DataFrame({'users': ['s1', 's2', 's2', 's3', 's3', 's3'],
'dollars': [10,10,20,20,20,30]}))
</code></pre>
<p>output</p>
<pre><code>% of subjects who had 10 dollars 0.4
% of subjects who had 20 dollars 0.4
% of subjects who had 30 dollars 0.2
</code></pre>
<p>tried</p>
<pre><code>df1.groupby(['dollars']).agg({'dollars': 'sum'}) / df1['dollars'].sum() * 100
</code></pre>
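<p>For what it's worth, the expected numbers (0.4, 0.4, 0.2) come out if each distinct (user, amount) pair is counted once; this interpretation is an assumption, sketched below:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'users': ['s1', 's2', 's2', 's3', 's3', 's3'],
                    'dollars': [10, 10, 20, 20, 20, 30]})

# count each distinct (user, amount) pair once, then take each amount's share
pairs = df1.drop_duplicates()
pct = pairs['dollars'].value_counts(normalize=True).sort_index()
print(pct)  # 10 -> 0.4, 20 -> 0.4, 30 -> 0.2
```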
|
<python><pandas>
|
2023-02-21 06:08:50
| 2
| 831
|
ferrelwill
|
75,516,759
| 1,539,757
|
How we can import a python file stored in memory using memfs
|
<p>I have config.py file</p>
<pre><code>def main():
dataToReturn = {
"url":"testurl"
}
</code></pre>
<p>I have a python file myfile.py</p>
<pre><code>from context import config
def main():
testfunResp = config.main()
logger.log("log in apistack")
logger.log(testfunResp)
if __name__=="__main__":
main();
</code></pre>
<p>and I am executing that file in Node.js using</p>
<pre><code>let script_content = ...; // file content of myfile.py read from disk
let scriptExecution = spawn('python', ['-c', script_content, inputParams, req.headerDetails]);
</code></pre>
<p>Here in that file i have import statement</p>
<pre><code>from context import config
</code></pre>
<p>which works fine because I store config.py in the local file system, in the context folder.</p>
<p>But I want this config.py file in memory instead of the local file system,</p>
<p>like</p>
<pre><code>import { fs } from 'memfs';
fs.writeFileSync("/config.py", "script content");
</code></pre>
<p>Is there any way to do this? If I store the file in the local file system and import it, it works fine, but I don't want to do it that way, because every time I would have to store these files (the files to import) in the local FS before executing the main script.</p>
<p>Can somebody help me get this done?</p>
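<p>One possible direction (a sketch, assuming the importing script can be adapted): Python can build a module object from a source string at runtime and register it in <code>sys.modules</code>, so <code>config.py</code> never needs to touch the local file system:</p>

```python
import sys
import types

CONFIG_SRC = """
def main():
    return {"url": "testurl"}
"""

# build a module named "config" directly from the in-memory source string
config_mod = types.ModuleType("config")
exec(compile(CONFIG_SRC, "<in-memory config>", "exec"), config_mod.__dict__)
sys.modules["config"] = config_mod   # plain `import config` now works

import config
print(config.main())  # {'url': 'testurl'}
```

<p>The source string itself could come from anywhere, e.g. passed on stdin or as an argument from the Node.js side, instead of from memfs.</p>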
|
<python><python-2.7>
|
2023-02-21 05:58:27
| 0
| 2,936
|
pbhle
|
75,516,608
| 1,857,373
|
KeyError on column appears only in when I assign column value, all other cells can use column except this assigned cell
|
<p><strong>Problem</strong></p>
<p>This is odd: when I run this in new Jupyter Notebook cells it works, but inside my full notebook the data frame is somehow missing this column even though I never drop it, hence the error below. The only logic between # SETUP and # ASSIGN is code that processes missing values. When I step through all cells in Jupyter, the SalePrice column is available to use, but once the ASSIGN code cell (below) runs, the KeyError is raised.</p>
<p><strong>Data</strong></p>
<p><strong>ASSIGN Code Issue</strong></p>
<p>Here, train_data['SalePrice'] IS missing SalePrice column</p>
<pre><code># ASSIGN
train_data.replace(np.nan, 0)
Y = train_data['SalePrice']
X = train_data.drop(['Id'], axis=1)
...
KeyError...
...
</code></pre>
<p><strong>Setup data frame</strong></p>
<p>Here train_data HAS SalePrice.</p>
<pre><code># SETUP
import pandas as pd
train_data = pd.read_csv("../data/train.csv")
test_data = pd.read_csv("../data/test.csv")
Y = train_data['SalePrice']
</code></pre>
<p>Y resolved to this data</p>
<pre><code>0 208500
1 181500
2 223500
3 140000
4 250000
...
1455 175000
1456 210000
1457 266500
1458 142125
1459 147500
Name: SalePrice, Length: 1460, dtype: int64
</code></pre>
<p><strong>Error KeyError</strong></p>
<pre><code>KeyError Traceback (most recent call last)
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/indexes/base.py:3629, in Index.get_loc(self, key, method, tolerance)
3628 try:
-> 3629 return self._engine.get_loc(casted_key)
3630 except KeyError as err:
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/_libs/index.pyx:136, in pandas._libs.index.IndexEngine.get_loc()
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/_libs/index.pyx:163, in pandas._libs.index.IndexEngine.get_loc()
File pandas/_libs/hashtable_class_helper.pxi:5198, in pandas._libs.hashtable.PyObjectHashTable.get_item()
File pandas/_libs/hashtable_class_helper.pxi:5206, in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'SalePrice'
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
Cell In[88], line 2
1 train_data.replace(np.nan, 0)
----> 2 Y = train_data['SalePrice']
3 X = train_data.drop(['Id'], axis=1)
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/frame.py:3505, in DataFrame.__getitem__(self, key)
3503 if self.columns.nlevels > 1:
3504 return self._getitem_multilevel(key)
-> 3505 indexer = self.columns.get_loc(key)
3506 if is_integer(indexer):
3507 indexer = [indexer]
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/indexes/base.py:3631, in Index.get_loc(self, key, method, tolerance)
3629 return self._engine.get_loc(casted_key)
3630 except KeyError as err:
-> 3631 raise KeyError(key) from err
3632 except TypeError:
3633 # If we have a listlike key, _check_indexing_error will raise
3634 # InvalidIndexError. Otherwise we fall through and re-raise
3635 # the TypeError.
3636 self._check_indexing_error(key)
KeyError: 'SalePrice'
</code></pre>
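<p>Two notebook-specific details are worth checking (a hypothetical reproduction, not taken from the actual notebook): <code>replace</code> returns a copy rather than modifying in place, and any earlier cell that re-ran out of order and reassigned <code>train_data</code> without the column (for example via <code>drop</code>) will make this cell fail:</p>

```python
import numpy as np
import pandas as pd

train_data = pd.DataFrame({"Id": [1, 2], "SalePrice": [208500, 181500]})

train_data.replace(np.nan, 0)      # no-op: the result is returned, not stored
assert "SalePrice" in train_data.columns

# hypothetical earlier cell that re-ran and reassigned the frame
train_data = train_data.drop(["SalePrice"], axis=1)

try:
    Y = train_data["SalePrice"]    # now raises the same KeyError
except KeyError as err:
    print("KeyError:", err)
```

<p>Restarting the kernel and running all cells top to bottom usually exposes which cell removes the column.</p>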
|
<python><python-3.x><pandas><dataframe><jupyter-notebook>
|
2023-02-21 05:30:10
| 0
| 449
|
Data Science Analytics Manager
|
75,516,576
| 6,077,239
|
How to return multiple stats as multiple columns in Polars grouby context?
|
<p>The task at hand is to do multiple linear regression over multiple columns in groupby context and return respective beta coefficients and their associated t-values in separate columns.</p>
<p>Below is an illustration of an attempt to do such using statsmodels.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import polars as pl
import statsmodels.api as sm
from functools import partial
def ols_stats(s, yvar, xvars):
df = s.struct.unnest()
yvar = df[yvar].to_numpy()
xvars = df[xvars].to_numpy()
reg = sm.OLS(yvar, sm.add_constant(xvars), missing="drop").fit()
return np.concatenate((reg.params, reg.tvalues))
df = pl.DataFrame(
{
"day": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3],
"y": [1, 6, 3, 2, 8, 4, 5, 2, 7, 3, 1],
"x1": [1, 8, 2, 3, 5, 2, 1, 2, 7, 3, 1],
"x2": [8, 5, 3, 6, 3, 7, 3, 2, 9, 1, 1],
}
)
df.group_by("day").agg(
pl.struct("y", "x1", "x2")
.map_elements(partial(ols_stats, yvar="y", xvars=["x1", "x2"]))
.alias("params")
)
</code></pre>
<p>The result from the code snippet above evaluates to</p>
<pre><code>shape: (3, 2)
┌─────┬─────────────────────────────────┐
│ day ┆ params │
│ --- ┆ --- │
│ i64 ┆ object │
╞═════╪═════════════════════════════════╡
│ 2 ┆ [2.0462002 0.22397054 0.33679… │
│ 1 ┆ [ 4.86623165 0.64029364 -0.65… │
│ 3 ┆ [0.5 0.5 0. 0. ] │
└─────┴─────────────────────────────────┘
</code></pre>
<p>How am I supposed to split the 'params' into separate columns with one scalar value in each column?</p>
<p>Also, my code seems to fail at some corner cases. Below is one of them.</p>
<pre><code>df = pl.DataFrame(
{
"day": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3],
"y": [1, 6, 3, 2, 8, 4, 5, 2, 7, 3, None],
"x1": [1, 8, 2, 3, 5, 2, 1, 2, 7, 3, 1],
"x2": [8, 5, 3, 6, 3, 7, 3, 2, 9, 1, 1],
}
)
df.group_by("day").agg(
pl.struct("y", "x1", "x2")
.map_elements(partial(ols_stats, yvar="y", xvars=["x1", "x2"]))
.alias("params")
)
# ComputeError: ValueError: exog is not 1d or 2d
</code></pre>
<p>How can I make the code robust to such case?</p>
<p>Thanks for your help. And feel free to suggest your own solution.</p>
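<p>For the corner case, a guard inside the mapped function can return NaNs when a group has too few usable rows to fit. Below is a NumPy-only sketch of such a guard (the statsmodels call could be wrapped the same way; the exact NaN policy is an assumption):</p>

```python
import numpy as np

def safe_ols(y, X):
    """OLS params and t-values with an intercept; NaNs for degenerate groups."""
    mask = ~np.isnan(y)
    y, X = y[mask], X[mask]
    k = X.shape[1] + 1                      # coefficients incl. intercept
    if y.size <= k:                         # too few rows to fit (or no dof)
        return np.full(2 * k, np.nan)
    A = np.column_stack([np.ones(y.size), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    sigma2 = resid @ resid / (y.size - k)
    tvals = beta / np.sqrt(sigma2 * np.diag(np.linalg.inv(A.T @ A)))
    return np.concatenate([beta, tvals])

# the day=3 group from the failing example: a single row with y missing
print(safe_ols(np.array([np.nan]), np.array([[1.0, 1.0]])))  # all NaN
```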
|
<python><python-polars>
|
2023-02-21 05:26:14
| 2
| 1,153
|
lebesgue
|
75,516,448
| 2,998,077
|
Python Pandas GroupBy to calculate differences in months
|
<p>Below is a data frame; I want to calculate the month intervals between consecutive records under each name.</p>
<p>Lines so far:</p>
<pre><code>import pandas as pd
from io import StringIO
import numpy as np
csvfile = StringIO(
"""Name Year - Month Score
Mike 2022-11 31
Mike 2022-11 136
Lilly 2022-11 23
Lilly 2022-10 44
Kate 2023-01 1393
Kate 2022-10 2360
Kate 2022-08 1648
Kate 2022-06 543
Kate 2022-04 1935
Peter 2022-04 302
David 2023-01 1808
David 2022-12 194
David 2022-09 4077
David 2022-06 666
David 2022-03 3362""")
df = pd.read_csv(csvfile, sep = '\t', engine='python')
df['Year - Month'] = pd.to_datetime(df['Year - Month'], format='%Y-%m')
df['Interval'] = (df.groupby(['Name'])['Year - Month'].transform(lambda x: x.diff())/ np.timedelta64(1, 'M'))
df['Interval'] = df['Interval'].replace(np.nan, 1).astype(int)
</code></pre>
<p>But the output is wrong (several intervals are not calculated correctly).</p>
<p>Where has this gone wrong, and how can I correct it?</p>
<pre><code> Name Year - Month Score Interval
0 Mike 2022-11 31 1 <- shall be 0
1 Mike 2022-11 136 0
2 Lilly 2022-11 23 1
3 Lilly 2022-10 44 1 <- shall be 0
4 Kate 2023-01 1393 1 <- shall be 3
5 Kate 2022-10 2360 3 <- shall be 2
6 Kate 2022-08 1648 2
7 Kate 2022-06 543 2
8 Kate 2022-04 1935 2 <- shall be 0
9 Peter 2022-04 302 1 <- shall be 0
10 David 2023-01 1808 1 <- shall be 1
11 David 2022-12 194 1 <- shall be 3
12 David 2022-09 4077 2 <- shall be 3
13 David 2022-06 666 3
14 David 2022-03 3362 3 <- shall be 0
</code></pre>
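<p>Two things explain the drift (inferred from the "shall be" notes): <code>diff()</code> compares each row with the <em>previous</em> row while the rows are newest-first, and dividing by <code>np.timedelta64(1, 'M')</code> uses an average month length rather than calendar months. Comparing each row with the <em>next</em> one on an exact integer month count reproduces the expected values; a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Name': ['Mike', 'Mike', 'Lilly', 'Lilly', 'Kate', 'Kate', 'Kate', 'Kate',
             'Kate', 'Peter', 'David', 'David', 'David', 'David', 'David'],
    'Year - Month': ['2022-11', '2022-11', '2022-11', '2022-10', '2023-01',
                     '2022-10', '2022-08', '2022-06', '2022-04', '2022-04',
                     '2023-01', '2022-12', '2022-09', '2022-06', '2022-03'],
})

dt = pd.to_datetime(df['Year - Month'], format='%Y-%m')
months = dt.dt.year * 12 + dt.dt.month          # exact month ordinal
# diff(-1): current row minus the next row, within each name;
# the last row of each group has no successor and becomes 0
df['Interval'] = months.groupby(df['Name']).diff(-1).fillna(0).astype(int)
print(df['Interval'].tolist())
# [0, 0, 1, 0, 3, 2, 2, 2, 0, 0, 1, 3, 3, 3, 0]
```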
|
<python><pandas><dataframe>
|
2023-02-21 04:59:49
| 1
| 9,496
|
Mark K
|
75,516,194
| 2,739,700
|
Azure blob shared key creation 403 error getting
|
<p>I wanted to create an Azure blob container with Python code using Shared Key authorization,</p>
<p>I am getting below error:</p>
<pre><code>b'\xef\xbb\xbf<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:9e524b5e-301e-0051-4aa4-45750000\nTime:2023-02-21T03:27:02.8384023Z</Message><AuthenticationErrorDetail>The MAC signature found in the HTTP request \'xxxxxxxxxxxxxxxx\' is not the same as any computed signature. Server used following string to sign: \'PUT\n\n\n\n\n\n\n\n\n\n\n\nx-ms-date:Tue, 21 Feb 2023 03:27:01 GMT\nx-ms-version:2020-04-08\n/blobmediapedevwus2/mycontainer\nrestype:container\'.</AuthenticationErrorDetail></Error>'
</code></pre>
<p>How to fix this?</p>
<p>Below is the python code:</p>
<pre><code>import requests
import datetime
import hmac
import hashlib
import base64
# Set the storage account name and access key
STORAGE_ACCOUNT_NAME = 'vidyaflowerapp01'
STORAGE_ACCOUNT_KEY = "xxxxxxxxxx"
# Set the container name
CONTAINER_NAME = 'test'
# Set the request method and version
REQUEST_METHOD = 'PUT'
REQUEST_VERSION = '2020-04-08'
# Set the request date
REQUEST_DATE = datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
CANONICALIZED_HEADERS = f'x-ms-date:{REQUEST_DATE}\nx-ms-version:{REQUEST_VERSION}\n'
# Set the canonicalized resource string
CANONICALIZED_RESOURCE = f'/{STORAGE_ACCOUNT_NAME}/{CONTAINER_NAME}\nrestype:container'
VERB = 'PUT'
Content_Encoding = ''
Content_Language = ''
Content_Length = ''
Content_MD5 = ''
Content_Type = ''
Date = ''
If_Modified_Since = ''
If_Match = ''
If_None_Match = ''
If_Unmodified_Since = ''
Range = ''
CanonicalizedHeaders = CANONICALIZED_HEADERS
CanonicalizedResource = CANONICALIZED_RESOURCE
STRING_TO_SIGN = (VERB + '\n' + Content_Encoding + '\n' + Content_Language + '\n' +
Content_Length + '\n' + Content_MD5 + '\n' + Content_Type +
Date + '\n' + If_Modified_Since + '\n' + If_Match + '\n' +
If_None_Match + '\n' + If_Unmodified_Since + '\n' + Range + '\n' +
CanonicalizedHeaders + CanonicalizedResource)
signature = base64.b64encode(hmac.new(base64.b64decode(STORAGE_ACCOUNT_KEY), msg=STRING_TO_SIGN.encode('utf-8'), digestmod=hashlib.sha256).digest()).decode()
# Generate the authorization header
auth_header = f'SharedKey {STORAGE_ACCOUNT_NAME}:{signature}'
# Set the request URL
request_url = f'https://{STORAGE_ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER_NAME}?restype=container'
# Set the request headers
request_headers = {
'x-ms-date': REQUEST_DATE,
'x-ms-version': REQUEST_VERSION,
'Authorization': auth_header
}
# Send the request
response = requests.put(request_url, headers=request_headers)
print(response.content)
print(response.status_code)
</code></pre>
<p>The code above uses Shared Key authorization to make the request; replace the access key and storage account name in order to test it.</p>
<p>Current response: 403
Expected response: 201</p>
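<p>Comparing <code>STRING_TO_SIGN</code> with the string the server says it used suggests a missing <code>'\n'</code> between <code>Content_Type</code> and <code>Date</code> (the server's string has 12 newlines after <code>PUT</code>). Building the string from a list makes the slot count explicit; a sketch, with the header values as placeholders:</p>

```python
def build_string_to_sign(verb, std_headers, canonicalized_headers, canonicalized_resource):
    # Shared Key string-to-sign: VERB plus 11 standard header slots, each
    # terminated by '\n', then the canonicalized headers and resource.
    slots = ['Content-Encoding', 'Content-Language', 'Content-Length',
             'Content-MD5', 'Content-Type', 'Date', 'If-Modified-Since',
             'If-Match', 'If-None-Match', 'If-Unmodified-Since', 'Range']
    parts = [verb] + [std_headers.get(name, '') for name in slots]
    return '\n'.join(parts) + '\n' + canonicalized_headers + canonicalized_resource

s = build_string_to_sign(
    'PUT', {},
    'x-ms-date:Tue, 21 Feb 2023 03:27:01 GMT\nx-ms-version:2020-04-08\n',
    '/vidyaflowerapp01/test\nrestype:container')
print(repr(s[:15]))  # 'PUT' followed by 12 newlines, matching the server's string
```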
|
<python><python-3.x><azure><rest><azure-blob-storage>
|
2023-02-21 04:03:03
| 2
| 404
|
GoneCase123
|
75,516,086
| 2,458,922
|
Difference between sklearn.neural_network and simple Deep Learning by Keras with Sequential and Dense Nodes?
|
<p>Given sklearn.neural_network and simple deep learning by Keras with Sequential and Dense layers, are they <strong>mathematically the same</strong>, just two APIs with different computational optimizations?
Yes, Keras has tensor support, can also leverage GPUs, and permits complex models like CNNs and RNNs.</p>
<p>However, are they <strong>mathematically the same</strong>, and will they <strong>yield the same results</strong> given the same hyperparameters, random state, input data, etc.?</p>
<p><strong>Otherwise</strong>, apart from computational efficiency, what makes Keras a better choice?</p>
|
<python><tensorflow><keras><deep-learning><neural-network>
|
2023-02-21 03:38:47
| 1
| 1,731
|
user2458922
|
75,515,931
| 3,136,710
|
Comparing an empty string against a vowel in a condition statement returning True
|
<p>The code simply subtracts 1 from <code>syllables</code> if consecutive vowels are encountered. However, when we iterate in the inner for loop for the second word <code>arms</code>, the variable <code>prev</code> is an empty string again, which should make <code>if (char in vowels) and (prev in vowels):</code> False, but it executes the line <code>syllables -= 1</code> anyway and I don't understand why.</p>
<pre><code>string = "take arms against a sea of troubles, And by opposing end them? To die: to sleep."
vowels = "aeiouAEIOU"
syllables = 0
for word in string.split():
prev = ""
for char in word:
if (char in vowels) and (prev in vowels):
syllables -= 1
prev = char
print(syllables)
</code></pre>
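<p>For reference, the condition can be checked in isolation: <code>in</code> on a string performs a <em>substring</em> test, and the empty string is a substring of every string, so <code>prev in vowels</code> is True on the first character of every word:</p>

```python
vowels = "aeiouAEIOU"
# membership on a str is a substring test, not a character-set test
print("" in vowels)        # True: '' is a substring of every string
print("" in list(vowels))  # False: list membership compares elements
```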
|
<python>
|
2023-02-21 03:00:04
| 0
| 652
|
Spellbinder2050
|
75,515,846
| 41,634
|
Best way to load a mmap dictionary in Python without deserializing
|
<p>Context:</p>
<p>I've got Python processes running on the same container and I want to be able to share a read-only key-value object between them.</p>
<p>I'm aware I could use something like Redis to share that info, but I'm looking for optimal solution in regards to latency as well as memory usage.</p>
<p>My idea was to generate a binary object on disk and open that file using <a href="https://docs.python.org/3/library/mmap.html" rel="nofollow noreferrer">mmap</a></p>
<p>Question:
That brings me to my question: is there a binary format or library that would load a read-only file into RAM and offer a dictionary interface, without the need to deserialize the file content? That way I could mmap the file in every process, all processes would reuse the same RAM for that file, and I could access the file's content through a dict-like interface.</p>
<p>I'm looking for a file/object format for dictionaries, similar to what Parquet is for columnar storage, that could be consumed in read-only mode by Python.</p>
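<p>As a sketch of the idea (the record layout here is entirely my own invention: fixed-width string keys up to 16 bytes and int64 values, sorted so lookups can binary-search the mapped bytes in place):</p>

```python
import mmap
import struct

KEY_W, VAL_W = 16, 8            # fixed-width records: 16-byte key, int64 value
REC_W = KEY_W + VAL_W

def write_table(path, mapping):
    """Write sorted fixed-width records; keys must fit in KEY_W bytes."""
    with open(path, 'wb') as f:
        for key in sorted(mapping):
            f.write(key.encode().ljust(KEY_W, b'\0'))
            f.write(struct.pack('<q', mapping[key]))

class MmapDict:
    """Read-only dict backed by a shared mmap; lookups binary-search the
    file in place, so nothing is deserialized up front."""
    def __init__(self, path):
        f = open(path, 'rb')
        self._mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        self._n = len(self._mm) // REC_W
    def _key_at(self, i):
        off = i * REC_W
        return self._mm[off:off + KEY_W].rstrip(b'\0').decode()
    def __getitem__(self, key):
        lo, hi = 0, self._n
        while lo < hi:
            mid = (lo + hi) // 2
            k = self._key_at(mid)
            if k == key:
                off = mid * REC_W + KEY_W
                return struct.unpack('<q', self._mm[off:off + VAL_W])[0]
            if k < key:
                lo = mid + 1
            else:
                hi = mid
        raise KeyError(key)

write_table('kv.bin', {'alpha': 1, 'beta': 2, 'gamma': 3})
d = MmapDict('kv.bin')
print(d['beta'])  # 2
```

<p>Because every process maps the same read-only file, the pages are shared via the OS page cache, and nothing is parsed until a key is actually looked up. Off-the-shelf stores in this spirit include dbm-style files and LMDB, which is also mmap-based.</p>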
|
<python><mmap><rocksdb><data-serialization>
|
2023-02-21 02:39:54
| 2
| 2,035
|
Damien
|
75,515,809
| 21,169,587
|
Changing delimiters for DOCUMENT_TEXT_DETECTION in Google's Cloud Vision API
|
<p>I'm trying out Google's Cloud Vision API for text detection and have noticed that the detection uses symbols such as <code>:</code>, <code>-</code> and <code>/</code> as breakpoints. I do not want them to be breakpoints in the scanned text, as they make parsing the results for information more difficult. Looking through the documentation, the closest things I've found are <code>DetectedBreak()</code> from the <code>full_text_annotations</code>, which did not seem to have the utility that I need, and <code>TextDetectionParams()</code>, which will be inserted into <code>ImageContext()</code> before being passed to the client as a request. For <code>TextDetectionParams()</code>, there is an option called <code>advanced_ocr_options</code>, but I have not been able to find any details on what options are available in it, either in the library's documentation or in Google's Cloud Vision API documentation found here: <a href="https://cloud.google.com/python/docs/reference/vision/latest/google.cloud.vision_v1.types.TextDetectionParams" rel="nofollow noreferrer">https://cloud.google.com/python/docs/reference/vision/latest/google.cloud.vision_v1.types.TextDetectionParams</a></p>
<p>Is there any documentation for <code>advanced_ocr_options</code> or is there a better way to define breakpoints for the annotation request? The documentation in the link above does not go into any detail about what <code>advanced_ocr_options</code> actually entail, only that it is a <code>MutableSequence[str]</code>.</p>
<p>My existing code for additional context below:</p>
<pre><code>feature = Feature(type_=Feature.Type(11)) # sets detection type to document_text
detection_params = vision.TextDetectionParams() # where advanced_ocr_options is an option in constructor
context = ImageContext(language_hints=["en"], text_detection_params=detection_params)
request = vision.AnnotateImageRequest(image=image, features=[feature], image_context=context)
reply = client.annotate_image(request)
#tried playing around with both text_annotations and full_text_annotations, unable to get the result I was looking for.
annotations = reply.text_annotations
full_annotations = reply.full_text_annotation
</code></pre>
<p>I know it is possible to take the full text returned by <code>text_annotations</code> and <code>full_text_annotations</code> and split it with <code>string.split()</code>, but as I do need to compare the tokens coordinate values I do not want to do this.</p>
<p>This is an example of what I got from the annotations (after formatting the coordinates) from the original text of <code>01/JAN/1990/</code></p>
<pre><code>01
bounds: (472,161),(537,162),(536,197),(471,196)
/
bounds: (553,163),(582,163),(581,197),(552,197)
JAN
bounds: (593,163),(703,165),(702,200),(592,198)
/
bounds: (716,166),(745,166),(744,200),(715,200)
1990
bounds: (759,166),(905,168),(904,203),(758,201)
/
bounds: (921,169),(948,169),(947,203),(920,203)
</code></pre>
<p>While I'm looking for something closer to</p>
<pre><code>01/JAN/1990/
bounds: (472,161), (948,169), (947,203), (471,196)
</code></pre>
<p>If none of this is an option I intend to parse through the annotations individually and re-construct the items based on expected rules, but this could lead to additional errors especially if there is a mischaracterization by the OCR. The code used to format the bounds currently is as follows:</p>
<pre><code> #annotations[0] is the text dump with full spaces and symbols as a single string, but only coordinates of the bbox of the entire text
for text in annotations[1:]:
print(text.description)
vertices = (['({},{})'.format(vertex.x, vertex.y)
for vertex in text.bounding_poly.vertices])
print(vertices) #this is for currently for testing so im just using print to see what the outputs I collect are
</code></pre>
<p>Notably for the image example, I do not need the day (FRI) but can handle its inclusion separately.
<a href="https://i.sstatic.net/ZWNg9.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZWNg9.jpg" alt="Cropped image for the date example" /></a></p>
|
<python><python-3.x><google-cloud-vision>
|
2023-02-21 02:34:07
| 0
| 867
|
Shorn
|
75,515,629
| 1,380,626
|
How to build the bias_act_plugin extension for stylegan3
|
<p>I am trying to run the code in <a href="https://github.com/NVlabs/stylegan3" rel="nofollow noreferrer">stylegan3</a>, and I am getting the error</p>
<pre><code>RuntimeError: Error building extension 'bias_act_plugin'
</code></pre>
<p>I looked online for some ideas and <a href="https://github.com/NVlabs/stylegan3/issues/124" rel="nofollow noreferrer">they told me</a> to install <em>ninja</em> with <em>pip</em>. I have done that but I am still getting the error. How can I deal with this issue?</p>
<p>My gcc and g++ version is 7.5.0.</p>
|
<python><pytorch><ninja><stylegan>
|
2023-02-21 01:47:41
| 0
| 6,302
|
odbhut.shei.chhele
|
75,515,593
| 4,119,822
|
Calling Cython Functions from Numba njited function where numpy ndarrays are involved
|
<p>I am trying to do the following...</p>
<pre class="lang-py prettyprint-override"><code># my_cython.pyx
cpdef api double cyf(double x, double[:] xarr, double[:] yarr) nogil:
# Do stuff
return result
</code></pre>
<pre class="lang-py prettyprint-override"><code># my_cython.pxd
cpdef api double cyf(double x, double[:] xarr, double[:] yarr) nogil
</code></pre>
<pre class="lang-py prettyprint-override"><code># main.py
from numba import njit
@njit
def myfunc():
result = cyf(x, xarr, yarr)
# Do more stuff
</code></pre>
<p>I followed this <a href="https://stackoverflow.com/questions/43016076/calling-cython-functions-from-numba-jitted-code">post</a> on how to show Numba what the cython code is but I am not sure what to do about the memory slices that are defined in <code>cyf</code>'s arguments (these are 1-D float np.ndarrays).</p>
<p>I can get things to compile just fine and I can get the cython function's address using...</p>
<pre class="lang-py prettyprint-override"><code>from numba.extending import get_cython_function_address
addr = get_cython_function_address("mymod", "cyf")
</code></pre>
<p>But, I do not know what to use for the ctypes for the ndarrays...</p>
<pre class="lang-py prettyprint-override"><code>import ctypes
functype = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double, ???, ???)
cyf_numba = functype (addr)
</code></pre>
<p>I tried using pointers like...</p>
<pre class="lang-py prettyprint-override"><code>interp_functype = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double,
ctypes.POINTER(ctypes.c_double),
ctypes.POINTER(ctypes.c_double))
</code></pre>
<p>... but ended up with a njit error:</p>
<pre><code>No implementation of function ExternalFunctionPointer((float64, float64*, float64*) -> float64) found for signature:
ExternalFunctionPointer(float64, array(float64, 1d, C), array(float64, 1d, C))
</code></pre>
<p>Thanks for any help!</p>
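<p>ctypes has no type corresponding to a Cython memoryview, so one common route (a sketch, assuming the Cython signature can be changed) is to export the function with raw pointers plus a length, e.g. <code>cpdef api double cyf(double x, double* xarr, double* yarr, int n) nogil</code>, and declare the matching <code>CFUNCTYPE</code>. The calling convention can be exercised with a pure-Python stand-in:</p>

```python
import ctypes
import numpy as np

functype = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double,
                            ctypes.POINTER(ctypes.c_double),
                            ctypes.POINTER(ctypes.c_double),
                            ctypes.c_int)

@functype
def cyf_standin(x, xarr, yarr, n):
    # stand-in for the compiled cyf, to exercise the calling convention
    return x + sum(xarr[i] * yarr[i] for i in range(n))

xa = np.array([1.0, 2.0])
ya = np.array([3.0, 4.0])
res = cyf_standin(0.5,
                  xa.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
                  ya.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
                  len(xa))
print(res)  # 11.5
```

<p>Inside an <code>@njit</code> function the pointer arguments would come from the arrays' data pointers (for example via <code>arr.ctypes.data</code>); exactly how numba converts them depends on the numba version, so treat this as a starting point rather than a guaranteed recipe.</p>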
|
<python><numpy><cython><numpy-ndarray><numba>
|
2023-02-21 01:34:40
| 0
| 329
|
Oniow
|
75,515,493
| 3,380,902
|
Pandas - look up key and map dictionary to a column
|
<p>I am trying to <code>.map</code> a dictionary to a pandas DataFrame. One of the columns in pandas DataFrame is the key in dict. Here's a reproducible example,</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({
    'id': [0, 1, 2],
    'nm': ['pn1', 'pn2', 'pn3'],
    'v': [np.nan, 25, 0],
    'd': [{'k1': 'v1', 'k2': 'v2', 'k3': 'v3'}] * 3
})

dtd = {
    'pn1': {'s': 100, 'v': 20, 'sv': {'sv1': 500}},
    'pn2': {'s': 150, 'v': 30, 'sv': {'sv1': 400}}
}
</code></pre>
<p>I'd like to look up the <code>key</code>, parse the value <code>sv1</code> from the nested dictionary, and assign it to the pandas series <code>df.v</code>.</p>
<p>Series would look like this:</p>
<pre><code>v
500
400
0
</code></pre>
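<p>A sketch of one way to do the lookup (assuming names missing from the dictionary should keep their existing <code>v</code>, as the expected series implies):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'id': [0, 1, 2],
                   'nm': ['pn1', 'pn2', 'pn3'],
                   'v': [np.nan, 25, 0]})
dtd = {'pn1': {'s': 100, 'v': 20, 'sv': {'sv1': 500}},
       'pn2': {'s': 150, 'v': 30, 'sv': {'sv1': 400}}}

# map each name to its nested sv1 value; fall back to the existing v
looked_up = df['nm'].map(lambda k: dtd.get(k, {}).get('sv', {}).get('sv1'))
df['v'] = looked_up.fillna(df['v'])
print(df['v'].tolist())
```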
|
<python><pandas><dictionary>
|
2023-02-21 01:07:54
| 2
| 2,022
|
kms
|
75,515,479
| 143,397
|
Why would a large TCP transfer randomly stall until data is sent in the opposite direction?
|
<p>I have two programs I'm writing that communicate with a simple ad-hoc protocol over TCP. They work together to transfer large (1-64 MB) binary files from the server to the client.</p>
<p>There's an issue with data transmission stalling that causes a socket timeout on the receive side. I'd like to better understand what is happening in this situation, so that I can learn from this and improve my code.</p>
<h2>The Setup</h2>
<h4>TCP Server</h4>
<p>The TCP server (written with <code>boost::asio</code> using the async functions) that accepts a connection from a client and sends "HI". It runs on a separate, nearby ARM64 host, connected via a simple switch over Ethernet.</p>
<p>When the server receives "GET" it responds by writing a large amount of data (header "DATA" + 1MB binary data + footer "END" = 1048606 bytes) to the socket using <code>async_write()</code>. I believe my data lifetimes are correct.</p>
<p>I've also tried synchronous writes, and it seems to have no effect on this issue.</p>
<h4>TCP Client</h4>
<p>The TCP client is written in Python, and runs on a PC. It uses the low-level socket interface to connect to the server with a blocking socket:</p>
<pre><code> sock = socket.create_connection((address, port), timeout=30.0)
</code></pre>
<p>After connecting, it consumes the "HI" and responds with "GET".</p>
<p>After sending the "GET", it enters a loop that collects all bytes of data sent by the server.</p>
<p>The TCP client knows, a priori, how much data to expect from the server, so it can loop on <code>recv()</code> until all that data is received:</p>
<pre class="lang-py prettyprint-override"><code>import socket
def recv_len(sock: socket.socket, length: int) -> bytes:
chunks = []
bytes_rx = 0
while bytes_rx < length:
chunk = sock.recv(min(length - bytes_rx, 4096))
if chunk == b'':
raise RuntimeError("socket connection broken")
chunks.append(chunk)
bytes_rx += len(chunk)
return b''.join(chunks)
def main():
sock = socket.create_connection((address, port), timeout=30.0)
get = recv_len(sock, 3) # receive "HI\n"
sock.sendall(b"GET\n")
data = recv_len(sock, 1048606)
</code></pre>
<p>The client then processes the data and repeats, by sending another "GET" message.</p>
<h2>The Problem</h2>
<p>When run once, the client and server seem to work correctly.</p>
<p>Now, I have the client running in a tight loop, sending many "GET" requests, and receiving the "DATA" responses, synchronously. Every transaction is completed before the next one starts, and the socket connection is kept up the entire time.</p>
<p>The problem is, that after some seemingly random number of transfers (as few as 4, as many as 300), a transfer will unexpectedly stall. It stalls for the full 30 seconds, and then a socket timeout error is raised on the client.</p>
<p>At this point, I have inspected the client's socket and I can see (via manually calling <code>sock.recv(4096, socket.MSG_PEEK)</code> that there is no data pending on this side. It is genuinely out of data, and waiting for more. On the server side, it is still in the middle of the <code>async_write()</code> operation, with data still to send. There's no error on the server side (the timeout on the server is currently infinite).</p>
<p>I've looked with Wireshark and I can see that the last few packets on the stream are OK - there are no retransmissions or errors. The network is reliable and fast, over a small 1Gb Ethernet switch. Wireshark just shows that everything has stopped, dead.</p>
<p>If I invoke <code>pdb</code> when the socket timeout exception occurs, then set the socket to non-blocking and do a peek, I receive:</p>
<pre><code>*** BlockingIOError: [Errno 11] Resource temporarily unavailable
</code></pre>
<p>With or without the blocking socket, if I send a small amount of data from the client to the server with <code>sock.send(b"ping")</code>, the following happens immediately:</p>
<ol>
<li><p>the server completes the send and invokes the <code>async_write()</code>'s completion handler, with the bytes_transferred equal to what I'd expect to see. From the server's point of view, the transfer is complete.</p>
</li>
<li><p>the client now has data available on the socket to receive, and it can do so if I manually call <code>sock.recv(4096)</code>.</p>
</li>
</ol>
<p>So the stall is cleared by sending data in the <em>opposite</em> direction.</p>
<p>I don't understand why this might stall on occasion, and why would a transfer in one direction require data to be send in the opposite direction to exit the stall? Is there a subtle feature of sending large amounts of data over TCP that I need to be aware of?</p>
|
<python><sockets><tcp><network-programming><boost-asio>
|
2023-02-21 01:05:01
| 1
| 13,932
|
davidA
|
75,515,433
| 5,141,652
|
python tkinter move items from one frame to another
|
<p>I have two lists of items: available items and active items. Initially I add the items to the available list from a dictionary. When an item's activate button is pressed, I would like to move it from the available list into the active list (which should also remove it from the available list). When its remove button is then pressed, I would like to remove the item from the active list and put it back into the available list. I cannot figure out how to do this. I am fairly sure I need a list of the widgets instead of simply using the dictionary items, but I am struggling to understand how that would work.</p>
<p>Here is what I have so far:-</p>
<pre><code>import tkinter as tk
class App(tk.Tk):
def __init__(self):
super().__init__()
self.title("Add Remove Items Example")
self.geometry("800x450")
self.items = {
"Item 1": "Item 1 Description",
"Item 2": "Item 2 Description",
"Item 3": "Item 3 Description",
}
self.active_items = []
self.available_items = []
# Available Items
self.available_frame = tk.Frame(self)
self.available_frame.grid(row=0, column=0, padx=10, pady=10)
if len(self.items) == 0:
tk.Label(self.available_frame, text="No Items Available").grid(
row=0, column=0
)
else:
for item in self.items:
item_frame = tk.Frame(self)
item_frame.grid(row=len(self.available_items), column=0)
tk.Label(item_frame, text=item).grid(row=0, column=0)
tk.Label(item_frame, text=self.items[item]).grid(row=1, column=0)
tk.Button(
item_frame,
text="Activate",
command=lambda item=item: self.activate_item(item),
).grid(row=0, rowspan=2, column=1)
self.available_items.append(item)
# Active Items
self.active_frame = tk.Frame(self)
self.active_frame.grid(row=0, column=1, padx=10, pady=10)
if len(self.active_items) == 0:
tk.Label(self.active_frame, text="No Items Active").grid(row=0, column=0)
def activate_item(self, item):
print(f"Add {item}")
tk.Label(self.active_frame, text=item).grid(
row=len(self.active_items), column=0
)
tk.Button(
self.active_frame,
text="Remove",
command=lambda item=item: self.remove_item(item),
).grid(row=len(self.active_items), column=1)
self.active_items.append(item)
def remove_item(self, item):
print(f"Remove {item}")
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
<p>Can anybody give me tips on how to improve this further, adding the functionality described above?</p>
<p>EDIT</p>
<p>I have continued to work on this and have come up with a workable solution:</p>
<pre><code>import tkinter as tk
class ItemFrame(tk.Frame):
def __init__(self, parent):
super().__init__(parent)
self.parent = parent
# Added an index for the row as len would overwrite the last item in
# the list if the first item was activated and then removed again it would
# be better to remove this if I can redraw the full list in ascending order
self.row = 0
self.label_list = []
self.button_list = []
# label if no items in this list, need to figure out how to add list name
# i.e "No Available Items"/"No Active Items" I will probably pass this in
self.no_items = tk.Label(
self,
text="No Items",
)
self.no_items.grid(row=0, column=0)
def add_item(self, item, action):
self.no_items.grid_forget()
label = tk.Label(
self,
text=item,
)
button = tk.Button(
self,
text=action,
command=lambda: self.button_event(item, action),
)
label.grid(row=self.row, column=0)
button.grid(row=self.row, column=1)
self.label_list.append(label)
self.button_list.append(button)
self.row += 1
def remove_item(self, item):
for label, button in zip(self.label_list, self.button_list):
if item == label.cget("text"):
label.destroy()
button.destroy()
self.label_list.remove(label)
self.button_list.remove(button)
if len(self.label_list) == 0:
self.row = 0
self.no_items.grid(row=0, column=0)
def button_event(self, item, action):
self.remove_item(item)
self.parent.button_event(item, action)
class App(tk.Tk):
def __init__(self):
super().__init__()
self.title("Add Remove Items Example")
self.geometry("800x450")
# create available frame
self.available_frame = ItemFrame(self)
self.available_frame.grid(row=0, column=0)
# populate with dummy data
for i in range(5):
self.available_frame.add_item(f"available item {i}", "Activate")
# create active frame
self.active_frame = ItemFrame(self)
self.active_frame.grid(row=0, column=1)
def button_event(self, item, action):
if action == "Activate":
self.active_frame.add_item(item, "Remove")
elif action == "Remove":
self.available_frame.add_item(item, "Activate")
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
<p>I would prefer it if I could insert the items into the opposite list/frame in alphabetical/numerical order and redraw the full list, but I think that is well outside of my abilities and I don't know where to start. Can anybody help?</p>
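<p>As a starting point (not Tkinter-specific, just the ordering logic), one common approach is to keep the item names in a sorted Python list and let <code>bisect</code> tell you the row at which the new widget belongs; everything at or after that row is re-gridded one row lower, or the whole frame is simply redrawn:</p>

```python
import bisect

class SortedItemList:
    # Sketch: the frame would keep its displayed names sorted; the row index
    # returned by add_item says where to grid the new label/button pair.
    def __init__(self):
        self.items = []

    def add_item(self, item):
        row = bisect.bisect_left(self.items, item)
        self.items.insert(row, item)
        return row  # re-grid every widget at/after `row`, or redraw the list

    def remove_item(self, item):
        self.items.remove(item)

lst = SortedItemList()
rows = [lst.add_item(name) for name in ["item 3", "item 1", "item 2"]]
```

<p>Inside <code>ItemFrame.add_item</code>, the returned row could replace the running <code>self.row</code> counter.</p>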
|
<python><python-3.x><tkinter>
|
2023-02-21 00:53:35
| 1
| 1,037
|
Chris
|
75,515,426
| 18,108,767
|
How to extract surfaces from an 3d array (volume)
|
<p>I create a 3d array like this:</p>
<pre><code>arr = np.zeros((100, 100))
for i in range(5):
for j in range(5):
arr[i*20:(i+1)*20, j*20:(j+1)*20] = i+1
arr = np.tile(arr[np.newaxis,:,:], (100,1,1))
arr = np.transpose(arr, (0, 2, 1))
</code></pre>
<p>The resulting shape will be <code>(100,100,100)</code>. It will look like this at <code>y=50</code>:</p>
<pre><code>fig = plt.figure(figsize=(5, 5))
y=50
x, z = np.arange(arr.shape[0]), np.arange(arr.shape[2])
xv, zv = np.meshgrid(x,z)
plt.pcolormesh(xv, zv, arr[:,y,:].T, cmap='plasma')
plt.colorbar()
plt.xlabel('x')
plt.ylabel('z')
plt.title('3d array - y:{}'.format(y))
plt.gca().invert_yaxis()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/p54lY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p54lY.png" alt="enter image description here" /></a></p>
<p>Now I want to create the interfaces between these regions. In this array there are 5 regions, determined by the values (1, 2, 3, 4 and 5), so there will be 4 interfaces. How can they be estimated and stored in another array to later plot them as surfaces like <a href="https://plotly.com/python/3d-surface-plots/#multiple-3d-surface-plots" rel="nofollow noreferrer">this</a>?</p>
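<p>A sketch of one possible approach, shown in 2D to keep it short (the same <code>np.diff</code> comparison can be applied along each axis of the 3D array): an interface exists wherever two neighbouring cells hold different region values.</p>

```python
import numpy as np

# Toy 2D labelled array: region 1 on the left, region 2 on the right.
arr = np.zeros((10, 10), dtype=int)
arr[:, :5] = 1
arr[:, 5:] = 2

# Mark a cell as a boundary cell if it differs from its next neighbour
# along either axis; in 3D you would add the third axis the same way.
boundary = np.zeros_like(arr, dtype=bool)
boundary[:, :-1] |= np.diff(arr, axis=1) != 0
boundary[:-1, :] |= np.diff(arr, axis=0) != 0
```

<p>Per-interface masks could then be built by intersecting <code>boundary</code> with <code>arr == value</code> for each region value, and the resulting coordinates fed to Plotly's surface plots.</p>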
|
<python><numpy><numpy-ndarray>
|
2023-02-21 00:51:37
| 1
| 351
|
John
|
75,515,229
| 1,839,674
|
tf.data.Dataset.from_generator long to initialize
|
<p>I have a generator that I am trying to put into a <code>tf.data.Dataset</code>.</p>
<pre><code>def static_syn_batch_generator(
total_size: int, batch_size: int, start_random_seed:int=0,
fg_seeds_ss:SampleSet=None, bg_seeds_ss:SampleSet=None, target_level:str="Isotope"):
static_syn = StaticSynthesizer(
samples_per_seed = 10, # will be updated in generator
snr_function ="log10",
random_state = 0 # will be updated in generator
)
static_syn.random_state = start_random_seed
samples_per_seed = math.ceil(batch_size/(len(fg_seeds_ss)*len(bg_seeds_ss)))
# print(f"static_syn.samples_per_seed={static_syn.samples_per_seed}")
# print(f"static_syn.random_state={static_syn.random_state}")
counter = 0
for i in range(total_size):
# Regenerate for each batch
if counter%batch_size == 0: # Regen data for every batch
fg, bg, gross = static_syn.generate(fg_seeds_ss=fg_seeds_ss, bg_seeds_ss=bg_seeds_ss)
fg_sources_cont_df = fg.sources.groupby(axis=1, level=target_level).sum()
bg_sources_cont_df = bg.sources.groupby(axis=1, level=target_level).sum()
gross_sources_cont_df = gross.sources.groupby(axis=1, level=target_level).sum()
static_syn.random_state += 1
print(static_syn.random_state)
# print(f"static_syn.samples_per_seed={static_syn.samples_per_seed}")
# print(f"static_syn.random_state={static_syn.random_state}")
fg_X = fg.spectra.values[i%batch_size]
fg_y = fg_sources_cont_df.values[i%batch_size].astype(float)
bg_X = bg.spectra.values[i%batch_size]
bg_y = bg_sources_cont_df.values[i%batch_size].astype(float)
gross_X = gross.spectra.values[i%batch_size]
gross_y = gross_sources_cont_df.values[i%batch_size].astype(float)
yield (fg_X, fg_y), (bg_X, bg_y), (gross_X, gross_y)
counter += 1
</code></pre>
<p>When running by hand it works and takes 6 seconds to output and compare two instances of the generator (to make sure random seeding is working):</p>
<pre><code>total_size = 10
batch_size = 2
batch_gen = static_syn_batch_generator(total_size, batch_size, start_random_seed=0, fg_seeds_ss=fg_seeds_ss, bg_seeds_ss=bg_seeds_ss)
fg0 = []
bg0 =[]
gross0 = []
for i, ((fg_X, fg_y), (bg_X, bg_Y), (gross_X, gross_y)) in enumerate(batch_gen):
fg0.append(fg_X)
bg0.append(bg_X)
gross0.append(gross_X)
print(f"len of fg0: {len(fg0)}")
print(f"len of bg0: {len(bg0)}")
print(f"len of gross0: {len(gross0)}")
batch_gen = static_syn_batch_generator(total_size, batch_size, start_random_seed=0, fg_seeds_ss=fg_seeds_ss, bg_seeds_ss=bg_seeds_ss)
fg1 = []
bg1 =[]
gross1 = []
for i, ((fg_X, fg_y), (bg_X, bg_y), (gross_X, gross_y)) in enumerate(batch_gen):
fg1.append(fg_X)
bg1.append(bg_X)
gross1.append(gross_X)
print(f"len of fg1: {len(fg1)}")
print(f"len of bg1: {len(bg1)}")
print(f"len of gross1: {len(gross1)}")
assert np.array_equal(fg0, fg1)
assert np.array_equal(bg0, bg1)
assert np.array_equal(gross0, gross1)
</code></pre>
<p>However, when I try to instantiate a dataset with tf.data.Dataset.from_generator, it takes forever to initialize (I actually don't know if it finishes; it is on minute 15 currently).</p>
<pre><code>fg_seeds_ss, bg_seeds_ss = get_dummy_seeds().split_fg_and_bg()
total_samples = 10
batch_size = 2
start_random_seed = 0
#TODO: TAKES FOREVER
dataset = tf.data.Dataset.from_generator(
generator=static_syn_batch_generator,
args=(total_samples, batch_size, start_random_seed, fg_seeds_ss, bg_seeds_ss, "Isotope"),
output_types=((tf.float32, tf.float32),(tf.float32, tf.float32),(tf.float32, tf.float32))
)
</code></pre>
<p>Does anyone have any suggestions, or see what I am doing wrong?</p>
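<p>One plausible cause worth checking: everything passed through <code>args=</code> is converted to tensors, so complex Python objects such as the <code>SampleSet</code> seeds may be a poor fit. A common workaround is a zero-argument closure that captures the objects directly; a generic sketch with a dummy generator (not the real <code>StaticSynthesizer</code> code):</p>

```python
def make_generator(total_size, batch_size, seed, fg_seeds, bg_seeds):
    # The Python objects are captured by closure, so nothing has to pass
    # through `args=` (which converts every argument to a tensor). The
    # dummy body just demonstrates the pattern; batch_size is kept only
    # for signature parity with the question's generator.
    def gen():
        for i in range(total_size):
            yield (i * seed, len(fg_seeds), len(bg_seeds))
    return gen

gen = make_generator(4, 2, 3, ["fg"], ["bg1", "bg2"])
items = list(gen())
```

<p>The closure (here <code>gen</code>) could then be handed to <code>tf.data.Dataset.from_generator(gen, output_types=...)</code> with no <code>args=</code> at all.</p>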
|
<python><tensorflow><tf.data.dataset><tf.dataset>
|
2023-02-20 23:54:31
| 0
| 620
|
lr100
|
75,515,139
| 1,368,342
|
Extract stdout from long-running process
|
<p><em>Disclaimer: I have seen many similar questions, but either they do not use <code>asyncio</code>, or they don't seem to do exactly what I do.</em></p>
<p>I am trying to get the output of a long-running command (it's actually a server that logs out to stdout) with the following:</p>
<pre class="lang-py prettyprint-override"><code> proc = await asyncio.create_subprocess_shell(
"my_long_running_command",
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
limit=10
)
stdout = ""
try:
stdout, stderr = await asyncio.wait_for(proc.communicate(), 2.0)
except asyncio.exceptions.TimeoutError:
pass
print(stdout)
</code></pre>
<p>But I don't get anything. If I use <code>ls</code> instead of <code>my_long_running_command</code>, it works. The only difference I see is that <code>ls</code> returns, while my command does not.</p>
<p>I am using <code>wait_for</code> because <a href="https://docs.python.org/3/library/asyncio-subprocess.html#asyncio.subprocess.Process" rel="nofollow noreferrer">the documentation</a> says:</p>
<blockquote>
<p>the communicate() and wait() methods don’t have a timeout parameter: use the wait_for() function;</p>
</blockquote>
<p>I tried with <code>limit=10</code>, hoping it would help with the buffering, and without it. It seems to have no effect at all.</p>
<p>Though I don't really understand how they differ, I tried both <code>asyncio.create_subprocess_shell</code> and <code>asyncio.create_subprocess_exec</code> without success.</p>
<p>Is there a way to extract stdout from the process before it returns?</p>
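<p>One way that is often suggested is to read the pipe line by line with <code>proc.stdout.readline()</code> instead of <code>communicate()</code>, which only returns once the process exits. A self-contained sketch (using a short-lived Python child purely for illustration):</p>

```python
import asyncio
import sys

async def stream_stdout(cmd, max_lines=5):
    # Read stdout incrementally rather than waiting for process exit.
    proc = await asyncio.create_subprocess_exec(
        *cmd, stdout=asyncio.subprocess.PIPE
    )
    lines = []
    while len(lines) < max_lines:
        line = await proc.stdout.readline()
        if not line:          # EOF: the child closed stdout
            break
        lines.append(line.decode().rstrip())
    try:
        proc.kill()           # stop a still-running server once we have enough
    except ProcessLookupError:
        pass                  # the child already exited
    await proc.wait()
    return lines

lines = asyncio.run(stream_stdout([sys.executable, "-c", "print('hello')"]))
```

<p>Note that a long-running child may also block-buffer its own stdout when writing to a pipe; for Python children, running them with <code>-u</code> (or <code>PYTHONUNBUFFERED=1</code>) is a common fix.</p>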
|
<python><subprocess><python-asyncio>
|
2023-02-20 23:33:58
| 1
| 8,143
|
JonasVautherin
|
75,515,109
| 2,813,606
|
ImportError: cannot import name 'NDArray' from 'numpy.typing' (Prophet)
|
<p>In my Python experience, I have not had as much trouble installing any other package as I have had with Prophet.</p>
<p>Here is a snippet of my code:</p>
<pre><code>#Import libraries
import pandas as pd
from prophet import Prophet
#Load data
test = pd.read_csv('https://raw.githubusercontent.com/facebook/prophet/main/examples/example_wp_log_peyton_manning.csv')
test.head()
# Train model
model = prophet()
model.fit(test)
</code></pre>
<p>I get the following error:</p>
<pre><code>----> 4 from prophet import Prophet
7 # Train model
8 model = prophet()
File ~/anaconda3/envs/prophet39/lib/python3.9/site-packages/prophet/__init__.py:7
1 # Copyright (c) 2017-present, Facebook, Inc.
2 # All rights reserved.
3 #
4 # This source code is licensed under the BSD-style license found in the
5 # LICENSE file in the root directory of this source tree. An additional grant
6 # of patent rights can be found in the PATENTS file in the same directory.
----> 7 from prophet.forecaster import Prophet
9 from pathlib import Path
10 about = {}
File ~/anaconda3/envs/prophet39/lib/python3.9/site-packages/prophet/forecaster.py:17
15 import numpy as np
16 import pandas as pd
---> 17 from numpy.typing import NDArray
19 from prophet.make_holidays import get_holiday_names, make_holidays_df
20 from prophet.models import StanBackendEnum
ImportError: cannot import name 'NDArray' from 'numpy.typing' (/Users/user_name/anaconda3/envs/prophet39/lib/python3.9/site-packages/numpy/typing/__init__.py)
</code></pre>
<p>I have no idea how to fix this problem. I've encountered several different <code>prophet/pystan</code> issues along the way, but it seems like I've hit quite the roadblock this time.</p>
|
<python><numpy><facebook-prophet>
|
2023-02-20 23:28:06
| 1
| 921
|
user2813606
|
75,514,999
| 13,676,262
|
How to call a function in a cog from a view using Nextcord?
|
<p>I have a Cog and a view in Nextcord. I'm trying to call the function <code>get_magic_word()</code> from my view, but I've found no way of achieving it. Do you know how it can be done?
Here is a simplification of the situation:</p>
<pre class="lang-py prettyprint-override"><code>class MyView(nextcord.ui.View):
def __init__(self):
super().__init__(timeout=None)
@nextcord.ui.button(label='Send Magic', style=nextcord.ButtonStyle.primary)
async def magicbutton(self, button: nextcord.ui.Button, interaction: nextcord.Interaction):
magic_word = MyCog.get_magic_word() # Here, I would like to access MyCog, but can't figure how
await interaction.response.send_message(magic_word)
class MyCog(commands.Cog):
def __init__(self, client):
self.client = client
def get_magic_word(self):
# Doing stuff
return "Banana"
@nextcord.slash_command(description="Sends the button")
async def send_the_view(self, interaction: nextcord.Interaction):
view = MyView()
await interaction.response.send_message("Here is the view :", view=view)
def setup(client):
client.add_cog(MyCog(client=client))
</code></pre>
<p>I thought I could define my view inside the Cog, but I find it useless, since I couldn't access the function either. Also, I thought I could bring the function <code>get_magic_word()</code> outside the cog, but this is a bad idea since the function needs to call <code>self</code> to do its work.</p>
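<p>A framework-free sketch of the usual pattern: pass the cog instance into the view's constructor when the cog creates it, so the button callback can reach the cog's methods through <code>self.cog</code> (class and method names mirror the question; the Nextcord plumbing is omitted):</p>

```python
class MyView:
    def __init__(self, cog):
        # Keep a reference to the cog so callbacks can call its methods.
        self.cog = cog

    def on_button(self):
        # In the real view this would run inside the button callback.
        return self.cog.get_magic_word()

class MyCog:
    def get_magic_word(self):
        return "Banana"

    def send_the_view(self):
        return MyView(self)   # pass `self` when constructing the view

view = MyCog().send_the_view()
```

<p>In Nextcord terms, <code>send_the_view</code> would build the view as <code>MyView(self)</code> before <code>interaction.response.send_message(..., view=view)</code>.</p>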
|
<python><discord><nextcord>
|
2023-02-20 23:07:46
| 1
| 340
|
Deden
|
75,514,895
| 1,497,385
|
Creating a cheap "overlay" Python virtual environment on top of current one without re-installing packages
|
<p>Problem:</p>
<p>My program is getting a list of (<code>requirements_123.txt</code>, <code>program_123.py</code>) pairs (actually a list of script lines like <code>pip install a==1 b==2 c==3 && program_123.py</code>).</p>
<p><strong>My program needs to run each program in an isolated virtual environment based on the current environment.</strong></p>
<p>Requirements:</p>
<ul>
<li>Current environment is not modified</li>
<li>Program environment is based on the current environment</li>
<li>Not reinstalling the packages from the current env. It's slow. It does not really work (package sources might be missing, build tools might be missing). No <code>pip freeze | pip install</code> please</li>
<li>Fast. Copying gigabytes of files from current environment to a new environment every time is too slow. Symlinking might be OK as a last resort.</li>
</ul>
<p>Ideal solution: I set some environment variables for each program, pointing to a new virtual environment dir, and then just execute the script and pip does the right thing.</p>
<p><strong>How can I do this?</strong></p>
<p>What do I mean by "overlay": Python already has some "overlays". There are system packages and user packages. User packages "shadow" the system packages, but non-shadowed system packages are still visible to the programs. When pip installs the packages in the user directory, it does not uninstall the system package version. This is the exact behavior I need. I just need a third overlay layer: "system packages", "user packages", "program packages".</p>
<p>Related questions (but they do not consider the user dir packages, only the virtual environments):</p>
<p><a href="https://stackoverflow.com/questions/74436125/cascading-virtual-python-environnements">"Cascading" virtual Python environnements</a>
<a href="https://stackoverflow.com/questions/61019081/is-it-possible-to-create-nested-virtual-environments-for-python">Is it possible to create nested virtual environments for python?</a></p>
<p>P.S.</p>
<blockquote>
<p>If pip freeze doesn't even work, you have much larger problems lurking.</p>
</blockquote>
<p>There are many reasons why the result of <code>pip freeze > requirements.txt</code> does not work in practice:</p>
<ul>
<li>System-installed packages installed using <code>apt</code>.</li>
<li>Packages installed from different package indexes, not PyPI (PyTorch does that). Package <code>conda-package-handling</code> is not on PyPI.</li>
<li>Conda packages.</li>
<li>Packages built from source some time ago (and your compilers are different now).</li>
<li>Installs from git or zip/whl files.</li>
<li>Editable installs.</li>
</ul>
<p>I've just checked a default notebook instance in Google Cloud and almost half of the <code>pip freeze</code> list looks like this:</p>
<pre><code>threadpoolctl @ file:///tmp/tmp79xdzxkt/threadpoolctl-2.1.0-py3-none-any.whl
tifffile @ file:///home/conda/feedstock_root/build_artifacts/tifffile_1597357726309/work
</code></pre>
<p>Also packages like <code>conda-package-handling</code> are not even on PyPI.</p>
<p>Anyways, this is just one of the many reasons why <code>pip freeze | pip install</code> does not work in practice.</p>
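<p>A minimal sketch of one overlay idea, under the assumption that each program's extra packages are installed with something like <code>pip install --target overlay_dir -r requirements_123.txt</code>: prepending that directory to <code>PYTHONPATH</code> makes it shadow, but not replace, the current environment's packages:</p>

```python
import os

def overlay_env(overlay_dir):
    # Hypothetical helper: build an environment mapping in which packages
    # installed into overlay_dir (e.g. via `pip install --target`) shadow
    # the base environment, without touching the base environment itself.
    env = dict(os.environ)
    existing = env.get("PYTHONPATH", "")
    env["PYTHONPATH"] = overlay_dir + (os.pathsep + existing if existing else "")
    return env

env = overlay_env("/tmp/overlay")
```

<p>The returned mapping would then be passed as <code>env=</code> to <code>subprocess.run([...])</code> for each program. This only approximates a true third layer; <code>pip</code> itself still resolves dependencies against the base environment.</p>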
|
<python><pip><virtualenv><python-venv><virtual-environment>
|
2023-02-20 22:51:45
| 1
| 6,866
|
Ark-kun
|
75,514,846
| 1,930,248
|
pip says version 40.8.0 of setuptools does not satisfy requirement of setuptools>=40.8.0
|
<p>I am seriously stumped with this one and can't find any questions that match.</p>
<p>If I run <code>pip3 show setuptools</code> or <code>pip2 list</code>, both say that <code>setuptools 40.8.0</code> is installed, but when I try to install a module locally, from a local source code directory, I get the error that says <code>No matching distribution found for setuptools>=40.8.0</code></p>
<p>Due to access and firewall restrictions, I have to install the module into my home directory and using what's already installed on the system.</p>
<p>This worked with the previous version of the module, but now it's failing.</p>
|
<python><pip>
|
2023-02-20 22:42:51
| 2
| 917
|
RayInNoIL
|
75,514,819
| 371,334
|
How can we redact dynamic strings in Datadog traces?
|
<p>If we do <code>raise RuntimeError(password)</code> in an endpoint handler, the password will show up in the Datadog trace. How can we tell Datadog that certain variables should be redacted?</p>
|
<python><datadog>
|
2023-02-20 22:38:35
| 1
| 1,070
|
Andrew
|
75,514,736
| 2,595,859
|
Machine Learning associate solutions with issues
|
<p>I'm new with ML.
I have basically a set of solutions, articles with title and steps, for common app issues in our departament.
I want to use ML to scan those solutions and train a model that based on user inputs like "In My pc microsoft word is not starting".
I'm still not sure if solution would be model triying to elaborate a series on steps based on the articles it has or maybe propose a set of articles.</p>
<p>My main concert is which algoritm I should use for each case.</p>
<p>thanks in advance</p>
|
<python><tensorflow><machine-learning><dataset>
|
2023-02-20 22:26:45
| 1
| 471
|
UserMan
|
75,514,722
| 523,612
|
Why do I get "TypeError: open() missing required argument 'flags' (pos 2)" or "TypeError: an integer is required (got type str)" when opening a file?
|
<p><strong>If your question was closed as a duplicate of this, it is because</strong> you have code along the lines of:</p>
<pre><code>from os import *
with open('example.txt', mode='r') as f:
print('successfully opened example.txt')
</code></pre>
<p>This causes an error message that says <code>TypeError: open() missing required argument 'flags' (pos 2)</code>.</p>
<p>Alternately, you may have tried specifying the <code>mode</code> as a positional argument instead of a keyword argument, like:</p>
<pre><code>from os import *
with open('example.txt', 'r') as f:
print('successfully opened example.txt')
</code></pre>
<p>But that does not work either - it gives a different error, which says <code>TypeError: an integer is required (got type str)</code>.</p>
<p>You may have noticed that there is no such keyword argument <code>flags</code> for the built-in <code>open</code> function:</p>
<pre><code>>>> help(open)
Help on built-in function open in module io:
open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None)
Open file and return a stream. Raise OSError upon failure.
</code></pre>
<p>Indeed, if you try removing <code>from os import *</code> from the code example, you should find that the problem is resolved.</p>
<p>This question is an artificial canonical duplicate, to explain what happened, i.e.: <strong>Why is it different when the code says <code>from os import *</code></strong>? Also, how can the problem be resolved?</p>
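<p>A minimal demonstration of the shadowing and of one way to undo it (the cleaner fix is simply to avoid <code>from os import *</code> and write <code>import os</code> instead):</p>

```python
import builtins
import os

open = os.open                 # what `from os import *` effectively does
shadowed = open is os.open     # the name now refers to os.open
open = builtins.open           # one fix: rebind the name to the built-in
restored = open is builtins.open
```

<p><code>os.open</code> is the low-level wrapper around the OS call, whose second positional parameter is the integer <code>flags</code>; that is why the shadowed name rejects <code>mode='r'</code> and a string second argument.</p>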
|
<python><file-io><python-os><shadowing>
|
2023-02-20 22:24:56
| 1
| 61,352
|
Karl Knechtel
|
75,514,549
| 8,127,672
|
Python inject a class based on a value passed to a function
|
<p>Let's say I have two classes which have methods accepting arguments as below:</p>
<pre><code>class Foo:
def __m1(*scenario):
print("Foo.m1()")
for s in scenario:
print(s)
class Bar:
def __m1(*scenario):
print("Bar.m1()",scenario)
</code></pre>
<p>Now we create an interface class that will inherit those two classes.</p>
<pre><code>class Interface(Foo, Bar):
def __init__(self):
self.servicer = None
self.scenario = None
super().__init__()
def say_hello(self):
print("Interface.say_hello")
</code></pre>
<p>It has its own attributes and methods. The servicer attribute will be set by the client to tell the interface whether it should use methods from Foo or Bar. All it takes is a little bit of <code>__getattr__</code> redirection:</p>
<pre><code>def __getattr__(self, key: str):
key2 = f"_{self.servicer}__{key}"  # _Bar__m1 or _Foo__m1.
return self.__getattribute__(key2)
</code></pre>
<p>At this point, when we call interface.m1(), it will actually call interface._Foo__m1() or interface._Bar__m1().</p>
<p>Then the client is simply:</p>
<pre><code>class Client:
interface = Interface()
def call_servicer(self, method_name, whichservicer):
self.interface.servicer = whichservicer
self.interface.method = method_name
self.interface.m1() ## Need a way to avoid this
client = Client()
client.call_servicer("m1","Foo")
client.call_servicer("m1","Bar")
client.interface.say_hello()
</code></pre>
<p>Now, when I run the code, I see the response below:</p>
<pre><code>Foo.m1()
<__main__.Interface object at 0x102fc1fa0>
Bar.m1() (<__main__.Interface object at 0x102fc1fa0>,)
Interface.say_hello
Process finished with exit code 0
</code></pre>
<p>In brief the problem I am trying to solve is as follows:</p>
<ul>
<li>I would like to pass arguments to the method in class Foo or Bar using reflection</li>
<li>Also, I would like to avoid <code>self.interface.m1()</code>; instead, I would like to pass the method name as a variable</li>
</ul>
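<p>A hedged sketch of a simpler dispatch that covers both points, using plain <code>getattr</code> and a lookup dict instead of name mangling (the <code>SERVICERS</code> dict and the return values are illustrative, not part of the original code):</p>

```python
class Foo:
    def m1(self, *scenario):
        return ("Foo.m1", scenario)

class Bar:
    def m1(self, *scenario):
        return ("Bar.m1", scenario)

# Map servicer names to instances; no inheritance or __getattr__ needed.
SERVICERS = {"Foo": Foo(), "Bar": Bar()}

def call_servicer(method_name, servicer_name, *args):
    # getattr resolves the method by name, so the method name can be a
    # plain variable; extra args flow through to the method.
    method = getattr(SERVICERS[servicer_name], method_name)
    return method(*args)

result = call_servicer("m1", "Foo", "s1", "s2")
```

<p>This keeps the arguments explicit instead of smuggling them through an attribute name.</p>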
|
<python><python-3.x>
|
2023-02-20 22:00:53
| 4
| 534
|
JavaMan
|
75,514,538
| 371,334
|
How to have different sample rates for different endpoints with Datadog
|
<p>Datadog can be configured to have different sample rates for different services via the <code>DD_TRACE_SAMPLING_RULES</code> environment variable. But how could we have different sample rates for different endpoints within a service?</p>
|
<python><datadog>
|
2023-02-20 21:59:48
| 1
| 1,070
|
Andrew
|
75,514,243
| 854,183
|
UnicodeEncodeError error when I redirect the output of a Python script into file
|
<p>I am writing a Python script to gather statistics on a directory that contains a very large number of files. The script itself works correctly by printing text into the terminal. I get no error. However, when I redirect the output stream into file in bash, I get the error below:</p>
<pre><code>python file_paths_processing.py -l 100 --dir-path "F:/bigdirectory" > dump.txt
Traceback (most recent call last):
File "D:\My_Designs\Python\myscripts\file_paths.py", line 42, in <module>
main()
File "D:\My_Designs\Python\myscripts\file_paths.py", line 39, in main
print("{:>4} {}".format(len(file), file))
File "C:\Users\myuser\AppData\Local\Programs\Python\Python310\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 45-54: character maps to <undefined>
</code></pre>
<p>From what I understand, there are some files somewhere that contain unicode characters in their name. This is certainly the cause behind the fault. But what is the correct way to fix this? If the script can write into the terminal without error, then why not just redirect it into a text file?</p>
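<p>One commonly suggested workaround, sketched below, is to force stdout to UTF-8 from inside the script, so the redirected stream no longer uses the legacy <code>cp1252</code> codec (setting <code>PYTHONIOENCODING=utf-8</code> before running the script is an equivalent external fix):</p>

```python
import sys

# When stdout is redirected on Windows, Python may pick a legacy codec
# such as cp1252; UTF-8 can represent any filename the filesystem holds.
if hasattr(sys.stdout, "reconfigure"):  # TextIOWrapper, Python 3.7+
    sys.stdout.reconfigure(encoding="utf-8", errors="replace")
print("naïve filename: 文件.txt")
```

<p>The terminal works because its codec differs from the one chosen for a redirected pipe or file, which is why the error only appears under redirection.</p>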
|
<python><bash><unicode>
|
2023-02-20 21:20:42
| 1
| 2,613
|
quantum231
|
75,514,168
| 3,593,246
|
Identify a unique tag using BeautifulSoup
|
<p>BeautifulSoup treats two tags as identical if they both contain the exact same content, even when the two tags are <em>not</em> the same DOM node.</p>
<p>Example:</p>
<pre><code>from bs4 import BeautifulSoup
x = '<div class="a"><span>hello</span></div><div class="b"><span>hello</span></div>'
page = BeautifulSoup(x, 'html.parser')
spans = page.select('span')
spans[0] == spans[1] # prints True
</code></pre>
<p>The way I have managed to get around this is to account for their parents as well, e.g.:</p>
<pre><code>spans = page.select('span')
spans[0] == spans[1] and list(spans[0].parents) == list(spans[1].parents) # prints False
</code></pre>
<p>However, this method - when used on a normal HTML page with many nested DOM nodes - is often an order of magnitude slower than just comparing spans[0] to spans[1] without the parents.</p>
<p>My question is: is there a more efficient way to determine, via Beautiful Soup, whether two nodes are truly the same one?</p>
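<p>For reference, a sketch of the identity check: comparing with <code>is</code> asks whether the two results are literally the same object, which is O(1) and does not involve the parents at all:</p>

```python
from bs4 import BeautifulSoup

x = '<div class="a"><span>hello</span></div><div class="b"><span>hello</span></div>'
page = BeautifulSoup(x, "html.parser")
spans = page.select("span")

same_content = spans[0] == spans[1]   # structural equality: True here
same_node = spans[0] is spans[1]      # object identity: False here
```

<p>Since BeautifulSoup builds one Python object per DOM node within a single parse, <code>is</code> distinguishes "same node" from "same markup" without walking the tree.</p>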
|
<python><beautifulsoup>
|
2023-02-20 21:09:58
| 1
| 362
|
jayp
|
75,513,921
| 9,841,408
|
How do I manually set the max value for the Y axis in my pandas boxplot?
|
<p>I'm looking at <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.boxplot.html" rel="nofollow noreferrer">this</a> official Pandas documentation, which says I can specify an "object of class matplotlib.axes.Axes" when creating my boxplot.</p>
<p>How do I format this boxplot axis object so that I can manually set the maximum Y axis value?</p>
<p>I've seen questions and answers on here relating to changing the Y axis <em>after</em> the boxplot is created, but those have not worked for me and I'd like to set the Y axis max at the time of the boxplot's creation.</p>
<p>This is the code I have so far:</p>
<pre><code>import pandas as pd
prices= pd.read_csv('..\priceData.csv')
boxplot = prices.boxplot(column=['price'])
</code></pre>
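<p>For illustration, a sketch of how the <code>ax=</code> parameter can be used, with the hypothetical CSV replaced by inline data (the <code>Agg</code> backend is only there so the sketch runs headless):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import pandas as pd

prices = pd.DataFrame({"price": [1, 5, 9, 14, 200]})

fig, ax = plt.subplots()               # create the Axes object up front
prices.boxplot(column=["price"], ax=ax)  # draw the boxplot onto it
ax.set_ylim(0, 100)                      # pin the y-axis range
```

<p>Setting the limit after the draw call ensures autoscaling does not override it.</p>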
|
<python><pandas><matplotlib><boxplot><axes>
|
2023-02-20 20:38:45
| 2
| 399
|
Catlover
|
75,513,875
| 11,229,812
|
Python -How to compare columns from two dataframe and create 3rd with new values?
|
<p>I have two dataframes that contain names. What I need to do is check which of the names in the second dataframe are not present in the first dataframe.
For this example:</p>
<pre><code>list1 = ['Mark','Sofi','Joh','Leo','Jason']
df1 = pd.DataFrame(list1, columns =['Names'])
</code></pre>
<p>and</p>
<pre><code>list2 = ['Mark','Sofi','David','Matt','Jason']
df2 = pd.DataFrame(list2, columns =['Names'])
</code></pre>
<p>So basically, in this simple example, we can see that David and Matt from the second dataframe do not exist in the first dataframe.</p>
<p>I need programmatically to come up with 3rd dataframe that will have results like this:</p>
<pre><code>Names
David
Matt
</code></pre>
<p>My first thought was to try the pandas <code>merge</code> function, but I am unable to get the unique set of names from df2 that are not in df1.</p>
<p>Any thoughts on how to do this?</p>
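<p>A short sketch of one common way to do this with <code>Series.isin</code>, using the example data from the question:</p>

```python
import pandas as pd

df1 = pd.DataFrame({"Names": ["Mark", "Sofi", "Joh", "Leo", "Jason"]})
df2 = pd.DataFrame({"Names": ["Mark", "Sofi", "David", "Matt", "Jason"]})

# Keep the rows of df2 whose name does not appear anywhere in df1.
df3 = df2[~df2["Names"].isin(df1["Names"])].reset_index(drop=True)
```

<p>An equivalent merge-based route is <code>df2.merge(df1, how="left", indicator=True)</code> and keeping the rows where <code>_merge == "left_only"</code>.</p>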
|
<python><python-3.x><pandas>
|
2023-02-20 20:32:44
| 3
| 767
|
Slavisha84
|
75,513,842
| 1,750,821
|
Python SQLite math functions (no such function: SQRT)
|
<p>With Python 3.11 I try to use math function (sqrt) with SQLite SQL query. Let's say:</p>
<pre><code>import sqlite3 as sq
import pandas as pd
con = sq.connect('database.db')
query = "SELECT SQRT(value) FROM table;"
pd.read_sql_query(query, con, index_col=None)
</code></pre>
<p>It throws "no such function: SQRT" in my local python environment. However when I use Jupyter Lab (same sqlite3 version - 3.39.3) or DB Browser it works with exactly the same database. So I have a proof that it can be done.</p>
<p>I've seen that by default SQLite is built without the math flag, as in: <a href="https://stackoverflow.com/questions/70451170/sqlite3-math-functions-python">SQLite3 math functions Python</a>.
However, <code>sqlite3</code> is delivered as part of the Python standard library, so I would probably have to rebuild my Python installation. Hence the question:</p>
<p>How can I configure python to use sqlite with built-in support for math functions?</p>
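<p>Short of rebuilding Python against a math-enabled SQLite, one workaround is to register Python's own <code>math.sqrt</code> as an SQL function on the connection; a minimal sketch using an in-memory database:</p>

```python
import math
import sqlite3 as sq

con = sq.connect(":memory:")
# If the bundled SQLite was compiled without SQLITE_ENABLE_MATH_FUNCTIONS,
# a Python callable can be registered under the same function name.
con.create_function("SQRT", 1, math.sqrt)
value = con.execute("SELECT SQRT(9.0)").fetchone()[0]
```

<p>After <code>create_function</code>, the original query (including through <code>pd.read_sql_query</code>) should resolve <code>SQRT</code> on that connection.</p>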
|
<python><sqlite>
|
2023-02-20 20:27:48
| 1
| 366
|
d3m0n
|
75,513,793
| 10,731,523
|
Python Tensorflow - how does GradientTape operate and why can it give None as a result?
|
<p><strong>Abstract</strong><br />
I am trying to create a neural network with custom training. In my attempts I ended up with a<br />
<code>ValueError: No gradients provided for any variable</code> error. While trying to figure it out, I've found out that the problem appears because GradientTape sees no connection between the parameters given in <code>GradientTape.gradient()</code>, which in turn usually happens because not every variable is watched by default and some of them weren't marked with the <code>watch()</code> function.</p>
<p><strong>Problem</strong><br />
The code here raises the value error <code>ValueError: No gradients provided for any variable: (['dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0', 'dense_2/kernel:0', 'dense_2/bias:0'],). Provided `grads_and_vars` is (None,...</code> because <code>GradientTape.gradient()</code> returns a list of <code>None</code> instead of gradients for some reason.<br />
The code (this is only a draft so it is kind of dirty):</p>
<pre><code>def der2(f, x):
res = [(f[0] - 2 * f[1] + f[2]) / ((x[1] - x[0]) * (x[2] - x[1]))] * 2
res.extend(
[(f[i - 1] - 2 * f[i] + f[i + 1]) / ((x[i] - x[i - 1]) * (x[i + 1] - x[i])) for i in range(2, len(f) - 2)])
res.extend([(f[-3] - 2 * f[-2] + f[-1]) / ((x[-1] - x[-2]) * (x[-2] - x[-3]))] * 2)
return res
dim1 = 50
dim2 = 50
def loss(model, x, y, training):
y_ = model(x)
s = 0
for inputs, result in zip(x, y_):
res_matrix = result.numpy()
res_matrix = np.reshape(res_matrix, (dim1, dim2))
transposed = res_matrix.transpose()
si = list()
sj = list()
for yi in res_matrix:
si.append(der2(yi, inputs[:dim1]))
for yi in transposed:
sj.append(der2(yi, inputs[dim1:]))
si = np.array(si) + np.transpose(np.array(sj))
si = tf.reduce_sum(si)
for i in range(len(res_matrix)):
for j in range(len(transposed)):
si += 30 * np.exp(0.007 * ((inputs[i] - 5) ** 2) * ((inputs[dim1 + j] - 5) ** 2))
s += abs(si)
return s / 2500
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets, training=True)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
num_epochs = 1501
x = np.linspace(0, 10, num=dim1)
x = np.append(x, np.linspace(0, 10, num=dim2))
y = 0
train_dataset = tf.data.Dataset.from_tensor_slices(([[x]], [[y]]))
model = tf.keras.Sequential([
Dense(100, input_shape=(100,), activation=tf.nn.relu), # input shape required
Dense(100, activation=tf.nn.relu),
Dense(2500, dtype='float64')
])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.00001)
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
# Training loop - using batches of 32
for x, y in train_dataset:
# Optimize the model
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
epoch_loss_avg.update_state(loss_value)
if epoch % 50 == 0:
avg_loss = epoch_loss_avg.result()
optimizer.learning_rate = avg_loss / 1000
print("Epoch {:03d}: Loss: {:.3f}".format(epoch, avg_loss))
for x, y in train_dataset:
y_ = model(x)
plt.plot(x[0], y_[0])
plt.show()
</code></pre>
<p><strong>My understanding of what happens</strong></p>
<p>I have three code samples.</p>
<ol>
<li>The one that works fine:</li>
</ol>
<pre><code>def der(f, x):
res = [(f[1] - f[0]) / (x[1] - x[0])] * 2
res.extend([(f[i] - f[i - 1]) / (x[i] - x[i - 1]) for i in range(2, len(f))])
return res
def loss(model, x, y, tape, training):
y_ = model(x)
s = 0
for xi, yi in zip(x, y_):
lq = (1 + 3 * (xi ** 2)) / (1 + xi + xi ** 3)
s += tf.reduce_sum(abs((der(yi, xi) + (xi + lq) * yi - xi ** 3 - 2 * xi - lq * xi * xi)))
return s / tf.size(x).numpy()
</code></pre>
<p>Since it works fine, I thought the problem is probably because GradientTape only watches the output layer (<code>yi</code> in this sample) but not the <code>res_matrix</code> and <code>transposed</code> from the main problem code. So I've tried to simulate the same operations in sample #1 by adding <code>yi = yi.numpy()</code> and <code>yi = tf.convert_to_tensor(yi)</code>, making it a new variable, and got code sample #2:</p>
<ol start="2">
<li>This code results in the same error as the main problem code:</li>
</ol>
<pre><code>def loss(model, x, y, tape, training)
y_ = model(x)
s = 0
for xi, yi in zip(x, y_):
yi = yi.numpy()
yi = tf.convert_to_tensor(yi)
lq = (1 + 3 * (xi ** 2)) / (1 + xi + xi ** 3)
s += tf.reduce_sum(abs((der(yi, xi) + (xi + lq) * yi - xi ** 3 - 2 * xi - lq * xi * xi)))
return s / tf.size(x).numpy()
</code></pre>
<p>So I thought that simply adding <code>tape.watch(yi)</code> would solve it, but it did not:</p>
<ol start="3">
<li>This code sample also results in the same error.</li>
</ol>
<pre><code>def loss(model, x, y, tape, training)
y_ = model(x)
s = 0
for xi, yi in zip(x, y_):
yi = yi.numpy()
yi = tf.convert_to_tensor(yi)
tape.watch(yi)
lq = (1 + 3 * (xi ** 2)) / (1 + xi + xi ** 3)
s += tf.reduce_sum(abs((der(yi, xi) + (xi + lq) * yi - xi ** 3 - 2 * xi - lq * xi * xi)))
return s / tf.size(x).numpy()
</code></pre>
<p>Since that did not fix the problem, I clearly misunderstand something about how GradientTape operates.</p>
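<p>A minimal sketch of the underlying rule, which seems to match what the three samples show: any hop through <code>.numpy()</code> (and therefore through NumPy operations such as the <code>np.reshape</code>/<code>transpose</code> calls in the loss) leaves the tape's graph, and re-wrapping the result as a tensor or watching it afterwards cannot reconnect it to the earlier computation:</p>

```python
import tensorflow as tf

x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as tape:
    tape.watch(x)
    y_ok = x * x                            # recorded on the tape
    y_broken = tf.constant(x.numpy() ** 2)  # .numpy() breaks the chain

grad_ok = tape.gradient(y_ok, x)            # d(x^2)/dx = 6.0 at x=3
grad_broken = tape.gradient(y_broken, x)    # None: the tape lost track
del tape                                    # release the persistent tape
```

<p>The fix for the original loss would be to stay in TensorFlow ops end to end, e.g. <code>tf.reshape</code>, <code>tf.transpose</code>, <code>tf.exp</code> and <code>tf.abs</code> instead of their NumPy counterparts.</p>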
|
<python><tensorflow><machine-learning><neural-network><gradienttape>
|
2023-02-20 20:20:56
| 0
| 1,435
|
Rabter
|
75,513,683
| 1,088,259
|
How to auto-consume new topics?
|
<p>I consume data from Kafka and everything works perfectly. But there is one problem: when I create a new topic, my script doesn't know about it. How can I automatically refresh the list of subscribed topics? I thought a pattern could help me, but that was a mistake.</p>
<pre><code>from kafka import KafkaConsumer
import json
consumer = KafkaConsumer(
# auto_offset_reset="earliest",
group_id='my-group',
bootstrap_servers=["localhost:9092"],
)
consumer.subscribe(pattern='^pg.*')
</code></pre>
|
<python><apache-kafka>
|
2023-02-20 20:05:28
| 1
| 355
|
user1088259
|
75,513,619
| 8,818,287
|
Is it redundant to use `__all__` and `import specific_module`?
|
<h3>Background</h3>
<p>Reading the answers for <a href="https://stackoverflow.com/questions/44834/what-does-all-mean-in-python">this question</a>, and looking at the Pandas and Django repository.</p>
<p>If <code>__all__</code> is defined then when you <code>import *</code> from your package, only the names defined in <code>__all__</code> are imported in the current namespace.</p>
<p>I immediately drew the conclusion that if I use <code>import package.specific_module</code>, then there is no benefit to defining <code>__all__</code>.</p>
<p>However, digging around some common projects like Pandas and Django, I realised that developers import specific modules <strong>and</strong> pass all of them as a list to the <code>__all__</code> variable, as in the example below:</p>
<pre class="lang-py prettyprint-override"><code>from pandas.core.groupby.generic import (
DataFrameGroupBy,
NamedAgg,
SeriesGroupBy,
)
from pandas.core.groupby.groupby import GroupBy
from pandas.core.groupby.grouper import Grouper
__all__ = [
"DataFrameGroupBy",
"NamedAgg",
"SeriesGroupBy",
"GroupBy",
"Grouper",
]
</code></pre>
<h3>Question</h3>
<p>I know that <code>pydoc</code> will ignore names that aren't in the <code>__all__</code> variable.
But besides that, what are the other benefits of importing specific modules from within <code>__init__.py</code> while also passing them as a list to the <code>__all__</code> variable? Isn't it redundant?</p>
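<p>A small self-contained experiment (my own sketch; the package and function names are invented) shows what <code>__all__</code> changes even when <code>__init__.py</code> already imports the names: without it, <code>import *</code> would also export the bound submodule name <code>mod</code>; with it, only the listed names come through:</p>

```python
import os
import sys
import tempfile

# Build a throw-away package on disk (names invented for the demo).
pkgdir = tempfile.mkdtemp()
os.makedirs(os.path.join(pkgdir, "demo_pkg"))
with open(os.path.join(pkgdir, "demo_pkg", "__init__.py"), "w") as f:
    f.write("from .mod import public\n__all__ = ['public']\n")
with open(os.path.join(pkgdir, "demo_pkg", "mod.py"), "w") as f:
    f.write("def public():\n    return 'public'\n")
sys.path.insert(0, pkgdir)

ns = {}
exec("from demo_pkg import *", ns)
star_names = sorted(k for k in ns if not k.startswith("__"))
print(star_names)  # ['public'] -- 'mod' is hidden because __all__ omits it
```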
|
<python><pandas><syntax>
|
2023-02-20 19:55:41
| 1
| 789
|
asa
|
75,513,612
| 12,435,792
|
How to add empty columns to a dataframe?
|
<p>I have a dataframe with 38 columns. I want to add 19 more blank columns at the end of it.</p>
<p>I've tried this:</p>
<pre><code>df.insert([39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57],
['Quality Control - Dupe Ticketing/Dupe Invoicing',
'Error Type',
'Passed/Not Passed',
'Agent Name',
'Team Leader',
'Responsibility(GO/Market/Capability)',
'Remediation Comments',
'Financial Exposure/Dupe Value(USD)',
'Self Identified (Yes/No)',
'Remediation Date',
'Remediation Status (Yes/No)',
'Remarks',
'CM Impact Yes/No',
'Is product cost(Air+Land) matching accounting lines & excel file -Yes/No',
'For Air - all ticket issued and matches all pax',
'For EXCH Air - passenger association correct & no dupe EXCH',
'Error - Yes/No',
'For EXCH Air - penalty check',
'Comments'],
empty_value)
</code></pre>
<p>But this gives a <code>TypeError: unhashable type: 'list'.</code></p>
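<p><code>DataFrame.insert</code> adds one column at a time, which is why passing lists raises <code>unhashable type</code>. A hedged sketch (a small dummy frame and a shortened column list stand in for the real 38-column data) that appends many empty columns at once with <code>reindex</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})         # stand-in for the 38-column frame
new_cols = ["Error Type", "Agent Name", "Comments"]   # shortened stand-in for the 19 names

# reindex keeps the existing data and appends the new columns filled with NaN
df = df.reindex(columns=[*df.columns, *new_cols])
print(df.columns.tolist())            # ['a', 'b', 'Error Type', 'Agent Name', 'Comments']
print(df["Error Type"].isna().all())  # True
```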
|
<python><pandas><dataframe>
|
2023-02-20 19:55:16
| 3
| 331
|
Soumya Pandey
|
75,513,518
| 1,497,139
|
How are fully qualified names for slots defined in LinkML?
|
<p>LinkML's slot concept allows slot names to be shared between classes, which IMHO is typical for the ontological approach to modeling. In most of my <a href="https://wiki.bitplan.com/index.php/SiDIF" rel="nofollow noreferrer">SiDIF</a>-based models I use object-oriented approaches where there are separate properties/attributes/slots per class. Therefore there needs to be a fully qualifying name for each slot that is globally unique and locally unique per class.</p>
<p>In my first attempt to assign slots to classes I didn't use fully qualifying names and ended up with the error message:</p>
<pre><code>ValueError: No such slot {'name': 'name', 'description': 'Name of the context'} as an attribute of Context ancestors or as a slot definition in the schema
</code></pre>
<p>which does not explain how to solve the problem. I am assuming a fully qualified slot name would solve this problem.</p>
<p><strong>What would solve the problem?</strong>
If my assumption is correct that fully qualified slot names solve the problem: what are the allowed / recommended ways to assign fully qualified names? How are namespaces syntactically separated, or is there no official LinkML way of doing this?</p>
<p>Should I e.g. specify <code>context.name</code> or <code>context::name</code> or the like?</p>
<p>My current code is:</p>
<pre class="lang-py prettyprint-override"><code>'''
Created on 2023-02-20
@author: wf
'''
from meta.metamodel import Context
from linkml_runtime.utils.schemaview import SchemaView
from linkml_runtime.linkml_model import SchemaDefinition, ClassDefinition, SlotDefinition
from linkml.generators.linkmlgen import LinkmlGenerator

class SiDIF2LinkML:
    """
    converter between SiDIF and LinkML
    """

    def __init__(self, context: Context):
        self.context = context

    def asYaml(self) -> str:
        """
        convert my context
        """
        # https://linkml.io/linkml-model/docs/SchemaDefinition/
        sd = SchemaDefinition(id=self.context.name, name=self.context.name)
        sv = SchemaView(sd)
        for topic in self.context.topics.values():
            cd = ClassDefinition(name=topic.name)
            cd.description = topic.documentation
            sv.add_class(cd)
            for prop in topic.properties.values():
                slot = SlotDefinition(name=prop.name)
                if hasattr(prop, "documentation"):
                    slot.description = prop.documentation
                sv.add_slot(slot)
                cd.slots.append(slot)
        lml_gen = LinkmlGenerator(schema=sd, format='yaml')
        yaml_text = lml_gen.serialize()
        return yaml_text
</code></pre>
<p>see also <a href="https://stackoverflow.com/questions/75508779/how-do-i-add-a-class-to-a-schemadefinition-in-linkml">How do i add a class to a SchemaDefinition in LinkML?</a></p>
|
<python><linkml>
|
2023-02-20 19:43:51
| 1
| 15,707
|
Wolfgang Fahl
|
75,513,420
| 11,170,350
|
Average every n consecutive rows in numpy
|
<p>I have a 3D numpy array.</p>
<pre><code>x=np.random.randint(low=0,high=10,size=(100,64,1000))
</code></pre>
<p>I want to average every 4 consecutive rows along the first axis: the first 4, then rows 4&#8211;8, 8&#8211;12, and so on.
I tried the following way:</p>
<pre><code>x =np.split(x,len(x)/4)
np.mean(np.stack(x),1)
</code></pre>
<p>I am a bit confused whether this is the correct way, or if there is a better way. Also, how do I handle the case where the first dimension is not evenly divisible by 4?
For example, I can do it this way:</p>
<pre><code>x =np.array_split(x,len(x)/4)
np.stack([np.mean(i,0) for i in x],0)
</code></pre>
<p>Thanks</p>
<p><strong>EDIT:</strong><br />
Here is my use case.
This is sensor data, where 100 is the number of times data was collected (trials), 64 is the number of sensor channels, and 1000 is the signal length. I want the sensor signal to be averaged over the first 4 trials, then the next 4 trials, and so on.</p>
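<p>The <code>split</code>/<code>stack</code> approach works when the length divides evenly; a single reshape does the same thing in one step. A sketch of this idea (here the trailing remainder is simply dropped; keeping it as a separately averaged group is what <code>array_split</code> gives):</p>

```python
import numpy as np

x = np.random.randint(0, 10, size=(102, 64, 1000))  # 102 is not divisible by 4
n = 4
m = len(x) // n * n  # largest multiple of n that fits (100 here)

# group rows into blocks of n, then average within each block
avg = x[:m].reshape(-1, n, *x.shape[1:]).mean(axis=1)
print(avg.shape)  # (25, 64, 1000)
```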
|
<python><numpy>
|
2023-02-20 19:29:20
| 2
| 2,979
|
Talha Anwar
|
75,513,149
| 11,173,364
|
How to save the output of a jupyter notebook cell (a pandas dataframe) as a high resolution picture?
|
<p>For normal plots I can set the <code>dpi</code> argument to generate a high-resolution picture.</p>
<p>But for cell outputs such as in this picture, which is the output of displaying a pandas dataframe, how can I save it as a high-resolution picture?</p>
<p><a href="https://i.sstatic.net/wxK7G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wxK7G.png" alt="enter image description here" /></a></p>
<p>The above picture is from a screen capture, but I don't really like that approach.</p>
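<p>One option (a sketch of my own, not the only way; the third-party <code>dataframe_image</code> package is another) is to render the frame as a matplotlib table and save it with a high <code>dpi</code>; the dataframe and filename here are placeholders:</p>

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [0.5, 1.5, 2.5]})  # dummy data

fig, ax = plt.subplots(figsize=(4, 2))
ax.axis("off")  # hide the empty axes, leaving only the table
table = ax.table(cellText=df.values, colLabels=df.columns, loc="center")
table.scale(1, 1.5)  # a little vertical padding in the cells
fig.savefig("table.png", dpi=300, bbox_inches="tight")
```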
|
<python><jupyter-notebook>
|
2023-02-20 18:52:50
| 1
| 769
|
user900476
|
75,513,103
| 14,167,364
|
tkinter mainloop() not ending the script once it completes
|
<p>For some reason <code>mainloop()</code> is not ending. The final <code>print</code> statement never gets triggered, but everything else does. Any idea what is causing this or how to resolve it? It happens even without the threading.</p>
<pre><code>import time
from threading import Thread
from tkinter.filedialog import askdirectory
from tkinter import *

def threading():
    t1 = Thread(target=checkProgress)
    t1.start()

def checkProgress():
    loading_window.geometry = ("500x500")
    text = "The device is being reset. This will take a minute."
    Label(loading_window, text=text, font=('times', 12)).pack()
    loading_window.update()
    time.sleep(3)
    print("The connection is complete!")

Tk().withdraw()
download_location = askdirectory(title='Find and select the download folder', mustexist=TRUE)

loading_window = Tk()
loading_window.after(200, threading())
loading_window.mainloop()
print("Finished")
</code></pre>
|
<python><tkinter><python-multithreading><mainloop>
|
2023-02-20 18:48:06
| 2
| 580
|
Justin Oberle
|