| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,399,418
| 1,848,244
|
Pandas: Multi Index hierarchical join - populating missing level values from one join side
|
<p>I have some hierarchical data. Portions of it are scattered through 2 files.</p>
<p>The first file contains all hierarchy levels, but due to circumstances outside of my control, rows for particular level <em>values</em> are sometimes omitted (the external system that produces the data omits rows where the value is 0).</p>
<p>Here is an example. Key INFO: <strong>We want to always have 3 leaf levels</strong></p>
<pre><code>node1
+ child1
+ leaf1 # NB: this child has no value for leaf2
+ leaf3
+ child2
+ leaf1
+ leaf2
+ leaf3
node2
... etc
In tabular:
level1 level2 level3 value1
0 node1 child1 leaf1 10
1 node1 child1 leaf3 30
2 node1 child2 leaf1 10
3 node1 child2 leaf3 30
4 node2 child1 leaf1 100
5 node2 child1 leaf2 200
6 node2 child1 leaf3 200
7 node2 child2 leaf1 300
8 node2 child2 leaf2 400
9 node2 child2 leaf3 500
</code></pre>
<p>The second file contains the lowest hierarchy level (the leaves of the hierarchy tree) - usefully this file always contains data for every possible leaf value.</p>
<pre><code> level3 value2
0 leaf1 1000
1 leaf2 2000
2 leaf3 3000
</code></pre>
<p>I was wondering if there was a way to join these two to ensure the resulting dataframe has ALL leaf level values in all nodes. The omitted values can be zero- or NaN-filled, like this:</p>
<pre><code> value2 value1
level1 level2 level3
node1 child1 leaf1 1000 10.0
leaf2 2000 0.0
leaf3 3000 30.0
node2 child1 leaf1 1000 100.0
leaf2 2000 200.0
leaf3 3000 200.0
node1 child2 leaf1 1000 10.0
leaf2 2000 0.0
leaf3 3000 30.0
node2 child2 leaf1 1000 300.0
leaf2 2000 400.0
leaf3 3000 500.0
</code></pre>
<p>I have some code that accomplishes this, but I cannot help thinking that there must be a MUCH simpler way - perhaps through some kind of join or concat. However, none of the variants I've tried actually works.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from io import StringIO
# NB:
# 1. The size of level1 is not known in advance. Actual
# data may have few or many values.
# 2. The level3 values are omitted when value1 is zero.
# (This is the problem I want to solve)
# 3. level2 does have a set number of values (child1, child2)
# - I'd prefer to avoid using that, but can do so.
data1 = """
level1,level2,level3,value1
node1,child1,leaf1,10
node1,child1,leaf3,30
node1,child2,leaf1,10
node1,child2,leaf3,30
node2,child1,leaf1,100
node2,child1,leaf2,200
node2,child1,leaf3,200
node2,child2,leaf1,300
node2,child2,leaf2,400
node2,child2,leaf3,500
"""
# This data ALWAYS has the full number of level2 values.
data2 = """
level3,value2
leaf1,1000
leaf2,2000
leaf3,3000
"""
df1 = pd.read_csv(StringIO(data1))
df2 = pd.read_csv(StringIO(data2))
#result_df = pd.read_csv(StringIO(desired_result), index_col=[0,1]).fillna(0.0)
print("Input data1:")
print(df1)
print("Input data2:")
print(df2)
# Here is how I got this to work.
# Shave off level1 and 2 from df1, and drop duplicates.
# that gives us only the hierarchy data for L1 and L2.
hierarchy1_df = df1[['level1', 'level2']].drop_duplicates()
print("Level1 and Level2 hierarchy:")
print(hierarchy1_df)
#Level1 and Level2 hierarchy:
# level1 level2
#0 node1 child1
#2 node1 child2
#4 node2 child1
#7 node2 child2
# I will abuse the fact that level2 has a static number of
# values. I will add level2 to df2 and duplicate
# the other column data over that new dimension.
child1_df = df2.copy()
child1_df['level2'] = 'child1'
child2_df = df2.copy()
child2_df['level2'] = 'child2'
hierarchy2_df = pd.concat([child1_df, child2_df])
print("Level2 and Level3 hierarchy")
print(hierarchy2_df)
#Level2 and Level3 hierarchy
# level3 value2 level2
#0 leaf1 1000 child1
#1 leaf2 2000 child1
#2 leaf3 3000 child1
#0 leaf1 1000 child2
#1 leaf2 2000 child2
#2 leaf3 3000 child2
# Now I will join this to hierarchy1_df. That will give us
# all of level1, level2, and level3.
hierarchy_df = pd.merge(hierarchy1_df, hierarchy2_df, on=['level2'])
hierarchy_df = hierarchy_df.set_index(['level1', 'level2', 'level3'])
print("Full hierarchy:")
print(hierarchy_df)
#Full hierarchy:
# value2
#level1 level2 level3
#node1 child1 leaf1 1000
# leaf2 2000
# leaf3 3000
#node2 child1 leaf1 1000
# leaf2 2000
# leaf3 3000
#node1 child2 leaf1 1000
# leaf2 2000
# leaf3 3000
#node2 child2 leaf1 1000
# leaf2 2000
# leaf3 3000
# And finally I join the full hierarchy back to df1 which should
# give us the desired result:
df1 = df1.set_index(['level1', 'level2', 'level3'])
df = hierarchy_df.join(df1).fillna(0.0)
print("Desired Result")
print(df)
</code></pre>
<p>I will accept the answer below, but building on that answer I found this way of accomplishing the same thing:</p>
<pre class="lang-py prettyprint-override"><code># Get the first 2 levels from df1. df1 has all the values for
# those.
parent_levels = df1.set_index(['level1','level2']).index.levels
# The final level from df2, which has all the level values
# for level3.
df2 = df2.set_index(['level3'])
# Create a hierarchical index that is the product
# of all 3 levels.
hierarchy = pd.MultiIndex.from_product(
parent_levels + [df2.index],
names=['level1', 'level2', 'level3'])
# Push the new index into df1, filling empty rows
# with 0s.
df = (df1.set_index(['level1','level2', 'level3'])
.reindex(hierarchy)
.fillna(0.0)
.join(df2)
)
# Done.
print(df)
</code></pre>
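<p>For completeness, here is the same <code>MultiIndex.from_product</code> approach condensed into a single self-contained snippet (variable names are mine, but the logic is the one above):</p>

```python
import pandas as pd
from io import StringIO

data1 = """level1,level2,level3,value1
node1,child1,leaf1,10
node1,child1,leaf3,30
node1,child2,leaf1,10
node1,child2,leaf3,30
node2,child1,leaf1,100
node2,child1,leaf2,200
node2,child1,leaf3,200
node2,child2,leaf1,300
node2,child2,leaf2,400
node2,child2,leaf3,500
"""
data2 = """level3,value2
leaf1,1000
leaf2,2000
leaf3,3000
"""
df1 = pd.read_csv(StringIO(data1))
df2 = pd.read_csv(StringIO(data2)).set_index("level3")

# the first two levels come from df1 (complete there), the third from df2
parent_levels = df1.set_index(["level1", "level2"]).index.levels
full = pd.MultiIndex.from_product(
    list(parent_levels) + [df2.index],
    names=["level1", "level2", "level3"])

# reindex fills the missing (level1, level2, level3) combinations with NaN
df = (df1.set_index(["level1", "level2", "level3"])
         .reindex(full)
         .fillna(0.0)
         .join(df2))
print(df)
```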
|
<python><pandas>
|
2023-02-09 13:38:40
| 1
| 437
|
user1848244
|
75,399,229
| 962,190
|
Testing if warnings are sent as log messages
|
<p>I have set <code>logging.captureWarnings(True)</code> in an application, and would like to test if warnings are logged correctly. I'm having difficulties understanding some of the behavior I'm seeing where tests are influencing each other in ways that I don't quite get.</p>
<p>Here is an example test suite which reproduces the behavior I'm seeing:</p>
<p><strong>test_warning_logs.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import warnings
import logging
def test_a(caplog):
logging.captureWarnings(True)
logging.basicConfig()
warnings.warn("foo")
assert "foo" in caplog.text
def test_b(caplog):
logging.captureWarnings(True)
logging.basicConfig()
warnings.warn("foo")
assert "foo" in caplog.text
</code></pre>
<p>Both tests are identical. When run in isolation (<code>pytest test_warning_logs.py -k test_a</code>, <code>pytest test_warning_logs.py -k test_b</code>), they each pass. When both of them are executed in the same run (<code>pytest test_warning_logs.py</code>), only the first one will pass:</p>
<pre><code>============== test session starts ========================
platform linux -- Python 3.10.2, pytest-7.2.1, pluggy-1.0.0
rootdir: /home/me
plugins: mock-3.10.0, dependency-0.5.1
collected 2 items
test_warning_logs.py .F [100%]
==================== FAILURES =============================
_____________________ test_b ______________________________
caplog = <_pytest.logging.LogCaptureFixture object at 0x7f8041857c40>
def test_b(caplog):
logging.captureWarnings(True)
logging.basicConfig()
warnings.warn("foo")
> assert "foo" in caplog.text
E AssertionError: assert 'foo' in ''
E + where '' = <_pytest.logging.LogCaptureFixture object at 0x7f8041857c40>.text
[...]
</code></pre>
<h2>Additional Information</h2>
<p>First I thought that the commands <code>logging.captureWarnings</code> and <code>logging.basicConfig</code> aren't idempotent, and running them more than once is the issue. But if you remove them from <code>test_b</code>, it still fails.</p>
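<p>My idempotency hunch can at least be made concrete. As far as I can tell from the CPython source, <code>logging.captureWarnings(True)</code> is a no-op when logging already believes it is capturing, so if something (for example, the test runner) restores <code>warnings.showwarning</code> behind its back, capture is silently lost. This is a sketch of my understanding of the logging module, not of pytest internals:</p>

```python
import logging
import warnings

orig = warnings.showwarning
logging.captureWarnings(True)          # installs logging's showwarning hook
assert warnings.showwarning is not orig

# simulate something restoring showwarning behind logging's back
warnings.showwarning = orig
logging.captureWarnings(True)          # no-op: logging thinks it still captures
assert warnings.showwarning is orig    # warnings no longer routed to logging

# toggling off and back on re-installs the hook
logging.captureWarnings(False)
logging.captureWarnings(True)
assert warnings.showwarning is not orig
logging.captureWarnings(False)         # clean up
```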
<p>My current assumption is that it's a pytest issue, because when the code is executed without it, both warnings are logged:</p>
<pre class="lang-py prettyprint-override"><code># add this block to the bottom of test_warning_logs.py
if __name__ == '__main__':
from unittest.mock import MagicMock
test_a(MagicMock(text="foo"))
test_b(MagicMock(text="foo"))
</code></pre>
<p><code>$ python test_warning_logs.py</code></p>
<pre><code>WARNING:py.warnings:/home/me/test_warning_logs.py:9: UserWarning: foo
warnings.warn("foo")
WARNING:py.warnings:/home/me/test_warning_logs.py:17: UserWarning: foo
warnings.warn("foo")
</code></pre>
|
<python><logging><pytest><python-logging>
|
2023-02-09 13:23:17
| 1
| 20,675
|
Arne
|
75,399,141
| 10,357,604
|
AttributeError: module 'cv2' has no attribute 'setNumThreads'
|
<p>I have installed Python 3.9 and the package opencv-contrib-python 4.6.0.66 (also on another computer, where everything works fine).</p>
<p>What I do is</p>
<pre><code>from ultralytics import YOLO
~\anaconda3\lib\site-packages\ultralytics\yolo\utils\__init__.py in <module>
...
---> 96 cv2.setNumThreads(0) # this is called by ultralytics and throws the error
</code></pre>
<blockquote>
<p>AttributeError: module 'cv2' has no attribute 'setNumThreads'</p>
</blockquote>
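<p>One generic check I would run first (this is an assumption about the cause, not a confirmed fix): see which file <code>cv2</code> actually resolves to, since a stray local <code>cv2.py</code>, or a second, broken OpenCV install shadowing the right one, is a common reason for missing attributes:</p>

```python
import importlib.util

def module_origin(name):
    """Return the file Python would import `name` from, or None if not found."""
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None) if spec else None

# e.g. module_origin("cv2") should point into site-packages/cv2/;
# anything else means a shadowing module is being imported instead.
print(module_origin("json"))  # demo with a stdlib module
```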
|
<python><opencv>
|
2023-02-09 13:15:54
| 1
| 1,355
|
thestruggleisreal
|
75,399,094
| 1,772,898
|
Distinguish between blank lines by their position (beginning, in-between or end of paragraphs)
|
<p>Please look at the following example:</p>
<blockquote>
<p>[BLANK LINE/S]</p>
<p>First Paragraph</p>
<p>[BLANK LINE Between Paragraphs]</p>
<p>Second Paragraph</p>
<p>[BLANK LINE Between Paragraphs]</p>
<p>Third Paragraph</p>
<p>[BLANK LINE/S End of Paragraphs]</p>
</blockquote>
<p>There are blank lines at the beginning, in between, and at the end of the paragraphs. I want to distinguish between them and take a separate action for each.</p>
<p>The following code only finds blank lines, but cannot tell whether a line is at the beginning, the end, or in between paragraphs.</p>
<pre><code>for line in myinfile:
if line in ['\n', '\r', '\r\n']:
pass
</code></pre>
<p>or,</p>
<pre><code>for line in myinfile:
if line.strip() == "":
pass
</code></pre>
<p>What might be the solution here?</p>
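<p>To make the requirement concrete, here is the behaviour I am after as a sketch (reading all lines first, then labelling each blank line by whether any non-blank line precedes or follows it):</p>

```python
def classify_blank_lines(lines):
    """Map each blank line's index to 'leading', 'separator' or 'trailing'."""
    non_blank = [i for i, line in enumerate(lines) if line.strip()]
    labels = {}
    for i, line in enumerate(lines):
        if line.strip():
            continue
        if not non_blank or i < non_blank[0]:
            labels[i] = "leading"      # before the first paragraph
        elif i > non_blank[-1]:
            labels[i] = "trailing"     # after the last paragraph
        else:
            labels[i] = "separator"    # between paragraphs
    return labels

lines = ["\n", "First Paragraph\n", "\n", "Second Paragraph\n", "\n"]
print(classify_blank_lines(lines))
# {0: 'leading', 2: 'separator', 4: 'trailing'}
```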
|
<python><python-3.x>
|
2023-02-09 13:12:06
| 1
| 14,440
|
Ahmad Ismail
|
75,398,970
| 519,852
|
Python Pip automatically increment version number based on SCM
|
<p>Similar questions like this were raised many times, but I was not able to find a solution for my specific problem.</p>
<p>I was playing around with <code>setuptools_scm</code> recently and first thought it is exactly what I need. I have it configured like this:</p>
<p>pyproject.toml</p>
<pre><code>[build-system]
requires = ["setuptools_scm"]
build-backend = "setuptools.build_meta"
[project]
...
dynamic = ["version"]
[tool.setuptools_scm]
write_to = "src/hello_python/_version.py"
version_scheme = "python-simplified-semver"
</code></pre>
<p>and my <code>__init__.py</code>:</p>
<pre><code>from ._version import __version__
from ._version import __version_tuple__
</code></pre>
<p>Relevant features it covers for me:</p>
<ul>
<li>I can use semantic versioning</li>
<li>it is able to use *.*.*.devN version strings</li>
<li>it increments minor version in case of <code>feature</code>-branches</li>
<li>it increments patch/micro version in case of <code>fix</code>-branches</li>
</ul>
<p>This is all cool. As long as I am on my <code>feature</code>-branch I am able to get the correct version strings.</p>
<p>What I like particularly is, that the dev version string contains the commit hash and is thus unique across multiple branches.</p>
<p>My workflow now looks like this:</p>
<ul>
<li>create <code>feature</code> or <code>fix</code> branch</li>
<li>commit, (push, ) publish</li>
<li>merge PR to <code>develop</code>-branch</li>
</ul>
<p>As soon as I am on my <code>feature</code>-branch I am able to run <code>python -m build</code>, which generates a new <code>_version.py</code> with the correct version string according to the latest git tag found. If I add new commits, that is fine, as the devN part of the version string changes due to the commit hash. I would even be able to run <code>python -m twine upload dist/*</code> now: my package is built with the correct version, so I simply publish it. This works perfectly fine locally and on CI for both <code>fix</code> and <code>feature</code> branches alike.</p>
<p>The problem I am facing now is that I need slightly different behavior for merged pull requests.
As soon as I merge, e.g. 0.0.1.dev####, I want to run my Jenkins job not on the <code>feature</code>-branch anymore, but instead on the <code>develop</code>-branch. And the important part now is, I want to</p>
<ul>
<li>get <code>develop</code>-branch (done by CI)</li>
<li>update version string to same as on branch but without devN, so: 0.0.1</li>
<li>build and publish</li>
</ul>
<p>In fact, setuptools_scm is changing the version to 0.0.2.dev### now, and I would like to have 0.0.1.
I was tinkering a bit with creating git tags before running <code>setuptools_scm</code> or <code>build</code>, but I was not able to get the correct version string to put into the tag. This is where I am stuck.</p>
<p>Is anyone aware of a solution that gives me the following?</p>
<ul>
<li>minor increment on <code>feature</code>-branches + add .devN</li>
<li>patch/micro increment on <code>fix</code>-branches + add .devN</li>
<li>no increment on <code>develop</code>-branch and version string only containing major.minor.patch of merged branch</li>
</ul>
|
<python><pip><setuptools><setuptools-scm>
|
2023-02-09 13:02:00
| 1
| 1,374
|
Matthias
|
75,398,715
| 6,630,397
|
SQLAlchemy engine keeps some idle PostgreSQL database connection opened even when using a context manager
|
<p>When using an <a href="https://www.sqlalchemy.org/" rel="nofollow noreferrer">SQLAlchemy</a> <a href="https://docs.sqlalchemy.org/en/20/core/engines.html#sqlalchemy.create_engine" rel="nofollow noreferrer">engine</a> for <a href="https://geopandas.org/" rel="nofollow noreferrer">GeoPandas</a> <a href="https://geopandas.org/en/stable/docs/reference/api/geopandas.read_postgis.html" rel="nofollow noreferrer"><code>read_postgis()</code></a> method to read data from a <a href="https://www.postgresql.org/" rel="nofollow noreferrer">PostgreSQL</a>/<a href="https://postgis.net/" rel="nofollow noreferrer">PostGIS</a> database, there are plenty of busy database slots which stay open for nothing, often showing simple <code>COMMIT</code> or <code>ROLLBACK</code> query statements.</p>
<p>Here's a <code>test.py</code> code snippet which represents this problem:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
os.environ['USE_PYGEOS'] = '0'
import geopandas as gpd
from sqlalchemy import create_engine, text
def foo():
engine = create_engine("postgresql://postgres:password@db:5432/postgres")
with engine.begin() as connection:
gdf0 = gpd.read_postgis(
sql = text("WITH cte AS (SELECT * FROM public.cities) SELECT *, ST_Buffer(geom,0.0001) FROM public.cities WHERE name ILIKE 'bog';"),
con=connection,
geom_col='geom',
crs=4326,
)
gdf1 = gpd.read_postgis(
sql = text("WITH cte AS (SELECT * FROM public.cities) SELECT *, ST_Buffer(geom,0.0002) FROM public.cities WHERE name ILIKE 'bra';"),
con=connection,
geom_col='geom',
crs=4326,
)
gdf2 = gpd.read_postgis(
sql = text("WITH cte AS (SELECT * FROM public.cities) SELECT *, ST_Buffer(geom,0.0003) FROM public.cities WHERE country ILIKE 'ven';"),
con=connection,
geom_col='geom',
crs=4326,
)
i=-1
while i < 100:
i+=1
foo()
</code></pre>
<p>As you can see, I'm using <a href="https://docs.sqlalchemy.org/en/20/core/connections.html#sqlalchemy.engine.Engine.begin" rel="nofollow noreferrer"><code>engine.begin()</code></a>, which is known as "<a href="https://docs.sqlalchemy.org/en/20/core/connections.html#sqlalchemy.engine.Engine.begin" rel="nofollow noreferrer">Connect and Begin Once</a>" in the SQLAlchemy documentation:</p>
<blockquote>
<p>A convenient shorthand form for the above “begin once” block is to use the Engine.begin() method at the level of the originating Engine object, rather than performing the two separate steps of Engine.connect() and Connection.begin(); the Engine.begin() method returns a special context manager that internally maintains both the context manager for the Connection as well as the context manager for the Transaction normally returned by the Connection.begin() method:</p>
</blockquote>
<p>For convenience, here is also a database initialization script:</p>
<pre class="lang-bash prettyprint-override"><code>psql -U postgres -d postgres -c "CREATE TABLE IF NOT EXISTS cities (
id int PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
name text,
country text,
geom geometry(Point,4326)
);
INSERT INTO public.cities (name, country, geom) VALUES
('Buenos Aires', 'Argentina', ST_SetSRID(ST_Point(-58.66, -34.58), 4326)),
('Brasilia', 'Brazil', ST_SetSRID(ST_Point(-47.91,-15.78), 4326)),
('Santiago', 'Chile', ST_SetSRID(ST_Point(-70.66, -33.45), 4326)),
('Bogota', 'Colombia', ST_SetSRID(ST_Point(-74.08, 4.60), 4326)),
('Caracas', 'Venezuela', ST_SetSRID(ST_Point(-66.86,10.48), 4326));"
</code></pre>
<p>I usually run this <a href="https://www.python.org/" rel="nofollow noreferrer">Python</a> code in multiple parallel instances as a <a href="https://docs.docker.com/reference/" rel="nofollow noreferrer">dockerized</a> application.</p>
<p>When using the following SQL snippet while the app is running, e.g. from <a href="https://www.sqlalchemy.org/" rel="nofollow noreferrer">pgAdmin4</a>, I can see plenty of <code>idle</code> connections which are not properly closed. They stay in an <code>idle</code> state as long as the app is running, showing simple <code>COMMIT</code> queries. (In the whole application, there may also be some idle <code>ROLLBACK</code> queries).</p>
<pre class="lang-sql prettyprint-override"><code>SELECT state, query, *
FROM pg_stat_activity
WHERE application_name NOT ILIKE '%pgAdmin 4%'
</code></pre>
<p>Here is the result of the previous query:</p>
<p><a href="https://i.sstatic.net/LHXXI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LHXXI.png" alt="enter image description here" /></a></p>
<p>If by any chance there are too many of those pending idle connections, I can also face the following error, certainly because there are no available "slots" left in the database for extra transactions to take place:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 145, in __init__
self._dbapi_connection = engine.raw_connection()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3269, in raw_connection
return self.pool.connect()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 452, in connect
return _ConnectionFairy._checkout(self)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 1255, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 716, in checkout
rec = pool._do_get()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 168, in _do_get
with util.safe_reraise():
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 147, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 166, in _do_get
return self._create_connection()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 393, in _create_connection
return _ConnectionRecord(self)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 678, in __init__
self.__connect()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 902, in __connect
with util.safe_reraise():
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 147, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 898, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/create.py", line 640, in connect
return dialect.connect(*cargs, **cparams)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 580, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
File "/usr/local/lib/python3.10/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: FATAL: sorry, too many clients already
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/test.py", line 42, in <module>
foo()
File "/app/test.py", line 19, in foo
with engine.begin() as connection:
File "/usr/local/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3209, in begin
with self.connect() as conn:
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3245, in connect
return self._connection_cls(self)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 147, in __init__
Connection._handle_dbapi_exception_noconnection(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2410, in _handle_dbapi_exception_noconnection
raise sqlalchemy_exception.with_traceback(exc_info[4]) from e
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 145, in __init__
self._dbapi_connection = engine.raw_connection()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3269, in raw_connection
return self.pool.connect()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 452, in connect
return _ConnectionFairy._checkout(self)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 1255, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 716, in checkout
rec = pool._do_get()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 168, in _do_get
with util.safe_reraise():
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 147, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 166, in _do_get
return self._create_connection()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 393, in _create_connection
return _ConnectionRecord(self)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 678, in __init__
self.__connect()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 902, in __connect
with util.safe_reraise():
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 147, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 898, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/create.py", line 640, in connect
return dialect.connect(*cargs, **cparams)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 580, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
File "/usr/local/lib/python3.10/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: sorry, too many clients already
(Background on this error at: https://sqlalche.me/e/20/e3q8)
</code></pre>
<p>This error regularly prevents my application from working correctly with many errors, e.g.: <code>psycopg2.OperationalError: FATAL: sorry, too many clients already</code>.</p>
<p>I conclude that the Python code above is not perfectly written, certainly because I didn't fully understand how the SQLAlchemy engine actually works.</p>
<p>Hence my question: could you explain what's wrong with this code and, most importantly, how could I properly close these queries?</p>
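<p>For context, my current workaround attempt (my own assumption, not a confirmed answer) is to either share a single engine across calls or explicitly release the pool, since each <code>create_engine()</code> call owns its own connection pool that keeps checked-in connections open. Sketched with an in-memory SQLite URL so it can run anywhere; the real app uses the <code>postgresql://</code> URL:</p>

```python
from sqlalchemy import create_engine, text
from sqlalchemy.pool import NullPool

def foo(engine):
    # reuse the engine passed in instead of creating one per call
    with engine.begin() as connection:
        return connection.execute(text("SELECT 1")).scalar()

# Option A: one engine for the whole process; its pool recycles connections
engine = create_engine("sqlite://")  # stand-in for the real postgresql:// URL
for _ in range(3):
    foo(engine)

# Option B: if a throwaway engine is unavoidable, disable pooling and dispose it
tmp = create_engine("sqlite://", poolclass=NullPool)
foo(tmp)
tmp.dispose()  # closes any connections the engine still holds
```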
<h4>Version info:</h4>
<ul>
<li>Python 3.9.7</li>
<li>geopandas==0.12.2</li>
<li>psycopg2==2.9.5</li>
<li>SQLAlchemy==2.0.1</li>
</ul>
|
<python><pandas><postgresql><sqlalchemy><geopandas>
|
2023-02-09 12:38:32
| 1
| 8,371
|
swiss_knight
|
75,398,496
| 12,968,928
|
How to find the derivative of a LHS expression with respect to a variable in the RHS of a SymPy Eq
|
<p>Suppose I have an eq1 such that</p>
<pre><code>from sympy import symbols, solve, plot, Eq, diff
a, b, X, Y, U = symbols('a b X Y U')
eq1 = Eq(U, X**a*Y**b)
</code></pre>
<p>$U=(X^a)(Y^b)$</p>
<p>but when I run <code>diff(eq1, X)</code> the derivative does not evaluate; I merely get an unevaluated dU/dX symbol.</p>
<p><a href="https://i.sstatic.net/nrIej.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nrIej.png" alt="enter image description here" /></a></p>
<p>I know I could define the expression as
<code>U = X**a * Y**b</code>
and easily compute <code>diff(U, X)</code>,
but then printing the U expression will not look nice.</p>
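<p>For reference, what I currently do as a workaround: keep the <code>Eq</code> for display but differentiate its right-hand side explicitly, since <code>eq1.rhs</code> is a plain expression:</p>

```python
from sympy import symbols, Eq, diff, simplify

a, b, X, Y, U = symbols('a b X Y U')
eq1 = Eq(U, X**a * Y**b)

# diff on the rhs evaluates, unlike diff on the Eq object itself
dUdX = diff(eq1.rhs, X)
assert simplify(dUdX - a * X**(a - 1) * Y**b) == 0
```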
|
<python><sympy>
|
2023-02-09 12:17:23
| 1
| 1,511
|
Macosso
|
75,398,474
| 6,758,739
|
How to run a Unix command that requires dynamic console input from Python, without manual intervention
|
<p><strong>Command: /opt/oracle/admin/bin/dboc configure dbfs --home /oracle/client/18.3.0/</strong></p>
<blockquote>
<p>When we run the above command on a Linux terminal, it expects 3 inputs one by one, as below:</p>
<p>Validating DBFS Client : asdjascads</p>
<p>Enter Connectstring for dbaas : akxhsakc</p>
<p>Enter password : xbvcascha</p>
</blockquote>
<p>Below is the Python code that I have developed, but it fails to wait for the expected inputs: the command just shows the prompt string and exits without waiting.</p>
<pre><code>#!/usr/bin/env python3
import subprocess
def run_dboc_command(args):
command = ['/opt/oracle/admin/bin/dboc'] + args
result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return result.stdout.decode(), result.stderr.decode()
args=['configure', 'dbfs', '--home', ' /oracle/client/18.3.0/']
stdout,stderr = run_dboc_command(args)
print ("Output:")
print (stdout)
print ("Error:")
print (stderr)
</code></pre>
<p>Also, is it possible to run the command from the Python script without any manual intervention - i.e. we send all the expected inputs programmatically so it does not prompt for anything?</p>
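<p>For the second part of the question, this is the kind of thing I had in mind, shown with a stand-in command instead of <code>dboc</code> (which I cannot reproduce here). Note that feeding answers via stdin only works if the tool reads its prompts from stdin; tools that read from the controlling terminal need something like <code>pexpect</code> instead:</p>

```python
import subprocess
import sys

def run_with_answers(command, answers):
    """Feed newline-joined answers to a prompt-driven command via stdin."""
    result = subprocess.run(
        command,
        input="\n".join(answers) + "\n",
        capture_output=True,
        text=True,
    )
    return result.stdout, result.stderr

# stand-in for dboc: a tiny script that reads two lines from stdin
out, err = run_with_answers(
    [sys.executable, "-c", "print(input()); print(input())"],
    ["first answer", "second answer"],
)
print(out)  # first answer\nsecond answer\n
```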
|
<python><subprocess>
|
2023-02-09 12:15:26
| 0
| 992
|
LearningCpp
|
75,398,291
| 3,395,802
|
Dynamically add schema to sqlalchemy func
|
<pre><code>from sqlalchemy import func
data = db.session.query(func.your_schema.your_function_name()).all()
</code></pre>
<p>I have two schemas which have the same function. In this scenario, how can I dynamically pass the schema name to func?</p>
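<p>For illustration, this is what I mean by "dynamically" - schema and function name only known at runtime as strings (a sketch using <code>getattr</code>, not tested against a real database; <code>schema_a</code>/<code>my_function</code> are made-up names):</p>

```python
from sqlalchemy import func

def schema_func(schema, name, *args):
    """Build func.<schema>.<name>(*args) from runtime strings."""
    return getattr(getattr(func, schema), name)(*args)

expr = schema_func("schema_a", "my_function")
# the generated SQL qualifies the function with the schema name
print(str(expr))
```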
|
<python><postgresql><sqlalchemy>
|
2023-02-09 11:59:55
| 0
| 1,105
|
Nithin
|
75,398,224
| 6,849,045
|
How do I get the indexwise average of an array column with PySpark
|
<p>I have a df with a column <code>fftAbs</code> (absolute values acquired after an FFT). The type of <code>df['fftAbs']</code> is an <code>ArrayType(DoubleType())</code>. I want to get the indexwise average of all the values.
So if the column holds</p>
<pre><code>// Random data
||fftAbs ||
------------
|[0, 1, 2] |
|[2, 3, 12]|
|[1, 8, 4] |
</code></pre>
<p>I want to acquire a list like <code>[1, 4, 6]</code> (because (0+2+1)/3 = 1, (1+3+8)/3 = 4, (2+12+4)/3 = 6).</p>
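<p>(In plain NumPy terms, outside of Spark, what I am after is simply the column-wise mean:)</p>

```python
import numpy as np

rows = [[0, 1, 2], [2, 3, 12], [1, 8, 4]]
# axis=0 averages across rows, i.e. per index position
print(np.mean(rows, axis=0).tolist())  # [1.0, 4.0, 6.0]
```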
<p>I've tried using:</p>
<pre class="lang-py prettyprint-override"><code>import pyspark.sql.functions as F
avgDf = df.select(F.avg('fftAbs'))
</code></pre>
<p>but I'll get <code>AnalysisException: cannot resolve 'avg(fftAbs)' due to data type mismatch: function average requires numeric or interval types, not array<double>;</code></p>
<p>EDIT:
I also tried</p>
<pre><code>def _index_avg(twoDL):
return np.mean(twoDL)
spark_index_avg = F.udf(_index_avg, T.ArrayType(T.DoubleType(), False))
avgDf = df.agg(spark_index_avg(F.collect_list('fftAbs')))
</code></pre>
<p>but then I got <code>net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for numpy.dtype). This happens when an unsupported/unregistered class is being unpickled that requires construction arguments. Fix it by registering a custom IObjectConstructor for this class.</code></p>
<p>Just for reference, my complete code is here (except from the first query):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pyspark as ps
import pyspark.sql.functions as F
import pyspark.sql.types as T
from pyspark.sql.functions import col
def _rfft(x):
transformed = np.fft.rfft(x)
return map(lambda c: (c.real, c.imag),transformed.tolist())
spark_complexType = T.ArrayType(T.StructType([
T.StructField("real", T.DoubleType(), False),
T.StructField("imag", T.DoubleType(), False),
]), False)
spark_rfft = F.udf(_rfft, spark_complexType)
def _fft_bins(size, periodMicroSeconds):
return np.fft.rfftfreq(size, d=(periodMicroSeconds/10**6)).tolist()
spark_rfft_bins = F.udf(_fft_bins, T.ArrayType(T.DoubleType(), False))
def _abs_complex(complex_tuple_list):
return list([abs(complex(real, imag)) for real, imag in complex_tuple_list])
spark_abs_complex = F.udf(_abs_complex, T.ArrayType(T.DoubleType()))
# df incoming from builder
fftDf = df.withColumn('fft', spark_rfft(col('data'))) \
.withColumn('fftFreq', spark_rfft_bins('dataDim', 'samplePeriod')) \
.withColumn('fftAbs', spark_abs_complex('fft'))
avgDf = fftDf.select(F.avg('fftAbs'))
</code></pre>
|
<python><arrays><pandas><numpy><pyspark>
|
2023-02-09 11:54:24
| 2
| 1,072
|
Typhaon
|
75,397,986
| 17,487,457
|
How to generate an expanded list from two short lists
|
<p>I have two lists, one for attributes the other for statistics:</p>
<pre class="lang-py prettyprint-override"><code>l1 = ['speed', 'accel']
l2 = ['min', 'max', 'mean', 'std']
</code></pre>
<p>I want to generate this expanded list of attributes' statistics:</p>
<pre class="lang-py prettyprint-override"><code>l3 = ['speed_min', 'speed_max', 'speed_mean', 'speed_std', 'accel_min', 'accel_max', 'accel_mean', 'accel_std']
</code></pre>
<p>In that order.</p>
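<p>Restated: the order I want matches a nested iteration, attribute-major then statistic, which a comprehension expresses directly:</p>

```python
l1 = ['speed', 'accel']
l2 = ['min', 'max', 'mean', 'std']

# one entry per (attribute, statistic) pair, attributes varying slowest
l3 = [f"{attr}_{stat}" for attr in l1 for stat in l2]
print(l3)
# ['speed_min', 'speed_max', 'speed_mean', 'speed_std',
#  'accel_min', 'accel_max', 'accel_mean', 'accel_std']
```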
|
<python><python-3.x><list>
|
2023-02-09 11:32:02
| 1
| 305
|
Amina Umar
|
75,397,940
| 2,755,116
|
Debugging mako NameError("Undefined")
|
<p>Is there any way of obtaining more information about what happens when <code>NameError("Undefined")</code> is raised in Mako templates?</p>
<p>In <a href="https://pythonhosted.org/Flask-Mako/#error-handling" rel="nofollow noreferrer">Flask-Mako</a> there is a solution, but I'm not using Flask, so I need a <em>pure</em> Mako solution.</p>
|
<python><debugging><mako>
|
2023-02-09 11:27:45
| 1
| 1,607
|
somenxavier
|
75,397,861
| 10,811,647
|
Python dash return several values inside for loop
|
<p>For my dash app, in order to update some graphs dynamically, I have to use a function that I named <code>update_graphs</code> inside a for loop. Some of the graphs contain several traces while some others only have one. The <code>update_graphs</code> function is called inside a callback and returns a <code>dict</code> and an <code>int</code> to update the <code>extendData</code> property of the <code>graph</code> object. However, since I am using a <code>return</code> statement inside a for loop, I only get the first trace.</p>
<p>I am not familiar with the generators and the <code>yield</code> keyword, maybe this is an option. But I haven't been able to make it work.</p>
<p>I have also tried to store the results of the <code>update_graphs</code> inside a list but it is not working.</p>
<p>Any help is appreciated!</p>
<p>Here is the code for the app:</p>
<pre><code>import dash
from dash.dependencies import Output, Input, State, MATCH, ALL
from dash import dcc, html, ctx
import plotly
import plotly.express as px
import random
import plotly.graph_objs as go
import pandas as pd
# Initializing the data with the correct format
init_store = {}
n=3
init_df = pd.DataFrame({'a':pd.Series(dtype='int'), 'b':pd.Series(dtype='int'), 'c':pd.Series(dtype='int'), 'd':pd.Series(dtype='int')}, index=range(50))
init_df['a'] = init_df.index
init_store['0'] = init_df
for i in range(n):
init_df = pd.DataFrame({'a':pd.Series(dtype='int'), 'b':pd.Series(dtype='int')}, index=range(50))
init_df['a'] = init_df.index
init_store[f'{i+1}'] = init_df
# Function to update the dataframes with the new observations
def get_data(json_data):
df = pd.read_json(json_data)
compteur = df['a'][len(df['a'])-1]
if len(df.columns) > 2:
new_row = {'a':compteur + 1, 'b':random.randint(13,26), 'c':random.randint(13,26), 'd':random.randint(13,26)}
else:
new_row = {'a':compteur + 1, 'b':random.randint(13,26)}
df = df.shift(periods=-1)
df.iloc[len(df)-1] = new_row
return(df.to_json())
# Function to update the graphs based on the dataframes
def update_graphs(json_data, column, index=0):
df = pd.read_json(json_data)
nb_obs = df.shape[0]
x_new = df['a'][len(df)-1]
y_new = df[column][nb_obs-1]
return dict(x=[[x_new]], y=[[y_new]]), index
colors = px.colors.qualitative.G10
def generate_graph_containers(index, json_data):
dataframe = pd.read_json(json_data)
X = dataframe['a']
Y = dataframe.loc[:, dataframe.columns != 'a']
graph_id = {'type': 'graph-', 'index': index}
return(
html.Div(
html.Div(
dcc.Graph(
id=graph_id,
style={"height": "8rem"},
config={
"staticPlot": False,
"editable": False,
"displayModeBar": False,
},
figure=go.Figure(
{
"data": [
{
"x": list(X),
"y": list(Y[Y.columns[i]]),
"mode": "lines",
"name": Y.columns[i],
"line": {"color": colors[i+2]},
}
for i in range(len(Y.columns))
],
"layout": {
"uirevision": True,
"margin": dict(l=0, r=0, t=4, b=4, pad=0),
"xaxis": dict(
showline=False,
showgrid=False,
zeroline=False,
showticklabels=False,
),
"yaxis": dict(
showline=False,
showgrid=False,
zeroline=False,
showticklabels=False,
),
"paper_bgcolor": "rgba(0,0,0,0)",
"plot_bgcolor": "rgba(0,0,0,0)",
}
}
)
)
)
)
)
app = dash.Dash(__name__)
store = [dcc.Store(id={'type':'store-', 'index':i}, data=init_store[str(i)].to_json()) for i in range(n)]
def make_layout():
return(
html.Div(
[
html.Div(
store
),
dcc.Interval(
id = 'interval',
interval = 1000,
n_intervals = 0
),
html.Div(
[
generate_graph_containers(str(i), store[i].data) for i in range(n)
]
)
]
)
)
app.layout = make_layout
@app.callback(
Output(component_id={'type':'store-', 'index':MATCH}, component_property='data'),
[
Input('interval', 'n_intervals'),
State(component_id={'type':'store-', 'index':MATCH}, component_property='data')
]
)
def update_data(time, data):
return(get_data(data))
@app.callback(
Output(component_id={'type':'graph-', 'index':MATCH}, component_property='extendData'),
Input(component_id={'type':'store-', 'index':MATCH}, component_property="data")
)
def update_graphs_callback(data):
triggered_id = ctx.triggered_id
print(triggered_id['index'])
columns = ['b', 'c', 'd']
if triggered_id['index'] == 0:
for i in range(len(columns)):
return(update_graphs(data, columns[i], i))
else:
return(update_graphs(data, 'b'))
if __name__ == '__main__':
app.run_server(debug=True)
</code></pre>
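<p>As a side note, the core behaviour described above (a <code>return</code> inside a <code>for</code> loop ends the function on the first iteration) can be reduced to a minimal, Dash-free sketch:</p>

```python
def first_only(columns):
    for col in columns:
        return col  # returns on the first iteration; the loop never continues


def all_results(columns):
    results = []            # collect every iteration's result...
    for col in columns:
        results.append(col.upper())
    return results          # ...and return them together at the end

print(first_only(['b', 'c', 'd']))   # only 'b'
print(all_results(['b', 'c', 'd']))  # ['B', 'C', 'D']
```

<p>Whether a single callback can return a list of per-trace updates in this app depends on the <code>extendData</code> contract, so the sketch only illustrates the control-flow issue, not the Dash-specific fix.</p>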
|
<python><for-loop><callback><return><plotly-dash>
|
2023-02-09 11:20:19
| 1
| 397
|
The Governor
|
75,397,763
| 3,386,779
|
keyword "sleep" not working in robotframework
|
<p>I'm using robot framework with selenium2Library .While using sleep 30s getting the below error as</p>
<pre><code>No keyword with name 'Sleep 30s' found. Did you mean:
BuiltIn.Sleep
</code></pre>
<p>Is there any dependent library that I missed including?</p>
|
<python><robotframework>
|
2023-02-09 11:10:28
| 1
| 7,263
|
user3386779
|
75,397,736
| 4,451,521
|
Poetry install on an existing project Error "does not contain any element"
|
<p>I am using Poetry for the first time.
I have a very simple project, basically:</p>
<pre><code>a_project
|
|--test
| |---test_something.py
|
|-script_to_test.py
</code></pre>
<p>From a project I do <code>poetry init</code> and then <code>poetry install</code></p>
<p>I get the following</p>
<pre><code> poetry install
Updating dependencies
Resolving dependencies... (0.5s)
Writing lock file
Package operations: 7 installs, 0 updates, 0 removals
• Installing attrs (22.2.0)
• Installing exceptiongroup (1.1.0)
• Installing iniconfig (2.0.0)
• Installing packaging (23.0)
• Installing pluggy (1.0.0)
• Installing tomli (2.0.1)
• Installing pytest (7.2.1)
/home/me/MyStudy/2023/pyenv_practice/dos/a_project/a_project does not contain any element
</code></pre>
<p>After this I can run <code>poetry run pytest</code> without problems, but what does that error message mean?</p>
|
<python><python-poetry>
|
2023-02-09 11:08:22
| 5
| 10,576
|
KansaiRobot
|
75,397,536
| 19,556,055
|
Filter DataFrame where a set of values are the same in another DataFrame
|
<p>I have a dataset with some employee information, and I would like to see if certain records appear in another DataFrame. However, there may be duplicate IDs (I know...), so I wanted to filter where the ID AND date of birth are the same. I tried doing it with a merge, but then all the columns get added, which I don't want. How should I go about this?</p>
<p>Example data:</p>
<pre><code>df1 = pd.DataFrame({"ID": [1, 2, 3, 4, 5], "DOB": ["1987-12-03", "1993-04-05", "2000-01-24", "1995-05-18", "1974-10-10"], "JOB": [6, 7, 8, 9, 10]})
df2 = pd.DataFrame({"ID": [1, 1, 2, 3, 3, 4, 4], "DOB": ["1987-12-03", "1999-06-16", "1993-04-05", "2000-01-24", "1968-11-13", "1995-05-18", "1988-12-12"], "JOB": [6, 11, 7, 8, 12, 9, 13]})
</code></pre>
<p>I want the output to be:</p>
<pre><code> ID DOB JOB
0 1 1987-12-03 6
1 2 1993-04-05 7
2 3 2000-01-24 8
3 4 1995-05-18 9
</code></pre>
<p>Since those are the values that are in both DataFrames based on ID AND DOB.</p>
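<p>One hedged sketch of the merge-based approach: restricting the right-hand side to just the key columns makes the inner join act as a filter, so no extra columns are added:</p>

```python
import pandas as pd

df1 = pd.DataFrame({"ID": [1, 2, 3, 4, 5],
                    "DOB": ["1987-12-03", "1993-04-05", "2000-01-24", "1995-05-18", "1974-10-10"],
                    "JOB": [6, 7, 8, 9, 10]})
df2 = pd.DataFrame({"ID": [1, 1, 2, 3, 3, 4, 4],
                    "DOB": ["1987-12-03", "1999-06-16", "1993-04-05", "2000-01-24", "1968-11-13", "1995-05-18", "1988-12-12"],
                    "JOB": [6, 11, 7, 8, 12, 9, 13]})

# keep only the key columns on the right; drop_duplicates guards against
# duplicated (ID, DOB) pairs in df2 multiplying rows in the result
out = df1.merge(df2[["ID", "DOB"]].drop_duplicates(), on=["ID", "DOB"], how="inner")
print(out)
```

<p>An alternative without a merge is an indicator-style membership test, e.g. <code>df1[df1.set_index(["ID", "DOB"]).index.isin(df2.set_index(["ID", "DOB"]).index)]</code>.</p>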
|
<python><pandas><dataframe>
|
2023-02-09 10:49:55
| 1
| 338
|
MKJ
|
75,397,487
| 782,564
|
Prevent ListBoxItems from being resized (specifically shrunk)
|
<p>I have a Gtk Listbox to which I'm adding a large number of items (basically text labels). I've put the ListBox inside a ScrolledWindow but if I add too many items then the height of each item is reduced until the text on the label is no longer readable.</p>
<p>How can I prevent the ListBox items from being reduced in height as I add more of them?</p>
<p>The code I'm using to create the ListBox and add the items looks like this:</p>
<pre><code># Add the listbox
self.test_list_window = Gtk.ScrolledWindow();
self.test_list = Gtk.ListBox()
self.test_list.connect("row_activated", some_method)
self.test_list_window.add(self.test_list)
</code></pre>
<p>The items are added with this method (each ListBox item has an LHS and an RHS label). I thought that <code>set_size_request</code> would give the ListBox entries a minimum size, but it does not appear to do so (also, setting a specific height in pixels feels like the wrong answer; I just want to prevent the rows from shrinking).</p>
<pre><code>def add_list_box_entry(self, lhs, rhs, lbox, set_min_size=False):
box = Gtk.Box()
if set_min_size:
box.set_size_request(10, 10)
box.pack_start(Gtk.Label(label=lhs, xalign=0), True, True, 1)
lab = Gtk.Label(label=f'({rhs})')
lab.set_halign(0.95)
box.pack_start(lab,False, True, 5)
lbox.add(box)
</code></pre>
|
<python><gtk3><pygtk>
|
2023-02-09 10:46:03
| 1
| 1,618
|
Carcophan
|
75,397,439
| 694,360
|
Deriving Heron's formula with Sympy using Groebner basis
|
<p>I'm trying to implement with Sympy the procedure described <a href="https://www.andrew.cmu.edu/course/15-355/lectures/lecture12.pdf" rel="nofollow noreferrer">in this lecture</a>, where Groebner basis is used to derive <a href="https://en.wikipedia.org/wiki/Heron%27s_formula" rel="nofollow noreferrer">Heron's formula</a>, but I'm not able to implement the last step, the computation of the correct Groebner basis, surely because of my lack of understanding of the subject. Here is the code I wrote:</p>
<pre><code>import sympy as sy
sy.init_printing(use_unicode=True, wrap_line=False)
def cross(a,b): # 2D cross product
return a[0]*b[1]-a[1]*b[0]
n = 3 # number of vertices of triangle
# defining 3n+1 symbols
P = sy.Matrix(sy.MatrixSymbol('P', n, 2)) # matrix of coordinates of vertices
s = sy.symbols(f's0:{n}') # lengths of polygon's sides
A,R,cx,cy = sy.symbols('A R cx cy') # area, radius, center coordinates
C = sy.Matrix([[cx,cy]])
P[0,0] = 0
P[0,1] = 0
P[1,1] = 0
# defining 2n+1 equations
eq_area = 2*A - sum(map(cross,*zip(*((P[i,:],P[j,:]) for i,j in zip(range(n),list(range(1,n))+[0]))))) # area of polygon
eqs_vonc = [R**2-((C-P.row(r))*sy.transpose(C-P.row(r)))[0] for r in range(P.rows)] # vertices on circumference
eqs_sides = [s[i]**2-((P[i,:]-P[j,:])*sy.transpose(P[i,:]-P[j,:]))[0] for i,j in zip(range(n),list(range(1,n))+[0])] # side lengths
eqs = [eq_area]+eqs_sides+eqs_vonc
# compute Groebner basis
G = sy.groebner(eqs,A,R,*s,*C) # just tried
</code></pre>
<p>How should the last step be implemented in order to obtain the Heron's formula?</p>
|
<python><sympy><computational-geometry><groebner-basis>
|
2023-02-09 10:42:34
| 1
| 5,750
|
mmj
|
75,397,364
| 11,692,632
|
Google colab not detecting changes in .py files
|
<p>I'm importing a .py file in Google Colab. I do as usual, mounting Drive, inserting the directory and importing the file:</p>
<pre><code>from google.colab import drive
drive.mount('/content/drive', force_remount=True)
import sys
sys.path.insert(0, '/content/drive/My Drive/Colab Notebooks/working_folder/')
import my_file
my_file.my_function()
</code></pre>
<p>Everything works fine, but when I edit the .py file by double clicking on the files section in Google Colab, the changes are not detected, I have to restart the environment. Is that normal? How can I do to detect the changes?</p>
|
<python><google-colaboratory>
|
2023-02-09 10:36:39
| 1
| 773
|
chococroqueta
|
75,397,168
| 8,618,242
|
ROS setBool service always False
|
<p>I'm using a ROS <strong>service server</strong> in <code>python</code>:</p>
<pre class="lang-py prettyprint-override"><code>#! /usr/bin/env python3
import rospy
from std_srvs.srv import SetBool, SetBoolResponse
class StateController:
def __init__(self):
rospy.init_node("state_controller", anonymous=True)
rospy.Service("validate_vision_info", SetBool, self.validVision)
rospy.spin()
def validVision(self, req):
model_valid = req.data
rospy.logwarn("Vision Model Validation: {}".format(req.data))
return SetBoolResponse(success=True, message="")
if __name__ == "__main__":
try:
controller_ = StateController()
except rospy.ROSInterruptException:
logging.error("Error in the State Controller")
</code></pre>
<p>And I'm calling that service from a <code>javaScript</code> <strong>server client</strong> as follows:</p>
<pre class="lang-js prettyprint-override"><code>import { Ros, Service, ServiceRequest } from "roslib";
class ROSInterface {
constructor() {
this.ros = new Ros({
url: "ws://localhost:9090",
});
}
createService = (service_name, service_type) => {
let service = new Service({
ros: this.ros,
name: service_name,
serviceType: service_type,
});
return service;
};
requestService = (params) => {
let request = new ServiceRequest(params);
return request;
};
callService_ = (service_name, service_type, params) => {
return new Promise((resolve, reject) => {
const srv = this.createService(service_name, service_type);
const req = this.requestService(params);
srv.callService(
req,
(result) => {
console.log(result);
resolve(result);
},
(error) => {
console.log(error);
reject(error);
}
);
});
};
}
async function validate(val) {
const service_name = "/validate_vision_info";
const service_type = "std_srvs/SetBool";
console.log(val);
const iface_ = new ROSInterface();
let serv = await iface_
.callService_(service_name, service_type, val)
.then((result) => {
return result;
}).catch(e => {
console.log(e);
});
console.log("service result", serv);
}
validate(true);
validate(false);
</code></pre>
<p>In both service calls I get <code>False</code> for the service request on the Python side.</p>
<p>Can you please tell me how to solve it? Thanks in advance.</p>
|
<javascript><python><service><ros>
|
2023-02-09 10:19:21
| 1
| 4,115
|
Bilal
|
75,396,938
| 5,213,451
|
What makes Rich display wrong characters on Powershell 7?
|
<p>I've been a happy user of <a href="https://github.com/Textualize/rich" rel="nofollow noreferrer">Rich</a> for Python for some time now, but recently started having issues where colors and special characters are not displayed properly anymore. Here is an example output of printing a <code>Table</code> in <em>Powershell 7 (x64)</em>.</p>
<pre><code>ÔöÅÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔö│ÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöô
Ôöâ Character Ôöâ Binary Ôöâ
ÔöíÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔòçÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔöüÔö®
Ôöé A Ôöé 0b1000001 Ôöé
Ôöé B Ôöé 0b1000010 Ôöé
Ôöé C Ôöé 0b1000011 Ôöé
Ôöé D Ôöé 0b1000100 Ôöé
Ôöé E Ôöé 0b1000101 Ôöé
ÔööÔöÇÔöÇÔöÇÔöÇÔöÇÔöÇÔöÇÔöÇÔöÇÔöÇÔöÇÔö┤ÔöÇÔöÇÔöÇÔöÇÔöÇÔöÇÔöÇÔöÇÔöÇÔöÇÔöÇÔöÿ
</code></pre>
<p>I did not change my Python or Rich version, and testing with older versions seems to yield the same result. So I expect the issue to be related to some Windows/Powershell configuration issues.</p>
<p>My problem is that I don't know what to look for, hence my question here.
I suspected a misconfiguration for displaying Unicode characters like in <a href="https://stackoverflow.com/questions/2105022/unicode-in-powershell-with-python-alternative-shells-in-windows">this thread</a>, but it doesn't seem to be the case:</p>
<pre><code>PS C:\Users\Thrastylon> [char]0x3a9
Ω
</code></pre>
<p>Any pointers or explanation of what could be happening would be appreciated.</p>
<hr />
<p>For reproducibility, here is the code I used to generate the table above:</p>
<pre class="lang-py prettyprint-override"><code>from rich.console import Console
from rich.table import Table
def make_table(n: int):
table = Table("Character", "Binary")
start_ord = ord("A")
for o in range(start_ord, start_ord + n):
table.add_row(chr(o), bin(o))
return table
if __name__ == "__main__":
console = Console()
table = make_table(n=5)
console.print(table)
</code></pre>
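<p>Not a full answer, but one way to narrow this down: the garbled output above is exactly what UTF-8 box-drawing bytes look like when decoded with a legacy OEM code page (cp850 on many Western-European Windows consoles). A small check that reproduces the pattern:</p>

```python
import sys

# U+250F U+2501 (the table's corner and edge) encoded as UTF-8 and then
# mis-decoded as cp850 reproduce the exact garbling shown above
mojibake = "┏━".encode("utf-8").decode("cp850")
print(mojibake)             # ÔöÅÔöü — matches the broken table output

print(sys.stdout.encoding)  # worth comparing in the affected PowerShell session
```

<p>If that matches, the terminal is interpreting Python's UTF-8 output with a legacy code page; switching the console to UTF-8 (e.g. <code>chcp 65001</code>, or Python's UTF-8 mode via <code>PYTHONUTF8=1</code>) is the usual direction to investigate, though which knob applies to this setup is an assumption.</p>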
|
<python><powershell><encoding><rich>
|
2023-02-09 09:58:28
| 0
| 1,000
|
Thrastylon
|
75,396,863
| 7,290,602
|
How to force assumption of past dates when year is ambiguous
|
<p>I have dates in DD/MM/YY format, where the year is ambiguous. These are recorded dates, so they will always be in the past.</p>
<p>At the time of writing, a date for '69 is correctly interpreted as 1969:</p>
<pre><code>>>> datetime.datetime.strptime('01/06/69','%d/%m/%y')
datetime.datetime(1969, 6, 1, 0, 0)
</code></pre>
<p>However, dates in '68 are being interpreted as 2068:</p>
<pre><code>>>> datetime.datetime.strptime('01/06/68','%d/%m/%y')
datetime.datetime(2068, 6, 1, 0, 0)
</code></pre>
<p>Is there an option in <code>datetime</code> to force assumption that dates are in the past?</p>
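<p>There is no built-in switch for this (the <code>%y</code> pivot is fixed), but a small wrapper that steps back a century whenever the parsed date lands in the future is one option. A hedged sketch (<code>parse_past</code> is a made-up name, and <code>replace(year=...)</code> would raise for Feb 29 mapped to a non-leap year):</p>

```python
from datetime import datetime

def parse_past(text, fmt="%d/%m/%y", today=None):
    """Parse a two-digit-year date, forcing the result into the past."""
    dt = datetime.strptime(text, fmt)
    today = today or datetime.now()
    if dt > today:
        # strptime chose the wrong century; step back 100 years
        dt = dt.replace(year=dt.year - 100)
    return dt

print(parse_past("01/06/69", today=datetime(2023, 2, 9)))  # 1969-06-01 00:00:00
print(parse_past("01/06/68", today=datetime(2023, 2, 9)))  # 1968-06-01 00:00:00
```

<p>The <code>today</code> parameter is only there to make the behaviour reproducible; in real use it defaults to the current time.</p>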
|
<python><datetime>
|
2023-02-09 09:52:10
| 3
| 941
|
otocan
|
75,396,692
| 11,710,304
|
How can I strip string columns in a df with mixed datatype in polars?
|
<p>I want to strip a dataframe based on its data type per column. If it is a string column, a strip should be executed. If it is not a string column, it should not be stripped. In pandas there is the following approach for this task:</p>
<pre><code>df_clean = df_select.copy()
for col in df_select.columns:
if df_select[col].dtype == 'object':
df_clean[col] = df_select[col].str.strip()
</code></pre>
<p>How can this be executed in polars?</p>
<pre><code>import polars as pl
df = pl.DataFrame(
{
"ID": [1, 1, 1, 1,],
"A": ["foo ", "ham", "spam ", "egg",],
"L": ["A54", " A12", "B84", " C12"],
}
)
</code></pre>
|
<python><python-polars>
|
2023-02-09 09:37:01
| 1
| 437
|
Horseman
|
75,396,615
| 5,556,466
|
ModuleNotFoundError: No module named 'aws_cdk'
|
<p>I am trying to use <code>aws_cdk</code> in my project.
I am using Visual Studio Code as an IDE, and I set up my <code>virtualenv</code> by doing this:</p>
<ol>
<li>Install <a href="https://nodejs.org/en/" rel="nofollow noreferrer">https://nodejs.org/en/</a>.</li>
<li>Through npm, install cdk from cmd window: <code>npm install -g aws-cdk</code></li>
<li>Install python3</li>
<li>Using terminal in Visual Studio Code: go to the project folder</li>
<li>Run <code>python -m venv .venv</code> to create the virtual env.</li>
<li>Activate it: <code>source .venv/bin/activate</code></li>
<li>Run pip install: <code>pip install -r requirements.txt</code></li>
<li>Run another pip install: <code>pip install -r requirements-dev.txt</code>.</li>
<li>Check with <code>cdk diff</code> in the cmd</li>
</ol>
<p>This last step returns:</p>
<pre><code>Traceback (most recent call last):
File "C:......\app.py", line 4, in <module>
import aws_cdk as cdk
ModuleNotFoundError: No module named 'aws_cdk'
</code></pre>
<p>My <code>App.py</code> looks like this:</p>
<pre><code>#!/usr/bin/env python3
import os
import aws_cdk as cdk
import module_config
from myproject.myproject_stack import MyStack
app = cdk.App()
tags = module_config.with_tags(service_name="my_project")
prefix = module_config.with_prefix(service_name="my_project")
MyStack(
app,
construct_id=prefix,
prefix=prefix,
tags=tags,
env=module_config.env,
)
app.synth()
</code></pre>
<p>My <code>requirements.txt</code> looks like:</p>
<pre><code>aws-cdk-lib==2.56.1
constructs>=10.0.0,<11.0.0
</code></pre>
<p>The <code>requirement-dev.txt</code> is:</p>
<pre><code>bandit>=1.7.4
black>=22.10.0
coverage>=6.4.4
pylint==2.15.5
pytest
pytest-cov
yamllint
pre-commit
</code></pre>
<p>If I do <code>ctrl+mouse click</code> in Visual Studio Code, it opens the <code>aws-cdk</code> code, therefore I know it is installed, but it seems like the virtual env is not able to find it.
In my local repository, the folder <code>.venv</code> has a folder <code>Lib\site-packages</code>, and this one has another called <code>aws_cdk</code>.</p>
<p>Everything looks correct to me, but running <code>cdk diff</code> breaks.</p>
|
<python><aws-cdk>
|
2023-02-09 09:30:57
| 3
| 3,243
|
mrc
|
75,396,343
| 4,451,521
|
Poetry does not install pytest neither generates basic test
|
<p>I am following <a href="https://realpython.com/dependency-management-python-poetry/" rel="nofollow noreferrer">this tutorial on Poetry</a></p>
<p>I do as written</p>
<pre><code>poetry new rp-poetry
</code></pre>
<p>However, when I inspect the folder structure, I notice that the file <code>test_rp_poetry.py</code> is not created.</p>
<p>Also the <code>__init__.py</code> is empty and does not contain the version</p>
<p>and looking at the <code>pyproject.toml</code> file I notice that there is no</p>
<pre><code>[tool.poetry.dev-dependencies]
pytest = "^5.2"
</code></pre>
<p>Has Poetry changed lately? Does the latest version require that pytest be installed manually?</p>
|
<python><pytest><python-poetry>
|
2023-02-09 09:05:31
| 1
| 10,576
|
KansaiRobot
|
75,396,255
| 235,921
|
AWS Lambda to get file from ZIP on S3 without download
|
<p>I have a zip file on S3 and I need to get a single file from it.
I would like to avoid downloading the whole ZIP; I don't know what size it may have, and I only need a small file from it.
I want to do this with a Lambda function (it can be Node.js or Python).</p>
|
<python><node.js><amazon-web-services><amazon-s3><aws-lambda>
|
2023-02-09 08:57:36
| 0
| 2,978
|
CC.
|
75,396,216
| 4,906,156
|
Problem building a ANN Regressor model with Autoencoder in Tensorflow 2.11
|
<p>My input is a 2D numpy array of dimensions (364660, 5052). The target is (364660, 1), a regression variable. I am trying to build a guided autoencoder + ANN regressor, where the encoded layer of the autoencoder serves as input to the ANN regressor. I would like to train both models in one go. However, the loss for the autoencoder should be a combined autoencoder loss + ANN loss, whereas the ANN loss remains the same. Here is my sample code:</p>
<pre><code>class AutoencoderRegressor(tf.keras.Model):
def __init__(self, encoder_layers, decoder_layers, regressor_layers, autoencoder_loss_weights):
super(AutoencoderRegressor, self).__init__()
self.autoencoder = tf.keras.models.Sequential(encoder_layers + decoder_layers)
self.regressor = tf.keras.models.Sequential(regressor_layers)
self.autoencoder_loss_weights = autoencoder_loss_weights
def call(self, inputs, training=None, mask=None):
autoencoder_output = self.autoencoder(inputs)
regressor_input = self.autoencoder.get_layer(index=2).output
regressor_output = self.regressor(regressor_input)
return autoencoder_output, regressor_output
def autoencoder_loss(self, autoencoder_output, inputs):
binary_crossentropy = tf.keras.losses.BinaryCrossentropy()
mean_squared_error = tf.keras.losses.MeanSquaredError()
autoencoder_reconstruction_loss = binary_crossentropy(inputs, autoencoder_output)
autoencoder_regression_loss = mean_squared_error(inputs, autoencoder_output)
#autoencoder_loss = self.autoencoder_loss_weights[0] * autoencoder_reconstruction_loss + self.autoencoder_loss_weights[1] * autoencoder_regression_loss
autoencoder_loss = autoencoder_reconstruction_loss+autoencoder_regression_loss
return autoencoder_loss
def regressor_loss(self, regressor_output, targets):
mean_squared_error = tf.keras.losses.MeanSquaredError()
regressor_loss = mean_squared_error(targets, regressor_output)
return regressor_loss
# define the encoder layers
encoder_layers = [
tf.keras.layers.Dense(64, activation='relu', input_shape=(reduced_x_train2.shape[1],)),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(16, activation='relu')]
# define the decoder layers
decoder_layers = [
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(reduced_x_train2.shape[1], activation='sigmoid')]
# define the regressor layers
regressor_layers = [
tf.keras.layers.Dense(8, activation='relu', input_shape=(16,)),
tf.keras.layers.Dense(1, activation='linear')]
# define the
autoencoder_loss_weights = [0.8, 0.2]
autoencoder_regressor = AutoencoderRegressor(encoder_layers, decoder_layers, regressor_layers, autoencoder_loss_weights)
autoencoder_regressor.compile(optimizer='adam', loss=[autoencoder_regressor.autoencoder_loss, autoencoder_regressor.regressor_loss])
autoencoder_regressor.fit(reduced_x_train2, [reduced_x_train2, y_train], epochs=100,
batch_size=32, validation_split=0.9,shuffle =True,
verbose = 2)
</code></pre>
<p>I get the following error:</p>
<pre><code>TypeError                                 Traceback (most recent call last)
Input In [14], in <cell line: 60>()
     56 autoencoder_regressor = AutoencoderRegressor(encoder_layers, decoder_layers, regressor_layers, autoencoder_loss_weights)
     58 autoencoder_regressor.compile(optimizer='adam', loss=[autoencoder_regressor.autoencoder_loss, autoencoder_regressor.regressor_loss])
---> 60 autoencoder_regressor.fit(reduced_x_train2, [reduced_x_train2, y_train], epochs=100,
     61                batch_size=32, validation_split=0.9,shuffle =True,
     62                verbose = 2)
</code></pre>
<p>TypeError: in user code:</p>
<pre><code>File "/user/iibi/amudireddy/.conda/envs/tfni10_py38/lib/python3.8/site-packages/keras/engine/training.py", line 1051, in train_function *
return step_function(self, iterator)
File "/user/iibi/amudireddy/.conda/envs/tfni10_py38/lib/python3.8/site-packages/keras/engine/training.py", line 1040, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/user/iibi/amudireddy/.conda/envs/tfni10_py38/lib/python3.8/site-packages/keras/engine/training.py", line 1030, in run_step **
outputs = model.train_step(data)
File "/user/iibi/amudireddy/.conda/envs/tfni10_py38/lib/python3.8/site-packages/keras/engine/training.py", line 890, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "/user/iibi/amudireddy/.conda/envs/tfni10_py38/lib/python3.8/site-packages/keras/engine/training.py", line 948, in compute_loss
return self.compiled_loss(
File "/user/iibi/amudireddy/.conda/envs/tfni10_py38/lib/python3.8/site-packages/keras/engine/compile_utils.py", line 215, in __call__
metric_obj.update_state(loss_metric_value, sample_weight=batch_dim)
File "/user/iibi/amudireddy/.conda/envs/tfni10_py38/lib/python3.8/site-packages/keras/utils/metrics_utils.py", line 70, in decorated
update_op = update_state_fn(*args, **kwargs)
File "/user/iibi/amudireddy/.conda/envs/tfni10_py38/lib/python3.8/site-packages/keras/metrics/base_metric.py", line 140, in update_state_fn
return ag_update_state(*args, **kwargs)
File "/user/iibi/amudireddy/.conda/envs/tfni10_py38/lib/python3.8/site-packages/keras/metrics/base_metric.py", line 449, in update_state **
sample_weight = tf.__internal__.ops.broadcast_weights(
File "/user/iibi/amudireddy/.conda/envs/tfni10_py38/lib/python3.8/site-packages/keras/engine/keras_tensor.py", line 254, in __array__
raise TypeError(
TypeError: You are passing KerasTensor(type_spec=TensorSpec(shape=(), dtype=tf.float32, name=None), name='Placeholder:0', description="created by layer 'tf.cast_15'"), an intermediate Keras symbolic input/output, to a TF API that does not allow registering custom dispatchers, such as 'tf.cond', 'tf.function', gradient tapes, or 'tf.map_fn'. Keras Functional model construction only supports TF API calls that *do* support dispatching, such as 'tf.math.add' or 'tf.reshape'. Other APIs cannot be called directly on symbolic Keras inputs/outputs. You can work around this limitation by putting the operation in a custom Keras layer 'call' and calling that layer on this symbolic input/output.
</code></pre>
<p>Where am I going WRONG?</p>
<p><strong>Edit:</strong> I missed an implementation requirement.
In <code>autoencoder_loss</code>, <code>autoencoder_reconstruction_loss</code> should take <code>(inputs, autoencoder_output)</code> as arguments, and <code>autoencoder_regression_loss</code> should take <code>(targets, regressor_output)</code>.</p>
<p>I am not sure how to implement this. Please help.</p>
|
<python><tensorflow><keras><regression><autoencoder>
|
2023-02-09 08:54:09
| 1
| 383
|
areddy
|
75,396,079
| 14,673,832
|
TypeError: 'int' object is not subscriptable while doing s-expression in Python
|
<p>I am trying to write a basic s-expression calculator in Python. An s-expression can contain addition, multiplication, both, neither, or just an integer number.</p>
<p>I tried the following snippet:</p>
<pre><code>def calc(expr):
print(expression[0])
if isinstance(expr, int):
return expr
elif expr[0] == '+':
return calc(expr[1]) + calc(expr[2])
elif expr[0] == '*':
return calc(expr[1]) * calc(expr[2])
else:
raise ValueError("Unknown operator: %s" % expr[0])
# Example usage
# expression = ('+', ('*', 3, 4), 5)
expression = (7)
result = calc(expression)
print(result)
</code></pre>
<p>When I pass the expression <code>('+', ('*', 3, 4), 5)</code>, it gives the correct answer, but when I just try to use the number <code>7</code>, or <code>7</code> inside parentheses as <code>(7)</code>, it gives the above error. How can I solve this?</p>
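<p>Two things are worth noting here: <code>(7)</code> is just the integer <code>7</code> (a one-element tuple would be <code>(7,)</code>), and the <code>print(expression[0])</code> line subscripts the global <code>expression</code> before the <code>isinstance</code> guard can run, which is where the error comes from. A hedged sketch with the base case doing its job:</p>

```python
def calc(expr):
    # base case first: a bare int is its own value, and is never subscripted
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    if op == '+':
        return calc(left) + calc(right)
    if op == '*':
        return calc(left) * calc(right)
    raise ValueError(f"Unknown operator: {op}")

print(calc(('+', ('*', 3, 4), 5)))  # 17
print(calc(7))                      # 7
```

<p>Unpacking <code>op, left, right = expr</code> also gives a clear error if an expression tuple has the wrong arity.</p>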
|
<python><recursion><s-expression>
|
2023-02-09 08:39:15
| 1
| 1,074
|
Reactoo
|
75,395,892
| 3,262,484
|
Writing a tuple search with Django ORM
|
<p>I'm trying to write a search based on tuples with the Django ORM syntax.</p>
<p>The final sql statement should look something like:</p>
<pre><code>SELECT * FROM mytable WHERE (field_a,field_b) IN ((1,2),(3,4));
</code></pre>
<p>I know I can achieve this in django using the extra keyword:</p>
<pre><code>MyModel.objects.extra(
where=["(field_a, field_b) IN %s"],
params=[((1,2),(3,4))]
)
</code></pre>
<p>but the "extra" keyword will be deprecated at some point in django so I'd like a pure ORM/django solution.</p>
<p>Searching the web, I found <a href="https://code.djangoproject.com/ticket/33015" rel="noreferrer">https://code.djangoproject.com/ticket/33015</a> and the comment from Simon Charette, something like the snippet below could be OK, but I can't get it to work.</p>
<pre><code>from django.db.models import Func, lookups
class ExpressionTuple(Func):
template = '(%(expressions)s)'
arg_joiner = ","
MyModel.objects.filter(lookups.In(
ExpressionTuple('field_a', 'field_b'),
((1,2),(3,4)),
))
</code></pre>
<p>I'm using Django 3.2 but I don't expect Django 4.x to do a big difference here. My db backend is posgresql in case it matters.</p>
|
<python><django><django-queryset>
|
2023-02-09 08:20:40
| 4
| 383
|
Bob Morane
|
75,395,863
| 3,459,293
|
Getting Categories data from rows in columns and matching values for actuals (y) and predicted (yhat) python
|
<p>I have a data frame like this, where Groups, Entity, and Year can have different categories. I have shown just one example.</p>
<pre><code>TimePeriod Groups Entity Category Year Quarter Predictions Value
1/1/2021 CO UK Model_Q1_2022 2021 1 yhat 25379.12223
1/1/2021 CO UK Model_Q4_2021 2021 1 y 19915.88
1/1/2021 CO UK Model_Q3_2021 2021 1 y 19915.88
1/1/2021 CO UK Model_Q3_2021 2021 1 yhat 24199.99065
1/1/2021 CO UK Model_Q4_2021 2021 1 yhat 24308.29262
1/1/2021 CO UK Model_Q2_2021 2021 1 yhat 24627.24434
1/1/2021 CO UK Model_Q1_2022 2021 1 y 19915.88
1/1/2021 CO UK Model_Q2_2021 2021 1 y 19915.88
</code></pre>
<p>I tried <code>pivot_table</code>. However, it gives me the needed columns but does not match the values for <code>yhat</code> and <code>y</code>:</p>
<pre><code> df.pivot_table(index=df.columns[:-2].tolist(), columns=['Predictions'], values='Value').reset_index().rename_axis(columns=None)
</code></pre>
<h3>resulting output</h3>
<pre><code>TimePeriod Groups Entity Category Year Quarter y yhat
1/1/2021 CO UK Model_Q1_2022 2021 1 19915.88 NaN
1/1/2021 CO UK Model_Q1_2022 2021 1 NaN 25379.12223
1/1/2021 CO UK Model_Q2_2021 2021 1 19915.88 NaN
1/1/2021 CO UK Model_Q2_2021 2021 1 NaN 24627.24434
1/1/2021 CO UK Model_Q3_2021 2021 1 19915.88 NaN
1/1/2021 CO UK Model_Q3_2021 2021 1 NaN 24199.99065
1/1/2021 CO UK Model_Q4_2021 2021 1 19915.88 NaN
1/1/2021 CO UK Model_Q4_2021 2021 1 NaN 24308.29262
</code></pre>
<h3>datatypes</h3>
<pre><code><class 'pandas.core.frame.DataFrame'>
Int64Index: 8 entries, 19535 to 140390
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 TimePeriod 8 non-null object
1 Groups 8 non-null object
2 Entity 8 non-null object
5 Currency 8 non-null object
6 Category 8 non-null object
7 Year 8 non-null int64
8 Quarter 8 non-null int64
9 Predictions 8 non-null object
10 Value 8 non-null float64
dtypes: float64(1), int64(2), object(9)
</code></pre>
<p>I don't know how I can achieve the above dataframe. Maybe I have to condition on TimePeriod / Categories, but then I can also have different Groups / Entity as well.</p>
<h3>expected output</h3>
<pre><code>TimePeriod Groups Entity Category Year Quarter yhat y
1/1/2021 CO UK Model_Q1_2022 2021 1 25379.12223 19915.88
1/1/2021 CO UK Model_Q2_2021 2021 1 24627.24434 19915.88
1/1/2021 CO UK Model_Q3_2021 2021 1 24199.99065 19915.88
1/1/2021 CO UK Model_Q4_2021 2021 1 24308.29262 19915.88
</code></pre>
<p>Any help / suggestion will be appreciated.</p>
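<p>For reference, on clean sample rows <code>pivot_table</code> does collapse <code>y</code>/<code>yhat</code> into one row; split rows like those shown usually mean one of the index columns differs invisibly between the <code>y</code> and <code>yhat</code> rows (stray whitespace, mixed dtypes) — that diagnosis is an assumption about the real data. A reduced sketch with a trimmed column set:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "TimePeriod": ["1/1/2021"] * 4,
    "Category": ["Model_Q1_2022", "Model_Q1_2022", "Model_Q2_2021", "Model_Q2_2021"],
    "Predictions": ["yhat", "y", "yhat", "y"],
    "Value": [25379.12223, 19915.88, 24627.24434, 19915.88],
})

# aggfunc="first" keeps the raw value instead of averaging duplicates
out = (
    df.pivot_table(index=["TimePeriod", "Category"], columns="Predictions",
                   values="Value", aggfunc="first")
      .reset_index()
      .rename_axis(columns=None)
)
print(out)
```

<p>If the real data still splits, normalizing the key columns first (e.g. <code>df["Category"] = df["Category"].str.strip()</code>) before pivoting is worth trying.</p>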
|
<python><pandas><data-wrangling>
|
2023-02-09 08:16:33
| 1
| 340
|
user3459293
|
75,395,481
| 7,368,045
|
Order in Python for loop (LeetCode 219 Contains Duplicate II)
|
<p><a href="https://goodtecher.com/leetcode-219-contains-duplicate-ii/" rel="nofollow noreferrer">https://goodtecher.com/leetcode-219-contains-duplicate-ii/</a></p>
<p>In this Python solution, why does <code>visited[nums[i]] = i</code> have to come after the if condition? I don't understand why the code wouldn't work if the order is reversed like this:</p>
<pre><code> visited = {}
for i in range(len(nums)):
visited[nums[i]] = i
if nums[i] in visited and abs(i - visited[nums[i]]) <= k:
return True
return False
</code></pre>
<p>I would appreciate an explanation, as well as some sources to help me understand how statement order works in Python loops. Thank you!</p>
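<p>The difference can be observed directly: with the write moved before the check, <code>visited[nums[i]]</code> is always the <em>current</em> index, so <code>abs(i - visited[nums[i]])</code> is <code>0 &lt;= k</code> and the function returns <code>True</code> on the very first element. A side-by-side sketch:</p>

```python
def check_after_write(nums, k):   # the reversed (broken) order
    visited = {}
    for i in range(len(nums)):
        visited[nums[i]] = i      # overwrites before comparing
        if nums[i] in visited and abs(i - visited[nums[i]]) <= k:
            return True           # abs(i - i) == 0 <= k, so this fires immediately
    return False

def check_before_write(nums, k):  # the correct order
    visited = {}
    for i in range(len(nums)):
        if nums[i] in visited and abs(i - visited[nums[i]]) <= k:
            return True           # visited still holds the *previous* index
        visited[nums[i]] = i
    return False

print(check_after_write([1, 2, 3], 1))   # True — wrong answer, no duplicates exist
print(check_before_write([1, 2, 3], 1))  # False — correct
```

<p>So the ordering matters because the dictionary must still describe the state <em>before</em> the current element when the comparison runs.</p>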
|
<python><python-3.x>
|
2023-02-09 07:32:39
| 1
| 463
|
Ashley Liu
|
75,395,322
| 7,340,317
|
How to get dictionary of df indices that links the same ids on different days?
|
<p>I've following toy-dataframe:</p>
<pre><code> | id| date
--------------
0 | a | d1
1 | b | d1
2 | a | d2
3 | c | d2
4 | b | d3
5 | a | d3
</code></pre>
<pre><code>import pandas as pd
df = pd.DataFrame({'id': ['a', 'b', 'a', 'c', 'b', 'a'], 'date': ['d1', 'd1', 'd2', 'd2', 'd3', 'd3']})
</code></pre>
<p>I want to obtain a 'linking dictionary' like this: <code>d = {0: 2, 2: 5, 1: 4}</code>,
where (the numbers are just row indices)</p>
<ul>
<li><code>0:2</code> means link <code>a</code> from <code>d1</code> to <code>a</code> from <code>d2</code>,</li>
<li><code>2:5</code> means link <code>a</code> from <code>d2</code> to <code>a</code> from <code>d3</code>,</li>
<li><code>1:4</code> means link <code>b</code> from <code>d1</code> to <code>b</code> from <code>d3</code></li>
</ul>
<p>Is there some simple and clean way to get it?</p>
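<p>One possible sketch: group the row index by <code>id</code> and shift it within each group, so every row points at the next row that shares its id:</p>

```python
import pandas as pd

df = pd.DataFrame({'id': ['a', 'b', 'a', 'c', 'b', 'a'],
                   'date': ['d1', 'd1', 'd2', 'd2', 'd3', 'd3']})

# For each row, the index of the next row with the same id (NaN if none)
idx = pd.Series(df.index, index=df.index)
nxt = idx.groupby(df['id']).shift(-1)
d = {i: int(n) for i, n in nxt.dropna().items()}
```

<p>Rows that are the last occurrence of their id (here 3, 4 and 5) drop out via <code>dropna</code>, which matches the desired output.</p>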
|
<python><pandas><dictionary>
|
2023-02-09 07:11:45
| 2
| 1,480
|
Quant Christo
|
75,395,297
| 657,693
|
Pandas Join into Comma Separated String, But One Record For Each Value in a Column
|
<p>I am trying to do some data manipulation on a Pandas DataFrame and I want to group by / categorize by a single column, but do so for each different corresponding group of rows.</p>
<p>For example, I have the below DataFrame -</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>sport</th>
<th>league</th>
<th>home</th>
<th>away</th>
<th>book</th>
<th>bet</th>
<th>odds</th>
<th>market</th>
</tr>
</thead>
<tbody>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>DraftKings</td>
<td>Aaron Nesmith</td>
<td>1100</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>DraftKings</td>
<td>Bam Adebayo</td>
<td>350</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>DraftKings</td>
<td>Bennedict Mathurin</td>
<td>850</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>DraftKings</td>
<td>Buddy Hield</td>
<td>900</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>DraftKings</td>
<td>Caleb Martin</td>
<td>1100</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>DraftKings</td>
<td>Gabe Vincent</td>
<td>950</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>DraftKings</td>
<td>Jimmy Butler</td>
<td>550</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>DraftKings</td>
<td>Myles Turner</td>
<td>600</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>DraftKings</td>
<td>Tyler Herro</td>
<td>600</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>DraftKings</td>
<td>Tyrese Haliburton</td>
<td>800</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>FanDuel</td>
<td>Aaron Nesmith</td>
<td>1600</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>FanDuel</td>
<td>Andrew Nembhard</td>
<td>2000</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>FanDuel</td>
<td>Bam Adebayo</td>
<td>360</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>FanDuel</td>
<td>Buddy Hield</td>
<td>850</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>FanDuel</td>
<td>Caleb Martin</td>
<td>1600</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>FanDuel</td>
<td>Gabe Vincent</td>
<td>1000</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>FanDuel</td>
<td>Jimmy Butler</td>
<td>470</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>FanDuel</td>
<td>Myles Turner</td>
<td>500</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>FanDuel</td>
<td>Tyler Herro</td>
<td>550</td>
<td>First Basket</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>FanDuel</td>
<td>Tyrese Haliburton</td>
<td>950</td>
<td>First Basket</td>
</tr>
</tbody>
</table>
</div>
<p>Here is the data as a copy / paste dict -</p>
<pre><code>{'sport': {180: 'basketball', 182: 'basketball', 184: 'basketball', 186: 'basketball', 188: 'basketball', 190: 'basketball', 192: 'basketball', 194: 'basketball', 196: 'basketball', 198: 'basketball', 210: 'basketball', 211: 'basketball', 212: 'basketball', 213: 'basketball', 214: 'basketball', 215: 'basketball', 216: 'basketball', 217: 'basketball', 218: 'basketball', 219: 'basketball'}, 'league': {180: 'NBA', 182: 'NBA', 184: 'NBA', 186: 'NBA', 188: 'NBA', 190: 'NBA', 192: 'NBA', 194: 'NBA', 196: 'NBA', 198: 'NBA', 210: 'NBA', 211: 'NBA', 212: 'NBA', 213: 'NBA', 214: 'NBA', 215: 'NBA', 216: 'NBA', 217: 'NBA', 218: 'NBA', 219: 'NBA'}, 'home': {180: 'Miami Heat', 182: 'Miami Heat', 184: 'Miami Heat', 186: 'Miami Heat', 188: 'Miami Heat', 190: 'Miami Heat', 192: 'Miami Heat', 194: 'Miami Heat', 196: 'Miami Heat', 198: 'Miami Heat', 210: 'Miami Heat', 211: 'Miami Heat', 212: 'Miami Heat', 213: 'Miami Heat', 214: 'Miami Heat', 215: 'Miami Heat', 216: 'Miami Heat', 217: 'Miami Heat', 218: 'Miami Heat', 219: 'Miami Heat'}, 'away': {180: 'Indiana Pacers', 182: 'Indiana Pacers', 184: 'Indiana Pacers', 186: 'Indiana Pacers', 188: 'Indiana Pacers', 190: 'Indiana Pacers', 192: 'Indiana Pacers', 194: 'Indiana Pacers', 196: 'Indiana Pacers', 198: 'Indiana Pacers', 210: 'Indiana Pacers', 211: 'Indiana Pacers', 212: 'Indiana Pacers', 213: 'Indiana Pacers', 214: 'Indiana Pacers', 215: 'Indiana Pacers', 216: 'Indiana Pacers', 217: 'Indiana Pacers', 218: 'Indiana Pacers', 219: 'Indiana Pacers'}, 'book': {180: 'DraftKings', 182: 'DraftKings', 184: 'DraftKings', 186: 'DraftKings', 188: 'DraftKings', 190: 'DraftKings', 192: 'DraftKings', 194: 'DraftKings', 196: 'DraftKings', 198: 'DraftKings', 210: 'FanDuel', 211: 'FanDuel', 212: 'FanDuel', 213: 'FanDuel', 214: 'FanDuel', 215: 'FanDuel', 216: 'FanDuel', 217: 'FanDuel', 218: 'FanDuel', 219: 'FanDuel'}, 'bet': {180: 'Aaron Nesmith', 182: 'Bam Adebayo', 184: 'Bennedict Mathurin', 186: 'Buddy Hield', 188: 'Caleb Martin', 190: 
'Gabe Vincent', 192: 'Jimmy Butler', 194: 'Myles Turner', 196: 'Tyler Herro', 198: 'Tyrese Haliburton', 210: 'Aaron Nesmith', 211: 'Andrew Nembhard', 212: 'Bam Adebayo', 213: 'Buddy Hield', 214: 'Caleb Martin', 215: 'Gabe Vincent', 216: 'Jimmy Butler', 217: 'Myles Turner', 218: 'Tyler Herro', 219: 'Tyrese Haliburton'}, 'odds': {180: '1100', 182: '350', 184: '850', 186: '900', 188: '1100', 190: '950', 192: '550', 194: '600', 196: '600', 198: '800', 210: '1600', 211: '2000', 212: '360', 213: '850', 214: '1600', 215: '1000', 216: '470', 217: '500', 218: '550', 219: '950'}, 'market': {180: 'First Basket', 182: 'First Basket', 184: 'First Basket', 186: 'First Basket', 188: 'First Basket', 190: 'First Basket', 192: 'First Basket', 194: 'First Basket', 196: 'First Basket', 198: 'First Basket', 210: 'First Basket', 211: 'First Basket', 212: 'First Basket', 213: 'First Basket', 214: 'First Basket', 215: 'First Basket', 216: 'First Basket', 217: 'First Basket', 218: 'First Basket', 219: 'First Basket'}}
</code></pre>
<p>What I'd like to achieve is the sample output below. The goal of this output is for each 'book' in the DataFrame to group by the bet such that the odds for that book are first and the remaining odds are joined behind it with a '/' separator.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>sport</th>
<th>league</th>
<th>home</th>
<th>away</th>
<th>book</th>
<th>bet</th>
<th>market</th>
<th>odds</th>
</tr>
</thead>
<tbody>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>DraftKings</td>
<td>Aaron Nesmith</td>
<td>First Basket</td>
<td>1100/1600</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>FanDuel</td>
<td>Aaron Nesmith</td>
<td>First Basket</td>
<td>1600/1100</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>DraftKings</td>
<td>Bam Adebayo</td>
<td>First Basket</td>
<td>350/360</td>
</tr>
<tr>
<td>basketball</td>
<td>NBA</td>
<td>Miami Heat</td>
<td>Indiana Pacers</td>
<td>FanDuel</td>
<td>Bam Adebayo</td>
<td>First Basket</td>
<td>360/350</td>
</tr>
</tbody>
</table>
</div>
<p>I've tried this code and while it gets me close I can't get it to also have one record per book.</p>
<pre class="lang-py prettyprint-override"><code>print(single_game.groupby(["book", "bet"]).agg({"odds": "/".join}).reset_index())
</code></pre>
<p>Any help here would be greatly appreciated.</p>
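<p>One possible sketch on a trimmed-down frame (column subset assumed): group by <code>bet</code>, and inside each group build every row's string with its own odds first, followed by the other books' odds:</p>

```python
import pandas as pd

# Trimmed-down stand-in for the frame in the question
df = pd.DataFrame({
    'book': ['DraftKings', 'FanDuel', 'DraftKings', 'FanDuel'],
    'bet':  ['Aaron Nesmith', 'Aaron Nesmith', 'Bam Adebayo', 'Bam Adebayo'],
    'odds': ['1100', '1600', '350', '360'],
})

def own_first(g):
    # each row keeps its own odds first, then the other rows' odds in order
    return g.apply(
        lambda r: '/'.join([r['odds']] + g.loc[g.index != r.name, 'odds'].tolist()),
        axis=1)

df['odds'] = df.groupby('bet', group_keys=False).apply(own_first)
```

<p>Because the result is built per row rather than per group, every book keeps its own record, just with a reordered joined string.</p>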
|
<python><pandas><dataframe>
|
2023-02-09 07:09:42
| 1
| 1,366
|
mattdonders
|
75,395,204
| 3,136,710
|
Why is this giving me a bad estimation of pi (Leibniz formula)
|
<p>Leibniz formula is</p>
<p>π/4 = 1 - 1/3 + 1/5 - 1/7 + 1/9 - ...</p>
<p>Or, π = 4 ( 1 - 1/3 + 1/5 - 1/7 + 1/9 - ... )</p>
<p>I don't really understand why my code is producing a bad result. There is another way of doing this, but what is wrong with my code specifically? It seems like it should give me a good result.</p>
<pre><code>it = int(input("Enter number of iterations: "))
denom = 3
approx = 1
count = 0
while count < it:
    if count % 2 == 0:  # count is even
        approx -= 1/denom
    elif count % 2 == 1:  # count is odd
        approx += 1/denom
    denom += 2
    count += 1
approx *= 4  # multiply by 4 gets the final result
print("The approximation of pi is ", approx)
# For 5, the approximation of pi should be 3.3396825396825403
</code></pre>
<p>My output is:</p>
<pre><code>Enter number of iterations: 5
The approximation of pi is 2.9760461760461765
</code></pre>
<p>The expected output should be:</p>
<pre><code>Enter number of iterations: 5
The approximation of pi should be 3.3396825396825403
</code></pre>
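<p>For what it's worth, the quoted expected value treats the leading 1 as the first term, so only <code>it - 1</code> alternating corrections follow it; a sketch under that counting convention (an assumption inferred from the expected output):</p>

```python
def leibniz_pi(it):
    # it terms total, counting the initial 1 as the first term
    approx = 1.0
    denom = 3
    for count in range(it - 1):
        if count % 2 == 0:
            approx -= 1 / denom   # -1/3, -1/7, ...
        else:
            approx += 1 / denom   # +1/5, +1/9, ...
        denom += 2
    return 4 * approx
```

<p>With 5 "iterations" this applies only four corrections (1 - 1/3 + 1/5 - 1/7 + 1/9), reproducing the expected value.</p>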
|
<python>
|
2023-02-09 06:59:06
| 1
| 652
|
Spellbinder2050
|
75,395,037
| 6,013,016
|
python using super(), can't access variable from second parent class
|
<p>This is probably very simple question. In a child class, which is inherited from two parent classes, I can't access variable from second parent class using super(). Here is my code:</p>
<pre><code>class Core:
    def __init__(self):
        self.a_from_Core = 7

class Extra:
    def __init__(self):
        self.b_from_Extra = 90

class Trash(Core, Extra):
    def __init__(self):
        super().__init__()
        print(self.a_from_Core)  # working
        print(self.b_from_Extra)  # does not work

trashcan = Trash()
</code></pre>
<p>And here is error:</p>
<blockquote>
<p>AttributeError: 'Trash' object has no attribute 'b_from_Extra'</p>
</blockquote>
<p>What am I doing wrong?</p>
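<p>A sketch of the cooperative pattern that would set both attributes: <code>super().__init__()</code> follows the MRO (<code>Trash -> Core -> Extra -> object</code>), so each <code>__init__</code> must itself call <code>super().__init__()</code> for the chain to reach <code>Extra</code>:</p>

```python
class Core:
    def __init__(self):
        super().__init__()          # continues to Extra in Trash's MRO
        self.a_from_Core = 7

class Extra:
    def __init__(self):
        super().__init__()
        self.b_from_Extra = 90

class Trash(Core, Extra):
    def __init__(self):
        super().__init__()          # starts the chain at Core

t = Trash()
```

<p>In the original code, <code>Core.__init__</code> never calls <code>super().__init__()</code>, so <code>Extra.__init__</code> is never run and <code>b_from_Extra</code> is never set.</p>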
|
<python><class><inheritance><multiple-inheritance><super>
|
2023-02-09 06:37:56
| 2
| 5,926
|
Scott
|
75,394,933
| 10,437,110
|
How to gather outputs from joblib.Parallel in Python into one csv file?
|
<p>I have received some previously inactive code and need to make some changes to it.
It reads an input csv file, loops over its rows, calls a function for each row, and gets an output row which is stored in an output csv file.</p>
<p>Here is the code:</p>
<pre><code>import pandas as pd
from joblib import Parallel, delayed
</code></pre>
<pre><code>def merge_all_rows():
    # some code to merge all csv files in a folder.
    final_output.to_csv('final_output.csv')

def f(row):
    # some code
    output_row.to_csv(f'{index}.csv')

input_df = pd.read_csv(input_file)
input_rows = []
for index, row in input_df.iterrows():
    input_rows.append([index, row['a'], row['b']])

Parallel(n_jobs=5, verbose=0)(delayed(f)(input_row) for input_row in input_rows)
merge_all_rows()
</code></pre>
<p>I didn't like the approach of creating one csv file per row and then finally appending the data from these files to the final output csv file.</p>
<p>I wanted to try something better, so I used a global variable to collect all the rows, but it didn't work: the global variable was empty at the end of the process.</p>
<pre><code>def f(row):
    global output_df
    # some code
    output_df = pd.concat([output_df, current_row_output_df])

output_df = pd.DataFrame()
input_df = pd.read_csv(input_file)
input_rows = []
for index, row in input_df.iterrows():
    input_rows.append([index, row['a'], row['b']])

Parallel(n_jobs=5, verbose=0)(delayed(f)(input_row) for input_row in input_rows)
display(output_df)
</code></pre>
<p>Can someone suggest a better approach? It doesn't have to use joblib; any module that provides parallelism would do.</p>
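<p>For what it's worth, the global-variable version fails because the workers run in separate processes that don't share the parent's globals. One stdlib sketch of the collect-and-concat idea (with joblib, <code>Parallel(...)</code> likewise returns the list of values returned by <code>f</code>, so <code>f</code> can return a DataFrame instead of writing a per-row csv):</p>

```python
import pandas as pd
from concurrent.futures import ThreadPoolExecutor

def f(args):
    # hypothetical per-row work: return the output row instead of writing a csv
    index, a, b = args
    return pd.DataFrame({'row': [index], 'total': [a + b]})

input_df = pd.DataFrame({'a': [1, 2, 3], 'b': [10, 20, 30]})
input_rows = [(i, r['a'], r['b']) for i, r in input_df.iterrows()]

with ThreadPoolExecutor(max_workers=5) as ex:
    results = list(ex.map(f, input_rows))

output_df = pd.concat(results, ignore_index=True)
```

<p>Collecting return values instead of sharing mutable state sidesteps the process-isolation problem entirely, and one <code>concat</code> at the end replaces the per-row files.</p>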
|
<python><pandas><csv>
|
2023-02-09 06:22:28
| 0
| 397
|
Ash
|
75,394,708
| 6,856,361
|
how to specify exact env variable from configmap in airflow kubernetespodoperator
|
<p>I'm defining the env vars for the pythons airflow dags as follow:</p>
<pre class="lang-py prettyprint-override"><code>from airflow.kubernetes.secret import Secret
db_pass = Secret('env', 'DB_PASS', 'db-credentials', 'db_user')
envs = {
'JAVA_OPTS':'-XX:MaxRAMPercentage=65.0'
}
with DAG(
'my_dag',
start_date=datetime(2023, 2, 7),
schedule_interval='@hourly',
catchup=False
) as dag:
KubernetesPodOperator(
namespace='jobs',
image='my-image',
env_vars=envs,
labels={'env': 'airflow'},
secrets=[db_pass],
configmaps=["middleware-ips"],
name='my-dag',
is_delete_operator_pod=True,
get_logs=True,
resources=resources,
image_pull_secrets='quay-key',
task_id='my-task',
startup_timeout_seconds=600,
dag=dag
)
</code></pre>
<p>I need to put in env the variable from configmap. It's defined in deployment in kube as follow:</p>
<pre class="lang-yaml prettyprint-override"><code> - name: MONGO_URL
valueFrom:
configMapKeyRef:
name: middleware-ips
key: mongodb-main
- name: APP_MONGOCONFIG_URI
value: mongodb://$(MONGO_URL)/db?compressors=zstd
</code></pre>
<p>How to specify it in the <code>envs</code> variable in the python dag so that the <code>APP_MONGOCONFIG_URI</code> get's to the <code>KubernetesPodOperator</code>?</p>
|
<python><kubernetes><airflow>
|
2023-02-09 05:50:59
| 1
| 2,162
|
Izbassar Tolegen
|
75,394,675
| 16,780,162
|
Plotting pcolormesh in python from csv data
|
<p>I am trying to make a pcolormesh plot in Python from my csv file, but I am stuck on a dimension error.</p>
<p>My csv looks like this:</p>
<pre><code>ratio 5% 10% 20% 30% 40% 50%
1.2 0.60 0.63 0.62 0.66 0.66 0.77
1.5 0.71 0.81 0.75 0.78 0.76 0.77
1.8 0.70 0.82 0.80 0.73 0.80 0.78
1.2 0.75 0.84 0.94 0.84 0.76 0.82
2.3 0.80 0.92 0.93 0.85 0.87 0.86
2.5 0.80 0.85 0.91 0.85 0.87 0.88
2.9 0.85 0.91 0.96 0.96 0.86 0.87
</code></pre>
<p>I want to make pcolormesh plot where x-axis shows ratio and y-axis shows csv header i.e <code>0.05, 0.1, 0.2, 0.3, 0.4, 0.5</code> and the plot includes values from csv 2nd column.</p>
<p>I tried to do following in python:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv('./result.csv')
xlabel = df['ratio']
ylabel = [0.05, 0.1, 0.2, 0.3, 0.4, 0.5]
plt.figure(figsize=(8, 6))
df = df.iloc[:, 1:]
plt.pcolormesh(df, xlabel, ylabel, cmap='RdBu')
plt.colorbar()
plt.xlabel('rati0')
plt.ylabel('threshold')
plt.show()
</code></pre>
<p>But it doesn't work.</p>
<p>Can I get some help to make the plot I want?</p>
<p>Thank you.</p>
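<p>One possible sketch on a stand-in frame (column names assumed from the csv above): <code>pcolormesh(X, Y, C)</code> expects <code>C</code> shaped <code>(len(Y), len(X))</code>, so the value block has to be transposed and the arguments given in <code>X, Y, C</code> order:</p>

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # render off-screen
import matplotlib.pyplot as plt
import pandas as pd

# Small stand-in for result.csv
df = pd.DataFrame({'ratio': [1.2, 1.5, 1.8],
                   '5%':  [0.60, 0.71, 0.70],
                   '10%': [0.63, 0.81, 0.82]})

x = df['ratio'].to_numpy()
y = np.array([0.05, 0.10])          # the parsed column headers
C = df.iloc[:, 1:].to_numpy().T     # shape (len(y), len(x))

# shading='nearest' lets x and y have the same lengths as C's dimensions
mesh = plt.pcolormesh(x, y, C, cmap='RdBu', shading='nearest')
plt.colorbar()
plt.xlabel('ratio')
plt.ylabel('threshold')
```

<p>The original call fails because the DataFrame is passed as the coordinate argument and the value block is never transposed.</p>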
|
<python><python-3.x><csv><matplotlib><plot>
|
2023-02-09 05:45:59
| 1
| 332
|
Codeholic
|
75,394,636
| 14,256,643
|
how to jump back before for loop and update variable until meet my requirments
|
<p>I am scraping nested text from parent categories and child categories.
Here is what my loop looks like:</p>
<pre><code>first for loop will scrape all parent category:
...seond for loop will scrape child1 category of parent category
...third for loop will scrape child2 category of child1 category
</code></pre>
<p>I am trying to scrape all parent and child categories from this <a href="https://www.daraz.com.bd/" rel="nofollow noreferrer">page</a>.</p>
<p>If my <code>sub_cat_1 = y.text</code> is <strong>None or an empty string</strong>, then I want to increment the number in <code>Level_1_Category_No{increment_by_1}</code> by 1 inside this variable: <code>sub_category_one = driver.find_elements(By.CSS_SELECTOR , ".Level_1_Category_No1 .lzd-site-menu-sub-item > a span")</code>. Here is my full code:</p>
<pre><code>driver.get("https://www.daraz.com.bd/")
time.sleep(10)

main_category = driver.find_elements(By.CSS_SELECTOR, '.lzd-site-menu-root-item span')

with open("all_category_subcat.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Main Category", "Sub Category 1", "Sub Category 2"])
    for i in main_category:
        hover = ActionChains(driver).move_to_element(i)
        hover.perform()
        main_cat = i.text
        print(main_cat)
        sub_category_one = driver.find_elements(By.CSS_SELECTOR, ".Level_1_Category_No1 .lzd-site-menu-sub-item > a span")
        for y in sub_category_one:
            hover = ActionChains(driver).move_to_element(y)
            hover.perform()
            sub_cat_1 = y.text
            print("--------------", sub_cat_1, "--------------")
            if sub_cat_1 is None or sub_cat_1 == "":
                # update the value of sub_category_one and run the for loop again
                pass
            sub_category_two = driver.find_elements(By.CSS_SELECTOR, ".lzd-site-menu-grand-active span")
            for z in sub_category_two:
                sub_cat_2 = z.text
                print(sub_cat_2)
                writer.writerow([main_cat, sub_cat_1, sub_cat_2])
</code></pre>
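<p>A browser-free sketch of the "retry with an incremented selector" idea: rebuild the selector string with an increasing level number until a non-empty text appears (the <code>find_texts</code> stand-in and its fake data replace the <code>driver.find_elements</code> call for illustration):</p>

```python
def find_texts(selector):
    # stand-in for driver.find_elements(By.CSS_SELECTOR, selector);
    # the selectors and texts here are made up for illustration
    fake = {".Level_1_Category_No1": [""],
            ".Level_1_Category_No2": ["Phones", "Laptops"]}
    return fake.get(selector, [])

level = 1
texts = []
while level <= 5:                     # assumed upper bound on levels
    texts = find_texts(f".Level_1_Category_No{level}")
    if any(texts):                    # stop once a non-empty text appears
        break
    level += 1
```

<p>In the real scraper the same <code>while</code> would wrap the <code>find_elements</code> call, replacing the hard-coded <code>No1</code> in the selector.</p>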
|
<python><python-3.x><selenium><selenium-webdriver>
|
2023-02-09 05:40:33
| 1
| 1,647
|
boyenec
|
75,394,573
| 51,292
|
AttributeError: module 'tensorflow.core.function.trace_type' has no attribute 'from_value'
|
<p>This one line python program fails with the error message above on windows 10 with python 310:</p>
<pre><code>from keras.models import Sequential
</code></pre>
<p>Prefixing the keras imports with 'tensorflow.' does not help.</p>
<p>Edit 1: It seems like all of my Python files that use tensorflow are now failing with this error. I must have installed something that is incompatible. How does one find out what the problem is?</p>
<p>Edit 2: Any import of tensorflow or keras gets a stack trace (please see below). They all seem to get into module_util and end up in _make_function_type.</p>
<pre><code>Traceback (most recent call last):
File "D:\ray\eclisepython\2py\src\p\t.py", line 3, in <module>
import tensorflow as tf
File "C:\Users\raz\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\__init__.py", line 37, in <module>
from tensorflow.python.tools import module_util as _module_util
</code></pre>
<p>...</p>
<pre><code> File "C:\Users\raz\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\eager\polymorphic_function\function_spec.py", line 272, in _make_function_type
type_constraint = trace_type.from_value(
AttributeError: module 'tensorflow.core.function.trace_type' has no attribute 'from_value'
Package Version
---------------------------- -----------
absl-py 1.3.0
altgraph 0.17.2
anyio 3.6.1
appdirs 1.4.4
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
arrow 1.2.3
asgiref 3.5.2
asttokens 2.2.1
astunparse 1.6.3
attrs 22.2.0
backcall 0.2.0
beautifulsoup4 4.11.1
bleach 5.0.1
blosc2 2.0.0
cachetools 5.2.1
certifi 2022.9.24
cffi 1.15.1
charset-normalizer 2.1.1
click 8.1.3
colorama 0.4.5
comm 0.1.2
contextvars 2.4
contourpy 1.0.6
cryptography 39.0.0
cycler 0.11.0
Cython 0.29.33
dash 2.7.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
debugpy 1.6.5
decorator 5.1.1
defusedxml 0.7.1
entrypoints 0.4
et-xmlfile 1.1.0
executing 1.2.0
fastjsonschema 2.16.2
finta 1.3
Flask 2.2.2
flatbuffers 23.1.4
fonttools 4.38.0
fqdn 1.5.1
frozendict 2.3.4
future 0.18.2
gast 0.4.0
google-auth 2.16.0
google-auth-oauthlib 0.4.6
google-pasta 0.2.0
grpcio 1.51.1
h11 0.13.0
h5py 3.7.0
html5lib 1.1
htmltools 0.1.2
idna 3.3
immutables 0.18
ipykernel 6.20.1
ipython 8.8.0
ipython-genutils 0.2.0
ipywidgets 8.0.4
isoduration 20.11.0
itsdangerous 2.1.2
jedi 0.18.2
Jinja2 3.1.2
joblib 1.2.0
jsonpointer 2.3
jsonschema 4.17.3
jupyter 1.0.0
jupyter_client 7.4.8
jupyter-console 6.4.4
jupyter_core 5.1.3
jupyter-events 0.6.2
jupyter_server 2.0.6
jupyter_server_terminals 0.4.4
jupyterlab-pygments 0.2.2
jupyterlab-widgets 3.0.5
keras 2.10.0
Keras-Preprocessing 1.1.2
kiwisolver 1.4.4
libclang 15.0.6.1
linkify-it-py 2.0.0
lxml 4.9.2
Markdown 3.4.1
markdown-it-py 2.1.0
MarkupSafe 2.1.1
matplotlib 3.6.2
matplotlib-inline 0.1.6
mdurl 0.1.1
mistune 2.0.4
msgpack 1.0.4
multitasking 0.0.11
nbclassic 0.4.8
nbclient 0.7.2
nbconvert 7.2.7
nbformat 5.7.2
nest-asyncio 1.5.6
notebook 6.5.2
notebook_shim 0.2.2
numexpr 2.8.4
numpy 1.24.2
oauthlib 3.2.2
openpyxl 3.1.0
opt-einsum 3.3.0
packaging 21.3
pandas 1.5.3
pandas-datareader 0.10.0
pandocfilters 1.5.0
parso 0.8.3
patsy 0.5.3
pefile 2022.5.30
pickleshare 0.7.5
Pillow 9.4.0
pip 23.0
platformdirs 2.6.2
plotly 5.11.0
prometheus-client 0.15.0
prompt-toolkit 3.0.36
protobuf 3.19.6
psutil 5.9.3
pure-eval 0.2.2
py-cpuinfo 9.0.0
py4j 0.10.9.5
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.21
Pygments 2.14.0
pyinstaller 5.3
pyinstaller-hooks-contrib 2022.8
pyparsing 3.0.9
pyrsistent 0.19.3
pyspark 3.3.1
python-dateutil 2.8.2
python-json-logger 2.0.4
python-multipart 0.0.5
pytz 2022.7
pywin32 305
pywin32-ctypes 0.2.0
pywinpty 2.0.10
PyYAML 6.0
pyzmq 24.0.1
qtconsole 5.4.0
QtPy 2.3.0
requests 2.28.1
requests-oauthlib 1.3.1
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rsa 4.9
scikit-learn 1.2.0
scipy 1.10.0
seaborn 0.12.2
Send2Trash 1.8.0
setuptools 63.2.0
shiny 0.2.4
simplejson 3.18.1
six 1.16.0
sniffio 1.2.0
soupsieve 2.3.2.post1
stack-data 0.6.2
starlette 0.20.4
statsmodels 0.13.5
tables 3.8.0
tenacity 8.1.0
tensorboard 2.10.1
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow 2.11.0
tensorflow-estimator 2.10.0
tensorflow-intel 2.11.0
tensorflow-io-gcs-filesystem 0.29.0
termcolor 2.2.0
terminado 0.17.1
Theano 1.0.5
threadpoolctl 3.1.0
tinycss2 1.2.1
tornado 6.2
traitlets 5.8.1
typing_extensions 4.3.0
uc-micro-py 1.0.1
uri-template 1.2.0
urllib3 1.26.12
uvicorn 0.18.2
watchdog 2.1.9
wcwidth 0.2.5
webcolors 1.12
webencodings 0.5.1
websocket-client 1.4.2
websockets 10.3
Werkzeug 2.2.2
wheel 0.38.4
widgetsnbextension 4.0.5
wrapt 1.14.1
xgboost 1.7.3
yahoo-finance 1.4.0
yfinance 0.2.3
</code></pre>
|
<python><python-3.x><tensorflow><keras><tensorflow2.0>
|
2023-02-09 05:32:14
| 1
| 10,041
|
Ray Tayek
|
75,394,542
| 12,331,179
|
Build Structure using pandas Dataframe
|
<p>Input Data</p>
<p><a href="https://i.sstatic.net/3pt4g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3pt4g.png" alt="enter image description here" /></a></p>
<pre><code>import pandas as pd
import numpy as np
a1=["data.country", "data.studentinfo.city","data.studentinfo.name.id.grant"]
a2=["StringType()","StringType()","StringType()"]
d1=pd.DataFrame(list(zip(a1,a2)),columns=['action','type'])
</code></pre>
<p>We have to build the structure below from the dataframe using a for loop:</p>
<pre><code>StructType([StructField("data",
StructType([StructField("country",StringType(),True),
StructField("studentinfo",
StructType([StructField("city",StringType(),True),
StructField("name",StructType([
StructField("id",StructType([
StructField("grant",StringType(),True)])
)]))
])
)])
)])
</code></pre>
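<p>One possible sketch of the recursion: fold each dotted path into a nested dict, then walk it. Here the walker emits the schema as source text so the idea is visible without pyspark installed; with pyspark available you would return <code>StructType</code> / <code>StructField</code> objects instead of strings:</p>

```python
paths = ["data.country", "data.studentinfo.city",
         "data.studentinfo.name.id.grant"]

# Fold the dotted paths into a nested dict; a leaf is an empty dict
tree = {}
for path in paths:
    node = tree
    for part in path.split('.'):
        node = node.setdefault(part, {})

def to_struct(node):
    # empty dict -> StringType leaf, otherwise a nested StructType
    if not node:
        return "StringType()"
    fields = ", ".join(
        f'StructField("{name}", {to_struct(child)}, True)'
        for name, child in node.items())
    return f"StructType([{fields}])"

schema_src = to_struct(tree)
```

<p>The same two-step shape (fold paths into a tree, then recurse over the tree) works regardless of whether the leaves carry <code>StringType()</code> strings or real pyspark type objects.</p>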
|
<python><python-3.x><pandas><dataframe>
|
2023-02-09 05:27:23
| 1
| 386
|
Amol
|
75,394,318
| 1,870,832
|
python text parsing to split list into chunks including preceding delimiters
|
<p><strong>What I Have</strong></p>
<p>After OCR'ing some public Q&A deposition pdfs which have a Q&A form, I have raw text like the following:</p>
<pre><code>text = """\na\n\nQ So I do first want to bring up exhibit No. 46, which is in the binder
in front of\nyou.\n\nAnd that is a letter [to] Alston\n& Bird...
\n\nIs that correct?\n\nA This is correct.\n\nQ Okay."""
</code></pre>
<p>...which I want to split into the separate questions and answers. Each Question or Answer starts with <code>'\nQ '</code>, <code>'\nA '</code>, <code>'\nQ_'</code> or <code>'\nA_'</code> (e.g. matches regex <code>"\n[QA]_?\s"</code>)</p>
<p><strong>What I've Done So Far</strong></p>
<p>I can get a list of all questions and answers with the following code:</p>
<pre><code>pattern = "\n[QA]_?\s"
q_a_list = re.split(pattern, text)
print(q_a_list)
</code></pre>
<p>which yields <code>q_a_list</code>:</p>
<pre><code>['\na\n',
'So I do first want to bring up exhibit No. 46, which is in the binder \nin front of\nyou.\n\nAnd that is a letter [to] Alston\n& Bird...\n\n\nIs that correct?\n',
'This is correct.\n',
'Okay.']
</code></pre>
<p><strong>What I Want</strong></p>
<p>This is close to what I want, but has the following problems:</p>
<ul>
<li>It's not always clear if a statement is a Question or an Answer, and</li>
<li>Sometimes, such as in this particular example, the first item in the list may be neither a Question nor an Answer, but just random text before the first <code>\nQ</code> delimiter.</li>
</ul>
<p>I would like a modified version of the my <code>q_a_list</code> above, but which addresses the two bulleted problems by linking each text chunk to the delimiter that preceded it. Something like:</p>
<pre><code>[{'0': '\na\n',
'\nQ': 'So I do first want to bring up exhibit No. 46, which is in the binder \nin front of\nyou.\n\nAnd that is a letter [to] Alston\n& Bird...\n\n\nIs that correct?\n',
'\nA': 'This is correct.\n',
'\nQ': 'Okay.'}]
</code></pre>
<p>or</p>
<pre><code>[{'\nQ': 'So I do first want to bring up exhibit No. 46, which is in the binder \nin front of\nyou.\n\nAnd that is a letter [to] Alston\n& Bird...\n\n\nIs that correct?\n',
'\nA': 'This is correct.\n',
'\nQ': 'Okay.'}]
</code></pre>
<p>or maybe even just a list with delimiters pre-pended:</p>
<pre><code>['\nQ: So I do first want to bring up exhibit No. 46, which is in the binder \nin front of\nyou.\n\nAnd that is a letter [to] Alston\n& Bird...\n\n\nIs that correct?\n',
'\nA: This is correct.\n',
'\nQ: Okay.'
]
</code></pre>
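<p>One possible sketch (a shortened stand-in text is assumed): putting the delimiter's letter in a capturing group makes <code>re.split</code> keep it, so each chunk can be paired with its preceding delimiter, and any preamble before the first match stays in the first element:</p>

```python
import re

text = "\na\n\nQ So I do first want...\n\nIs that correct?\n\nA This is correct.\n\nQ Okay."

# the capturing group keeps each delimiter letter in the split output
parts = re.split(r"\n([QA])_?\s", text)
preamble, rest = parts[0], parts[1:]
pairs = list(zip(rest[::2], rest[1::2]))   # [(letter, chunk), ...]
```

<p>A list of <code>(letter, chunk)</code> pairs sidesteps the duplicate-key problem of the dict-shaped outputs above, since a dict cannot hold two <code>'\nQ'</code> keys.</p>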
|
<python><regex><parsing><nlp><ocr>
|
2023-02-09 04:49:23
| 3
| 9,136
|
Max Power
|
75,394,255
| 6,751,456
|
write Trino query data directly to s3
|
<p>Currently we have</p>
<ul>
<li>Trino query run and fetch data</li>
<li>write this to local filesystem</li>
<li>upload this file to s3 bucket.</li>
</ul>
<p>For smaller data this is no issue. But currently with large data volume, this is posing an issue with server even returning <code>502 bad gateway</code>.</p>
<p>I am also familiar with <code>postgres/redshift</code> queries writing directly to <code>s3</code> with <code>copy</code> commands.</p>
<p>Is there similar approach for writing Trino queries too?</p>
|
<python><presto><trino><orc>
|
2023-02-09 04:39:20
| 1
| 4,161
|
Azima
|
75,394,159
| 6,037,395
|
How to adjust the font of tick labels for a percent-formatted axis in matplotlib?
|
<p>I usually use <code>matplotlib</code> with the following options:</p>
<pre class="lang-py prettyprint-override"><code>matplotlib.rcParams['text.latex.preamble'] = r'\usepackage{amsmath}'
matplotlib.rc('text', usetex = True)
</code></pre>
<p>such that the text font looks better (at least to me).
However, if I format one of the axis to percent,
the font of tick labels on that axis will fall back to the default.
Here is an MWE:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib
matplotlib.use('Agg')
matplotlib.rcParams['text.latex.preamble'] = r'\usepackage{amsmath}'
matplotlib.rc('text', usetex = True)
from matplotlib import pyplot as py
## setup figure
figure = py.figure(figsize = (7.5, 5.0))
axs = [py.subplot(1, 1, 1)]
## make plot
xs = np.linspace(0.0, np.pi, 100)
ys = np.sin(xs)
axs[0].plot(xs, ys, color = 'dodgerblue', label = r'$n = 1$')
ys = np.sin(2.0 * xs)
axs[0].plot(xs, ys, color = 'seagreen', label = r'$n = 2$')
axs[0].axhline(0.0, color = 'gray', linestyle = 'dashed')
## percentage y axis
formatter = matplotlib.ticker.PercentFormatter(xmax = 1.0, decimals = 0, symbol = r'\%', is_latex = True)
axs[0].yaxis.set_major_formatter(formatter)
## save figure
name = 'test.pdf'
py.tight_layout()
py.savefig(name)
py.close()
</code></pre>
<p>As shown below, the font on vertical axis is different from that of the horizontal,
how do I set it to be the same as that of the horizontal axis?
Thanks!</p>
<p><a href="https://i.sstatic.net/nRmsU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nRmsU.png" alt="percent-formatted-axis" /></a></p>
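<p>One possible workaround sketch (an assumption, not the only route): replace <code>PercentFormatter</code> with a <code>FuncFormatter</code> that emits a single fully math-mode string per tick, so the usetex engine renders digits and percent sign in the same font:</p>

```python
from matplotlib import ticker

# Each tick becomes one math-mode string such as $50\,\%$,
# so the whole label is typeset by the LaTeX engine
formatter = ticker.FuncFormatter(lambda v, pos: rf'${v * 100:.0f}\,\%$')
# axs[0].yaxis.set_major_formatter(formatter)   # as in the MWE above
```

<p>With <code>usetex</code> enabled as in the MWE, the y tick labels then go through the same rendering path as the x tick labels.</p>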
|
<python><matplotlib><fonts>
|
2023-02-09 04:19:19
| 1
| 1,654
|
zyy
|
75,394,143
| 19,950,360
|
OpenAI API error: "No module named 'openai.embeddings_utils'; 'openai' is not a package"
|
<p>I want to use <code>from openai.embeddings_utils import get_embeddings</code>,
so I already installed openai:</p>
<pre><code>Name: openai
Version: 0.26.5
Summary: Python client library for the OpenAI API
Home-page: https://github.com/openai/openai-python
Author: OpenAI
Author-email: support@openai.com
License:
Location: /Users/lima/Desktop/Paprika/Openai/.venv/lib/python3.9/site-packages
Requires: aiohttp, requests, tqdm
Required-by:
</code></pre>
<p>This is my openai installation.
Why can't I use openai.embeddings_utils?</p>
|
<python><python-3.x><pip><openai-api><azure-openai>
|
2023-02-09 04:16:03
| 9
| 315
|
lima
|
75,394,098
| 572,575
|
Python BeautifulSoup cannot file text in webpage
|
<p>I am trying to read the text "Hello World" from the website <a href="https://www.w3schools.com/python/default.asp" rel="nofollow noreferrer">https://www.w3schools.com/python/default.asp</a> using BeautifulSoup with this code.</p>
<pre><code>from bs4 import BeautifulSoup
import requests
url = "https://www.w3schools.com/python/default.asp"
res = requests.get(url)
res.encoding = "utf-8"
soup = BeautifulSoup(res.text, 'html.parser')
print(soup.prettify())
</code></pre>
<p>I print the data from soup.prettify() and check it, but it contains no "Hello World" text. How can I read the text "Hello World" using BeautifulSoup?</p>
|
<python><beautifulsoup>
|
2023-02-09 04:06:35
| 1
| 1,049
|
user572575
|
75,394,057
| 2,525,940
|
QTimer: change interval while running, without restarting the interval
|
<p>As mentioned here <a href="https://stackoverflow.com/a/64301362/2525940">https://stackoverflow.com/a/64301362/2525940</a> calling <code>setInterval</code> on a <code>QTimer</code> restarts the interval.</p>
<p>I need a timer that takes account of the already elapsed time.</p>
<p>For example:
A QTimer is set to trigger every 5 minutes.<br />
2 minutes after the previous trigger the operator (via the gui) sets the interval to 4 minutes.<br />
The next trigger will happen 6 (2 + 4) minutes after the previous trigger. I need it to happen 4 minutes after the previous trigger, and then every 4 minutes.<br />
I can stop the multishot timer, trigger a singleshot for (new_interval - old_interval + remainingTime), and then restart the multishot with the new_interval but this seems cumbersome.</p>
<p>Is there a better way?</p>
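<p>For what it's worth, the stop / singleshot / restart bridge boils down to one delay computation; a sketch of that helper (the name is made up), where a zero result means the new interval has already elapsed and the timer should fire immediately:</p>

```python
def next_delay_ms(elapsed_ms, new_interval_ms):
    # time left until the next trigger under the new interval,
    # measured from the previous trigger; clamp at zero
    return max(0, new_interval_ms - elapsed_ms)

# 2 minutes elapsed, interval changed to 4 minutes -> fire in 2 minutes
print(next_delay_ms(2 * 60_000, 4 * 60_000))
```

<p>The singleshot would be armed with this delay, and the regular multishot timer restarted with the new interval from its callback.</p>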
<p>Minimum code to show timer restarts when interval changed</p>
<pre class="lang-py prettyprint-override"><code>import sys
import time
from PyQt5 import QtWidgets
from PyQt5 import QtCore
class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        self.interval_seconds = 5
        self.start_time = time.time()
        self.last_trigger = self.start_time

        self.timer = QtCore.QTimer()
        self.timer.setInterval(self.interval_seconds * 1000)
        self.timer.timeout.connect(self.trigger)
        self.timer.start()

        # Getting data from the first timer using another timer
        self.timer2 = QtCore.QTimer()
        self.timer2.setInterval(1000)
        self.timer2.timeout.connect(self.update_status)
        self.timer2.start()

        # GUI
        layout = QtWidgets.QVBoxLayout()
        self.interval = QtWidgets.QSpinBox()
        self.interval.setValue(self.interval_seconds)
        self.set_btn = QtWidgets.QPushButton("Set interval")
        self.set_btn.clicked.connect(self.set_interval)
        self.text1 = QtWidgets.QPlainTextEdit()
        self.text2 = QtWidgets.QLabel()
        self.text3 = QtWidgets.QLabel()
        layout.addWidget(QtWidgets.QLabel("Interval (s)"))
        layout.addWidget(self.interval)
        layout.addWidget(self.set_btn)
        layout.addWidget(self.text1)
        layout.addWidget(self.text2)
        layout.addWidget(self.text3)
        widget = QtWidgets.QWidget()
        widget.setLayout(layout)
        self.setCentralWidget(widget)
        self.show()

    def trigger(self):
        self.last_trigger = time.time()
        self.text1.appendPlainText(
            f"TRIGGER @ {(self.last_trigger - self.start_time):.1f}"
        )

    def update_status(self):
        time_gone = time.time() - self.last_trigger
        time_left = (self.timer.remainingTime() // 1000) + 1
        self.text2.setText(f"Elapsed time (s) {time_gone:.1f} ")
        self.text3.setText(f"Next trigger in ... (s) {time_left:.1f} ")

    def set_interval(self):
        self.interval_seconds = self.interval.value()
        self.timer.setInterval(self.interval_seconds * 1000)


if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    mw = MainWindow()
    sys.exit(app.exec())
</code></pre>
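<p>One way to avoid the built-in restart is to compute, in plain arithmetic, how long the first shot after an interval change should be, then fire a single shot for that delay and restart the repeating timer from its handler. The timing math itself is QTimer-independent and can be sketched on its own; the PyQt wiring in the comments is an assumption about how it would hook into <code>set_interval</code>:</p>
<pre><code>def adjusted_first_delay(elapsed_ms: int, new_interval_ms: int) -> int:
    """Delay until the next trigger after the interval changes mid-cycle.

    If the new interval has already elapsed since the last trigger, fire
    immediately (0); otherwise fire after the remainder of the new interval.
    """
    return max(0, new_interval_ms - elapsed_ms)

# Assumed wiring inside set_interval (sketch, not from the question's code):
#   elapsed_ms = int((time.time() - self.last_trigger) * 1000)
#   self.timer.stop()
#   QtCore.QTimer.singleShot(
#       adjusted_first_delay(elapsed_ms, new_interval_ms), self._fire_and_restart)
# where _fire_and_restart calls self.trigger() and self.timer.start(new_interval_ms).

# The question's scenario: 2 minutes elapsed, interval changed to 4 minutes ->
# next trigger 2 minutes from now, i.e. 4 minutes after the previous trigger.
print(adjusted_first_delay(120_000, 240_000))  # → 120000
</code></pre>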
|
<python><pyqt5>
|
2023-02-09 03:59:02
| 2
| 499
|
elfnor
|
75,394,033
| 3,713,236
|
Pipeline & ColumnTransformer: ValueError: Selected columns are not unique in dataframe
|
<p>Background: I am trying to learn from <a href="https://www.kaggle.com/code/hernnvignolo/house-prices-art-without-data-leakage#5.-Machine-Learning-model-for-predictions" rel="nofollow noreferrer">a notebook</a> used in <a href="https://www.kaggle.com/c/house-prices-advanced-regression-techniques" rel="nofollow noreferrer">Kaggle House Price Prediction Dataset</a>.</p>
<p>I am trying to use a <code>Pipeline</code> to transform numerical and categorical columns in a dataframe. It has issues with my categorical variables' names, a list stored in the variable <code>categ_cols_names</code>. It says that those categorical columns are not unique in the dataframe, and I'm not sure what that means.</p>
<pre><code>categ_cols_names = ['MSZoning','Street','LotShape','LandContour','Utilities','LotConfig','LandSlope','Neighborhood','Condition1','Condition2','BldgType','HouseStyle','OverallQual','OverallCond','YearBuilt','YearRemodAdd','RoofStyle','RoofMatl','Exterior1st','Exterior2nd','MasVnrType','ExterQual','ExterCond','Foundation','BsmtQual','BsmtCond','BsmtExposure','BsmtFinType1','BsmtFinType2','Heating','HeatingQC','CentralAir','Electrical','BsmtFullBath','BsmtHalfBath','FullBath','HalfBath','BedroomAbvGr','KitchenAbvGr','KitchenQual','Functional','Fireplaces','GarageType','GarageYrBlt','GarageFinish','GarageCars','GarageQual','GarageCond','PavedDrive','MoSold','YrSold','SaleType','SaleCondition','OverallQual','GarageCars','FullBath','YearBuilt']
</code></pre>
<p>Below is my code:</p>
<pre><code># Get numerical columns names
num_cols_names = X_train.columns[X_train.dtypes != object].to_list()
# Numerical columns with missing values
num_nan_cols = X_train[num_cols_names].columns[X_train[num_cols_names].isna().sum() > 0]
# Assign np.nan type to NaN values in categorical features
# in order to ensure detectability in posterior methods
X_train[num_nan_cols] = X_train[num_nan_cols].fillna(value = np.nan, axis = 1)
# Define pipeline for imputation of the numerical features
num_pipeline = Pipeline(steps = [
('Simple Imputer', SimpleImputer(strategy = 'median')),
('Robust Scaler', RobustScaler()),
('Power Transformer', PowerTransformer())
]
)
# Get categorical columns names
categ_cols_names = X_train.columns[X_train.dtypes == object].to_list()
# Categorical columns with missing values
categ_nan_cols = X_train[categ_cols_names].columns[X_train[categ_cols_names].isna().sum() > 0]
# Assign np.nan type to NaN values in categorical features
# in order to ensure detectability in posterior methods
X_train[categ_nan_cols] = X_train[categ_nan_cols].fillna(value = np.nan, axis = 1)
# Define pipeline for imputation and encoding of the categorical features
categ_pipeline = Pipeline(steps = [
('Categorical Imputer', SimpleImputer(strategy = 'most_frequent')),
('One Hot Encoder', OneHotEncoder(drop = 'first'))
])
ct = ColumnTransformer([
('Categorical Pipeline', categ_pipeline, categ_cols_names),
('Numerical Pipeline', num_pipeline, num_cols_names)],
remainder = 'passthrough',
sparse_threshold = 0,
n_jobs = -1)
pipe = Pipeline(steps = [('Column Transformer', ct)])
pipe.fit_transform(X_train)
</code></pre>
<p>The <code>ValueError</code> occurs on the <code>.fit_transform()</code> line:
<a href="https://i.sstatic.net/qjxJd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qjxJd.png" alt="enter image description here" /></a></p>
<p>Here is a sample of my <code>X_train</code>:</p>
<pre><code>{'MSZoning': {0: 'RL', 1: 'RL', 2: 'RL', 3: 'RL', 4: 'RL'},
'Street': {0: 'Pave', 1: 'Pave', 2: 'Pave', 3: 'Pave', 4: 'Pave'},
'LotShape': {0: 'Reg', 1: 'Reg', 2: 'IR1', 3: 'IR1', 4: 'IR1'},
'LandContour': {0: 'Lvl', 1: 'Lvl', 2: 'Lvl', 3: 'Lvl', 4: 'Lvl'},
'Utilities': {0: 'AllPub',
1: 'AllPub',
2: 'AllPub',
3: 'AllPub',
4: 'AllPub'},
'LotConfig': {0: 'Inside', 1: 'FR2', 2: 'Inside', 3: 'Corner', 4: 'FR2'},
'LandSlope': {0: 'Gtl', 1: 'Gtl', 2: 'Gtl', 3: 'Gtl', 4: 'Gtl'},
'Neighborhood': {0: 'CollgCr',
1: 'Veenker',
2: 'CollgCr',
3: 'Crawfor',
4: 'NoRidge'},
'Condition1': {0: 'Norm', 1: 'Feedr', 2: 'Norm', 3: 'Norm', 4: 'Norm'},
'Condition2': {0: 'Norm', 1: 'Norm', 2: 'Norm', 3: 'Norm', 4: 'Norm'},
'BldgType': {0: '1Fam', 1: '1Fam', 2: '1Fam', 3: '1Fam', 4: '1Fam'},
'HouseStyle': {0: '2Story',
1: '1Story',
2: '2Story',
3: '2Story',
4: '2Story'},
'OverallQual': {0: '7', 1: '6', 2: '7', 3: '7', 4: '8'},
'OverallCond': {0: '5', 1: '8', 2: '5', 3: '5', 4: '5'},
'YearBuilt': {0: '2003', 1: '1976', 2: '2001', 3: '1915', 4: '2000'},
'YearRemodAdd': {0: '2003', 1: '1976', 2: '2002', 3: '1970', 4: '2000'},
'RoofStyle': {0: 'Gable', 1: 'Gable', 2: 'Gable', 3: 'Gable', 4: 'Gable'},
'RoofMatl': {0: 'CompShg',
1: 'CompShg',
2: 'CompShg',
3: 'CompShg',
4: 'CompShg'},
'Exterior1st': {0: 'VinylSd',
1: 'MetalSd',
2: 'VinylSd',
3: 'Wd Sdng',
4: 'VinylSd'},
'Exterior2nd': {0: 'VinylSd',
1: 'MetalSd',
2: 'VinylSd',
3: 'Wd Shng',
4: 'VinylSd'},
'MasVnrType': {0: 'BrkFace',
1: 'None',
2: 'BrkFace',
3: 'None',
4: 'BrkFace'},
'ExterQual': {0: 'Gd', 1: 'TA', 2: 'Gd', 3: 'TA', 4: 'Gd'},
'ExterCond': {0: 'TA', 1: 'TA', 2: 'TA', 3: 'TA', 4: 'TA'},
'Foundation': {0: 'PConc', 1: 'CBlock', 2: 'PConc', 3: 'BrkTil', 4: 'PConc'},
'BsmtQual': {0: 'Gd', 1: 'Gd', 2: 'Gd', 3: 'TA', 4: 'Gd'},
'BsmtCond': {0: 'TA', 1: 'TA', 2: 'TA', 3: 'Gd', 4: 'TA'},
'BsmtExposure': {0: 'No', 1: 'Gd', 2: 'Mn', 3: 'No', 4: 'Av'},
'BsmtFinType1': {0: 'GLQ', 1: 'ALQ', 2: 'GLQ', 3: 'ALQ', 4: 'GLQ'},
'BsmtFinType2': {0: 'Unf', 1: 'Unf', 2: 'Unf', 3: 'Unf', 4: 'Unf'},
'Heating': {0: 'GasA', 1: 'GasA', 2: 'GasA', 3: 'GasA', 4: 'GasA'},
'HeatingQC': {0: 'Ex', 1: 'Ex', 2: 'Ex', 3: 'Gd', 4: 'Ex'},
'CentralAir': {0: 'Y', 1: 'Y', 2: 'Y', 3: 'Y', 4: 'Y'},
'Electrical': {0: 'SBrkr', 1: 'SBrkr', 2: 'SBrkr', 3: 'SBrkr', 4: 'SBrkr'},
'BsmtFullBath': {0: '1', 1: '0', 2: '1', 3: '1', 4: '1'},
'BsmtHalfBath': {0: '0', 1: '1', 2: '0', 3: '0', 4: '0'},
'FullBath': {0: '2', 1: '2', 2: '2', 3: '1', 4: '2'},
'HalfBath': {0: '1', 1: '0', 2: '1', 3: '0', 4: '1'},
'BedroomAbvGr': {0: '3', 1: '3', 2: '3', 3: '3', 4: '4'},
'KitchenAbvGr': {0: '1', 1: '1', 2: '1', 3: '1', 4: '1'},
'KitchenQual': {0: 'Gd', 1: 'TA', 2: 'Gd', 3: 'Gd', 4: 'Gd'},
'Functional': {0: 'Typ', 1: 'Typ', 2: 'Typ', 3: 'Typ', 4: 'Typ'},
'Fireplaces': {0: '0', 1: '1', 2: '1', 3: '1', 4: '1'},
'GarageType': {0: 'Attchd',
1: 'Attchd',
2: 'Attchd',
3: 'Detchd',
4: 'Attchd'},
'GarageYrBlt': {0: '2003.0',
1: '1976.0',
2: '2001.0',
3: '1998.0',
4: '2000.0'},
'GarageFinish': {0: 'RFn', 1: 'RFn', 2: 'RFn', 3: 'Unf', 4: 'RFn'},
'GarageCars': {0: '2', 1: '2', 2: '2', 3: '3', 4: '3'},
'GarageQual': {0: 'TA', 1: 'TA', 2: 'TA', 3: 'TA', 4: 'TA'},
'GarageCond': {0: 'TA', 1: 'TA', 2: 'TA', 3: 'TA', 4: 'TA'},
'PavedDrive': {0: 'Y', 1: 'Y', 2: 'Y', 3: 'Y', 4: 'Y'},
'MoSold': {0: '2', 1: '5', 2: '9', 3: '2', 4: '12'},
'YrSold': {0: '2008', 1: '2007', 2: '2008', 3: '2006', 4: '2008'},
'SaleType': {0: 'WD', 1: 'WD', 2: 'WD', 3: 'WD', 4: 'WD'},
'SaleCondition': {0: 'Normal',
1: 'Normal',
2: 'Normal',
3: 'Abnorml',
4: 'Normal'},
'GrLivArea': {0: 1710, 1: 1262, 2: 1786, 3: 1717, 4: 2198},
'GarageArea': {0: 548, 1: 460, 2: 608, 3: 642, 4: 836},
'TotalBsmtSF': {0: 856, 1: 1262, 2: 920, 3: 756, 4: 1145},
'1stFlrSF': {0: 856, 1: 1262, 2: 920, 3: 961, 4: 1145},
'TotRmsAbvGrd': {0: 8, 1: 6, 2: 6, 3: 7, 4: 9}}
</code></pre>
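<p><code>ColumnTransformer</code> raises "Selected columns are not unique" when the same column name appears more than once in a transformer's selection list, or when the dataframe itself carries duplicate column labels. The hardcoded <code>categ_cols_names</code> above repeats <code>OverallQual</code>, <code>GarageCars</code>, <code>FullBath</code> and <code>YearBuilt</code>. A minimal order-preserving de-duplication sketch, assuming the duplicated list is the culprit:</p>
<pre><code># illustrative subset of the question's list, with the repeated names
categ_cols_names = ['MSZoning', 'OverallQual', 'GarageCars', 'FullBath',
                    'YearBuilt', 'OverallQual', 'GarageCars', 'FullBath', 'YearBuilt']

# dict keys are unique and keep first-seen order (Python 3.7+)
deduped = list(dict.fromkeys(categ_cols_names))
print(deduped)  # → ['MSZoning', 'OverallQual', 'GarageCars', 'FullBath', 'YearBuilt']
</code></pre>
<p>If the lists in effect are already unique, checking <code>X_train.columns.duplicated()</code> would reveal duplicate labels in the dataframe itself.</p>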
|
<python><dataframe><pipeline><transformation><feature-engineering>
|
2023-02-09 03:55:43
| 1
| 9,075
|
Katsu
|
75,393,965
| 134,044
|
AWS Lambda Python __pycache__ bytecode and local imports without layers
|
<p>When creating an AWS Lambda using Python:</p>
<ol>
<li>Can the Lambda access local imports if the modules are included in the Lambda handler zip; and</li>
<li>What are the implications of including the <code>__pycache__</code> directories in the zip?</li>
</ol>
<h3>Question 1: Can the runtime access local imports?</h3>
<p>The AWS documentation focuses on the Lambda handler Python file containing the handler function itself. This obviously must be included in the deployment zip. But we don't want one big function, or even several functions or classes in one big file.</p>
<p>If using the usual Python approach of creating sub-directories, containing modules and packages, included in the directory containing the handler itself, and included in the zip that is uploaded to the AWS Lambda handler, will that code be accessible at run-time, and therefore be importable by the Lambda handler?</p>
<p>I'm not referring to the AWS Lambda support for "layers", which is normally used for providing access to packages that are installed in a virtual environment with <code>pip</code> etc. I (think I) understand that support and am not asking about that.</p>
<p>I specifically just want to clarify: can the Lambda handler import from <em>local</em> files, for instance, an adjacent <code>definitions.py</code> being referenced by <code>from definitions import *</code> (please no judgements about don't import star :-) as long as it's also in the zip?</p>
<h3>Question 2: Is it good practice to include the <code>__pycache__</code> directories?</h3>
<p>In the <a href="https://docs.aws.amazon.com/lambda/latest/dg/python-package.html" rel="nofollow noreferrer">AWS Python Lambda deployment documentation</a> the output of a zip command shows the packages included with the Lambda handler Python file, also including a <code>__pycache__</code> directory. Additional libraries are also shown but it seems intended that these are collected from the layers.</p>
<p>The AWS documentation shows the inclusion of <code>__pycache__</code> but no mention is made at all.</p>
<p>I believe AWS Lambda run-times are certain specific versions of Python running on AWS Linux images. I'm currently forced to develop on Windows :-(. Will this mismatch cause issues for the run-time? Would other considerations come into play such as ensuring included bytecode is the same version of Python as the runtime?</p>
<p>Doesn't Python expect bytecode for particular packages in a particular relative location? Presumably the top-level (in the zip) <code>__pycache__</code> directory should only contain handler routine bytecode?</p>
<p>Does the Lambda runtime even use the <code>__pycache__</code> directories? If this is workable and working, given the Lambda may only run once before being destroyed, does that imply that developers should put effort in to providing the bytecode in the Lambda zip to improve Lambda performance? In this case, is it necessary to run sufficient tests across the code before zipping it for Lambda, to ensure all the bytecode is generated?</p>
<h3>Context</h3>
<p>I have reviewed various articles on creating an AWS Lambda zip using Python, including the <a href="https://docs.aws.amazon.com/lambda/latest/dg/python-handler.html" rel="nofollow noreferrer">AWS</a> <a href="https://docs.aws.amazon.com/lambda/latest/dg/python-package.html" rel="nofollow noreferrer">documentation</a>, but the content is shallow and simplistic, failing to clarify the precise "reach" of the runtime. It's not even in the <a href="https://ran-isenberg.github.io/aws-lambda-handler-cookbook/" rel="nofollow noreferrer">AWS Lambda Handler Cookbook</a>.</p>
<p>I do not (yet) have access to a live AWS environment to test this out, and given this broad omission in the on-line documentation and community commentaries (most articles just parrot the AWS documentation anyway, albeit with worse grammar), I thought it would be good to get a clarification on SO.</p>
|
<python><amazon-web-services><aws-lambda><bytecode>
|
2023-02-09 03:40:26
| 0
| 4,109
|
NeilG
|
75,393,948
| 109,554
|
How can I create Quilt data packages with R instead of Python?
|
<p>Quilt has a <a href="https://pypi.org/project/quilt3/" rel="nofollow noreferrer">Python library</a> (<code>quilt3</code>) but I use <a href="https://www.r-project.org/" rel="nofollow noreferrer">R</a> for all my scientific workflows. Is there an R interface that I can use to take advantage of Quilt's data version control and data lineage functionality for my datasets?</p>
|
<python><r><quiltdata>
|
2023-02-09 03:38:12
| 1
| 3,176
|
tatlar
|
75,393,928
| 1,245,262
|
How can pandas' concat function duplicate the behavior of the append function?
|
<p>I've just inherited some code that uses pandas' <code>append</code> method. This code causes Pandas to issue the following warning:</p>
<pre><code>The frame.append method is deprecated and will be removed from pandas
in a future version. Use pandas.concat instead.
</code></pre>
<p>So, I want to use <code>pandas.concat</code>, without changing the behavior the <code>append</code> method gave. However, I can't.</p>
<p>Below I've recreated code that illustrates my problem. It creates an empty DataFrame with 31 columns and shape (0,31). When a new, empty row is appended to this DataFrame, the result has shape (1,31). In the code below, I've tried several ways to use <code>concat</code> and get the same behavior as <code>append</code>.</p>
<pre><code>import pandas as pd
# Create Empty Dataframe With Column Headings
obs = pd.DataFrame(columns=['basedatetime_before', 'lat_before', 'lon_before',
'sog_before',
'cog_before',
'heading_before',
'vesselname_before', 'imo_before',
'callsign_before',
'vesseltype_before', 'status_before',
'length_before', 'width_before',
'draft_before',
'cargo_before',
'basedatetime_after', 'lat_after',
'lon_after',
'sog_after',
'cog_after', 'heading_after',
'vesselname_after', 'imo_after',
'callsign_after',
'vesseltype_after', 'status_after',
'length_after', 'width_after',
'draft_after',
'cargo_after'])
# Put initial values in DataFrame
desired = pd.Timestamp('2016-03-20 00:05:00+0000', tz='UTC')
obs['point'] = desired
obs['basedatetime_before'] = pd.to_datetime(obs['basedatetime_before'])
obs['basedatetime_after'] = pd.to_datetime(obs['basedatetime_after'])
obs.rename(lambda s: s.lower(), axis = 1, inplace = True)
# Create new 'dummy' row
new_obs = pd.Series([desired], index=['point'])
# Get initial Shape Information
print("Orig obs.shape", obs.shape)
print("New_obs.shape", new_obs.shape)
print("--------------------------------------")
# Append new dummy row to Data Frame
obs1 = obs.append(new_obs, ignore_index=True)
# Attempt to duplicate effect of append with concat
obs2 = pd.concat([obs, new_obs])
obs3 = pd.concat([obs, new_obs], ignore_index=True)
obs4 = pd.concat([obs, new_obs.T])
obs5 = pd.concat([obs, new_obs.T], ignore_index=True)
obs6 = pd.concat([new_obs, obs])
obs7 = pd.concat([new_obs, obs], ignore_index=True)
obs8 = pd.concat([new_obs.T, obs])
obs9 = pd.concat([new_obs.T, obs], ignore_index=True)
# Verify original DataFrame hasn't changed and append still works
obs10 = obs.append(new_obs, ignore_index=True)
# Print results
print("----> obs1.shape",obs1.shape)
print("obs2.shape",obs2.shape)
print("obs3.shape",obs3.shape)
print("obs4.shape",obs4.shape)
print("obs5.shape",obs5.shape)
print("obs6.shape",obs6.shape)
print("obs7.shape",obs7.shape)
print("obs8.shape",obs8.shape)
print("obs9.shape",obs9.shape)
print("----> obs10.shape",obs10.shape)
</code></pre>
<p>However, every way I've tried to use <code>concat</code> to add a new row to the DataFrame results in a new DataFrame with shape (1,32). This can be seen in the results shown below:</p>
<pre><code> Orig obs.shape (0, 31)
New_obs.shape (1,)
--------------------------------------
----> obs1.shape (1, 31)
obs2.shape (1, 32)
obs3.shape (1, 32)
obs4.shape (1, 32)
obs5.shape (1, 32)
obs6.shape (1, 32)
obs7.shape (1, 32)
obs8.shape (1, 32)
obs9.shape (1, 32)
----> obs10.shape (1, 31)
</code></pre>
<p>How can I use <code>concat</code> to add <code>new_obs</code> to the <code>obs</code> DataFrame and get a DataFrame with shape (1, 31) instead of (1, 32)?</p>
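<p>The extra column comes from concatenating a <code>Series</code> with a <code>DataFrame</code>: <code>concat</code> treats the Series as a new <em>column</em> (named <code>0</code>), which adds the 32nd column. Converting the Series to a one-row DataFrame first reproduces <code>append</code>'s row-wise behavior. A minimal sketch with a smaller frame:</p>
<pre><code>import pandas as pd

obs = pd.DataFrame(columns=['a', 'b', 'c'])                    # shape (0, 3)
new_obs = pd.Series([pd.Timestamp('2016-03-20')], index=['a'])

# to_frame().T turns the Series into a single row keyed by its index labels
obs1 = pd.concat([obs, new_obs.to_frame().T], ignore_index=True)
print(obs1.shape)  # → (1, 3)
</code></pre>
<p>Applied to the question's code, that is <code>pd.concat([obs, new_obs.to_frame().T], ignore_index=True)</code>, which should give shape (1, 31).</p>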
|
<python><pandas><dataframe><append><concatenation>
|
2023-02-09 03:33:59
| 2
| 7,555
|
user1245262
|
75,393,832
| 4,285,029
|
Load Large Excel Files in Databricks using PySpark from an ADLS mount
|
<p>We are trying to load a largish Excel file from a mounted Azure Data Lake location using PySpark on Databricks.</p>
<p>We have tried both pyspark.pandas and spark-excel to load it, without much success.</p>
<h4>PySpark.Pandas</h4>
<pre><code>import pyspark.pandas as ps
df = ps.read_excel("dbfs:/mnt/aadata/ds/data/test.xlsx",engine="openpyxl")
</code></pre>
<p>We get the following conversion error:</p>
<pre><code>ArrowTypeError: Expected bytes, got a 'int' object
</code></pre>
<h4>spark-excel</h4>
<pre><code>df=spark.read.format("com.crealytics.spark.excel") \
.option("header", "true") \
.option("inferSchema","false") \
.load('dbfs:/mnt/aadata/ds/data/test.xlsx')
</code></pre>
<p>We are able to load a smaller file, but a larger file gives the following error</p>
<pre><code>org.apache.poi.util.RecordFormatException: Tried to allocate an array of length 185,568,653, but the maximum length for this record type is 100,000,000.
</code></pre>
<p>Is there any other way to load excel files in databricks with pyspark?</p>
|
<python><apache-spark><pyspark><databricks><azure-databricks>
|
2023-02-09 03:17:24
| 1
| 1,184
|
Lambo
|
75,393,827
| 1,029,902
|
Both selenium and bs4 cannot find div in page
|
<p>I am trying to scrape a Craigslist results page and neither bs4 nor selenium can find the elements in the page, even though I can see them on inspection using dev tools. The results are in list items with class <code>cl-search-result</code>, but it seems the soup returned has none of the results.</p>
<p>This is my script so far. It looks like even the soup that is returned is not the same as the html I see when I inspect with dev tools. I am expecting this script to return 42 items, which is the number of search results.</p>
<p>Here is the script:</p>
<pre><code>import time
import datetime
from collections import namedtuple
import selenium.webdriver as webdriver
from selenium.webdriver.firefox.service import Service
from selenium.webdriver.support.ui import Select
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import ElementNotInteractableException
from bs4 import BeautifulSoup
import pandas as pd
import os
user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/109.0'
firefox_driver_path = os.path.join(os.getcwd(), 'geckodriver.exe')
firefox_service = Service(firefox_driver_path)
firefox_option = Options()
firefox_option.set_preference('general.useragent.override', user_agent)
browser = webdriver.Firefox(service=firefox_service, options=firefox_option)
browser.implicitly_wait(7)
url = 'https://baltimore.craigslist.org/search/sss#search=1~list~0~0'
browser.get(url)
soup = BeautifulSoup(browser.page_source, 'html.parser')
print(soup)
posts_html= soup.find_all('li', {'class': 'cl-search-result'})
print('Collected {0} listings'.format(len(posts_html)))
</code></pre>
|
<python><selenium><selenium-webdriver><web-scraping><beautifulsoup>
|
2023-02-09 03:16:27
| 1
| 557
|
Tendekai Muchenje
|
75,393,773
| 5,709,240
|
How to remove duplicate strings after grouping?
|
<p>I would like to group the following Pandas DataFrame by <code>ID</code> column:</p>
<pre><code>
|----+----------------------------------------+-----------------|
| ID | Elements | Colors |
|----+----------------------------------------+-----------------|
| A | '1st element, 2d element, 3d element' | 'red, blue' |
| A | '2d element, 4th element' | 'blue, green' |
| B | '3d element, 5th element, 6th element' | 'white, purple' |
| B | '3d element, 5th element, 7th element' | 'white, teal' |
| B | '3d element, 5th element, 8th element' | 'white, black' |
|----+----------------------------------------+-----------------|
</code></pre>
<p>In order to obtain the following Pandas DataFrame:</p>
<pre><code>|----+-----------------------------------------------------------------+------------------------------|
| ID | Elements | Colors |
|----+-----------------------------------------------------------------+------------------------------|
| A | '1st element, 2d element, 3d element, 4th element' | 'red, blue, green' |
| B | '3d element, 5h element, 6th element, 7th element, 8th element' | 'white, purple, teal, black' |
|----+-----------------------------------------------------------------+------------------------------|
</code></pre>
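<p>One approach (a sketch assuming the cells are plain comma-separated strings): split each cell, collect the tokens in first-seen order with <code>dict.fromkeys</code>, and aggregate per <code>ID</code>:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    'ID': ['A', 'A', 'B'],
    'Elements': ['1st element, 2d element, 3d element', '2d element, 4th element',
                 '3d element, 5th element, 6th element'],
    'Colors': ['red, blue', 'blue, green', 'white, purple'],
})

def join_unique(cells):
    # split every cell on commas, strip whitespace, keep first-seen order
    tokens = dict.fromkeys(t.strip() for cell in cells for t in cell.split(','))
    return ', '.join(tokens)

out = df.groupby('ID', as_index=False).agg({'Elements': join_unique, 'Colors': join_unique})
print(out.loc[0, 'Elements'])  # → 1st element, 2d element, 3d element, 4th element
</code></pre>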
|
<python><pandas>
|
2023-02-09 03:03:29
| 1
| 933
|
crocefisso
|
75,393,766
| 15,174,775
|
How to get the current script filename or path in Python when instantiating a class?
|
<pre class="lang-py prettyprint-override"><code># utils.py
class Foo:
    def __init__(self):
        print(__file__)

# mod.py
from utils import Foo

foo = Foo()
# This prints /absolute/utils.py
# the expected output is /absolute/mod.py
</code></pre>
<p>Is it possible to make the imported class <code>Foo</code> initialize with the current file's info instead of the file where it is defined, without passing it as a parameter?</p>
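<p>Not with <code>__file__</code> alone, since that is resolved in the module where it appears. A hedged sketch that inspects the call stack at instantiation time to record the caller's file (<code>created_in</code> is an illustrative attribute name, and <code>inspect.stack()</code> does carry some overhead per call):</p>
<pre><code>import inspect

class Foo:
    def __init__(self):
        # stack()[0] is this __init__ frame; stack()[1] is whoever called Foo(...)
        self.created_in = inspect.stack()[1].filename
</code></pre>
<p>Instantiating <code>Foo</code> from <code>mod.py</code> then records <code>/absolute/mod.py</code>. A cheaper equivalent is <code>inspect.currentframe().f_back.f_code.co_filename</code>.</p>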
|
<python><python-3.x>
|
2023-02-09 03:01:15
| 1
| 638
|
HALF9000
|
75,393,757
| 6,567,212
|
How to escape dot (.) in a key to get JSON value with redis-py?
|
<p>I had some JSON data containing dots (.) in keys, which was written to Redis using redis-py like this:</p>
<pre class="lang-py prettyprint-override"><code>r = redis.Redis()
r.json().set(_id, "$", {'First.Last': "John.Smith"})
</code></pre>
<p>It works when reading the whole JSON document:</p>
<pre class="lang-py prettyprint-override"><code>r.json().get(_id)
</code></pre>
<p>but an error is thrown when directly getting the value with a path containing the dot:</p>
<pre><code>r.json().get(_id, "First\.Last") # ResponseError
</code></pre>
<p>Obviously, the dot is not correctly escaped. I also tried quoting it like <code>"'First.Last'"</code>, but that doesn't work either. What's the correct way to deal with this special character in a key? The Redis instance is a redis-stack-server Docker container running on Linux. Thank you!</p>
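<p>RedisJSON's JSONPath syntax handles keys with special characters via bracket notation rather than backslash escapes, so the path should look like <code>$["First.Last"]</code>. A small helper that builds such paths (the commented <code>r.json().get</code> call assumes a running Redis Stack instance, as in the question):</p>
<pre><code>def jsonpath_for_key(key: str) -> str:
    """Bracket-notation JSONPath for a top-level key containing special characters."""
    escaped = key.replace('\\', '\\\\').replace('"', '\\"')
    return f'$["{escaped}"]'

# assumed usage against the question's setup:
#   r.json().get(_id, jsonpath_for_key("First.Last"))
print(jsonpath_for_key("First.Last"))  # → $["First.Last"]
</code></pre>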
|
<python><json><redis>
|
2023-02-09 02:59:30
| 1
| 329
|
ningl
|
75,393,747
| 5,218,497
|
SignatureDoesNotMatch while uploading file from React.js using boto3 generate_presigned_url
|
<p>Currently the presigned URL is generated from a Python Lambda function, and testing the file upload with it in Postman works perfectly.</p>
<p>When uploading file from <strong>React.js</strong> using axios it fails with 403 status code and below error.</p>
<p><strong>Code</strong>: SignatureDoesNotMatch</p>
<p><strong>Message</strong>: The request signature we calculated does not match the signature you provided. Check your key and signing method</p>
<p><strong>Python Code</strong></p>
<pre><code>import boto3
s3_client = boto3.client('s3')
params = {
'Bucket': 'bucket_name',
'Key': 'unique_id.pdf',
'ContentType': "application/pdf"
}
s3_response = s3_client.generate_presigned_url(ClientMethod='put_object', Params=params, ExpiresIn=300)
</code></pre>
<p><strong>React Code</strong></p>
<pre><code>const readFileDataAsBuffer = (file) =>
  new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = (event) => {
      resolve(event.target.result);
    };
    reader.onerror = (err) => {
      reject(err);
    };
    reader.readAsArrayBuffer(file);
  });

const onFileUploadChange = async (e) => {
  const file = e.target.files[0];
  const tempdata = await readFileDataAsBuffer(file);
  return axios({
    method: 'put',
    url: presigned_url_link,
    data: tempdata
  })
    .then(() => {})
    .catch(() => {});
};
</code></pre>
|
<python><reactjs><amazon-s3><boto3>
|
2023-02-09 02:58:02
| 1
| 2,428
|
Sharath
|
75,393,740
| 8,076,158
|
How to subclass a frozen dataclass
|
<p>I have inherited the <code>Customer</code> dataclass. This identifies a customer in the customer DB table.</p>
<p><code>Customer</code> is used to produce summary statistics for transactions pertaining to a given customer. It is hashable, hence frozen.</p>
<p>I require a <code>SpecialCustomer</code> (a subclass of <code>Customer</code>) it has an extra property: <code>special_property</code>. Most of the properties inherited from <code>Customer</code> will be set to fixed values. This customer does not exist in the Customer table.</p>
<p>I wish to utilise code which has been written for <code>Customer</code>. Without <code>special_property</code> we will not be able to distinguish between special customers.</p>
<p>How do I instantiate <code>SpecialCustomer</code>?</p>
<p>Here is what I have. I know why this doesn't work. Is there some way to do this?:</p>
<pre><code>from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Customer:
    property: str

@dataclass(frozen=True, order=True)
class SpecialCustomer(Customer):
    special_property: str = Field(init=False)

    def __init__(self, special_property):
        super().__init__(property="dummy_value")
        self.special_property = special_property

s = SpecialCustomer(special_property="foo")
</code></pre>
<p>Error:</p>
<pre><code>E dataclasses.FrozenInstanceError: cannot assign to field 'special_property'
<string>:4: FrozenInstanceError
</code></pre>
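<p>A frozen dataclass blocks ordinary attribute assignment even inside a hand-written <code>__init__</code>. Two standard workarounds: give the fixed inherited fields defaults in the subclass so the <em>generated</em> <code>__init__</code> does the assignment, or assign via <code>object.__setattr__</code>. A sketch of the defaults approach, which keeps <code>frozen</code> and hashability intact:</p>
<pre><code>from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Customer:
    property: str

@dataclass(frozen=True, order=True)
class SpecialCustomer(Customer):
    # pin the inherited field to its fixed value; only special_property varies
    property: str = "dummy_value"
    special_property: str = ""

s = SpecialCustomer(special_property="foo")
# s.property == "dummy_value"; s is hashable and usable wherever Customer is
</code></pre>
<p>If a hand-written <code>__init__</code> is preferred instead, replacing <code>self.special_property = ...</code> with <code>object.__setattr__(self, "special_property", ...)</code> sidesteps <code>FrozenInstanceError</code>; that is how frozen dataclasses assign fields internally.</p>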
|
<python><pydantic><python-dataclasses>
|
2023-02-09 02:56:30
| 1
| 1,063
|
GlaceCelery
|
75,393,692
| 19,425,874
|
How to run a Python Script from a Google Sheet
|
<p>I have a Python script that leverages lists in a Google Sheet and sends bulk SMS text messages using Twilio.</p>
<p>I'm fairly new to this and have struggled to get this far - any Python script I've created in the past, I've been able to just run off my local computer in VS Code.</p>
<p>I am trying to share this with a family member - I've read into tkinter and gui's a bit, but because the rest of this workflow is already in a Google Sheet, it would be perfect if any user could just run the Python script right from the spreadsheet itself.</p>
<p>I found this, but I don't really understand how to create a webservice in GAE. I've googled it all over but struggling to put into action -- <a href="https://stackoverflow.com/questions/22926456/trigger-python-code-from-google-spreadsheets">Trigger python code from Google spreadsheets?</a></p>
<p>Is there a simple way to just tie my Python script into this spreadsheet so that anyone can run it? Or another way to go about this?</p>
<p>ChatGPT response says this, but I feel it is inaccurate (or I just can't get it working):</p>
<p><a href="https://i.sstatic.net/dk1pU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dk1pU.png" alt="ChatGPT Response" /></a></p>
<p>My code is here, if it helps:</p>
<pre><code>from twilio.rest import Client
import gspread
# Your Account SID and Auth Token from twilio.com/console
account_sid = 'AC868ea4e1a779ff0816b466a13f201b02'
auth_token = '85469ae1eb492ffc814c095b5c6e0889'
client = Client(account_sid, auth_token)
gc = gspread.service_account(filename='creds.json')
# Open a spreadsheet by ID
sh = gc.open_by_key('1KRYITQ_O_-7exPZp8zj1VvAUPPutqtO4SrTgloCx8x4')
# Get the sheets
wk = sh.worksheet("Numbers to Send")
# E.G. the URLs are listed on Sheet 1 on Column A
numbers = wk.batch_get(('f3:f',))[0]
names = wk.batch_get(('g3:g',))[0]
# names = ['John', 'Jane', 'Jim']
# numbers = ['+16099725052', '+16099725052', '+16099725052']
# Loop through the names and numbers and send a text message to each phone number
for i in range(len(names)):
    message = client.messages.create(
        to=numbers[i],
        from_='+18442251378',
        body=f"Hello {names[i][0]}, this is a test message from Twilio.")
    print(f"Message sent to {names[i]} at {numbers[i]}")
</code></pre>
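<p>A common pattern here (a hedged sketch, with names assumed): expose the script as a small web service, then attach an Apps Script function to a button/drawing in the Sheet that calls it with <code>UrlFetchApp.fetch(url, {method: 'post'})</code>. A stdlib-only endpoint the existing script could be wrapped in:</p>
<pre><code>from http.server import BaseHTTPRequestHandler, HTTPServer

class SmsTrigger(BaseHTTPRequestHandler):
    """Minimal endpoint a Google Sheet button can call via Apps Script."""

    def do_POST(self):
        # run_sms_job()  # hypothetical: place the question's Twilio/gspread loop here
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# to run it for real, host this somewhere reachable (Cloud Run, App Engine, a VM):
#   HTTPServer(("0.0.0.0", 8080), SmsTrigger).serve_forever()
</code></pre>
<p>Note that the Twilio SID and auth token pasted in the question should be rotated and loaded from environment variables rather than hardcoded, especially once the script is reachable over the web.</p>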
|
<python><google-apps-script><gspread>
|
2023-02-09 02:48:02
| 0
| 393
|
Anthony Madle
|
75,393,656
| 10,051,099
|
Does Django save model objects other than `save()` or `create()` methods?
|
<p>I'm writing something like the following:</p>
<pre class="lang-py prettyprint-override"><code>class Foo(models.Model):
    a = models.CharField()

def f(foo: Foo) -> Foo:
    y = Foo(
        **{field.name: getattr(foo, field.name) for field in foo._meta.get_fields()}
    )  # copy foo with pk
    y.a = "c"
    return y
</code></pre>
<p>My concern is whether <code>y</code> could be saved to the DB before the user calls the <code>save()</code> method. Could that occur?</p>
|
<python><django>
|
2023-02-09 02:37:40
| 2
| 3,695
|
tamuhey
|
75,393,602
| 13,324,349
|
How to pass references of declared variables to C function in python using ctypes
|
<p>I would like to call a C function from python. This C function is void, thus the "return parameters" (data I want to change) are defined as pointers in the C function's definition.</p>
<p>The function in C looks like this (note: I cannot and will not change it as it is generated automatically by <a href="https://www.mathworks.com/help/coder/ref/codegen.html" rel="nofollow noreferrer">MATLAB codegen</a>):</p>
<pre class="lang-c prettyprint-override"><code>void func(int *input_var, // input
float *output_var //output
) {
...
}
</code></pre>
<p>From Python, I am calling the function like so:</p>
<pre class="lang-py prettyprint-override"><code>import ctypes as C
func = C.CDLL('path/to/lib.so').func
input_var = C.c_int(5)
output_var = C.POINTER(C.c_float) # basically want to just declare a pointer here
func(C.byref(input_var), C.byref(output_var))
</code></pre>
<p>The error I get is</p>
<pre><code>TypeError: byref() argument must be a ctypes instance, not '_ctypes.PyCPointerType'
</code></pre>
<p>if I remove <code>byref()</code> I get</p>
<pre><code>ctypes.ArgumentError: argument 2: <class 'TypeError'>: Don't know how to convert parameter 2
</code></pre>
<p>I also tried to pass in output_var as <code>C.byref(output_var())</code>; this leads to a <code>Segmentation fault (core dumped)</code></p>
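<p>The root cause: <code>C.POINTER(C.c_float)</code> is a pointer <em>type</em>, not an instance. For an out-parameter you allocate a real <code>c_float</code> and pass <code>byref</code> to it; the callee writes through the pointer and the result appears in <code>.value</code>. A sketch, with <code>ctypes.memmove</code> standing in for the generated <code>func</code> (which needs the real <code>.so</code>):</p>
<pre><code>import ctypes as C

input_var = C.c_int(5)
output_var = C.c_float()   # an actual instance, initialized to 0.0

# real call, assuming the shared library from the question:
#   func = C.CDLL('path/to/lib.so').func
#   func(C.byref(input_var), C.byref(output_var))
#   result = output_var.value

# the same write-through-pointer pattern, using memmove as a stand-in callee
src = C.c_float(3.5)
C.memmove(C.addressof(output_var), C.addressof(src), C.sizeof(C.c_float))
print(output_var.value)  # → 3.5
</code></pre>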
|
<python><c><ctypes>
|
2023-02-09 02:24:54
| 2
| 588
|
Brian Barry
|
75,393,588
| 16,933,406
|
separate the abnormal reads of DNA (A,T,C,G) templates
|
<p>I have millions of DNA clone reads and a few of them are misreads or errors. I want to separate out only the clean reads.</p>
<p><strong>For non biological background:</strong></p>
<p>DNA clone consist of only four characters (A,T,C,G) in various permutation/combination. Any character, symbol or sign other that "A","T","C", and "G" in DNA is an error.
Is there any way (fast/high throughput) in <strong>python</strong> to separate the clean reads only.</p>
<p>Basically I want to find a way to identify strings that contain nothing but the "A", "T", "C", "G" alphabet characters.</p>
<p><strong>Edit</strong><br />
correct_read_clone: "ATCGGTTCATCGAATCCGGGACTACGTAGCA"</p>
<p>misread_clone: "ATCGG<strong>N</strong>ATCGACGTACGTACGTTTAAAGCAGG" or "ATCGGTT@CATCGAATCCGGGACTACGTAGCA" or "ATCGGTTCATCGAA*TCCGGGACTACGTAGCA" or "AT?CGGTTCATCGAATCCGGGACTACGTAGCA" etc</p>
<p>I have tried the below for loop</p>
<pre><code>check_list=['A','T','C','G']
for i in clone:
if i not in check_list:
continue
</code></pre>
<p>but the problem with this for loop is that it iterates over the string and matches characters one by one, which makes the process slow. To clean millions of clones, this delay is very significant.</p>
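<p>For reference, two commonly suggested alternatives that push the per-character work into C code rather than a Python-level loop (a sketch, with dummy reads standing in for the real data):</p>

```python
# Two common fast checks for "contains only A, T, C, G".

ALLOWED = frozenset("ATCG")

def is_clean_set(read: str) -> bool:
    # set(read) is built in C; the subset test then touches at most 4 elements
    return set(read) <= ALLOWED

# str.translate with a deletion table is another C-speed option:
_DELETE_ATCG = str.maketrans("", "", "ATCG")

def is_clean_translate(read: str) -> bool:
    # deleting every A/T/C/G leaves an empty string iff the read was clean
    return not read.translate(_DELETE_ATCG)

reads = [
    "ATCGGTTCATCGAATCCGGGACTACGTAGCA",    # clean
    "ATCGGNATCGACGTACGTACGTTTAAAGCAGG",   # contains N
    "ATCGGTT@CATCGAATCCGGGACTACGTAGCA",   # contains @
]
clean = [r for r in reads if is_clean_set(r)]
```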
|
<python><bioinformatics><python-re><biopython><dna-sequence>
|
2023-02-09 02:22:25
| 5
| 617
|
shivam
|
75,393,453
| 8,670,757
|
Pandas Reindex Multiindex Dataframe Replicating Index
|
<p>Thank you for taking a look! I am having issues with a 4-level MultiIndex and attempting to make sure every possible value of the 4th index level is represented.</p>
<p>Here is my dataframe:</p>
<pre><code>
np.random.seed(5)
size = 25
dict = {'Customer':np.random.choice( ['Bob'], size),
'Grouping': np.random.choice( ['Corn','Wheat','Soy'], size),
'Date':np.random.choice( pd.date_range('1/1/2018','12/12/2022', freq='D'), size),
'Data': np.random.randint(20,100, size=(size))
}
df = pd.DataFrame(dict)
# create the Sub-Group column
df['Sub-Group'] = np.nan
df.loc[df['Grouping'] == 'Corn', 'Sub-Group'] = np.random.choice(['White', 'Dry'], size=len(df[df['Grouping'] == 'Corn']))
df.loc[df['Grouping'] == 'Wheat', 'Sub-Group'] = np.random.choice(['SRW', 'HRW', 'SWW'], size=len(df[df['Grouping'] == 'Wheat']))
df.loc[df['Grouping'] == 'Soy', 'Sub-Group'] = np.random.choice(['Beans', 'Meal'], size=len(df[df['Grouping'] == 'Soy']))
df['Year'] = df.Date.dt.year
</code></pre>
<p>With that, I'm looking to create a groupby like the following:</p>
<pre><code>(df.groupby(['Customer','Grouping','Sub-Group',df['Date'].dt.month,'Year'])
.agg(Units = ('Data','sum'))
.unstack()
)
</code></pre>
<p><a href="https://i.sstatic.net/tYPHA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tYPHA.png" alt="DataFrame" /></a></p>
<p>This works as expected. I want to reindex this dataframe so that every single month (index 3) is represented & filled with 0s. The reason I want this is later on I'll be doing a cumulative sum of a groupby.</p>
<p>I have tried both the following reindex & nothing happens - many months are still missing.</p>
<pre><code>rere = pd.date_range('2018-01-01','2018-12-31', freq='M').month
(df.groupby(['Customer','Grouping','Sub-Group',df['Date'].dt.month,'Year'])
.agg(Units = ('Data','sum'))
.unstack()
.fillna(0)
.pipe(lambda x: x.reindex(rere, level=3, fill_value=0))
)
</code></pre>
<p>I've also tried the following:</p>
<pre><code>(df.groupby(['Customer','Grouping','Sub-Group',df['Date'].dt.month,'Year'])
.agg(Units = ('Data','sum'))
.unstack()
.fillna(0)
.pipe(lambda x: x.reindex(pd.MultiIndex.from_product(x.index.levels)))
)
</code></pre>
<p>The issue with the last one is that the index is much too long - it's doing the cartesian product of Grouping & Sub-Group when really there are no combinations of 'Wheat' as a Grouping & 'Dry' as 'Sub-Group'.</p>
<p>I'm looking for a flexible way to reindex this dataframe to make sure a specific index level (3rd in this case) has every option.</p>
<p>Thanks so much for any help!</p>
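<p>One approach that completes a single level without the full cartesian product: take the <em>observed</em> combinations of the other levels and cross only those with the full month range. A sketch on a small stand-in frame (the column and level names here are illustrative, not the exact ones from the post):</p>

```python
import pandas as pd

# Small stand-in for the grouped frame: (grouping, subgroup, month) index.
df = pd.DataFrame({
    "grouping": ["Corn", "Corn", "Wheat"],
    "subgroup": ["White", "White", "SRW"],
    "month":    [1, 3, 2],
    "units":    [10, 20, 30],
}).set_index(["grouping", "subgroup", "month"])

MONTH_LEVEL = 2                      # position of the level to complete
full_months = range(1, 13)

# Observed combinations of every level *except* month -- no Wheat/White here.
others = df.index.droplevel(MONTH_LEVEL).unique()

# Cross each observed combination with all 12 months.
full_index = pd.MultiIndex.from_tuples(
    [(g, s, m) for (g, s) in others for m in full_months],
    names=df.index.names,
)
out = df.reindex(full_index, fill_value=0)
```

This keeps only combinations that actually occur in the data (24 rows here, not 12 x all groupings x all subgroups), which avoids the invalid Wheat/Dry-style pairs.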
|
<python><pandas><dataframe><numpy>
|
2023-02-09 01:53:25
| 1
| 341
|
keg5038
|
75,393,424
| 9,981,147
|
Calculate a number to the power of a large integer
|
<p>I'm trying to help someone calculate the value of 4^3e9.</p>
<p>The problem is that most software doesn't support numbers this large. Is there an alternative way to calculate this?</p>
<p>My first attempt is to divide the number as it is being calculated, looping from 1 to 3e9. If the intermediate result is > 10, I divide it by its power of 10 and add that power to a running total. In the end I will have a floating point number and the power of 10.</p>
<pre><code>import math
powerof10 = 0
powerof = int(3e9)
# print('powerof', powerof)
initial_value = 4
float_value = initial_value
for i in range(1,powerof): #start from 1 to get correct number of operations
float_value *= initial_value
# print('float value', float_value)
print(i)
if (float_value > 10):
powerof10increment = math.floor(math.log10(float_value))
# print('powerof10', powerof10increment)
powerof10 += powerof10increment
float_value /= 10**powerof10increment
# print('reduced float value', float_value)
print(float_value, ' x 10^', powerof10)
</code></pre>
<p>This based on this question here: <a href="https://stackoverflow.com/questions/75348758/i-want-to-know-what-is-the-value-of-4-to-the-power-of-3000000000-3e9">I want to know what is the value of 4 to the power of 3000000000 (3e+9)</a></p>
<p>According to the question the number should be in the format 1 x 10^x, so I think only x is required.</p>
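<p>Since only the exponent x in "m x 10^x" is needed, the loop can be replaced entirely by one logarithm: 4^(3e9) = 10^(3e9 * log10(4)). A sketch:</p>

```python
import math

# 4**3e9 = 10**(3e9 * log10(4)); compute the base-10 exponent directly.
exponent = 3_000_000_000 * math.log10(4)

power_of_10 = math.floor(exponent)            # the x in "m x 10^x"
mantissa = 10 ** (exponent - power_of_10)     # the m in "m x 10^x", in [1, 10)
```

This is O(1) instead of three billion multiplications, and the mantissa is as accurate as float arithmetic allows.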
|
<python><math>
|
2023-02-09 01:48:49
| 1
| 481
|
BrendanOtherwhyz
|
75,393,420
| 19,425,874
|
How to switch python print results to string instead of object?
|
<p>I'm working on a tool that will send bulk SMS text messages. I am using the Twilio API - I have two lists in a Google Sheet, one that is phone numbers, one that is first names.</p>
<p>My script loops through the lists, assigns the name as a variable, then creates the messages.</p>
<p>When I reference the lists from the Google Sheet, I'm running into an error - I think based on how the info is printing.</p>
<p>Here's the overview:</p>
<p><strong>WORKING SCRIPT NOT USING GOOGLE SHEETS LIST - THIS PRINTS PERFECTLY:</strong></p>
<pre><code>from twilio.rest import Client
import gspread
# Your Account SID and Auth Token from twilio.com/console
account_sid = '#################'
auth_token = '#################'
client = Client(account_sid, auth_token)
gc = gspread.service_account(filename='creds.json')
# Open a spreadsheet by ID
sh = gc.open_by_key('1KRYITQ_O_-7exPZp8zj1VvAUPPutqtO4SrTgloCx8x4')
# Get the sheets
wk = sh.worksheet("Numbers to Send")
# E.G. the URLs are listed on Sheet 1 on Column A
# numbers = wk.batch_get(('E3:E',))[0]
# names = wk.batch_get(('D3:D',))[0]
names = ['John', 'Jane', 'Jim']
numbers = ['+PhoneNumber', '+PhoneNumber', '+PhoneNumber']
# Loop through the names and numbers and send a text message to each phone number
for i in range(len(names)):
message = client.messages.create(
to=numbers[i],
from_='+18442251378',
body=f"Hello {names[i]}, this is a test message from Twilio.")
print(f"Message sent to {names[i]} at {numbers[i]}")
</code></pre>
<p>However, when I try to get the lists from the Google Sheet, the message prints like this:</p>
<p>"Hello ['Zak'], this is a message from Twilio"... whereas the first method (not using Sheets) does not place the brackets around the name. It sends a clean string.</p>
<p>I tried wrapping the names variable in <code>str()</code>, but it didn't work. Any ideas how I could tweak this into printing a clean message?</p>
<pre><code>from twilio.rest import Client
import gspread
# Your Account SID and Auth Token from twilio.com/console
account_sid = '#####################'
auth_token = '######################'
client = Client(account_sid, auth_token)
gc = gspread.service_account(filename='creds.json')
# Open a spreadsheet by ID
sh = gc.open_by_key('1KRYITQ_O_-7exPZp8zj1VvAUPPutqtO4SrTgloCx8x4')
# Get the sheets
wk = sh.worksheet("Numbers to Send")
# E.G. the URLs are listed on Sheet 1 on Column A
numbers = wk.batch_get(('E3:E',))[0]
names = wk.batch_get(('D3:D',))[0]
# names = ['John', 'Jane', 'Jim']
# numbers = ['+16099725052', '+16099725052', '+16099725052']
# Loop through the names and numbers and send a text message to each phone number
for i in range(len(names)):
message = client.messages.create(
to=numbers[i],
from_='+18442251378',
body=f"Hello {names[i]}, this is a test message from Twilio.")
print(f"Message sent to {names[i]} at {numbers[i]}")
</code></pre>
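<p>For reference, <code>batch_get()</code> returns a list of rows, and each row is itself a list of cell values, so <code>names[i]</code> is <code>['Zak']</code> rather than <code>'Zak'</code>. Flattening the rows first gives plain strings. A sketch with dummy data standing in for the <code>wk.batch_get(...)</code> results:</p>

```python
# What batch_get returns: one inner list per spreadsheet row.
raw_names = [['John'], ['Jane'], ['Jim']]
raw_numbers = [['+15551230001'], ['+15551230002'], ['+15551230003']]

# Take the first cell of each non-empty row to get flat lists of strings.
names = [row[0] for row in raw_names if row]
numbers = [row[0] for row in raw_numbers if row]

message = f"Hello {names[0]}, this is a test message from Twilio."
```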
|
<python><string><twilio><gspread>
|
2023-02-09 01:48:16
| 0
| 393
|
Anthony Madle
|
75,393,385
| 10,461,632
|
How to efficiently calculate folder and file sizes in a directory?
|
<p>How can I efficiently calculate the size of every subfolder and file in a given directory?</p>
<p>The code I have so far does what I want, but it is inefficient and slow because of how I have to calculate the parent folder size.</p>
<p>Here's my current timing:</p>
<pre><code>Section 1: 0.53 s
Section 2: 30.71 s
</code></pre>
<p><strong>Code:</strong></p>
<pre><code>import os
import time
import collections
def folder_size(directory):
parents = []
file_size = collections.defaultdict(int)
parent_size = collections.defaultdict(int)
t0 = time.time()
#### Section 1 ####
for root, dirs, files in os.walk(directory):
root = os.path.abspath(root)
parents.append(root)
for f in files:
f = os.path.join(root, f)
file_size[f] += os.path.getsize(f)
###################
t1 = time.time()
print(f'walk time: {round(t1-t0, 2)}')
#### Section 2 ####
for parent in parents:
parent_split = parent.split(os.sep)
for filename, value in file_size.items():
parent_for_file = filename.split(os.sep)[:len(parent_split)]
if parent_split == parent_for_file:
parent_size[parent] += value
###################
t2 = time.time()
print(f'parent size time: {round(t2-t1, 2)}')
return file_size, parent_size
</code></pre>
<p>Section 2 of the code is inefficient for a couple reasons:</p>
<p><strong>Inefficiency #1</strong></p>
<p>I need to capture folders where there are no files. For example, in a folder structure like this:</p>
<pre><code>TopFolder
├── FolderA
│ ├── folder_P1
│ │ ├── folder_P1__file_1.txt
│ │ └── folder_P1__file_2.txt
│ ├── folder_P10
│ │ ├── folder_P10__file_1.txt
│ │ └── folder_P10__file_2.txt
.
.
.
</code></pre>
<p>I want to end up with a size (in bytes) for each directory, like this:</p>
<pre><code>'..../TopFolder': 114000,
'..../TopFolder/FolderA': 38000,
'..../TopFolder/FolderA/folder_P1': 38,
'..../TopFolder/FolderA/folder_P10': 38,
.
.
.
</code></pre>
<p>In order to get the total size for folders that have subfolders, like <code>TopFolder</code> and <code>FolderA</code>, I stored the parents separately, so I could go back and calculate their size based on the file sizes.</p>
<p><strong>Inefficiency #2</strong></p>
<p>The code is really slow because I have to <code>split()</code> the strings to determine the parent (confirmed with the <code>cProfile</code> module). I have to do this because if I do something like the snippet below, certain folder sizes will be calculated incorrectly. I also tried using <code>re.split()</code>, but that's even slower.</p>
<pre><code>#### Section 2 ####
...
for parent in parents:
for filename, value in file_size.items():
if parent in filename:
parent_size[parent] += value
...
###################
</code></pre>
<p>Here's the wrong output with <code>if parent in filename</code>:</p>
<pre><code>'..../TopFolder': 114000,
'..../TopFolder/FolderA': 38000,
'..../TopFolder/FolderA/folder_P1': 4256,
'..../TopFolder/FolderA/folder_P10': 456,
'..../TopFolder/FolderA/folder_P100': 76,
'..../TopFolder/FolderA/folder_P1000': 38,
.
.
.
</code></pre>
<p>Here's the correct output with the original code:</p>
<pre><code>'..../TopFolder': 114000,
'..../TopFolder/FolderA': 38000,
'..../TopFolder/FolderA/folder_P1': 38,
'..../TopFolder/FolderA/folder_P10': 38,
'..../TopFolder/FolderA/folder_P100': 38,
'..../TopFolder/FolderA/folder_P1000': 38,
.
.
.
</code></pre>
<p>Section 2 either needs to be improved so it runs faster, or Section 2 needs to be incorporated into Section 1. I've searched the internet for ideas, but have only been able to find info on calculating the top level directory size and am running out of ideas.</p>
<p>Here's the code I used to create a sample directory structure:</p>
<pre><code>import os
folder = 'TopFolder'
subfolders = ['FolderA', 'FolderB', 'FolderC']
for i in range(1000):
for subfolder in subfolders:
path = os.path.join(folder, subfolder, f'folder_P{i + 1}')
if not os.path.isdir(path):
os.makedirs(path)
for k in range(2):
with open(os.path.join(path, f'folder_P{i + 1}__file_{k + 1}.txt'), 'w') as file_out:
                file_out.write(f'Hello from file {k + 1}!\n')
</code></pre>
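<p>One way to fold Section 2 into Section 1: walk the tree bottom-up (<code>topdown=False</code>) and roll each finished directory's total into its parent, so no string-splitting pass is needed and empty folders are still recorded. A sketch:</p>

```python
import os
import collections
import tempfile

def folder_sizes(directory):
    """Single-pass alternative: bottom-up walk rolls each directory's
    total into its parent, so children are always finished first."""
    directory = os.path.abspath(directory)
    dir_size = collections.defaultdict(int)
    file_size = {}
    for root, dirs, files in os.walk(directory, topdown=False):
        root = os.path.abspath(root)
        dir_size[root] += 0                      # ensure empty folders appear
        for name in files:
            path = os.path.join(root, name)
            size = os.path.getsize(path)
            file_size[path] = size
            dir_size[root] += size
        # bottom-up order guarantees this subtree's total is complete here
        if root != directory:
            dir_size[os.path.dirname(root)] += dir_size[root]
    return file_size, dict(dir_size)

# Tiny demonstration on a throwaway tree.
demo = tempfile.mkdtemp()
os.makedirs(os.path.join(demo, "a", "b"))        # "b" stays empty
with open(os.path.join(demo, "a", "f1.txt"), "w") as fh:
    fh.write("hello")                            # 5 bytes
files, dirs = folder_sizes(demo)
```

Each directory is visited once and each size is added at most depth-of-tree times, instead of comparing every file path against every parent.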
|
<python><python-3.x>
|
2023-02-09 01:41:03
| 1
| 788
|
Simon1
|
75,393,358
| 11,462,274
|
Intercalate pandas dataframe columns when they are in pairs
|
<p>The desired result is this:</p>
<pre class="lang-none prettyprint-override"><code>id name
1 A
2 B
3 C
4 D
5 E
6 F
7 G
8 H
</code></pre>
<p>Currently I do it this way:</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({'home_id': ['1', '3', '5', '7'],
'home_name': ['A', 'C', 'E', 'G'],
'away_id': ['2', '4', '6', '8'],
'away_name': ['B', 'D', 'F', 'H']})
id_col = pd.concat([df['home_id'], df['away_id']])
name_col = pd.concat([df['home_name'], df['away_name']])
result = pd.DataFrame({'id': id_col, 'name': name_col})
result = result.sort_index().reset_index(drop=True)
print(result)
</code></pre>
<p>But this approach relies on the index to interleave the rows, which can produce errors in cases where there are duplicate index values.</p>
<p>How can I intercalate the column values always being:</p>
<p>Use the home of the 1st line, then the away of the 1st line, then the home of the 2nd line, then the away of the 2nd line and so on...</p>
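<p>For reference, one way to guarantee home-then-away order per row without touching the index is to build the output rows explicitly. A sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({'home_id': ['1', '3', '5', '7'],
                   'home_name': ['A', 'C', 'E', 'G'],
                   'away_id': ['2', '4', '6', '8'],
                   'away_name': ['B', 'D', 'F', 'H']})

# Emit two output rows per input row, home first, so the result never
# depends on index values (safe even with duplicate indexes).
rows = []
for home_id, home_name, away_id, away_name in df.itertuples(index=False):
    rows.append({'id': home_id, 'name': home_name})
    rows.append({'id': away_id, 'name': away_name})
result = pd.DataFrame(rows)
```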
|
<python><pandas><dataframe>
|
2023-02-09 01:35:04
| 3
| 2,222
|
Digital Farmer
|
75,393,016
| 2,304,905
|
Is there a way to "dynamically" annotate the return type of a function in Python?
|
<p>Imagine I have the following class structure with three class families: <code>Parent</code>, <code>Child1</code>, and <code>Child2</code> each with <code>Abstract</code>, <code>Explicit</code> and <code>Implicit</code> flavors.</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
import abc
###################################
# Parent classes
###################################
class AbstractParent(abc.ABC):
@property
@abc.abstractmethod
def type_explicit(self) -> type[AbstractExplicitParent]:
pass
class AbstractExplicitParent(AbstractParent):
pass
class AbstractImplicitParent(AbstractParent):
pass
###################################
# Child 1 classes
###################################
class AbstractChild1(AbstractParent):
@property
def type_explicit(self) -> type[ExplicitChild1]:
return ExplicitChild1
class ExplicitChild1(AbstractExplicitParent, AbstractChild1):
pass
class ImplicitChild1(AbstractImplicitParent, AbstractChild1):
pass
###################################
# Child 2 classes
###################################
class AbstractChild2(AbstractParent):
@property
def type_explicit(self) -> type[ExplicitChild2]:
return ExplicitChild2
class ExplicitChild2(AbstractExplicitParent, AbstractChild2):
pass
class ImplicitChild2(AbstractImplicitParent, AbstractChild1):
pass
</code></pre>
<p>The details of these classes aren't important except for the fact that I've defined an abstract property called <code>type_explicit</code> in <code>AbstractParent</code> with concrete implementations in <code>AbstractChild1</code> and <code>AbstractChild2</code>. The obvious purpose of this property is to return the explicit flavor of the current class, and you can see that <code>AbstractChild1.type_explicit</code> returns <code>ExplicitChild1</code> and similarly for <code>AbstractChild2.type_explicit</code>.</p>
<p>Now imagine I want to define a function that can accept any subclass of <code>AbstractParent</code> but always returns the explicit flavor of that subclass. Obviously I can write</p>
<pre><code>def some_function(a: AbstractParent) -> AbstractExplicitParent:
...
</code></pre>
<p>but that doesn't capture all the type information, for example: <code>(ImplicitChild1,) -> ExplicitChild1</code>.</p>
<p>Is there a way to dynamically compute the return type, say by defining a (meta)class called <code>Explicit</code> that can evaluate the <code>type_explicit</code> property, such that I can write</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar("T", bound=AbstractParent)
def some_function(a: T) -> Explicit[T]:
...
</code></pre>
<p>even if I need to change how <code>type_explicit</code> works, say by changing it to a class variable/method?</p>
<p>The only reason I think this could work is because of things like <code>typing.Type</code> which are sort of dynamic since I can write things like</p>
<pre><code>def get_type(a: T) -> Type[T]:
return type(a)
</code></pre>
|
<python><generics><python-typing><mypy>
|
2023-02-09 00:22:31
| 0
| 714
|
Roy Smart
|
75,392,950
| 10,426,490
|
How does python version affect Azure Functions?
|
<p>I'm <em>developing</em> Azure Functions using Python <strong>3.10.10</strong> on my machine, deploying the Function through Azure DevOps which is <em>building</em> the artifact using Python <strong>3.6.8</strong>, and the <code>Python Version</code> shown for the Function App <em>host</em> is <strong>3.8</strong>.</p>
<p>There was a recent update of Azure Functions Runtime which deprecated Python 3.6. (see <a href="https://learn.microsoft.com/en-us/azure/azure-functions/migrate-version-3-version-4?tabs=net6-in-proc%2Cazure-cli%2Cwindows&pivots=programming-language-python#breaking-changes-between-3x-and-4x" rel="nofollow noreferrer">breaking changes here</a>).</p>
<p>How does python version affect Azure Functions? How do we keep the versions aligned?</p>
|
<python><azure-functions><version>
|
2023-02-09 00:06:44
| 1
| 2,046
|
ericOnline
|
75,392,836
| 2,904,824
|
Accepting an agent forward in a Paramiko Server
|
<p>Ok, so you implement <a href="https://docs.paramiko.org/en/stable/api/server.html#paramiko.server.ServerInterface.check_channel_forward_agent_request" rel="nofollow noreferrer"><code>paramiko.ServerInterface.check_channel_forward_agent_request</code></a> to accept or deny agent forwarding initiated by the client.</p>
<p>But neither <a href="https://docs.paramiko.org/en/stable/" rel="nofollow noreferrer">the docs</a> nor <a href="https://github.com/paramiko/paramiko/" rel="nofollow noreferrer">the code</a> nor <a href="https://github.com/openssh/openssh-portable/" rel="nofollow noreferrer">the OpenSSH code</a> give any indication what you do next.</p>
<ul>
<li>Does the agent consume the channel or is it sideband? is it even associated with a specific channel?</li>
<li>Is there an in-process way to call to the agent?</li>
<li>How do I set up local socket for other programs to use the forwarded agent?</li>
<li>How do I connect the agent forward to another paramiko transport for chain forwarding?</li>
</ul>
|
<python><ssh><paramiko>
|
2023-02-08 23:46:09
| 1
| 667
|
AstraLuma
|
75,392,825
| 11,187,883
|
VS Code connecting to jupyterlab server not selecting kernel
|
<p>We have recently moved to a JupyterLab Server from another IDE. We are trying to get VS Code hooked up so that we can code in it instead. After much struggle, we got VS Code to connect to our remote JupyterLab server. The status bar at the bottom shows
<a href="https://i.sstatic.net/aB568.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aB568.png" alt="enter image description here" /></a></p>
<p>However, as soon as we connect to the JupyterLab server, all the 'run' buttons on screen disappear.</p>
<p><a href="https://i.sstatic.net/gRR14.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gRR14.png" alt="enter image description here" /></a></p>
<p>We are getting no support from our IT and have to figure it out ourselves.
A colleague suspects that VS Code is not picking up the Python kernel from the server. How do we go about selecting it, or pointing to it?</p>
<p>An additional question, how do we see and browse the folders on the JupyterLab server in VS Code?</p>
<p>Appreciate any assistance</p>
|
<python><visual-studio-code>
|
2023-02-08 23:44:12
| 1
| 769
|
GenDemo
|
75,392,769
| 289,784
|
How to use Apache Arrow IPC from multiple processes (possibly from different languages)?
|
<p>I'm not sure where to begin, so looking for some guidance. I'm looking for a way to create some arrays/tables in one process, and have it accessible (read-only) from another.</p>
<p>So I create a <code>pyarrow.Table</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>a1 = pa.array(list(range(3)))
a2 = pa.array(["foo", "bar", "baz"])
a1
# <pyarrow.lib.Int64Array object at 0x7fd7c4510100>
# [
# 0,
# 1,
# 2
# ]
a2
# <pyarrow.lib.StringArray object at 0x7fd7c5d6fa00>
# [
# "foo",
# "bar",
# "baz"
# ]
tbl = pa.Table.from_arrays([a1, a2], names=["num", "name"])
tbl
# pyarrow.Table
# num: int64
# name: string
# ----
# num: [[0,1,2]]
# name: [["foo","bar","baz"]]
</code></pre>
<p>Now how do I read this from a different process? I thought I would use <a href="https://docs.python.org/3/library/multiprocessing.shared_memory.html" rel="noreferrer"><code>multiprocessing.shared_memory.SharedMemory</code></a>, but that didn't quite work:</p>
<pre class="lang-py prettyprint-override"><code>shm = shared_memory.SharedMemory(name='pa_test', create=True, size=tbl.nbytes)
with pa.ipc.new_stream(shm.buf, tbl.schema) as out:
for batch in tbl.to_batches():
out.write(batch)
# TypeError: Unable to read from object of type: <class 'memoryview'>
</code></pre>
<p>Do I need to wrap the <code>shm.buf</code> with something?</p>
<p>Even if I get this to work, it seems very fiddly. How would I do this in a robust manner? Do I need something like zmq?</p>
<p>I'm not clear how this is zero copy though. When I write the record batches, isn't that serialisation? What am I missing?</p>
<p>In my real use case, I also want to talk to Julia, but maybe that should be a separate question when I come to it.</p>
<p>PS: I have gone through the <a href="https://arrow.apache.org/docs/python/ipc.html" rel="noreferrer">docs</a>, it didn't clarify this part for me.</p>
|
<python><ipc><pyarrow><apache-arrow>
|
2023-02-08 23:34:50
| 1
| 4,704
|
suvayu
|
75,392,766
| 2,059,078
|
How do I render a table in a specific Layout() using the python rich Live() class
|
<p>Here is the code to create the layout</p>
<pre><code>import time
from rich.live import Live
from rich.table import Table
from rich.layout import Layout
layout=Layout()
layout.split_row(
Layout(name="left"),
Layout(name="right"), )
print(layout)
</code></pre>
<p>I would like to display the below table in the right column, but I can't figure out how.</p>
<pre><code>table = Table()
table.add_column("Row ID")
table.add_column("Description")
table.add_column("Level")
with Live(table, refresh_per_second=4): # update 4 times a second to feel fluid
for row in range(12):
time.sleep(0.4) # arbitrary delay
# update the renderable internally
table.add_row(f"{row}", f"description {row}", "[red]ERROR")
</code></pre>
|
<python><rich>
|
2023-02-08 23:34:26
| 1
| 1,295
|
MiniMe
|
75,392,711
| 11,963,167
|
Call function and assign result to same named input in python
|
<p>It's probably a dumb Python question, anyway: let's say I have a function that takes an object as a parameter and returns a new one, leaving the input object untouched.
I want to apply such a function to multiple objects, x1, ..., xn, and I want the outputs bound to the same names as the inputs, i.e. x1, ..., xn.</p>
<p>Basically, it would be <code>x1=f(x1),...,xn=f(xn)</code>. But I don't want to write out n separate calls to f.
Is there a Pythonic way to do it?</p>
<p>Edit:
I would like to know if a pack-apply-unpack pattern exists, such as
<code>x1,...,xn = something_that_applies_f(x1,...,xn)</code></p>
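<p>For reference, such a pack-apply-unpack pattern does exist: <code>map</code> applies the function to each name, and tuple unpacking rebinds the same names to the results. A sketch:</p>

```python
def f(x):
    # stand-in for the real function; returns a new object
    return x + 1

x1, x2, x3 = 10, 20, 30

# pack-apply-unpack in one line
x1, x2, x3 = map(f, (x1, x2, x3))
```

This is exactly <code>x1=f(x1), x2=f(x2), x3=f(x3)</code> without repeating the call, though the names must still be listed once on each side.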
|
<python>
|
2023-02-08 23:26:47
| 0
| 496
|
Clej
|
75,392,654
| 15,239,717
|
Django Filter using Q Objects on two Models
|
<p>I am working on a Django application where I want to search two models, Profile (surname or othernames fields) and Account (account_number field), using Q objects.
With my current code, it searches only one model (Account), and any attempt to search a value from the other model (Profile) triggers an error which says: <strong>Field 'id' expected a number but got 'Ahile Terdoo'.</strong>
See my model code:</p>
<pre><code>class Profile(models.Model):
customer = models.OneToOneField(User, on_delete=models.CASCADE, null = True)
surname = models.CharField(max_length=20, null=True)
othernames = models.CharField(max_length=40, null=True)
gender = models.CharField(max_length=6, choices=GENDER, blank=True, null=True)
address = models.CharField(max_length=200, null=True)
phone = models.CharField(max_length=11, null=True)
image = models.ImageField(default='avatar.jpg', blank=False, null=False, upload_to ='profile_images',
)
def __str__(self):
return f'{self.customer.username}-Profile'
class Account(models.Model):
customer = models.OneToOneField(User, on_delete=models.CASCADE, null=True)
account_number = models.CharField(max_length=10, null=True)
date = models.DateTimeField(auto_now_add=True, null=True)
def __str__(self):
return f' {self.customer} - Account No: {self.account_number}'
</code></pre>
<p>Here is my form:</p>
<pre><code>class SearchCustomerForm(forms.Form):
value = forms.CharField(label = 'Enter Name or Acct. Number', max_length=30)
</code></pre>
<p>Here is my views</p>
<pre><code>def create_account(request):
if searchForm.is_valid():
#Value of search form
value = searchForm.cleaned_data['value']
#Filter Customer by Surname, Othernames , Account Number using Q Objects
user_filter = Q(customer__profile__exact = value) | Q(account_number__exact = value)
#Apply the Customer Object Filter
list_customers = Account.objects.filter(user_filter)
else:
list_customers = Account.objects.order_by('-date')[:8]
</code></pre>
<p>Someone should help on how best to search using Q objects on two Models.</p>
|
<python><django>
|
2023-02-08 23:17:53
| 2
| 323
|
apollos
|
75,392,643
| 19,916,174
|
Use iterator in function in function parameter
|
<p>I was trying to create some code that can solve problems with sums.
For example, if you want to take the sum of 4*i for all values of i from 3 to 109, I want this code to be able to do that. However, it should also be able to deal with more complicated things than just multiplication. See a sample of what I want to do below.</p>
<pre><code>from typing import Callable
class MyClass:
def __init__(self):
pass
def function_sum(self, lower_bound: int, upper_bound: int, function: Callable, *args):
return sum((function(*args) for i in range(lower_bound, upper_bound+1)))
print(MyClass().function_sum(1, 10, lambda x: x*i, 1))
</code></pre>
<p>Is there a way to use the iterable i which is used inside the function, as a part of the function in the parameter, without forcing i to be a parameter?</p>
<pre><code>from typing import Callable
class MyClass:
def __init__(self):
pass
def function_sum(self, lower_bound: int, upper_bound: int, func: Callable, *args):
# i is forced to be a parameter in function
return sum((func(*args, i) for i in range(lower_bound, upper_bound+1)))
print(MyClass().function_sum(1, 10, lambda x: x*i, 1))
</code></pre>
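<p>For reference, one common way around this is to make the summand a one-argument function of the index, so any extra values are captured by the lambda's closure at the call site instead of being threaded through as parameters. A sketch:</p>

```python
from typing import Callable

def function_sum(lower_bound: int, upper_bound: int,
                 term: Callable[[int], float]) -> float:
    # term is a function of the summation index only; anything else it
    # needs is closed over where the lambda is written
    return sum(term(i) for i in range(lower_bound, upper_bound + 1))

# sum of 4*i for i in 3..109 -- the factor 4 is captured, not passed in
total = function_sum(3, 109, lambda i: 4 * i)
```

The lambda in the original code can never see the loop's <code>i</code> because names resolve in the scope where the lambda was <em>defined</em>; passing <code>i</code> in as the lambda's own argument (as here) is the idiomatic fix.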
|
<python>
|
2023-02-08 23:16:43
| 2
| 344
|
Jason Grace
|
75,392,639
| 2,801,404
|
Why are all PyTorch dataloader proccesses in S state (interruptible sleep) except one?
|
<p><strong>Short version</strong>: I'm using PyTorch dataloading library to load data in parallel for training a deep learning model. When I look at the CPU usage with <code>htop</code>, I see a bunch of processes running the same python script but only one of them is in R state (<em>running or runnable</em>) using 150% CPU, whereas all the others are in S state (<em>interruptible sleep</em>), using around 8% CPU each.</p>
<p><strong>More details</strong>:</p>
<p>I'm training a VQA model with a bunch of auxiliary tasks. In terms of data, this means I'm loading several lists, numpy arrays, dictionaries and images. These are queried on demand by datasets via the <code>__getitem__</code> function, which in turn are used by the dataloader processes.</p>
<p>This is what <em>htop</em> looks like:</p>
<p><a href="https://i.sstatic.net/MSQJK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MSQJK.png" alt="enter image description here" /></a></p>
<p>This is what <em>nvtop</em> looks like:</p>
<p><a href="https://i.sstatic.net/IlU4H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IlU4H.png" alt="enter image description here" /></a></p>
<p>As you can see, only one process is in R state, everything else is in S state.</p>
<p>In this particular example, I'm using two dataloaders for training (concretely, <a href="https://github.com/PabloMessina/MedVQA/blob/69a095a7d11cc6de0774fab01f38486119395407/medvqa/datasets/vqa.py#L296" rel="nofollow noreferrer">this one</a> and <a href="https://github.com/PabloMessina/MedVQA/blob/69a095a7d11cc6de0774fab01f38486119395407/medvqa/datasets/vqa.py#L879" rel="nofollow noreferrer">this one</a>), and I'm merging them <a href="https://github.com/PabloMessina/MedVQA/blob/69a095a7d11cc6de0774fab01f38486119395407/medvqa/train_vqa.py#L754" rel="nofollow noreferrer">here</a> using <a href="https://github.com/PabloMessina/MedVQA/blob/69a095a7d11cc6de0774fab01f38486119395407/medvqa/datasets/dataloading_utils.py#L165" rel="nofollow noreferrer">this function</a>.</p>
<p>I also build a hierarchy of nested datasets <a href="https://github.com/PabloMessina/MedVQA/blob/69a095a7d11cc6de0774fab01f38486119395407/medvqa/datasets/vqa.py#L291" rel="nofollow noreferrer">here</a> using <a href="https://github.com/PabloMessina/MedVQA/blob/69a095a7d11cc6de0774fab01f38486119395407/medvqa/datasets/dataloading_utils.py#L62" rel="nofollow noreferrer">these classes</a>, but the "leaf" atomic dataset class is defined <a href="https://github.com/PabloMessina/MedVQA/blob/69a095a7d11cc6de0774fab01f38486119395407/medvqa/datasets/vqa.py#L67" rel="nofollow noreferrer">here</a>. I don't know if having a hierarchy of nested datasets could be (part of) the problem.</p>
<p>The data itself is loaded into memory from disk and prepared <a href="https://github.com/PabloMessina/MedVQA/blob/69a095a7d11cc6de0774fab01f38486119395407/medvqa/datasets/mimiccxr/mimiccxr_vqa_dataset_management.py#L392" rel="nofollow noreferrer">here</a>, <a href="https://github.com/PabloMessina/MedVQA/blob/69a095a7d11cc6de0774fab01f38486119395407/medvqa/datasets/mimiccxr/mimiccxr_vqa_dataset_management.py#L609" rel="nofollow noreferrer">here</a>, <a href="https://github.com/PabloMessina/MedVQA/blob/69a095a7d11cc6de0774fab01f38486119395407/medvqa/datasets/mimiccxr/mimiccxr_vqa_dataset_management.py#L623" rel="nofollow noreferrer">here</a>, and <a href="https://github.com/PabloMessina/MedVQA/blob/69a095a7d11cc6de0774fab01f38486119395407/medvqa/datasets/mimiccxr/mimiccxr_vqa_dataset_management.py#L665" rel="nofollow noreferrer">here</a>.</p>
<p>I'm profoundly ignorant as to how PyTorch implements multiprocess dataloading and how the data is shared and read across the different processes under the hood. Is there something I'm probably doing wrong that is causing all processes except one to be in <em>interruptible sleep</em> (S) state? How can I have all processes loading data at roughly 100% CPU usage (in R state) to make the most of Pytorch's parallel dataloading?</p>
<p><strong>Extra details</strong></p>
<p>For this example, I ran <a href="https://github.com/PabloMessina/MedVQA/blob/69a095a7d11cc6de0774fab01f38486119395407/medvqa/train_vqa.py#L1679" rel="nofollow noreferrer">this script</a> with argument --num-workers=3.</p>
<p>This script was invoked inside a jupyter notebook cell using the !syntax (!python myscript.py), which is itself running inside a tmux terminal. Just letting those extra details be known in case they may be part of the problem.</p>
|
<python><pytorch><multiprocessing><pytorch-dataloader>
|
2023-02-08 23:16:05
| 0
| 451
|
Pablo Messina
|
75,392,523
| 11,092,636
|
Why can't RegularGridInterpolator return several values (for a function that outputs in $R^d$)
|
<p>MRE (with working output and output that doesn't work although I would like it to work as it would be the intuitional thing to do):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.interpolate import RegularGridInterpolator, griddata
def f(x1, x2, x3):
return x1 + 2*x2 + 3*x3, x1**2, x2
# Define the input points
xi = [np.linspace(0, 1, 5), np.linspace(0, 1, 5), np.linspace(0, 1, 5)]
# Mesh grid
x1, x2, x3 = np.meshgrid(*xi, indexing='ij')
# Outputs
y = f(x1, x2, x3)
assert (y[0][1][1][3] == (0.25 + 2*0.25 + 3*0.75))
assert (y[1][1][1][3] == (0.25**2))
assert (y[2][1][1][3] == 0.25)
#### THIS WORKS BUT I CAN ONLY GET THE nth (with n integer in [1, d]) VALUE RETURNED BY f
# Interpolate at point 0.3, 0.3, 0.4
interp = RegularGridInterpolator(xi, y[0])
print(interp([0.3, 0.3, 0.4])) # outputs 2.1 as expected
#### THIS DOESN'T WORK (I WOULD'VE EXPECTED A LIST OF TUPLES FOR EXAMPLE)
# Interpolate at point 0.3, 0.3, 0.4
interp = RegularGridInterpolator(xi, y)
print(interp([0.3, 0.3, 0.4])) # doesn't output array([2.1, 0.1, 0.3])
</code></pre>
<p>What is intriguing is that griddata does support functions that output values in <code>R^d</code></p>
<pre class="lang-py prettyprint-override"><code># Same with griddata
grid_for_griddata = np.array([x1.flatten(), x2.flatten(), x3.flatten()]).T
assert (grid_for_griddata.shape == (125, 3))
y_for_griddata = np.array([y[0].flatten(), y[1].flatten(), y[2].flatten()]).T
assert (y_for_griddata.shape == (125, 3))
griddata(grid_for_griddata, y_for_griddata, [0.3, 0.3, 0.4], method='linear')[0] # outputs array([2.1, 0.1, 0.3]) as expected
</code></pre>
<p>Am I using <code>RegularGridInterpolator</code> the wrong way?</p>
<p>I know someone might say "just use <code>griddata</code>", but because my data is in a rectilinear grid, I should use <code>RegularGridInterpolator</code> so that it's faster, right?</p>
<p>Proof that it's faster:
<a href="https://i.sstatic.net/WTMST.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WTMST.png" alt="enter image description here" /></a></p>
|
<python><scipy><interpolation>
|
2023-02-08 22:55:25
| 1
| 720
|
FluidMechanics Potential Flows
|
75,392,475
| 2,665,843
|
How do you delete an axis when plotting data using Datura?
|
<p>Using <a href="https://datura.readthedocs.io/en/latest/examples.html" rel="nofollow noreferrer">Datura</a>, how do you prevent an axis from appearing in the figure?</p>
<p>If you try <code>y_ticks=None</code>, a y-axis still appears:</p>
<pre><code>datura.plot([1, 2, 3], [2, 3, 1], y_ticks=None)
</code></pre>
|
<python><visualization>
|
2023-02-08 22:47:21
| 1
| 577
|
Peter
|
75,392,443
| 6,401,858
|
Fast way to save/load giant Python dictionaries (~50GB)?
|
<p>I have a bunch of data I need to store in memory as a python dictionary for O(1) fast lookups (millions of key,value pairs). I've been using the json library, which saved the data to a 50GB json file and takes 45min to load every time I start my program. There's gotta be a faster way...</p>
<p>The keys are Int64s (e.g. 1101101000011011011010000110010101011101000101010101011101000101). The bits are meaningful, so I can't use different keys.</p>
<p>The values are lists of 3 Float32 lists (e.g. [[-34.83263, 46.90836, 66.2], [14.23263, -76.90836, 310.4], ... ]). The outer lists are of variable length. The inner lists always have 3 floats.</p>
<p>It doesn't need to be saved as human-readable, so I've looked into using pickle instead. Is pickle going to be best for this or is there something more efficient? Are there better ways to store this much data and load it quickly into a python dictionary when I run my program?</p>
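<p>For reference, a minimal sketch of the pickle round trip I'm considering (file name is made up; pickle stores the int keys and float lists natively, with no string conversion like json):</p>

```python
import pickle

# Toy stand-in for the big dict: int keys, lists of 3-float lists.
data = {0b1101101: [[-34.83263, 46.90836, 66.2], [14.23263, -76.90836, 310.4]]}

with open("data.pkl", "wb") as fh:                      # assumed file name
    pickle.dump(data, fh, protocol=pickle.HIGHEST_PROTOCOL)

with open("data.pkl", "rb") as fh:
    loaded = pickle.load(fh)

print(loaded == data)  # True
```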
|
<python><json><dictionary><bigdata><pickle>
|
2023-02-08 22:42:10
| 1
| 382
|
fariadantes
|
75,392,409
| 3,056,036
|
Python + Linux - Excel to HTML (keeping format)
|
<p>I'm looking for a way to convert excel to html while preserving formatting.</p>
<p>I know this is doable on windows due to the availability of some underlying win32 libraries, (eg via <code>xlwings</code>
<a href="https://stackoverflow.com/questions/68273994/python-excel-to-html-keeping-format">Python - Excel to HTML (keeping format)</a>)</p>
<p>But I'm looking for a solution on Linux.
I've also come across <a href="https://products.aspose.com/cells/python-java/conversion/excel-to-html/" rel="nofollow noreferrer">Aspose Cells</a>, but this requires a paid license or else it will add a lot of extra junk to the output that needs to be scrubbed out.</p>
<p>And lastly I tried the python lib <code>xlsx2html</code> but it does a very poor job at preserving formatting.</p>
<p>Are there any suggestions for a Linux based solution? I'd also be interested in tools written in other languages that can be easily wrapped around via python.</p>
<p>Thanks in advance!</p>
<p>Update:
Here is an example of a random excel sheet I converted via excel itself that I would like to reproduce. It has some colors, some border variations, some merged cells and some font sizes to see if they all work.
<a href="https://i.sstatic.net/cVDNn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cVDNn.png" alt="enter image description here" /></a></p>
|
<python><excel><linux>
|
2023-02-08 22:36:12
| 2
| 309
|
ateymour
|
75,392,399
| 8,176,763
|
extremely slow to write sql query into csv file
|
<p>I have a SQL query that returns approximately 500k rows and 47 columns, and I want the result dumped into a csv file so I can afterwards import the file into a table on a new database hosted on another server. My code does not use any fancy library that would cause overhead in the process, but nonetheless the writing takes around 15 minutes to complete. I believe there is something wrong with my code but can't tell what would speed things up. The connection uses the <code>cx_Oracle</code> driver in Python.</p>
<pre><code>import cx_Oracle  # this import was missing from the snippet
import config
from pathlib import WindowsPath
import csv
con = cx_Oracle.connect(f'{config.USER_ODS}/{config.PASS_ODS}@{config.HOST_ODS}:{config.PORT_ODS}/{config.SERVICENAME_ODS}')
sql = 'SELECT * FROM ods.v_hsbc_ucmdb_eim'
cur = con.cursor()
output = WindowsPath('result.csv')
with output.open('w',encoding="utf-8") as f:
writer = csv.writer(f, lineterminator="\n")
cur.execute(sql)
col_names = [row[0] for row in cur.description]
writer.writerow(col_names)
for row in cur:
writer.writerow(row)
</code></pre>
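<p>A hedged sketch of the usual fix, shown with the stdlib <code>sqlite3</code> driver as a stand-in (<code>cx_Oracle</code> cursors expose the same <code>arraysize</code>/<code>fetchmany</code> interface): fetch rows in batches and write each batch with one <code>writerows</code> call instead of one <code>writerow</code> per row.</p>

```python
import csv
import io
import sqlite3

# Stand-in DB: sqlite3 here, but cx_Oracle cursors also support
# cursor.arraysize and cursor.fetchmany() with the same semantics.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(i, "x") for i in range(1000)])

cur = con.cursor()
cur.arraysize = 500                      # rows fetched per round trip
cur.execute("SELECT * FROM t")

buf = io.StringIO()                      # stands in for the output file
writer = csv.writer(buf, lineterminator="\n")
writer.writerow([d[0] for d in cur.description])
while True:
    rows = cur.fetchmany()               # fetches cur.arraysize rows
    if not rows:
        break
    writer.writerows(rows)               # one call per batch, not per row

n_lines = buf.getvalue().count("\n")
print(n_lines)  # 1001 (header + 1000 rows)
```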
|
<python><csv><cx-oracle>
|
2023-02-08 22:34:55
| 1
| 2,459
|
moth
|
75,392,320
| 6,100,445
|
Blocking function sent to_thread not executing if awaited secondly
|
<p>I am learning Python asyncio and I have created a simple app to get console input and print a counter simultaneously. I know there are packages like aioconsole, aconsole, etc. but I'm looking for an answer to the specific question in the code comments below.</p>
<p>UPDATE: This was solved with help from <code>user2357112</code> and from my comments to their suggestions as shown below...my solution edits are shown in-line in the code snippet below with comments "solution edit"</p>
<pre><code>"""
Goal: simultaneously print a counter and accept keyboard input, using async concepts
"""
import asyncio
def get_console_input():
""" Get console input forever
This gets invoked by asyncio.to_thread since input() is blocking
And therefore this does not get the async keyword
"""
while True:
s = input('> ')
print(s)
if s == 'x':
return
async def counter():
""" Print ascending counter every 1sec
"""
i = 0
while True:
await asyncio.sleep(1)
print(i)
i += 1
async def main():
## The following allows simultaneous printing of the counter and getting console input:
# t_console = asyncio.to_thread(get_console_input) # <-- solution edit (remove)
coro_console = asyncio.to_thread(get_console_input) # <-- solution edit
t_console = asyncio.create_task(coro_console) # <-- solution edit
t_counter = asyncio.create_task(counter())
await t_console # <-- await t_console first
await t_counter
## ...but the following does NOT allow simultaneous printing of the counter and getting console input:
## UPDATE: with changes shown on the lines marked with "solution edit", now this works as expected
# t_console = asyncio.to_thread(get_console_input)
# t_counter = asyncio.create_task(counter())
# await t_counter # <-- await t_counter first
# await t_console
## If I "await t_console" before "await t_counter", then it works (the counter is printed and console input is allowed simultaneously)
## But if I "await t_counter" before "await t_console", then it does not work (the counter is printed but console input is never allowed/executed)
## ...seems like "await t_counter" should suspend the counter() coroutine at the asyncio.sleep(1) line inside counter() to then let the "await t_console" line execute...but it's not doing that...why?
## UPDATE: with changes shown on the lines marked with "solution edit", all works as expected
if __name__ == '__main__':
asyncio.run(main())
</code></pre>
|
<python><python-3.x>
|
2023-02-08 22:20:02
| 1
| 927
|
rob_7cc
|
75,392,292
| 7,920,004
|
Capture Redshift's RAISE data from procedure's loop in Python
|
<p>At the beginning I prepared simple proc:</p>
<pre><code>CREATE OR REPLACE PROCEDURE public.record_example()
AS '
DECLARE
rec RECORD;
BEGIN
FOR rec IN SELECT * FROM public.event
LOOP
RAISE INFO ''eventid = %'', rec.eventid;
END LOOP;
END;
'
LANGUAGE plpgsql;
</code></pre>
<p>To avoid saving the result into a temp table, I want to catch it on the fly, but can't find the best approach to achieve it. I didn't find any good method in <code>redshift_connector</code> or <code>boto3</code>.</p>
<p>It seems that Redshift doesn't support <a href="https://www.postgresql.org/docs/current/sql-do.html" rel="nofollow noreferrer">DO</a></p>
<p>I could query <code>svl_stored_proc_messages</code> but I don't want any in-the-middle steps...</p>
<pre><code>import redshift_connector
conn = redshift_connector.connect(
host='xyz-cluster.abc.region.redshift.amazonaws.com',
database='',
user='',
password=''
)
cursor = conn.cursor()
cur = cursor.execute("CALL public.record_example();")
</code></pre>
<p>Goal is to get <code>dataFrame</code> that would have data from <code>Messages</code>:</p>
<p><a href="https://i.sstatic.net/SmRA1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SmRA1.png" alt="enter image description here" /></a></p>
|
<python><amazon-web-services><amazon-redshift><plpgsql>
|
2023-02-08 22:16:31
| 0
| 1,509
|
marcin2x4
|
75,392,144
| 9,404,560
|
Python - Draw Bounding Box Around A Group Of Contours
|
<p>Hi, I have code that finds and draws contours around objects that are yellow.</p>
<p>Here is the code:</p>
<pre><code>import cv2
import numpy as np
from PIL import ImageGrab
lower_yellow = np.array([20, 100, 100])
upper_yellow = np.array([30, 255, 255])
def test():
while True:
imgDef = ImageGrab.grab()
image = np.array(imgDef)
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
hsv = cv2.cvtColor(rgb, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower_yellow, upper_yellow)
kernel = np.ones((5,5),np.uint8)
mask = cv2.dilate(mask, kernel, iterations=1)
mask = cv2.erode(mask, kernel, iterations=1)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image, contours, -1, (0,255,0), 3)
cv2.imshow('test', image)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cv2.destroyAllWindows()
if __name__ == "__main__":
test()
</code></pre>
<p>Right now the output is as follows:
<a href="https://i.sstatic.net/sJ4Sw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sJ4Sw.png" alt="enter image description here" /></a></p>
<p>I wish to group contours that are in close proximity to one another and draw a bounding box around them like so:</p>
<p><a href="https://i.sstatic.net/EJKRS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EJKRS.png" alt="enter image description here" /></a></p>
<p>How can I achieve this? Am I right to be looking into the scikit KMeans function to group them?</p>
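<p>A hedged sketch of an alternative to KMeans (names are made up): once you have per-contour bounding rects from <code>cv2.boundingRect</code>, nearby rects can be merged into group boxes with plain geometry, repeatedly fusing any two rects whose gap is below a threshold.</p>

```python
# Merge bounding rects (x, y, w, h) whose gap is below `gap` pixels.
def close_enough(a, b, gap):
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    return not (ax > bx + bw + gap or bx > ax + aw + gap or
                ay > by + bh + gap or by > ay + ah + gap)

def merge(a, b):
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    x1, y1 = min(ax, bx), min(ay, by)
    x2, y2 = max(ax + aw, bx + bw), max(ay + ah, by + bh)
    return (x1, y1, x2 - x1, y2 - y1)

def group_rects(rects, gap=20):
    rects = list(rects)
    merged = True
    while merged:                     # repeat until no more merges happen
        merged = False
        out = []
        while rects:
            r = rects.pop()
            for i, o in enumerate(out):
                if close_enough(r, o, gap):
                    out[i] = merge(r, o)
                    merged = True
                    break
            else:
                out.append(r)
        rects = out
    return rects

boxes = group_rects([(0, 0, 10, 10), (15, 0, 10, 10), (200, 200, 5, 5)])
print(boxes)  # two boxes: the first pair merged, the far one kept
```

Each resulting box can then be drawn with <code>cv2.rectangle</code>.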
|
<python><opencv>
|
2023-02-08 21:56:38
| 1
| 1,490
|
SunAwtCanvas
|
75,392,099
| 284,932
|
Unexpected result trying to plot the regression line using axline without numpy
|
<p>For studying purposes, I am trying to make a simple linear regression without external libs to calculate the slope and the intercept:</p>
<pre><code>import matplotlib.pyplot as plt
x = [1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2]
y = [60, 62, 64, 66, 68, 70, 72, 74]
n = len(x)
sx = sum(x)
sy = sum(y)
sxy = sum([x[i] * y[i] for i in range(n)])
sx2 = sum([x[i] ** 2 for i in range(n)])
b = (n * sxy - sx * sy) / (n * sx2 - sx ** 2)
a = (sy / n) - b * (sx / n)
def predict_peso(altura):
return a + b * altura
altura_prev = 1.75
peso_prev = predict_peso(altura_prev)
plt.plot(altura_prev, peso_prev, marker="o", markeredgecolor="red",
markerfacecolor="green")
plt.scatter(x, y)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/qs0Sf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qs0Sf.png" alt="enter image description here" /></a></p>
<p>But when I try to draw the line using axline:</p>
<pre><code>plt.axline(xy1=(0, a), slope=b, linestyle="--", color="k")
</code></pre>
<p>I got this result:</p>
<p><a href="https://i.sstatic.net/972AZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/972AZ.png" alt="enter image description here" /></a></p>
<p>What can I do to fix that and draw the line properly?</p>
<p>Thanks in advance !</p>
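<p>A sketch of one way out, using only the question's own numbers: the <code>axline</code> call is mathematically fine, but anchoring it at <code>x=0</code> likely makes matplotlib autoscale far outside the data; drawing the fitted segment over the observed x-range only (or calling <code>plt.xlim</code> before <code>axline</code>) keeps the original view.</p>

```python
x = [1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2]
y = [60, 62, 64, 66, 68, 70, 72, 74]
n = len(x)
sx, sy = sum(x), sum(y)
sxy = sum(xi * yi for xi, yi in zip(x, y))
sx2 = sum(xi ** 2 for xi in x)
b = (n * sxy - sx * sy) / (n * sx2 - sx ** 2)
a = sy / n - b * sx / n

# Evaluate the fitted line only at the data's own x-range.
line_x = [min(x), max(x)]
line_y = [a + b * xi for xi in line_x]
# plt.plot(line_x, line_y, "k--")  # stays inside the original view
print(round(b, 6), round(a, 6))  # 20.0 30.0
```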
|
<python><matplotlib><linear-regression>
|
2023-02-08 21:50:57
| 1
| 474
|
celsowm
|
75,392,012
| 12,043,946
|
In python, how to Combine multiple arrays in the same column + row into one array?
|
<p>I have a dataframe df with a column named pvalues. Here is the column</p>
<pre><code>print(grouped['pvalue'].to_dict())
{0: [array([0.96612999, 0.30348366])]
4: [array([0.66871158, 0.0011381 ]), array([0.18113085, 0.04860657])],
5: [array([0.66871158, 0.0011381 ]), array([0.00000000e+00, 8.54560803e-07])],
6: [array([0.66871158, 0.0011381 ]), array([8.47561031e-131, 1.28484156e-018])]}
</code></pre>
<p>basically I want a function that will keep index 0 the same but will make index 4 look like this:</p>
<pre><code>0: [array([0.96612999, 0.30348366])]
4:[array([0.66871158, 0.0011381 ,0.18113085, 0.04860657])]
</code></pre>
<p>Thanks</p>
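<p>A hedged sketch of the kind of function I mean, on toy data: <code>np.concatenate</code> flattens each list of arrays into a single array per key (a key with one array stays effectively the same).</p>

```python
import numpy as np

data = {
    0: [np.array([0.96612999, 0.30348366])],
    4: [np.array([0.66871158, 0.0011381]), np.array([0.18113085, 0.04860657])],
}

# Concatenate each key's arrays into one flat array, kept in a 1-item list
# to preserve the original column's structure.
combined = {k: [np.concatenate(v)] for k, v in data.items()}
print(combined[4][0].shape)  # (4,)
```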
|
<python><arrays><pandas><dataframe>
|
2023-02-08 21:39:16
| 1
| 392
|
d3hero23
|
75,391,852
| 14,293,020
|
How to plot lines between points, and change their color based on specific values in Python?
|
<p><strong>Context:</strong></p>
<ul>
<li>3x35 <code>values</code> array that associates 1 value per segment</li>
<li>4x35x2 <code>matpos</code> array that gathers the coordinates of 4x35 points (hence 3x35 segments).</li>
</ul>
<p><strong>Question:</strong></p>
<p>How can I define each segment's color based on their values from the <code>values</code> array ?</p>
<p><strong>Code attempt:</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

# Array of values, one per segment
values = np.random.rand(3,35)
# Generate array of positions
x = np.arange(0,35)
y = np.arange(0,4)
matpos = np.array([[(y[i], x[j]) for j in range(0,len(x))] for i in range(0,len(y))])
# plot the figure
plt.figure()
for i in range(len(y)-1):
for j in range(len(x)):
# plot each segment
plt.plot(matpos[i:i+2,j,0],matpos[i:i+2,j,1]) #color = values[i,j]
</code></pre>
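<p>A hedged sketch of the missing piece: map each segment's value in [0, 1] to an RGB color and pass it to <code>plt.plot</code> via the <code>color=</code> keyword (any mapping works, including a matplotlib colormap such as <code>plt.cm.viridis(values[i, j])</code>; the hand-rolled ramp below is just a dependency-free stand-in).</p>

```python
import numpy as np

def value_to_color(v, vmin=0.0, vmax=1.0):
    """Map a scalar to an RGB tuple on a blue -> red ramp."""
    t = (v - vmin) / (vmax - vmin)          # normalize to [0, 1]
    t = min(max(t, 0.0), 1.0)               # clip out-of-range values
    return (t, 0.0, 1.0 - t)

values = np.random.rand(3, 35)
colors = [[value_to_color(values[i, j]) for j in range(35)] for i in range(3)]
# inside the plotting loop:
# plt.plot(matpos[i:i+2, j, 0], matpos[i:i+2, j, 1], color=colors[i][j])
```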
|
<python><matplotlib><plot><colors>
|
2023-02-08 21:19:15
| 1
| 721
|
Nihilum
|
75,391,797
| 11,027,207
|
Celery - assign session dynamicall/reusing connections
|
<p>I've been breaking my head for a few days over a simple task (I thought it was simple... not anymore):</p>
<p>The main program sends hundreds of SQL queries to fetch data from multiple DBs.
I thought Celery could be the right choice, as it can scale and also simplify the threading/async orchestration.</p>
<p>The "clean" solution would be one generic class supposed to looks something like:</p>
<pre><code>@app.task(bind=True , name='fetch_data')
def fetch_data(self,*args,**kwargs):
db= kwargs['db']
sql= kwargs['sql']
session = DBContext().get_session(db)
result = session.query(sql).all()
...
</code></pre>
<p>But I'm having trouble implementing such a DBContext class, which should be instantiated once for each DB, reuse the DB session across requests and, once the requests are done, close it
(or any other recommendation you suggest).</p>
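<p>A minimal sketch of the kind of registry I mean (names are made up; <code>engine_factory</code> stands in for <code>create_engine</code>/<code>sessionmaker</code>): a class-level cache that creates one engine per DB key, once per worker process, and hands out the cached one afterwards.</p>

```python
# Sketch only: engine_factory stands in for real engine/session creation.
class DBContext:
    _engines = {}                          # shared within a worker process

    @classmethod
    def get_session(cls, db, engine_factory):
        if db not in cls._engines:         # first request for this DB
            cls._engines[db] = engine_factory(db)
        return cls._engines[db]            # reused by every later task

calls = []
factory = lambda db: calls.append(db) or "engine-" + db
s1 = DBContext.get_session("db1", factory)
s2 = DBContext.get_session("db1", factory)
s3 = DBContext.get_session("db2", factory)
print(calls)  # ['db1', 'db2'] -- each engine created only once
```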
<p>I was thinking about using a base class to decorate the function and
keep all the available connections there,
but the problem is that such a class can't be initialized dynamically, only once...
Maybe there's a way to make it work, but I'm not sure how...</p>
<pre><code>class DatBaseFactory(Task):
def __call__(self, *args, **kwargs):
print("In class",self.db)
self.engine = DBContext.get_db(self.db)
return super().__call__(*args, **kwargs)
@app.task(bind=True ,base=DatBaseFactory, name='test_db', db=db ,engine='' )
def test_db(self,*args,**kwargs):
print("From Task" ,self.engine)
</code></pre>
<p>The other alternative would be duplicating the function as many times as the number of DBs and "preserving" the sessions there - but that's quite an ugly solution.</p>
<p>Hope someone can help with this trouble...</p>
|
<python><sqlalchemy><celery><distributed>
|
2023-02-08 21:13:41
| 0
| 424
|
AviC
|
75,391,656
| 7,437,221
|
How to join a dictionary with same key as df index as a new column with values from the dictionary
|
<p>I have the following data:</p>
<p>A dictionary <code>dict</code> with a <code>key: value</code> structure as <code>tuple(str, str,): list[float]</code></p>
<pre><code>{
('A', 'B'): [0, 1, 2, 3],
('A', 'C'): [4, 5, 6, 7],
('A', 'D'): [8, 9, 10, 11],
('B', 'A'): [12, 13, 14, 15]
}
</code></pre>
<p>And a pandas dataframe <code>df</code> with an index of 2 columns that correspond to the keys in the dictionary:</p>
<pre><code>df.set_index(["first", "second"]).sort_index()
print(df.head(4))
==============================================
tokens
first second
A B 166
C 128
D 160
B A 475
</code></pre>
<p>I want to create a new column, <code>numbers</code> in <code>df</code> with the values provided from <code>dict</code>, whose key corresponds with an index row in <code>df</code>. The example result would be:</p>
<pre><code>print(df.head(4))
========================================================================
tokens numbers
first second
A B 166 [0, 1, 2, 3]
C 128 [4, 5, 6, 7]
D 160 [8, 9, 10, 11]
B A 475 [12, 13, 14, 15]
</code></pre>
<p>What is the best way to go about this? Keep performance in mind, as this dataframe may be 10-100k rows long</p>
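<p>A hedged sketch on toy data of the approach I'm imagining: MultiIndex rows are plain <code>(str, str)</code> tuples, so they can be looked up in the dict directly, row by row.</p>

```python
import pandas as pd

d = {("A", "B"): [0, 1, 2, 3], ("A", "C"): [4, 5, 6, 7]}
df = pd.DataFrame({"first": ["A", "A"], "second": ["B", "C"],
                   "tokens": [166, 128]}).set_index(["first", "second"])

# Each MultiIndex entry is a tuple, matching the dict keys exactly.
df["numbers"] = [d.get(key) for key in df.index]
print(df["numbers"].loc[("A", "C")])  # [4, 5, 6, 7]
```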
|
<python><pandas><dataframe><numpy><dictionary>
|
2023-02-08 20:56:28
| 2
| 353
|
Sean Sailer
|
75,391,594
| 6,658,209
|
What's the fastest way to turn json results from an API into a dataframe?
|
<p>Below is an example of sports betting app I'm working on.</p>
<p>games.json()['data'] - contains the game id for each sport event for that day. The API then returns the odds for that specific game.</p>
<p>What's the fastest option to take the json and turn it into a pandas dataframe? Currently looking into msgspec.</p>
<p>Some games can have over 5K total bets</p>
<pre><code>master_df = pd.DataFrame()
for game in games.json()['data']:
odds_params = {'key': api_key, 'game_id': game['id'], 'sportsbook': sportsbooks}
odds = requests.get(api_url, params=odds_params)
for o in odds.json()['data'][0]['odds']:
temp = pd.DataFrame()
temp['id'] = [game['id']]
for k,v in game.items():
if k != 'id' and k != 'is_live':
temp[k] = v
for k, v in o.items():
if k == 'id':
temp['odds_id'] = v
else:
temp[k] = v
if len(master_df) == 0:
master_df = temp
else:
master_df = pd.concat([master_df, temp])
</code></pre>
<p>odds.json response snippet -</p>
<pre><code>{'data': [{'id': '35142-30886-2023-02-08',
'sport': 'basketball',
'league': 'NBA',
'start_date': '2023-02-08T19:10:00-05:00',
'home_team': 'Washington Wizards',
'away_team': 'Charlotte Hornets',
'is_live': False,
'tournament': None,
'status': 'unplayed',
'odds': [{'id': '4BB426518ECF',
'sports_book_name': 'Betfred',
'name': 'Charlotte Hornets',
'price': 135.0,
'checked_date': '2023-02-08T11:46:12-05:00',
'bet_points': None,
'is_main': True,
'is_live': False,
'market_name': '1st Half Moneyline',
'home_rotation_number': None,
'away_rotation_number': None,
'deep_link_url': None,
'player_id': None},
....
</code></pre>
<p>By the end of this process, I usually have about 30K records in the dataframe</p>
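<p>For comparison, a hedged sketch of the usual pattern (toy stand-in data, not the real API response): collect plain dicts in a list and build the frame once at the end, since repeated <code>pd.concat</code> inside the loop copies the accumulated frame every iteration.</p>

```python
import pandas as pd

rows = []
for game_id in ["g1", "g2"]:                 # stand-in for the games loop
    for odds in [{"price": 1.5}, {"price": 2.0}]:  # stand-in for the odds
        row = {"id": game_id}
        row.update(odds)                     # flatten each odds record
        rows.append(row)

# One DataFrame construction at the end instead of concat per record.
master_df = pd.DataFrame(rows)
print(len(master_df))  # 4
```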
|
<python><json><pandas><python-requests>
|
2023-02-08 20:49:48
| 1
| 6,395
|
bbennett36
|
75,391,482
| 5,896,319
|
How to create a new csv from a csv that separated cell
|
<p>I created a function to convert the csv.
The main goal is: given a csv file like:</p>
<pre><code>,features,corr_dropped,var_dropped,uv_dropped
0,AghEnt,False,False,False
</code></pre>
<p>and I want to convert it to another csv file:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: center;">features</th>
<th style="text-align: center;">corr_dropped</th>
<th style="text-align: center;">var_dropped</th>
<th style="text-align: right;">uv_dropped</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: center;">AghEnt</td>
<td style="text-align: center;">False</td>
<td style="text-align: center;">False</td>
<td style="text-align: right;">False</td>
</tr>
</tbody>
</table>
</div>
<p>I created a function for that, but it is not working: the output is the same as the input file.</p>
<p><strong>function</strong></p>
<pre><code>def convert_file():
input_file = "../input.csv"
output_file = os.path.splitext(input_file)[0] + "_converted.csv"
df = pd.read_table(input_file, sep=',')
df.to_csv(output_file, index=False, header=True, sep=',')
</code></pre>
|
<python><django>
|
2023-02-08 20:36:49
| 1
| 680
|
edche
|
75,391,450
| 16,319,191
|
Convert columns to binary (0 or 1) inplace using conditions in pandas
|
<p>I want to convert column values to 0s or 1s: if a column value is >0 it should become 1, otherwise (<=0) it should become 0. I have over 100 columns.
An example df is:</p>
<pre><code>df = pd.DataFrame({
"col1": [0, 0, 2, 0, 7],
"col2": [121, 9, 22, 3, 7],
"col3": [181, 0, 2, 3, 0]})
</code></pre>
<p>I can achieve the answer using the threshold for 1 column at a time, but I have 100s of columns.</p>
<pre><code>threshold = 0
df['col1'] = df['col1'].gt(threshold).astype(int)
</code></pre>
<p>Answer should be:</p>
<pre><code>answerdf = pd.DataFrame({
"col1": [0, 0, 1, 0, 1],
"col2": [1, 1, 1, 1, 1],
"col3": [1, 0, 1, 1, 0]})
</code></pre>
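<p>For reference, the same <code>gt</code>/<code>astype</code> step applies to the whole frame at once, no per-column loop needed:</p>

```python
import pandas as pd

df = pd.DataFrame({"col1": [0, 0, 2, 0, 7],
                   "col2": [121, 9, 22, 3, 7],
                   "col3": [181, 0, 2, 3, 0]})

# Elementwise comparison over every column, then cast booleans to 0/1.
binary = df.gt(0).astype(int)
print(binary["col1"].tolist())  # [0, 0, 1, 0, 1]
```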
|
<python><pandas><dataframe>
|
2023-02-08 20:34:05
| 0
| 392
|
AAA
|
75,391,328
| 6,197,439
|
Python timestamps not increasing on some platforms?
|
<p>I just tried the following little Python3 test:</p>
<pre class="lang-py prettyprint-override"><code>import time
tvar = 1
tsa = time.time()
tvar += 1
tsb = time.time()
print("tsb {} tsa {} tsb-tsa {}".format(tsb, tsa, tsb-tsa))
print(" rate {}".format(tvar/(tsb-tsa)))
tsc = time.monotonic()
tvar += 1
tsd = time.monotonic()
print("tsd {} tsc {} tsd-tsc {}".format(tsd, tsc, tsd-tsc))
print(" rate {}".format(tvar/(tsd-tsc)))
</code></pre>
<p>If I run this under <a href="https://replit.com/languages/python3" rel="nofollow noreferrer">https://replit.com/languages/python3</a> - then the script completes fine, and results I get are more-less expected:</p>
<pre class="lang-none prettyprint-override"><code>tsb 1675886955.5210621 tsa 1675886955.5210614 tsb-tsa 7.152557373046875e-07
rate 2796202.6666666665
tsd 23325.72801936 tsc 23325.728018751 tsd-tsc 6.090012902859598e-07
rate 4926097.937479465
</code></pre>
<p>However, when I run the very same script under Python3 in MINGW64 shell (MSYS2 on Windows 10) which is currently version Python 3.10.9, the script actually crashes:</p>
<pre><code>$ python3 test_time.py
tsb 1675887125.3224628 tsa 1675887125.3224628 tsb-tsa 0.0
Traceback (most recent call last):
File "D:\msys64\tmp\test\test_time.py", line 7, in <module>
print(" rate {}".format(tvar/(tsb-tsa)))
ZeroDivisionError: float division by zero
</code></pre>
<p>So, here the <code>tsb</code> and <code>tsa</code> timestamps ended up being <em>equal</em>, even if they are called separated by an assignment command (<code>tvar += 1</code>) - but even if the timestamp assignments were called one after the other, they <em>should</em> have shown a different value?</p>
<p>And the same problems appears with <code>time.monotonic()</code> as well (although the above example crashes before we get to that point).</p>
<p>So, is this Python3 behavior (i.e. on some platforms, <code>time.time()</code> or <code>time.monotonic()</code> can be called two times in a row, and return the exact same timestamp) expected; and is there anything I can do, to have timestamp behavior consistent across platforms (that is, at least: two subsequent (quick) calls to <code>time.time()</code> or <code>time.monotonic()</code> should result with not just differing timestamps, but with timestamps where the earlier timestamp is a smaller numeric value than later timestamp? (so that durations/time differences can never result with zero, which would otherwise cause division by zero when calculating rates)</p>
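<p>A hedged sketch of the workaround I've found mentioned for this: <code>time.time()</code> on Windows ticks coarsely (roughly 1-16 ms), so two nearby calls can return the exact same value, while <code>time.perf_counter()</code> uses the highest-resolution clock the platform offers and is the usual choice for timing short durations (though, strictly, even it only guarantees non-decreasing values, so the division guard stays).</p>

```python
import time

t0 = time.perf_counter()
total = sum(range(10000))        # some work to time
t1 = time.perf_counter()
elapsed = t1 - t0
if elapsed > 0:                  # still guard the division, just in case
    print("rate {:.0f}/s".format(total / elapsed))
```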
|
<python><python-3.x><time><timing>
|
2023-02-08 20:21:19
| 0
| 5,938
|
sdbbs
|
75,391,274
| 20,959,773
|
Get variable inside DOM before DOM changes when clicking redirecting button
|
<p>I have been trying for so long to find a way to persist variables between page refreshes and different pages in one browser session opened from Selenium with Python.
Unfortunately, storing the variable in localStorage, sessionStorage or window.name doesn't work, after testing so many times and much research.</p>
<p>So I have resorted to a Python script which continuously repeats <code>driver.execute_script('return variable')</code> and continues to gather data while surfing.
The data that needs to be collected is the value of the element that gets clicked, which is caught by a click eventListener and inserted into a local variable I have added to the page.</p>
<p>This all works fine, except for the time where the element that gets clicked, is the actual button that contains a link that redirects page and changes the DOM.</p>
<p>My best guess is that the click, my JavaScript that stores the variable, my JavaScript that retrieves the variable, and the page redirect all happen almost at the same time, and I suspect the DOM change happens before the variable is retrieved, thus canceling any of my efforts to get that data.</p>
<p>This is the code:</p>
<pre><code>from selenium.common import TimeoutException, WebDriverException
from selenium.webdriver.support.ui import WebDriverWait
from selenium import webdriver
class Main:
def __init__(self, page_url):
self.__driver = webdriver.Chrome()
self.__element_list = []
self.__page_url = page_url
def start(self):
program_return = []
self.__driver.get(self.__page_url)
event_js = '''
var array_events = []
var registerOuterHtml = (e) => {
array_events.push(e.target.outerHTML)
window.array_events = array_events
}
var registerUrl = (e) => {
array_events.push(document.documentElement.outerHTML)
}
getElementHtml = document.addEventListener("click", registerOuterHtml, true)
getDOMHtml = document.addEventListener("click", registerUrl, true)
'''
return_js = '''return window.array_events'''
self.__driver.set_script_timeout(10000)
self.__driver.execute_script(event_js)
try:
for _ in range(800):
if array_events := self.__driver.execute_script(return_js):
if array_events[-2:] not in program_return:
program_return.append(array_events[-2:])
else:
try:
WebDriverWait(self.__driver, 0.1).until(
lambda driver: self.__driver.current_url != self.__page_url)
except TimeoutException:
pass
else:
self.__page_url = self.__driver.current_url
self.__driver.execute_script(event_js)
except WebDriverException:
pass
finally:
print(len(program_return)) # should print total number of clicks made.
</code></pre>
<p>To test it out, call it like this:</p>
<pre><code>Main('any url you wish').start()
</code></pre>
<p>After clicking around (make sure to click at least one button which changes the page), you can close the window manually and check the results.</p>
<p>Any idea or ideally a solution to this problem would be greatly appreciated.</p>
<p>Overall question---Taking for granted that variable persistence between different pages is not possible, <strong>How can I get the value of that variable that gets set on the time of click, before the page changes, from the same click action?</strong> (Maybe delay whole page...??)</p>
|
<javascript><python><html><selenium>
|
2023-02-08 20:14:40
| 1
| 347
|
RifloSnake
|
75,391,230
| 11,067,209
|
How to access the value projection at MultiHeadAttention layer in Pytorch
|
<p>I'm making my own implementation of the <a href="https://arxiv.org/abs/2106.05234" rel="nofollow noreferrer">Graphormer</a> architecture. Since this architecture needs to add an edge-based bias to the output of the key-query multiplication in the self-attention mechanism, I am adding that bias by hand and doing the matrix multiplication of the data with the attention weights outside the attention mechanism:</p>
<pre><code>import torch as th
from torch import nn
# Variable inicialization
B, T, C, H = 2, 3, 4, 2
self_attn = nn.MultiheadAttention(C, H, batch_first = True)
# Tensors
x = th.randn(B, T, C)
attn_bias = th.ones((B, T, T))
# Self-attention mechanism
_, attn_wei = self_attn(query=x, key=x, value=x)
# Adding attention bias
if attn_bias is not None:
attn_wei = attn_wei + attn_bias
x = attn_wei @ x # TODO use value(x) instead of x
print(x)
</code></pre>
<p>This works, but to use the full potential of self-attention, the last matrix multiplication should be <code>x = attn_wei @ value(x)</code>; however, I am not able to get the value projection from the <code>self_attn</code> object, which should have something like that inside of it.</p>
<p>How could I do this?</p>
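<p>A hedged sketch of what I'm after, based on <code>nn.MultiheadAttention</code> storing the packed q/k/v projections in <code>in_proj_weight</code>/<code>in_proj_bias</code> (shape <code>3*C x C</code>, with the value projection as the last third; this assumes the default configuration where <code>embed_dim == kdim == vdim</code>, and it ignores the per-head split and output projection):</p>

```python
import torch as th
from torch import nn
import torch.nn.functional as F

B, T, C, H = 2, 3, 4, 2
self_attn = nn.MultiheadAttention(C, H, batch_first=True)
x = th.randn(B, T, C)

# The value projection is the last C rows of the packed 3C x C weight.
w_v = self_attn.in_proj_weight[2 * C:]
b_v = self_attn.in_proj_bias[2 * C:]
v = F.linear(x, w_v, b_v)                    # value(x), shape (B, T, C)

_, attn_wei = self_attn(query=x, key=x, value=x)
out = attn_wei @ v                           # weights applied to value(x)
print(out.shape)  # torch.Size([2, 3, 4])
```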
|
<python><pytorch><transformer-model><self-attention><multihead-attention>
|
2023-02-08 20:10:14
| 0
| 665
|
Angelo
|
75,391,183
| 10,452,700
|
Problem with handling multiple legends in subplots when you use plt.twinx()
|
<p>I'm struggling to pass the list of handles from my subplots (which have different scales on each side via <code>plt.twinx()</code> in each subplot) so that all labels show in a single legend box, but I get the following error:</p>
<blockquote>
<p>AttributeError: 'list' object has no attribute 'get_label'</p>
</blockquote>
<p>I have tried the following:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib
print('matplotlib: {}'.format(matplotlib.__version__))
#matplotlib: 3.2.2
#Generate data
import pandas as pd
df = pd.DataFrame(dict(
Date = [1,2,3],
Male = [10,20,30],
Female = [20,30,10],
Others = [700,500,200]
))
print(df)
# Date Male Female Others
#0 1 10 20 700
#1 2 20 30 500
#2 3 30 10 200
# Create pandas stacked line plot
import numpy as np
import matplotlib.pyplot as plt
Userx = 'foo'
fig, ax = plt.subplots(nrows=2, ncols=1 , figsize=(20,10))
plt.subplot(211)
linechart1 = plt.plot(df['Date'], df['Male'], color='orange', marker=".", markersize=5, label=f"Leg1 for {Userx}" ) #, marker="o"
scatterchart2 = plt.scatter(df['Date'], df['Female'], color='#9b5777', marker='d', s=70 , label=f"Leg2 for {Userx}" )
plt.legend( loc='lower right', fontsize=15)
plt.ylabel('Scale1', fontsize=15)
plt.twinx()
linechart3, = plt.plot(df['Date'], df['Others'], color='black', marker=".", markersize=5, label=f"Leg3 for {Userx}" ) #, marker="o"
#lns123 = [linechart1,scatterchart2,linechart3]
#plt.legend(handles=lns123, loc='best', fontsize=15)
plt.legend( loc='best', fontsize=15)
plt.ylabel('Scale2', fontsize=15)
plt.xlabel('Timestamp [24hrs]', fontsize=15, color='darkred')
plt.ticklabel_format(style='plain')
#lns123 = [linechart1,scatterchart2,linechart3]
#plt.legend(handles=lns123, loc='best', fontsize=15)
#plt.legend(handles=[linechart1[0],scatterchart2[0],linechart3[0]], loc='best', fontsize=15)
plt.subplot(212)
barchart1 = plt.bar(df['Date'], df['Male'], color='green', label=f"Leg1 for {Userx}" , width=1, hatch='o' ) #, marker="o"
barchart2 = plt.bar(df['Date'], df['Female'], color='blue', label=f"Leg2 for {Userx}" , width=0.9, hatch='O') #, marker="o"
plt.ticklabel_format(style='plain')
plt.ylabel('Scale1', fontsize=15)
plt.twinx()
barchart3 = plt.bar(df['Date'], df['Others'], color='orange', label=f"Leg3 for {Userx}", width=0.9 , hatch='/', alpha=0.1 ) #, marker="o"
plt.ylabel('Scale2', fontsize=15)
plt.ticklabel_format(style='plain')
plt.xlabel('Timestamp [24hrs]', fontsize=15, color='darkred')
bar123 = [barchart1,barchart2,barchart3]
plt.legend(handles=bar123, loc='best', fontsize=15)
#plt.show(block=True)
plt.show()
</code></pre>
<p>I have tried the following solutions unsuccessfully:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/53435665/matplotlib-shows-no-legend">ref1</a></li>
<li><a href="https://stackoverflow.com/a/72615525/10452700">ref2</a></li>
<li><a href="https://stackoverflow.com/a/55197471/10452700">using <code>,</code></a></li>
<li><a href="https://stackoverflow.com/questions/9834452/how-do-i-make-a-single-legend-for-many-subplots/46921590#46921590">How do I make a single legend for many subplots?</a> This did not answer my question, or at least I couldn't figure out how to handle the labels with <code>plt.twinx()</code>: that approach gathers all the subplots' legends into a single legend box, whereas I want each subplot to keep its own combined legend, as I marked in the photo.</li>
</ul>
<p>Currently, I just duplicate <code>plt.legend( loc='lower right', fontsize=15)</code> in <code>subplot(211)</code>:</p>
<pre><code>plt.subplot(211)
chart1 = ...
chart2 = ...
#here
plt.twinx()
chart3, = ...
#here
plt.subplot(212)
barchart1 = ...
barchart2 = ...
plt.twinx()
barchart3, = ...
bar123 = [barchart1,barchart2,barchart3]
plt.legend(handles=bar123, loc='best', fontsize=15)
</code></pre>
<p>and get the below output:
<img src="https://i.sstatic.net/9JhcL.jpg" alt="img" />
The interesting part is that I don't have this problem with bar plots (<code>plt.bar()</code>).</p>
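For reference, one pattern that seems to give a single combined legend per subplot when using <code>twinx()</code> is to collect the handles from both axes explicitly and attach one legend to the twin axis (drawn last, so it sits on top). This is a minimal sketch with made-up data, not the asker's full figure; note the comma-unpacking of <code>plt.plot</code>'s returned list to get the <code>Line2D</code> handle itself:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax1 = plt.subplots()
# unpack plt.plot's returned list to get the Line2D handle directly
line1, = ax1.plot([1, 2, 3], [10, 20, 30], color='orange', label='Leg1')
sc2 = ax1.scatter([1, 2, 3], [20, 30, 10], color='#9b5777', label='Leg2')
ax2 = ax1.twinx()
line3, = ax2.plot([1, 2, 3], [700, 500, 200], color='black', label='Leg3')
# one combined legend for this subplot: pass handles from BOTH axes
# to the legend of the twin axis
leg = ax2.legend(handles=[line1, sc2, line3], loc='best', fontsize=15)
print([t.get_text() for t in leg.get_texts()])  # ['Leg1', 'Leg2', 'Leg3']
```

The same handle-collection step can be repeated independently inside each subplot, which keeps one legend box per subplot instead of a single figure-level legend.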
|
<python><matplotlib><legend><subplot><legend-properties>
|
2023-02-08 20:05:06
| 1
| 2,056
|
Mario
|
75,391,105
| 8,618,242
|
ROS Publisher is not publishing continuously
|
<p>My publisher is not publishing continuously. Can you please tell me how I can subscribe, publish, and advertise services at the same time? Thanks in advance.</p>
<pre class="lang-py prettyprint-override"><code>#! /usr/bin/env python3
# -*- coding: utf-8 -*-
import json
import logging
import rospy
from std_msgs.msg import String, Bool
from state_controller.srv import ActorChange, ActorChangeResponse


class StateController:
    def __init__(self):
        self.accum_plan = {}
        self.old_model = None  # last model received, compared against in modelRec
        rospy.init_node("state_controller", anonymous=True, log_level=rospy.INFO)
        self.accPlan_pub = rospy.Publisher("accumulated_plan", String, queue_size=10)
        self.stateReader()

    def stateReader(self):
        """
        subscribe to model/lego_map/yumi_motion_status
        """
        self.accPlan_pub.publish(json.dumps(self.accum_plan))
        rospy.Subscriber("model", String, self.modelRec)
        rospy.Service("change_actor", ActorChange, self.changeActor)
        rospy.sleep(3)
        rospy.spin()

    ## Subscriber CallBacks
    def modelRec(self, data):
        """ """
        model = json.loads(data.data)
        if model != self.old_model:
            self.old_model = model

    def changeActor(self, req):
        """ """
        lego = req.data
        return ActorChangeResponse(True)


if __name__ == "__main__":
    try:
        controller_ = StateController()
    except rospy.ROSInterruptException:
        logging.error("Error in the State Controller")
</code></pre>
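For context, the code above calls <code>publish()</code> exactly once before <code>rospy.spin()</code>, which is why nothing is published continuously; the usual ROS pattern is to publish inside a rate-limited loop (<code>rospy.Rate</code> / <code>rate.sleep()</code>). The sketch below shows that loop structure using a stand-in publisher class so it can run without a roscore — <code>FakePublisher</code> and <code>run_publish_loop</code> are illustrative names, not rospy API; the rospy equivalents are noted in comments:

```python
import json
import time

class FakePublisher:
    """Stand-in for rospy.Publisher so the loop can run without a roscore."""
    def __init__(self):
        self.messages = []
    def publish(self, msg):
        self.messages.append(msg)

def run_publish_loop(pub, get_state, hz=10, max_iterations=None):
    # rospy equivalent: rate = rospy.Rate(hz); while not rospy.is_shutdown(): ...
    period = 1.0 / hz
    sent = 0
    while max_iterations is None or sent < max_iterations:
        pub.publish(json.dumps(get_state()))  # publish the current state each tick
        sent += 1
        time.sleep(period)  # rospy equivalent: rate.sleep()

pub = FakePublisher()
run_publish_loop(pub, lambda: {"accum_plan": {}}, hz=100, max_iterations=3)
print(len(pub.messages))  # 3
```

Subscribers and services registered before the loop keep working while it runs, because rospy handles their callbacks on background threads; the loop simply replaces the single <code>publish()</code> call and, with <code>max_iterations=None</code>, takes over the role of <code>rospy.spin()</code>.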
|
<python><ros><publisher><subscriber>
|
2023-02-08 19:56:19
| 1
| 4,115
|
Bilal
|
75,390,940
| 7,318,120
|
update pip to be consistent with python in ubuntu 20.04
|
<p>I have <code>python 3.8</code> as the default install with <code>ubuntu 20.04</code>.</p>
<p>I have upgraded to <code>python 3.11</code>.</p>
<p>However, if I do <code>pip3 --version</code> I see this:</p>
<pre><code>pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8)
</code></pre>
<p>In Windows I just do <code>pip install <package></code> and I want to achieve the same level of simplicity in Ubuntu.</p>
<p>How can I get pip to upgrade too, so that it matches Python 3.11 (or whatever the latest Python version in Ubuntu is)?</p>
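For what it's worth, the usual way to pin pip to a specific interpreter is to invoke it via that interpreter with <code>-m pip</code>, rather than relying on the bare <code>pip3</code> wrapper script. A sketch, assuming <code>python3.11</code> is on the PATH and <code><package></code> is a placeholder (on Debian/Ubuntu system Pythons, <code>ensurepip</code> may be disabled and you may need the matching <code>python3.11-venv</code> package or <code>get-pip.py</code> instead):

```shell
# Bootstrap/upgrade pip for the 3.11 interpreter specifically
python3.11 -m ensurepip --upgrade
python3.11 -m pip install --upgrade pip
# Verify it now reports (python 3.11)
python3.11 -m pip --version
# Install packages through that interpreter
python3.11 -m pip install <package>
```

Using <code>python3.11 -m pip</code> everywhere sidesteps the question of which interpreter the <code>pip3</code> script happens to point at.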
|
<python><pip><ubuntu-20.04>
|
2023-02-08 19:39:34
| 0
| 6,075
|
darren
|
75,390,927
| 7,343,922
|
Pyspark: How to avoid python UDF as a driver operation?
|
<p>I have a Python UDF that needs to run in PySpark code. Is there a way to call that UDF through <code>mapPartitions</code>, so that the Python operation is not confined to the driver node and uses the full cluster? If I just apply the UDF directly on the dataframe, would that run as a driver operation? What is the efficient way of doing this?</p>
<pre><code>class Some_class_name:
    def pyt_udf(x):
        # <some python operation>
        return data

    def opr_to_be_done(self):
        df = spark.sql(f'''select col1, col2 from table_name''')
        rdd2 = df.rdd.mapPartitions(lambda x: pyt_udf(x))
</code></pre>
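For illustration, <code>mapPartitions</code> hands the function an iterator over one partition's rows and expects an iterable of results back, and that function is executed on the executors, not the driver. The sketch below mimics that contract in plain Python without Spark (<code>process_partition</code> and the sample rows are made-up names/data); with Spark the call would look like <code>df.rdd.mapPartitions(process_partition)</code>:

```python
def process_partition(rows):
    # `rows` is an iterator over one partition's rows; yield one result per row
    for col1, col2 in rows:
        yield (col1, col2 * 2)

# Simulate one partition's worth of (col1, col2) rows
partition = iter([("a", 1), ("b", 2), ("c", 3)])
print(list(process_partition(partition)))  # [('a', 2), ('b', 4), ('c', 6)]
```

Note the partition function must accept an iterator and return an iterable, which is why a per-row UDF usually needs a small wrapper like this before it can be passed to <code>mapPartitions</code>.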
|
<python><pyspark>
|
2023-02-08 19:38:14
| 1
| 306
|
user7343922
|