QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
76,239,640
| 11,199,298
|
Selenium error on Chrome version 113- Message: javascript error: Object.hasOwn is not a function
|
<p>Chrome has been updated recently and some of the functions raises</p>
<pre><code>selenium.common.exceptions.JavascriptException: Message: javascript error: Object.hasOwn is not a function
(Session info: chrome=113.0.5672.93)
</code></pre>
<p>This happens for clicks, and find_elements and I can't find any solution to this. I was able to use <code>driver.execute_script('arguments[0].click()', element)</code> for clicks and it started working, but it fails for <code>find_element</code>s. Any solution?</p>
|
<python><selenium-webdriver><selenium-chromedriver>
|
2023-05-12 20:09:40
| 2
| 2,211
|
Tugay
|
76,239,626
| 2,178,774
|
xarray: preprocess for nearest Lat/Lon non-NaN variable
|
<p>Each <code>latitude</code>/<code>longitude</code> location in the dataset is associated with typically one <code>sss</code> value per day. I would like to retrieve the <code>sss</code> value nearest to a specified location on a given day. For example, find the <code>sss</code> value nearest to <code>latitude=-89.88</code> and <code>longitude=-179.9</code> on <code>01/01/2014</code>.</p>
<p>Each daily file is around 20MB, and I'd like to do this over 10 years.</p>
<p>URL: <a href="https://www.star.nesdis.noaa.gov/data/socd1/coastwatch/products/miras/nc/SM_D2014001_Map_SATSSS_data_1day.nc" rel="nofollow noreferrer">https://www.star.nesdis.noaa.gov/data/socd1/coastwatch/products/miras/nc/SM_D2014001_Map_SATSSS_data_1day.nc</a></p>
<pre><code><xarray.Dataset>
Dimensions: (time: 1, altitude: 1, latitude: 720, longitude: 1440, nv: 2)
Coordinates:
* time (time) datetime64[ns] 2014-01-01T12:00:00
* altitude (altitude) float64 0.0
* latitude (latitude) float32 -89.88 -89.62 -89.38 ... 89.38 89.62 89.88
* longitude (longitude) float32 -179.9 -179.6 -179.4 ... 179.4 179.6 179.9
Dimensions without coordinates: nv
Data variables:
coord_ref int32 ...
time_bnds (time, nv) datetime64[ns] dask.array<chunksize=(1, 2), meta=np.ndarray>
sss (time, altitude, latitude, longitude) float32 dask.array<chunksize=(1, 1, 720, 1440), meta=np.ndarray>
sss_dif (time, altitude, latitude, longitude) float32 dask.array<chunksize=(1, 1, 720, 1440), meta=np.ndarray>
l2_time (time, altitude, latitude, longitude) datetime64[ns] dask.array<chunksize=(1, 1, 720, 1440), meta=np.ndarray>
l2_lat (time, altitude, latitude, longitude) float32 dask.array<chunksize=(1, 1, 720, 1440), meta=np.ndarray>
l2_lon (time, altitude, latitude, longitude) float32 dask.array<chunksize=(1, 1, 720, 1440), meta=np.ndarray>
Attributes: (12/39)
history: Wed Feb 20 18:15:04 2019\n: /data/data001/...
title: Sea Surface Salinity - Near Real Time - Mi...
product_name: SM_D2014001_Map_SATSSS_data_1day.nc
source: European Space Agency(ESA)
platform: PROTEUS
instrument: Microwave Imaging Radiometer with Aperture...
... ...
Summary: CoastWatch/OceanWatch Level-3 SSS products...
Input Files: /data/data001/repository/SMOS/ESA_restrict...
cdm_data_type: Grid
product_version: 662
time_coverage_start: 2014-01-01T00:08:52Z
time_coverage_stop: 2014-01-02T00:23:10Z
</code></pre>
<p>I can kind of get this to work, but the <code>sss</code> value is being returned as NaN.</p>
<pre class="lang-py prettyprint-override"><code>import xarray as xr
from functools import partial
def _preprocess(x):
# return the nearest location that has non-NaN sss
return x.where(x['sss'].notnull()).sel(
longitude=-179.9, latitude=-89.88, method='nearest')
partial_func = partial(_preprocess)
ds = xr.open_mfdataset("*.nc", preprocess=partial_func)
</code></pre>
<p>Returns:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>time</th>
<th>altitude</th>
<th>sss</th>
<th>latitude</th>
<th>longitude</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>2014-01-01 12:00:00</td>
<td>0.0</td>
<td>NaN</td>
<td>-86.44</td>
<td>-170.1</td>
</tr>
</tbody>
</table>
</div>
<p>Whereas I would like something like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>time</th>
<th>altitude</th>
<th>sss</th>
<th>latitude</th>
<th>longitude</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>2014-01-01 12:00:00</td>
<td>0.0</td>
<td>27.1</td>
<td>-82.28</td>
<td>-169.9</td>
</tr>
</tbody>
</table>
</div>
<p>Rather than utilizing <code>open_mfdataset</code>, this functionality can be implemented on a day-by-day, file-by-file basis. For example, utilizing geopandas:</p>
<pre class="lang-py prettyprint-override"><code>import geopandas
ds = xr.open_dataset('SM_D2014001_Map_SATSSS_data_1day.nc')
a = ds.to_dataframe().reset_index()
a = a[~a.sss.isna()]
a['LL'] = a['latitude'].astype(str) + a['longitude'].astype(str)
locs = a[['latitude', 'longitude']].value_counts().reset_index()
locs['LL'] = locs['latitude'].astype(str) + locs['longitude'].astype(str)
gdf1 = geopandas.GeoDataFrame(
locs,
geometry=geopandas.points_from_xy(locs.longitude, locs.latitude),
crs="EPSG:4326"
)
gdf1 = gdf1.to_crs(crs=3857)
gdf2 = geopandas.GeoDataFrame(
pd.DataFrame(), geometry=geopandas.points_from_xy([-179.9] [-89.88]),
crs="EPSG:4326"
)
gdf2 = gdf2.to_crs(crs=3857)
dx = geopandas.sjoin_nearest(gdf2, gdf1, distance_col="distance",
how='left')
a = a[a.LL.isin(dx['LL'])]
</code></pre>
<p>However, this is quite slow, hence the desire to utilize <code>preprocess</code> with <code>open_mfdataset</code>.</p>
<p>Using <code>.sel()</code> with bounds provides some improvement, for example:</p>
<pre class="lang-py prettyprint-override"><code>from functools import partial
def _preprocess(x, lon_bnds, lat_bnds):
return x.sel(longitude=slice(*lon_bnds), latitude=slice(*lat_bnds))
lon_bnds, lat_bnds = (-130, -110), (26, 39)
partial_func = partial(_preprocess, lon_bnds=lon_bnds,
lat_bnds=lat_bnds)
a = xr.open_mfdataset(['SM_D2014001_Map_SATSSS_data_1day.nc'],
preprocess=partial_func)
</code></pre>
<p><strong>Edit:</strong> Rather than using nearest, another option is simply to take the mean of all <code>sss</code> observations within a given Lat./Lon. rectangular region, per day. Using <code>open_mfdataset()</code> with one file at a time (one day at a time), per <code>.nc</code> file, <code>preprocess</code> can take a slice and then perform a mean for <code>sss</code> on the slice. This takes 1 minute to process 20 days' worth of files. However, using <code>open_mfdataset('*.nc', preprocess=partial_func)</code> should improve performance. For example,</p>
<pre class="lang-py prettyprint-override"><code>import xarray as xr
from functools import partial
def _preprocess(x, lon_bnds, lat_bnds):
return x.sel(longitude=slice(*lon_bnds), latitude=slice(*lat_bnds))\
.groupby('time').mean(...)
lon_bnds, lat_bnds = (-123, -111), (26.5, 38.5)
partial_func = partial(_preprocess, lon_bnds=lon_bnds,
lat_bnds=lat_bnds)
ds = xr.open_mfdataset('*.nc', preprocess=partial_func)
</code></pre>
|
<python><geospatial><python-xarray>
|
2023-05-12 20:07:05
| 0
| 516
|
There
|
76,239,472
| 12,436,050
|
Merge two pandas dataframes on multiple columns
|
<p>I have two dataframes and I would like to join these dataframes based on column 'Name' of df1 and multiple columns of df2.</p>
<pre><code>df1
Name id
ZYMAXID 9416X 6390
ZYPRED 6391
df2
label pref_label alt_label
ZYPRED None None
None ZYMAXID 9416X None
</code></pre>
<p>The final output should be:</p>
<pre><code>Name id label pref_label alt_label
ZYMAXID 9416X 6390 None ZYMAXID 9416X None
ZYPRED 6391 ZYPRED None None
</code></pre>
<p>I tried below join but it is giving me error.</p>
<pre><code>df = df1.merge(df2, left_on=df1["Name"].str.lower(), right_on=['df2["label"].str.lower()','df2["pref_label"].str.lower()''df2["alt_label"].str.lower()', indicator = True)
</code></pre>
<p>Any help is highly appreciated.</p>
|
<python><pandas><dataframe>
|
2023-05-12 19:41:19
| 3
| 1,495
|
rshar
|
76,239,464
| 1,739,325
|
How to properly set st.session_state in streamlit
|
<p>So I have such sample code. As you can see locale setted twice. However I keep getting error:</p>
<p><em>AttributeError: st.session_state has no attribute "locale". Did you forget to initialize it? More info: <a href="https://docs.streamlit.io/library/advanced-features/session-state#initialization" rel="nofollow noreferrer">https://docs.streamlit.io/library/advanced-features/session-state#initialization</a></em></p>
<p>What is wrong here?</p>
<pre><code>import streamlit as st
from dataclasses import dataclass
@dataclass
class Locale:
ai_role_prefix: str
# --- LOCALE SETTINGS ---
en = Locale(
ai_role_prefix="base",
)
if 'locale' not in st.session_state:
st.session_state.locale = en
if __name__ == "__main__":
st.session_state.locale = en
print(st.session_state.locale.ai_role_prefix)
</code></pre>
<p>Full error stack:</p>
<pre><code>C:\Users\dex\mambaforge\python.exe C:/Users/dex/Desktop/gpt4free/AudioChatGPT/src/chat_ui/temp.py
2023-05-12 22:43:37.715 WARNING streamlit.runtime.state.session_state_proxy: Session state does not function when running a script without `streamlit run`
Traceback (most recent call last):
File "C:\Users\dex\mambaforge\lib\site-packages\streamlit\runtime\state\session_state.py", line 370, in __getitem__
return self._getitem(widget_id, key)
File "C:\Users\dex\mambaforge\lib\site-packages\streamlit\runtime\state\session_state.py", line 415, in _getitem
raise KeyError
KeyError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\dex\mambaforge\lib\site-packages\streamlit\runtime\state\session_state_proxy.py", line 119, in __getattr__
return self[key]
File "C:\Users\dex\mambaforge\lib\site-packages\streamlit\runtime\state\session_state_proxy.py", line 90, in __getitem__
return get_session_state()[key]
File "C:\Users\dex\mambaforge\lib\site-packages\streamlit\runtime\state\safe_session_state.py", line 113, in __getitem__
return self._state[key]
File "C:\Users\dex\mambaforge\lib\site-packages\streamlit\runtime\state\session_state.py", line 372, in __getitem__
raise KeyError(_missing_key_error_message(key))
KeyError: 'st.session_state has no key "locale". Did you forget to initialize it? More info: https://docs.streamlit.io/library/advanced-features/session-state#initialization'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\dex\Desktop\gpt4free\AudioChatGPT\src\chat_ui\temp.py", line 22, in <module>
print(st.session_state.locale.ai_role_prefix)
File "C:\Users\dex\mambaforge\lib\site-packages\streamlit\runtime\state\session_state_proxy.py", line 121, in __getattr__
raise AttributeError(_missing_attr_error_message(key))
AttributeError: st.session_state has no attribute "locale". Did you forget to initialize it? More info: https://docs.streamlit.io/library/advanced-features/session-state#initialization
Process finished with exit code 1
</code></pre>
|
<python><attributeerror><streamlit>
|
2023-05-12 19:40:11
| 1
| 5,851
|
Rocketq
|
76,239,461
| 489,088
|
How to properly specify the type of a local array in Numba?
|
<p>I have a local array declared in a cuda-compiled numba kernel:</p>
<pre><code>from numba import cuda, types
@cuda.jit((types.void(), device=True)
def A():
arr = cuda.local.array(30, dtype=types.int64)
</code></pre>
<p>I need then to pass this array to another cuda compiled function:</p>
<pre><code>@cuda.jit((types.void(), device=True)
def A():
arr = cuda.local.array(30, dtype=types.int64)
B(arr)
@cuda.jit((types.void(
# What is the argument type?
), device=True)
def B(arr):
# do things with arr
pass
</code></pre>
<p>I tried typing like this:</p>
<pre><code>@cuda.jit((types.void(
types.Array(types.int64, 1, 'A')
)), device=True)
@cuda.jit((types.void(
types.Array(types.Literal, 1, 'A')
)), device=True)
@cuda.jit((types.void(
types.Array(types.Literal[int64](-1), 1, 'A')
)), device=True)
@cuda.jit((types.void(
types.Array(types.Literal[int](-1), 1, 'A')
)), device=True)
@cuda.jit((types.void(
types.Literal[int](-1)
)), device=True)
</code></pre>
<p>None of it works, I get various errors such as:</p>
<pre><code>numba.core.errors.TypingError: Failed in cuda mode pipeline (step: nopython frontend)
Internal error at <numba.core.typeinfer.ArgConstraint object at 0x7fb691c9d950>.
'property' object has no attribute 'is_precise'
During: typing of argument at /home/file.py (94)
Enable logging at debug level for details.
</code></pre>
<p>Or</p>
<pre><code>Traceback (most recent call last):
File "/home/file.py", line 43, in <module>
types.Array(types.Literal[int], 1, 'A'),
~~~~~~~~~~~~~^^^^^
TypeError: type 'Literal' is not subscriptable
</code></pre>
<p>Or</p>
<pre><code>numba.core.errors.TypingError: Failed in cuda mode pipeline (step: nopython frontend)
No implementation of function Function(<built-in function eq>) found for signature:
>>> eq(array(int64, 1d, C), Literal[int](-1))
</code></pre>
<p>What am I missing?</p>
|
<python><python-3.x><cuda><numba>
|
2023-05-12 19:39:45
| 1
| 6,306
|
Edy Bourne
|
76,239,420
| 3,908,009
|
How does Celery worker run the code defined elsewhere in a task?
|
<p>I tried reading official documentation as well as other SO threads, but it is still not clear how Celery works.</p>
<p>From what I understand:</p>
<ol>
<li><strong>Django app</strong>: Celery is installed in Django (or any app) where <code>@shared_task</code> decorator function defines the work to be performed.</li>
<li><strong>Message broker</strong>: A message broker gets this task from 1. and queues it.</li>
<li><strong>Celery Worker</strong>: A completely separate Celery worker picks up the task and runs it. This worker can be in a completely different machine even, so long as it has access to the message broker.</li>
</ol>
<p>So, then the burning question is:</p>
<p><strong>How does the Celery worker get the code defined in @shared_task to run the task?</strong></p>
<p>Basically, how does 3. get what's defined in 1. if they are only connected using a message broker? Is the python code stored in the message broker as string? What is the data structure of the message broker item/record?</p>
|
<python><django><celery><message-queue><django-celery>
|
2023-05-12 19:33:03
| 1
| 1,086
|
Neil
|
76,239,350
| 10,576,322
|
Options for configuration of python libraries
|
<p>I wrote a library around a REST API that I am going to use in other libraries or python projects. In this library I need some variables like URLs that I didn't want to hardcode into the library, but make it configurable. I don't want the URL or the USER and so on to be in input in the functions of that library, because in that case I would end up with handing them over on and on and need to provide them in every call. So they should be part of the config.</p>
<p>My current solution is that I just wrote <code>os.getenv("MY_URL")</code> at the place I needed it and export those variable in the application. That is outlined in solution 1 and it is a solution that works.</p>
<p>I want to understand alternatives and what the proper pythonic way of solving this topic is. My question is not about how to store a config for an application. I know about options to export something to environment and about ways to load TOML, JSON, INI and so on.</p>
<h2>1. Solution simply using os.environ</h2>
<p>I use the environment to hold the config of the library. The library just assumes that the needed environment variables are set. The script or application has the responsibility to export them.</p>
<p><code>my_package/__init__.py</code></p>
<pre><code>from .my_module import my_func
</code></pre>
<p><code>my_package/my_module.py</code></p>
<pre><code>import os
def my_func():
print(os.getenv["MY_URL"])
</code></pre>
<p>A possible usecase could look like this:</p>
<p><code>main.py</code></p>
<pre><code>import os
import my_package
os.environ["MY_URL"] = "http://example.com"
my_package.my_func()
</code></pre>
<p>Advantages:</p>
<ol>
<li>I don't need to define the variable inside the library or before import.</li>
<li>It can be handy when working with docker to put configuration directly into the container environment.</li>
<li>I can change the values during runtime by just manipulating the environment.</li>
</ol>
<p>Possible downsides:</p>
<ol>
<li>Only strings can be used. That doesn't matter for the things I need currently and obviously one can transform the strings again.</li>
<li>I don't know whether it is good practice to have a <code>os.getenv()</code> just there in the library. It feels a bit hacky.</li>
</ol>
<h2>2. Solution using a global variable in the library and monkey patching</h2>
<p>I have no clue, whether this is common style at all. Or if it's good design. The idea is I set the variables I need to use to None and do monkey patching in my scripts or applications. To make it convenient I could use a dictionary for that.</p>
<p><code>my_package/__init__.py</code></p>
<pre><code>from .my_module import my_func
config = None
</code></pre>
<p><code>my_package/my_module.py</code></p>
<pre><code># other code
def my_func():
print(my_package.config["MY_URL"])
</code></pre>
<p>A possible usecase could look like this:</p>
<p><code>main.py</code></p>
<pre><code>import my_package
my_package.config = {"MY_URL": "http://example.com"}
my_package.my_func()
</code></pre>
<p>Advantages:</p>
<ol>
<li>I don't need to define the variable inside the library or before import.</li>
<li>I can change the values during runtime by just monkey patching again.</li>
<li>I don't rely on the environment in the library, but still I could search in the environment for the values and add them to the dict.</li>
</ol>
<p>Possible downsides:</p>
<ol>
<li>That looks more hacky than solution one.</li>
</ol>
<h2>3. Solution using a global variable in the library and define a config method to override it.</h2>
<p>I have again no clue, whether this is common style at all. Or if it's good design. The idea is I set the config dict to None and call the config method in my scripts or applications that takes the dict. The benefit would be that I can log the calls to this config method and that looks cleaner. Although it is more effort compared to solution 1.</p>
<p><code>my_package/__init__.py</code></p>
<pre><code>from .my_module import my_func
from .config import config_loader
</code></pre>
<p><code>my_package/config.py</code></p>
<pre><code>from logging import getLogger
config = None
def config_loader(config_dict):
global config
logger.info(f"config loaded: {config_dict}")
config = config_dict
logger = getLogger(__name__)
</code></pre>
<p><code>my_package/my_module.py</code></p>
<pre><code>import my_package.config as cfg
# other code
def my_func():
print(cfg.config["MY_URL"])
</code></pre>
<p>A possible usecase could look like this:</p>
<p><code>main.py</code></p>
<pre><code>import my_package
my_package.config_loader({"MY_URL": "http://example.com"})
my_package.my_func()
</code></pre>
<p>Advantages:</p>
<ol>
<li>I don't need to define the variable inside the library or before import.</li>
<li>I can change the values during runtime by just calling the method again.</li>
<li>I don't rely on the environment in the library, but still I could search in the environment for the values and add them to the dict.</li>
<li>I have a defined way to set this config to s</li>
</ol>
<p>Possible downsides:</p>
<ol>
<li>That's more complicated over all and maybe exotic.</li>
</ol>
|
<python><configuration><environment-variables>
|
2023-05-12 19:21:43
| 0
| 426
|
FordPrefect
|
76,239,230
| 13,850,111
|
Is it possible to connect to a websocket on the same server using Flask?
|
<p>I'm trying to make a minimal implementation of an API that responds to a POST request by connecting to a websocket on the same server, using <code>Flask</code>, <code>websocket-client</code>, and <code>flask-sock</code></p>
<p>Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask
from flask_sock import Sock
import websocket
app = Flask(__name__)
sock = Sock(app)
@sock.route('/echo')
def echo(ws):
while True:
data = ws.receive()
print(data)
ws.send(data)
@app.route("/", methods=['POST'])
def hello_world():
ws = websocket.WebSocket()
ws.connect(f'ws://localhost:5000/echo')
ws.send('hello')
resp = ws.recv()
ws.close()
return resp
sock.app.run()
</code></pre>
<p>when I send a POST request to <code>http://localhost:5000/</code>, I get a 307 Temporary Redirect error.</p>
<p>I've also tried:</p>
<ol>
<li>running the app through ngrok, where I've tried every combination of sending the POST request to <code>localhost</code>/the ngrok URL and making the websocket URL <code>localhost</code>/the ngrok URL</li>
<li>running two separate tunnels with ngrok - one for the HTTP server and another as a dedicated websocket server (moving the <code>echo</code> function and all relevant code over</li>
</ol>
<p>none of these approaches has worked</p>
<p><strong>Is it possible to connect to a websocket in this way using Python (or at all?)</strong> Thank you!</p>
|
<python><flask><websocket>
|
2023-05-12 19:00:51
| 0
| 429
|
WhoDatBoy
|
76,239,211
| 1,137,713
|
How to use typing overrides for pandas, torch, and numpy inputs
|
<p>I'm currently trying setup overloaded type definitions for my module to show that passing a data type will return the same type. My current module looks like:</p>
<pre><code>class MyModule:
@overload
def __call__(self, inputs: pd.DataFrame) -> pd.DataFrame:
...
@overload
def __call__(self, inputs: torch.Tensor) -> torch.Tensor:
...
@overload
def __call__(self, inputs: np.ndarray) -> np.ndarray:
...
def __call__(self, inputs: np.ndarray | torch.Tensor | pd.DataFrame):
pass
</code></pre>
<p>However, mypy is raising errors about this:</p>
<pre><code>utils.py:71: error: Overloaded function signature 2 will never be matched: signature 1's parameter type(s) are the same or broader [misc]
utils.py:71: error: Overloaded function signature 3 will never be matched: signature 1's parameter type(s) are the same or broader [misc]
utils.py:71: error: Overloaded function signature 3 will never be matched: signature 2's parameter type(s) are the same or broader [misc]
Found 3 errors in 1 file (checked 21 source files)
</code></pre>
<p>Instances of the three types do not return <code>True</code> for <code>isinstance</code> tests, so I'm trying to figure out why I'm getting these errors.</p>
<p>Edit:
Package versions: <code>mypy==1.2.0</code>, <code>torch==2.0.0</code>, <code>numpy==1.24.3</code>, <code>pandas==2.0.1</code>, and <code>pandas-stubs==2.0.1.230501</code>.</p>
<p>Added <code>reveal_type(pandas.DataFrame)</code>, <code>reveal_type(np.ndarray)</code>, and <code>reveal_type(torch.Tensor)</code> to the file and ran mypy on it. Got this as part of the output, but got the same errors as above as well:</p>
<pre><code>utils.py:13: note: Revealed type is "Overload(def (data: Union[Union[typing.Sequence[Any], numpy.ndarray[Any, Any], pandas.core.series.Series[Any], pandas.core.indexes.base.Index], pandas.core.frame.DataFrame, builtins.dict[Any, Any], typing.Iterable[Union[Union[typing.Sequence[Any], numpy.ndarray[Any, Any], pandas.core.series.Series[Any], pandas.core.indexes.base.Index], Tuple[typing.Hashable, Union[typing.Sequence[Any], numpy.ndarray[Any, Any], pandas.core.series.Series[Any], pandas.core.indexes.base.Index]], builtins.dict[Any, Any]]], None] =, index: Union[Union[pandas.core.indexes.base.Index, pandas.core.series.Series[Any], numpy.ndarray[Any, Any], builtins.list[Any], builtins.dict[Any, Any], builtins.range, builtins.tuple[Any, ...]], None] =, columns: Union[Union[pandas.core.indexes.base.Index, pandas.core.series.Series[Any], numpy.ndarray[Any, Any], builtins.list[Any], builtins.dict[Any, Any], builtins.range, builtins.tuple[Any, ...]], None] =, dtype: Any =, copy: builtins.bool =) -> pandas.core.frame.DataFrame, def (data: Union[builtins.str, builtins.bytes, datetime.date, datetime.datetime, datetime.timedelta, numpy.datetime64, numpy.timedelta64, builtins.bool, builtins.int, builtins.float, pandas._libs.tslibs.timestamps.Timestamp, pandas._libs.tslibs.timedeltas.Timedelta, builtins.complex], index: Union[pandas.core.indexes.base.Index, pandas.core.series.Series[Any], numpy.ndarray[Any, Any], builtins.list[Any], builtins.dict[Any, Any], builtins.range, builtins.tuple[Any, ...]], columns: Union[pandas.core.indexes.base.Index, pandas.core.series.Series[Any], numpy.ndarray[Any, Any], builtins.list[Any], builtins.dict[Any, Any], builtins.range, builtins.tuple[Any, ...]], dtype: Any =, copy: builtins.bool =) -> pandas.core.frame.DataFrame)"
utils.py:14: note: Revealed type is "def [_ShapeType <: Any, _DType_co <: numpy.dtype[Any]] (shape: Union[typing.SupportsIndex, typing.Sequence[typing.SupportsIndex]], dtype: Union[numpy.dtype[Any], None, Type[Any], numpy._typing._dtype_like._SupportsDType[numpy.dtype[Any]], builtins.str, Tuple[Any, builtins.int], Tuple[Any, Union[typing.SupportsIndex, typing.Sequence[typing.SupportsIndex]]], builtins.list[Any], TypedDict('numpy._typing._dtype_like._DTypeDict', {'names': typing.Sequence[builtins.str], 'formats': typing.Sequence[Any], 'offsets'?: typing.Sequence[builtins.int], 'titles'?: typing.Sequence[Any], 'itemsize'?: builtins.int, 'aligned'?: builtins.bool}), Tuple[Any, Any]] =, buffer: Union[builtins.bytes, builtins.bytearray, builtins.memoryview, array.array[Any], mmap.mmap, numpy.ndarray[Any, numpy.dtype[Any]], numpy.generic] =, offset: typing.SupportsIndex =, strides: Union[typing.SupportsIndex, typing.Sequence[typing.SupportsIndex]] =, order: Union[None, Literal['K'], Literal['A'], Literal['C'], Literal['F']] =) -> numpy.ndarray[_ShapeType`1, _DType_co`2]"
utils.py:15: note: Revealed type is "Overload(def (*args: Any, *, device: Union[torch._C.device, builtins.str, builtins.int, None] =) -> torch._tensor.Tensor, def (storage: torch.types.Storage) -> torch._tensor.Tensor, def (other: torch._tensor.Tensor) -> torch._tensor.Tensor, def (size: Union[torch._C.Size, builtins.list[builtins.int], builtins.tuple[builtins.int, ...]], *, device: Union[torch._C.device, builtins.str, builtins.int, None] =) -> torch._tensor.Tensor)"
</code></pre>
|
<python><numpy><pytorch><mypy><python-typing>
|
2023-05-12 18:57:44
| 1
| 2,465
|
iHowell
|
76,239,192
| 3,247,006
|
How to pass multiple arguments to a template tag with multiple "as" arguments in Django Template?
|
<p>With <a href="https://docs.djangoproject.com/en/4.2/howto/custom-template-tags/#simple-tags" rel="nofollow noreferrer">@register.simple_tag</a>, I defined the custom tag <code>test()</code> which has 2 parameters as shown below:</p>
<pre class="lang-py prettyprint-override"><code>from django.template import Library
register = Library()
@register.simple_tag
def test(first_name="John", last_name="Smith"):
return first_name + " " + last_name
</code></pre>
<p>Then, I pass <code>"Anna"</code> and <code>"Miller"</code> to <code>test()</code> with two <code>as</code> arguments in Django Template as shown below:</p>
<pre><code>{% test "Anna" as f_name "Miller" as l_name %}
{{ f_name }} {{ l_name }}
</code></pre>
<p>But, there is the error below:</p>
<blockquote>
<p>django.template.exceptions.TemplateSyntaxError: 'test' received too
many positional arguments</p>
</blockquote>
<p>Actually, there is no error if I pass only <code>"Anna"</code> to <code>test()</code> with one <code>as</code> argument as shown below:</p>
<pre><code>{% test "Anna" as f_name %}
{{ f_name }}
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Anna Smith
</code></pre>
<p>And, there is no error if I pass <code>"Anna"</code> and <code>"Miller"</code> to <code>test()</code> without <code>as</code> arguments as shown below:</p>
<pre><code>{% test "Anna" "Miller" %}
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Anna Miller
</code></pre>
<p>So, how can I pass multiple arguments to a template tag with multiple <code>as</code> arguments in Django Template?</p>
|
<python><django><django-templates><arguments><templatetags>
|
2023-05-12 18:53:42
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,239,157
| 7,822,387
|
how to call databricks notebook from python with rest api
|
<p>I want to create a python notebook on my desktop that pass an input to another notebook in databricks, and then return the output of the databricks notebook. For example, my local python file will pass a string into a databricks notebook, which will reverse the string and then output the result back to my local python file. What would be the best way to achieve this?</p>
<p>This is what I have so far. I am getting a response from the api, but I am expecting an attribute in the metadata called "notebook_output". What am I missing to get this response? Or is there somewhere else I can look to get the notebook output from the run?</p>
<pre><code>import os
from databricks_cli.sdk.api_client import ApiClient
from databricks_cli.clusters.api import ClusterApi
os.environ['DATABRICKS_HOST'] = "https://adb-################.##.azuredatabricks.net/"
os.environ['DATABRICKS_TOKEN'] = "token-value"
api_client = ApiClient(host=os.getenv('DATABRICKS_HOST'), token=os.getenv('DATABRICKS_TOKEN'))
runJson = """
{
"name": "test job",
"max_concurrent_runs": 1,
"tasks": [
{
"task_key": "test",
"description": "test",
"notebook_task":
{
"notebook_path": "/Users/user@domain.com/api_test"
},
"existing_cluster_id": "cluster_name",
"timeout_seconds": 3600,
"max_retries": 3,
"retry_on_timeout": true
}
]
}
"""
runs_api = RunsApi(api_client)
run_id = runs_api.submit_run(runJson)
metadata = runs_api.get_run_output(run_id['run_id'])['metadata']
</code></pre>
<p>Output:</p>
<pre><code>{'job_id': 398029273095601, 'run_id': 150609942, 'creator_user_name': 'user', 'number_in_job': 150609942, 'state': {'life_cycle_state': 'TERMINATED', 'result_state': 'SUCCESS', 'state_message': '', 'user_cancelled_or_timedout': False},
'task': {'notebook_task': {'notebook_path': 'path', 'source': 'WORKSPACE'}},
'cluster_spec': {'existing_cluster_id': 'cluster'},
'cluster_instance': {'cluster_id': 'id', 'spark_context_id': 'id'}, 'start_time': 1683904971067, 'setup_duration': 1000, 'execution_duration': 8000, 'cleanup_duration': 0, 'end_time': 1683904981007, 'run_duration': 9940, 'run_name':
'Untitled', 'run_page_url': 'url', 'run_type': 'SUBMIT_RUN', 'attempt_number': 0, 'format': 'SINGLE_TASK'}
</code></pre>
|
<python><databricks><databricks-rest-api>
|
2023-05-12 18:47:05
| 1
| 311
|
J. Doe
|
76,238,864
| 7,339,624
|
Center a tensor in pytorch
|
<p>Cantering means shifting a vector so that its mean will be zero.</p>
<p>I want to center each row of a tensor, for example:</p>
<pre><code>A = |1 , 1| A_center = |0, 0|
|0 , 4| |-2,2|
</code></pre>
<p>For this, I use this approach:</p>
<pre><code>a_center = a - a.mean(dim=1).unsqueeze(1)
</code></pre>
<p>I want to know if there is any built-in way to center a tensor in PyTorch. Because I know there are built-in functions for length normalization, I assume there might be a function for tensor centering too, but I wasn't able to find it.</p>
|
<python><numpy><pytorch><tensor>
|
2023-05-12 17:54:42
| 1
| 4,337
|
Peyman
|
76,238,813
| 3,247,006
|
How to apply custom "css" and "js" files to all admins in only one app efficiently?
|
<p>I have custom <code>css</code> and <code>js</code> files, <code>admin.py</code> and overridden <a href="https://github.com/django/django/blob/main/django/contrib/admin/templates/admin/base.html" rel="nofollow noreferrer">base.html</a> located in <code>templates/admin/app1/</code> as shown below:</p>
<pre class="lang-none prettyprint-override"><code>django-project
|-core
| |-settings.py
| └-static
| └-core
| └-admin
| └-app1
| |-css
| | └-custom.css # Here
| └-js
| └-custom.js # Here
|-app1
| |-models.py
| └-admin.py # Here
|-app2
└-templates
└-admin
└-app1
└-base.html # Here
</code></pre>
<p>First, I set the custom <code>css</code> and <code>js</code> files to <code>css</code> and <code>js</code> respectively in <a href="https://docs.djangoproject.com/en/4.2/topics/forms/media/#assets-as-a-static-definition" rel="nofollow noreferrer">Media class</a> in all admins <code>Person</code>, <code>Animal</code> and <code>Food</code> in <code>app1</code> as shown below, then I can apply them to all admins <code>Person</code>, <code>Animal</code> and <code>Food</code> in only <code>app1</code>:</p>
<pre class="lang-py prettyprint-override"><code># "app1/admin.py"
from django.contrib import admin
from .models import Person, Animal, Food
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
class Media:
css = {
'all': ('core/admin/app1/css/custom.css',) # Here
}
js = ('core/admin/app1/js/custom.js',) # Here
@admin.register(Animal)
class AnimalAdmin(admin.ModelAdmin):
class Media:
css = {
'all': ('core/admin/app1/css/custom.css',) # Here
}
js = ('core/admin/app1/js/custom.js',) # Here
@admin.register(Food)
class FoodAdmin(admin.ModelAdmin):
class Media:
css = {
'all': ('core/admin/app1/css/custom.css',) # Here
}
js = ('core/admin/app1/js/custom.js',) # Here
</code></pre>
<p>But, the solution above is not efficient so instead I set them after <code><link ... "admin/css/base.css" %}{% endblock %}"></code> in <code>base.html</code> as shown below but the solution below doesn't work:</p>
<pre><code># "templates/admin/app1/base.html"
# ...
<title>{% block title %}{% endblock %}</title>
<link rel="stylesheet" href="{% block stylesheet %}{% static "admin/css/base.css" %}{% endblock %}">
<link rel="stylesheet" href="{% static "core/admin/app1/css/custom.css" %}"> {# Here #}
<script src="{% static 'core/admin/app1/js/custom.js' %}" defer></script> {# Here #}
{% block dark-mode-vars %}
<link rel="stylesheet" href="{% static "admin/css/dark_mode.css" %}">
# ...
</code></pre>
<p>So, I put <code>base.html</code> in <code>templates/admin/</code> as shown below, then the solution above works but the custom <code>css</code> and <code>js</code> files are applied to all admins in all apps:</p>
<pre class="lang-none prettyprint-override"><code>django-project
|-core
| |-settings.py
| └-static
| └-core
| └-admin
| └-app1
| |-css
| | └-custom.css
| └-js
| └-custom.js
|-app1
| |-models.py
| └-admin.py
|-app2
└-templates
└-admin
└-base.html # Here
</code></pre>
<p>So, how can I apply the custom <code>css</code> and <code>js</code> files to all admins <code>Person</code>, <code>Animal</code> and <code>Food</code> in only <code>app1</code> efficiently?</p>
|
<javascript><python><css><django><django-admin>
|
2023-05-12 17:46:27
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,238,761
| 9,058,482
|
Why does running the following code with os.fork() in a Jupyter Notebook cause the notebook to crash?
|
<p>I am trying to run the following Python code in a Jupyter Notebook cell that utilizes os.fork() to create a child process. However, the notebook crashes when executing the cell. The code is as follows:</p>
<pre><code>import os
def main():
print("Starting process...")
# Call os.fork() to create a child process
pid = os.fork()
if pid == 0:
# This block runs in the child process
print("I'm the child process!")
else:
# This block runs in the parent process
print("I'm the parent process!")
print("Exiting process...")
if __name__ == "__main__":
main()
</code></pre>
|
<python><jupyter-notebook>
|
2023-05-12 17:38:25
| 1
| 569
|
Harsh Kumar Chourasia
|
76,238,730
| 12,961,237
|
Is a websocket connection handled by one single lambda instance on AWS?
|
<p>If I have a websocket connection:</p>
<pre><code>client <=> AWS Lambda [Python]
</code></pre>
<p>While a connection is active: Will the client be connected to the same lambda instance for the whole time or can it change?</p>
|
<python><amazon-web-services><websocket>
|
2023-05-12 17:33:28
| 1
| 1,192
|
Sven
|
76,238,569
| 2,687,317
|
pandas- create new series from multilevel columns and aggregate
|
<p>So I have some complicated analysis to do. I have this data in a df:</p>
<pre><code> Date/Time TimeStamp CallerId CallType watts Band slot Channel
0 20:02.0 3113677432 17794800 C1 0.060303 12 1 2
1 20:02.0 3113677432 5520488 OP8 0.302229 12 1 1
2 20:02.0 3113677432 5520488 OP8 0.302229 13 1 1
3 20:02.0 3113677432 5520488 OP8 0.302229 12 2 1
4 20:02.0 3113677432 5520488 OP8 0.302229 13 2 1
5 20:02.0 3113677432 5520488 OP8 0.302229 12 3 1
6 20:02.0 3113677432 5520488 OP8 0.302229 13 3 1
7 20:02.0 3113677432 5520488 OP8 0.302229 12 4 1
8 20:02.0 3113677432 5520488 OP8 0.302229 13 4 1
9 20:07.0 3113677488 17794800 C1 0.151473 12 1 2
10 20:07.0 3113677488 5218651 CC8kds 0.475604 13 4 1
11 20:07.0 3113677488 5514318 BD 1.906933 12 1 6
12 20:11.0 3113677532 17794800 C1 0.038048 12 1 2
13 20:11.0 3113677532 5218651 CC8kds 0.300086 13 4 1
14 20:11.0 3113677532 5501460 PTN3 4.790000 12 1 5
15 21:51.0 3113678643 9895585 CC8kds 0.075378 12 1 1
16 21:51.0 3113678643 5482185 OP8 0.302229 13 1 1
17 21:51.0 3113678643 5482185 OP8 0.302229 13 2 1
18 21:51.0 3113678643 5482185 OP8 0.302229 13 3 1
19 21:51.0 3113678643 5482185 OP8 0.302229 13 4 1
20 21:51.0 3113678643 5513470 PTN3 4.790000 12 3 1
21 21:51.0 3113678643 5518399 PTN3 4.790000 12 3 5
</code></pre>
<p>The TimeStamp repeats over many of the items since each row captures a Band (10-20) and a Channel in that band (1-7). The slot further divides the channel into (1-4) time slots. I use a pivot_table to group the watts (power) in each (Band, Channel, slot) as follows:</p>
<pre><code>df = df.pivot_table(index='TimeStamp', columns=['Band','Channel','slot'], values='watts',
aggfunc=sum, fill_value=0)
</code></pre>
<p><a href="https://i.sstatic.net/ZOBnB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZOBnB.png" alt="enter image description here" /></a>
What I'd like to do is create a new series, <code>freq = 1200+(Band-1)+Channel/12</code> and then sum the power in all 4 slots ... for example:</p>
<p>Band 12, Channel 1 @ TimeStamp 3113677432 results in a total pwr of
<code>0.302229+0.302229+0.302229+0.302229 = 1.208916</code> for a freq value of <code>1200+(12-1)+1/12=1211.083</code> ... so I'd have something like this:</p>
<pre><code>TimeStamp Freq Pwr ...
3113677432 1211.083 1.208916 ... # This is the total for Band 12, Channel 1 @ this timestamp
3113677432 1211.640 2.208916 ... # this isn't a real total
</code></pre>
<p>I need this for each freq computed at each TimeStamp. It would be nice to have those Freq as their own (multilevel) column headers (I'm making up numbers here):</p>
<pre><code>Freq 1211.083 1211.64 1212.04...
TimeStamp
3113677432 1.208916 2.208916 2.208916...
3113677488 2.058406 0.475604 2.208916...
</code></pre>
<p>I can group on one col header like this:</p>
<pre><code>df.groupby(level=0, axis=1).sum() # Groups by Band
</code></pre>
<p>or</p>
<pre><code>df.groupby(level=1, axis=1).sum() # Groups by Channel - but across ALL Bands - WRONG
</code></pre>
<p>So is there an straightforward way to group by both Band and Channel and sum over all 4 slots?</p>
|
<python><pandas><group-by>
|
2023-05-12 17:07:31
| 1
| 533
|
earnric
|
76,238,519
| 572,575
|
Python Cannot search data in Facebook
|
<p>I want to search data in facebook by using <a href="https://github.com/kevinzg/facebook-scraper" rel="nofollow noreferrer">facebook_scraper</a> like this code</p>
<pre><code>from facebook_scraper import get_posts
for post in get_posts('nintendo', cookies="cookies.json", pages=10):
print(post)
</code></pre>
<p>I try to search in another language like this.</p>
<pre><code>get_posts('khách sạn', cookies="cookies.json", pages=10)
</code></pre>
<p>It show output is empty. How to search another language using facebook_scraper?</p>
|
<python><facebook><web-scraping>
|
2023-05-12 17:01:01
| 0
| 1,049
|
user572575
|
76,238,495
| 2,062,470
|
python "from datetime import date" gets forgotten
|
<p>Trying to parse csv dates (climate date) as datetime date objects but somehow "from datetime import date" gets forgotten. The error:
<code>TypeError: 'str' object is not callable</code> is shown when calling date().</p>
<pre><code>import csv
from datetime import date
file = 'nbcn-monthly_SMA_previous.csv'
with open(file, newline='') as csvfile:
readData = list(csv.DictReader(csvfile, delimiter=';', quotechar='|'))
meanTemp = {}
for i in readData:
date = i['date']
year = date[0:4]
month = date[4:6]
day = date[6:8]
from datetime import date # Without this it doesn't work
dateF = date(int(year), int(month), int(day))
mtemp = i['tre200m0']
meanTemp[date] = {'date':dateF, 'year':year, 'month':month, 'day':day, 'mmTemp':mtemp}
</code></pre>
<p>The code should work without the extra <code>from datetime import date</code> but doesn't.
Sure, putting the extra import there works but this cannot be right.</p>
<p>The problem persists in the iPython shell. Run the script and the error is shown. Feed in date(1984,1,1) and the error is shown. Re-import date and everything is fine.</p>
<p>Anybody know what is wrong here?</p>
|
<python><datetime>
|
2023-05-12 16:57:07
| 1
| 409
|
mercergeoinfo
|
76,238,414
| 5,605,073
|
Recursively lookup value with polars?
|
<p>I want to be able to get the manager_id of each manager recursively using a polars DF which has two columns:</p>
<p>"employee_id", "manager_1_id"</p>
<p>In pandas, this code was:</p>
<pre><code>id_index = df.set_index("employee_id")["manager_1_id"]
for i in range(1, 12):
df[f"manager_{str(i + 1)}_id"] = df[f"manager_{str(i)}_id"].map(id_index)
</code></pre>
<p>Each manager_id value is also an employee ID and ultimately I want a column per manager:</p>
<p>"employee_id, manager_1_id, manager_2_id, manager_3_id, ..."</p>
<p>Is there a good way to achieve this with polars without running the pandas snippet? I was attempting to loop through some left joins, but it did not seem like a great approach.</p>
<p>Edit: example is below. The raw data has two columns, the employee ID (all many thousands of employees) and the employee ID of their direct manager.</p>
<pre><code>employee_id | manager_1_id
1 | 3
2 | 5
3 | 4
4 | 5
5 |
</code></pre>
<p>Goal is to expand this out into columns (manager_1 to manager_12)</p>
<pre><code>employee_id | manager_1_id | manager_2_id | manager_3_id | ...
1 3 4 5
2 5
3 4 5
5 5
5
</code></pre>
<p>Hopefully that is clear. Employee 1 reports to Employee 3, who reports to Employee 4, who reports to Employee 5. Employee 5 is the CEO who reports to no one.</p>
|
<python><dataframe><python-polars>
|
2023-05-12 16:43:07
| 2
| 588
|
ldacey
|
76,238,412
| 2,233,608
|
Getting a mypy type error using pydantic generics and don't know why
|
<p>With the following project setup</p>
<p>pyproject.toml</p>
<pre><code>[tool.poetry]
name = "test"
version = "0.1.0"
description = ""
authors = ["Joe Smith <joe.smith@example.com>"]
[tool.poetry.dependencies]
python = "^3.11"
pydantic = "^1.10.7"
mypy = "^1.3.0"
[tool.mypy]
plugins = [
"pydantic.mypy"
]
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>And the following code example:</p>
<pre><code>from typing import Generic, TypeVar
from pydantic.generics import GenericModel
T = TypeVar("T")
class BaseModel(GenericModel, Generic[T]):
data: T
class StringModel(BaseModel[str]):
pass
model = StringModel(data="data")
model.data = "other data"
</code></pre>
<p>I am getting the following mypy error:</p>
<pre><code>error: Incompatible types in assignment (expression has type "str", variable has type "T") [assignment]
model.data = "other data"
^~~~~~~~~~~~
</code></pre>
<p>It seems to be an issue with the <code>pydantic.mypy</code> mypy plugin because when I remove it, I don't get the error anymore.</p>
|
<python><mypy><pydantic>
|
2023-05-12 16:42:49
| 1
| 1,178
|
niltz
|
76,238,390
| 13,921,399
|
Identify first block in a pandas series based on switching values
|
<p>Consider the following pandas series:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
s = pd.Series([0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1])
</code></pre>
<p>I want to identify the first block of 1 values. The block starts when 0 switches to 1 for the first time and ends when it switches back (don't has to). The rest should just equal zero. One restriction: no iteration allowed, only pure pandas.</p>
<p>Expected output:</p>
<pre class="lang-py prettyprint-override"><code>s_new = pd.Series([0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
</code></pre>
|
<python><pandas>
|
2023-05-12 16:39:52
| 1
| 1,811
|
ko3
|
76,238,354
| 6,866,311
|
Can't install PyTorch with CUDA enabled on WSL
|
<p>For the last few months, I've been intermittently running into an issue where I can't install PyTorch with CUDA enabled on WSL on Windows 11.</p>
<p>I use a Windows 11 Desktop PC with an RTX 4090 GPU. The Windows installation has WSL installed and enabled, and I run all my Jupyter Notebooks from WSL.</p>
<p>I used the following command from <a href="https://pytorch.org/get-started/locally/" rel="nofollow noreferrer">PyTorch's website</a> to install torch:</p>
<p><code>conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia</code></p>
<p>However, when I run the following cell in a Jupyter Notebook, I get a negative response:</p>
<pre><code>import torch
cuda_available = torch.cuda.is_available()
if cuda_available:
print("CUDA is available")
else:
print("CUDA is not available")
</code></pre>
<p><code>CUDA is not available</code></p>
<p>How do I get a CUDA enabled PyTorch version running?</p>
|
<python><pytorch><gpu><nvidia>
|
2023-05-12 16:35:22
| 1
| 549
|
Oion Akif
|
76,238,268
| 10,634,126
|
Retrieve transferred file size from AWS S3 put_object
|
<p>I have code to upload a file to AWS S3 in Python using <code>boto3.client</code> and <code>put_object</code> like (example below assumes JSON data):</p>
<pre><code>def snapshot(data, path):
s3_client = boto3.client("s3", region_name="us-east-1")
r = s3_client.put_object(
Body=bytes(json.dumps(data, indent=2, default=str).encode("utf-8")),
ContentType="application/json",
Bucket=BUCKET,
Key=path + ".json"
)
*** Return r.fileSize or something like that ***
</code></pre>
<p>In looking through the documentation for <code>boto3</code> and for the AWS S3 API, I don't see any way to retrieve the file size of the transferred object.</p>
<p>Is there any way to retrieve this without an additional interaction with S3 (i.e. reading file size from the newly uploaded file), and without simply using Python to get the size of the object passed?</p>
|
<python><amazon-web-services><amazon-s3><boto3><filesize>
|
2023-05-12 16:23:23
| 0
| 909
|
OJT
|
76,238,241
| 1,473,517
|
How to read and insert an image in a Word document?
|
<p>I have a skeleton Word document in .docx format that includes an image of my signature. I want to update the document using Python. This works well but the signature image disappears when I do this. How can I read the image from the file and then insert it somewhere? I tried</p>
<pre><code>from docx import Document
doc = Document('test.docx')
inline_shapes = doc.inline_shapes
for inline_shape in inline_shapes:
....
</code></pre>
<p>But inline_shapes is empty.</p>
<p>I am using python-docx version '0.8.11'.</p>
|
<python><python-docx>
|
2023-05-12 16:20:22
| 0
| 21,513
|
Simd
|
76,238,185
| 15,637,435
|
Apply conditional opacity in plotly figure
|
<h3>Problem</h3>
<p>I'm trying to create a Plotly chart where I would like to apply conditional opacity if an entry is selected (column 'selected': True or False) or not. Unfortunately, the things I tried did not work out. How can I apply conditional opacity to my Plotly chart?</p>
<h2>What I tried</h2>
<p>I tried to hand the 'opacity'-values of the dataframe to the <code>opacity</code> argument of <code>px.scatter()</code> to, which resulted in a ValueError.</p>
<p>I also tried to update the traces with <code>product_fig.update_traces(marker=dict(opacity=df['opacity'].tolist()))</code>, which unfortunately also did not work and did not apply the opacity to the points in the scatter plot.</p>
<h2>Code</h2>
<pre><code>import pandas as pd
import plotly.express as px
def build_product_data_fig(df: pd.DataFrame) -> go.Figure:
df['opacity'] = df['selected'].apply(lambda x: 0.8 if x else 0.2)
product_fig = px.scatter(df,
x='emission',
y='weight_gram',
color='category',
size='price',
template='plotly_dark')
return product_fig
</code></pre>
<h2>Sample data</h2>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;">emission</th>
<th style="text-align: right;">weight_gram</th>
<th style="text-align: center;">category</th>
<th style="text-align: center;">selected</th>
<th style="text-align: center;">price</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">120.4</td>
<td style="text-align: right;">1250</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">True</td>
<td style="text-align: center;">32.0</td>
</tr>
<tr>
<td style="text-align: right;">92.0</td>
<td style="text-align: right;">950</td>
<td style="text-align: center;">B</td>
<td style="text-align: center;">False</td>
<td style="text-align: center;">20.0</td>
</tr>
<tr>
<td style="text-align: right;">105.5</td>
<td style="text-align: right;">1100</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">True</td>
<td style="text-align: center;">25.0</td>
</tr>
<tr>
<td style="text-align: right;">87.8</td>
<td style="text-align: right;">800</td>
<td style="text-align: center;">B</td>
<td style="text-align: center;">False</td>
<td style="text-align: center;">18.0</td>
</tr>
<tr>
<td style="text-align: right;">100.2</td>
<td style="text-align: right;">1050</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">False</td>
<td style="text-align: center;">22.0</td>
</tr>
<tr>
<td style="text-align: right;">110.6</td>
<td style="text-align: right;">1150</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">True</td>
<td style="text-align: center;">28.0</td>
</tr>
<tr>
<td style="text-align: right;">95.2</td>
<td style="text-align: right;">900</td>
<td style="text-align: center;">B</td>
<td style="text-align: center;">False</td>
<td style="text-align: center;">16.0</td>
</tr>
<tr>
<td style="text-align: right;">115.8</td>
<td style="text-align: right;">1200</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">True</td>
<td style="text-align: center;">30.0</td>
</tr>
</tbody>
</table>
</div>
|
<python><plotly>
|
2023-05-12 16:12:15
| 2
| 396
|
Elodin
|
76,238,102
| 3,247,006
|
How to show alert messages with the buttons in "Change List" page in Django Admin?
|
<p>There is <code>Person</code> model as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "app1/models.py"
from django.db import models
class Person(models.Model):
first_name = models.CharField(max_length=20)
last_name = models.CharField(max_length=20)
</code></pre>
<p>And, there is <code>Person</code> admin as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "app1/admin.py"
from django.contrib import admin
from .models import Person
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name')
list_editable = ('first_name', 'last_name')
list_per_page = 3
list_display_links = None
ordering = ('id',)
</code></pre>
<p>And, there is <strong>Change List</strong> page for <code>Person</code> admin as shown below:</p>
<p><a href="https://i.sstatic.net/gYcjy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gYcjy.png" alt="enter image description here" /></a></p>
<p>Then, an alert message is shown after changing the first or last names, then clicking on <code>Go</code> button as shown below:</p>
<p><a href="https://i.sstatic.net/wBTW6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wBTW6.png" alt="enter image description here" /></a></p>
<p>So now, how can I show the same alert message after changing the first or last names, then clicking on <strong>page numbers'</strong>, <code>Show all</code> or <code>ADD PERSON</code> buttons as shown below:</p>
<p><a href="https://i.sstatic.net/IoJKr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IoJKr.png" alt="enter image description here" /></a></p>
<p>In addition, an alert message is shown after not changing the first or last names and selecting <strong>an Admin Action</strong>, then clicking on <code>Save</code> button as shown below:</p>
<p><a href="https://i.sstatic.net/cbIgs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cbIgs.png" alt="enter image description here" /></a></p>
<p>And, another alert message is shown after changing the first or last names and selecting <strong>an Admin Action</strong>, then clicking on <code>Save</code> button as shown below:</p>
<p><a href="https://i.sstatic.net/g3GHs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g3GHs.png" alt="enter image description here" /></a></p>
<p>So now again, how can I show the same 2 alert messages after not changing or changing the first or last names, then clicking on <strong>page numbers'</strong>, <code>Show all</code> or <code>ADD PERSON</code> buttons as shown below:</p>
<p><a href="https://i.sstatic.net/qnVeo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qnVeo.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/AIzZF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AIzZF.png" alt="enter image description here" /></a></p>
<p>So, how can I do them?</p>
|
<javascript><python><django><django-models><django-admin>
|
2023-05-12 16:01:46
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,238,069
| 2,471,211
|
EMR - Pyspark, No module named 'boto3'
|
<p>I am running an EMR with the following creation statement:</p>
<pre><code>$ aws emr create-cluster \
--name "my_cluster" \
--log-uri "s3n://somebucket/" \
--release-label "emr-6.8.0" \
--service-role "arn:aws:iam::XXXXXXXXXX:role/EMR_DefaultRole" \
--ec2-attributes '{"InstanceProfile":"EMR_EC2_DefaultRole","EmrManagedMasterSecurityGroup":"sg-xxxxxxxx","EmrManagedSlaveSecurityGroup":"sg-xxxxxxxx","KeyName":"some_key","AdditionalMasterSecurityGroups":[],"AdditionalSlaveSecurityGroups":[],"ServiceAccessSecurityGroup":"sg-xxxxxxxx","SubnetId":"subnet-xxxxxxxx"}' \
--applications Name=Spark Name=Zeppelin \
--configurations '[{"Classification":"spark-env","Configurations":[{"Classification":"export","Properties":{"PYSPARK_PYTHON":"/usr/bin/python3"}}],"Properties":{}}]' \
--instance-groups '[{"InstanceCount":2,"InstanceGroupType":"CORE","Name":"Core","InstanceType":"r6g.xlarge","EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"VolumeType":"gp2","SizeInGB":32},"VolumesPerInstance":2}]},"Configurations":[{"Classification":"spark-env","Configurations":[{"Classification":"export","Properties":{"PYSPARK_PYTHON":"/usr/bin/python3"}}],"Properties":{}}]},{"InstanceCount":1,"InstanceGroupType":"MASTER","Name":"Primary","InstanceType":"r6g.xlarge","EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"VolumeType":"gp2","SizeInGB":32},"VolumesPerInstance":2}]},"Configurations":[{"Classification":"spark-env","Configurations":[{"Classification":"export","Properties":{"PYSPARK_PYTHON":"/usr/bin/python3"}}],"Properties":{}}]}]' \
--bootstrap-actions '[{"Args":[],"Name":"install python package","Path":"s3://something/bootstrap/bootstrap-script.sh"}]' \
--scale-down-behavior "TERMINATE_AT_TASK_COMPLETION" \
--auto-termination-policy '{"IdleTimeout":3600}' \
--step-concurrency-level "3" \
--os-release-label "2.0.20230418.0" \
--region "us-east-1"
</code></pre>
<p>My bootstrap script (bootstrap-script.sh):</p>
<pre><code>#!/bin/bash
echo -e 'Installing Boto3... \n'
which pip3
which python3
pip3 install -U boto3 botocore --user
</code></pre>
<p>Once the EMR is up, I add this step:</p>
<pre><code>$ spark-submit --deploy-mode cluster s3://something/py-spark/simple.py
</code></pre>
<p>Simple.py is just like this:</p>
<pre><code>import boto3
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.appName('Simple test') \
.getOrCreate()
spark.stop()
</code></pre>
<p>My step fails, with:</p>
<pre><code>ModuleNotFoundError: No module named 'boto3'
</code></pre>
<p>I logged on the master node as hadoop and ran:</p>
<pre><code>$ pip3 freeze
aws-cfn-bootstrap==2.0
beautifulsoup4==4.9.3
boto==2.49.0
click==8.1.3
docutils==0.14
jmespath==1.0.1
joblib==1.1.0
lockfile==0.11.0
lxml==4.9.1
mysqlclient==1.4.2
nltk==3.7
nose==1.3.4
numpy==1.20.0
py-dateutil==2.2
pystache==0.5.4
python-daemon==2.2.3
python37-sagemaker-pyspark==1.4.2
pytz==2022.2.1
PyYAML==5.4.1
regex==2021.11.10
simplejson==3.2.0
six==1.13.0
tqdm==4.64.0
windmill==1.6
</code></pre>
<p>Yet, in my bootstrap logs:</p>
<pre><code>Installing Boto3...
/usr/bin/pip3
/usr/bin/python3
Collecting boto3
Downloading boto3-1.26.133-py3-none-any.whl (135 kB)
Collecting botocore
Downloading botocore-1.29.133-py3-none-any.whl (10.7 MB)
Collecting s3transfer<0.7.0,>=0.6.0
Downloading s3transfer-0.6.1-py3-none-any.whl (79 kB)
Requirement already satisfied, skipping upgrade: jmespath<2.0.0,>=0.7.1 in /usr/local/lib/python3.7/site-packages (from boto3) (1.0.1)
Collecting urllib3<1.27,>=1.25.4
Downloading urllib3-1.26.15-py2.py3-none-any.whl (140 kB)
Collecting python-dateutil<3.0.0,>=2.1
Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Requirement already satisfied, skipping upgrade: six>=1.5 in /usr/local/lib/python3.7/site-packages (from python-dateutil<3.0.0,>=2.1->botocore) (1.13.0)
Installing collected packages: urllib3, python-dateutil, botocore, s3transfer, boto3
Successfully installed boto3-1.26.133 botocore-1.29.133 python-dateutil-2.8.2 s3transfer-0.6.1 urllib3-1.26.15
</code></pre>
<p>And the log looks the same for all my nodes.</p>
<p>So on the master, as hadoop, I ran:</p>
<pre><code>$ which python3
/bin/python3
</code></pre>
<p>Then, just to verify my bootstrap actually did something:</p>
<pre><code>$ /usr/bin/pip3 freeze
aws-cfn-bootstrap==2.0
beautifulsoup4==4.9.3
boto==2.49.0
boto3==1.26.133
botocore==1.29.133
click==8.1.3
docutils==0.14
</code></pre>
<p>So the python3 I updated in the bootstrapper (/usr/bin/python3) is not the same that's used by default for hadoop.</p>
<p>Yet I tried to make sure pyspark uses the right "python" in my EMR configs:</p>
<pre><code>{"Classification":"spark-env","Configurations":[{"Classification":"export","Properties":{"PYSPARK_PYTHON":"/usr/bin/python3"}}],"Properties":{}}]}
</code></pre>
<p>But PYSPARK_PYTHON doesn't seem to be set on any of the nodes when I login. I do not understand why.</p>
<p>I am looking for the correct steps to follow to get my "import boto3" working from my pyspark script (I do not want to make changes to simple.py).</p>
<hr />
<p>Update: it seems to work in client mode:</p>
<pre><code>$ spark-submit --deploy-mode client s3://something/py-spark/simple.py
</code></pre>
<p>But of course, I want to run it in production, in cluster mode...</p>
|
<python><pyspark><amazon-emr>
|
2023-05-12 15:57:22
| 1
| 485
|
Flo
|
76,237,883
| 5,896,319
|
How to display a Foreign key serializer as a dropdown?
|
<p>I have three different models:</p>
<pre><code>class Province(Model):
province = models.CharField(max_length=250)
class BaseCase(ModelWithStamps):
...
province = models.ForeignKey(Province, null=True, blank=True, on_delete=models.CASCADE)
class Event(BaseEvent):
....
@property
def province(self):
if hasattr(self, 'case'):
return self.case.province
return None
@property
def province_id(self):
if hasattr(self, 'case'):
return self.case.province.id
return None
</code></pre>
<p>And I have a serializer:</p>
<pre><code>class BaseEditCaseSerializer(...):
....
province = serializers.ModelField(model_field=Case()._meta.get_field('province'),
required=False, allow_null=True)
class Meta:
model = ManualEvent
fields = (..., 'province')
</code></pre>
<p>Even, the province is a foreign key, the province field is displayed as a "text field" in front end and I cannot change it from there.</p>
<p>I want to display it as a dropdown (<code>Province.objects.all()</code>).
How can I do it?</p>
|
<python><django>
|
2023-05-12 15:34:24
| 3
| 680
|
edche
|
76,237,843
| 1,745,291
|
Is it possible to create a dynamic module with a custom `__dict__` class?
|
<p>When doing <code>module = types.ModuleType('mymodule')</code>, we don't get any chance of doing further customization, and we can't override <code>module.__dict__</code> since it's readonly...</p>
<p>The same seems to apply with all other <code>importlib</code> functions (<code>module_from_spec</code> alike)</p>
<p>Is there a way of doing it which is not too much dirty (like... sub-classing ModuleType, and creating a temporary module and copying the fields...) ?</p>
|
<python><import><python-importlib>
|
2023-05-12 15:30:03
| 1
| 3,937
|
hl037_
|
76,237,791
| 14,957,517
|
ModuleNotFoundError: No module named 'jax.experimental.vectorize'
|
<p>I had this issue when I was working on a code that was not mine, and I was really overwhelmed by it for a month without a fixed any help online.</p>
<p>I have jax-0.4.10 installed and Using a Mac book Pro.</p>
<p>I never found any solution online probably because this error was specifically my problem.</p>
|
<python><module><vectorization><modulenotfounderror><jax>
|
2023-05-12 15:24:22
| 1
| 987
|
Cyebukayire
|
76,237,762
| 9,937,874
|
Pandas select rows where value is in either of two columns
|
<p>I have a dataframe that looks like this</p>
<pre><code>Title Description
Area 51 Aliens come to earth on the 4th of July.
Matrix Hacker Neo discovers the shocking truth.
Spaceballs A star-pilot for hire and his trusty sidekick must come to the rescue of a princess.
</code></pre>
<p>I am want to select rows that contain the word Space or Aliens in either the title or description.</p>
<p>I can select rows that contain space using a single column but I am unsure of how to include the second column.</p>
<pre><code> words_of_interest = ["Space", "Aliens"]
df[df["Title"].str.contains("|".join(words_of_interest))]
Title Description
Area 51 Aliens come to earth on the 4th of July.
Spaceballs A star-pilot for hire and his trusty sidekick must come to the rescue of a
</code></pre>
|
<python><pandas>
|
2023-05-12 15:21:45
| 2
| 644
|
magladde
|
76,237,729
| 180,826
|
Python, FASTAPI and PlanetScale - I cannot connect to the my MySQL Database because of some strange SSL error
|
<p>I'm hosting FASTAPI app inside of a docker container. I can build and run docker and run the application perfectly fine in the browser. The issue occurs when I call an API that attempts to talk to my MySQL db from the hosting platform PlanetScale. Provided are my dockerfile:</p>
<p>DOCKERFILE</p>
<pre><code>FROM python:3.6.4
#WORKDIR /etc/ssl/
#add /etc/ssl/certs:/etc/ssl/certs:ro
RUN mkdir /apis
#RUN mkdir /etc
#RUN mkdir /etc/ssl
WORKDIR /apis
COPY requirements.txt .
COPY main.py .
COPY prod.env .
COPY cert.pem /etc/ssl/
RUN python -m pip install --upgrade pip
#RUN pip install -r requirements.txt
RUN pip install "fastapi[all]"
RUN pip install mysql
RUN pip install mysqlclient
RUN pip install python-dotenv
RUN pip install PyMySQL
COPY . .
#CMD yum update ca-certificates
# THE BELOW SECTION IS POPULATED WITH THE CORRECT SETTINGS.
ENV HOST=<>
ENV USERNAME=<>
ENV PASSWORD=<>
ENV DATABASE=<>
RUN echo "Host: $HOST"
RUN echo "Username: $USERNAME"
RUN echo "Password: $PASSWORD"
RUN echo "Database: $DATABASE"
#RUN cd /etc/ssl/
#COPY /etc/ssl/ .
CMD ["uvicorn", "main:app", "--host=0.0.0.0", "--port=80"]
</code></pre>
<p>When I attempting to run an api I get the following error:
MySQLdb.OperationalError: (2026, 'SSL connection error: unknown error number')</p>
<p>Any ideas</p>
|
<python><mysql><docker><fastapi>
|
2023-05-12 15:17:07
| 0
| 1,221
|
Brandon Michael Hunter
|
76,237,647
| 3,196,122
|
Is there a way to set the color of plotly legend items?
|
<p>I want to visualize multiple boolean arrays using Scatter traces with timestamps as x and the array-index as y. The values should be visualized using the colors. However, what I do not like is that the colors of the legend items are always set to the first element of these traces. Is there a way to set the legend item color independently? Here's a minimal example to show the issue.</p>
<pre><code>from plotly import graph_objects as go
data = np.random.randint(0,2,(3, 50))
traces = [
go.Scatter(
x = np.arange(len(line)),
y = np.ones(len(line)) * i,
mode='markers',
marker={
'color': line,
'cmin': 0,
'cmax': 1,
'colorscale': [[0, 'red'], [1, 'green']]
},
) for i, line in enumerate(data)
]
fig = go.Figure(traces)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/OyaAw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OyaAw.png" alt="enter image description here" /></a></p>
|
<python><colors><plotly><legend><scatter-plot>
|
2023-05-12 15:07:28
| 1
| 483
|
KrawallKurt
|
76,237,511
| 3,692,455
|
Unable to validate a YAML file using PyYAML library
|
<p>I am trying to validate YAML files using Python and I am using the PyYAML library for that.</p>
<p>But unfortunately, it reports any files it checks as valid YAML files, not only YAML files. I am not sure why.</p>
<p>Below is the Python script. Suppose you pass a sample bash script <code>file_path = './testbashscript.sh'</code> to validate, it reports the bash script as a valid YAML file which does not make sense to me.</p>
<pre><code>#!/usr/bin/env python3
import yaml
def is_yaml_file(file_path):
try:
with open(file_path, 'r') as file:
yaml.safe_load(file)
return True
except:
return False
def main():
file_path = './testbashscript.sh'
if is_yaml_file(file_path):
print(f"{file_path} is a valid YAML file")
else:
print(f"{file_path} is not a valid YAML file")
if __name__ == "__main__":
main()
</code></pre>
<p>Content of the sample bash script:</p>
<pre><code>→ cat testbashscript.sh
#!/usr/bin/env bash
echo "Hello World"
</code></pre>
<p>The output I get is:</p>
<pre><code>→ python3 yamlvalidate.py
./testbashscript.sh is a valid YAML file
</code></pre>
<p>The expected result is, it should say it is not a valid YAML file. Am I doing something wrong here?</p>
|
<python><python-3.x><pyyaml>
|
2023-05-12 14:54:29
| 1
| 903
|
vjwilson
|
76,237,417
| 12,155,713
|
How to combine multiple crosstabs while preserving same binary column
|
<p>I would like stack 2 crosstabs for 2 categorical variables (gender, race) while preserving the same binary variable (goal) as columns.</p>
<p>Take for example this dataset:</p>
<pre><code>df = pd.DataFrame({'GenderDSC':['Male','Female','Female','Male','Male','Male'],
'Goal':[0,1,0,1,0,0],
'Race':['African-American','White','White','Asian','Asian','White']})
df
</code></pre>
<p>Where crosstabs would output:</p>
<p><a href="https://i.sstatic.net/QiD0Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QiD0Z.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/UtsaQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UtsaQ.png" alt="enter image description here" /></a></p>
<p>But would like them to be merged like:</p>
<p><a href="https://i.sstatic.net/tnBoq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tnBoq.png" alt="enter image description here" /></a></p>
<p>So far, I have tried:</p>
<pre><code>a = pd.crosstab(df['GenderDSC'],df['Goal'])
b = pd.crosstab(df['Race'],df['Goal'])
pd.concat([a,b])
</code></pre>
<p><a href="https://i.sstatic.net/3mop3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3mop3.png" alt="enter image description here" /></a></p>
<p>But I lose the groups for each columns (genderdsc, race), so thinking there could be a better way to connect them.</p>
|
<python><pandas><pivot-table>
|
2023-05-12 14:41:29
| 2
| 354
|
Gabe Verzino
|
76,237,353
| 5,389,418
|
Unable to import module 'vc__handler__python'
|
<p>I am trying to deploy django web app through vercel. I have added django in requirements.txt file but it shows this error:</p>
<p><code>[ERROR] Runtime.ImportModuleError: Unable to import module 'vc__handler__python': No module named 'django'</code></p>
<p>Feel free to share if it can be fixed.</p>
|
<python><django><vercel><modulenotfounderror>
|
2023-05-12 14:32:57
| 0
| 506
|
Leo
|
76,237,292
| 239,879
|
My Transformer Encoder / Decoder has the same values for all time steps in eval with PyTorch
|
<p>I have a model:</p>
<pre><code># model.py
import torch
import torch.nn as nn
import math
class TransformerAutoencoder(nn.Module):
def __init__(self, d_model, nhead, num_layers, dim_feedforward, dropout=0.0):
super(TransformerAutoencoder, self).__init__()
self.encoder = nn.TransformerEncoder(
encoder_layer=nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward, dropout),
num_layers=num_layers,
)
self.relu = nn.ReLU()
self.bottleneck = nn.Linear(d_model, d_model)
self.decoder = nn.TransformerDecoder(
decoder_layer=nn.TransformerDecoderLayer(d_model, nhead, dim_feedforward, dropout),
num_layers=num_layers
)
self.d_model = d_model
def forward(self, src, tgt=None):
num_time_frames = src.size(1)
# Generate sinusoidal position embeddings
position_embeddings_src = self._get_sinusoidal_position_embeddings(num_time_frames, self.d_model).to(src.device)
# Add position embeddings to input
src = src + position_embeddings_src
src = src.transpose(0, 1) # shape: (T, batch_size, n_mels)
# Pass the input through the encoder
memory = self.encoder(src).transpose(0, 1) # shape: (batch_size, T, n_mels)
memory = self.relu(memory)
# Pass the output of the encoder through the bottleneck
bottleneck = self.bottleneck(memory) # shape: (batch_size, T, n_mels)
bottleneck = self.relu(bottleneck)
bottleneck = bottleneck.mean(dim=1) # shape: (batch_size, n_mels)
if tgt is not None:
# In training mode, we have the target sequence
# Prepend the bottleneck to the target sequence
tgt = torch.cat((bottleneck.unsqueeze(1), tgt), dim=1) # shape: (batch_size, T + 1, n_mels)
# Generate position embeddings for the new target sequence
position_embeddings_tgt = self._get_sinusoidal_position_embeddings(
num_time_frames + 1, self.d_model).to(tgt.device) # +1 to account for the bottleneck
tgt = tgt + position_embeddings_tgt
tgt = tgt.transpose(0, 1) # shape: (T + 1, batch_size, n_mels)
output = self.decoder(tgt, memory.transpose(0, 1)) # shape: (T + 1, batch_size, n_mels)
else:
# In inference mode, we generate the target sequence step by step
output = self._generate_sequence(bottleneck, memory.transpose(0, 1), num_time_frames)
# Transpose output back to (batch_size, T, n_mels)
output = output.transpose(0, 1)
return output
def _generate_sequence(self, bottleneck, memory, max_length):
# Initialize output with the bottleneck
output = bottleneck.unsqueeze(0) # shape: (1, batch_size, n_mels)
print("output shape: ", output.shape, output)
print("memory shape: ", memory.shape)
for _ in range(max_length):
output_step = self.decoder(output, memory)
print("output_step shape: ", output_step.shape, output_step)
output = torch.cat((output, output_step[-1:, :, :]), dim=0)
# Transpose output back to (batch_size, T, n_mels)
print("output shape: ", output.shape)
return output
def _get_sinusoidal_position_embeddings(self, num_positions, d_model):
position_embeddings = torch.zeros(num_positions, d_model)
positions = torch.arange(0, num_positions, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2).float() * -(math.log(10000.0) / d_model))
position_embeddings[:, 0::2] = torch.sin(positions * div_term)
position_embeddings[:, 1::2] = torch.cos(positions * div_term)
position_embeddings = position_embeddings.unsqueeze(0)
return position_embeddings
</code></pre>
<p>Forgetting the sequence generation part, when I run this in eval mode, all the time steps from the encoder are the same. What could I be missing?</p>
|
<python><pytorch><transformer-model>
|
2023-05-12 14:26:07
| 0
| 43,875
|
Shamoon
|
76,237,273
| 9,318,323
|
Get info about Jenkins job from within python while the job is running
|
<p>So, I have this python script on Jenkins.</p>
<p>I would like to write a function that, when called from Jenkins pipeline, will tell me which job it belongs to and ideally a link to the job itself.</p>
<p><strong>Basically, the script must know that it is being run on Jenkins and tell which job / pipeline is running it.</strong></p>
<p>If I run a build and it has this path: <a href="https://jenkins.foo.com/job/PYTHON-TEST/job/env_test/226/" rel="nofollow noreferrer">https://jenkins.foo.com/job/PYTHON-TEST/job/env_test/226/</a>, I want the script to return this link or at least parts of it like:</p>
<ol>
<li>jenkins.foo.com</li>
<li>PYTHON-TEST</li>
<li>env_test</li>
<li>226</li>
</ol>
<p>I see that points 2,3,4 can be pretty much extracted using</p>
<pre><code>import socket
socket.gethostname() # python-test-env-test-226-pt1l3-a1op4-61ak5
</code></pre>
<p>I know it is possible on Databricks using:</p>
<pre><code>dbutils.notebook.entry_point.getDbutils().notebook().getContext().toJson()
</code></pre>
<p>In fact, I already implemented it there. Now, I am searching for a similar functionality on Jenkins. Any advice on how to do it? Perhaps, there are some other ways instead of <code>socket.gethostname()</code>?</p>
|
<python><jenkins><jenkins-pipeline>
|
2023-05-12 14:22:58
| 1
| 354
|
Vitamin C
|
76,237,230
| 10,895,042
|
Python: function only works when created inside main code, not when imported with from functions import *
|
<p>I'm finding lots of variations on this question on Stackoverflow, but not quite the same:</p>
<p>I have something like the code below. If I define it within my main code it will find <em>df</em>, even when <em>df</em> itself is only defined after this function definition. It also works as expected: <em>df</em> is taken from the module scope and changed accordingly.</p>
<pre><code>def update_df():
df['x'] = df['y']
</code></pre>
<p>However, if I put it inside "functions.py" and use</p>
<pre><code>from functions import *
</code></pre>
<p>it doesn't work anymore.</p>
<p>I would expect that the <em>import</em> * takes all definitions from <em>functions.py</em> and gives them the same definition in the main module. It doesn't fail on import, it does fail on usage.</p>
<p>How can I move this function into <em>functions.py</em> so it doesn't clutter my main code?</p>
|
<python><import><scope>
|
2023-05-12 14:19:29
| 2
| 833
|
StephanT
|
76,237,219
| 5,050,431
|
LangChain: ValueError: 'not' is not a valid parameter name
|
<p>I am using 3.11.3 and langchain 0.0.166 on Windows. When I try and run</p>
<p><code>from langchain.document_loaders import UnstructuredPDFLoader</code></p>
<p>I get an error of</p>
<blockquote>
<p>ValueError "'not' is not a valid parameter name"</p>
</blockquote>
<p>It seems like FastAPI had this <a href="https://github.com/tiangolo/fastapi/discussions/6230" rel="nofollow noreferrer">issue</a> but it's not coming up in my stack trace and I don't think I have any dependencies on it.</p>
<p>Stack trace is:</p>
<pre><code>Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1128, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\langchain\__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\langchain\agents\__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\langchain\agents\agent.py", line 15, in <module>
from langchain.agents.tools import InvalidTool
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\langchain\agents\tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\langchain\tools\__init__.py", line 25, in <module>
from langchain.tools.openapi.utils.api_models import APIOperation
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\langchain\tools\openapi\utils\api_models.py", line 6, in <module>
from openapi_schema_pydantic import MediaType, Parameter, Reference, RequestBody, Schema
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\openapi_schema_pydantic\__init__.py", line 3, in <module>
from .v3 import *
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\openapi_schema_pydantic\v3\__init__.py", line 1, in <module>
from .v3_1_0 import *
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\openapi_schema_pydantic\v3\v3_1_0\__init__.py", line 9, in <module>
from .open_api import OpenAPI
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\openapi_schema_pydantic\v3\v3_1_0\open_api.py", line 5, in <module>
from .components import Components
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\openapi_schema_pydantic\v3\v3_1_0\components.py", line 7, in <module>
from .header import Header
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\openapi_schema_pydantic\v3\v3_1_0\header.py", line 3, in <module>
from .parameter import Parameter
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\openapi_schema_pydantic\v3\v3_1_0\parameter.py", line 6, in <module>
from .media_type import MediaType
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\openapi_schema_pydantic\v3\v3_1_0\media_type.py", line 8, in <module>
from .schema import Schema
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\openapi_schema_pydantic\v3\v3_1_0\schema.py", line 10, in <module>
class Schema(BaseModel):
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\pydantic\main.py", line 292, in __new__
cls.__signature__ = ClassAttribute('__signature__', generate_model_signature(cls.__init__, fields, config))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Student\Anaconda3\envs\RL\Lib\site-packages\pydantic\utils.py", line 258, in generate_model_signature
merged_params[param_name] = Parameter(
^^^^^^^^^^
File "C:\Users\Student\Anaconda3\envs\RL\Lib\inspect.py", line 2722, in __init__
raise ValueError('{!r} is not a valid parameter name'.format(name))
ValueError: 'not' is not a valid parameter name
python-BaseException
</code></pre>
|
<python><python-3.11><py-langchain>
|
2023-05-12 14:18:19
| 1
| 4,199
|
A_Arnold
|
76,237,114
| 3,412,607
|
XSL to flatten XML converted Pandas dataframe
|
<p>I have the following XML Structure:</p>
<pre><code><ml:Meals xmlns:ml="http://www.food.com">
<ml:Meal>
<ml:type>lunch</ml:type>
<ml:main_course>turkey sandwich</ml:main_course>
<ml:desert ml:Descriptor="cookie">
<ml:ID ml:type="ID">d47e8bb7876e10001d05b9ca33070215</ml:ID>
</ml:desert>
</ml:Meal>
</ml:Meals>
</code></pre>
<p>I want the converted Pandas dataframe to eventually look like this.</p>
<pre><code>+-----+---------------+------+
| type| main_course|desert|
+-----+---------------+------+
|lunch|turkey sandwich|cookie|
+-----+---------------+------+
</code></pre>
<p>I have been able to achieve this using the following XSL code:</p>
<pre><code><xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:ml="http://www.food.com">
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
<xsl:template match="ml:Meal">
<xsl:copy>
<xsl:copy-of select="*"/>
<desert><xsl:value-of select="ml:desert/@ml:Descriptor"/></desert>
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
</code></pre>
<p>This works when using the following command:</p>
<pre><code>pandaDf = pd.read_xml(mealSample, namespaces={"ml": "http://www.food.com"}, stylesheet=xsl)
</code></pre>
<p>My question is, I acticipate there might be additional child nodes within the Meal node that has a <code>md:Descriptor</code> that I'll want to pull out and put in the value of the corresponding column in the dataframe. Is there a way to modify the XSL to match on the nested <code>ml:Descriptor</code> field value and swap the value out in the way it's currently done for the hard coded <code>ml:desert</code> node name?</p>
<p>Thanks in advance!</p>
|
<python><pandas><xml><xslt><lxml>
|
2023-05-12 14:05:25
| 2
| 815
|
daniel9x
|
76,237,040
| 5,531,578
|
combine multiple GridSearchCV instances
|
<p>I could save time by splitting up parameters that I would like to put into a GridSearchCV so that I could have further parallelization of parameter space exploration (beyond setting <code>n_jobs</code>).</p>
<p>For instance instead of:</p>
<pre class="lang-py prettyprint-override"><code>all_params = {'n_estimators' : [10, 20], 'min_samples_leaf' : [10, 20]}
</code></pre>
<p>I could create two parameter sets that would still cover the same parameter space, and send off two independent jobs to run at the same time:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.model_selection import GridSearchCV
# job 1 (independent script from job 2)
params1 = {'n_estimators' : [10], 'min_samples_leaf' : [10, 20]}
grid_search1 = GridSearchCV(..., param_grid=params1)
# job 2 (independent script from job 1)
params2 = {'n_estimators' : [20], 'min_samples_leaf' : [10, 20]}
grid_search2 = GridSearchCV(..., param_grid=params2)
</code></pre>
<p>and then I would like combined them:</p>
<pre class="lang-py prettyprint-override"><code>grid_search1 = # load pkl from GridSearchCV(..., param_grid=params1)
grid_search2 = # load pkl from GridSearchCV(..., param_grid=params2)
full_grid = grid_search1 + grid_search2
print(full_grid.best_params_)
</code></pre>
<p>I realize I could iterate somthing like <code>grid_search1.cv_results_</code> and compare with <code>grid_search2.cv_results_</code> but looking for something cleaner.</p>
|
<python><scikit-learn><gridsearchcv>
|
2023-05-12 13:56:59
| 1
| 303
|
BML
|
76,237,016
| 5,790,653
|
How to show the output of shell commands in Flask python
|
<p>I started learning Flask recently and I'm at the early stages.</p>
<p>I googled and these are the links I used to help:</p>
<p><a href="https://www.digitalocean.com/community/tutorials/how-to-use-a-postgresql-database-in-a-flask-application" rel="nofollow noreferrer">this</a> and <a href="https://stackoverflow.com/questions/46535490/calling-python-function-from-html-button-to-run-shell-command">this</a>.</p>
<p>I'm trying to run a simple command like <code>ls -lh /home</code> in Flask when I call <code>http://localhost/ls</code> and show me this output in the web browser:</p>
<pre class="lang-none prettyprint-override"><code>root@saeed:~# ls -lh /home
total 0
drwxr-xr-x 5 root root 70 May 8 23:47 git/
drwxr-xr-x 2 root root 97 May 9 12:07 images/
drwxr-xr-x 3 root root 20 May 10 04:16 node/
drwxr-xr-x 3 root root 101 May 11 01:25 python/
drwxr-x--- 4 saeed saeed 137 Jan 7 10:48 saeed/
</code></pre>
<p>I mean when I enter <code>http://localhost</code>, I see the above output in my internet browser.</p>
<p>I tried different things (you may see the history of this question), but none of them worked and I see a blank white screen in the web browser.</p>
<p>How to do this?</p>
<p><code>app.py</code>:</p>
<pre><code>from flask import Flask, render_template
import subprocess
app = Flask(__name__)
@app.route('/')
def index():
# something here to run a simple `pwd` or `ls -l`
</code></pre>
<p><code>templates/index.html</code>:</p>
<pre><code>{% extends 'base.html' %}
# something here to show the output of `pwd` or `ls -l` in the web
</code></pre>
|
<python><flask>
|
2023-05-12 13:54:16
| 1
| 4,175
|
Saeed
|
76,237,007
| 16,383,578
|
How to call class methods during instantiation of frozen dataclasses?
|
<p>I have a wrapper that adds some methods to a bunch of <code>dataclass</code>es, the <code>dataclass</code>es are all meant to be frozen, and should check the data types of the initialization values during instantiation.</p>
<p>And I also wanted to overload the constructor of the <code>dataclass</code>es, so that they can be instantiated like these <code>Foo(fields)</code>, <code>Foo(*fields)</code>, <code>Foo(mapping)</code> and <code>Foo(**mapping)</code>, but <em><strong>NOT</strong></em> <code>Foo(*args, **kwargs)</code>. All the lengths of the arguments passed are equal, the first is a sequence containing the values of the fields in order, and the third is a mapping containing the key-value pairs of the fields, the constructor should accept either a <code>list</code>, or a <code>dict</code>, or an unpacked <code>list</code>, or an unpacked <code>dict</code>, but not a mixture of those. (i.e. not like <code>Foo(a, [b, c])</code> or <code>Foo(a, b, c, d=1)</code>).</p>
<p>Refer to this <a href="https://codereview.stackexchange.com/q/284930/234107">question</a> for more context. As you see I have succeeded in doing so in my hand-rolled class, but it is unpythonic.</p>
<p>This is a minimal reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, fields, asdict
from datetime import datetime
from typing import Union
SENTINEL = object()
def wrapper(cls):
cls._name = [f.name for f in fields(cls)]
@classmethod
def from_sequence(cls, sequence):
for arg, field in zip(sequence, fields(cls)):
if not isinstance(arg, field.type):
raise TypeError(f"'{arg}' not of type '{field.type}'.")
return cls(*sequence)
cls.from_sequence = from_sequence
@classmethod
def from_dict(cls, mapping):
for field in fields(cls):
value = mapping.get(field.name, SENTINEL)
if value != SENTINEL and not isinstance(value, field.type):
raise TypeError(f"Field ''{field.name}' value '{value}' not of type '{field.type}'.")
return cls(**mapping)
cls.from_dict = from_dict
return cls
NoneType = type(None)
@wrapper
@dataclass(frozen=True)
class Person:
Name: str
Age: Union[int, float]
Birthdate: datetime
</code></pre>
<p>It sort of works, but the constructor accepts *args or **kwargs but not packed arguments, nor does it do type checking:</p>
<pre><code>In [2]: Person('Jane Smith', 23, datetime(2000, 1, 1))
Out[2]: Person(Name='Jane Smith', Age=23, Birthdate=datetime.datetime(2000, 1, 1, 0, 0))
In [3]: Person(None, None, None)
Out[3]: Person(Name=None, Age=None, Birthdate=None)
In [4]: Person.from_sequence(['Jane Smith', 23, datetime(2000, 1, 1)])
Out[4]: Person(Name='Jane Smith', Age=23, Birthdate=datetime.datetime(2000, 1, 1, 0, 0))
In [5]: Person.from_sequence([None]*3)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[5], line 1
----> 1 Person.from_sequence([None]*3)
Cell In[1], line 13, in wrapper.<locals>.from_sequence(cls, sequence)
11 for arg, field in zip(sequence, fields(cls)):
12 if not isinstance(arg, field.type):
---> 13 raise TypeError(f"'{arg}' not of type '{field.type}'.")
15 return cls(*sequence)
TypeError: 'None' not of type '<class 'str'>'.
In [6]: Person([None]*3)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[6], line 1
----> 1 Person([None]*3)
TypeError: Person.__init__() missing 2 required positional arguments: 'Age' and 'Birthdate'
</code></pre>
<p>I tried to overload <code>__new__</code> to use the corresponding class methods during instantiation:</p>
<pre><code>@wrapper
@dataclass(frozen=True)
class Person:
Name: str
Age: Union[int, float]
Birthdate: datetime
def __new__(cls, *args, **kwargs):
if args:
assert not kwargs
data = args if len(args) != 1 else args[0]
return cls.from_dict(data) if isinstance(data, dict) else cls.from_sequence(data)
else:
assert kwargs
return cls.from_dict(kwargs)
</code></pre>
<p>But it doesn't work, it just crashes my interpreter without any exception being raised, I figured this is because circular reference, <code>__new__</code> calls <code>classmethod</code>, and <code>classmethod</code> calls <code>__new__</code> recursively, this goes on forever, so it crashed the interpreter. I tried to overload <code>__init__</code> and the same thing happens.</p>
<p>How do I properly override instantiation of a frozen <code>dataclass</code> so that the proper <code>classmethod</code> is called to validate the datatypes and handle the arguments?</p>
|
<python><python-3.x><python-dataclasses>
|
2023-05-12 13:52:36
| 1
| 3,930
|
Ξένη Γήινος
|
76,236,985
| 1,282,773
|
How to compare string enums in python
|
<p>Let's say I have <code>MyEnum</code> enum, and I want to define total ordering for it. My approach is:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
from functools import total_ordering
@total_ordering
class MyEnum(Enum):
ONE = "one",
TWO = "two",
THREE = "three",
def __eq__(self, other):
if not isinstance(other, MyEnum):
return False
return self.value == other.value
def __hash__(self):
return self.value.__hash__()
def __le__(self, other):
if not isinstance(other, MyEnum):
return False
enum2int = {
MyEnum.ONE: 1,
MyEnum.TWO: 2,
MyEnum.THREE: 3,
}
return enum2int[self] < enum2int[other]
</code></pre>
<p>But this is error prone and cumbersome since I need to list the enum values twice. Is there a better approach?</p>
<p>I understand that if I use <code>int</code> values or even <code>IntEnum</code> class, I can do the comparison easily. But I want the values to be strings and compare them based on the declaration order.</p>
|
<python><enums>
|
2023-05-12 13:49:58
| 4
| 6,807
|
ivaigult
|
76,236,933
| 12,753,007
|
Plotting two lines with the same starting point
|
<p>Given the coordinates of 3 points x, y, z, I would like to join x-y and x-z as 2 line segments, with the same starting point x. But the plot is displaying 2 intersecting lines, with starting and ending points other than x, y, z. I would like some help fixing this.</p>
<pre><code>x = [0, 9]
y = [7, 2]
z = [4, 3]
plt.plot(x, y)
plt.plot(x, z)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/83hsZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/83hsZ.png" alt="enter image description here" /></a></p>
|
<python><matplotlib>
|
2023-05-12 13:44:18
| 0
| 461
|
siegfried
|
76,236,860
| 8,937,353
|
Find all matching points in multiple CSV list
|
<p>I have 4 CSVs with 5 columns and over 500 rows.
1st column is say Plane. second is Points. 3rd and 4th are x and y coordinates of Points. 5th is Values.
For each Plane, there can be multiple Points. In each CSV Plane is matched. For e.g. Plane = 3 in 1 CSV is also Plane = 3 in other CSVs. But the Points number is different in all CSVs.</p>
<p>I want to find all the closest points for a given Plane, for all planes.
<a href="https://i.sstatic.net/XnyYz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XnyYz.png" alt="CSV1" /></a></p>
<p><a href="https://i.sstatic.net/HHIBC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HHIBC.png" alt="CSV2" /></a></p>
<p>Result:</p>
<p><a href="https://i.sstatic.net/zfgvq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zfgvq.png" alt="Results" /></a></p>
<p>These Planes can also change later, so I created a function that takes Plane numbers and matches points.</p>
<pre><code>def matchPoints(plane_1,plane_2,plane_3,plane_4):
algo:
return matchedPoints, matchedValues
</code></pre>
<p>but I compare each point in a plane with all points of same plane in all 4 CSVs one by one.
Which makes the code very slow. Is there any alternative to speed up this process.</p>
|
<python><csv>
|
2023-05-12 13:35:11
| 0
| 302
|
A.k.
|
76,236,755
| 353,337
|
Read XML processing instrutions with attributes
|
<p>I have an XML file</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<?foo class="abc" options="bar,baz"?>
<document>
...
</document>
</code></pre>
<p>and I'm interested in the processing instruction <code>foo</code> and its attributes.</p>
<p>I can use <code>ET.iterparse</code> for reading the PI, but it escapes me how to access the attributes as a dictionary – <code>.attrib</code> only gives an empty dict.</p>
<pre class="lang-py prettyprint-override"><code>import xml.etree.ElementTree as ET
for _, elem in ET.iterparse("data.xml", events=("pi",)):
print(repr(elem.tag))
print(repr(elem.text))
print(elem.attrib)
</code></pre>
<pre class="lang-py prettyprint-override"><code><function ProcessingInstruction at 0x7f848f2f7ba0>
'foo class="abc" options="bar,baz"'
{}
</code></pre>
<p>Any hints?</p>
|
<python><xml><processing-instruction>
|
2023-05-12 13:21:04
| 3
| 59,565
|
Nico Schlömer
|
76,236,652
| 3,247,006
|
How to prevent the text in Django Admin to get bigger when overriding "base.css" set in overridden "base.html"?
|
<p>I could change the header color to black in all admin pages with the following steps.</p>
<p>So, this is Django Admin as shown below:</p>
<p><a href="https://i.sstatic.net/u9JkE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u9JkE.png" alt="enter image description here" /></a></p>
<p>Then, I copied <a href="https://github.com/django/django/blob/main/django/contrib/admin/static/admin/css/base.css" rel="nofollow noreferrer">base.css</a> from <code>django/contrib/admin/static/admin/css/base.css</code> in my virtual environment to <code>core/static/core/admin/css/</code> and copied <a href="https://github.com/django/django/blob/main/django/contrib/admin/templates/admin/base.html" rel="nofollow noreferrer">base.html</a> from <code>django/contrib/admin/templates/admin/base.html</code> to <code>templates/admin/</code> as shown below:</p>
<pre class="lang-none prettyprint-override"><code>django-project
|-core
| |-settings.py
| └-static
| └-core
| └-admin
| └-css
| └-base.css # Here
|-app1
|-app2
└-templates
└-admin
└-base.html # Here
</code></pre>
<p>Then in <code>base.css</code>, I replaced <code>background: var(--header-bg);</code> with <code>background: black;</code> as shown below:</p>
<pre class="lang-css prettyprint-override"><code>/* "core/static/core/admin/css/base.css" */
#header {
width: auto;
height: auto;
display: flex;
justify-content: space-between;
align-items: center;
padding: 10px 40px;
/* background: var(--header-bg); */
background: black; /* Here */
color: var(--header-color);
overflow: hidden;
}
</code></pre>
<p>Then in <a href="https://docs.djangoproject.com/en/4.2/ref/templates/builtins/#static" rel="nofollow noreferrer">{% static %}</a> in <code>base.html</code>, I replaced <code>admin/css/base.css</code> with <code>core/admin/css/base.css</code> as shown below to change the header color to black in all admin pages:</p>
<pre><code># "templates/admin/base.html"
# ... {# "admin/css/base.css" is replaced #}
<title>{% block title %}{% endblock %}</title> {# ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ #}
<link rel="stylesheet" href="{% block stylesheet %}{% static "core/admin/css/base.css" %}{% endblock %}">
{% block dark-mode-vars %}
#...
</code></pre>
<p>Then, I could change the header color to black in all admin pages but the text gets bigger than the original one as shown below.</p>
<p>After:
<a href="https://i.sstatic.net/F7d0S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F7d0S.png" alt="enter image description here" /></a></p>
<p>Before(Original):
<a href="https://i.sstatic.net/u9JkE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u9JkE.png" alt="enter image description here" /></a></p>
<p>So in <code>base.html</code>, I added <code>type="text/css"</code> and <code>media="all"</code> to <code><link></code> as shown below but the text is still bigger than the original one:</p>
<pre><code># "templates/admin/base.html"
# ...
<title>{% block title %}{% endblock %}</title>
<link rel="stylesheet" type="text/css" media="all" href="{% block stylesheet %}{% static "core/admin/css/base.css" %}{% endblock %}">
{% block dark-mode-vars %} {# ↑ Here #} {# ↑ Here #}
#...
</code></pre>
<p>So, how can I prevent the text in Django Admin to get bigger when overriding <code>base.css</code> set in the overridden <code>base.html</code>?</p>
|
<python><django><django-templates><django-admin><django-staticfiles>
|
2023-05-12 13:08:31
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,236,370
| 5,510,713
|
GIMP script-fu to ImageMagic conversion
|
<p>I have written the following GIMP <code>script-fu</code> function to remove lens distortion from an image using the <code>lens-distortion-plugin</code>. I execute this script through a command line function from my main python script. Just wondering if there is an equivalent ImageMagic function (or any other library) to achieve the same so that i don't have to leave the python script.</p>
<p><strong>lens-distortion.scm</strong></p>
<pre><code>(define (lens-distortion filename destination)
(let* (
(image (car (gimp-file-load RUN-NONINTERACTIVE filename filename))) ; load the image
(drawable (car (gimp-image-flatten image)))
(offset-x -3.51)
(offset-y -9.36)
(main-adjust 28.07)
(edge-adjust 0)
(rescale -100)
(brighten 0.58)
)
(gimp-message (string-append "processing-" filename))
(plug-in-lens-distortion RUN-NONINTERACTIVE image drawable offset-x offset-y main-adjust edge-adjust rescale brighten)
(gimp-file-save RUN-NONINTERACTIVE image drawable destination destination)
(gimp-image-delete image)
)
)
</code></pre>
<p><strong>Example input image:</strong></p>
<p>As you can see the distortion does not start from the middle which makes it a lot challenging to get rid of it.</p>
<p><a href="https://i.sstatic.net/4Z9iQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4Z9iQ.jpg" alt="enter image description here" /></a></p>
|
<python><python-3.x><imagemagick><gimp><gimpfu>
|
2023-05-12 12:35:17
| 0
| 776
|
DhiwaTdG
|
76,236,236
| 11,311,927
|
Is it possible to create mock data using pydantic-factories in such a way that attributes adhere to pydantic validator requirements?
|
<p>I'm trying to create some mock data, where one of my classes (for which I specified a Pydantic dataclass by inheriting from <code>Pydantic.BaseModel</code>) has date attributes. I want to make sure that the <code>end_date</code> is always later than the <code>start_date</code>, and validate this using the pydantic validator decorator:</p>
<pre><code>from pydantic import BaseModel, validator
from datetime import datetime
class Dates(BaseModel):
start_date: datetime
end_date: datetime
@validator('end_date')
def ensure_end_date_is_after_start_date(cls, end_date, values):
if not end_date > values['start_date']:
raise ValueError(f"End date {end_date} is not after start date {values['start_date']}.")
return end_date
</code></pre>
<p>Suppose that I now want to create some mock data, using the <code>pydantic_factories</code> package. I define a class that inherits from <code>ModelFactory</code> to later create some mock data.</p>
<pre><code>from pydantic_factories import ModelFactory
ModelFactory.seed_random(0)
class DatesFactory(ModelFactory):
__model__ = Dates
</code></pre>
<p>However, in this fashion, I have no guarantee that <code>start_date</code> will be before <code>end_date</code>. I set <code>ModelFactory.seed_random(0)</code>, to get reproducible results. And indeed, when I create my mock data, I see that the date validation failed:</p>
<pre><code>my_dates = DatesFactory()
my_dates.build()
Traceback (most recent call last):
.
.
.
pydantic.error_wrappers.ValidationError: 1 validation error for Dates
end_date
End date 2006-06-21 01:20:01 is not after start date 2022-02-03 12:04:21. (type=value_error)
</code></pre>
<p><strong>My question</strong>: How do I make sure that I get mock data that adheres to my validation criteria? What is the usual way to go about this in a way that is extensible to more complex validations?</p>
<p>Thanks in advance!</p>
<p>Edit: The current example may be a bit silly; I am aware it could be circumvented by defining my attributes differently (e.g. define only <code>start_date</code> and an always positive <code>duration</code>). This example is, however, something that I would like to extend to something a bit more complex.</p>
<p>Complete MWE:</p>
<pre><code>from pydantic import BaseModel, validator
from datetime import datetime
from pydantic_factories import ModelFactory
class Dates(BaseModel):
start_date: datetime
end_date: datetime
@validator('end_date')
def ensure_end_date_is_after_start_date(cls, end_date, values):
if not end_date > values['start_date']:
raise ValueError(f"End date {end_date} is not after start date {values['start_date']}.")
return end_date
class DatesFactory(ModelFactory):
__model__ = Dates
ModelFactory.seed_random(0)
my_dates = DatesFactory()
my_dates.build()
</code></pre>
|
<python><validation><attributes><factory><pydantic>
|
2023-05-12 12:21:06
| 2
| 315
|
Sam
|
76,235,975
| 8,838,303
|
Why does this dictionary suddenly become empty?
|
<p>I have a CSV file "country.csv" that looks precisely like this:</p>
<pre><code>Country Code,Country Name,Country-ID
US,United States,0
DE,Germany,1
AU,Australia,2
CZ,Czechia,3
CA,Canada,4
AR,Argentina,5
BR,Brazil,6
PT,Portugal,7
GB,United Kingdom,8
IT,Italy,9
GG,Guernsey,10
RO,Romania,11
ES,Spain,12
FR,France,13
NL,Netherlands,14
,,15
SK,Slovakia,16
</code></pre>
<p>I wrote a simple Python script that, using dictionaries, reads this file into "country_dictionary" and prints the "lines" out. However, after running through "country_dictionary" once it suddenly becomes empty. Could you please tell me what I am doing wrong?</p>
<pre><code>import csv
def csv_to_dictionary(csv_name, delimiter):
input_file = csv.DictReader(open(csv_name, 'r', encoding='utf-8'), delimiter=delimiter)
return input_file
country_dictionary = csv_to_dictionary("country.csv", ',')
country_list = []
for row in country_dictionary:
print(row)
country_list.append(row["Country Code"])
print(country_list) #Up to here everything works as intended
for row in country_dictionary:
print(row) # The dictionary seems to be empty?
</code></pre>
|
<python><csv><dictionary>
|
2023-05-12 11:47:34
| 1
| 475
|
3nondatur
|
76,235,910
| 9,479,564
|
Using scipy.optimize.curve_fit does not fit curve properly
|
<p>I want to fit a random time series segment to a predefined list of functions. For demo purposes, I only use the sine function and a demo time series.</p>
<pre class="lang-py prettyprint-override"><code>amplitude = 1
omega = 2
phase = 0.5
offset = 4
def sine(x, a, b, c, d):
"""Sine function"""
return a*np.sin(b*x+c) + d
x = np.linspace(0,100, 1000)
parameters = [amplitude, omega, phase, offset]
demo_values = sine(x, *parameters)
</code></pre>
<p><a href="https://i.sstatic.net/6vKhf.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6vKhf.jpg" alt="Plot of demo values" /></a></p>
<p>As mentioned in the title I make use of the <code>scipy.optimize.curve_fit</code> method to try to find the parameters, as such:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.optimize import curve_fit
popt, err = curve_fit(f=sine, xdata=x, ydata=demo_values)
fitted = sine(x, *popt)
</code></pre>
<p>When comparing the curve-fitted parameters with the original parameters I find them to be quite different. I do not know what I am doing wrong.</p>
<pre class="lang-py prettyprint-override"><code>print(f"Scipy params: {popt}")
print(f"Original params: {parameters}")
>>> Scipy params: [0.02834886 1.15624779 1.8580548 4.00011998]
>>> Original params: [1, 2, 0.5, 4]
</code></pre>
<p><a href="https://i.sstatic.net/pvdlS.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pvdlS.jpg" alt="Comparison of Scipy vs Demo" /></a></p>
<p>NB. As mentioned in the introduction I do not want to find a solution for only the sine function, as I would like to extend this flow for other functions as well. I have seen on SO that using the p0 variable significantly increases the accuracy, but I do not know how to make an initial guess in a generic way (for any curve).</p>
<blockquote>
<p>I tried to curve fit a simple Sine curve with the <code>scipy.optimize.curve_fit</code> function and expected the library to handle that quite nicely. Unfortunately, that was not the case. I have also seen that other posts related to my question use a p0 (initial guess) variable, which I don't know how to create for a generic curve.</p>
</blockquote>
|
<python><numpy><scipy><curve-fitting>
|
2023-05-12 11:37:57
| 0
| 888
|
Oddaspa
|
76,235,874
| 15,295,149
|
Downgrade Python from 3.11.2 to 3.10 in specifc environment
|
<p>I am using Python's virtual environment 'venv'. My current version is 3.11.2</p>
<p>I need to downgrade it.</p>
<p>I have already tried the following steps:</p>
<pre><code>pip3 install python==3.10.10
</code></pre>
<p>and got the following error:</p>
<p><strong>ERROR: Could not find a version that satisfies the requirement python==3.10.10 (from versions: none)</strong></p>
<p><strong>ERROR: No matching distribution found for python==3.10.10</strong></p>
<p>I also tried for lower versions like 3.8.0, 3.9.0, ... Allways same error.</p>
<p>Thanks</p>
|
<python><python-3.x>
|
2023-05-12 11:32:02
| 2
| 746
|
devZ
|
76,235,868
| 11,065,874
|
In python pytest, How do I start a new runtime for each test function?
|
<p>I have a singlton function that I want to test as below:</p>
<pre><code>import abc
class SingletonABCMeta(abc.ABCMeta):
_instances = {}
def __call__(cls, *args, **kwargs):
if cls not in cls._instances:
cls._instances[cls] = super(SingletonABCMeta, cls).__call__(*args, **kwargs)
return cls._instances[cls]
else:
raise Exception("This is a singleton. no more than one instance of it can exist")
class Cabc(abc.ABC, metaclass=SingletonABCMeta):
@abc.abstractmethod
def foo(self): pass
class C(Cabc):
def __init__(self, a):
print(f"initializing with {a}")
self._a = a
def foo(self):
return self._a
def test_c_first():
ins = C(1)
assert ins.foo() == 1
def test_c_second():
# this test fails but I don't want it to fail.
# I want to be able to have a fresh session to be able to create a fresh instance from the class C
ins = C(2)
assert ins.foo() == 2
</code></pre>
<p>The second time I am testing it I want it to be in a new runtime so that I can create a new instance.</p>
<p>Right now the second test fails. The desired behavior is both tests pass.</p>
<p>one solution is removing the instances at the end of first test</p>
<pre><code>def test_c_first():
ins = C(1)
assert ins.foo() == 1
C._instances = {}
</code></pre>
<p>But I prefer not to do this and keep the tests closer to real situations. I am looking for starting a new runtime somehow</p>
<p>How do I do this?</p>
<p>context:
My singleton is a memory database client that I want to initialize in each test and test my fastapi application</p>
|
<python><pytest><singleton>
|
2023-05-12 11:31:07
| 1
| 2,555
|
Amin Ba
|
76,235,454
| 13,734,451
|
Using np.where to create a column
|
<p>I would like to create number of days based on two variables. If the units are in week, multiply the <code>nums</code> by 7, if by month by 12 and 365 if by year. Am unable to achieve this...any leads??</p>
<pre><code>import pandas as pd
import numpy as np
nums = [2, 3, 4, 1, 4, 2, 4, 11, 1, 1]
units = ['days', 'weeks', 'year', 'month', 'days', 'weeks', 'years', 'months', 'day', 'week']
df = pd.DataFrame({'nums': nums,
'units': units})
df
def convert_days(var1,var2,var3):
df[var3]= np.where((df[var1]=='weeks', df[var2]*7, df[var2]))
return df
convert_days('units', 'nums', 'test')
</code></pre>
|
<python><pandas><numpy>
|
2023-05-12 10:37:48
| 1
| 1,516
|
Moses
|
76,235,423
| 10,441,038
|
How to specify discrete color of markers in Plotly.graph_objects?
|
<p>After a lot of searching, I still have <strong>NOT</strong> found how to specify discrete color mapping for markers in a scatter figure that created by plotly.<strong>graph_objects (instead of plotly.express)</strong>.</p>
<p>There are some docs about specifying discrete color mapping of plotly, but all of them are for px(plotly.express). e.g.
<a href="https://plotly.com/python/discrete-color/" rel="nofollow noreferrer">Discrete colors in Python</a>.</p>
<p>In the fact my requirement is very simple: I have a pd.Dataframe that has a column named 'label'. It stores my category info, and may has one of three values, i.e.[0, 1, 2].</p>
<p>I want markers be ploted as 'red' if the column 'label' has value 2, and 'yellow' for value 1, and 'blue' for value 0.</p>
<p>I mimiced px's sample and coded following:</p>
<pre><code>from plotly.subplots import make_subplots
import plotly.graph_objects as go
fts['label'] = make3ClassLabel( df['target'] )
fig = make_subplots(rows=1, cols=2, shared_yaxes=True)
fig.add_trace(go.Scatter(x=fts['feat_x'], y=fts['feat_y0'],
mode='markers', marker_color=fts['label'],
############ following line doesn't work! #####################
color_discrete_sequence=['blue', 'yellow', 'red']),
row=1, col=1)
fig.add_trace(go.Scatter(x=fts['feat_x'], y=fts['feat_y1'],
mode='markers',
############ also doesn't work! ###############################
marker={ 'color':fts['label'],
'color_discrete_sequence':['blue', 'yellow', 'red'] }),
row=1, col=2)
fig.update_traces(marker=dict(size=1, line=dict(width=0)))
fig.show()
</code></pre>
<p>Any hints or tips will be appretiated!!!</p>
|
<python><colors><plotly><visualization><plotly.graph-objects>
|
2023-05-12 10:33:43
| 1
| 2,165
|
Leon
|
76,235,372
| 8,699,450
|
Why is multiplying a 62x62 matrix slower than multiplying a 64x64 matrix?
|
<p>Using the code below (in a google colab), I noticed that multiplying a <code>62x62</code> with a <code>62x62</code> is about 10% slower than multiplying a <code>64x64</code> with another <code>64x64</code> matrix. Why is this?</p>
<pre class="lang-py prettyprint-override"><code>import torch
import timeit
a, a2 = torch.randn((62, 62)), torch.randn((62, 62))
b, b2 = torch.randn((64, 64)), torch.randn((64, 64))
def matmuln(c,d):
return c.matmul(d)
print(timeit.timeit(lambda: matmuln(a, a2), number=1000000)) # 13.864160071000015
print(timeit.timeit(lambda: matmuln(b, b2), number=1000000)) # 12.539578468999991
</code></pre>
|
<python><performance><pytorch><matrix-multiplication>
|
2023-05-12 10:27:11
| 0
| 380
|
Robin van Hoorn
|
76,235,238
| 4,471,415
|
Ignoring malformed utf8 in python3
|
<p>I have a script to convert xml to text (extract document content sans xml tags) that needs to deal with corrupt xml files.</p>
<p>The corruption comes from a partially decoded zip file, so there may be a utf8 character that got truncated. <code>lxml</code> throws an error when encountering the last few bytes.</p>
<p>The data is coming from <code>stdin</code> or a file, if that makes a difference.</p>
<p>How can I 'sanitize' any malformed trailing bytes before using lxml, or selectively just ignore this specific error?</p>
<p>Here's the relevant part of the code:</p>
<pre><code>def xml2text(xml):
"""
A string representing the textual content of this run, with content
child elements like ``<w:tab/>`` translated to their Python
equivalent.
Adapted from: https://github.com/python-openxml/python-docx/
"""
text = u''
parser = lxml.etree.XMLParser(recover=True)
root = lxml.etree.parse(xml, parser)
for child in root.iter():
if child.tag == qn('w:t'):
t_text = child.text
text += t_text if t_text is not None else ''
elif child.tag == qn('w:tab'):
text += '\t'
elif child.tag in (qn('w:br'), qn('w:cr')):
text += '\n'
elif child.tag == qn("w:p"):
text += '\n\n'
return text
def process(docx):
text = u''
text += xml2text(docx)
return text.strip()
if __name__ == '__main__':
args = process_args()
if args.docx is None:
text = process(sys.stdin)
else:
text = process(args.docx)
sys.stdout.write(text)
</code></pre>
|
<python><lxml>
|
2023-05-12 10:08:48
| 1
| 628
|
Reinstate Monica
|
76,235,197
| 18,904,265
|
How to handle Exceptions in a series of functions, which call another?
|
<p>I have multiple functions, which are chained together, something like this:</p>
<pre class="lang-py prettyprint-override"><code>def first_function(params):
value = None
try:
value = some_api_call(params)
except ValueError:
print("some ValueError Message!")
if value is not None:
return value
variable = first_function(params)
def second_function(variable):
return some_other_api_call(variable)
</code></pre>
<p>As you can see, there is a try/except in the first function, and similar ones are in the other functions as well. Since all of those functions (more like 3-4) depend on each other and would raise an Error, if anyone of those failed, I'd have to include an AttributeError additionally to the other errors I'd want to catch, or check if the value exists, right?</p>
<p>Now I was thinking, is there a more pythonic way of doing this? In <a href="https://stackoverflow.com/a/843306/18904265">another post I read</a>, it was stated that it's bad practice to check if a variable exists - which I would need to do, if I wanted to implement checks. Maybe I could raise the errors in every function and only catch them in the last one, or something similar?</p>
|
<python>
|
2023-05-12 10:02:59
| 2
| 465
|
Jan
|
76,235,139
| 12,430,846
|
Read a large TAD data with Dask
|
<p>I have a very large dataframe. It was originally a TAD file. Someone saved it with CSV extension.</p>
<p>I was trying to read it in pandas but it takes hours, even with <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html" rel="nofollow noreferrer">chunksize</a> parameter.</p>
<pre><code>start = time.time()
#read data in chunks of 1 million rows at a time
chunk = pd.read_csv(
'/.../estrapola_articoli.csv',
sep='\t',
lineterminator='\r',
chunksize=1000000) # <-- here
end = time.time()
print("Read csv with chunks: ",(end-start),"sec")
articoli = pd.concat(chunk)
</code></pre>
<p>I've read about Dask and I've tried the following:</p>
<pre><code>import dask
import dask.dataframe as dd
df = dd.read_csv(
'/.../estrapola_articoli.csv',
sep='\t',
lineterminator='\r')
</code></pre>
<p>Unfortunately, I've got this error</p>
<blockquote>
<p>ValueError: Sample is not large enough to include at least one row of
data. Please increase the number of bytes in <code>sample</code> in the call to
<code>read_csv</code>/<code>read_table</code></p>
<p>The above exception was the direct cause of the following exception:</p>
<p>ValueError Traceback (most recent call
last) /usr/local/lib/python3.10/dist-packages/dask/backends.py in
wrapper(*args, **kwargs)
125 return func(*args, **kwargs)
126 except Exception as e:
--> 127 raise type(e)(
128 f"An error occurred while calling the {funcname(func)} "
129 f"method registered to the {self.backend} backend.\n"</p>
<p>ValueError: An error occurred while calling the read_csv method
registered to the pandas backend. Original Message: Sample is not
large enough to include at least one row of data. Please increase the
number of bytes in <code>sample</code> in the call to <code>read_csv</code>/<code>read_table</code></p>
</blockquote>
<p>So I used <a href="https://docs.dask.org/en/stable/generated/dask.dataframe.read_csv.html" rel="nofollow noreferrer">sample</a>:</p>
<pre><code>import dask.dataframe as dd
df = dd.read_csv(
'/.../estrapola_articoli.csv',
sep='\t',
lineterminator='\r',
sample=1000000) # 1MB
</code></pre>
<p>It gives me the same error. I could try significantly increasing the sample size further, but this could lead to inefficient computations if the sample is too large.</p>
<p>Any help to read this file?</p>
|
<python><pandas><dask>
|
2023-05-12 09:56:21
| 0
| 543
|
coelidonum
|
76,235,109
| 6,165,496
|
Missing dependency 'openpyxl' when using pandas in Lambda
|
<p>I'm trying to read an excel file from S3 using pandas in Lambda. To make pandas work in my Lambda function I've created a custom layer. This custom layer uses a zip file which I've created using the following script:</p>
<pre><code>python3 -m pip install wheel
curl -LO https://files.pythonhosted.org/packages/e9/d7/ee1b27176addc1236f4a59a9ca105bbdf60424a597ab9b4e13f09e0a816f/pandas-2.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
curl -LO https://files.pythonhosted.org/packages/83/be/de078ac5e4ff572b1bdac1808b77cea2013b2c6286282f89b1de3e951273/numpy-1.24.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
curl -LO https://files.pythonhosted.org/packages/7f/99/ad6bd37e748257dd70d6f85d916cafe79c0b0f5e2e95b11f7fbc82bf3110/pytz-2023.3-py2.py3-none-any.whl
curl -LO https://files.pythonhosted.org/packages/6a/94/a59521de836ef0da54aaf50da6c4da8fb4072fb3053fa71f052fd9399e7a/openpyxl-3.1.2-py2.py3-none-any.whl
unzip pandas-2.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
unzip numpy-1.24.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
unzip pytz-2023.3-py2.py3-none-any.whl
unzip openpyxl-3.1.2-py2.py3-none-any.whl
rm pandas-2.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
rm numpy-1.24.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
rm pytz-2023.3-py2.py3-none-any.whl
rm openpyxl-3.1.2-py2.py3-none-any.whl
mkdir python
mv panda* python/
mv numpy* python/
mv pytz* python/
mv openpyxl* python/
zip -r libraries_layer python/
aws s3 cp libraries_layer.zip s3://bucket-with-lambda-libraries/function-name/
</code></pre>
<p>As you can see, I am installing the packages using wheel and adding them to the zipfile. The contents of the zipfile look like this:</p>
<p>python</p>
<ul>
<li>numpy</li>
<li>numpy-libs</li>
<li>numpy-1.24.3.dist-info</li>
<li>openpyxl</li>
<li>openpyxl-3.1.2.dist-info</li>
<li>pandas</li>
<li>pandas.libs</li>
<li>pandas-2.0.1.dist-info</li>
<li>pytz</li>
<li>pytz-2023.3.dist-info</li>
</ul>
<p><a href="https://i.sstatic.net/JEjDN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JEjDN.png" alt="contents zip file" /></a></p>
<p>As far as I can see this seems to be correct. Now when I try to read the excel file using following Python code:</p>
<pre><code>import pandas as pd
import boto3
import os
from io import BytesIO
def read_from_s3(bucket_name, file_key):
s3_client = boto3.client('s3')
file_obj = s3_client.get_object(Bucket=bucket_name, Key=file_key)
data = file_obj['Body'].read()
return data
def convert_excel_data_to_df(excel_data):
df = pd.read_excel(BytesIO(excel_data), engine='openpyxl')
return df
def handler(event, context):
data = read_from_s3(os.environ['INPUT_BUCKET'], event['FILE_KEY'])
df = convert_excel_data_to_df(data)
return df
</code></pre>
<p>I get error:</p>
<pre><code> {
"errorMessage": "Missing optional dependency 'openpyxl'. Use pip or conda to install openpyxl.",
"errorType": "ImportError",
"requestId": "59838522-be9a-4866-a115-3ddf73618f58",
"stackTrace": [
" File \"/var/task/index.py\", line 36, in handler\n df = convert_excel_data_to_df(data)\n",
" File \"/var/task/index.py\", line 15, in convert_excel_data_to_df\n df = pd.read_excel(BytesIO(excel_data), engine='openpyxl')\n",
" File \"/opt/python/pandas/io/excel/_base.py\", line 478, in read_excel\n io = ExcelFile(io, storage_options=storage_options, engine=engine)\n",
" File \"/opt/python/pandas/io/excel/_base.py\", line 1513, in __init__\n self._reader = self._engines[engine](self._io, storage_options=storage_options)\n",
" File \"/opt/python/pandas/io/excel/_openpyxl.py\", line 548, in __init__\n import_optional_dependency(\"openpyxl\")\n",
" File \"/opt/python/pandas/compat/_optional.py\", line 145, in import_optional_dependency\n raise ImportError(msg)\n"
]
}
</code></pre>
<p>When reading similar threads concerning this issue, it seems to be a problem with the installment of <code>openpyxl</code>. But as far as I can see I've added it correctly. Could this be a version dependency issue? Or is could this be something else?</p>
|
<python><pandas><amazon-web-services><aws-lambda>
|
2023-05-12 09:51:55
| 1
| 1,284
|
RudyVerboven
|
76,235,000
| 1,145,666
|
Information sent in DELETE body is not processed by CherryPy
|
<p>I have this Javascript (jQuery) code:</p>
<pre><code>$.ajax({
url: `/rest/order`,
type: 'DELETE',
data: { "magentoid": order_id },
success: function(data) {
},
error: function(xhr, status, error) {
}
});
</code></pre>
<p>And when checking this call in the Chrome developer console, I can see the body is sent as well:</p>
<p><a href="https://i.sstatic.net/0nqKu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0nqKu.png" alt="enter image description here" /></a></p>
<p>However, in my CherryPy implementation, that data seems to be completely ignored:</p>
<pre><code>@cherrypy.expose
@cherrypy.tools.json_out()
def order(self, supplier = None, magentoid = None, dropship = None):
logging.info(f"supplier: {supplier}, magid: {magentoid}")
</code></pre>
<p>results in:</p>
<pre><code>INFO:root:supplier: None, magid: None
</code></pre>
<p>When I make the call in Javascript with <code>$.post</code>, it does work:</p>
<pre><code>$.post(
"/rest/order",
{ "magentoid": orderid, "dropship": $("#dropship").is(":checked") },
function(data) {
}
).fail(function() {
});
</code></pre>
<p>and this shows in my CherryPy log:</p>
<pre><code>INFO:root:supplier: None, magid: M13000000063
</code></pre>
<p><strong>Update</strong>, also when I change <code>DELETE</code> into <code>POST</code> in the <code>$.ajax</code> call, the data goes away.</p>
|
<javascript><python><jquery><cherrypy><http-delete>
|
2023-05-12 09:38:27
| 0
| 33,757
|
Bart Friederichs
|
76,234,895
| 3,679,054
|
Problem with Qt Designer - pglive integration for real-time plotting
|
<p>I followed the tutorial on the <a href="https://pypi.org/project/pglive/" rel="nofollow noreferrer">pglive docs</a> and I was able to have a real-time sine plot on my computer. Then I used an arduino to stream some random number through the serial port and I was able to read and plot the data using pglive.</p>
<p>Now I want to integrate a custom gui made with Qt Designer.</p>
<p>This is the code I'm trying to run:</p>
<pre><code>import sys
from math import sin
from threading import Thread
from time import sleep
from PyQt5.QtWidgets import QApplication
from PyQt5 import QtWidgets, QtCore, uic
from pglive.sources.data_connector import DataConnector
from pglive.sources.live_plot import LiveLinePlot
from pglive.sources.live_plot_widget import LivePlotWidget
import serial
arduino = serial.Serial(port='COM17', baudrate=9600, timeout=.1)
app = QApplication(sys.argv)
running = True
ui = uic.loadUi('test.ui')
plot_widget = ui.plot1 # LivePlotWidget(title="Line Plot @ 100Hz")
# plot_curve = LiveLinePlot()
# plot_widget.addItem(plot_curve)
# DataConnector holding 600 points and plots @ 100Hz
data_connector = DataConnector(ui.plot1, max_points=600, update_rate=100)
# data_connector = DataConnector(plot_curve, max_points=600, update_rate=100)
def sin_wave_generator(connector):
"""Sine wave generator"""
x = 0
while running:
x += 1
data = arduino.readline().decode("utf-8") # retrieve serial data from feather
values = data.split(",") # split at commas
data_point = float(values[0])
# Callback to plot new data point
connector.cb_append_data_point(data_point, x)
sleep(0.01)
plot_widget.show()
Thread(target=sin_wave_generator, args=(data_connector,)).start()
app.exec()
running = False
</code></pre>
<p>Basically, I commented the <code>plot_curve</code> and <code>plot_widget</code> lines, and loaded the <code>ui</code> onto the <code>plot_widget</code> variable.</p>
<p>The gui is just a widget (named <em>plot1</em>) that I promoted to <code>LivePlotWidget</code> and set header file to <code>pglive.sources.live_plot_widget</code> as explained again by pglive guys in their docs.</p>
<p>However, what I obtain is the following:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\New Wyss User\AppData\Roaming\Python\Python311\site-packages\pglive\sources\live_plot.py", line 134, in <lambda>
plot.slot_roll_tick = lambda data_connector, tick: plot.plot_widget.slot_roll_tick(data_connector, tick)
^^^^^^^^^^^^^^^^
File "C:\Users\New Wyss User\AppData\Roaming\Python\Python311\site-packages\pyqtgraph\widgets\PlotWidget.py", line 82, in __getattr__
raise AttributeError(attr)
AttributeError: plot_widget
Traceback (most recent call last):
File "C:\Users\New Wyss User\AppData\Roaming\Python\Python311\site-packages\pglive\sources\live_plot.py", line 121, in <lambda>
plot.slot_new_data = lambda y, x, kwargs: plot.setData(x, y, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: setData(self, key: int, value: Any): argument 1 has unexpected type 'numpy.ndarray'
</code></pre>
<p>Any suggestion? Thanks!</p>
|
<python><pyqt><real-time><qt-designer><pyqtgraph>
|
2023-05-12 09:24:12
| 1
| 421
|
Sfrow
|
76,234,657
| 4,255,096
|
Pandas: Shorten string for multiple columns
|
<p>I have a pandas dataframe with over 100 columns, a good portion of them are type string but not all of them. I want to find a way to shorten the length each column for those rows that exceed a certain length.</p>
<p>What would be a sensible approach to do this?</p>
<p>I thought about storing which columns I want to transform in a dictionary object where I have the name of the column and the new field length (the stop).</p>
<pre><code>columns_dict = {'col1':100, 'col2': 500, 'col3':500 ...}
</code></pre>
<p>As for the method, I found that <code>pandas.Series.str.slice</code> would be what I need. So I'm thinking something along the lines of:</p>
<pre><code>def slice_strings(col):
return col.str.slice(start=0, stop=dict.get(col))
</code></pre>
<p>The above does not have the condition specified in this case though.</p>
<p>But how would I be able to apply to only those columns and with the appropriate value?</p>
<p>Can I use <code>pandas.DataFrame.apply</code>?</p>
<pre><code>df = df[df[[list(columns_dict.keys())]].apply(slice_strings, axis=1)]
</code></pre>
|
<python><pandas><dataframe>
|
2023-05-12 08:53:30
| 1
| 375
|
Geosphere
|
76,234,472
| 19,318,120
|
Django Inline for m2m field
|
<p>I've a Page model with m2m field to model called content
I created an inline page but when testing the performance with django silk, I saw 11 queries mostly because of language field that's being fetched at least 4 times
here's my code:</p>
<p>models.py</p>
<pre><code>class Page(models.Model):
created_at = models.DateTimeField(auto_now_add=True)
name = models.ManyToManyField('content.Content', blank=True, related_name="as_page_names_set")
description = models.ManyToManyField('content.Content', blank=True, related_name="as_page_description_set")
slug = models.SlugField()
class Language(models.Model):
language_name = models.CharField(max_length=16, unique=True)
iso_code = models.CharField(max_length=6, unique=True)
class Content(models.Model):
language = models.ForeignKey(Language, models.CASCADE)
content = RichTextUploadingField(max_length=2048)
</code></pre>
<p>admin:</p>
<pre><code>
class PageNameAdmin(admin.StackedInline):
extra = 1
model = Page.name.through
verbose_name = "Page Name"
form = PageContentForm
class PageDescriptionAdmin(admin.StackedInline):
extra = 1
model = Page.description.through
form = PageContentForm
verbose_name = "Page Description"
@admin.register(Page)
class PageAdmin(admin.ModelAdmin):
inlines = [PageNameAdmin]
exclude = ('name', 'description')
def get_queryset(self, request: HttpRequest) -> QuerySet[Any]:
return super().get_queryset(request).prefetch_related('name', 'description')
</code></pre>
<p>forms.py:</p>
<pre><code>class PageContentForm(forms.ModelForm):
language = forms.ModelChoiceField(queryset=Language.objects.all(), required=True)
content = RichTextFormField(required=True)
FIELDS = ['content', 'language']
class Meta:
model = Page.name.through
fields = ('page', 'content', 'language')
def get_initial_for_field(self, field: Field, field_name: str) -> Any:
if self.instance.pk and field_name in self.FIELDS:
return getattr(self.instance.content, field_name)
return super().get_initial_for_field(field, field_name)
def clean(self) -> Dict[str, Any]:
if self.instance.pk is None:
self.cleaned_data['content'] = self.create_content_obj(**self.cleaned_data)
self.cleaned_data.pop('language')
else:
self.cleaned_data['content'] = self.get_content_obj(**self.cleaned_data)
return super().clean()
def create_content_obj(self, content = None, language = None, *args, **kwargs):
return Content.objects.create(content=content, language=language)
def get_content_obj(self, *args, **kwargs):
content_obj = self.instance.content
for k in self.FIELDS:
setattr(content_obj, k, kwargs[k])
content_obj.save()
return content_obj
</code></pre>
<p>What am I doing wrong?</p>
|
<python><django>
|
2023-05-12 08:28:25
| 0
| 484
|
mohamed naser
|
76,234,452
| 2,562,058
|
ClobberWarning with a simple conda build
|
<p>I am trying to build a simple package (only python code) and I get this warning:</p>
<pre><code>ClobberWarning: This transaction has incompatible packages due to a shared path.
packages: conda-forge/noarch::tomli-2.0.1-pyhd8ed1ab_0, conda-forge/osx-arm64::mypy-1.2.0-py311he2be06e_0
path: 'lib/python3.11/site-packages/tomli/__pycache__/__init__.cpython-311.pyc'
</code></pre>
<p>I followed the suggested practice of installing packages from the same channel, but yet I got this issue.</p>
<p>How can I solve it?</p>
|
<python><anaconda><conda><miniconda><conda-build>
|
2023-05-12 08:27:03
| 1
| 1,866
|
Barzi2001
|
76,234,354
| 1,020,139
|
How does Poetry associate a project to its virtual environment?
|
<p>How does Poetry associate a project to its virtual environment in <code>~/.cache/pypoetry/virtualenvs</code>? I can't find any link inside the project, e.g. <code>grep -ie NKJBdMnE .</code> returns nothing.</p>
<p><code>poetry env info</code>:</p>
<pre><code>Virtualenv
Python: 3.10.11
Implementation: CPython
Path: /home/lddpro/.cache/pypoetry/virtualenvs/lxxo-NKJBdMnE-py3.10
Executable: /home/lddpro/.cache/pypoetry/virtualenvs/lxxo-NKJBdMnE-py3.10/bin/python
Valid: True
System
Platform: linux
OS: posix
Python: 3.10.11
Path: /var/lang
Executable: /var/lang/bin/python3.10
</code></pre>
<p><code>poetry config --list</code>:</p>
<pre><code>[lddpro@0a0aecf400ca lddpro-bff]$ poetry config --list
cache-dir = "/home/lddpro/.cache/pypoetry"
experimental.new-installer = true
experimental.system-git-client = false
installer.max-workers = null
installer.no-binary = null
installer.parallel = true
virtualenvs.create = true
virtualenvs.in-project = null
virtualenvs.options.always-copy = false
virtualenvs.options.no-pip = false
virtualenvs.options.no-setuptools = false
virtualenvs.options.system-site-packages = false
virtualenvs.path = "{cache-dir}/virtualenvs" # /home/lddpro/.cache/pypoetry/virtualenvs
virtualenvs.prefer-active-python = false
virtualenvs.prompt = "{project_name}-py{python_version}"
</code></pre>
<p><a href="https://python-poetry.org/docs/configuration/" rel="nofollow noreferrer">https://python-poetry.org/docs/configuration/</a></p>
|
<python><python-poetry>
|
2023-05-12 08:10:34
| 1
| 14,560
|
Shuzheng
|
76,234,337
| 404,109
|
langchain\\chains\\llm_summarization_checker\\prompts\\create_facts.txt error when try to package python file
|
<p>I tried to <code>pyinstaller</code> package my python file which uses <code>langchain</code>. i.e it imports:</p>
<pre><code>from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback
</code></pre>
<p>But when packaging, these imports cause the following error when I try to run the executable.</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'C:\\chatgpt\\dist\\chat\\langchain\\chains\\llm_summarization_checker\\prompts\\create_facts.txt'
</code></pre>
<p><a href="https://i.sstatic.net/yIJ3H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yIJ3H.png" alt="enter image description here" /></a></p>
|
<python><pyinstaller><langchain>
|
2023-05-12 08:07:28
| 6
| 1,059
|
Jason
|
76,234,312
| 13,916,049
|
ImportError: cannot import name 'is_categorical' from 'pandas.api.types'
|
<p>I want to convert <code>file1.hic</code> file into <code>.cool</code> format using <a href="https://github.com/4dn-dcic/hic2cool" rel="noreferrer">hic2cool</a>, which is written in Python. I converted the files using command line:</p>
<pre><code>hic2cool convert file1.hic file1.cool -r 10000
</code></pre>
<p>Traceback:</p>
<pre><code>Traceback (most recent call last):
File "/home/melchua/.local/bin/hic2cool", line 5, in <module>
from hic2cool.__main__ import main
File "/home/melchua/.local/lib/python3.9/site-packages/hic2cool/__init__.py", line 2, in <module>
from .hic2cool_utils import (
File "/home/melchua/.local/lib/python3.9/site-packages/hic2cool/hic2cool_utils.py", line 27, in <module>
import cooler
File "/home/melchua/.local/lib/python3.9/site-packages/cooler/__init__.py", line 14, in <module>
from .api import Cooler, annotate
File "/home/melchua/.local/lib/python3.9/site-packages/cooler/api.py", line 12, in <module>
from .core import (get, region_to_offset, region_to_extent, RangeSelector1D,
File "/home/melchua/.local/lib/python3.9/site-packages/cooler/core.py", line 3, in <module>
from pandas.api.types import is_categorical
ImportError: cannot import name 'is_categorical' from 'pandas.api.types' (/home/melchua/.local/lib/python3.9/site-packages/pandas/api/types/__init__.py)
</code></pre>
|
<python><pandas><linux>
|
2023-05-12 08:03:48
| 2
| 1,545
|
Anon
|
76,234,027
| 21,404,794
|
How to sort the index of a pandas.Dataframe to pass pd.testing.assert_frame_equal
|
<p>I'm using pd.testing.assert_frame_equal to test if a function I've written works as expected, and I'm getting a weird error:</p>
<p>When I compare the 2 dataframes that are printed in exactly the same way (I checked the values are the same with np.array_equal(df1.values, df2.values), so I know they're the same by human standards), It gives me an assertion error:</p>
<pre class="lang-py prettyprint-override"><code>DataFrame.index values are different (90.0 %)
[left]: Int64Index([0, 3, 6, 9, 12, 15, 18, 1, 4, 7, 10, 13, 16, 19, 2, 5, 8, 11, 14,
17],
dtype='int64')
[right]: RangeIndex(start=0, stop=20, step=1)
</code></pre>
<p>where left is the dataframe returned by the function and right is the one I created for the testing.</p>
<p>There are 2 errors here. First, the types of the indexes are not equal, which is weird, but can be solved easily by doing :</p>
<pre class="lang-py prettyprint-override"><code>right.index = list(right.index)
</code></pre>
<p>As seen <a href="https://stackoverflow.com/a/52576441/21404794">here</a>. The second problem is the order of the left indexes, as after adding that line of code to the test it still returns this error:</p>
<pre class="lang-py prettyprint-override"><code>DataFrame.index values are different (90.0 %)
[left]: Int64Index([0, 3, 6, 9, 12, 15, 18, 1, 4, 7, 10, 13, 16, 19, 2, 5, 8, 11, 14,
17],
dtype='int64')
[right]: Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
19],
dtype='int64')
</code></pre>
<p>The function I'm testing splits the dataframe in groups, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html" rel="nofollow noreferrer">melts</a> some of them, and then <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer">joins</a> everything back together.</p>
<p>I'm using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_index.html#pandas.DataFrame.sort_index" rel="nofollow noreferrer">sort_index</a> inside the function to keep the indexes ordered (as they should be), but while in the prints I've done they appear ordered (0,1,2,3...) in the testing error, as you can see they are not ordered.</p>
<p>I've also tried using reindex and reset_index, but they didn't show any difference (maybe there's some hidden option I've not tried out, who knows...</p>
<p>The final question is: How can I order the Indexes of a DataFrame so the testing can pass?</p>
<p><strong>When used in the correct file and function, sort_index allows the test to pass</strong></p>
<p>(I was changing a copy instead of the correct function...)</p>
|
<python><pandas><dataframe>
|
2023-05-12 07:24:45
| 2
| 530
|
David Siret Marqués
|
76,233,986
| 20,646,427
|
How to upload doc/docx file and return pdf file in FastAPI?
|
<p>I'm looking for a way to convert word or excel file to pdf file using in my FastAPI project.</p>
<p>I tried to use <code>comtypes</code> library but that didn't help. I guess this library is not what i exactly need</p>
<p>I got an error</p>
<blockquote>
<p>ctypes.ArgumentError: argument 1: TypeError: Cannot put
<starlette.datastructures.UploadFile object at 0x0000020CFF14E310> in
VARIANT</p>
</blockquote>
<p>main.py</p>
<pre><code>import comtypes.client
app = FastAPI()
@app.post('/')
async def root(file: UploadFile = File(...)):
wdFormatPDF = 17
word = comtypes.client.CreateObject('Word.Application')
doc = word.Documents.Open(file)
doc.SaveAs(f'{file.filename}', FileFormat=wdFormatPDF)
doc.Close()
word.Quit()
return {'file_name': file.filename}
</code></pre>
|
<python><fastapi>
|
2023-05-12 07:20:03
| 0
| 524
|
Zesshi
|
76,233,888
| 7,752,049
|
Scrape data with scrapy
|
<p>I am going to scrape the Pararius.nl as a practice with scrapy but when I start crawling it returns the fairlane protection, how can I pass it? Do I need any tools? please help with one example</p>
<pre><code>def parse(self, response):
url = 'https://www.pararius.nl/{deal_type}/nederland/p-{page}/'
for deal_type in ['huurwoningen', 'koopwoningen']:
for i in range(1, 2):
yield scrapy.Request(url.format(deal_type=deal_type, page=i), callback=self.parse_pages,cookies=self.cookies,
headers=self.h, method='GET', cb_kwargs={'deal_type': deal_type})
def parse_pages(self, response, deal_type):
print(response.url)
return
</code></pre>
|
<python><scrapy>
|
2023-05-12 07:05:30
| 2
| 2,479
|
parastoo
|
76,233,767
| 9,717,388
|
How to implement case-insensitive in SlugRelatedField in drf serializers?
|
<p>I'm using SlugRelatedField in my serializer. I want to validate it on a case-insensitive basis, so I add postfix "__iexact" to the "slug_field" attr. And validation works as I need (case-insensitive).</p>
<pre><code>class MySerializer(ModelSerializer):
customer = serializers.SlugRelatedField(queryset=Customer.objects.all(),
required=False,
allow_null=True,
slug_field='name__iexact')
</code></pre>
<p>But when I try to get serializer.data, the following error occurs:<br />
<code>* {AttributeError}'Customer' object has no attribute 'name__iexact'</code><br />
How can it be solved?</p>
|
<python><django><django-rest-framework>
|
2023-05-12 06:48:14
| 1
| 343
|
PolYarBear
|
76,233,637
| 1,702,957
|
Offline Anaconda Installation package
|
<p>This is relatively simple question but after surfing on internet for about an hour, I have to ask. I am looking for Anaconda offline installation but couldn't find any link to the installer. According to <a href="https://docs.anaconda.com/anaconda-repository/admin-guide/install/airgap-archive/" rel="nofollow noreferrer">this</a> link, installer must be around 30GB for windows 64 bit. But in <a href="https://repo.anaconda.com/pkgs/" rel="nofollow noreferrer">archvies</a> I am not able to see this file.</p>
|
<python><installation><anaconda><offline>
|
2023-05-12 06:26:56
| 1
| 1,639
|
aneela
|
76,233,578
| 2,287,458
|
Group/Pivot Data Values By First & Last Entry in DataFrame
|
<p>Let's assume I have the following data (in reality I have millions of entries):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import datetime as dt
df_data = pd.DataFrame([
[dt.date(2023, 5, 8), 'Firm A', 'AS', 250.0, -1069.1],
[dt.date(2023, 5, 8), 'Firm A', 'JM', 255.0, -1045.5],
[dt.date(2023, 5, 8), 'Firm A', 'WC', 250.0, -1068.8],
[dt.date(2023, 5, 11), 'Firm A', 'WC', 250.0, -1068.8],
[dt.date(2023, 5, 8), 'Firm B', 'AS', 31.9, -317.9],
[dt.date(2023, 5, 8), 'Firm B', 'JM', 33.5, -310.7],
[dt.date(2023, 5, 8), 'Firm B', 'WC', 34.5, -305.9],
[dt.date(2023, 5, 11), 'Firm B', 'AS', 33.0, -313.1],
[dt.date(2023, 5, 11), 'Firm B', 'JM', 33.5, -310.7],
[dt.date(2023, 5, 11), 'Firm B', 'WC', 35.0, -303.5],
[dt.date(2023, 5, 10), 'Firm C', 'BC', 167.0, 301.0],
[dt.date(2023, 5, 9), 'Firm D', 'BA', 791.9, 1025.0],
[dt.date(2023, 5, 9), 'Firm D', 'CT', 783.8, 1000.0],
[dt.date(2023, 5, 11), 'Firm D', 'BA', 783.8, 1000.0],
[dt.date(2023, 5, 11), 'Firm D', 'CT', 767.9, 950.0]],
columns=['Date', 'Name', 'Source', 'Value1', 'Value2'])
</code></pre>
<p>Now for each <code>Name</code> I want to find its <strong>first</strong> & <strong>last</strong> available <code>Date</code> and compute the <code>mean</code> for the columns <code>Value1</code> & <code>Value2</code> for each of the dates. And ultimately, I want to compute the <strong>change</strong> in both values between the first & last date.</p>
<p>The problem is that not all names have data on the same dates, and some names only have data on 1 date.</p>
<p>The following approach works:</p>
<pre class="lang-py prettyprint-override"><code>def compute_entry(df: pd.DataFrame) -> dict:
dt_min = df.Date.min()
dt_max = df.Date.max()
idx_min = df.Date == dt_min
idx_max = df.Date == dt_max
data = {
'Min Date': dt_min,
'AvgValue1 (Min)': df[idx_min].Value1.mean(),
'AvgValue2 (Min)': df[idx_min].Value2.mean(),
'#Sources (Min)': df[idx_min].Value2.count(),
'Max Date': dt_max,
'AvgValue1 (Max)': df[idx_max].Value1.mean(),
'AvgValue2 (Max)': df[idx_max].Value2.mean(),
'#Sources (Max)': df[idx_max].Value2.count(),
'Value1 Change': df[idx_max].Value1.mean() - df[idx_min].Value1.mean(),
'Value2 Change': df[idx_max].Value2.mean() - df[idx_min].Value2.mean()
}
return data
df_pivot = pd.DataFrame.from_dict({sn_id: compute_entry(df_sub)
for sn_id, df_sub in df_data.groupby('Name')}, orient='index')
</code></pre>
<p>And gives the <strong>desired</strong> format:
<a href="https://i.sstatic.net/y1g0S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y1g0S.png" alt="Pivot Table" /></a>
However, this approach is very slow for many entries.</p>
<p>So instead I tried using <code>pd.pivot_table</code> which is much faster:</p>
<pre class="lang-py prettyprint-override"><code>pd.pivot_table(df_data,
index=['Name', 'Date'],
aggfunc={'Value1': np.mean, 'Value2': np.mean, 'Source': len})
</code></pre>
<p>But the output is not quite in the right format, and I find it difficult to convert the pivot table into the same <strong>desired</strong> format as above.</p>
<p>Is there a good way to use pandas built-in (vectorised) functions to achieve the <strong>desired</strong> format?</p>
|
<python><python-3.x><pandas><dataframe><pivot-table>
|
2023-05-12 06:15:48
| 2
| 3,591
|
Phil-ZXX
|
76,233,570
| 8,236,050
|
Show cursor in Selenium Webdriver and s
|
<p>I am building a Selenium Webdriver script in Python. My issue is that I am performing some actions that consist on moving to a certain position and clicking there. My action is not working, so I wanted to run the script as non headless to see what is happening, but, as the cursor does not appear in the screen, I am not able to see if my mouse is moving as I expected. Is there any way to enable seeing the cursor when running Selenium in a browser window? I am using Chromium but I am planning on adding more browsers to this script, so I would need something that at least works for Chromium, but it would be useful to have a solution for other browsers too. The code I have to move to the element and click it is this:</p>
<pre><code>def scrollToCenter(driver, element):
# Get the height of the window
window_height = driver.execute_script("return window.innerHeight;")
# Get the position of the element relative to the top of the page
element_position = element.location['y']
# Calculate the position to scroll to
scroll_position = element_position - window_height/2
# Scroll the page to the desired position
driver.execute_script("window.scrollTo(0, {});".format(scroll_position))
# move the cursor to the element
actions = ActionChains(driver)
actions.move_to_element(element).perform()
def simulateClick(driver, el):
# Scroll to the point where the element is located
driver.execute_script("arguments[0].scrollIntoView();", el)
try:
# Move to the element and click it
scrollToCenter(driver, el)
el.click()
except Exception as e:
print(e)
</code></pre>
<p>Seeing the scroll being performed would also be useful.</p>
<p>Also, I'd like to know if I can store this test case's headless execution in a video, so that I do not need to watch the execution while it happens to check if the mouse is in the correct position or not.</p>
|
<python><google-chrome><selenium-webdriver><selenium-chromedriver><google-chrome-headless>
|
2023-05-12 06:14:02
| 2
| 513
|
pepito
|
76,233,348
| 2,552,290
|
idiom for protecting against inexact native division in code that uses sympy?
|
<p>I am developing some code whose variables and function params are
sometimes native python numeric types (int, float)
and sometimes sympy types
(sympy.core.numbers.Integer, sympy.core.numbers.Rational, sympy.core.symbol.Symbol,
sympy.core.add.Add, etc.).
And I sometimes want to express division (/),
but I'm having trouble finding a non-error-prone way to express it.</p>
<p>Here is some very simple representative example code, which works fine, until it doesn't:</p>
<pre><code>import sympy
def MyAverageOfThreeNumbers(a, b, c):
return (a + b + c) / 3
print(MyAverageOfThreeNumbers(0, 1, 2))
# 1.0
print(type(MyAverageOfThreeNumbers(0, 1, 2)))
#<class 'float'>
print(MyAverageOfThreeNumbers(sympy.Integer(0), 1, 2))
# 1
print(type(MyAverageOfThreeNumbers(sympy.Integer(0), 1, 2)))
#<class 'sympy.core.numbers.One'>
x = sympy.symbols("x")
print(MyAverageOfThreeNumbers(x, 1, 2))
# x/3 + 1
print(type(MyAverageOfThreeNumbers(x, 1, 2)))
# <class 'sympy.core.add.Add'>
print(MyAverageOfThreeNumbers(x, x, x))
# x
print(type(MyAverageOfThreeNumbers(x, x, x)))
# <class 'sympy.core.symbol.Symbol'>
</code></pre>
<p>So far so good; but then...</p>
<pre><code>print(MyAverageOfThreeNumbers(1, 1, 2))
# 1.3333333333333333 <-- bad! I want 4/3
print(type(MyAverageOfThreeNumbers(1, 1, 2)))
# <class 'float'> <-- bad! I want sympy.core.numbers.Rational or equivalent
print(sympy.Rational(MyAverageOfThreeNumbers(1, 1, 2)))
# 6004799503160661/4503599627370496 <-- bad! I want 4/3
print(type(sympy.Rational(MyAverageOfThreeNumbers(1, 1, 2))))
# <class 'sympy.core.numbers.Rational'>
</code></pre>
<p>Solutions I've considered:</p>
<p>(1) Whenever I type '/' in my code,
make sure at least one of the operands is a sympy type rather than native.
E.g. one way to safely rewrite my function would be as follows:</p>
<pre><code> def MyAverageOfThreeNumbers(a, b, c):
return (a + b + c) * sympy.core.numbers.One() / 3
print(MyAverageOfThreeNumbers(1, 1, 2))
# 4/3 <-- good!
</code></pre>
<p>(2) Avoid/prohibit use of '/' in my code entirely, except in this helper function:</p>
<pre><code> def MySafeDivide(a, b):
return a * sympy.core.numbers.One() / b
</code></pre>
<p>(in fact I could avoid it there too, using <a href="https://docs.python.org/3/library/operator.html#operator.truediv" rel="nofollow noreferrer">operator.truediv</a> instead of the <code>/</code> operator).
Then I'd rewrite my function as:</p>
<pre><code> def MyAverageOfThreeNumbers(a, b, c):
return MySafeDivide(a + b + c, 3)
</code></pre>
<p>(3) Whenever I write a function designed to accept both native types and sympy times,
always convert to sympy types at the beginning of the function body:
E.g. I'd rewrite my function as:</p>
<pre><code> def MyAverageOfThreeNumbers(a, b, c):
# In case any or all of a,b,c are native types...
a *= sympy.core.numbers.One()
b *= sympy.core.numbers.One()
c *= sympy.core.numbers.One()
# Now a,b,c can be used safely in subsequent arithmetic
# that may involve '/', without having to scrutinize the code too closely.
return (a + b + c) / 3
</code></pre>
<p>All three of the above solutions seem ugly and (more importantly) error prone,
and they require me to periodically audit my code
to make sure I haven't mistakenly added any new unsafe uses of '/'.
Also, I'm finding that it's too tempting to leave the following very frequent
kind of expression as-is, since it's safe:</p>
<pre><code> some_python_expression/2
</code></pre>
<p>rather than rewriting it as one of:</p>
<pre><code> (some_python_expression * sympy.core.numbers.One()) / 2
</code></pre>
<p>or:</p>
<pre><code> MySafeDivide(some_python_expression, 2)
</code></pre>
<p>but then that makes my code harder to audit for mistakes,
since <code>some_python_expression/2</code> is safe but <code>some_python_expression/3</code> isn't.
(Nit: actually even <code>some_python_expression/2</code> isn't <em>completely</em> safe, e.g. <code>2**-1074/2</code> yields <code>0.0</code>)</p>
<p>So I'm looking for a robust maintainable solution that will bulletproof my code from this kind of mistake. Ideally I'd like to either:</p>
<ul>
<li>consistently override '/' so that it always calls MySafeDivide() (see above) throughout my python file, or</li>
<li>prohibit the use of '/' throughout my python file (ideally at compile time, but runtime would be better than nothing)</li>
</ul>
<p>Are either of these things possible in python?
Note, I want to stick with standard python3 as the interpreter, which rules out solutions that require help from a nonstandard interpreter or compiler.</p>
|
<python><sympy>
|
2023-05-12 05:23:14
| 2
| 5,611
|
Don Hatch
|
76,233,333
| 13,916,049
|
Convert selected files in directory into another format
|
<p>For all the files in the input directory ending with "hic", I want to convert them to cool format. A single file can be converted using:</p>
<pre><code>hic2cool_convert(<infile>, <outfile>)
</code></pre>
<p>But I'm not sure how to write a for loop to convert all the relevant files.</p>
<pre><code>indir = "./others/"
outdir = "output/"
for file in os.listdir(indir + "*hic"):
with open(os.path.join(indir + "*hic", file), "r") as f: # open in readonly mode
hic2cool_convert(f, outdir + "f")
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Input In [11], in <cell line: 1>()
----> 1 for file in os.listdir(indir + "*hic"):
2 with open(os.path.join(indir + "*hic", file), "r") as f: # open in readonly mode
3 hic2cool_convert(f, outdir + "f")
FileNotFoundError: [Errno 2] No such file or directory: './others/*hic'
</code></pre>
|
<python><file><directory>
|
2023-05-12 05:18:58
| 2
| 1,545
|
Anon
|
76,233,316
| 597,858
|
Main window shrinks in tkinter
|
<p>I have a simple tkinter app that plots a sine curve. The problem I am facing is that when I press the PLOT button, original root window shrinks in size. I want to keep the window size fixed, irrespective of pressing the PLOT button or not. what shall I do?</p>
<pre><code>import tkinter as tk
import matplotlib.pyplot as plt
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import numpy as np
# Function to handle button click event
def plot_sine_wave():
# Generate random data for the sine wave
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x + np.random.uniform(-0.5, 0.5))
# Create a new window for the plot
plot_window = tk.Toplevel(root)
plot_window.title("Random Sine Wave Plot")
# Create the plot
plt.figure()
plt.plot(x, y)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Random Sine Wave")
plt.grid(True)
# Display the plot in the new window
canvas = FigureCanvasTkAgg(plt.gcf(), master=plot_window)
canvas.draw()
canvas.get_tk_widget().pack()
# Create the main window
root = tk.Tk()
root.title("Sine Wave Plot")
root.geometry("200x100")
root.resizable(False, False) # Set resizable attribute to False
# Create the 'PLOT' button
plot_button = tk.Button(root, text="PLOT", command=plot_sine_wave)
plot_button.pack(pady=20)
# Run the Tkinter event loop
root.mainloop()
</code></pre>
|
<python><tkinter>
|
2023-05-12 05:15:13
| 1
| 10,020
|
KawaiKx
|
76,233,267
| 10,576,322
|
Python logging of conditional imports in libraries
|
<p>I have a library with a module that does an conditional import.</p>
<p>I want to be able in my applications to log the outcome of the conditional import. But that forces me to configure my handlers etc. before the import of that module, which is regarded bad style.</p>
<p>The module looks something like that.</p>
<pre><code>from logging import getLogger
import foo
try:
import bar as br
BAR_INSTALLED = True
msg = "bar is imported."
except ImportError:
BAR_INSTALLED = False
msg = "bar is not available."
logger = getLogger(__main__)
logger.info(msg)
</code></pre>
<p>Bonus question: Pycharm is complaining that br is not defined in the except clause. I really don't need it, because in that case you really have to force yoursslf to import the wrong function and try to use it wizhout br. Should one care about it?</p>
<p>Edit: Regarding the comments on the bonus question, I changed the code to the following:</p>
<p>my_package/my_module.py</p>
<pre><code>from logging import getLogger
import foo
try:
import bar as br
except ImportError:
br = None
logger = getLogger(__main__)
if br:
msg = "bar is imported."
else:
msg = "bar is not available."
logger.info(msg)
</code></pre>
<p>To make it more clear. I have a completly other script in an environment where I installed my_package.
In this script I want to log whether br was installed that currently forces me to to something like:</p>
<pre><code>import logging.config
logging.config.dictConfig(config_dict)
import my_package
# other code
</code></pre>
|
<python><logging><import><python-logging>
|
2023-05-12 05:03:11
| 1
| 426
|
FordPrefect
|
76,233,184
| 302,102
|
Defining fst and snd with type variables
|
<p>I sometimes encounter typing errors when using Haskell-style <code>fst</code> and <code>snd</code> functions defined in Python as follows:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar
a = TypeVar("a")
b = TypeVar("b")
def fst(x: tuple[a, b]) -> a:
"""Return the first element of a pair."""
return x[0]
def snd(x: tuple[a, b]) -> b:
"""Return the second element of a pair."""
return x[1]
</code></pre>
<p>As an example, running the following through <a href="https://www.mypy-lang.org/" rel="nofollow noreferrer">mypy</a> succeeds:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Iterable, Iterator
def f(xs: Iterable[tuple[str, int]]) -> Iterator[int]:
"""Return second element of string-integer pairs."""
return map(snd, xs)
</code></pre>
<p>However, this function:</p>
<pre class="lang-py prettyprint-override"><code>def g(xs: Iterable[tuple[str, int]]) -> list[int]:
"""Return second element of string-integer pairs."""
return list(map(snd, xs))
</code></pre>
<p>results in the following mypy error:</p>
<pre><code>error: Argument 1 to "map" has incompatible type "Callable[[Tuple[a, b]], b]"; expected "Callable[[Tuple[str, int]], b]" [arg-type]
</code></pre>
<p>What is the cause of this error? Are <code>fst</code> and <code>snd</code> defined incorrectly?</p>
|
<python><mypy>
|
2023-05-12 04:44:13
| 0
| 1,935
|
aparkerlue
|
76,233,172
| 19,079,397
|
How to convert geopandas dataframe with multi-polygon into geojson?
|
<p>I have a Geopandas data frame with multi-polygons geometries. Now, I want to convert the data frame into geojson. So, I have converted the dataframe into <code>dict</code> and then used <code>json.dump(dict)</code> to convert the data frame into json. This works well when I have single polygon but throws error <code>TypeError: Object of type MultiPolygon is not JSON serializable</code> when the geometry column has multi-polygon. What is the best to convert geopandas dataframe into <code>json</code> <code>seraliazble</code> whether the geometry is multi-polygon or polygon.</p>
<pre><code>df=
location geometry
1 MULTIPOLYGON (((-0.304766 51.425882, -0.304904...
2 MULTIPOLYGON (((-0.305968 51.427425, -0.30608 ...
3 MULTIPOLYGON (((-0.358358 51.423471, -0.3581 5...
4 MULTIPOLYGON (((-0.357654 51.413925, -0.357604...
list_data = df.to_dict(orient='records')
print(json.dumps(list_data))
</code></pre>
<p>Error:-</p>
<pre><code>TypeError: Object of type MultiPolygon is not JSON serializable
</code></pre>
|
<python><json><geojson><geopandas><multipolygons>
|
2023-05-12 04:42:04
| 2
| 615
|
data en
|
76,233,092
| 3,009,875
|
Django ReportLab 'list' object has no attribute 'getKeepWithNext'
|
<p>I am trying to generate a pdf that will look like this:<a href="https://i.sstatic.net/lecxv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lecxv.png" alt="enter image description here" /></a></p>
<p>Here is my code:</p>
<pre><code>doc = doc = SimpleDocTemplate(invoice_name)
flowable = []
flowable.append(Paragraph(invoice_name))
flowable.append(Paragraph(invoice_number))
flowable.append(Paragraph(invoice_date))
flowable.append(Spacer(1,4))
t = Table(table_data)
flowable.append(t)
flowable.append(Spacer(1,4))
flowable.append(Paragraph(comment_and_amount))
doc.build(flowable)
</code></pre>
<p>But I get this error: <code>'list' object has no attribute 'getKeepWithNext'</code></p>
|
<python><django><reportlab>
|
2023-05-12 04:18:20
| 1
| 2,209
|
Yax
|
76,234,086
| 7,090,396
|
Scan Gun returning empty data in Python Script
|
<p>I have a symbol barcode scanner model DS4308 that I am trying to pull scan data from.</p>
<p>In my python script, I am using the "hid" library via <code>import hid</code> to connect to the device. Here is the code block that is used to connect to the scanner:</p>
<pre><code> def initScanner(self):
for device in hid.enumerate():
if device['path'].endswith('hidraw6'.encode()):
# Open the device
h = hid.Device(path=device['path'])
print("Initialized Scanner")
return h
...
scanner = self.initScanner()
...
</code></pre>
<p>Which that part works and I am able to connect to the scanner. I then attempt to read the output from the scanner here:</p>
<pre><code> while True:
barcode = scanner.read(64)
if barcode:
print(barcode)
</code></pre>
<p>While debugging, the script holds on <code>barcode = scanner.read(64)</code> which is expected as it's a blocking process. When I scan a test barcode, however, the output is always this:</p>
<pre><code>b'\x00\x00\x00\x00\x00\x00\x00\x00'
b'\x00\x00\x00\x00\x00\x00\x00\x00'
</code></pre>
<p>I've tried multiple barcodes and they all return that same value.</p>
<p>I have no experience with programming barcode scanners, so I don't really know where to even start with this. I can't find any documentation or tutorials on how to program the scanner other than what is on the product page here: <a href="https://www.zebra.com/us/en/support-downloads/scanners/general-purpose-scanners/ds4308.html#pageandfilelist_e56b" rel="nofollow noreferrer">https://www.zebra.com/us/en/support-downloads/scanners/general-purpose-scanners/ds4308.html#pageandfilelist_e56b</a></p>
<p>But I don't really understand any of what they are talking about in the manuals. Is there something more I need to do with the input from <code>scanner.read(64)</code> or something else I need to configure with the scanner to get it to work?</p>
<p>Any help would be appreciated.</p>
|
<python>
|
2023-05-12 04:02:37
| 1
| 1,512
|
Eric Brown
|
76,233,033
| 3,741,571
|
Why do Python and Java behave differently when decoding GBK
|
<p>This question is related to <a href="https://stackoverflow.com/questions/76232910/">How to decode bytes in GB18030 correctly</a>.</p>
<p>I would like to decode an array of bytes which are encoded with <a href="https://en.wikipedia.org/wiki/GBK_(character_encoding)" rel="nofollow noreferrer">GBK</a>, but found that Python and Java behave differently sometimes.</p>
<pre class="lang-py prettyprint-override"><code>ch = b'\xA6\xDA'
print(ch.decode('gbk'))
</code></pre>
<p>It raises an error:</p>
<blockquote>
<p>UnicodeDecodeError: 'gbk' codec can't decode byte 0xa6 in position 0: illegal multibyte sequence</p>
</blockquote>
<p>Java is able to decode it.</p>
<pre class="lang-java prettyprint-override"><code>byte[] data = {(byte) 0xA6, (byte) 0xDA};
String s = new String(data, Charset.forName("GBK"));
System.out.println(s);
</code></pre>
<p>It seems that Python and Java adopt different implementations for <code>GBK</code>, right?</p>
|
<python><java><unicode><character-encoding>
|
2023-05-12 03:58:03
| 1
| 6,975
|
chenzhongpu
|
76,232,951
| 8,205,099
|
Docker configuration working in one machine(Ubuntu) but not in other(Mac M1)
|
<p>I have setup Django with MySQL in docker compose in my mac M1. Everytime I do docker-compose up I get error for django application</p>
<pre><code>"Error: File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 166, in __init__
web_1 | super(Connection, self).__init__(*args, **kwargs2)
web_1 | django.db.utils.OperationalError: (2002, "Can't connect to MySQL server on 'db' (115)")
</code></pre>
<p>I have another machine (ubuntu) with same docker configuration there I don't get any issue.</p>
<pre><code>version: '3.2'
services:
web:
restart: always
expose:
- "5000"
build:
context: .
dockerfile: Dockerfile
command: python manage.py runserver 0.0.0.0:5000
ports:
- "9000:5000"
environment:
- MYSQL_HOST=db
- MYSQL_USER=me
- MYSQL_PASSWORD=me
- MYSQL_DB=my_db
depends_on:
- db
volumes:
- ".:/code/google"
db:
image: mysql:5.7
restart: always
expose:
- "9306"
ports:
- "9306:3306"
environment:
- MYSQL_DATABASE=my_db
- MYSQL_USER=me
- MYSQL_PASSWORD=me
- MYSQL_ROOT_PASSWORD=me
volumes:
- "./db_data:/opt/homebrew/var/mysql"
- "./mysql-init:/docker-entrypoint-initdb.d"
</code></pre>
<p>Here is my django setting</p>
<pre><code>DATABASES = {
"default": {
"ENGINE": "django.db.backends.mysql",
"NAME": "my_db",
"USER": "me",
"PASSWORD": "me",
"HOST": "db",
"PORT": "3306",
}
}
</code></pre>
<p>I have tried many things most of which i don't remember now. i.e i tried swapping the services, keeping only db service seems to work without web, tried changing the volume location, deleted all images, containers and volumes and rebuilding.</p>
<p>If this works in ubuntu one machine shouldn't it work with mac M1 too ? Isn't this what docker guarantee ?</p>
|
<python><python-3.x><django><docker><docker-compose>
|
2023-05-12 03:36:14
| 1
| 4,176
|
Wahdat Jan
|
76,232,910
| 3,741,571
|
How to decode bytes in GB18030 correctly
|
<p>Recently, I am studying <a href="https://en.wikipedia.org/wiki/GB_18030" rel="nofollow noreferrer">GB 18030</a>, and found that some characters cannot encoded/decoded correctly when they are mapped to the <em>private user area</em> (PUA). For example, according to the specification of GB 18030, this is <em>A6</em> double-byte region:</p>
<p><a href="https://i.sstatic.net/6hmja.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6hmja.png" alt="enter image description here" /></a></p>
<p>Most characters' encoding/decoding are expected:</p>
<pre class="lang-py prettyprint-override"><code>ch = b'\xA6\F5'
print(ch.decode('gb18030')) # ︴
</code></pre>
<p>But some are not:</p>
<pre class="lang-py prettyprint-override"><code>ch = b'\xA6\xDA'
print(ch.decode('gb18030')) # , but it should be︒
</code></pre>
<p>I also tested in Java, and the results are the same. It seems that if the characters are mapped to PUA, neither Java nor Python can decode them correctly. Is it related to the specific language/library implementation?</p>
|
<python><java><character-encoding>
|
2023-05-12 03:21:56
| 0
| 6,975
|
chenzhongpu
|
76,232,802
| 14,399,602
|
What's wrong with using df.loc to assign new values?
|
<p>I do suppose the following two functions generate the same results:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
def test_fn1(df: pd.DataFrame) -> pd.DataFrame:
df['b'] = df['b'].apply(lambda x: [x] if isinstance(x, dict) else x)
return df
def test_fn2(df: pd.DataFrame) -> pd.DataFrame:
df.loc['b'] = df['b'].apply(lambda x: [x] if isinstance(x, dict) else x)
return df
</code></pre>
<p><code>test_fn1</code> works as expected:</p>
<pre class="lang-py prettyprint-override"><code>test_df = pd.DataFrame({'a': [1, 2, 3], 'b': [{'c': 1}, {'c': 2}, {'c': 3}]})
test_df
a b
0 1 {'c': 1}
1 2 {'c': 2}
2 3 {'c': 3}
</code></pre>
<pre><code>test_fn1(test_df)
a b
0 1 [{'c': 1}]
1 2 [{'c': 2}]
2 3 [{'c': 3}]
</code></pre>
<p>Somthing wrong with <code>test_fn2</code> which uses <code>.loc</code> to assign values:</p>
<pre class="lang-py prettyprint-override"><code>test_fn2(test_df)
a b
0 1.0 [{'c': 1}]
1 2.0 [{'c': 2}]
2 3.0 [{'c': 3}]
b NaN NaN
</code></pre>
<p>What causes the last row?</p>
|
<python><pandas><dataframe>
|
2023-05-12 02:42:04
| 1
| 880
|
Michael Chao
|
76,232,426
| 12,091,935
|
How to generate random variables from a probability density function/ probability mass function
|
<p>I am trying simulate random arrival times for cars that will park in a small parking lot. I want to use a random generator to create random parking events. I am not sure if numpy is the correct library for this job.</p>
<p>Example:
I use randint to generate the number of cars that will park on that day randint(3, high = 20), then I use the Poisson random variable generator to output the charging times in seconds in a day (k-index) [0 – 86,399] with a mean of 35,999 (10:00)</p>
<p>I tried using <code>np.random.poisson(lam=(35999.), size=(np.random.randint(3, high = 20), 1))</code>, but the random variables seem to be too centered around 35,999</p>
<pre><code>array([[35843],
[35923],
[35872],
[36118],
[36426],
[35998]])
array([[36163],
[35895],
[35756],
[35799],
[36240],
[36397]])
array([[36001],
[36256],
[36084],
[36201]])
array([[36002],
[35888],
[36200],
[36035],
[35931],
[35688],
[36054],
[36251],
[35968],
[35888],
[35732],
[35503],
[36076],
[36131],
[35994],
[36075]])
</code></pre>
<p>I am looking for values with a higher variance like:</p>
<pre><code>get_poisson_rv(lam = 35999, size = randint(3,high = 20))
array([[16547],
[35923],
[29300],
[43456],
[51234],
[70754]])
array([[12473],
[28792],
[33543],
[38967],
[44563],
[56373]])
array([[6598],
[27650],
[38982],
[45632]])
array([[2564],
[8734],
[15267],
[22543],
[28965],
[32054],
[36658],
[38251],
[43968],
[44888],
[49732],
[53503],
[58076],
[68131],
[74994],
[82350]])
</code></pre>
<p>I am not sure if a Poisson function is the best method for this, and how I could increase the variance. Also, is there a way for python to generate a mixed or compound Poisson function. Apologies for any naive question, I am new to use random generators for multiple peak times. Thank you for the help</p>
|
<python><numpy><scipy><data-science><probability-density>
|
2023-05-12 00:32:50
| 1
| 435
|
Luis Enriquez-Contreras
|
76,232,399
| 8,838,303
|
Python: How to handle indirect references when inserting into PostgreSQL-tables?
|
<p>I have two .csv files named "country.csv", which looks like this:</p>
<pre><code>Country Code,Country Name,Country-ID
US,United States,0
DE,Germany,1
AU,Australia,2
CZ,Czechia,3
CA,Canada,4
AR,Argentina,5
BR,Brazil,6
PT,Portugal,7
GB,United Kingdom,8
IT,Italy,9
GG,Guernsey,10
RO,Romania,11
</code></pre>
<p>and "users.csv", which looks like this:</p>
<pre><code>User-ID,Age,username,Country-ID
1,,madMeerkat6#yHazv,0
2,18.0,innocentUnicorn8#eCMNj,1
3,,jubilantStork8#YgoL-,0
4,17.0,hushedOatmeal4#y5QVW,0
5,,thrilledRhino7#3PYN3,0
6,61.0,insecureCaviar4#xosWW,0
7,,artisticGarlic3#Sla7S,2
8,,dearMandrill9#c1J0m,1
9,,cynicalDinosaur3#0wSxC,0
10,26.0,gloomyCake2#eRcdC,0
11,14.0,sincereCockatoo6#eDuI_,0
</code></pre>
<p>I had to generate the following PostgreSQL-tables using (precisely) the commands:</p>
<pre><code>CREATE TABLE Country (
ISO_3166 CHAR(2) PRIMARY KEY,
CountryName VARCHAR(256),
CID varchar(16)
);
CREATE TABLE Users (
UID INT PRIMARY KEY,
Username VARCHAR(256),
DoB DATE,
Age INT,
ISO_3166 CHAR(2) REFERENCES Country (ISO_3166)
);
</code></pre>
<p>and now I want to insert the values from the csv-files into the tables. My attempt was the following Python script:</p>
<pre><code>import csv
import sys
import psycopg2
from psycopg2 import extras
import re
import ast
from datetime import date
def csv_to_dictionary(csv_name, delimiter):
input_file = csv.DictReader(open(csv_name, 'r', encoding='utf-8'), delimiter=delimiter)
return input_file
sql_con = psycopg2.connect(host='localhost', port='5432', database="XYZ", user='postgres', password='XYZ')
cursor = sql_con.cursor()
country_dictionary = csv_to_dictionary("country.csv", ',')
for row in country_dictionary:
cursor.execute(""" INSERT INTO country (iso_3166, countryname, cid) VALUES (%s, %s, %s) """, (row["Country Code"], row["Country Name"], row["Country-ID"]))
user_dictionary = csv_to_dictionary("user.csv", ',')
for row in user_dictionary:
if row["Age"] == "" and row["Country-ID"] == "0":
cursor.execute(""" INSERT INTO users (uid, username) VALUES (%s, %s) """, (int(row["User-ID"]), row["username"]))
elif row["Age"] != "" and row["Country-ID"] == "0":
cursor.execute(""" INSERT INTO users (uid, username, age) VALUES (%s, %s, %s) """, (int(row["User-ID"]), row["username"], int(float(row["Age"]))))
elif row["Age"] == "" and row["Country-ID"] != "0":
cursor.execute(""" INSERT INTO users (uid, username, iso_3166) VALUES (%s, %s, %s) """, (int(row["User-ID"]), row["username"], row["Country-ID"]))
else:
cursor.execute(""" INSERT INTO users (uid, username, age, iso_3166) VALUES (%s, %s, %s, %s) """, (int(row["User-ID"]), row["username"], int(float(row["Age"])), row["Country-ID"]))
sql_con.commit()
cursor.close()
sql_con.close()
</code></pre>
<p>The insertion of the data from "country.csv" works fine, however, the problem here is that "ISO_3166" in the table "Users" references "ISO_3166" in the table "Country", but users.csv" only contains the "Country-ID" in ( which is the same as the "Country-ID" in the "users.csv"). I know that there is a 1-1 correspondences between "Country" and "Country-ID" (in "country.csv"), but I do not see how to obtain the "Country" from the corresponding "Country_ID".</p>
<p>Could you please tell me how to achieve this?</p>
|
<python><sql><database><postgresql>
|
2023-05-12 00:22:31
| 1
| 475
|
3nondatur
|
76,232,300
| 4,422,095
|
How to create DAGs with Spark?
|
<p>I'm new to Spark and I've realized that, for the pipeline I'm making, it would be much more convenient to have a DAG to represent the pipeline to improve monitoring, scheduling, etc.</p>
<p>I connected Spark to my MySQL database and ran a few scripts with Spark dataframes using PyTorch and it worked great. I was able to apply machine learning models and stuff.</p>
<p>The problems started once I started looking to set up a DAG. I had read Dagster is more lightweight then airflow, so I decided to use Dagster, but this created issues.</p>
<p>My goal was, for each set of transformations to do to my Spark data frame, I was going to define separate @op functions in dagster that would let me put them into a nice flow chart so I could observe them during execution from the dagit GUI.</p>
<p>However, this doesn't work because apparently you can't pass Spark DFs between these functions since dagster serializes the outputs and then deserializes them once inputted into the next function.</p>
<p>Airflow also has a similar problem it seems whereby, in order to pass data between wo tasks, you have to use the XCom (Cross Communication) to facilitate communication and data exchange between tasks within a DAG.</p>
<p>Thus, it seems like neither of these are suitable for passing data between different tasks, so I'm confused, how does one use DAGs to organize data processing in Spark?</p>
|
<python><apache-spark><airflow><dagster>
|
2023-05-11 23:55:35
| 1
| 2,244
|
Stan Shunpike
|
76,232,295
| 8,243,535
|
python get a file by relative path
|
<p>I have been fumbling with this for a while, and I am thinking maybe there is no real way of doing this in python and the best approach is to always let the user figure out their absolute path and pass that in to my function and I can run it.</p>
<p>this code works if I run it from the directory that the code <code>read_file.py</code> is in. However, if I navigate one directory up <code>cd ..</code> and try to run <code>python my_directory/read_file.py</code> then it will fail and say file cannot be found.</p>
<p>Is there a way that I can pass in relative path and have it always work regardless of where I am in the terminal and am running this code. I come from a NodeJS and Java background.</p>
<h2>Input</h2>
<pre class="lang-py prettyprint-override"><code>"""
read the a file from a path with both relative and absolute value to see which one works
the function accepts a path value instead of a str value
"""
import os
from pathlib import Path
def read_file_resolve(file_path: Path) -> None:
file_path = file_path.resolve()
with file_path.open() as f:
print(f.read())
def read_file_os_absolute(file_path: Path) -> None:
file_path = os.path.abspath(file_path)
file_path = Path(file_path)
with file_path.open() as f:
print(f.read())
if __name__ == "__main__":
my_file_path = Path("./upload_to_s3/my_downloaded_test.txt")
# Get the absolute path to the file.
# absolute_file_path = os.path.abspath(my_file_path)
# Read the file.
try:
read_file_resolve(file_path=my_file_path)
print("resolve worked")
except Exception:
read_file_os_absolute(file_path=my_file_path)
print("os abs worked")
</code></pre>
<h2>output:</h2>
<pre class="lang-bash prettyprint-override"><code>FileNotFoundError: [Errno 2] No such file or directory
</code></pre>
<p>but when I run it from the directory that it is in, then it works fine</p>
|
<python><python-3.x><path>
|
2023-05-11 23:54:11
| 1
| 443
|
AugustusCaesar
|
76,232,123
| 3,486,773
|
How to replace string in column names of pyspark dataframe?
|
<p>I have a pyspark data frame that every column appends the table name ie: <code>Table.col1</code>, <code>Table.col2</code>...</p>
<p>I would like to replace '<code>Table.'</code> with <code>''</code> (nothing) in every column in my dataframe.</p>
<p>How do I do this? Everything I have found deals with doing this to the values in the columns and not the column names themselves.</p>
|
<python><dataframe><apache-spark><pyspark><regex-replace>
|
2023-05-11 23:03:16
| 2
| 1,278
|
user3486773
|
76,232,118
| 1,120,370
|
CURL works for an Elasticsearch instance but Python interfaces do not: what am I missing?
|
<p>The devops team at my company has set up an Elasticsearch/Kibana server for me. Everything works fine through the Kibana interface (including Dev Tools), however, I am having trouble accessing the server via Python. I think it's an SSL issue, but I'm sure if my diagnosis is correct or what exactly I have to do to fix it.</p>
<p>The following command returns a list of indexes as expected.</p>
<pre><code>curl -u "username:password" -k https://server/_cat/indicies
</code></pre>
<p>The Elasticsearch Python client fails like so.</p>
<pre><code>from elasticsearch import Elasticsearch
basic_auth = ("username", "password")
host = "https://server"
elasticsearch_client = Elasticsearch({"host": host, "scheme": "https"}, basic_auth=basic_auth)
elasticsearch_client.indices.get("*")
...
elasticsearch.exceptions.ConnectionError: ConnectionError(<urllib3.connection.HTTPConnection object at 0x7fa6c8f96920>:
Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known) caused by:
NewConnectionError(<urllib3.connection.HTTPConnection object at 0x7fa6c8f96920>:
Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known)
</code></pre>
<p>In an attempt to work around SSL restrictions by following a solution to the <a href="https://stackoverflow.com/questions/54454126/set-verify-certs-false-yet-elasticsearch-elasticsearch-throws-ssl-error-for-cert">"Set verify_certs=False yet elasticsearch.Elasticsearch throws SSL error for certificate verify failed"</a> question I tried</p>
<pre class="lang-py prettyprint-override"><code>from elasticsearch import RequestsHttpConnection
elasticsearch_client = Elasticsearch(
[host],
connection_class=RequestsHttpConnection,
http_auth=("username", "password"),
use_ssl=True,
verify_certs=False,
)
elasticsearch_client.indices.get("*")
</code></pre>
<p>But that results in:</p>
<pre><code>...
File /usr/local/anaconda3/envs/maintainer-text/lib/python3.10/site-packages/elasticsearch/connection/base.py:330, in Connection._raise_error(self, status_code, raw_data)
327 except (ValueError, TypeError) as err:
328 logger.warning("Undecodable raw error response from server: %s", err)
--> 330 raise HTTP_EXCEPTIONS.get(status_code, TransportError)(
331 status_code, error_message, additional_info
332 )
NotFoundError: NotFoundError(404, 'Not Found', 'Not Found')
</code></pre>
<p>(Can you set things up on the server side to disallow ignoring SSL certificates?)</p>
<p>Trying to move one level lower, if I use <code>requests</code> I get</p>
<pre><code>import requests
requests.get("https://server/_cat/indicies", auth=("username", "password"))
...
SSLError: HTTPSConnectionPool(host='es-api.dev.sparkgov.ai', port=443): Max retries exceeded with url: /_cat/indicies (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:997)')))
</code></pre>
<p>It looks like my problem is that I don't have the proper SSL certificate. CURL knows how to get around this, by all my Python tools don't. Is this a correct diagnosis?</p>
<p>Do I <a href="https://stackoverflow.com/questions/61961725/connect-to-elasticsearch-with-python-using-ssl">ask my dev ops team for a .pem file</a>? How do I get this to work?</p>
|
<python><elasticsearch><ssl-certificate>
|
2023-05-11 23:02:32
| 1
| 17,226
|
W.P. McNeill
|
76,232,039
| 5,379,479
|
ImportError from Great Expectations
|
<p>I've been using Great Expecations for a while and recently when I rebuilt a docker image I started to get this error. The image builds fine, but when I try to run code and import the package this error appears.</p>
<pre><code> import great_expectations as ge
File "/opt/conda/envs/my_env/lib/python3.8/site-packages/great_expectations/__init__.py", line 6, in <module>
from great_expectations.data_context.migrator.cloud_migrator import CloudMigrator
File "/opt/conda/envs/my_env/lib/python3.8/site-packages/great_expectations/data_context/__init__.py", line 1, in <module>
from great_expectations.data_context.data_context import (
File "/opt/conda/envs/my_env/lib/python3.8/site-packages/great_expectations/data_context/data_context/__init__.py", line 1, in <module>
from great_expectations.data_context.data_context.abstract_data_context import (
File "/opt/conda/envs/my_env/lib/python3.8/site-packages/great_expectations/data_context/data_context/abstract_data_context.py", line 38, in <module>
from great_expectations.core import ExpectationSuite
File "/opt/conda/envs/my_env/lib/python3.8/site-packages/great_expectations/core/__init__.py", line 3, in <module>
from .domain import Domain
File "/opt/conda/envs/my_env/lib/python3.8/site-packages/great_expectations/core/domain.py", line 8, in <module>
from great_expectations.core.id_dict import IDDict
File "/opt/conda/envs/my_env/lib/python3.8/site-packages/great_expectations/core/id_dict.py", line 5, in <module>
from great_expectations.core.util import convert_to_json_serializable
File "/opt/conda/envs/my_env/lib/python3.8/site-packages/great_expectations/core/util.py", line 34, in <module>
from IPython import get_ipython
File "/opt/conda/envs/my_env/lib/python3.8/site-packages/IPython/__init__.py", line 30, in <module>
raise ImportError(
</code></pre>
<p>Does anyone know how to resolve this error? I tried bumping the revision of Great Expectations to the latest, but the error persists.</p>
|
<python><importerror><great-expectations>
|
2023-05-11 22:38:05
| 0
| 2,138
|
Jed
|
76,232,029
| 3,385,948
|
Pip could not find a version that satisfies the requirement, but it IS there in my private PyPI repository
|
<p>I know many questions have been asked along these lines, but I'm stumped.</p>
<p>I've got a private PyPI repo at something like "https://pypi.my_domain.com" and it's been working well for a few years now. However, when I run the following command (which previously worked), it says it can't find the file, but I see it right there!</p>
<pre class="lang-bash prettyprint-override"><code>$ ./pip install --index-url https://${USERNAME}:${PASSWORD}@pypi.my_domain.com canpy
Looking in indexes: https://:****@pypi.my_domain.com
ERROR: Could not find a version that satisfies the requirement canpy (from versions: none)
ERROR: No matching distribution found for canpy
</code></pre>
<p>Here are the files available for download in my private PyPI repo:
<a href="https://i.sstatic.net/gd8XI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gd8XI.png" alt="Private PyPI repo contents" /></a></p>
<p>When I run <code>pip debug --verbose</code> to see the compatible versions, I get the following output, which matches what I have available for download/install in my private repo. Note the last one <code>cp36-cp36m-linux_armv7l</code> which is the one I need on this Raspberry Pi-like computer.</p>
<p>NOTE: This has worked before, many times! Today it just stopped working...</p>
<pre class="lang-bash prettyprint-override"><code>/mnt/dataflash/miniconda3/bin/pip debug --verbose
WARNING: This command is only meant for debugging. Do not use this with automation for parsing and getting these details, since the output and options of this command may change without notice.
pip version: pip 21.3.1 from /mnt/dataflash/miniconda3/lib/python3.6/site-packages/pip (python 3.6)
sys.version: 3.6.6 | packaged by rpi | (default, Sep 6 2018, 10:56:14)
[GCC 6.3.0 20170516]
sys.executable: /mnt/dataflash/miniconda3/bin/python
sys.getdefaultencoding: utf-8
sys.getfilesystemencoding: ascii
locale.getpreferredencoding: ANSI_X3.4-1968
sys.platform: linux
sys.implementation:
name: cpython
'cert' config value: Not specified
REQUESTS_CA_BUNDLE: None
CURL_CA_BUNDLE: None
pip._vendor.certifi.where(): /mnt/dataflash/miniconda3/lib/python3.6/site-packages/pip/_vendor/certifi/cacert.pem
pip._vendor.DEBUNDLED: False
vendored library versions:
CacheControl==0.12.6
colorama==0.4.4
distlib==0.3.3
distro==1.6.0
html5lib==1.1
msgpack==1.0.2 (Unable to locate actual module version, using vendor.txt specified version)
packaging==21.0
pep517==0.12.0
platformdirs==2.4.0
progress==1.6
pyparsing==2.4.7
requests==2.26.0
certifi==2021.05.30
chardet==4.0.0
idna==3.2
urllib3==1.26.7
resolvelib==0.8.0
setuptools==44.0.0 (Unable to locate actual module version, using vendor.txt specified version)
six==1.16.0
tenacity==8.0.1 (Unable to locate actual module version, using vendor.txt specified version)
tomli==1.0.3
webencodings==0.5.1 (Unable to locate actual module version, using vendor.txt specified version)
Compatible tags: 174
cp36-cp36m-manylinux_2_25_armv7l
cp36-cp36m-manylinux_2_24_armv7l
cp36-cp36m-manylinux_2_23_armv7l
cp36-cp36m-manylinux_2_22_armv7l
cp36-cp36m-manylinux_2_21_armv7l
cp36-cp36m-manylinux_2_20_armv7l
cp36-cp36m-manylinux_2_19_armv7l
cp36-cp36m-manylinux_2_18_armv7l
cp36-cp36m-manylinux_2_17_armv7l
cp36-cp36m-manylinux2014_armv7l
cp36-cp36m-linux_armv7l
...
</code></pre>
|
<python><python-3.x><linux><pip>
|
2023-05-11 22:36:50
| 1
| 5,708
|
Sean McCarthy
|
76,231,996
| 7,025,033
|
Weighted element-wise between two arrays, numpy
|
<p>I want to perform</p>
<pre><code>array_0 = np.array([ [1,2,3], [1,2,3], [1,2,3] ])
array_1 = np.array([ [1,2,3], [1,2,3], [1,2,3] ])
weights = np.array([ 1, 5 ])
result = array_0 * weights[0] + array_1 * weights[1]
</code></pre>
<p>Is there a <code>numpy</code> function that does just that?</p>
<p>Obviously, I could use <code>numpy.average()</code> with <code>1==sum(weights)</code>, and then multiply the result to compensate, but my question is: <strong>is there a function that does the sum-product without tricks?</strong></p>
<p>Also, my question may be invalid: I assume that <code>w1*A1+w2*A2+w3*A3+...</code> results in as many for loops as there are operation, not just a single elementwise for loop.</p>
<p>There is a similar question, which does not work in my case:</p>
<p><a href="https://stackoverflow.com/questions/56327842/weighted-average-element-wise-between-two-arrays">element-wise weighted average</a></p>
|
<python><numpy>
|
2023-05-11 22:27:53
| 0
| 1,230
|
Zoltan K.
|
76,231,987
| 4,487,457
|
Unable to install poetry in dataflow
|
<p>I am getting this error when running any poetry executables</p>
<pre><code>Traceback (most recent call last):
File "/root/.local/bin/poetry", line 5, in <module>
from poetry.console.application import main
File "/root/.local/share/pypoetry/venv/lib/python3.8/site-packages/poetry/console/application.py", line 11, in <module>
from cleo.application import Application as BaseApplication
ModuleNotFoundError: No module named 'cleo'
</code></pre>
<p>My container is built using this logic.</p>
<pre><code>FROM gcr.io/dataflow-templates-base/python38-template-launcher-base:flex_templates_base_image_release_20230508_RC00
ARG DIR=/dataflow/template
ARG dataflow_file_path
ARG PROJECT_ID
# environment to pull the right containers
ARG ENV
ARG TOKEN
ENV COMPOSER_$ENV=1
# copying over necessary files
RUN mkdir -p ${DIR}
WORKDIR ${DIR}
COPY transform/dataflow/${dataflow_file_path}.py beam.py
COPY deploy/dataflow/poetry.lock .
COPY deploy/dataflow/pyproject.toml .
# env var in order to use custom lib, for more info, see:
# https://cloud.google.com/dataflow/docs/guides/templates/configuring-flex-templates#set_required_dockerfile_environment_variables
ENV FLEX_TEMPLATE_PYTHON_PY_FILE="${DIR}/beam.py"
ENV FLEX_TEMPLATE_PYTHON_EXTRA_PACKAGES=""
ENV FLEX_TEMPLATE_PYTHON_PY_OPTIONS=""
ENV PIP_NO_DEPS=True
# install poetry
RUN curl -sSL https://install.python-poetry.org | python -
ENV PATH "/root/.local/bin/:${PATH}"
RUN poetry --version
</code></pre>
<p>I have tried uninstalling it and the suggestions from:</p>
<ul>
<li><a href="https://medium.com/@life-is-short-so-enjoy-it/python-poetry-modulenotfounderror-no-module-named-cleo-1f924239b06c" rel="nofollow noreferrer">https://medium.com/@life-is-short-so-enjoy-it/python-poetry-modulenotfounderror-no-module-named-cleo-1f924239b06c</a></li>
<li><a href="https://stackoverflow.com/questions/69950213/i-am-getting-no-module-name-cleo-when-trying-to-do-poetry-add">I am getting no module name cleo when trying to do "poetry add"</a></li>
<li><a href="https://github.com/python-poetry/poetry/issues/3071" rel="nofollow noreferrer">https://github.com/python-poetry/poetry/issues/3071</a></li>
<li><a href="https://github.com/python-poetry/poetry/issues/2991" rel="nofollow noreferrer">https://github.com/python-poetry/poetry/issues/2991</a></li>
</ul>
<p>These aren't really applicable because they are <code>poetry</code> executables in a non-docker environment but not really sure what else to do . I have an Dataflow SDK that was build from <code>apache/beam_python3.8_sdk:2.45.0</code> that has the same logic and it is working.</p>
<p>I disregarded the last <code>RUN</code> command and built the container, these are the outputs of some checks</p>
<pre><code>❯ docker run --rm --entrypoint /bin/bash dataflow -c 'which poetry'
/root/.local/bin/poetry
❯ docker run --rm --entrypoint /bin/bash dataflow -c 'poetry'
Traceback (most recent call last):
File "/root/.local/bin/poetry", line 5, in <module>
from poetry.console.application import main
File "/root/.local/share/pypoetry/venv/lib/python3.8/site-packages/poetry/console/application.py", line 11, in <module>
from cleo.application import Application as BaseApplication
ModuleNotFoundError: No module named 'cleo'
❯ docker run --rm --entrypoint /bin/bash dataflow -c 'which python'
/usr/local/bin/python
</code></pre>
<p>My assumption is that the <code>poetry</code> executables are importing in a virtualenv that the other libraries aren't installed to.</p>
<p>UPDATE:</p>
<p>I went down a rabbit hole and did this</p>
<pre><code>RUN pip install --no-cache-dir poetry cleo rapidfuzz importlib_metadata zipp crashtest
</code></pre>
<p>And running <code>poetry --version</code> worked but <code>poetry config virtualenvs.create false</code> or any other command will throw this error</p>
<pre><code>During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/poetry", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.8/site-packages/poetry/console/application.py", line 409, in main
exit_code: int = Application().run()
File "/usr/local/lib/python3.8/site-packages/cleo/application.py", line 338, in run
self.render_error(e, io)
File "/usr/local/lib/python3.8/site-packages/poetry/console/application.py", line 180, in render_error
self.set_solution_provider_repository(self._get_solution_provider_repository())
File "/usr/local/lib/python3.8/site-packages/poetry/console/application.py", line 398, in _get_solution_provider_repository
from poetry.mixology.solutions.providers.python_requirement_solution_provider import ( # noqa: E501
File "/usr/local/lib/python3.8/site-packages/poetry/mixology/__init__.py", line 5, in <module>
from poetry.mixology.version_solver import VersionSolver
File "/usr/local/lib/python3.8/site-packages/poetry/mixology/version_solver.py", line 8, in <module>
from poetry.core.packages.dependency import Dependency
ModuleNotFoundError: No module named 'poetry.core'
</code></pre>
|
<python><google-cloud-dataflow><python-poetry>
|
2023-05-11 22:25:20
| 1
| 2,360
|
Minh
|
76,231,984
| 11,065,874
|
In python, How do I create a abstract base singleton class that can only be initialized once?
|
<p>I have this piece of code</p>
<pre><code>import abc
class SingletonABCMeta(abc.ABCMeta):
_instances = {}
def __call__(cls, *args, **kwargs):
if cls not in cls._instances:
cls._instances[cls] = super(SingletonABCMeta, cls).__call__(*args, **kwargs)
return cls._instances[cls]
class Cabc(abc.ABC):
__metaclass__ = SingletonABCMeta
@abc.abstractmethod
def foo(self): pass
class C(Cabc):
def __init__(self, a):
print(f"initializing with {a}")
self._a = a
def foo(self):
return self._a
class DoubleC(Cabc):
def __init__(self, a):
print(f"initializing with {a}")
self._a = a
def foo(self):
return self._a * 2
if __name__ == "__main__":
print(C(2).foo())
print(DoubleC(2).foo())
print(C(3).foo())
print(DoubleC(3).foo())
</code></pre>
<p>it returns</p>
<pre><code>initializing with 2
2
initializing with 2
4
initializing with 3
3
initializing with 3
6
</code></pre>
<p>But I expect it to return</p>
<pre><code>initializing with 2
2
initializing with 2
4
2
4
</code></pre>
<p>How do I do this?</p>
|
<python><oop><singleton><abstract-class>
|
2023-05-11 22:25:05
| 1
| 2,555
|
Amin Ba
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.