QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName
|---|---|---|---|---|---|---|---|---|
75,539,601
| 143,397
|
How to add command-line options to pytest with a custom plugin located in a subdirectory?
|
<p>Summary: my custom pytest plugin, located in a subdirectory, is imported but the implemented hook function <code>pytest_addoption()</code> is not called.</p>
<hr />
<p>I have a custom pytest plugin that I want to store in its own directory, <code>bar/</code>, alongside but separate from my tests:</p>
<pre><code>.
├── tests
│ ├── bar
│ │ └── conftest.py
│ ├── conftest.py
│ └── test_main.py
</code></pre>
<p>I invoke <code>pytest</code> in one of two ways:</p>
<p>From the <code>.</code> directory:</p>
<pre><code> $ pytest tests
</code></pre>
<p>Or from the <code>tests</code> directory:</p>
<pre><code> $ cd tests
$ pytest
</code></pre>
<p>In both of these cases, pytest finds the tests in <code>test_main.py</code> and runs them.</p>
<p>My root-level <code>tests/conftest.py</code> sets <code>pytest_plugins</code> as per the <a href="https://docs.pytest.org/en/7.1.x/how-to/writing_plugins.html#requiring-loading-plugins-in-a-test-module-or-conftest-file" rel="nofollow noreferrer">docs</a>:</p>
<pre><code># tests/conftest.py
pytest_plugins = ['bar',]
def pytest_addoption(parser):
parser.addoption("--foo", action="store_true", help="FOO")
</code></pre>
<p>This adds the new <code>--foo</code> option, which I check with:</p>
<pre><code>$ pytest --help | grep "foo"
--foo FOO
</code></pre>
<p>I confirm that my <code>tests/bar/conftest.py</code> file is being imported:</p>
<pre><code>$ pytest --trace-config | grep bar
PLUGIN registered: <module 'bar' (<_frozen_importlib_external._NamespaceLoader object at 0x7f37959cb490>)>
bar : None
PLUGIN registered: <module 'conftest' from '<snip>/tests/bar/conftest.py'>
</code></pre>
<p>And in that plugin I wish to register a new command-line option:</p>
<pre><code># tests/bar/conftest.py
def pytest_addoption(parser):
parser.addoption("--bar", action="store_true", help="BAR")
</code></pre>
<p>But the problem is that this hook function is not being called, and the option <code>--bar</code> is <em>not</em> added:</p>
<pre><code>$ pytest --help | egrep "foo|bar"
--foo FOO
</code></pre>
<p>I've read through the PyTest documentation and I note the <a href="https://docs.pytest.org/en/latest/reference/reference.html#pytest.hookspec.pytest_addoption" rel="nofollow noreferrer">pytest.hookspec.pytest_addoption</a> docs mention:</p>
<blockquote>
<p>Note: This function should be implemented only in plugins or conftest.py files situated at the tests root directory due to how pytest discovers plugins during startup.</p>
</blockquote>
<p>It's not clear to me whether that is a warning about how <code>conftest.py</code> files are discovered, or an actual hard-and-fast rule that makes it impossible for a plugin in a subdirectory to add options to pytest even when it is imported. <a href="https://stackoverflow.com/a/52458082">This</a> S.O. answer implies that the explicit inclusion of the <code>bar</code> plugin via <code>pytest_plugins = ['bar']</code> might allow that hook function to be registered and thus called, unless this specific situation is simply not supported?</p>
<p>If this is not possible, is there a known workaround for implementing a custom plugin (i.e. in a separate <code>conftest.py</code> file) that can add its own command-line options?</p>
<p>I'm using pytest-7.2.1 with python-3.10.6.</p>
|
<python><unit-testing><plugins><pytest>
|
2023-02-23 00:49:44
| 0
| 13,932
|
davidA
|
75,539,437
| 15,569,921
|
non-symmetric square matrix with given eigenvalues
|
<p>Given an array of eigenvalues, how can I generate a <strong>non-symmetric</strong> square matrix which has those eigenvalues?</p>
<p>I have tried the QR decomposition, but it returns a symmetric one. Here's what I have done so far.</p>
<pre><code>import numpy as np
from scipy.stats import ortho_group
eigenvalues = [0.63, 0.2, 0.09, 0.44, 0.3]
s = np.diag(eigenvalues)
q = ortho_group.rvs(len(eigenvalues))
print(np.linalg.eigvalsh(q.T @ s @ q)) # checking the eigenvalues
print(q.T @ s @ q)
</code></pre>
|
<python><matrix><random><scipy><eigenvalue>
|
2023-02-23 00:17:55
| 1
| 390
|
statwoman
|
75,539,347
| 7,766,024
|
Getting a ValueError: Not enough values to unpack for Python dictionary items unpacking
|
<p>I have a dictionary with a single key-value pair where the key is a string and the value is a set of integers (i.e., <code>dict[str, set[int]]</code>).</p>
<p>I want to unpack the key and value by <code>key, value = some_dict.items()</code> but am getting a <code>ValueError: not enough values to unpack (expected 2, got 1)</code> error.</p>
<p>I suspected that this was because I wasn't traversing the dictionary properly so I've tried the following which all lead to the same error:</p>
<pre><code>>>> key, value = zip(some_dict.items())
>>> key, value = list(zip(some_dict.items()))
</code></pre>
<p>What works is:</p>
<pre><code>for k, v in some_dict.items():
    key, value = k, v
</code></pre>
<p>How can I unpack the items without using a list?</p>
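<p>For reference, this is the kind of one-liner I was hoping exists (assuming the dict really holds exactly one item):</p>

```python
some_dict = {"key": {1, 2, 3}}  # dict[str, set[int]] with a single item

# Unpack the single item directly; the trailing comma asserts there is
# exactly one item and raises ValueError otherwise.
(key, value), = some_dict.items()

# Equivalent alternative without the nested-tuple unpacking:
key2, value2 = next(iter(some_dict.items()))
```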
|
<python>
|
2023-02-22 23:59:42
| 1
| 3,460
|
Sean
|
75,539,283
| 5,693,152
|
when is it "safe" to mix path separators in Python strings representing Windows paths?
|
<p>This minimal example: (Running in PyCharm debugger)</p>
<pre><code>import os
from os.path import join
import subprocess

src_path = r'C:/TEMP/source'
dest_path = r'C:/TEMP/dest'

if __name__ == "__main__":
    for root, _, files in os.walk(src_path):
        for name in files:
            src_file_path = join(root, name)
            rel_dest_file_path = os.path.join(dest_path, os.path.dirname(os.path.relpath(src_file_path, src_path)))
            rdfp = join(rel_dest_file_path, name)
            sfp = src_file_path
            cmd = "['copy', '/v', %s, %s]" % (sfp, rdfp)
            print 'calling shell subprocess %s' % cmd
            subprocess.call(['copy', '/v', sfp, rdfp], shell=True)
</code></pre>
<p>Produces this output:</p>
<pre><code>calling shell subprocess ['copy', '/v', C:/TEMP/source\foo bar.txt, C:/TEMP/dest\foo bar.txt]
1 file(s) copied.
calling shell subprocess ['copy', '/v', C:/TEMP/source\foo.txt, C:/TEMP/dest\foo.txt]
The syntax of the command is incorrect.
Process finished with exit code 0
</code></pre>
<p>Why doesn't the path to the file named "foo bar.txt" also produce a command syntax error? Why does the path instead lead to a successful file copy?</p>
<p>I can fix the syntax problem in the example by explicitly using the Windows path separator in the initial raw string literal path assignments which makes sense to me.</p>
<pre><code>src_path = r'C:\TEMP\source'
dest_path = r'C:\TEMP\dest'
</code></pre>
<p>What doesn't make sense is why a blank space in the "mixed slash" path also "solves" the syntax issue.</p>
<p>Any references or pointers?</p>
|
<python><windows><shell><path-separator>
|
2023-02-22 23:47:10
| 1
| 925
|
geneSummons
|
75,539,146
| 2,458,922
|
How to Take Subset of a Tensor based on set of Index , in Tensorflow
|
<p>Given:</p>
<pre><code>a: tensor([[3., 5.], [1., 2.], [5., 7.], [4., 6.], [3., 5.]])
index: tensor([[0, 0], [3, 1]])
</code></pre>
<p>Is there any way to get <code>a[index]</code> to return <code>[3, 6]</code>?</p>
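<p>To make the expected behaviour concrete, here is the same lookup sketched with NumPy fancy indexing (I believe <code>tf.gather_nd</code> is the TensorFlow analogue of this, but I'm not certain):</p>

```python
import numpy as np

a = np.array([[3., 5.], [1., 2.], [5., 7.], [4., 6.], [3., 5.]])
index = np.array([[0, 0], [3, 1]])  # (row, col) pairs

# Pick a[0, 0] and a[3, 1] in one shot via fancy indexing.
result = a[index[:, 0], index[:, 1]]
print(result)  # [3. 6.]
```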
|
<python><tensorflow><tensor>
|
2023-02-22 23:24:56
| 0
| 1,731
|
user2458922
|
75,539,085
| 10,743,830
|
how to speed up following pandas dataframe iteration and loc indexing
|
<p>The following is just a part of the whole dataset. The whole dataset is millions of rows, so the computation needs to be very fast. In any case, the data looks as follows:</p>
<p>Link to the h5 file: <a href="https://drive.google.com/file/d/16aI3plRFa3M6nSIiT1XioUIgsPYl1Wg8/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/16aI3plRFa3M6nSIiT1XioUIgsPYl1Wg8/view?usp=sharing</a></p>
<p>What I have done is standard loc indexing</p>
<pre><code>filename = "look at the h5 file in the link"
new_centroid_trackings = np.array([[0, 0, 0, 0, 0, 0, 0, 0]])
model_name = "DLC_resnet50_4mice_new_video_no_wheelFeb17shuffle1_220000"
tracking_coords = pd.read_hdf(filename)

for frame in range(tracking_coords.shape[0]):
    centroid_mouse1_x = (tracking_coords.loc[frame, model_name]["mouse1"]["tail1"]["x"] + tracking_coords.loc[frame, model_name]["mouse1"]["tail2"]["x"] + tracking_coords.loc[frame, model_name]["mouse1"]["tail3"]["x"]) / 3
    centroid_mouse1_y = (tracking_coords.loc[frame, model_name]["mouse1"]["tail1"]["y"] + tracking_coords.loc[frame, model_name]["mouse1"]["tail2"]["y"] + tracking_coords.loc[frame, model_name]["mouse1"]["tail3"]["y"]) / 3
    if np.isnan(centroid_mouse1_x) or np.isnan(centroid_mouse1_y):
        centroid_mouse1_y = np.nan
        centroid_mouse1_x = np.nan
    centroid_mouse2_x = (tracking_coords.loc[frame, model_name]["mouse2"]["tail1"]["x"] + tracking_coords.loc[frame, model_name]["mouse2"]["tail2"]["x"] + tracking_coords.loc[frame, model_name]["mouse2"]["tail3"]["x"]) / 3
    centroid_mouse2_y = (tracking_coords.loc[frame, model_name]["mouse2"]["tail1"]["y"] + tracking_coords.loc[frame, model_name]["mouse2"]["tail2"]["y"] + tracking_coords.loc[frame, model_name]["mouse2"]["tail3"]["y"]) / 3
    if np.isnan(centroid_mouse2_x) or np.isnan(centroid_mouse2_y):
        centroid_mouse2_y = np.nan
        centroid_mouse2_x = np.nan
    centroid_mouse3_x = (tracking_coords.loc[frame, model_name]["mouse3"]["tail1"]["x"] + tracking_coords.loc[frame, model_name]["mouse3"]["tail2"]["x"] + tracking_coords.loc[frame, model_name]["mouse3"]["tail3"]["x"]) / 3
    centroid_mouse3_y = (tracking_coords.loc[frame, model_name]["mouse3"]["tail1"]["y"] + tracking_coords.loc[frame, model_name]["mouse3"]["tail2"]["y"] + tracking_coords.loc[frame, model_name]["mouse3"]["tail3"]["y"]) / 3
    if np.isnan(centroid_mouse3_x) or np.isnan(centroid_mouse3_y):
        centroid_mouse3_y = np.nan
        centroid_mouse3_x = np.nan
    centroid_mouse4_x = (tracking_coords.loc[frame, model_name]["mouse4"]["tail1"]["x"] + tracking_coords.loc[frame, model_name]["mouse4"]["tail4"]["x"] + tracking_coords.loc[frame, model_name]["mouse4"]["tail3"]["x"]) / 3
    centroid_mouse4_y = (tracking_coords.loc[frame, model_name]["mouse4"]["tail1"]["y"] + tracking_coords.loc[frame, model_name]["mouse4"]["tail4"]["y"] + tracking_coords.loc[frame, model_name]["mouse4"]["tail3"]["y"]) / 3
    if np.isnan(centroid_mouse4_x) or np.isnan(centroid_mouse4_y):
        centroid_mouse4_y = np.nan
        centroid_mouse4_x = np.nan
    # now concatenate the centroids to the previous ones
    new_centroid_trackings = np.concatenate((new_centroid_trackings, np.array([[centroid_mouse1_x, centroid_mouse1_y, centroid_mouse2_x, centroid_mouse2_y, centroid_mouse3_x, centroid_mouse3_y, centroid_mouse4_x, centroid_mouse4_y]])), axis=0)
</code></pre>
<p>And for this around 90 seconds is needed for 7500 rows.</p>
<p>Now my idea was to maybe do this with a numpy array instead with pandas dataframe. Or are there some other faster methods that can speed up the computation?</p>
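<p>To illustrate the kind of vectorization I'm considering, here is a sketch on a small synthetic frame with (I'm guessing) a similar MultiIndex column layout of (model, mouse, bodypart, coordinate); the centroid then becomes a single grouped mean over all frames at once:</p>

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the real h5 data: 4 frames, 2 mice, 3 tail points.
model_name = "model"
cols = pd.MultiIndex.from_product(
    [[model_name], ["mouse1", "mouse2"], ["tail1", "tail2", "tail3"], ["x", "y"]]
)
rng = np.random.default_rng(0)
tracking_coords = pd.DataFrame(rng.random((4, len(cols))), columns=cols)

# Mean over the three tail points, per mouse and per coordinate, for every
# frame at once -- no Python-level loop over rows.
centroids = tracking_coords[model_name].T.groupby(level=[0, 2]).mean().T
# columns of `centroids` are now (mouse, coordinate) pairs
```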
|
<python><pandas>
|
2023-02-22 23:12:17
| 2
| 352
|
Noah Weber
|
75,539,078
| 13,231,896
|
How to use static image as watermark for every page when printing pdf using django-wkhtmltopdf
|
<p>I need to use a static image as a watermark when printing a PDF from a Django template using django-wkhtmltopdf.
I tried the code below, but the document has several pages and the background is not applied to each page individually. Instead, it is applied to the document as a whole, so the image is scaled up to cover all the pages. How can I apply the image as a background to every page individually?
Here is my code:</p>
<ol>
<li><p>Defining a background using CSS:</p>
<pre><code>body {
    background-image: url("{{ watermark_logos }}");
    background-size: cover;
}
</code></pre>
</li>
<li><p>Generating a PDF with django-wkhtmltopdf. Note that <code>cmd_options</code> holds the parameters passed to wkhtmltopdf:</p>
<pre><code>response = PDFTemplateResponse(
    request=request,
    template=template_to_use,
    filename="Ficha proyecto {}.pdf".format(project.get_numero_trabajo),
    context=data,
    show_content_in_browser=False,
    cmd_options={
        'margin-top': 0,
        'margin-bottom': 0,
        'margin-left': 0,
        'margin-right': 0,
        "zoom": 1,
        "viewport-size": "1366x513",
        'javascript-delay': 1000,
        'enable-local-file-access': True,
        'footer-center': '[page]/[topage]',
        "no-stop-slow-scripts": True,
    },
)
</code></pre>
</li>
</ol>
<p>Here is the watermark I want to use for every page</p>
<p><a href="https://i.sstatic.net/SipoW.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SipoW.jpg" alt="watermark I want to use" /></a></p>
|
<python><django><wkhtmltopdf><django-wkhtmltopdf>
|
2023-02-22 23:11:35
| 2
| 830
|
Ernesto Ruiz
|
75,539,007
| 10,629,530
|
Custom validation for FastAPI's query parameter using pydantic causes Internal Server Error
|
<p>My <code>GET</code> endpoint receives a query parameter that needs to meet the following criteria:</p>
<ol>
<li>be an <code>int</code> between 0 and 10</li>
<li>be even number</li>
</ol>
<p><code>1.</code> is straightforward using <code>Query(gt=0, lt=10)</code>. However, it is not quite clear to me how to extend <code>Query</code> to do extra custom validation such as <code>2.</code> The documentation ultimately leads to pydantic, but my application runs into an internal server error when the second validation fails.</p>
<p>Below is a minimal scoped example</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, Depends, Query
from pydantic import BaseModel, ValidationError, validator

app = FastAPI()

class CommonParams(BaseModel):
    n: int = Query(default=..., gt=0, lt=10)

    @validator('n')
    def validate(cls, v):
        if v % 2 != 0:
            raise ValueError("Number is not even :( ")
        return v

@app.get("/")
async def root(common: CommonParams = Depends()):
    return {"n": common.n}
</code></pre>
<p>Below are requests that work as expected and ones that break:</p>
<pre><code># requests that work as expected
localhost:8000?n=-4
localhost:8000?n=-3
localhost:8000?n=2
localhost:8000?n=8
localhost:8000?n=99
# requests that break the server
localhost:8000?n=1
localhost:8000?n=3
localhost:8000?n=5
</code></pre>
|
<python><exception><fastapi><valueerror><pydantic>
|
2023-02-22 23:00:58
| 1
| 743
|
ooo
|
75,538,966
| 10,737,147
|
numpy e^i(theta) and trigonometric cos(theta) + isin(theta) do not match
|
<p>I read,</p>
<p><a href="https://i.sstatic.net/OwDrH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OwDrH.png" alt="enter image description here" /></a></p>
<p>So I tried to apply this to a list of points, as shown below.</p>
<p><a href="https://i.sstatic.net/5ExHQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5ExHQ.png" alt="enter image description here" /></a></p>
<p>And the part below works perfectly, as expected.</p>
<pre><code>Zj = np.array([1. +0.j , 0.5+0.5j, 0. +0.j , 0.5-0.5j, 1. +0.j ])
δj = Zj[1: ] - Zj[:-1]
assert(np.allclose(δj, np.array([-.5+0.5j , -.5-.5j, 0.5-.5j, .5+.5j])))
assert(np.allclose(np.angle(δj, deg=True) , np.array([ 135., -135., -45., 45.])))
</code></pre>
<p>But when I take $e^{i\theta}$, it does not work as intended.</p>
<pre><code>e_iVj = -1j * ((δj)/abs(δj))
assert(np.allclose(np.cos(np.angle(δj)) + np.sin(np.angle(δj)), e_iVj))
</code></pre>
<p>EDIT:</p>
<p>As @hpaulg suggested, I've added the <code>1j</code> component to the sine term, but it still doesn't quite match up with respect to the signs. The values are correct.</p>
<pre><code>>>> np.cos(np.angle(δj)) + 1j* np.sin(np.angle(δj))
array([-0.70710678+0.70710678j, -0.70710678-0.70710678j,
0.70710678-0.70710678j, 0.70710678+0.70710678j])
>>> e_iVj
array([ 0.70710678+0.70710678j, -0.70710678+0.70710678j,
-0.70710678-0.70710678j, 0.70710678-0.70710678j])
>>> 1j * ((δj)/abs(δj))
array([-0.70710678-0.70710678j, 0.70710678-0.70710678j,
0.70710678+0.70710678j, -0.70710678+0.70710678j])
</code></pre>
<p>@peterwhy</p>
<p>Could you please let me know if the equation on the first snippet is correct as per your understanding.</p>
<pre><code>-- Confirming Angles are represented correctly
>>> Zj= Zj[0]
>>> Zj_p1 = Zj_p1[0]
>>> δj = Zj_p1 - Zj
>>> np.angle(δj, deg=True)
135.0 # Confirmed ok
</code></pre>
<pre><code>-- Showing e^iϑ = δj/abs(δj)
>>> ϑ = np.angle(δj)
>>> e_iϑ = np.exp(1j*ϑ)
>>> e_iϑ
(-0.7071067811865475+0.7071067811865476j)
>>> δj/abs(δj)
(-0.7071067811865475+0.7071067811865475j)
>>> np.isclose(np.exp(1j*ϑ), δj/abs(δj))
True # confirmed both methods yield the same result
</code></pre>
<pre><code>-- Showing e^-iϑ = abs(δj)/ δj
>>> np.exp(-1j*ϑ)
(-0.7071067811865475-0.7071067811865476j)
>>> abs(δj)/δj
(-0.7071067811865476-0.7071067811865476j)
>>> np.isclose(np.exp(-1j*ϑ), abs(δj)/δj)
True # confirmed both methods yield the same result
</code></pre>
<p>With some reluctance, my guess is that the -i there is a typo. The above is what I get with Python, which indicates that the -i is not actually required. Could you confirm?</p>
|
<python><numpy><math>
|
2023-02-22 22:55:19
| 2
| 437
|
XYZ
|
75,538,846
| 13,132,728
|
How to swap multiple values within a row based on a conditional in pandas
|
<h2>Description of my problem</h2>
<p>I pulled some data from an API that is unfortunately formatted. In particular, there are four columns I need to utilize in some way to solve my issue: <code>label_1</code>, <code>odds_1</code>, <code>label_2</code>, and <code>line_2</code>. The <code>label_</code> columns can contain either <code>Over</code> or <code>Under</code>. Ideally, I'd want one of the <code>label_</code> columns to consist exclusively of <code>Over</code> while the other consists exclusively of <code>Under</code>. But alas, that is not how the data was formatted, so I am tasked with doing this myself. Ideally, <code>label_1</code> would be all <code>Over</code> while <code>label_2</code> would be all <code>Under</code>.</p>
<p>Here is a visual of my problem:</p>
<h2>My data</h2>
<pre><code>odds_1 label_1 line_1 odds_2 label_2 line_2
-165 Under 3.5 130 Over 3.5
-137 Under 2.5 108 Over 2.5
-104 Over 10.5 -122 Under 10.5
-117 Over 26.5 -109 Under 26.5
100 Over 2.5 -125 Under 2.5
-117 Over 14.5 -109 Under 14.5
</code></pre>
<h2>My desired output</h2>
<pre><code>odds_1 label_1 line_1 odds_2 label_2 line_2
130 Over 3.5 -165 Under 3.5
108 Over 2.5 -137 Under 2.5
-104 Over 10.5 -122 Under 10.5
-117 Over 26.5 -109 Under 26.5
100 Over 2.5 -125 Under 2.5
-117 Over 14.5 -109 Under 14.5
</code></pre>
<h2>What I have tried</h2>
<ul>
<li><p>I tried using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.at.html" rel="nofollow noreferrer">pandas.DataFrame.at</a>, but that poses multiple problems. 1. I don't believe there is a way to vectorize it, and 2. It doesn't address the necessary swap of the <code>odds_</code> values.</p>
</li>
<li><p>I have also tried <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer">numpy.where</a>. This fixes the vectorization issue, but does not address the swap of the <code>odds_</code> columns (at least the way I'm currently doing it). I haven't discovered a way to nest <code>np.where</code> to make this work, but maybe I'm missing something</p>
</li>
</ul>
<pre><code>df = np.where(df.label_1 == 'Under', 'Over', df.label_1)
odds_1 label_1 line_1 odds_2 label_2 line_2
-165 Over 3.5 130 Under 3.5
-137 Over 2.5 108 Under 2.5
-104 Over 10.5 -122 Under 10.5
-117 Over 26.5 -109 Under 26.5
100 Over 2.5 -125 Under 2.5
-117 Over 14.5 -109 Under 14.5
</code></pre>
<ul>
<li>I have also tried <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iterrows.html" rel="nofollow noreferrer">pandas.DataFrame.iterrows</a> but can't quite get that to work either. Still not sure how to address the swap of the <code>odds_</code> columns here.</li>
</ul>
<pre><code>for idx, row in df.iterrows():
row.label_1 = 'Over' where some conditional?
</code></pre>
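<p>For completeness, this is the kind of masked, label-free assignment I was trying to arrive at (a sketch only; I'm not certain it's the idiomatic approach):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "odds_1": [-165, -104], "label_1": ["Under", "Over"], "line_1": [3.5, 10.5],
    "odds_2": [130, -122], "label_2": ["Over", "Under"], "line_2": [3.5, 10.5],
})

mask = df["label_1"] == "Under"          # rows whose pairs are reversed
cols_1 = ["odds_1", "label_1", "line_1"]
cols_2 = ["odds_2", "label_2", "line_2"]

# .to_numpy() drops the column labels, so pandas assigns positionally
# instead of realigning by name (which would undo the swap).
df.loc[mask, cols_1 + cols_2] = df.loc[mask, cols_2 + cols_1].to_numpy()
```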
<p>Hope I gave enough info. Any help would be greatly appreciated. Thanks!</p>
|
<python><pandas><dataframe>
|
2023-02-22 22:37:42
| 2
| 1,645
|
bismo
|
75,538,781
| 3,832,467
|
Why IIS HTTPS Redirect doesn't work with API?
|
<p>I am using the following rule to redirect an HTTP request to HTTPS. It works fine when I access my application via the browser, but accessing it via the Python API leads to the error "Your Api endpoint at xxx is not available or not responding."</p>
<p>Any ideas why the API behavior might be different to the browser?</p>
<pre class="lang-xml prettyprint-override"><code> <rule name="Redirect to HTTPS" enabled="false" stopProcessing="true">
<match url="(.*)" />
<conditions>
<add input="{HTTPS}" pattern="^OFF$" />
</conditions>
<action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="SeeOther" />
</rule>
</code></pre>
|
<python><iis>
|
2023-02-22 22:29:08
| 0
| 496
|
HansHupe
|
75,538,581
| 2,540,669
|
Does python's WSGI specification have anything to say about websockets?
|
<p>I know that there are options out there for WSGI servers that can handle websockets. Ex: <a href="https://flask-socketio.readthedocs.io/en/latest/deployment.html" rel="nofollow noreferrer">this</a> allows running flask + websockets in gunicorn, which is a WSGI server. But I thought the WSGI spec was only concerned with HTTP? Does the spec mention websockets and if not, how is something like <code>flask-socketio</code> possible?</p>
|
<python><websocket><wsgi><flask-socketio>
|
2023-02-22 22:01:33
| 1
| 3,181
|
augray
|
75,538,575
| 4,338,000
|
How do I create ordinal value from continuous value if they fall per 2?
|
<p>If I want to create an ordinal value from a continuous value, how can I split it into fixed-width buckets? I can use the <code>cut</code> function to create 2 bins, or 50 bins if I had 100 incremental data points. But with a random dataset, how can I create an ordinal value per fixed step? For example, if I have a column with <code>[1,2,3,2,2,4,5,10,15,20]</code> and I want to create ordinals per 5-unit step, the result would look like <code>[1,1,1,1,1,1,1,2,2,3]</code>. This is what I would use if I wanted to create 2 buckets:</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(10)
df = pd.DataFrame({
'normal': np.random.normal(10, 3, 1000),
'chi': np.random.chisquare(4, 1000)
})
pd.cut(df['normal'], 2)
</code></pre>
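<p>The closest I've got to fixed-width buckets is passing explicit edges to <code>cut</code>; this sketch buckets per 5 units (though I'm not sure the labels come out exactly as in my example above):</p>

```python
import numpy as np
import pandas as pd

s = pd.Series([1, 2, 3, 2, 2, 4, 5, 10, 15, 20])

# Explicit edges every 5 units: bins are (0, 5], (5, 10], (10, 15], (15, 20]
edges = np.arange(0, s.max() + 5, 5)
ordinal = pd.cut(s, bins=edges, labels=False) + 1
print(ordinal.tolist())  # [1, 1, 1, 1, 1, 1, 1, 2, 3, 4]
```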
|
<python><pandas><dataframe>
|
2023-02-22 22:00:57
| 2
| 1,276
|
Sam
|
75,538,456
| 6,202,327
|
Why does scipy's quadrature function throw an error when using arrays?
|
<p>I have two versions of a snippet; one works and one doesn't.</p>
<p>This works:</p>
<pre><code>f = lambda x : x * 2.0 * pi
print(scipy.integrate.quadrature(f, 0.0, 1.0))
</code></pre>
<p>This fails:</p>
<pre><code>f = lambda x : math.exp(x * 2.0 * pi)
print(scipy.integrate.quadrature(f, 0.0, 1.0))
</code></pre>
<p>With the error:</p>
<blockquote>
<p>TypeError: only size-1 arrays can be converted to Python scalars</p>
</blockquote>
<p>I don't understand; both are scalar functions, so why is one accepted but the other is not?</p>
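<p>The one difference I can see between the two is how they behave on array input (I suspect quadrature evaluates the integrand on whole arrays of sample points at once); a quick check, independent of scipy:</p>

```python
import math
import numpy as np

f_math = lambda x: math.exp(x * 2.0 * np.pi)  # scalar-only
f_np = lambda x: np.exp(x * 2.0 * np.pi)      # vectorized, elementwise on arrays

x = np.array([0.0, 0.5, 1.0])
print(f_np(x))   # three values, no error
try:
    f_math(x)    # math.exp cannot take an array
except TypeError as e:
    print(e)     # only size-1 arrays can be converted to Python scalars
```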
|
<python><math><scipy><integral>
|
2023-02-22 21:44:20
| 1
| 9,951
|
Makogan
|
75,538,416
| 588,804
|
Compare dataframe with database: insert new values, update changed values
|
<p>I've been looking into other solutions but haven't found anything that shows what I'm trying to do. This is the structure of my data:</p>
<p>My database has these columns; id is the primary key, inserted_date is auto generated:</p>
<pre><code>+----+------------+-------+---------------+
| id | fruit | price | inserted_date |
+----+------------+-------+---------------+
| 1 | apple | 1 | 2023-22-01 |
| 2 | banana | 2 | 2023-22-01 |
| 3 | strawberry | 3 | 2023-22-01 |
+----+------------+-------+---------------+
</code></pre>
<p>My dataframe has these columns:</p>
<pre><code>+--------+-------+
| fruit | price |
+--------+-------+
| apple | 1 |
| banana | 4 |
| pear | 2 |
+--------+-------+
</code></pre>
<p>At the end, this is what I would like to see in the database:</p>
<pre><code>+----+------------+-------+---------------+
| id | fruit | price | inserted_date |
+----+------------+-------+---------------+
| 1 | apple | 1 | 2023-22-01 | <- No change. In new dataframe
| 2 | banana | 4 | 2023-22-01 | <- Updated
| 3 | strawberry | 3 | 2023-22-01 | <- No change. Not in new dataframe
| 4 | pear | 2 | 2023-22-02 | <- New Value
+----+------------+-------+---------------+
</code></pre>
<p>I would identify rows by the "fruit" column (a unique value) and compare on the price, which can change.</p>
<p>I already have the DataFrames setup with the data but haven't found a way to compare them and insert the correct data into the database. I can do it 1 by 1, but figured that there's a better way to do it. Appreciate any help!</p>
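<p>The closest thing I can picture is an upsert keyed on <code>fruit</code>, sketched here with SQLite's <code>ON CONFLICT</code> clause for brevity (my real database and its upsert syntax may differ):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE fruits (
        id INTEGER PRIMARY KEY,
        fruit TEXT UNIQUE,
        price INTEGER,
        inserted_date TEXT DEFAULT CURRENT_DATE
    )""")
conn.executemany("INSERT INTO fruits (fruit, price) VALUES (?, ?)",
                 [("apple", 1), ("banana", 2), ("strawberry", 3)])

# Rows coming from the dataframe, e.g. via df.itertuples(index=False)
new_rows = [("apple", 1), ("banana", 4), ("pear", 2)]
conn.executemany(
    "INSERT INTO fruits (fruit, price) VALUES (?, ?) "
    "ON CONFLICT(fruit) DO UPDATE SET price = excluded.price",
    new_rows,
)
```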
|
<python><pandas><dataframe>
|
2023-02-22 21:39:39
| 0
| 1,428
|
raygo
|
75,538,326
| 7,455,960
|
how to query latest row in relational table after joining needed data
|
<p>I've done some digging around the past couple of days, on this site and elsewhere, and I can't seem to find an answer for a specific query I want to run.</p>
<p>I have 4 tables: Posts, PrivateMessagePosts, Users, and Offenses. Offenses are tied to either a <code>Post.id</code> or a <code>PrivateMessagePost.id</code>, but not to a user. They look like this:</p>
<p><code>Offense</code></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>post_id</th>
<th>private_message_post_id</th>
<th>end_time</th>
<th>is_ban</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>12</td>
<td>null</td>
<td>$some_datetime</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
<td>34</td>
<td>null</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p><code>Post</code></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>author_id</th>
<th>body</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>12</td>
<td>hello</td>
</tr>
<tr>
<td>2</td>
<td>54</td>
<td>world</td>
</tr>
</tbody>
</table>
</div>
<p><code>PrivateMessagePost</code></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>author_id</th>
<th>body</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>12</td>
<td>hello</td>
</tr>
<tr>
<td>2</td>
<td>23</td>
<td>world</td>
</tr>
</tbody>
</table>
</div>
<p><code>User</code></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>username</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>joe</td>
</tr>
<tr>
<td>2</td>
<td>bill</td>
</tr>
</tbody>
</table>
</div>
<p>So far I have a SQLAlchemy query which gets a list of posts given a thread ID (that part is irrelevant) and returns them with all of the offenses tied to them. It looks like this:</p>
<pre class="lang-py prettyprint-override"><code>db.session.query(
    Post,
    Thread.title,
    User.username,
    User.byline,
    User.avatar_url,
    User.registered_on,
    User.id,
    Forum.id,
    Forum.title,
    func.group_concat(offense.id),
    func.group_concat(offense.is_ban),
    func.group_concat(func.coalesce(offense.end_time, "None")),
    func.group_concat(offense.post_id),
    active_offense_one.id,
)
.join(Thread, base_post.thread_id == Thread.id)
.join(Forum, Forum.id == Thread.forum_id)
.join(some_user, some_user.id == base_post.author_id)
# this join gets the list of offenses that are tied to
# the specific post we're querying,
# but irrelevant to the question
.join(
    offense,
    and_(
        offense.post_id == base_post.id,
        offense.approved_user_id > 0,
        or_(offense.end_time > current_time, offense.is_ban == True),
    ),
    isouter=True,
)
.group_by(
    base_post.id,
    active_offense_one.id,
)
<p>which translates to this SQL:</p>
<pre><code>SELECT
posts_1.id AS posts_1_id,
posts_1.body AS posts_1_body,
posts_1.posted_at AS posts_1_posted_at,
posts_1.updated_at AS posts_1_updated_at,
posts_1.ip_address AS posts_1_ip_address,
posts_1.thread_id AS posts_1_thread_id,
posts_1.author_id AS posts_1_author_id,
threads.title AS threads_title,
users_1.username AS users_1_username,
users_1.byline AS users_1_byline,
users_1.avatar_url AS users_1_avatar_url,
users_1.registered_on AS users_1_registered_on,
forums.id AS forums_id,
forums.title AS forums_title,
-- using group_concat to get a CSV of all the offenses tied to this specific post
group_concat(offenses_1.id) AS group_concat_1,
group_concat(offenses_1.is_ban) AS group_concat_2,
group_concat(coalesce(offenses_1.end_time, %(coalesce_1)s)) AS group_concat_3,
group_concat(offenses_1.post_id) AS group_concat_4,
offenses_2.id AS offenses_2_id, users_1.id AS users_1_id
FROM offenses AS offenses_2, posts AS posts_1
INNER JOIN threads ON posts_1.thread_id = threads.id
INNER JOIN forums ON forums.id = threads.forum_id
INNER JOIN users AS users_1 ON users_1.id = posts_1.author_id
LEFT OUTER JOIN offenses AS offenses_1
ON offenses_1.post_id = posts_1.id
AND offenses_1.approved_user_id > %(approved_user_id_1)s
AND (offenses_1.end_time > %(end_time_1)s OR offenses_1.is_ban = true)
GROUP BY posts_1.id, offenses_2.id
</code></pre>
<p>That's all great and working fine, BUT now I need a way to say "give me the latest offense this user has that is either not past its end time or is a ban. It doesn't matter if the offense isn't tied to the post - it just needs to be tied to the user".</p>
|
<python><mysql><sql><sqlalchemy>
|
2023-02-22 21:27:41
| 0
| 329
|
Marty
|
75,538,278
| 1,341,731
|
SQLAlchemy with server side cursors and need to close the result set early - but the program hangs till the resultset is exhausted?
|
<p>I'm trying to use the following coding style with SQLAlchemy to process millions of rows of data, and have a need to abort the result fetch part way through. How can I close a result set and force the underlying connection to stop sending unwanted data and halt the query?</p>
<p>In other words when #close() is called on the result set, why does the underlying code continue to stream data to effectively /dev/null? How can I stop it from streaming data?</p>
<p>Using SQLAlchemy 2.0.3 on Linux.</p>
<pre class="lang-py prettyprint-override"><code>with self._engine.execution_options(stream_results=True).connect() as conn:
    result: Result
    with conn.execute(stmt) as result:
        print("pre-consumer")
        consumer(result)
        print("post-consumer")
        # Code hangs here while MariaDB is continuing to send data.
        result.close()
</code></pre>
<p>I'm using the following connection URL:</p>
<p><code>mariadb+mariadbconnector://_user_:***@mariadb.localnet/db?allowMultiQueries=true&charset=utf8mb4&dumpQueriesOnException=true&includeInnodbStatusInDeadlockExceptions=true&tcpAbortiveClose=false&useCompression=true</code></p>
<p>Example consumer:</p>
<pre><code>def _print_10_ids(result: Result) -> None:
    ix: int = 0
    for article_id in result.scalars():
        print(f"{ix}: {article_id:,}")
        ix += 1
        if ix >= 10:
            # Stop processing rows and return.
            return
</code></pre>
|
<python><sqlalchemy><mariadb>
|
2023-02-22 21:21:02
| 1
| 310
|
Michael Conrad
|
75,538,270
| 7,747,759
|
How to rescale tick labels for imshow in Python?
|
<p>I am not able to change the tick labels. For example, I create a heatmap using imshow, and the tick values shown for the heatmap's indices don't correspond to the values the data was computed from. Here is a snippet of code to demonstrate my question:</p>
<pre><code>iters = 100
A = np.arange(0, iters + 1) / iters
B = np.arange(0, iters + 1) / iters
C = np.zeros((len(A), len(B)))
for i in range(0, len(A)):
    for j in range(0, len(B)):
        a = A[i]
        b = B[j]
        C[j][i] = a - b

fig, ax = plt.subplots()
img = ax.imshow(C, cmap='hot', interpolation='none')
plt.colorbar(img)
ax.invert_yaxis()
ax.set_xlabel('A')
ax.set_ylabel('B')
ax.set_xticklabels(A)
ax.set_yticklabels(B)
plt.show()
</code></pre>
<p>The values on the X and Y axes of the plot should be labeled according to the values in A and B. If I set the tick labels from the arrays A and B, I only get the first few elements. How do I associate the tick labels with the full range of A and B?</p>
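<p>For what it's worth, the closest I've found is the <code>extent</code> argument, which maps the pixel grid onto data coordinates so the ticks land in [0, 1] automatically (a sketch; I'm unsure whether this is the recommended route versus a ticker-based one):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

iters = 100
A = np.arange(0, iters + 1) / iters
C = A[None, :] - A[:, None]  # same C as the double loop: C[j][i] = A[i] - B[j]

fig, ax = plt.subplots()
# extent=(left, right, bottom, top) in data coordinates; origin="lower"
# replaces the explicit invert_yaxis() call.
img = ax.imshow(C, cmap="hot", interpolation="none",
                extent=(A[0], A[-1], A[0], A[-1]), origin="lower")
fig.colorbar(img)
ax.set_xlabel("A")
ax.set_ylabel("B")
```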
|
<python><matplotlib>
|
2023-02-22 21:19:47
| 1
| 511
|
Ralff
|
75,538,238
| 11,678,574
|
python subprocess.run("npx prettier --write test.ts", shell=True) does not work, but running "npx prettier --write test.ts" in the terminal does
|
<p>My file structure:</p>
<ul>
<li>test.py</li>
<li>test.ts</li>
</ul>
<p>and I am attempting to format the TypeScript file using a Python script (running this in the Command Prompt on Windows). However, running
my Python file with <code>subprocess.run("npx prettier --write test.ts", shell=True)</code> shows</p>
<pre><code>npm WARN config global '--global', '--local' are deprecated. Use '--location=global' instead.
test2.ts 136ms
</code></pre>
<p>as the output in the terminal but the ts file is not changed.</p>
<p>However, running <code>npx prettier --write test.ts</code> in the terminal has the same output but the ts file is correctly formatted using Prettier.</p>
<p>Does anyone know why this is the case, and how to make the Prettier formatting work using Python's subprocess?</p>
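<p>For comparison, a hedged sketch (the command and file name are taken from the question; the helper is hypothetical): passing the command as an argument list with an explicit <code>cwd</code> makes it unambiguous which directory <code>npx</code> runs in, so a relative path like <code>test.ts</code> resolves where you expect:</p>

```python
import subprocess

def run_tool(cmd, cwd="."):
    """Run *cmd* (a list of arguments, no shell) and return the CompletedProcess."""
    # cwd pins the working directory, so a relative path like "test.ts"
    # resolves against the project folder rather than wherever the Python
    # process happened to start.
    return subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)

# Hypothetical invocation matching the question. Note that on Windows,
# npx is a .cmd shim, so you may need shell=True or the full path to
# npx.cmd instead of the bare "npx":
# run_tool(["npx", "prettier", "--write", "test.ts"], cwd=r"C:\path\to\project")
```

<p>Printing <code>result.stdout</code> and <code>result.stderr</code> from the returned object usually reveals which file was actually touched.</p>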
|
<python><typescript><subprocess><prettier><npx>
|
2023-02-22 21:16:11
| 1
| 313
|
g999
|
75,538,210
| 13,919,791
|
How do I use Scalene to profile my Pytest test suite?
|
<p>I want to use Scalene to profile my Pytest test suite.</p>
<p>Typically I run the test suite by running</p>
<pre><code>pytest
</code></pre>
<p>So I tried</p>
<pre><code>scalene pytest
</code></pre>
<p>which doesn't work as I expect.</p>
<p>What is the correct way to run my test suite through Scalene?</p>
|
<python><pytest><profiler><scalene>
|
2023-02-22 21:12:19
| 1
| 845
|
snowskeleton
|
75,538,202
| 10,963,057
|
How can I change the distance between the axis title and the axis values in a 3D scatterplot in Plotly (Python)?
|
<p>I have the following example code of a 3D scatterplot in Plotly, and I want to place the x, y, and z titles with more distance from the axes.</p>
<pre><code>import plotly.graph_objects as go
# set up data
x = [1,2,3,4,5]
y = [2,4,6,8,10]
z = [2,4,6,8,10]
# create the figure
fig = go.Figure(data=[go.Scatter3d(x=x, y=y, z=z, mode='markers')])
# set up the axes
fig.update_layout(
title='3D Scatterplot',
scene = dict(
xaxis_title='X Axis',
yaxis_title='Y Axis',
zaxis_title='Z Axis'
)
)
# show the figure
fig.show()
</code></pre>
|
<python><3d><plotly><scatter-plot><axis>
|
2023-02-22 21:11:22
| 1
| 1,151
|
Alex
|
75,538,119
| 7,687,256
|
Python-telegram-bot not returning expected values
|
<p>I've written Python code to get a set of NFT prices from OpenSea's API and return their total USD value below:</p>
<pre><code># Downgraded to python-telegram-bot 13.7 : pip install python-telegram-bot==13.7
# Set your bot token here.
bot_token = ''
# Import necessary libs
import os
import requests
from telegram.ext import Updater, CommandHandler, MessageHandler, filters
# ETH NFT portfolio contained in the dictionary.
eth_nfts_dict = {'pudgypenguins': 1,
'lilpudgys': 1,
'pudgyrods': 1}
# Set up logging
import logging
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.INFO
)
# Setting up their polling stuff
updater = Updater(token=bot_token, use_context=True)
dispatcher = updater.dispatcher
## Functions
'''Fetch spot prices frm coinbase'''
def coinbase_spot(symbol):
    cb_url = f"https://api.coinbase.com/v2/exchange-rates?currency={symbol}"
    cb_response = requests.get(cb_url)
    return float(cb_response.json()['data']['rates']['USD'])

'''Print ETH NFT prices in portfolio, and total USD value'''
def get_eth_nfts(nfts_dict, name):
    eth_counter = 0
    for nft, qty in nfts_dict.items():
        url = f"https://api.opensea.io/api/v1/collection/{nft}/stats"
        headers = {"accept": "application/json"}
        response = requests.get(url, headers=headers)
        eth_floor_price = response.json()['stats']['floor_price']
        eth_counter += eth_floor_price * qty
        nfts_dict[nft] = eth_floor_price
    # Below code is to format the data to appear visually aligned on TG.
    # Wrap ``` to the string (for Markdown formatting)
    message = "```\n"
    for key, value in nfts_dict.items():
        message = message + "{0:<40} {1}".format(key, value) + "\n"
    message = message + f"\nTotal value of {name}'s ETH NFTs is {round(eth_counter, 3)} ETH ({'${:,.2f}'.format(eth_counter * coinbase_spot('eth'))})" + "\n```"
    return message

# Commands
def eth_nfts(update, context):
    context.bot.send_message(chat_id=update.effective_chat.id, text=get_eth_nfts(eth_nfts_dict, 'Ben'), parse_mode='Markdown')  # parse_mode set to Markdown
# Create and add command handlers
eth_nfts_handler = CommandHandler('Ben_eth', eth_nfts)
dispatcher.add_handler(eth_nfts_handler) # basically it's: updater.dispatcher.add_handler(CommandHandler('<command>', <command function>))
updater.start_polling()
updater.idle() # make sure that program won't fail if 2 programs are attempted to run simultaneously.
</code></pre>
<p>The results in the image below shows what happens when I try to ping my Telegram bot:
<a href="https://i.sstatic.net/PB2Cr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PB2Cr.png" alt="enter image description here" /></a></p>
<p>The initial '/ben_eth' command returns the correct Total value. But subsequent '/ben_eth' commands seem to return wrong Total values (multiples of the initial value). What is the bug that causes the subsequent Total values to be wrong?</p>
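<p>The symptom can be reproduced without Telegram or OpenSea. A minimal sketch (hypothetical prices) of what happens when the loop overwrites the quantities in a shared module-level dict:</p>

```python
def total_value(nfts_dict, price=2.0):
    """Mimics get_eth_nfts: sums price * qty, then overwrites qty with price."""
    total = 0.0
    for nft, qty in nfts_dict.items():
        total += price * qty
        nfts_dict[nft] = price  # mutates the caller's dict, as in the question
    return total

portfolio = {"pudgypenguins": 1, "lilpudgys": 1}
first = total_value(portfolio)   # quantities are 1 and 1: 2*1 + 2*1 = 4.0
second = total_value(portfolio)  # "quantities" are now prices: 2*2 + 2*2 = 8.0
print(first, second)  # 4.0 8.0
```

<p>Copying the dict first (e.g. <code>dict(nfts_dict)</code>), or keeping the fetched prices in a separate dict, should avoid the drift between calls.</p>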
|
<python><telegram-bot><python-telegram-bot>
|
2023-02-22 21:01:19
| 1
| 939
|
spidermarn
|
75,537,788
| 5,235,665
|
Pandas groupby is removing columns unexpectedly
|
<p>New to Pandas and I'm trying to figure something out here. I have the following code:</p>
<pre><code>logger.info(f"There are {mydf.shape[0]} rows in the subset dataframe")
logger.info(f"before groupby mydf is: {mydf.head(25)}")
mydf = mydf.groupby([Customer_ID, Customer_Name]).agg("sum").reset_index()
logger.info(f"after groupby mydf is: {mydf.head(25)}")
logger.info(f"Now there are {mydf.shape[0]} rows in the subset dataframe")
</code></pre>
<p>When I run this, the log output I'm seeing is:</p>
<pre><code>There are 36905 rows in the subset dataframe
before groupby mydf is: Customer ID Customer Name ... Balance Summary Info EBITDA
after groupby mydf is: Customer ID Customer Name Total Invoice Amount ($) Revenue EBITDA
Now there are 66 rows in the subset dataframe
</code></pre>
<p>Why is my <code>Balance Summary Info</code> column going missing after the <code>groupby</code>, and what do I have to do to keep it in there (so that it's in the final <code>mydf</code> dataframe)?</p>
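<p>For context, a small sketch (column names borrowed from the question, data invented) showing how non-numeric columns behave under a plain <code>sum</code> aggregation, and one way to keep such a column explicitly via an aggregation dict:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Customer ID": [1, 1, 2],
    "Customer Name": ["a", "a", "b"],
    "Balance Summary Info": ["x", "y", "z"],  # non-numeric column
    "Revenue": [10, 20, 30],
})

# A plain "sum" either concatenates string columns (recent pandas) or
# silently drops them as "nuisance" columns (older pandas).
summed = df.groupby(["Customer ID", "Customer Name"]).agg("sum").reset_index()
print(summed.columns.tolist())

# Keeping a non-numeric column explicitly, here by taking its first value
# per group (whether "first" is the right choice depends on the data):
kept = (df.groupby(["Customer ID", "Customer Name"])
          .agg({"Revenue": "sum", "Balance Summary Info": "first"})
          .reset_index())
print(kept.columns.tolist())
```
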
|
<python><pandas><dataframe>
|
2023-02-22 20:22:20
| 1
| 845
|
hotmeatballsoup
|
75,537,775
| 1,511,294
|
Unable to create GridSpace Object in SynapseML
|
<p>I am trying to create a GridSpace object in SynapseML, like this:</p>
<pre><code>paramGrid = HyperparamBuilder().addHyperparam(gbt,gbt.maxBin, DiscreteHyperParam([200, 255,300]))
searchSpace= paramGrid.build()
print(searchSpace)
type(searchSpace)
GridSpace(searchSpace)
</code></pre>
<p>This gives me the following error:</p>
<pre><code>Py4JError: An error occurred while calling None.com.microsoft.azure.synapse.ml.automl.GridSpace. Trace:
py4j.Py4JException: Constructor com.microsoft.azure.synapse.ml.automl.GridSpace([class [Lscala.Tuple2;]) does not exist
</code></pre>
<p>When I ran the code to instantiate a RandomSpace object, it worked properly.
I checked the API docs of the two constructors here:
<a href="https://mmlspark.blob.core.windows.net/docs/0.10.1/pyspark/_modules/synapse/ml/automl/HyperparamBuilder.html#DiscreteHyperParam" rel="nofollow noreferrer">https://mmlspark.blob.core.windows.net/docs/0.10.1/pyspark/_modules/synapse/ml/automl/HyperparamBuilder.html#DiscreteHyperParam</a></p>
<p>For GridSpace:</p>
<pre><code>def __init__(self, paramValues):
</code></pre>
<p>And for RandomSpace:</p>
<pre><code>def __init__(self, paramDistributions):
</code></pre>
<p>I saw both the params are unpacked in the same way, so I am not able to figure out the mistake.</p>
|
<python><azure><machine-learning><azure-synapse>
|
2023-02-22 20:20:54
| 1
| 1,665
|
Ayan Biswas
|
75,537,615
| 15,215,859
|
GCP composer read MySql table and push into GCS and then into BigQuery
|
<p>We are using GCP Cloud Composer 2 (managed Airflow) as our orchestration tool and BigQuery as the DB. I need to push all the records from a MySQL table into a GCS bucket and into a BigQuery table, but the method should be upsert. So I wrote the DAG below, but it fails, saying: "airflow.exceptions.AirflowException: Invalid arguments were passed to BigQueryUpsertTableOperator (task_id: load_to_bigquery). Invalid arguments were:
**kwargs: 'table_id', Schema_fields, schema_update_options': ['ALLOW_FIELD_ADDITION'], 'write_disposition': 'WRITE_APPEND', 'source_format': 'CSV', 'source_objects': "</p>
<p>If these arguments are not available in <code>BigQueryUpsertTableOperator</code>, how should I do an upsert from MySQL to GCS and then to BigQuery? And how should I pull incremental data from the MySQL table after the initial run? I want MySQL to stay in sync with GCS and BigQuery at all times. I also want to supply the schema externally, since some columns arrive from MySQL with incorrect datatypes, and, as the error shows, I can't use <code>schema_fields</code> because it's not an attribute of <code>BigQueryUpsertTableOperator</code>.</p>
<p>Here is my DAG code -</p>
<pre><code>from datetime import timedelta, datetime
from airflow import DAG
from airflow.operators.mysql_operator import MySqlOperator
from airflow.providers.google.cloud.transfers.mysql_to_gcs import MySQLToGCSOperator
from airflow.providers.google.cloud.operators.gcs import GCSDeletefilesOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryUpsertTableOperator
from airflow.utils.dates import days_ago
from airflow.models import Variable
default_args = {
'owner': 'airflow',
'start_date': days_ago(1),
'retries': 1,
'retry_delay': timedelta(minutes=5)
}
dag = DAG('archival_dag', default_args=default_args, schedule_interval='0 0 * * *')
# Get last execution date of the DAG
last_execution_date = Variable.get('last_execution_date', default_var=None)
if last_execution_date is None:
    # This is the first run, fetch all data from MySQL table
    query = "SELECT * FROM bkp_item_22_02_2023;"
else:
    # This is not the first run, fetch only incremental data
    query = f"SELECT * FROM bkp_item_22_02_2023 WHERE file_updated >= '{last_execution_date}'"
bucket = 'us-central1-bucket'
file_name = 'data/{{ds_nodash}}.csv'
# Create a task to upload the data to GCS
upload_to_gcs = MySQLToGCSOperator(
task_id='upload_to_gcs',
sql=query,
bucket=bucket,
filename=file_name,
mysql_conn_id='mysql_conn_id',
gcp_conn_id='google_cloud_default',
export_format='CSV',
dag=dag
)
# Create a task to load the data from GCS to BigQuery
load_to_bigquery = BigQueryUpsertTableOperator(
task_id='load_to_bigquery',
dataset_id='testing',
table_id='item_bkp_22022023',
schema_fields=[
{
"name": "Id",
"mode": "NULLABLE",
"type": "INTEGER"
},
{
"name": "item_type_id",
"mode": "NULLABLE",
"type": "INTEGER"
},
{
"name": "user_id",
"mode": "NULLABLE",
"type": "INTEGER"
},
{
"name": "image_info",
"mode": "NULLABLE",
"type": "BYTES"
},
{
"name": "text_info",
"mode": "NULLABLE",
"type": "STRING"
},
{
"name": "location_info",
"mode": "NULLABLE",
"type": "STRING"
},
{
"name": "item_date_time",
"mode": "NULLABLE",
"type": "TIMESTAMP"
},
{
"name": "latitude",
"mode": "NULLABLE",
"type": "FLOAT"
},
{
"name": "longitude",
"mode": "NULLABLE",
"type": "FLOAT"
},
{
"name": "parcel_id",
"mode": "NULLABLE",
"type": "INTEGER"
},
{
"name": "shipment_id",
"mode": "NULLABLE",
"type": "INTEGER"
},
{
"name": "transaction_id",
"mode": "NULLABLE",
"type": "INTEGER"
},
{
"name": "web_hook_status",
"mode": "NULLABLE",
"type": "INTEGER"
},
{
"name": "file_id",
"mode": "NULLABLE",
"type": "STRING"
},
{
"name": "file_created",
"mode": "NULLABLE",
"type": "TIMESTAMP"
},
{
"name": "file_updated",
"mode": "NULLABLE",
"type": "TIMESTAMP"
},
{
"name": "pickup_id",
"mode": "NULLABLE",
"type": "INTEGER"
},
{
"name": "team_id",
"mode": "NULLABLE",
"type": "INTEGER"
},
{
"name": "driver_id",
"mode": "NULLABLE",
"type": "INTEGER"
},
{
"name": "container_id",
"mode": "NULLABLE",
"type": "INTEGER"
}
],
schema_update_options=['ALLOW_FIELD_ADDITION'],
write_disposition='WRITE_APPEND',
source_format='CSV',
source_files=[f'{bucket}/{file_name}'],
create_disposition='CREATE_IF_NEEDED',
google_cloud_storage_conn_id='google_cloud_default',
bigquery_conn_id='google_cloud_default',
table_resource={"tableId": "item_bkp_22022023", "datasetId": "testing", "projectId": "test-dev"},
dag=dag
)
# Set the execution date for the next run
Variable.set('last_execution_date', datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
# Define task dependencies
upload_to_gcs >> load_to_bigquery
</code></pre>
|
<python><python-3.x><google-bigquery><airflow>
|
2023-02-22 20:00:10
| 1
| 317
|
Tushaar
|
75,537,296
| 1,344,369
|
Enumerating list of simple observables subscribing to wrong object in Python
|
<p>I am experiencing a weird bug in code that uses observables in list comprehensions. I built the simplest <a href="https://colab.research.google.com/drive/1lRkhaAnlwj3zYXXx7DTPRWc4rsrKuAts?usp=sharing" rel="nofollow noreferrer">MWE</a> I could think of:</p>
<pre class="lang-py prettyprint-override"><code>class DummyObservable:
    def __init__(self, value):
        self.value = value
        self.subscribers = set()

    def __repr__(self):
        return f"Dummy (id={id(self)}, value={self.value}, cbs={len(self.subscribers)})"

    def subscribe(self, callback):
        self.subscribers.add(callback)
        callback(self.value)
        return lambda: self.subscribers.remove(callback)

    def get(self):
        return self.value

    def update(self, value):
        self.value = value
        for callback in self.subscribers:
            callback(value)
</code></pre>
<p>It seems to work fine (check the observable id):</p>
<pre class="lang-py prettyprint-override"><code>a = DummyObservable("foo")
b = DummyObservable("bar")
u = a.subscribe(lambda x: print(f"{id(a)}: {x}"))
#[Out:] 140257126530832: foo
a.update("fonzie")
#[Out:] 140257126530832: fonzie
a,b
#[Out:] (Dummy (id=140257126530832, value=fonzie, cbs=1), Dummy (id=140257126528096, value=bar, cbs=0))
</code></pre>
<p>Now, trying to subscribe inside a list comprehension:</p>
<pre class="lang-py prettyprint-override"><code>[s.subscribe(lambda x: print(f"{id(s)}: {x}")) for s in [a,b]]
#[Out:]
140257126530832: fonzie
140257126528096: bar
</code></pre>
<p>Seems ok, so far. The id of <code>a</code> and <code>b</code> are printed correctly, showing the subscription is ok.
But... is it really? Look, this strange behavior:</p>
<pre class="lang-py prettyprint-override"><code>a.update("foobar")
#[Out:]
140257126530832: foobar
140257126528096: foobar
</code></pre>
<p>What is going on? The update on "a" is calling the callback of "b".
<a href="https://colab.research.google.com/drive/1lRkhaAnlwj3zYXXx7DTPRWc4rsrKuAts?usp=sharing" rel="nofollow noreferrer">See running MWE.</a></p>
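<p>This looks like Python's late-binding closure behavior rather than a cross-wired subscription: every lambda created in the comprehension closes over the same variable <code>s</code>, which refers to <code>b</code> once the loop finishes, so <code>a</code>'s second callback prints <code>id(b)</code> even though it fires on <code>a</code>'s update. A minimal sketch of the effect and the usual default-argument fix:</p>

```python
# Every lambda created in the comprehension closes over the same variable
# s; when the callbacks later run, s holds the last item of the list.
funcs_late = [lambda: s for s in ["a", "b"]]
print([f() for f in funcs_late])   # ['b', 'b']

# Binding the current value as a default argument freezes it per iteration:
funcs_fixed = [lambda s=s: s for s in ["a", "b"]]
print([f() for f in funcs_fixed])  # ['a', 'b']
```

<p>Applied to the MWE, <code>s.subscribe(lambda x, s=s: print(f"{id(s)}: {x}"))</code> should give each callback its own observable.</p>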
|
<python><observable><list-comprehension>
|
2023-02-22 19:22:51
| 0
| 1,667
|
Fred Guth
|
75,537,277
| 2,540,669
|
How many websocket clients can flask-socketio handle when in Gunicorn with gevent?
|
<p>I am considering the following setup:</p>
<ul>
<li>A "normal" flask app</li>
<li>A <a href="https://python-socketio.readthedocs.io/en/latest/" rel="nofollow noreferrer">socketio</a> app</li>
<li><a href="https://flask-socketio.readthedocs.io/en/latest/" rel="nofollow noreferrer">Flask SocketIO</a></li>
<li><a href="https://python-socketio.readthedocs.io/en/latest/server.html#gevent-with-gunicorn" rel="nofollow noreferrer">gevent with Gunicorn, using <code>GeventWebSocketWorker</code> worker class</a></li>
<li>a single gunicorn worker</li>
</ul>
<p>If you're really curious, <a href="https://github.com/sematic-ai/sematic/blob/4ab386fe388649939b4917336618b282d749dd39/sematic/api/server.py#L87" rel="nofollow noreferrer">here's the source</a> tying it all together.</p>
<p>This seems to work, and be able to serve both normal HTTP traffic to the flask app as well as websockets to the socketio app. My question is: how? And how scalable is it (particularly regarding the websocket piece)?</p>
<p>My understanding of WSGI is that each request that comes in (http or websocket) will block a worker until the connection is terminated. This would mean that this setup would only be able to handle one websocket connection at a time. I've done enough experimenting to not think that's the case, but I still don't know <em>how</em> it works. Are the websockets being handled outside Gunicorn? If so, why do you have to give Gunicorn the <code>GeventWebSocketWorker</code> class? If they are being handled from Gunicorn, how is that possible when Gunicorn is WSGI compliant and WSGI doesn't seem to really allow more than one connection per worker?</p>
<p>On top of these theoretical questions, I also want to know how many websocket connections I can handle with this setup (ballpark).</p>
|
<python><gunicorn><gevent><flask-socketio><gevent-socketio>
|
2023-02-22 19:21:27
| 1
| 3,181
|
augray
|
75,537,234
| 20,898,396
|
Why is accessing elements of an array slower in the GPU than the CPU with Numba?
|
<p>Since we can't call print inside <code>@cuda.jit</code> and trying to print <code>cuda.to_device(A)</code> results in <code><numba.cuda.cudadrv.devicearray.DeviceNDArray at 0x7f2c5c0605e0></code>, I didn't think we could print anything from the GPU. However, we can print a single element.</p>
<pre><code>import numpy as np
from numba import cuda
A = np.random.randn(1000, 1000)
A_gpu = cuda.to_device(A)
A_gpu[0][0]
</code></pre>
<pre><code>-1.0404635120476469
</code></pre>
<p>I was wondering if the number had to be copied to the CPU first before being printed and tried timing it.</p>
<pre><code>%timeit A[0][0]
%timeit A_gpu[0][0]
</code></pre>
<pre><code>231 *ns* ± 5.85 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
166 *µs* ± 25.6 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
</code></pre>
<p>Accessing an element on the GPU is a thousand times slower than on the CPU. However, we can also print the shape, and that is a little faster on the GPU, so I doubt anything had to go through the CPU just to be printed.</p>
<pre><code>%timeit A.shape
%timeit A_gpu.shape
</code></pre>
<pre><code>78.1 ns ± 1.1 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
58.8 ns ± 19.3 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
</code></pre>
<p>Why would accessing an element on the GPU be slower, and is it a problem if we are doing it inside a <code>@cuda.jit</code>, or is it optimized? (If the GPU has 1000 cores, and the array has size 1000*1000, the 1000 cores would access 1000 elements all at once 1000 times, which would add a non-negligible 166 µs * 1000.)</p>
<pre><code>@cuda.jit
def add_one_gpu(A):
    x, y = cuda.grid(2)
    m, n = A.shape
    if x < m and y < n:
        A[x, y] += 1
</code></pre>
|
<python><cuda><numba>
|
2023-02-22 19:17:05
| 1
| 927
|
BPDev
|
75,537,204
| 4,764,604
|
How are df2 values placed to the right of df1 columns when there is a match in a common column?
|
<p>I want, for each row of <code>df_transformed</code> whose <code>Name</code> matches a row of <code>df</code>, to add that row's values from <code>df_transformed</code> to <code>df</code>.</p>
<p>I have an example dataframe <code>df</code> (actually there are other columns):</p>
<pre><code> Name
0 Wine Intelligence, Rapport
1 .DS_Store
2 Chiffres 2021 mail
3 Chiffres 2021 rapport
4 Chiffres 2021 rapport
</code></pre>
<p>and <code>df_transformed</code>:</p>
<pre><code> Name Theme Pays
0 Chiffres Rapport (pdf) Expeditions [France, Allemagne, Italie, Espagne, Belgi...
1 Image WI Image [ Allemagne, Italie, Espagne, Belgique, Por...
2 Brand Power WI Marques [ Allemagne, Italie, Espagne, Belgique, Por...
3 Volume WI Achat [ Allemagne, Italie, Espagne, Belgique, Por...
4 Critères WI Critères [ Allemagne, Italie, Espagne, Belgique, Por...
</code></pre>
<p>When I use <code>df_merged = df.merge(df_transformed, on='Name', how='left')</code>, I get</p>
<pre><code> Name Theme Pays
0 Wine Intelligence, Rapport Expeditions [France, Allemagne, Italie, Espagne, Belgi...
1 .DS_Store Image [ Allemagne, Italie, Espagne, Belgique, Por...
2 Chiffres 2021 mail Marques [ Allemagne, Italie, Espagne, Belgique, Por...
3 Chiffres 2021 rapport Achat [ Allemagne, Italie, Espagne, Belgique, Por...
4 Chiffres 2021 rapport Critères [ Allemagne, Italie, Espagne, Belgique, Por...
</code></pre>
<p>There is a problem: it looks like the <code>df_transformed</code> values were added row by row, rather than only where the <code>Name</code> columns match.
For instance, "Chiffres 2021 mail" got "Marques" in the Theme column, when that value belongs to "Brand Power WI".</p>
|
<python><python-3.x><pandas><merge>
|
2023-02-22 19:14:04
| 0
| 3,396
|
Revolucion for Monica
|
75,537,113
| 3,045,351
|
How to zip keys within a list of dicts
|
<p>I have this object:</p>
<pre><code>dvalues = [{'column': 'Environment', 'parse_type': 'iter', 'values': ['AirportEnclosed', 'Bus', 'MotorwayServiceStation']}, {'column': 'Frame Type', 'parse_type': 'list', 'values': ['All']}]
</code></pre>
<p>I want a zipped output like this:</p>
<pre><code>('AirportEnclosed', 'All')
('Bus', 'All')
('MotorwayServiceStation', 'All')
</code></pre>
<p>so far the nearest I have got is with the below:</p>
<pre><code>for d in dvalues:
dv = d['values']
zip_list = zip(dv, d['values'])
for z in zip_list:
print(z)
</code></pre>
<p>Which gives me this as an output:</p>
<pre><code>('AirportEnclosed', 'AirportEnclosed')
('Bus', 'Bus')
('MotorwayServiceStation', 'MotorwayServiceStation')
('All', 'All')
</code></pre>
<p>What do I need to change to get the desired output?</p>
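<p>For what it's worth, the desired output pairs every value of the first dict with every value of the second, which is a Cartesian product rather than a zip (<code>zip</code> stops at the shorter list). A sketch with <code>itertools.product</code>:</p>

```python
from itertools import product

dvalues = [
    {'column': 'Environment', 'parse_type': 'iter',
     'values': ['AirportEnclosed', 'Bus', 'MotorwayServiceStation']},
    {'column': 'Frame Type', 'parse_type': 'list', 'values': ['All']},
]

# product(*lists) yields one tuple per combination across all dicts,
# however many dicts the list contains.
combos = list(product(*(d['values'] for d in dvalues)))
for combo in combos:
    print(combo)
```
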
|
<python><python-3.x><python-zip>
|
2023-02-22 19:03:16
| 1
| 4,190
|
gdogg371
|
75,536,995
| 1,806,392
|
Apply function accross multiple columns within group_by in Polars
|
<p>Given this dataframe:</p>
<pre><code>import polars as pl
polars_df = pl.DataFrame({
"name": ["A","B","C"],
"group": ["a","a","b"],
"val1": [1, None, 3],
"val2": [1, 5, None],
"val3": [None, None, 3],
})
</code></pre>
<p>I want to calculate the mean and count the number of NAs within the three val* columns for each group. So the result should look like:</p>
<pre><code>pl.DataFrame([
{'group': 'a', 'mean': 2.0, 'percentage_na': 0.5},
{'group': 'b', 'mean': 3.0, 'percentage_na': 0.3333333333333333}
])
</code></pre>
<p>In Pandas I was able to do this with this (quite ugly and not optimized) code:</p>
<pre><code>df = polars_df.to_pandas()
pd.concat([
df.groupby(["group"]).apply(lambda g: g.filter(like="val").mean().mean()).rename("mean"),
df.groupby(["group"]).apply(lambda g: g.filter(like="val").isna().sum().sum() / (g.filter(like="val").shape[0] * g.filter(like="val").shape[1])).rename("percentage_na")
], axis=1)
</code></pre>
|
<python><dataframe><python-polars>
|
2023-02-22 18:50:48
| 5
| 2,314
|
nik
|
75,536,896
| 12,470,058
|
How to fix the syntax of a function which is in string format?
|
<p>I have a text file containing python functions in string format. My code reads each function from the text file, feeds it with the appropriate inputs and then runs it. To run a function string (for example <code>fun_str</code>) from the text file, I use the following snippet in my code:</p>
<pre><code>dict = {}
exec(fun_str, globals(), dict)
f, = dict.values()
f()
</code></pre>
<p>As long as each function string has the python standard syntax (in terms of indentations, new lines, etc), the code works well. However, if the code reads a function string such as:</p>
<pre><code>"def fun(list): output_list = [] for i in list: if i not in output_list: output_list.append(i) return output_list"
</code></pre>
<p>(all in one line)</p>
<p>then <code>SyntaxError: invalid syntax</code> is raised with <code>^^^</code> under <code>for</code>.</p>
<p>Is there any built-in module or any approach to fix the function string so that it follows the standard syntax before it is run by <code>exec</code>?</p>
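<p>As far as I know, there is no built-in that re-indents such a flattened function, because the newline and indentation information is already lost in the one-line string. However, <code>compile()</code> can at least detect the problem before <code>exec</code> runs, so bad entries can be skipped or logged. A sketch:</p>

```python
def is_valid_function_source(src):
    """Return True if *src* parses as Python; a one-lined def will not."""
    try:
        compile(src, "<fun_str>", "exec")
        return True
    except SyntaxError:
        return False

good = "def fun(xs):\n    return sorted(set(xs), key=xs.index)"
bad = ("def fun(list): output_list = [] for i in list: "
       "if i not in output_list: output_list.append(i) return output_list")
print(is_valid_function_source(good), is_valid_function_source(bad))  # True False
```
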
|
<python><python-3.x>
|
2023-02-22 18:41:00
| 1
| 368
|
Bsh
|
75,536,670
| 14,900,600
|
Python percentage calculator does not call exit()
|
<p>I am trying to write a percentage calculator that asks for the number of subjects, marks in the specified number of subjects and computes the percentage. It works well, but does not exit on calling exit() after the user presses "n":</p>
<pre class="lang-py prettyprint-override"><code>value = input("Do you want to calculate again (y/n):")
if value.lower == "y":
    percentage()
elif value.lower == "n":
    print("ok, sayonara")
    exit()
</code></pre>
<p>The complete code is:</p>
<pre class="lang-py prettyprint-override"><code>def percentage():
    numbers = []
    x = int(input('How many subjects would you like to find the percentage for:'))
    for i in range(x):
        n = int(input('subject ' + str(i+1) + ': '))
        numbers.append(n)
    final = sum(numbers) / len(numbers)
    print("The percentage is", final, "%")

while True:
    try:
        percentage()
        value = input("Do you want to calculate again (y/n):")
        if value.lower == "y":
            percentage()
        elif value.lower == "n":
            print("ok, sayonara")
            exit()
    except:
        print("\nOops! Error. Try again...\n")
</code></pre>
<p>here's what happens:
<a href="https://i.sstatic.net/jDPdP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jDPdP.png" alt="problem_image" /></a></p>
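<p>Worth noting when reducing this: <code>value.lower</code> without parentheses is a bound method object, so it never compares equal to <code>"y"</code> or <code>"n"</code>, and neither branch runs. A minimal sketch of the difference:</p>

```python
value = "N"
print(value.lower == "n")    # False: compares a method object to a string
print(value.lower() == "n")  # True: calls the method first
```
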
|
<python>
|
2023-02-22 18:17:19
| 4
| 541
|
SK-the-Learner
|
75,536,658
| 3,056,036
|
Merge Select Columns Dataframe Columns Into a Multi-Index
|
<p>I have N dataframes; in this case let's use 2 dfs as an example:</p>
<pre class="lang-py prettyprint-override"><code>df1 = pd.DataFrame([['a', 2], ['b', 4]], columns=['foo', 'bar'])
df2 = pd.DataFrame([['a', 3], ['b', 5]], columns=['foo', 'bar'])
</code></pre>
<p>Which produce:</p>
<pre><code> foo bar
0 a 2
1 b 4
</code></pre>
<pre><code> foo bar
0 a 3
1 b 5
</code></pre>
<p>How can I concat or merge them into a multi-index, where the new column level's name is based on some external variable attached to the dfs? E.g., I will use the df name as an example here:</p>
<pre><code> df1 df2
foo bar bar
0 a 2 3
1 b 4 5
</code></pre>
<p>The dataframes are guaranteed to have the same <code>foo</code> values in the same order.</p>
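<p>A hedged sketch of one way (the top-level names are assumed, since dataframes do not carry their variable names): pass the frames to <code>pd.concat</code> with labels, dropping the duplicated <code>foo</code> from every frame but the first:</p>

```python
import pandas as pd

df1 = pd.DataFrame([['a', 2], ['b', 4]], columns=['foo', 'bar'])
df2 = pd.DataFrame([['a', 3], ['b', 5]], columns=['foo', 'bar'])

# Passing a mapping to concat uses its keys as a new top column level,
# producing a MultiIndex over the original column names.
combined = pd.concat({'df1': df1, 'df2': df2.drop(columns='foo')}, axis=1)
print(combined)
```

<p>For N frames, the mapping can be built programmatically, e.g. from a dict of name-to-frame pairs.</p>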
|
<python><pandas><multi-index>
|
2023-02-22 18:16:43
| 1
| 309
|
ateymour
|
75,536,405
| 3,399,638
|
Pythonic way of taking mean of values from Dictionary with Keys
|
<p>Given a Python dictionary, I'm attempting to take the element-wise mean across lists stored under keys that don't match. The following is an example of the dictionary, where there are N 'winner_num' keys and the mean is taken at each index of 'value_held_graph'.</p>
<pre><code>nested_dict = {'winner_num_0': {'cash_held': 1800.546655015998,
'value_held_graph': [655.0,
657.1859489988019,
668.1170748266165,
673.4509510481149,
...
682.6094632572457
]},
'winner_num_1': {'cash_held': 2307.4282142925185,
'value_held_graph': [655.0,
643.9625087246983,
714.9614460254422,
716.9587778340948,
...
713.7097698975869
]},
'winner_num_N': {'cash_held': 2307.4282142925185,
'value_held_graph': [655.0,
654.5754236503379,
659.630701080459,
664.9212169741535,
...
654.4366560963232
]}
</code></pre>
<p>The desired result would look like:</p>
<pre><code>value_held_graph_mean = [655.0, 651.9079605, 680.903074, 685.1103153, ..., 683.5852964]
</code></pre>
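<p>A sketch using only the standard library (data shortened from the question): <code>zip(*lists)</code> regroups the graphs by index, and <code>statistics.mean</code> averages each position:</p>

```python
from statistics import mean

nested_dict = {  # truncated example data
    'winner_num_0': {'cash_held': 1800.5, 'value_held_graph': [655.0, 657.2, 668.1]},
    'winner_num_1': {'cash_held': 2307.4, 'value_held_graph': [655.0, 644.0, 715.0]},
    'winner_num_2': {'cash_held': 2307.4, 'value_held_graph': [655.0, 654.6, 659.6]},
}

# zip(*graphs) yields one tuple per time step, across all winners.
graphs = (v['value_held_graph'] for v in nested_dict.values())
value_held_graph_mean = [mean(step) for step in zip(*graphs)]
print(value_held_graph_mean)
```

<p>If the graphs can have unequal lengths, <code>itertools.zip_longest</code> with a fill value (or filtering <code>None</code>s) would be needed instead of plain <code>zip</code>.</p>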
|
<python><dictionary><mean>
|
2023-02-22 17:49:21
| 3
| 323
|
billv1179
|
75,536,380
| 7,788,098
|
How to return outer function if any of the inner functions returns
|
<p>See the following example:</p>
<pre><code>def a(test):
    if test > 1:
        raise Exception("error in 'a'")
    print("nothing happened")

def b(test):
    if test > 1:
        raise Exception("error in 'b'")
    print("nothing happened")

def c(test):
    if test > 1:
        raise Exception("error in 'c'")
    print("nothing happened")

def all():
    try:
        a(1)
    except Exception:
        print("finished due to error")
        return False
    try:
        b(2)
    except Exception:
        print("finished due to error")
        return False
    try:
        c(1)
    except Exception:
        print("finished due to error")
        return False

if __name__ == "__main__":
    all()
</code></pre>
<p>Output for this is:</p>
<pre><code>nothing happened
finished due to error
</code></pre>
<p>So what I want to achieve is for <code>all()</code> to finish, returning False, when any of the inner functions fails.</p>
<p>Is there any way to write the <code>all()</code> function like this, modifying the inner functions from the inside, so that they communicate the "return False" to the outer function?</p>
<pre><code>def all():
    a(1)
    b(2)
    c(1)
</code></pre>
<p>(Current output of this would be):</p>
<pre><code>Traceback (most recent call last):
File "/Users/matiaseiletz/Library/Application Support/JetBrains/PyCharmCE2021.2/scratches/aaa.py", line 24, in <module>
all()
File "/Users/matiaseiletz/Library/Application Support/JetBrains/PyCharmCE2021.2/scratches/aaa.py", line 18, in all
b(2)
File "/Users/matiaseiletz/Library/Application Support/JetBrains/PyCharmCE2021.2/scratches/aaa.py", line 8, in b
raise Exception("error in 'b'")
Exception: error in 'b'
nothing happened
</code></pre>
<p>And the objective is to have an output like the first one, but without all the <code>try - except</code> logic around every function.</p>
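<p>One sketch of this (the function is renamed <code>run_all</code> here to avoid shadowing the built-in <code>all</code>): since an uncaught exception already skips the remaining calls, a single <code>try</code>/<code>except</code> around the whole sequence reproduces the first output without wrapping each call:</p>

```python
def a(test):
    if test > 1:
        raise Exception("error in 'a'")
    print("nothing happened")

def b(test):
    if test > 1:
        raise Exception("error in 'b'")
    print("nothing happened")

def c(test):
    if test > 1:
        raise Exception("error in 'c'")
    print("nothing happened")

def run_all():
    # The first exception skips the remaining calls; one handler is enough.
    try:
        a(1)
        b(2)
        c(1)
    except Exception:
        print("finished due to error")
        return False
    return True

print(run_all())
```
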
<p>Thank you very much</p>
|
<python><python-3.x>
|
2023-02-22 17:46:36
| 2
| 439
|
Matias Eiletz
|
75,536,284
| 1,497,139
|
How do i fix "default prefix is not defined" errors in LinkML?
|
<p>As a contributor to <a href="https://github.com/WolfgangFahl/pyMetaModel" rel="nofollow noreferrer">https://github.com/WolfgangFahl/pyMetaModel</a> I am running into a problem when trying out the generated LinkML YAML files with different LinkML generators.</p>
<p>While the LinkML and mermaid generators seem to run fine, the Python code generator does not, and therefore I get a 0-byte Python file when piping the result of the generator to a .py file:</p>
<pre class="lang-bash prettyprint-override"><code>scripts/genexamples
generating PlantUML for examples/family/FamilyContext
generating linkML for examples/family/FamilyContext
generating mermaid ER Diagram for examples/family/FamilyContext
generating python code for examples/family/FamilyContext
INFO:root:Default_range not specified. Default set to 'string'
ValueError: File "FamilyContext.yaml", line 3, col 17 Default prefix: FamilyContext/ is not defined
generating PlantUML for examples/teaching/TeachingSchema
generating linkML for examples/teaching/TeachingSchema
generating mermaid ER Diagram for examples/teaching/TeachingSchema
generating python code for examples/teaching/TeachingSchema
INFO:root:Default_range not specified. Default set to 'string'
ValueError: File "TeachingSchema.yaml", line 3, col 17 Default prefix: TeachingSchema/ is not defined
generating PlantUML for examples/metamodel/metamodel
generating linkML for examples/metamodel/metamodel
generating mermaid ER Diagram for examples/metamodel/metamodel
generating python code for examples/metamodel/metamodel
INFO:root:Default_range not specified. Default set to 'string'
ValueError: File "metamodel.yaml", line 3, col 17 Default prefix: MetaModel/ is not defined
</code></pre>
<p>How is the error
<strong>ValueError: File "TeachingSchema.yaml", line 3, col 17 Default prefix: TeachingSchema/ is not defined</strong> to be fixed?</p>
|
<python><linkml>
|
2023-02-22 17:35:14
| 2
| 15,707
|
Wolfgang Fahl
|
75,536,279
| 7,984,318
|
Where to find all the third party packages or modules installed in a Flask project
|
<p>I'm new to Flask, but in Django I can find all the installed third-party packages in
<code>settings.py</code>:</p>
<pre><code>INSTALLED_APPS=[
'third_party_app,'
]
</code></pre>
<p>Is there any module or any way in Flask that lets me easily find all the installed third-party packages?</p>
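<p>For context, Flask has no <code>INSTALLED_APPS</code> equivalent; extensions are just installed Python packages, so the standard library can list them. A sketch:</p>

```python
from importlib.metadata import distributions

# Every distribution installed in the current environment: the same set
# that "pip list" shows. Flask extensions appear here like any package.
installed = sorted({dist.metadata["Name"] for dist in distributions()
                    if dist.metadata["Name"]})
print(installed[:10])
```

<p>Within a project, <code>requirements.txt</code> (or the dependency list in <code>pyproject.toml</code>) is usually the closest analogue to Django's explicit list.</p>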
|
<python><flask>
|
2023-02-22 17:34:52
| 1
| 4,094
|
William
|
75,535,994
| 10,682,289
|
How to import function from module that uses ArgParser without passing args
|
<p>Let's say I have two modules:</p>
<ul>
<li><p>a.py:</p>
<pre><code>import argparse

parser = argparse.ArgumentParser()
parser.add_argument("arg", help="Some argument")
args = parser.parse_args()

def func():
    print('Hello world!')
</code></pre>
</li>
<li><p>b.py:</p>
<pre><code>from a import func
func()
</code></pre>
</li>
</ul>
<p>When I execute <code>python3.8 '/home/b.py'</code>, I get:</p>
<pre><code>usage: b.py [-h] arg
b.py: error: the following arguments are required: arg
</code></pre>
<p>...even though <code>func</code> doesn't need to use system arguments to be executed</p>
<p>Is there any way I can import and execute <code>func</code> without passing system arguments to <code>b.py</code>?</p>
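<p>A common pattern is to move the parsing behind an <code>if __name__ == "__main__"</code> guard (or into a <code>main()</code> function), so that importing the module never touches <code>sys.argv</code>. A sketch of a restructured <code>a.py</code> (the <code>nargs="?"</code> relaxation is only here so the sketch also runs with no argument):</p>

```python
import argparse

def func():
    # Library code: needs no command-line arguments.
    return "Hello world!"

def main(argv=None):
    # parse_args only runs when main() is called explicitly,
    # never as a side effect of `from a import func`.
    parser = argparse.ArgumentParser()
    # nargs="?" is only for this sketch; a real script may keep the arg required
    parser.add_argument("arg", nargs="?", default="(none)", help="Some argument")
    args = parser.parse_args(argv)
    return args.arg

if __name__ == "__main__":
    main()
```

<p>With this layout, <code>b.py</code> can do <code>from a import func; func()</code> without triggering the parser.</p>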
|
<python><argparse>
|
2023-02-22 17:06:23
| 1
| 4,891
|
JaSON
|
75,535,965
| 13,142,245
|
Optimal shape for Multiprocessing Pool.map in Python
|
<p>Say you have an MxN matrix (a nested list) and you want to parallelize operations over it; your choices are to parallelize by row or by column. Suppose the data/operations are independent and require only the value of matrix[i][j].</p>
<p>Depending on the sizes of M and N, what is the best way to distribute the work?</p>
<p>My thinking is that the overhead cost of spinning up a process is nontrivial, so you should distribute based on min(M,N).</p>
<p>E.g. if there are 1000 rows and 30 columns, it's better to distribute based on columns (fewer processes to spin up). Conversely, if there are 30 rows and 1000 columns, distribute by rows.</p>
<p>Is this thinking sound?</p>
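<p>One caveat to the reasoning: <code>Pool</code> spins up its worker processes once, regardless of orientation, so the larger cost is usually the number of task messages crossing the process boundary, which <code>chunksize</code> controls directly. A sketch that sidesteps the row-vs-column choice entirely (function names are illustrative):</p>

```python
from multiprocessing import Pool

def work(value):
    # independent per-element operation on matrix[i][j]
    return value * value

def process_matrix(matrix, processes=2):
    # Flattening makes the split independent of whether M or N is
    # larger; chunksize batches many elements per task message so
    # inter-process overhead stays low either way.
    flat = [v for row in matrix for v in row]
    chunk = max(1, len(flat) // (processes * 4))
    with Pool(processes) as pool:
        results = pool.map(work, flat, chunksize=chunk)
    n = len(matrix[0])
    return [results[i:i + n] for i in range(0, len(results), n)]
```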
|
<python><multiprocessing>
|
2023-02-22 17:04:26
| 1
| 1,238
|
jbuddy_13
|
75,535,917
| 1,084,684
|
How to add a table to my SQLAlchemy query's FROM's?
|
<p>I'm attempting to convert a working, large, complex SQL query to SQLAlchemy's ORM.</p>
<p>Here's a small example program that demonstrates the problem I'm seeing:</p>
<pre><code>#!/usr/bin/env python3
"""
An SSCCE.
Environment variables:
DBU Your database user
DBP Your database password
DBH Your database host
IDB Your initial database
"""
import os
import pprint
from sqlalchemy import create_engine, select
from sqlalchemy.orm import aliased, sessionmaker, declarative_base
from sqlalchemy.sql.expression import func
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
Base = declarative_base()
class NV(Base):
__tablename__ = "tb_nv"
__bind_key__ = "testdb"
__table_args__ = (
{
"mysql_engine": "InnoDB",
"mysql_charset": "utf8",
"mysql_collate": "utf8_general_ci",
},
)
id = db.Column("id", db.Integer, primary_key=True, autoincrement=True)
builds = db.relationship("Bld", primaryjoin="(NV.id == Bld.variant_id)")
class Vers(Base):
__tablename__ = "tb_vers"
__bind_key__ = "testdb"
__table_args__ = (
{
"mysql_engine": "InnoDB",
"mysql_charset": "utf8",
"mysql_collate": "utf8_general_ci",
},
)
id = db.Column("id", db.Integer, primary_key=True, autoincrement=True)
class St(Base):
__tablename__ = "tb_brst"
__bind_key__ = "testdb"
__table_args__ = ({"mysql_engine": "InnoDB", "mysql_charset": "utf8"},)
id = db.Column("id", db.Integer, primary_key=True, autoincrement=True)
version_id = db.Column(
"version_id",
db.Integer,
db.ForeignKey(
"tb_vers.id",
name="fk_tb_brst_version_id",
onupdate="CASCADE",
ondelete="RESTRICT",
),
nullable=False,
)
branch_id = db.Column(
"branch_id",
db.Integer,
db.ForeignKey(
"tb_br.id",
name="fk_tb_brst_branch_id",
onupdate="CASCADE",
ondelete="RESTRICT",
),
nullable=False,
)
build_id = db.Column(
"build_id",
db.Integer,
db.ForeignKey(
"tb_bld.id",
name="fk_tb_brst_build_id",
onupdate="CASCADE",
ondelete="RESTRICT",
),
nullable=False,
)
version = db.relationship(
"Vers", innerjoin=True, primaryjoin="(St.version_id == Vers.id)"
)
branch = db.relationship(
"Br", innerjoin=True, primaryjoin="(St.branch_id == Br.id)"
)
build = db.relationship(
"Bld", innerjoin=True, primaryjoin="(St.build_id == Bld.id)"
)
class Br(Base):
__tablename__ = "tb_br"
__bind_key__ = "testdb"
__table_args__ = (
{
"mysql_engine": "InnoDB",
"mysql_charset": "utf8",
"mysql_collate": "utf8_general_ci",
},
)
id = db.Column("id", db.Integer, primary_key=True, autoincrement=True)
name = db.Column("name", db.String(45), nullable=False)
class Bld(Base):
__tablename__ = "tb_bld"
__bind_key__ = "testdb"
__table_args__ = (
{
"mysql_engine": "InnoDB",
"mysql_charset": "utf8",
"mysql_collate": "utf8_general_ci",
},
)
id = db.Column("id", db.Integer, primary_key=True, autoincrement=True)
name = db.Column("name", db.String(100), nullable=False)
variant_id = db.Column(
"variant_id",
db.Integer,
db.ForeignKey(
"tb_nv.id",
name="fk_tb_bld_variant_id",
onupdate="CASCADE",
ondelete="RESTRICT",
),
nullable=False,
)
variant = db.relationship("NV")
def display(values):
"""Display values in a decent way."""
pprint.pprint(values)
def connect():
"""
Connect to Staging for testing.
This is based on https://medium.com/analytics-vidhya/translating-sql-queries-to-sqlalchemy-orm-a8603085762b
...and ./game-publishing/services/api/deploy/celery/config/staging-base.j2
"""
conn_str = "mysql://{}:{}@{}/{}".format(
os.environ["DBU"],
os.environ["DBP"],
os.environ["DBH"],
os.environ["IDB"],
)
engine = create_engine(conn_str)
session = sessionmaker(bind=engine)
sess = session()
return (engine, sess)
def main():
"""A minimal query that exhibits the problem."""
(engine, session) = connect()
Base.metadata.create_all(engine)
v = aliased(Vers, name="v")
v_2 = aliased(Vers, name="v_2")
nv_4 = aliased(NV, name="nv_4")
bs = aliased(St, name="bs")
bs_2 = aliased(St, name="bs_2")
bs_3 = aliased(St, name="bs_3")
br = aliased(Br, name="br")
q1 = select(nv_4.id, func.min(bs_3.build_id)).select_from(bs, v)
q2 = q1.join(v_2, onclause=(bs.version_id == v_2.id))
q3 = q2.join(bs_2, onclause=(br.id == bs_2.branch_id))
result = session.execute(q3)
display(result.scalars().all())
main()
</code></pre>
<p>The exception I'm getting (in case you don't see the same result), is:</p>
<pre><code>sqlalchemy.exc.InvalidRequestError: Can't determine which FROM clause to join from, there are multiple FROMS which can join to this entity. Please use the .select_from() method to establish an explicit left side, as well as providing an explicit ON clause if not present already to help resolve the ambiguity.
</code></pre>
<p>I'm using:</p>
<pre><code>$ python3 -m pip list -v | grep -i sqlalchemy
Flask-SQLAlchemy 2.5.1 /data/home/dstromberg/.local/lib/python3.10/site-packages pip
SQLAlchemy 1.4.36 /data/home/dstromberg/.local/lib/python3.10/site-packages pip
$ python3 -m pip list -v | grep -i mysql
mysqlclient 2.1.1 /data/home/dstromberg/.local/lib/python3.10/site-packages pip
PyMySQL 0.8.0 /data/home/dstromberg/.local/lib/python3.10/site-packages pip
bash-4.2# mysql --version
mysql Ver 14.14 Distrib 5.7.41, for Linux (x86_64) using EditLine wrapper
</code></pre>
<p>I've googled for hours, but I don't seem to be getting anywhere. I found a few solutions to similar problems, but they didn't look similar enough to be useful.</p>
<p>Any suggestions?</p>
<p>Thanks!</p>
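<p>One observation: <code>select_from(bs, v)</code> lists two candidate left sides at once, which is exactly the ambiguity the error complains about. A stripped-down sketch of pinning a single explicit left side with <code>select_from()</code> before joining (only two of the models, and no database needed since the statement is merely compiled to SQL):</p>

```python
from sqlalchemy import Column, Integer, select
from sqlalchemy.orm import aliased, declarative_base

Base = declarative_base()

class Vers(Base):
    __tablename__ = "tb_vers"
    id = Column(Integer, primary_key=True)

class St(Base):
    __tablename__ = "tb_brst"
    id = Column(Integer, primary_key=True)
    version_id = Column(Integer)

bs = aliased(St, name="bs")
v = aliased(Vers, name="v")

# select_from() establishes one explicit left side, so there is only
# one candidate FROM for the join to attach to
stmt = select(v.id).select_from(bs).join(v, bs.version_id == v.id)
sql = str(stmt)
print(sql)
```

<p>In the full query, each subsequent <code>.join()</code> then needs its ON clause to connect back to something already in the FROM chain (e.g. the <code>br</code> alias must be joined in before it can appear in an <code>onclause</code>).</p>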
|
<python><mysql><sqlalchemy>
|
2023-02-22 17:00:54
| 1
| 7,243
|
dstromberg
|
75,535,868
| 5,574,107
|
Title on graphs
|
<p>I am making plots in a loop:</p>
<pre><code>plotData.sort_values(by=['segment'])
for date in plotData.month_of_default.unique():
plt.figure()
temp =plotData[plotData.month_of_default==date][['New_Amount_2','ID','segment','total','payment','month']]
denom = temp.drop_duplicates(subset=['ID']).groupby('segment')['total'].sum()
test = temp.groupby(['segment','month']).New_Amount_2.sum().groupby(level=0).cumsum()/denom
plt.plot(test.unstack().T)
</code></pre>
<p>I've tried putting <code>title=''</code> in the <code>plt.plot()</code> call and in <code>plt.figure()</code>, and also adding <code>fig.subtitle('Title', fontsize=16)</code>, and none of these worked. What's the right syntax to do this? Thanks! :)</p>
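<p>Neither <code>plt.plot()</code> nor <code>plt.figure()</code> accepts a <code>title</code> argument; the per-axes call is <code>plt.title()</code> (equivalently <code>ax.set_title()</code>), and the figure-level call is spelled <code>fig.suptitle()</code>, not <code>subtitle</code>. A minimal sketch of one loop iteration (the <code>date</code> value and plotted data are placeholders):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

date = "2020-01"  # placeholder for a plotData.month_of_default value
plt.figure()
plt.plot([0.1, 0.4, 0.9])
plt.title(f"Cumulative share, month of default: {date}")
title_text = plt.gca().get_title()
```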
|
<python><pandas><matplotlib>
|
2023-02-22 16:57:11
| 1
| 453
|
user13948
|
75,535,764
| 10,094,736
|
How do I transform my dataframe in python so that it's a different shape?
|
<p><strong>I have a dataframe in python which is of the format:</strong></p>
<p><a href="https://i.sstatic.net/7Kpmk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7Kpmk.png" alt="enter image description here" /></a></p>
<p><strong>I would like to transform my dataframe so that it looks like the image below instead:</strong></p>
<p><a href="https://i.sstatic.net/P0jmY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P0jmY.png" alt="enter image description here" /></a></p>
<p><strong>Any guidance would be helpful. Thanks</strong></p>
|
<python><dataframe><transform><transformation>
|
2023-02-22 16:45:41
| 1
| 405
|
Jed
|
75,535,679
| 1,503,669
|
Implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW
|
<p>How do I fix this deprecated AdamW optimizer?</p>
<p>I'm using a BERT model to perform sentiment analysis on hotel reviews. When I run this piece of code, it prints the following warning. I am still studying transformers and I don't want the code to become deprecated soon. I searched the web but haven't found a solution yet.</p>
<p>I found this piece of information, but I don't know how to apply it to my code.</p>
<blockquote>
<p>To switch optimizer, put optim="adamw_torch" in your TrainingArguments
(the default is "adamw_hf")</p>
</blockquote>
<p>could anyone kindly help with this?</p>
<pre><code>from transformers import BertTokenizer, BertForSequenceClassification
import torch_optimizer as optim
from torch.utils.data import DataLoader
from transformers import AdamW
import pandas as pd
import torch
import random
import numpy as np
import torch.nn as nn
from torch.nn import CrossEntropyLoss
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix, roc_auc_score, classification_report
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from tqdm.notebook import tqdm
import json
from collections import OrderedDict
import logging
from torch.utils.tensorboard import SummaryWriter
</code></pre>
<p>skip some code...</p>
<pre><code>param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [{
'params':
[p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate':
0.01
}, {
'params':
[p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate':
0.0
}]
#
optimizer = AdamW(optimizer_grouped_parameters, lr=1e-5) ##deprecated
#optimizer = optim.AdamW(optimizer_grouped_parameters, lr=1e-5) ##torch.optim.AdamW (not working)
step = 0
best_acc = 0
epoch = 10
writer = SummaryWriter(log_dir='model_best')
for epoch in tqdm(range(epoch)):
for idx, batch in tqdm(enumerate(train_loader),
total=len(train_texts) // batch_size,
leave=False):
optimizer.zero_grad()
input_ids = batch['input_ids'].to(device)
attention_mask = batch['attention_mask'].to(device)
labels = batch['labels'].to(device)
outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
loss = outputs[0] # Calculate Loss
logging.info(
f'Epoch-{epoch}, Step-{step}, Loss: {loss.cpu().detach().numpy()}')
step += 1
loss.backward()
optimizer.step()
writer.add_scalar('train_loss', loss.item(), step)
logging.info(f'Epoch {epoch}, present best acc: {best_acc}, start evaluating.')
accuracy, precision, recall, f1 = eval_model(model, eval_loader) # Evaluate Model
writer.add_scalar('dev_accuracy', accuracy, step)
writer.add_scalar('dev_precision', precision, step)
writer.add_scalar('dev_recall', recall, step)
writer.add_scalar('dev_f1', f1, step)
if accuracy > best_acc:
model.save_pretrained('model_best') # Save Model
tokenizer.save_pretrained('model_best')
best_acc = accuracy
</code></pre>
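<p>The drop-in replacement for the deprecated <code>transformers.AdamW</code> is <code>torch.optim.AdamW</code>. One detail worth noting: PyTorch's optimizer reads the key <code>weight_decay</code> in parameter groups, so the <code>weight_decay_rate</code> keys above would effectively be ignored and PyTorch's default decay applied instead. A minimal sketch with a toy model standing in for the BERT model:</p>

```python
import torch
from torch.optim import AdamW  # PyTorch's own implementation

model = torch.nn.Linear(4, 2)  # toy stand-in for the BERT model
no_decay = ["bias"]
grouped = [
    {
        "params": [p for n, p in model.named_parameters()
                   if not any(nd in n for nd in no_decay)],
        # torch.optim.AdamW expects "weight_decay", not "weight_decay_rate"
        "weight_decay": 0.01,
    },
    {
        "params": [p for n, p in model.named_parameters()
                   if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]
optimizer = AdamW(grouped, lr=1e-5)
```

<p>The training loop itself is unchanged; only the import and the param-group key differ.</p>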
|
<python><pytorch><huggingface-transformers><sentiment-analysis>
|
2023-02-22 16:37:58
| 1
| 513
|
Panco
|
75,535,529
| 12,981,397
|
How to split a column with json string into their own columns
|
<p>I have a dataframe like (with one example row):</p>
<pre><code>raw_data = [{'id': 1, 'name': 'FRANK', 'attributes': '{"deleted": false, "rejected": true, "handled": true, "order": "37"}'}]
raw_df = pd.DataFrame(raw_data)
</code></pre>
<p>I would like to break the json in the attributes column into their own columns with each of their values so that the resulting dataframe looks like:</p>
<pre><code>new_data = [{'id': 1, 'name': 'FRANK', 'deleted': 'false', 'rejected': 'true', 'handled': 'true', 'order': 37}]
new_df = pd.DataFrame(new_data)
</code></pre>
<p>Is there a way I can break up the json to achieve this? Thanks!</p>
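<p>One sketch: parse each JSON string with <code>json.loads</code>, expand the dicts with <code>pd.json_normalize</code>, and join the result back on the index (data taken from the question):</p>

```python
import json
import pandas as pd

raw_data = [{'id': 1, 'name': 'FRANK',
             'attributes': '{"deleted": false, "rejected": true, '
                           '"handled": true, "order": "37"}'}]
raw_df = pd.DataFrame(raw_data)

# Parse each JSON string into a dict, expand to columns, join back
attrs = pd.json_normalize(raw_df['attributes'].map(json.loads).tolist())
new_df = raw_df.drop(columns='attributes').join(attrs)
print(new_df)
```

<p>Note the booleans come back as real <code>bool</code> values rather than the strings shown in the desired output; that is usually what you want downstream.</p>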
|
<python><json><pandas><json-normalize>
|
2023-02-22 16:24:30
| 1
| 333
|
Angie
|
75,535,513
| 10,739,252
|
Multiprocessing.Queue hangs process when it is large enough
|
<p>Today, I've stumbled on some frustrating behavior of <code>multiprocessing.Queue</code>s.</p>
<p>This is my code:</p>
<pre><code>import multiprocessing
def make_queue(size):
ret = multiprocessing.Queue()
for i in range(size):
ret.put(i)
return ret
test_queue = make_queue(3575)
print(test_queue.qsize())
</code></pre>
<p>When I run this code, the process exits normally with exit code 0.</p>
<p>However, when I increase the queue size to 3576 or above, it hangs. When I send SIGINT to it through Ctrl-C, it raises the error here:</p>
<pre><code>Exception ignored in atexit callback: <function _exit_function at 0x7f91104f9360>
Traceback (most recent call last):
File "/home/captaintrojan/.conda/envs/example_env/lib/python3.10/multiprocessing/util.py", line 360, in _exit_function
_run_finalizers()
File "/home/captaintrojan/.conda/envs/example_env/lib/python3.10/multiprocessing/util.py", line 300, in _run_finalizers
finalizer()
File "/home/captaintrojan/.conda/envs/example_env/lib/python3.10/multiprocessing/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/home/captaintrojan/.conda/envs/example_env/lib/python3.10/multiprocessing/queues.py", line 199, in _finalize_join
thread.join()
File "/home/captaintrojan/.conda/envs/example_env/lib/python3.10/threading.py", line 1096, in join
self._wait_for_tstate_lock()
File "/home/captaintrojan/.conda/envs/example_env/lib/python3.10/threading.py", line 1116, in _wait_for_tstate_lock
if lock.acquire(block, timeout):
KeyboardInterrupt:
</code></pre>
<p>Can anyone please explain this behavior? I've experimented with the sizes, indeed, from a sample of 40 or so different sizes, any size below or equal to 3575 works fine and any size above 3575 hangs the process. I figured it may have something to do with the queue size in bytes, because if I insert <code>i*i</code> or some random strings instead of <code>i</code>, the threshold changes. Note that, unless <code>multiprocessing.Queue</code> does something suspicious in the background, I don't create any additional processes other than the main process. Also, adding <code>test_queue.close()</code> has no impact on the outcome.</p>
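<p>This matches the documented behavior of <code>multiprocessing.Queue</code>: <code>put()</code> only hands items to a background feeder thread, which writes them into an OS pipe. Once the pipe buffer fills (a few thousand small ints on the asker's system, hence the byte-dependent threshold), the feeder blocks, and at exit the interpreter hangs joining that thread. A sketch of avoiding the hang by draining the queue before exit:</p>

```python
import multiprocessing

def make_queue(size):
    q = multiprocessing.Queue()
    for i in range(size):
        q.put(i)
    return q

SIZE = 5000  # well above the observed hang threshold
test_queue = make_queue(SIZE)

# Consuming the items lets the feeder thread flush the pipe, so the
# atexit join on that thread completes and the process exits cleanly.
drained = [test_queue.get() for _ in range(SIZE)]
test_queue.close()
test_queue.join_thread()
```

<p>Alternatively, <code>test_queue.cancel_join_thread()</code> lets the process exit without flushing, at the cost of possibly losing queued data.</p>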
|
<python><multiprocessing><queue>
|
2023-02-22 16:23:22
| 1
| 2,942
|
Captain Trojan
|
75,535,305
| 1,216,584
|
How do I connect to a minio pod using the python API?
|
<p>I set up a microk8s deployment with the Minio service activated. I can connect to the Minio dashboard with a browser but cannot find a way to connect to the service via the API.</p>
<p>Here is the output to the <code>microk8s kubectl get all --all-namespaces</code> command</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
minio-operator pod/minio-operator-67dcf6dd7c-vxccn 0/1 Pending 0 7d22h
kube-system pod/calico-node-bpd4r 1/1 Running 4 (26m ago) 8d
kube-system pod/dashboard-metrics-scraper-7bc864c59-t7k87 1/1 Running 4 (26m ago) 8d
kube-system pod/hostpath-provisioner-69cd9ff5b8-x664l 1/1 Running 4 (26m ago) 7d22h
kube-system pod/kubernetes-dashboard-dc96f9fc-4759w 1/1 Running 4 (26m ago) 8d
minio-operator pod/console-66c4b79fbd-mw5s8 1/1 Running 3 (26m ago) 7d22h
kube-system pod/calico-kube-controllers-79568db7f8-vg4q2 1/1 Running 4 (26m ago) 8d
kube-system pod/coredns-6f5f9b5d74-fz7v8 1/1 Running 4 (26m ago) 8d
kube-system pod/metrics-server-6f754f88d-r7lsj 1/1 Running 4 (26m ago) 8d
minio-operator pod/minio-operator-67dcf6dd7c-8dnlq 1/1 Running 9 (25m ago) 7d22h
minio-operator pod/microk8s-ss-0-0 1/1 Running 9 (25m ago) 7d22h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 11d
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 8d
kube-system service/metrics-server ClusterIP 10.152.183.43 <none> 443/TCP 8d
kube-system service/kubernetes-dashboard ClusterIP 10.152.183.232 <none> 443/TCP 8d
kube-system service/dashboard-metrics-scraper ClusterIP 10.152.183.226 <none> 8000/TCP 8d
minio-operator service/operator ClusterIP 10.152.183.48 <none> 4222/TCP,4221/TCP 7d22h
minio-operator service/console ClusterIP 10.152.183.193 <none> 9090/TCP,9443/TCP 7d22h
minio-operator service/minio ClusterIP 10.152.183.195 <none> 80/TCP 7d22h
minio-operator service/microk8s-console ClusterIP 10.152.183.192 <none> 9090/TCP 7d22h
minio-operator service/microk8s-hl ClusterIP None <none> 9000/TCP 7d22h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 8d
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 1/1 1 1 8d
kube-system deployment.apps/dashboard-metrics-scraper 1/1 1 1 8d
kube-system deployment.apps/kubernetes-dashboard 1/1 1 1 8d
minio-operator deployment.apps/console 1/1 1 1 7d22h
kube-system deployment.apps/hostpath-provisioner 1/1 1 1 7d22h
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 8d
kube-system deployment.apps/metrics-server 1/1 1 1 8d
minio-operator deployment.apps/minio-operator 1/2 2 1 7d22h
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-6f5f9b5d74 1 1 1 8d
kube-system replicaset.apps/dashboard-metrics-scraper-7bc864c59 1 1 1 8d
kube-system replicaset.apps/kubernetes-dashboard-dc96f9fc 1 1 1 8d
minio-operator replicaset.apps/console-66c4b79fbd 1 1 1 7d22h
kube-system replicaset.apps/hostpath-provisioner-69cd9ff5b8 1 1 1 7d22h
kube-system replicaset.apps/calico-kube-controllers-79568db7f8 1 1 1 8d
kube-system replicaset.apps/metrics-server-6f754f88d 1 1 1 8d
minio-operator replicaset.apps/minio-operator-67dcf6dd7c 2 2 1 7d22h
NAMESPACE NAME READY AGE
minio-operator statefulset.apps/microk8s-ss-0 1/1 7d22h
</code></pre>
<p>I've tried the following commands to connect to the pod via the Python API, but keep getting errors:</p>
<pre><code>client = Minio("microk8s-ss-0-0", secure=False)
try:
objects = client.list_objects("bucket-1",prefix='/',recursive=True)
for obj in objects:
print (obj.bucket_name)
except InvalidResponseError as err:
print (err)
</code></pre>
<p>And received the following error:</p>
<pre><code>MaxRetryError: HTTPConnectionPool(host='microk8s-ss-0-0', port=80): Max retries exceeded with url: /bucket-1?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=%2F (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f29041e1e40>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))
</code></pre>
<p>I also tried:
<code>client = Minio("10.152.183.195", secure=False)</code></p>
<p>And got the same result. How do I access the Minio pod from the API?</p>
|
<python><kubernetes><minio>
|
2023-02-22 16:05:34
| 1
| 401
|
Mark C
|
75,535,291
| 6,223,328
|
pandas interpolate on new time base
|
<p>I'm looking for an easy way to put two dataframes with a physical-seconds index onto a new timebase using interpolation, e.g. with the example data:</p>
<pre class="lang-py prettyprint-override"><code>data1 = pd.DataFrame({'time': np.arange(2, 300, 0.2),
'data': np.random.randint(0, 2, len(np.arange(2, 300, 0.2)))
}).set_index('time')
data2 = pd.DataFrame({'time': np.arange(10, 260, 0.2),
'data': np.random.randint(0, 2, len(np.arange(10, 260, 0.2)))
}).set_index('time')
new_timebase = np.linspace(10, 250, 2400)
</code></pre>
<p>I want both dataframes <code>data1</code> and <code>data2</code> to have new_timebase (as index) with values interpolated from the original frames. In this case, since the data is <code>int</code>, a <em>nearest</em> interpolation would be preferable; otherwise a subsequent call to <code>round()</code> would also be an option.</p>
<p>The final goal is to have a meaningful way of taking the <em>binary AND</em> of the status via <code>data1 & data2</code>.</p>
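<p>One sketch using <code>reindex(..., method='nearest')</code>, which snaps each new time point to the closest existing sample so the integer status values stay exact 0/1 (random data as in the question, seeded for reproducibility):</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def make_frame(start, stop):
    t = np.arange(start, stop, 0.2)
    return pd.DataFrame(
        {'data': rng.integers(0, 2, len(t))},
        index=pd.Index(t, name='time'),
    )

data1 = make_frame(2, 300)
data2 = make_frame(10, 260)
new_timebase = np.linspace(10, 250, 2400)

# method='nearest' picks the closest original sample for each new
# time point, so no rounding of interpolated values is needed
d1 = data1.reindex(new_timebase, method='nearest')
d2 = data2.reindex(new_timebase, method='nearest')

combined = d1['data'] & d2['data']
```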
|
<python><pandas><interpolation>
|
2023-02-22 16:04:24
| 0
| 357
|
Max
|
75,535,092
| 1,187,968
|
Python patch/mock under __name__ == "__main___"
|
<p>I have the following file named <code>w.py</code>, and under <code>__name__ == "__main__"</code>, I want to patch <code>MyAPI</code>; however, running <code>python w.py</code> shows that the patching is not working at all. Is <code>patch('common.apis.MyAPI', mock_api)</code> being used correctly?</p>
<pre><code>from common.apis import MyAPI
class Worker:
def __init__(self):
companies = MyAPI.all()
raise Exception(companies)
if __name__ == "__main__":
from mock import patch, Mock
mock_api = Mock()
mock_api.all.return_value = {'dummy': 'dummy'}
with patch('common.apis.MyAPI', mock_api):
Worker()
</code></pre>
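<p>A common cause: <code>from common.apis import MyAPI</code> binds the name into <code>w.py</code>'s own namespace, so patching <code>common.apis.MyAPI</code> leaves that binding untouched. The usual fix is to patch the name where it is used (<code>'__main__.MyAPI'</code> when run as a script, or <code>'w.MyAPI'</code> when imported). A self-contained sketch with a fabricated <code>common.apis</code> module standing in for the real one:</p>

```python
import sys
import types
from unittest.mock import Mock, patch

# Fabricate `common.apis` so this sketch runs without the real package.
common = types.ModuleType("common")
apis = types.ModuleType("common.apis")
apis.MyAPI = Mock()
apis.MyAPI.all.return_value = {"real": "data"}
common.apis = apis
sys.modules["common"] = common
sys.modules["common.apis"] = apis

from common.apis import MyAPI  # what w.py does at import time

class Worker:
    def __init__(self):
        # MyAPI is the binding in *this* module, so the patch must
        # target this module, not common.apis
        self.companies = MyAPI.all()

mock_api = Mock()
mock_api.all.return_value = {"dummy": "dummy"}
with patch(f"{__name__}.MyAPI", mock_api):
    patched = Worker().companies

unpatched = Worker().companies
```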
|
<python><mocking><patch>
|
2023-02-22 15:47:05
| 1
| 8,146
|
user1187968
|
75,535,023
| 395,258
|
Load Pillow image as texture in pyglet
|
<h1>Problem</h1>
<p>I am trying to load a Pillow image as a Pyglet texture using the following code:</p>
<pre><code>pixels = pillow_image.convert("L").tobytes()
width, height = pillow_image.size
image = pyglet.image.ImageData(width, height, "L", pixels, pitch=width)
image = image.get_texture().get_transform(flip_y=True)
</code></pre>
<p>It gives me the following error message:</p>
<pre><code> image = image.get_texture().get_transform(flip_y=True)
File "/home/home_dir/.local/share/virtualenvs//app/-r0DkEahp/lib/python3.8/site-packages/pyglet/image/__init__.py", line 692, in get_texture
self._current_texture = self.create_texture(Texture, rectangle)
File "/home/home_dir/.local/share/virtualenvs//app/-r0DkEahp/lib/python3.8/site-packages/pyglet/image/__init__.py", line 681, in create_texture
texture = cls.create(self.width, self.height, GL_TEXTURE_2D, internalformat)
File "/home/home_dir/.local/share/virtualenvs//app/-r0DkEahp/lib/python3.8/site-packages/pyglet/image/__init__.py", line 1260, in create
glTexImage2D(target, 0,
File "/home/home_dir/.local/share/virtualenvs//app/-r0DkEahp/lib/python3.8/site-packages/pyglet/gl/lib.py", line 79, in errcheck
raise GLException(f'(0x{error}): {msg}')
pyglet.gl.lib.GLException: (0x1281): Invalid value. A numeric argument is out of range.
</code></pre>
<p>I tried resizing the image since I read that there is a size limit on textures, but no luck.</p>
<h1>Minimum reproducible example</h1>
<p>Consider the following image:</p>
<p><a href="https://i.sstatic.net/J5prX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J5prX.png" alt="Gravatar" /></a></p>
<p>Download this image and call it "gravatar.png". Then the following code reproduces the issue with the latest pyglet version:</p>
<pre><code>import pyglet
from PIL import Image
im = Image.open("/path/to/gravatar.png").convert("L")
width, height = im.size
image = pyglet.image.ImageData(width, height, "L", im.tobytes(), pitch=width)
image = image.get_texture().get_transform(flip_y=True)
</code></pre>
<h1>Insights so far</h1>
<p>I have realized that the code I've posted here works for 1.5.x versions of pyglet but it doesn't work for 2.x versions of pyglet. This explains why it worked before, since I originally wrote the code before the first 2.x release.</p>
<h1>Question</h1>
<p>Given the insight that it works for 1.5.x versions of pyglet, my question is now how to get it to work in 2.x versions of pyglet.</p>
|
<python><python-imaging-library><pyglet>
|
2023-02-22 15:41:02
| 1
| 10,673
|
C. E.
|
75,535,002
| 11,462,274
|
Keep original page opening without upgrading to local version
|
<p>Every morning I open this site to see the games of the day, but I like to see it in the international version (<code>https://int.</code>) because I don't like the Brazilian version (<code>https://br.</code>).</p>
<p>But whenever I open it through <code>WebDriver</code>, it doesn't know that I prefer it that way (in my normal browser the preference is already saved and I don't need to adjust it): it opens the <code>int</code> version but automatically redirects to the <code>br</code> version.</p>
<p>To work around this problem, the only way I could find was to ask the <code>WebDriver</code> to open the same page twice (that way it registers which version I want to use):</p>
<pre class="lang-python prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.firefox.service import Service
from os import path
def web_driver():
service = Service(log_path=path.devnull)
options = Options()
options.set_preference("general.useragent.override", my_user_agent)
options.page_load_strategy = 'eager'
driver = webdriver.Firefox(options=options,service=service)
return driver
driver = web_driver()
driver.get("https://int.soccerway.com/")
driver.get("https://int.soccerway.com/")
</code></pre>
<p>When the page takes too long to load, this causes a problem: I never know whether it has already reloaded or is about to, so movements or clicks I make on the page get reset back to the top once the second <code>driver.get</code> kicks in.</p>
<p>How can I open the page so that it always stays in the version I initially want?</p>
<p>I tried to work out which <code>cookies</code> I should pass, but I couldn't understand which to use, or whether that's the right approach.</p>
|
<python><selenium-webdriver>
|
2023-02-22 15:39:55
| 1
| 2,222
|
Digital Farmer
|
75,534,779
| 4,379,365
|
Pandas, how to compute the mean of a third column grouped by two columns and set result to a fourth new column?
|
<p>In a dataframe, how do I create a new column called z_mean, which is the mean of column z when grouping by x and y?</p>
<pre><code>data = [
{'x':0.0, 'y':0.0, 'z':0.8},
{'x':0.0, 'y':0.0, 'z':1.0},
{'x':0.0, 'y':0.0, 'z':1.2},
{'x':1.0, 'y':1.0, 'z':1.6},
{'x':1.0, 'y':1.0, 'z':2.0},
{'x':1.0, 'y':1.0, 'z':2.4},
]
df = pd.DataFrame(data)
</code></pre>
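<p>A minimal sketch with <code>groupby(...).transform('mean')</code>, which broadcasts each group's mean back onto the original rows:</p>

```python
import pandas as pd

data = [
    {'x': 0.0, 'y': 0.0, 'z': 0.8},
    {'x': 0.0, 'y': 0.0, 'z': 1.0},
    {'x': 0.0, 'y': 0.0, 'z': 1.2},
    {'x': 1.0, 'y': 1.0, 'z': 1.6},
    {'x': 1.0, 'y': 1.0, 'z': 2.0},
    {'x': 1.0, 'y': 1.0, 'z': 2.4},
]
df = pd.DataFrame(data)

# transform keeps the original shape, unlike agg/mean which collapses groups
df['z_mean'] = df.groupby(['x', 'y'])['z'].transform('mean')
```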
|
<python><pandas><group-by>
|
2023-02-22 15:20:13
| 1
| 559
|
Xavier M
|
75,534,750
| 10,545,426
|
Split string values in a MultiIndex level to create a new level
|
<p>I want to split a level (of str values) in a MultiIndex on a delimiter to create a new level in the MultiIndex.</p>
<p>My original DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>pd.DataFrame(
np.ones((5, 4)),
columns=pd.MultiIndex.from_tuples(
zip(
list(reduce(lambda x, y: f"{x}_{y}", tup) for tup in zip(range(4), range(4))),
list(map(str, range(4))),
)
),
index=range(5),
)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">('0_0', '0')</th>
<th style="text-align: right;">('1_1', '1')</th>
<th style="text-align: right;">('2_2', '2')</th>
<th style="text-align: right;">('3_3', '3')</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
</tr>
</tbody>
</table>
</div>
<p>How can I efficiently split the 0th level on the '_' delimiter so that I get 3 levels in my MultiIndex columns? Like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">(0, 0, 0)</th>
<th style="text-align: right;">(1, 1, 1)</th>
<th style="text-align: right;">(2, 2, 2)</th>
<th style="text-align: right;">(3, 3, 3)</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
</tr>
</tbody>
</table>
</div>
<blockquote>
<p>I don't know how to show multiple header rows in a markdown table; the length of each tuple indicates the levels. This is the output of <code>pd.DataFrame.to_markdown</code>.</p>
</blockquote>
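<p>One sketch: iterate the existing column tuples, split the first element, and rebuild the index with <code>pd.MultiIndex.from_tuples</code> (the DataFrame below is an equivalent of the one in the question, built without the <code>reduce</code> detour):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.ones((5, 4)),
    columns=pd.MultiIndex.from_tuples(
        [(f"{i}_{i}", str(i)) for i in range(4)]
    ),
)

# Split the first level on '_' and append the remaining level(s),
# turning ('0_0', '0') into ('0', '0', '0')
df.columns = pd.MultiIndex.from_tuples(
    [tuple(first.split("_")) + tuple(rest) for first, *rest in df.columns]
)
```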
|
<python><pandas>
|
2023-02-22 15:17:36
| 1
| 399
|
Stalin Thomas
|
75,534,737
| 690,255
|
Jupyter Notebook - underscore.js doesn't seem to be accessible anymore
|
<p>I'm using a custom widget in Jupyter. After upgrading to a new machine it stopped functioning. Checking the javascript console in the browser window running the notebook I see the error <code>ReferenceError: _ is not defined</code>. Indeed, running the following in a Jupyter cell:</p>
<pre><code>%%js
alert(_)
</code></pre>
<p>I get the same error. Running the exact same command on my other machine works correctly (it shows the definition of <code>_</code> as in underscore.js).
The HTML source of the Jupyter Notebook still shows underscore.js as being specified in the <code>require.config</code>. Note that simple built-in widgets still function as expected (so it is not an issue with initializing the widget system).</p>
<p>I haven't found anything in the changelogs of ipywidgets or jupyter regarding changes to the use of underscore.js. I know that the widget api has changed recently in <code>ipywidgets8.0</code>, which is why I am still using version 7.7.3.</p>
<p>Does anyone know if this is an expected change of behaviour in how widgets work? Any other ideas of why underscore does not seem to be initialized properly?</p>
|
<javascript><python><jupyter-notebook><underscore.js><jupyter-widget>
|
2023-02-22 15:16:30
| 1
| 883
|
John
|
75,534,617
| 5,684,405
|
Poetry unable to install hydra
|
<p>Poetry cannot install Hydra, failing with the error below. How can I install Hydra with Poetry?</p>
<pre><code>$ poetry add hydra
Using version ^2.5 for hydra
Updating dependencies
Resolving dependencies... (6.3s)
Writing lock file
Package operations: 1 install, 0 updates, 0 removals
• Installing hydra (2.5): Failed
CalledProcessError
Command '['/Users/mc/Library/Caches/pypoetry/virtualenvs/showcase-project-lis5iaDt-py3.9/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/Users/mc/Library/Caches/pypoetry/virtualenvs/showcase-project-lis5iaDt-py3.9', '--no-deps', '/Users/mc/Library/Caches/pypoetry/artifacts/d8/9a/72/7404c4669ad6d023f10626f1f4ad5a0f0bb0fe11b6e4ec7fe398dff895/Hydra-2.5.tar.gz']' returned non-zero exit status 1.
at /opt/homebrew/Cellar/python@3.9/3.9.16/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py:528 in run
524│ # We don't call process.wait() as .__exit__ does that for us.
525│ raise
526│ retcode = process.poll()
527│ if check and retcode:
→ 528│ raise CalledProcessError(retcode, process.args,
529│ output=stdout, stderr=stderr)
530│ return CompletedProcess(process.args, retcode, stdout, stderr)
531│
532│
The following error occurred when trying to handle this error:
EnvCommandError
Command ['/Users/mc/Library/Caches/pypoetry/virtualenvs/showcase-project-lis5iaDt-py3.9/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/Users/mc/Library/Caches/pypoetry/virtualenvs/showcase-project-lis5iaDt-py3.9', '--no-deps', '/Users/mc/Library/Caches/pypoetry/artifacts/d8/9a/72/7404c4669ad6d023f10626f1f4ad5a0f0bb0fe11b6e4ec7fe398dff895/Hydra-2.5.tar.gz'] errored with the following return code 1, and output:
Processing /Users/mc/Library/Caches/pypoetry/artifacts/d8/9a/72/7404c4669ad6d023f10626f1f4ad5a0f0bb0fe11b6e4ec7fe398dff895/Hydra-2.5.tar.gz
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Building wheels for collected packages: Hydra
Building wheel for Hydra (pyproject.toml): started
Building wheel for Hydra (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
× Building wheel for Hydra (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [221 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-12-arm64-cpython-39
copying src/hydra.py -> build/lib.macosx-12-arm64-cpython-39
running build_ext
building '_hydra' extension
creating build/temp.macosx-12-arm64-cpython-39
creating build/temp.macosx-12-arm64-cpython-39/src
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk -I/private/var/folders/1y/w1f378ln3js1n9dhwbwshkm40000gn/T/pip-req-build-owvns8k8/src -I/Users/mc/Library/Caches/pypoetry/virtualenvs/showcase-project-lis5iaDt-py3.9/include -I/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c src/MurmurHash3.c -o build/temp.macosx-12-arm64-cpython-39/src/MurmurHash3.o -std=gnu99 -O2 -D_LARGEFILE64_SOURCE
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk -I/private/var/folders/1y/w1f378ln3js1n9dhwbwshkm40000gn/T/pip-req-build-owvns8k8/src -I/Users/mc/Library/Caches/pypoetry/virtualenvs/showcase-project-lis5iaDt-py3.9/include -I/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c src/_hydra.c -o build/temp.macosx-12-arm64-cpython-39/src/_hydra.o -std=gnu99 -O2 -D_LARGEFILE64_SOURCE
src/_hydra.c:3377:36: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'Py_ssize_t' (aka 'long') [-Wsign-compare]
__pyx_t_3 = ((__pyx_v_self->_idx < __pyx_t_2) != 0);
~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~
src/_hydra.c:7137:44: warning: taking the absolute value of unsigned type 'unsigned long' has no effect [-Wabsolute-value]
(__pyx_v__bucket_indexes[__pyx_v_i]) = llabs((__pyx_t_7 % __pyx_v_max));
^
src/_hydra.c:7137:44: note: remove the call to 'llabs' since unsigned values cannot be negative
(__pyx_v__bucket_indexes[__pyx_v_i]) = llabs((__pyx_t_7 % __pyx_v_max));
^~~~~
src/_hydra.c:8530:35: error: no member named 'tp_print' in 'struct _typeobject'
__pyx_type_6_hydra_MMapBitField.tp_print = 0;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
src/_hydra.c:8535:31: error: no member named 'tp_print' in 'struct _typeobject'
__pyx_type_6_hydra_MMapIter.tp_print = 0;
~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
src/_hydra.c:8539:40: error: no member named 'tp_print' in 'struct _typeobject'
__pyx_type_6_hydra_BloomCalculations.tp_print = 0;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
src/_hydra.c:8551:34: error: no member named 'tp_print' in 'struct _typeobject'
__pyx_type_6_hydra_BloomFilter.tp_print = 0;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
src/_hydra.c:9924:22: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
src/_hydra.c:9924:22: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
src/_hydra.c:9924:22: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
src/_hydra.c:9924:52: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
src/_hydra.c:9924:52: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
src/_hydra.c:9924:52: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
src/_hydra.c:9940:26: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
src/_hydra.c:9940:26: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
src/_hydra.c:9940:26: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
src/_hydra.c:9940:59: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
src/_hydra.c:9940:59: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
src/_hydra.c:9940:59: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
src/_hydra.c:11521:9: warning: 'PyCFunction_Call' is deprecated [-Wdeprecated-declarations]
return PyCFunction_Call(func, arg, kw);
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/methodobject.h:33:1: note: 'PyCFunction_Call' has been explicitly marked deprecated here
Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyCFunction_Call(PyObject *, PyObject *, PyObject *);
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
src/_hydra.c:11586:41: warning: 'PyCFunction_Call' is deprecated [-Wdeprecated-declarations]
__pyx_CyFunctionType_type.tp_call = PyCFunction_Call;
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/methodobject.h:33:1: note: 'PyCFunction_Call' has been explicitly marked deprecated here
Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyCFunction_Call(PyObject *, PyObject *, PyObject *);
^
/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
16 warnings and 4 errors generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for Hydra
Failed to build Hydra
ERROR: Could not build wheels for Hydra, which is required to install pyproject.toml-based projects
at ~/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/env.py:1540 in _run
1536│ output = subprocess.check_output(
1537│ command, stderr=subprocess.STDOUT, env=env, **kwargs
1538│ )
1539│ except CalledProcessError as e:
→ 1540│ raise EnvCommandError(e, input=input_)
1541│
1542│ return decode(output)
1543│
1544│ def execute(self, bin: str, *args: str, **kwargs: Any) -> int:
The following error occurred when trying to handle this error:
PoetryException
Failed to install /Users/mc/Library/Caches/pypoetry/artifacts/d8/9a/72/7404c4669ad6d023f10626f1f4ad5a0f0bb0fe11b6e4ec7fe398dff895/Hydra-2.5.tar.gz
at ~/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/pip.py:58 in pip_install
54│
55│ try:
56│ return environment.run_pip(*args)
57│ except EnvCommandError as e:
→ 58│ raise PoetryException(f"Failed to install {path.as_posix()}") from e
59│
</code></pre>
<p>Poetry config:</p>
<pre><code>$ poetry env info
Virtualenv
Python: 3.9.16
Implementation: CPython
Path: /Users/mc/Library/Caches/pypoetry/virtualenvs/showcase-project-lis5iaDt-py3.9
Executable: /Users/mc/Library/Caches/pypoetry/virtualenvs/showcase-project-lis5iaDt-py3.9/bin/python
Valid: True
System
Platform: darwin
OS: posix
Python: 3.9.16
Path: /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9
Executable: /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/bin/python3.9
</code></pre>
<p>MacBook M1</p>
|
<python><python-poetry><hydra>
|
2023-02-22 15:07:14
| 1
| 2,969
|
mCs
|
75,534,590
| 21,221,244
|
How to smooth adjacent polygons in Python?
|
<p>I'm looking for a way to smooth polygons such that adjacent/touching polygons remain touching. Individual polygons can be smoothed easily, e.g., with PAEK or Bezier interpolation (<a href="https://pro.arcgis.com/en/pro-app/latest/tool-reference/cartography/smooth-polygon.htm" rel="nofollow noreferrer">https://pro.arcgis.com/en/pro-app/latest/tool-reference/cartography/smooth-polygon.htm</a>), which naturally changes their boundary edge. But how to smooth all polygons such that touching polygons remain that way?</p>
<p>I'm looking for a Python solution ideally, so it can easily be automated. I found an equivalent question for ArcGIS (<a href="https://gis.stackexchange.com/questions/183718/how-to-smooth-adjacent-polygons">https://gis.stackexchange.com/questions/183718/how-to-smooth-adjacent-polygons</a>), where the top answer outlines a good strategy (converting polygon edges to polylines from junction to junction, smoothing these, and then reconstructing the polygons). Perhaps this would be the best strategy, but I'm not sure how to convert shared polygon boundaries to individual polylines in Python.</p>
<p>Here is some example code that shows what I'm trying to do for just 2 polygons (but I've created the 'smoothed' polygons by hand):</p>
<pre><code>import matplotlib.pyplot as plt
import geopandas as gpd
from shapely import geometry

x_min, x_max, y_min, y_max = 0, 20, 0, 20

## Create original (coarse) polygons:
staircase_points = [[(ii, ii), (ii, ii + 1)] for ii in range(x_max)]
staircase_points_flat = [coord for double_coord in staircase_points for coord in double_coord] + [(x_max, y_max)]
list_points = {1: staircase_points_flat + [(x_max, y_min)],
               2: staircase_points_flat[1:-1] + [(x_min, y_max)]}
pols_coarse = {}
for ind_pol in [1, 2]:
    list_points[ind_pol] = [geometry.Point(x) for x in list_points[ind_pol]]
    pols_coarse[ind_pol] = geometry.Polygon(list_points[ind_pol])
df_pols_coarse = gpd.GeoDataFrame({'geometry': pols_coarse.values(), 'id': pols_coarse.keys()})

## Create smooth polygons (manually):
pols_smooth = {1: geometry.Polygon([geometry.Point(x) for x in [(x_min, y_min), (x_max, y_min), (x_max, y_max)]]),
               2: geometry.Polygon([geometry.Point(x) for x in [(x_min, y_min), (x_min, y_max), (x_max, y_max)]])}
df_pols_smooth = gpd.GeoDataFrame({'geometry': pols_smooth.values(), 'id': pols_smooth.keys()})

## Plot
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
df_pols_coarse.plot(column='id', ax=ax[0])
df_pols_smooth.plot(column='id', ax=ax[1])
ax[0].set_title('Original polygons')
ax[1].set_title('Smoothed polygons');
</code></pre>
<p><a href="https://i.sstatic.net/00ZBI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/00ZBI.png" alt="Expected outcome smoothed touching polygons" /></a></p>
<p><strong>Update:</strong>
Using the suggestion from Mountain below and <a href="https://gis.stackexchange.com/questions/183718/how-to-smooth-adjacent-polygons">this post</a>, I think the problem could be broken down into the following steps:</p>
<ul>
<li>Find boundary edges between each pair of touching polygons (e.g., using <a href="https://gis.stackexchange.com/questions/95573/finding-the-common-borders-between-polygons-in-the-same-shapefile">this suggestion</a>).</li>
<li>Transform these into numpy arrays and smooth as per Mountain's bspline suggestion</li>
<li>Reconstruct polygons using updated/smoothed edges.</li>
</ul>
<p>Also note that single (<code>shapely.geometry</code>) polygons can be simplified with <code>pol.simplify()</code>, which <a href="https://shapely.readthedocs.io/en/stable/reference/shapely.Polygon.html" rel="nofollow noreferrer">uses the Douglas-Peucker algorithm.</a></p>
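<p>For the smoothing step in the list above, a minimal sketch (using Chaikin corner-cutting as a simple stand-in for the bspline idea; the staircase coordinates are just an illustration) that keeps both endpoints of a shared polyline fixed, so polygon junctions stay glued together:</p>

```python
import numpy as np

def chaikin(points, iterations=2):
    """Chaikin corner-cutting on an open polyline; the first and last
    points are kept fixed so shared junctions do not move."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        q = 0.75 * pts[:-1] + 0.25 * pts[1:]   # point 1/4 along each segment
        r = 0.25 * pts[:-1] + 0.75 * pts[1:]   # point 3/4 along each segment
        interleaved = np.empty((2 * len(q), pts.shape[1]))
        interleaved[0::2] = q
        interleaved[1::2] = r
        pts = np.vstack([pts[:1], interleaved, pts[-1:]])
    return pts

staircase = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2)]
smoothed = chaikin(staircase)
```

<p>After smoothing each shared polyline this way, the polygons can be rebuilt from their smoothed boundary pieces.</p>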
|
<python><gis><polygon><geopandas><shapely>
|
2023-02-22 15:04:48
| 3
| 495
|
Thijs
|
75,534,549
| 5,240,684
|
mock_open with variable read_data
|
<p>I tried looking through the similar questions, but couldn't find an answer that answers my question.</p>
<p>I have a function like so:</p>
<pre><code>from smart_open import open
import pandas as pd
def foo():
    with open('file1.txt', 'r') as f:
        df1 = pd.read_csv(f)
        value1 = do_something(df1)
    with open('file2.txt', 'r') as f:
        df2 = pd.read_csv(f)
        value2 = do_something_else(df2)
    return value1, value2
</code></pre>
<p>Now I want to test this, without having to read the actual files. So I am looking at using <code>mock_open</code> inside a patch to mock the return of open, so that I can ingest different data for the files.</p>
<p><strong>The question: How do I ensure that I get two different files depending on the filename provided to the open function?</strong></p>
<p>This is my current code:</p>
<pre><code>import pandas as pd
from unittest.mock import mock_open, patch

def read_sample_data(*args, **kwargs):
    # depending on filepath return different sample data
    df = None
    if args[0] == 'file1.txt':
        df = pd.DataFrame({'test': [1, 2]})
    elif args[0] == 'file2.txt':
        df = pd.DataFrame({'test': [3, 4]})
    return df

def forward_filename(*args, **kwargs):
    return mock_open(read_data=args[0])

@patch('__main__.pd.read_csv', side_effect=read_sample_data)
@patch("__main__.open", new_callable=forward_filename)
def test_foo(self, open, read_csv):
    value1, value2 = foo()
    # ...do some testing
<p>Just to clarify: the code fails with an error. I tried returning the filename directly as a <code>str</code> from the <code>with open()</code> context, but that fails because plain strings cannot be used as context managers.</p>
<p>I am guessing the solution is simple, so I am grateful for any tips :)</p>
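<p>One possible sketch (not necessarily the cleanest approach) is to give <code>patch</code> a <code>side_effect</code> that builds a separate <code>mock_open</code> handle per filename; the file contents below are made up for illustration:</p>

```python
from unittest.mock import mock_open, patch

# hypothetical fake file contents, keyed by filename
FAKE_FILES = {
    'file1.txt': 'test\n1\n2\n',
    'file2.txt': 'test\n3\n4\n',
}

def fake_open(path, *args, **kwargs):
    # build a fresh mock_open per call, choosing read_data by filename
    return mock_open(read_data=FAKE_FILES[path])()

with patch('builtins.open', side_effect=fake_open):
    with open('file1.txt') as f:
        contents = f.read()
# contents == 'test\n1\n2\n'
```

<p>In a real test the patch target would be the module where <code>open</code> was imported (e.g. <code>mymodule.open</code>, since <code>smart_open</code>'s <code>open</code> is imported into that namespace), not <code>builtins.open</code>.</p>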
|
<python><python-3.x><pandas><unit-testing><smart-open>
|
2023-02-22 15:01:35
| 1
| 1,057
|
Lukas Hestermeyer
|
75,534,410
| 2,300,049
|
Force a python package that uses requests to go through a proxy
|
<p>I am currently using the Python package <a href="https://dev.meteostat.net/" rel="nofollow noreferrer">meteostat</a>. It makes use of the requests package to download data off the meteostat servers. I am running this package on a work machine that must use an HTTP proxy.</p>
<p>When running their example code:</p>
<pre><code>from datetime import datetime
import matplotlib.pyplot as plt
from meteostat import Stations, Monthly
# Set time period
start = datetime(2000, 1, 1)
end = datetime(2018, 12, 31)
# Get Monthly data
data = Monthly('10637', start, end)
data = data.fetch()
# Plot line chart including average, minimum and maximum temperature
data.plot(y=['tavg', 'tmin', 'tmax'])
plt.show()
</code></pre>
<p>I get the following error at <code>data.fetch()</code>:</p>
<blockquote>
<p>URLError: <urlopen error [Errno 2] No such file or directory></p>
</blockquote>
<p>I believe this is due to the fact that I am not going through my work's proxy, like I have to when I install packages via pip.</p>
<p>I have tried adding the following to my code, but it did not resolve the issue:</p>
<pre><code>import requests
proxies = {
'http': 'WORK PROXY',
'https': 'WORK PROXY',
}
</code></pre>
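<p>Since the traceback shows a <code>URLError</code>, the library may be using <code>urllib</code> rather than <code>requests</code> internally, and a <code>proxies</code> dict only takes effect when passed explicitly to each <code>requests</code> call. A sketch of the environment-variable route, which both <code>requests</code> and <code>urllib</code> pick up automatically (the proxy URL is a placeholder):</p>

```python
import os

# Placeholder proxy URL -- substitute the real work proxy.
# Set these before any request is made; requests and urllib both
# read the standard proxy environment variables.
os.environ['HTTP_PROXY'] = 'http://proxy.example.com:8080'
os.environ['HTTPS_PROXY'] = 'http://proxy.example.com:8080'
```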
|
<python><python-requests><proxy><meteostat>
|
2023-02-22 14:50:39
| 0
| 321
|
entropy
|
75,534,368
| 1,889,750
|
Grouping and Reordering data in pandas dataframe
|
<p>I have some dataframe with a structure</p>
<pre><code>Candidate z0, 0deg z0, 30deg z0, 60deg z0, 90deg z0, 120deg z0, 150deg z0, 180deg z0, 210deg z0, 240deg
10006A 0.30 0.05 0.05 0.05 0.05 0.30 0.05 0.05 0.05
10008A 0.30 0.30 0.30 0.30 0.30 0.30 0.30 0.05 0.05
</code></pre>
<p>What I would like to do is to restructure the dataframe so it looks like this</p>
<pre><code>Candidate angle z0
10006A 0 0.30
10006A 30 0.05
10006A 60 0.05
10006A 90 0.05
10006A 120 0.05
10006A 150 0.30
10006A 180 0.05
10006A 210 0.05
10006A 240 0.05
10008A 0 0.30
...
</code></pre>
<p>I am completely blank on how to do this. The angle can be retrieved from the column name, but the columns are in any case ordered from 0 to 330 in steps of 30 degrees.</p>
<p>Can this be done in pandas?</p>
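<p>A minimal sketch of this kind of wide-to-long reshape with <code>pandas.melt</code>, shown on a toy two-column version of the data (column names and values abbreviated from the example above):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Candidate': ['10006A', '10008A'],
    'z0, 0deg': [0.30, 0.30],
    'z0, 30deg': [0.05, 0.30],
})

# melt the angle columns into rows, keeping Candidate as the id
long = df.melt(id_vars='Candidate', var_name='angle', value_name='z0')
# pull the numeric angle out of names like "z0, 30deg"
long['angle'] = long['angle'].str.extract(r'(\d+)deg', expand=False).astype(int)
long = long.sort_values(['Candidate', 'angle']).reset_index(drop=True)
```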
|
<python><pandas><dataframe>
|
2023-02-22 14:47:15
| 1
| 1,355
|
TomGeo
|
75,534,277
| 3,866,549
|
FastAPI grant_type refresh token
|
<p>I'm new to OAuth2 and am using FastAPI; everything worked great until now, but I'm stumped on how to detect the refresh grant_type. From the form, I'm getting grant_type as "password" or "refresh_token". The problem is that when I pass grant_type "refresh_token", the code doesn't even reach the conditional. Is it because OAuth2PasswordRequestForm doesn't allow it?</p>
<pre><code>from fastapi.security import OAuth2PasswordBearer, OAuth2PasswordRequestForm

@app.post("/token", response_model=Token)
async def login_for_access_token(form_data: OAuth2PasswordRequestForm = Depends()):
    user = authenticate_user(fake_users_db, form_data.username, form_data.password)
    # for login grant_type = password
    # for refreshToken grant_type = refresh_token
    grant_type_str = str(form_data.grant_type)
    print('167', str(form_data.grant_type))
    try:
        user.username
    except:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail=user['detail'],
            headers={"WWW-Authenticate": "Bearer"},
        )
    try:
        form_data.grant_type
    except:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="must specify grant type",
            headers={"WWW-Authenticate": "Bearer"},
        )
    if grant_type_str.startswith('password'):
        access_token_expires = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
        access_token = create_access_token(data={"sub": user.username}, expires_delta=access_token_expires)
        return {"access_token": access_token, "expires_in": ACCESS_TOKEN_EXPIRE_MINUTES,
                "token_type": "bearer", "scope": "read write groups", "grant_type": "password"}
    elif grant_type_str.startswith('refresh_token'):
        access_token_expires_refresh = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES_REFRESH)
        access_token = create_refresh_token(
            data={"sub": user.username}, expires_delta=access_token_expires_refresh
        )
        return {"access_token": access_token, "expires_in": ACCESS_TOKEN_EXPIRE_MINUTES_REFRESH,
                "token_type": "bearer", "scope": "read write groups", "grant_type": "refresh_token"}
</code></pre>
<p>Running the above, the code doesn't even get to my <code>elif "refresh_token"</code> branch. Instead I get <code>422 Unprocessable Entity</code>.</p>
|
<python><oauth-2.0><fastapi>
|
2023-02-22 14:40:31
| 1
| 2,507
|
jKraut
|
75,534,266
| 18,432,809
|
test an endpoint, where login is required with PYTEST
|
<p>I am trying to test my Flask API, but I am not able to, because the endpoints require login.</p>
<p>I have a simple code:</p>
<pre><code>import os
import requests
from dotenv import load_dotenv

load_dotenv()
ENDPOINT = os.getenv("ENDPOINT")

### - TESTING SALES ENDPOINT - ###
def test_get_all_sales():
    """Test the sales endpoint, where all sales are fetched"""
    response = requests.get(f"http://localhost:5000/sales")
    print(response)
    assert response.status_code == 200
</code></pre>
<p>When I run the tests with 'python -m pytest' (I'm using Windows; I think the command differs on Linux), I receive a Forbidden error, because the endpoint requires login.</p>
<p>I am using Flask sessions: my frontend sends a request to the auth/login endpoint, a session is stored in the backend, and a cookie is returned for the frontend to store. That is how my auth works. But how can I test these endpoints? Is there a way to log in, store a temporary cookie, and use these endpoints? Or do I need to set up something like an API key that, when passed, lets an endpoint skip the login requirement?</p>
<p>My endpoint is like:</p>
<pre><code>@sales.route('/')
@sales.route("/&lt;filters&gt;")
@login_required  # aborts and returns 403
def sales_list(filters=None):
    sales = Sale.get_all_sales()  # Sale is a model class with CRUD methods (simple DAO)
    if sales:
        return jsonify({'message': "sales fetched successfully.", 'data': sales}), 200
    return jsonify({'message': "sales can't be fetched.", 'data': None}), 400
</code></pre>
<p>By the way, I think this should be possible because Postman can set cookies and make requests to login_required endpoints, so I should be able to do the same manually in Python.</p>
|
<python><flask><pytest>
|
2023-02-22 14:38:52
| 1
| 389
|
vitxr
|
75,534,231
| 8,622,976
|
How can I connect to remote database using psycopg3
|
<p>I'm using Psycopg3 (not 2!) and I can't figure out how to connect to a remote Postgres server:</p>
<pre class="lang-py prettyprint-override"><code>psycopg.connect(connection_string)
</code></pre>
<p><a href="https://www.psycopg.org/psycopg3/docs/" rel="nofollow noreferrer">https://www.psycopg.org/psycopg3/docs/</a></p>
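<p>For reference, <code>psycopg.connect</code> accepts the standard libpq connection formats: either the keyword/value string or a <code>postgresql://</code> URI. A sketch building the keyword form (host, user, etc. are placeholders):</p>

```python
# Placeholder credentials -- substitute your own server details.
params = {
    'host': 'db.example.com',
    'port': 5432,
    'dbname': 'mydb',
    'user': 'alice',
    'password': 'secret',
}
conninfo = ' '.join(f'{k}={v}' for k, v in params.items())
# conninfo == 'host=db.example.com port=5432 dbname=mydb user=alice password=secret'
# then: psycopg.connect(conninfo)   # or psycopg.connect(**params)
# The URI form also works:
# psycopg.connect('postgresql://alice:secret@db.example.com:5432/mydb')
```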
<p>Thanks!</p>
|
<python><postgresql><psycopg2><psycopg3>
|
2023-02-22 14:36:16
| 2
| 2,103
|
Alon Barad
|
75,534,150
| 4,016,385
|
Is there a way to calculate average buy/sell price for stock share without cycles?
|
<p><a href="https://i.sstatic.net/O6nLYm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O6nLYm.png" alt="orderbook" /></a></p>
<p>I'm looking for a way to calculate the average sell/buy price of a stock share without cycling through each element (using pandas/numpy functions).</p>
<p>Let's assume I want to sell 100 shares. The topmost levels will be sold first. The average selling price will be calculated like this:</p>
<p>161.58 * 2 = 323.16</p>
<p>161.57 * 1 = 161.57</p>
<p>161.56 * 16 = 2584.96</p>
<p>100 - (16 + 1 + 2) = 81</p>
<p>161.55 * 81 = 13085.55</p>
<p>323.16 + 161.57 + 2584.96 + 13085.55 = 16155.24</p>
<p>16155.24/100 = 161.5524</p>
<p>So, the average price to sell shares (average ask price) is 161.5524 per share</p>
<p>Same thing for buying shares.</p>
<p>161.61 * 25 = 4040.25</p>
<p>161.62 * 75 = 12121.50</p>
<p>12121.50 + 4040.25 = 16161.75</p>
<p>16161.75/100 = 161.6175</p>
<p>So, the average price to buy shares is 161.6175 per share</p>
<p>The original table is looking like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ask_19_price</th>
<th>ask_19_amount</th>
<th>...</th>
<th>ask_1_amount</th>
<th>ask_0_price</th>
<th>ask_0_amount</th>
<th>bid_0_price</th>
<th>...</th>
</tr>
</thead>
<tbody>
<tr>
<td>161.42</td>
<td>3124</td>
<td>...</td>
<td>1</td>
<td>161.58</td>
<td>2</td>
<td>161.61</td>
<td>...</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>The resulting table I'm looking for should look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ask_avg</th>
<th>bid_avg</th>
</tr>
</thead>
<tbody>
<tr>
<td>161.5524</td>
<td>161.6175</td>
</tr>
</tbody>
</table>
</div>
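<p>The worked example above can be vectorized with a cumulative sum: take from each level as many shares as remain of the target quantity. A numpy sketch using the numbers from the question (best level first):</p>

```python
import numpy as np

def avg_fill_price(prices, amounts, qty):
    """Average price to trade `qty` shares, consuming book levels in order."""
    cum = np.cumsum(amounts)
    # shares actually taken from each level (0 once the target is filled)
    taken = np.clip(qty - (cum - amounts), 0, amounts)
    return float(prices @ taken / qty)

ask_prices = np.array([161.58, 161.57, 161.56, 161.55])
ask_amounts = np.array([2, 1, 16, 100])
print(round(avg_fill_price(ask_prices, ask_amounts, 100), 4))  # 161.5524

bid_prices = np.array([161.61, 161.62])
bid_amounts = np.array([25, 75])
print(round(avg_fill_price(bid_prices, bid_amounts, 100), 4))  # 161.6175
```

<p>Applied row-wise, the same arithmetic works on the wide <code>ask_*_price</code>/<code>ask_*_amount</code> columns once they are stacked into arrays.</p>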
|
<python><pandas><dataframe><numpy>
|
2023-02-22 14:29:57
| 1
| 397
|
Alexander Zar
|
75,534,125
| 5,036,928
|
Numpy Value Error: Elementwise Operations on Meshgrid
|
<p>Given my code below:</p>
<pre><code>import numpy as np
nx, ny, nz = (50, 50, 50)
x = np.linspace(1, 2, nx)
y = np.linspace(2, 3, ny)
z = np.linspace(0, 1, nz)
xg, yg, zg = np.meshgrid(x, y, z)
def fun(x, y, z, a, b, c):
r = np.array([x, y, z])
r0 = np.array([a, b, c])
R = ((x-a)**2 + (y-b)**2 + (z-c)**2)**(1/2)
return np.linalg.norm(r - r0)
# return R
evald_z = np.trapz(fun(xg, yg, zg, 1, 1, 1), zg)
evald_y = np.trapz(evald_z, y)
evald_x = np.trapz(evald_y, x)
print(evald_x)
</code></pre>
<p>I get the error:</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (3,50,50,50) (3,)
</code></pre>
<p>It works simply by changing the return statement (to return R):</p>
<pre><code>1.7086941510502398
</code></pre>
<p>Obviously if the function is evaluated at some single (x, y, z) location, the function itself works. I understand why I get this error - when defining the numpy array into <code>r</code>, it is taking full (50, 50, 50) <code>xg</code>, <code>yg</code> and <code>zg</code> grids. I have other points in my code (not shown) where using the (1, 3) shapes for <code>r</code> and <code>r0</code> come in handy (and make the code more readable). How can I alter the definitions of <code>r</code> and <code>r0</code> (keeping the (1,3) shape) to perform elementwise operations (as done when using R instead of the norm)? And/or - is this possible, why/why not?</p>
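<p>For comparison, one way to keep the <code>r - r0</code> formulation and still get the elementwise result is to reshape <code>r0</code> so it broadcasts against the stacked grids, then reduce the norm over the stacking axis only. A small sketch (this version assumes grid-shaped inputs):</p>

```python
import numpy as np

nx = ny = nz = 5  # small grid for the demo
x = np.linspace(1, 2, nx)
y = np.linspace(2, 3, ny)
z = np.linspace(0, 1, nz)
xg, yg, zg = np.meshgrid(x, y, z)

def fun(x, y, z, a, b, c):
    r = np.array([x, y, z])          # shape (3, nx, ny, nz) on grid inputs
    r0 = np.array([a, b, c])
    # reshape r0 to (3, 1, 1, 1) so it broadcasts, then reduce over axis 0
    return np.linalg.norm(r - r0.reshape(3, 1, 1, 1), axis=0)

# matches the explicit elementwise R
R = ((xg - 1) ** 2 + (yg - 1) ** 2 + (zg - 1) ** 2) ** 0.5
assert np.allclose(fun(xg, yg, zg, 1, 1, 1), R)
```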
<p>Thanks in advance!!</p>
|
<python><numpy><mesh><elementwise-operations>
|
2023-02-22 14:28:00
| 1
| 1,195
|
Sterling Butters
|
75,533,936
| 6,529,926
|
Most efficient way of performing creation of new rows in a DataFrame
|
<p>I'm implementing a data augmentation script that takes as input a pandas DataFrame and a list of strings (e.g. <code>variations</code>). The script should generate new rows for the DataFrame, where each row concatenates an element of <code>variations</code>.</p>
<p>For instance, having a DataFrame:</p>
<pre><code>Compliment | Sentence_ID
Hi | 1
Hello | 2
Hola | 3
</code></pre>
<p>And variations <code>["Elvis", "Monica"]</code></p>
<p>The resulting dataframe should be like this:</p>
<pre><code>Compliment | Sentence_ID
Hi | 1
Hi Elvis | 1
Hi Monica | 1
Hello | 2
Hello Elvis | 2
Hello Monica | 2
Hola | 3
Hola Elvis | 3
Hola Monica | 3
</code></pre>
<p>I made some tests with <code>pd.iterrows()</code>, but it seems to be very slow (~5 minutes) when the dataframe is large. I'd like to know if there is a faster option.</p>
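<p>For scale, one vectorized sketch (column names from the question): cross-join the variations against the frame, build the concatenated sentences, and stitch the originals back in with a stable sort on <code>Sentence_ID</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({"Compliment": ["Hi", "Hello", "Hola"], "Sentence_ID": [1, 2, 3]})
variations = ["Elvis", "Monica"]

# every (row, variation) pair in one shot (pandas >= 1.2 for how="cross")
aug = df.merge(pd.DataFrame({"v": variations}), how="cross")
aug["Compliment"] = aug["Compliment"] + " " + aug.pop("v")

# originals first, then their variations, grouped by sentence id
out = (
    pd.concat([df, aug])
    .sort_values("Sentence_ID", kind="stable")
    .reset_index(drop=True)
)
print(out)
```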
|
<python><pandas><numpy><performance>
|
2023-02-22 14:13:49
| 4
| 729
|
heresthebuzz
|
75,533,901
| 8,165,879
|
How to use sagemaker pipeline parameters in a processing step?
|
<p>I would like to pass a parameter to my sagemaker pipeline and use it in my processing step. I am defining my step as follows:</p>
<pre><code>from sagemaker.processing import Processor
my_processor = Processor(role=role,
image_uri='xxxx',
instance_type="ml.m5.xlarge",
instance_count=1,
entrypoint=[ "python", "processing.py"])
step_process = ProcessingStep(
name="ProcessStep",
processor=my_processor)
</code></pre>
<p>My pipeline is defined as:</p>
<pre><code>from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.parameters import (ParameterString)
filename = ParameterString(
name='filename',
default_value='xyz.json'
)
pipeline_name = "ProcessPipeline"
pipe = Pipeline(
name=pipeline_name,
parameters=[filename],
steps=[step_process]
)
</code></pre>
<p>I am trying to access the parameters as follows in processing.py:</p>
<pre><code>parser = ArgumentParser()
parser.add_argument('--filename', type=str, dest='filename')
args, _ = parser.parse_known_args()
s3 = boto3.client('s3')
my_obj=s3.get_object(Bucket = 'my_bucket', Key = args.filename)
</code></pre>
<p>The pipeline execution in the SageMaker UI shows that the parameter has been passed correctly. However, <code>args.filename</code> returns <code>None</code> in processing.py. What am I missing?</p>
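<p>One detail worth flagging: pipeline parameters are not injected into the container automatically; they have to be forwarded to the script as command-line arguments on the step (in the SageMaker SDK this is typically the <code>ProcessingStep</code>'s <code>job_arguments</code>; treat the exact wiring below as a sketch). The <code>argparse</code> side can be checked locally by simulating argv:</p>

```python
from argparse import ArgumentParser

# Pipeline side (hypothetical sketch, not runnable here):
#   step_process = ProcessingStep(
#       name="ProcessStep",
#       processor=my_processor,
#       job_arguments=["--filename", filename],  # forward the ParameterString
#   )

# processing.py side -- same parser as the question, argv simulated:
parser = ArgumentParser()
parser.add_argument("--filename", type=str, dest="filename")
args, _ = parser.parse_known_args(["--filename", "xyz.json"])
print(args.filename)  # xyz.json
```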
|
<python><amazon-web-services><pipeline><amazon-sagemaker>
|
2023-02-22 14:10:55
| 1
| 530
|
Yash
|
75,533,892
| 962,891
|
How to plot multiple markers in mplfinance scatter plot
|
<p>I am using pandas v 1.5.3 and Python 3.10.</p>
<p>I have timeseries data along with flag columns like this:</p>
<pre><code> Open High Low Close Adj Close Volume Flag
Date
2018-01-10 155.745697 157.103256 155.353729 156.959854 122.008064 4366109 1
2018-01-12 156.806885 157.495224 155.860428 155.965576 121.235191 5263367 3
2018-01-22 154.407272 156.768646 154.024857 155.449326 120.833931 8870917 1
2018-02-06 143.680695 148.652008 142.552582 148.508606 115.438744 10321614 8
2018-02-09 142.065002 143.919693 138.049713 142.934998 112.200180 8188402 1
</code></pre>
<p>Here are the data types of the columns in the dataframe.</p>
<pre><code>Open float64
High float64
Low float64
Close float64
Adj Close float64
Volume int64
Flag int64
</code></pre>
<p>This is the index type:</p>
<pre><code>Index dtype
datetime64[ns]
</code></pre>
<p>I want to use the flag number to determine the type of symbol to plot on the chart (a simple mapping from my boolean flag to a <a href="https://matplotlib.org/stable/api/markers_api.html" rel="nofollow noreferrer">matplotlib marker symbol</a> to use on the chart).</p>
<p>This is what I have so far (only relevant part shown):</p>
<pre><code># Define the plot style
ohlc_style = mpf.make_mpf_style(base_mpf_style='charles',
y_on_right=False,
marketcolors=mpf.make_marketcolors(up='g', down='r'),
mavcolors=['purple', 'orange'],
figcolor='w')
# Plot the chart with the 'Flag' as scatter points
mpf.plot(df, type='candle', style=ohlc_style, volume=True, show_nontrading=False, addplot=df['Flag'], scatter=True)
</code></pre>
<p>However, when I run this, it raises the following exception:</p>
<pre><code>raise TypeError('kwarg "'+key+'" validator returned False for value: "'+str(value)+'"\n '+v)
TypeError: kwarg "addplot" validator returned False for value: "Date
2018-01-02 0
2018-01-03 0
2018-01-04 0
2018-01-05 0
2018-01-08 0
..
2021-12-27 0
2021-12-28 0
2021-12-29 0
2021-12-30 0
2021-12-31 0
Name: Flag, Length: 1008, dtype: int64"
'Validator' : lambda value: isinstance(value,dict) or (isinstance(value,list) and all([isinstance(d,dict) for d in value])) },
</code></pre>
<p>How do I fix this error, and how can I use a mapping function with a hardcoded map from my flag integers to a matplotlib.markers style?</p>
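<p>Independently of the plotting call, the flag-to-marker mapping can be sketched by building one price-aligned series per marker (NaN where the flag doesn't match); each series would then become its own scatter overlay. The marker map below is invented for illustration:</p>

```python
import pandas as pd

# hypothetical flag -> matplotlib marker map
marker_map = {1: "^", 3: "v", 8: "o"}

df = pd.DataFrame(
    {"Close": [156.9, 155.9, 155.4, 148.5, 142.9], "Flag": [1, 3, 1, 8, 1]},
    index=pd.date_range("2018-01-10", periods=5),
)

# one series per marker: the price where the flag matches, NaN elsewhere
per_marker = {
    marker: df["Close"].where(df["Flag"] == flag)
    for flag, marker in marker_map.items()
}

# each series could then be overlaid as a scatter (sketch, not executed):
#   mpf.make_addplot(series, type="scatter", marker=marker)
print({m: int(s.notna().sum()) for m, s in per_marker.items()})
```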
|
<python><pandas><matplotlib><mplfinance>
|
2023-02-22 14:10:00
| 2
| 68,926
|
Homunculus Reticulli
|
75,533,830
| 11,391,711
|
Applying multiple filters on files names using glob in Python
|
<p>I would like to apply multiple filters to file names using the <code>glob</code> library in <code>python</code>. I went through some online sources and can see that, using the <code>*</code> operator, it's possible to do so. However, my filters are not working properly since I'm trying to apply several of them together: it reads more files than it should.</p>
<p>Suppose my files are stored with date information as follows. I have year, month, and day information in the name. For instance, the name <code>my_file_20220101A1835.txt</code> shows that the file is from January 1st of 2022 and was saved at 6:35pm. If I'd like to get all the files from 2022 and 2023 for the first half (days 1 to 15) of each of the first six months, I am using the following line.</p>
<pre><code>folder_path = "..."
glob.glob(f"{folder_path}/*[2022-2023]*[01-06]*[01-15]*A*[01-24]*[00-60]*.pq")
</code></pre>
<p>Is there a structured way that I can perform this operation efficiently?</p>
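<p>As a point of reference: a glob character class such as <code>[01-06]</code> matches a <em>single</em> character (0, 1, or 0 through 6), not the range "01 to 06", which is why the pattern over-matches. One structured sketch (filename layout assumed from the question's example) is to glob broadly and filter with a regex plus <code>datetime</code>:</p>

```python
import re
from datetime import datetime

# names like my_file_20220101A1835.txt
STAMP = re.compile(r"_(\d{8})A(\d{4})\.")

def wanted(name):
    m = STAMP.search(name)
    if not m:
        return False
    ts = datetime.strptime(m.group(1) + m.group(2), "%Y%m%d%H%M")
    # 2022-2023, first six months, first half of each month
    return ts.year in (2022, 2023) and ts.month <= 6 and ts.day <= 15

names = [
    "my_file_20220101A1835.txt",
    "my_file_20220620A0900.txt",  # day 20: outside 1-15
    "my_file_20220716A0900.txt",  # July: outside Jan-Jun
    "my_file_20230315A1200.txt",
]
kept = [n for n in names if wanted(n)]
print(kept)
```

<p>In practice <code>names</code> would come from a broad <code>glob.glob(f"{folder_path}/*.txt")</code>.</p>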
|
<python><regex><file><directory><glob>
|
2023-02-22 14:03:52
| 1
| 488
|
whitepanda
|
75,533,822
| 5,868,293
|
How to add text at specific points at x-axis, at a cumulative histogram in plotly python
|
<p>I produce the following cumulative histogram using plotly-express:</p>
<pre><code>import pandas as pd
import plotly.express as px
px.histogram(pd.DataFrame({'N':[1,1,1,1,2,3,3,4,5,6,7,7,7,7,7]}), x='N',
cumulative=True, nbins=7)
</code></pre>
<p><a href="https://i.sstatic.net/ZwLJs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZwLJs.png" alt="enter image description here" /></a></p>
<p>I would like to add the value of <code>count</code> as text above the bars, at points N=3*i (3, 6, 9, etc.)</p>
<p>How could I do that ?</p>
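<p>The cumulative counts needed for the labels can be computed from the data directly; a sketch of the count side (the plotly call that places each label, e.g. <code>fig.add_annotation</code>, is commented out since it is purely visual):</p>

```python
import pandas as pd

s = pd.Series([1, 1, 1, 1, 2, 3, 3, 4, 5, 6, 7, 7, 7, 7, 7], name="N")

# cumulative count per bin, matching cumulative=True in the histogram
cum = s.value_counts().sort_index().cumsum()

labels = {n: int(cum.loc[n]) for n in (3, 6) if n in cum.index}
# for n, c in labels.items():
#     fig.add_annotation(x=n, y=c, text=str(c), yshift=10, showarrow=False)
print(labels)  # {3: 7, 6: 10}
```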
|
<python><plotly>
|
2023-02-22 14:02:58
| 2
| 4,512
|
quant
|
75,533,746
| 5,806,297
|
Custom component in Streamlit is having trouble loading
|
<p>I'm testing out a custom <a href="https://github.com/mstaal/msal_streamlit_authentication" rel="nofollow noreferrer">authentication component</a> for my Streamlit app. However, when using the component in production, it fails to render for some reason.
<a href="https://i.sstatic.net/HhkAK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HhkAK.png" alt="enter image description here" /></a></p>
<p>I've managed to get it to work in dev mode by forking the code and adding it to my Streamlit project - but I still can't make it run in production.</p>
<p>Upon digging a bit, it seems to me that the declaration of the component fails because the build path for some reason doesn't work. It looks like the assertion fails since the module is <code>None</code>, as per the following traceback:</p>
<pre><code>venv\lib\site-packages\streamlit\components\v1\components.py:284, in declare_component(name, path, url)
281 # Get the caller's module name. `__name__` gives us the module's
282 # fully-qualified name, which includes its package.
283 module = inspect.getmodule(caller_frame)
--> 284 assert module is not None
...
288 # user executed `python my_component.py`), then this name will be
289 # "__main__" instead of the actual package name. In this case, we use
290 # the main module's filename, sans `.py` extension, as the component name.
AssertionError:
</code></pre>
<p>To obtain the <code>build_path</code> I use the following:</p>
<pre><code>
root_dir = os.path.dirname(os.path.abspath(__file__))
build_dir = os.path.join(root_dir, "frontend" , "dist")
</code></pre>
<p>This returns:</p>
<pre><code>'c:\\Users\\initials\\xxx\\Desktop\\Absence importer\\absense_importer\\frontend\\dist'
</code></pre>
<p>The component is declared like this:</p>
<pre><code>_USE_WEB_DEV_SERVER = os.getenv("USE_WEB_DEV_SERVER", False)
_WEB_DEV_SERVER_URL = os.getenv("WEB_DEV_SERVER_URL", "http://localhost:5173")
COMPONENT_NAME = "msal_authentication"
root_dir = os.path.dirname(os.path.abspath(__file__))
build_dir = os.path.join(root_dir, "frontend" , "dist")
if _USE_WEB_DEV_SERVER:
_component_func = components.declare_component(name=COMPONENT_NAME, url=_WEB_DEV_SERVER_URL)
else:
_component_func = components.declare_component(name=COMPONENT_NAME, path=build_dir)
</code></pre>
<p>I've also tried to wrap everything inside a Linux Docker container, but to no avail, unfortunately. Can anyone spot my error?</p>
<p>I'm on Python 3.10.7 and using Streamlit 1.18.1.</p>
<p><em>EDIT:</em></p>
<p>I figured out my browser has issues reading the compiled frontend code, due to a mismatch in MIME types. I'm not sure what's wrong. Either adding the MIME types manually like</p>
<pre><code>import mimetypes
mimetypes.add_type('application/javascript', '.js')
mimetypes.add_type('text/css', '.css')
</code></pre>
<p>Or using the <a href="https://stackoverflow.com/questions/75533746/custom-component-in-streamlit-is-having-trouble-loading/75737109#75737109">sample-library</a> from the author worked.</p>
<p><a href="https://i.sstatic.net/os5vR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/os5vR.png" alt="enter image description here" /></a></p>
|
<python><typescript><streamlit>
|
2023-02-22 13:57:03
| 2
| 853
|
Artem
|
75,533,709
| 13,212,499
|
Weighted average on a GroupBy DataFrame with Multiple Columns and a Fractional Weight Column
|
<p>My question is similar to <a href="https://stackoverflow.com/questions/31521027/groupby-weighted-average-and-sum-in-pandas-dataframe">this</a> and <a href="https://stackoverflow.com/questions/57778649/return-groupby-weighted-average-for-multiple-pandas-dataframe-columns-as-a-dataf">that</a> but neither answer works for me.</p>
<p>I have a dataframe of users and user survey responses. Each survey response is a assigned a weight which is a fractional number (like 1.532342). Each user responds with ~20 scores, in this example shown as <code>scoreA</code> and <code>scoreB</code>.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;">user</th>
<th style="text-align: right;">weight</th>
<th style="text-align: right;">scoreA</th>
<th style="text-align: right;">scoreB</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">0.5</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">5</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">0.5</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">6</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">7</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">0.5</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">6</td>
</tr>
</tbody>
</table>
</div>
<p>It's trivial to compute the average unweighted score for each column by way of <code>scores.groupby('user').mean()</code> but I'm struggling to compute the weighted score.</p>
<pre><code>df = pd.DataFrame({
'weight': [ 2, 1, 0.5, 0.5,1,0.5],
'scoreA': [3,5,7, 8,9,8],
'scoreB': [1,3,5, 6,7,6]
}, index=pd.Index([1,1,1,2,2,2],name='user'))
scores = df[['scoreA', 'scoreB']]
weights = df.weight
scores.groupby('user').mean()
>>> scoreA scoreB
user
1 5.000000 3.000000
2 8.333333 6.333333
scores.groupby('user').agg(lambda x: np.average(x, weights=weights))
>>> TypeError: Axis must be specified when shapes of a and weights differ.
</code></pre>
<p>What I want to output is:</p>
<pre><code>df.drop(columns='weight').mul(df.weight,axis=0).groupby('user').sum().div(df.weight.groupby('user').sum(),axis=0)
scoreA scoreB
user
1 4.142857 2.142857
2 8.500000 6.500000
</code></pre>
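<p>For reference, the same numbers also fall out of a per-group <code>np.average</code>, which reads closer to the unweighted version (a sketch over the question's frame):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        "weight": [2, 1, 0.5, 0.5, 1, 0.5],
        "scoreA": [3, 5, 7, 8, 9, 8],
        "scoreB": [1, 3, 5, 6, 7, 6],
    },
    index=pd.Index([1, 1, 1, 2, 2, 2], name="user"),
)

# per group: weighted average of each score column with that group's weights
out = df.groupby("user").apply(
    lambda g: g[["scoreA", "scoreB"]].apply(np.average, weights=g["weight"])
)
print(out.round(6))
```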
|
<python><pandas><dataframe>
|
2023-02-22 13:54:43
| 2
| 349
|
Ilya Voytov
|
75,533,620
| 11,167,163
|
What is the new pandas update on groupby? (FutureWarning)
|
<pre><code>DF.groupby(["Criteria"], as_index=False).apply(fn).groupby('Criteria').agg(d_agg1).round(2)
</code></pre>
<p>In the future, the group keys will be included in the index, regardless of whether the applied function returns a like-indexed object.
To preserve the previous behavior, use</p>
<pre><code> >>> .groupby(..., group_keys=False)
</code></pre>
<p>What changes need to be made in the code to avoid the warning?</p>
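<p>Concretely, the warning comes from the first <code>groupby(...).apply(...)</code>; passing <code>group_keys</code> explicitly there silences it (the second groupby aggregates and is unaffected). A minimal sketch with a stand-in frame and function, since the question's <code>DF</code>/<code>fn</code>/<code>d_agg1</code> aren't shown:</p>

```python
import pandas as pd

df = pd.DataFrame({"Criteria": ["a", "a", "b"], "x": [1.0, 2.0, 3.0]})

def fn(g):
    # stand-in for the question's per-group transform
    return g.assign(x=g["x"] * 2)

# group_keys=False keeps the old behavior: no group key prepended to the index
out = (
    df.groupby("Criteria", group_keys=False)
    .apply(fn)
    .groupby("Criteria")
    .agg({"x": "sum"})
    .round(2)
)
print(out)
```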
|
<python><pandas>
|
2023-02-22 13:48:33
| 1
| 4,464
|
TourEiffel
|
75,533,619
| 2,505,799
|
Using Python to Iterate over a dataset rows part simultaneously
|
<p>I am looking for directions to iterate over 100K+ rows and two columns of a dataset (CSV or Google sheet) partly simultaneously, but with a delay.</p>
<p>The task is to perform an API request using each row data, with a returned API response ID saved in a third column (iterating over 100K+ rows is expected to take a few hours).</p>
<p>With an eye on saving time, I would like to know whether it is possible/suitable to launch a second API request using the request-ID information saved in the third column (from the first request, which is likely still populating rows further down the dataset), but with a delay of around 10 minutes (ten minutes allows a task on a remote device, triggered by the first API request, to complete), rather than waiting for all 100K+ rows to finish before running the second API request across all rows (the second API request checks whether the task from the first request is complete).</p>
<p>I'm looking for directions at this stage, before I get too far down the road with one particular method. Thanks!</p>
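<p>One possible direction, sketched with the standard library only (the two API calls are stand-in functions, and the delay is shrunk from ~10 minutes to a fraction of a second so the sketch runs): a producer thread performs the first request per row and queues <code>(row, request_id, ready_time)</code>; a consumer thread pops entries once their delay has elapsed and performs the second request, without waiting for pass one to finish over all rows:</p>

```python
import queue
import threading
import time

def first_api(row):          # stand-in for the real first request
    return f"req-{row}"

def second_api(request_id):  # stand-in for the status-check request
    return f"{request_id}-done"

DELAY = 0.05  # stand-in for ~10 minutes
pending = queue.Queue()
results = {}

def producer(rows):
    for row in rows:
        pending.put((row, first_api(row), time.monotonic() + DELAY))
    pending.put(None)  # sentinel: no more work

def consumer():
    while True:
        item = pending.get()
        if item is None:
            break
        row, req_id, ready_at = item
        wait = ready_at - time.monotonic()
        if wait > 0:
            time.sleep(wait)  # queue is FIFO, so later rows are later still
        results[row] = second_api(req_id)

rows = list(range(5))
t1 = threading.Thread(target=producer, args=(rows,))
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
print(results)
```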
|
<python><python-3.x><pandas>
|
2023-02-22 13:48:24
| 1
| 1,019
|
MikG
|
75,533,582
| 218,768
|
maximum recursion depth when searching python object - requests
|
<p>This code works well when run from a Python terminal. However, when run inside a DCC called Nuke (compositing software used in VFX), it raises a recursion error. I'm totally blank on why this occurs.</p>
<pre><code>import requests
url = 'xxxxxxxx.com/my_work'
data = {'field_to_query_01': 4444, 'field_to_query_02': 'Pending'}
requests.get(url, data)
</code></pre>
<p>I tried setting the recursion limit to 1500 with <code>sys.setrecursionlimit</code>, but the error still occurs; raising the recursion limit is not advisable anyway, as I read in some posts here.</p>
<p>I suspected a version conflict, so I also downgraded requests from 2.11 to 2.7, which according to a Google search is the version compatible with Python 2.7. Yet the error persists. Any remote clue?</p>
<pre><code>File "\\xxx.xxx.xx.xxx\Scripts\python_portable\site-packages\requests\sessions.py", line 451, in request
prep = self.prepare_request(req)
File "\\xxx.xxx.xx.xxx\Scripts\python_portable\site-packages\requests\sessions.py", line 378, in prepare_request
headers=merge_setting(request.headers, self.headers, dict_class=CaseInsensitiveDict),
File "\\xxx.xxx.xx.xxx\Scripts\python_portable\site-packages\requests\sessions.py", line 62, in merge_setting
merged_setting = dict_class(to_key_val_list(session_setting))
File "\\xxx.xxx.xx.xxx\Scripts\python_portable\site-packages\requests\structures.py", line 46, in __init__
self.update(data, **kwargs)
File "C:\Program Files\Nuke12.2v10\python27.zip\_abcoll.py", line 564, in update
File "C:\Program Files\Nuke12.2v10\python27.zip\abc.py", line 144, in __instancecheck__
File "C:\Program Files\Nuke12.2v10\python27.zip\abc.py", line 180, in __subclasscheck__
File "C:\Program Files\Nuke12.2v10\python27.zip\abc.py", line 180, in __subclasscheck__
File "C:\Program Files\Nuke12.2v10\pythonextensions\site-packages\shiboken2\files.dir\shibokensupport\typing27.py", line 1338, in __subclasscheck__
return super(GenericMeta, self).__subclasscheck__(cls)
File "C:\Program Files\Nuke12.2v10\python27.zip\abc.py", line 161, in __subclasscheck__
File "C:\Program Files\Nuke12.2v10\pythonextensions\site-packages\shiboken2\files.dir\shibokensupport\typing27.py", line 1070, in __extrahook__
if issubclass(subclass, scls):
File "P:\python_portable\site-packages\typing.py", line 1410, in __subclasscheck__
return super(GenericMeta, self).__subclasscheck__(cls)
File "C:\Program Files\Nuke12.2v10\python27.zip\abc.py", line 161, in __subclasscheck__
File "P:\python_portable\site-packages\typing.py", line 1140, in __extrahook__
if issubclass(subclass, scls):
File "C:\Program Files\Nuke12.2v10\pythonextensions\site-packages\shiboken2\files.dir\shibokensupport\typing27.py", line 1338, in __subclasscheck__
return super(GenericMeta, self).__subclasscheck__(cls)
File "C:\Program Files\Nuke12.2v10\python27.zip\abc.py", line 161, in __subclasscheck__
File "C:\Program Files\Nuke12.2v10\pythonextensions\site-packages\shiboken2\files.dir\shibokensupport\typing27.py", line 1070, in __extrahook__
if issubclass(subclass, scls):
File "P:\python_portable\site-packages\typing.py", line 1410, in __subclasscheck__
return super(GenericMeta, self).__subclasscheck__(cls)
File "C:\Program Files\Nuke12.2v10\python27.zip\abc.py", line 161, in __subclasscheck__
File "P:\python_portable\site-packages\typing.py", line 1140, in __extrahook__
if issubclass(subclass, scls):
File "C:\Program Files\Nuke12.2v10\pythonextensions\site-packages\shiboken2\files.dir\shibokensupport\typing27.py", line 1338, in __subclasscheck__
return super(GenericMeta, self).__subclasscheck__(cls)
File "C:\Program Files\Nuke12.2v10\python27.zip\abc.py", line 161, in __subclasscheck__
File "C:\Program Files\Nuke12.2v10\pythonextensions\site-packages\shiboken2\files.dir\shibokensupport\typing27.py", line 1070, in __extrahook__
if issubclass(subclass, scls):
File "P:\python_portable\site-packages\typing.py", line 1410, in __subclasscheck__
return super(GenericMeta, self).__subclasscheck__(cls)
File "C:\Program Files\Nuke12.2v10\python27.zip\abc.py", line 161, in __subclasscheck__
File "P:\python_portable\site-packages\typing.py", line 1140, in __extrahook__
if issubclass(subclass, scls):
File "C:\Program Files\Nuke12.2v10\pythonextensions\site-packages\shiboken2\files.dir\shibokensupport\typing27.py", line 1338, in __subclasscheck__
return super(GenericMeta, self).__subclasscheck__(cls)
File "C:\Program Files\Nuke12.2v10\python27.zip\abc.py", line 161, in __subclasscheck__
File "C:\Program Files\Nuke12.2v10\pythonextensions\site-packages\shiboken2\files.dir\shibokensupport\typing27.py", line 1070, in __extrahook__
</code></pre>
|
<python><python-requests>
|
2023-02-22 13:44:42
| 0
| 1,078
|
nish
|
75,532,988
| 2,281,274
|
Patch module before importing in Python
|
<p>I need to patch a global constant in a module before importing it (before executing code from it).</p>
<p>It's imported as <code>from app.foo.bar import Bar</code>.</p>
<p>In bar (<code>app/foo/bar.py</code>) there is a constant I want to <code>mock.patch</code>, and that constant is checked at load time (the code is at the top level of bar.py). How can I patch the constant in <code>bar.py</code> before the code in <code>bar.py</code> checks it?</p>
<p>app/foo/bar.py</p>
<pre class="lang-py prettyprint-override"><code>
CONSTANT = 42
if CONSTANT == 42:
raise Exception("42")
</code></pre>
<p>I want to change <code>CONSTANT</code> to <code>43</code> in my code (without changing anything on file system in <code>bar.py</code>).</p>
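<p>For the record, a plain <code>mock.patch</code> can't win here: the assignment <code>CONSTANT = 42</code> runs at import time and would overwrite anything pre-seeded on the module object. One workaround (a crude textual sketch; an AST rewrite would be more robust) is to load the source yourself, rewrite the constant, and exec it into a fresh module before anything else imports it. The package layout is recreated in a temp directory so the sketch is self-contained:</p>

```python
import importlib.util
import os
import sys
import tempfile
import textwrap

# Recreate the question's layout in a temp dir (stand-in for the real project).
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "app", "foo")
os.makedirs(pkg)
for d in (os.path.join(tmp, "app"), pkg):
    open(os.path.join(d, "__init__.py"), "w").close()
with open(os.path.join(pkg, "bar.py"), "w") as f:
    f.write(textwrap.dedent("""
        CONSTANT = 42
        if CONSTANT == 42:
            raise Exception("42")
        class Bar:
            pass
    """))
sys.path.insert(0, tmp)

# Load the source, patch the constant textually, exec into a module object,
# and register it so later `from app.foo.bar import Bar` uses the patched copy.
spec = importlib.util.find_spec("app.foo.bar")
source = spec.loader.get_source("app.foo.bar")
module = importlib.util.module_from_spec(spec)
patched = source.replace("CONSTANT = 42", "CONSTANT = 43")
exec(compile(patched, spec.origin, "exec"), module.__dict__)
sys.modules["app.foo.bar"] = module

from app.foo.bar import Bar
print(Bar, module.CONSTANT)
```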
|
<python><monkeypatching>
|
2023-02-22 12:56:40
| 1
| 8,055
|
George Shuklin
|
75,532,908
| 762,688
|
VS Code Python launch.json config to set cwd to specific module directory
|
<p>Searching through <a href="https://stackoverflow.com/questions/38623138/vscode-how-to-set-working-directory-for-debugging-a-python-program">this question</a>, I was not able to find the information I need.</p>
<p>My dir structure in VS Code:</p>
<pre><code>ProjectXYZ
|
-module1
</code></pre>
<p>My launch.json:</p>
<pre><code>{
"name": "Main File: Python",
"type": "python",
"request": "launch",
"module": "module1",
"console": "integratedTerminal",
"justMyCode": true,
"args": ["--env", "DEV"]
}
</code></pre>
<p>Regardless of which file I'm on in VS Code, I want to debug <code>module1</code>. I then want the cwd to be set to ProjectXYZ/module1 for other purposes. As is, the cwd at breakpoints is ProjectXYZ.</p>
<p>If I add <code>"cwd": "${fileDirname}"</code> to the launch.json, it sets the cwd before opening the module and can no longer find my module. Possible values for "cwd" <a href="https://code.visualstudio.com/docs/editor/variables-reference#_settings-command-variables-and-input-variables" rel="nofollow noreferrer">found here</a>.</p>
<p>Is my setup possible purely in launch.json, or do I have to set the cwd immediately after launch? (I don't want to do this in production code, only when debugging in VS Code.)</p>
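<p>One configuration worth trying (untested here, a sketch): pin <code>"cwd"</code> to the module's folder and keep the module importable by putting the project root back on <code>PYTHONPATH</code>:</p>

```json
{
    "name": "Main File: Python",
    "type": "python",
    "request": "launch",
    "module": "module1",
    "console": "integratedTerminal",
    "justMyCode": true,
    "args": ["--env", "DEV"],
    "cwd": "${workspaceFolder}/module1",
    "env": { "PYTHONPATH": "${workspaceFolder}" }
}
```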
<p>Thanks!</p>
|
<python><json><visual-studio-code><vscode-debugger><launch>
|
2023-02-22 12:49:36
| 1
| 759
|
sean.net
|
75,532,805
| 13,394,817
|
How to specify spacings among words of several sentences to force nth word in each sentence start from exact the same coordinate on plot in python
|
<p>I am trying to write grouped text on a plot. Each sentence in each group contains 4 parts: <code>value + unit + symbol + value</code>, e.g. <code>0.1 (psi) -> 0.0223</code>. Each group begins at a specified coordinate, but I couldn't force the second parts (the <em>units</em>) to begin at exactly the same coordinate within each group. Currently, I append <code>a calculated value * " "</code> after the first part to push the second parts to the same point, where the <em>calculated value</em> is determined from the number of characters (not a metric width) of the first part: I find the longest first part in each group, take its length (the <em>maximum length</em>), and pad each value in that group with <code>(maximum length - length of the value) * " "</code>. However, the result appears irregular in some cases (<em>shown in the picture</em>), which I think is because digits have different widths, e.g. 0 is a little wider than 1. Is there any way to cure this, perhaps something like a metric-based approach (not based on character counts) or something that forces each digit or letter to occupy a specific width? How?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# data ----------------------------------
data = {"Dev": [0, 30, 60], "Bor": [1.750, 2.875, 4.125, 6.125, 8.500, 12.250],
"Poi": [0, 0.1, 0.2, 0.3, 0.4, 0.5], "Str": [0, 0.33, 0.5, 1]}
units = [["(deg)", "(in)"], ["(unitless)"], ["(psi)"]]
Inputs = list(data.values())
area_ratio = [[0.16734375, 0.043875, 0.0], [1.0, 0.93, 0.67886875, 0.3375, 0.16158125, 0.0664125],
[0.26145, 0.23625, 0.209475, 0.1827, 0.155925, 0.12915], [0.451484375, 0.163359375, 0.106984375, 0.05253125]]
x_bar_poss = [np.array([3.7, 4., 4.3]), np.array([5.25, 5.55, 5.85, 6.15, 6.45, 6.75]),
np.array([9.25, 9.55, 9.85, 10.15, 10.45, 10.75]), np.array([13.55, 13.85, 14.15, 14.45])]
colors = ['green', 'orange', 'purple', 'yellow', 'gray', 'olive']
units_ravel = [item for sublist in units for item in sublist]
# code ----------------------------------
def max_string_len(list_):
max_len = 0
for i in list_:
max_len = max(len(str(i)), max_len)
return max_len
fig, ax = plt.subplots()
for i, row in enumerate(area_ratio):
max_hight = max(row)
max_str_len = max_string_len(Inputs[i])
for j, k in enumerate(row):
plt.bar(x_bar_poss[i][j], k, width=0.3, color=colors[j], edgecolor='black')
# ==============================================================================================================
plt_text = str(Inputs[i][j]) + (max_str_len - len(str(Inputs[i][j])) + 1) * " " + units_ravel[i] \
+ r"$\longmapsto$" + f'{area_ratio[i][j]:.5f}'
# ==============================================================================================================
plt.text(x_bar_poss[i][j], 0.75, plt_text, rotation=90, ha='center', va='bottom')
ax.set(xlim=(0, 16), ylim=(0, 1), yticks=np.linspace(0, 1, 6))
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/4c5p6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4c5p6.png" alt="enter image description here" /></a></p>
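<p>One way to make character-count padding exact is to draw the labels in a monospaced font, where every glyph, including every digit, has the same advance width; then plain <code>str.ljust</code> alignment holds on screen. A stdlib sketch of the padding (on the plot it would be combined with <code>plt.text(..., fontfamily='monospace')</code>):</p>

```python
values = [0, 0.1, 0.2, 0.3, 0.4, 0.5]

# pad the first part to the longest width in the group
width = max(len(str(v)) for v in values)
labels = [f"{str(v).ljust(width)} (unitless)" for v in values]

for lab in labels:
    print(repr(lab))

# every unit starts at the same character column; with a monospaced font
# (e.g. plt.text(..., fontfamily="monospace")) this is also the same x position
assert len({lab.index("(") for lab in labels}) == 1
```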
|
<python><string><string-length>
|
2023-02-22 12:39:57
| 1
| 2,836
|
Ali_Sh
|
75,532,801
| 12,169,382
|
How to get bivariate normal probability distribution with specified standard deviation and angle in python
|
<p>I want to get a bivariate normal probability distribution (hereafter referred to as BNPD) from ellipse parameters. As shown in the illustration below, the length of the major axis of the ellipse will be taken as 3 times the standard deviation of the BNPD, likewise for the minor axis, and the rotation angle of the ellipse is taken as the rotation angle of the BNPD.</p>
<p><a href="https://i.sstatic.net/tAkDq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tAkDq.png" alt="enter image description here" /></a></p>
<p>This is my Python test code, not successful yet:</p>
<pre class="lang-py prettyprint-override"><code>
import numpy as np
import cv2 as cv
# initialize ellipse parameters
size = 500
ctx = 280
cty = 220
width_radius = 40
height_radius = 120
theta = 45
# draw a reference line and ellipse
bg = np.ones((size,size, 3))
cv.line(bg,(0,int(size/2)),(size,int(size/2)),(0,0,0),1)
cv.line(bg,(int(size/2),0),(int(size/2),size),(0,0,0),1)
cv.ellipse(bg,(ctx,cty),(width_radius,height_radius),theta,0,360,(0,0,255),2)
cv.imshow("result", bg)
cv.waitKey(0)
# convert ellipse parameters to bivariate normal probability distribution parameters
mean_x = ctx - size//2
mean_y = cty - size//2
sigma_x = width_radius/(3*np.cos(theta)) # I doubt here
sigma_y = height_radius/(3*np.sin(theta))
gaussian_x = lambda x: np.exp(-(x-mean_x)**2/(2*(sigma_x**2)))/(np.sqrt(2*np.pi) * sigma_x)
gaussian_y = lambda y :np.exp(-(y-mean_y)**2/(2*(sigma_y**2)))/(np.sqrt(2*np.pi) * sigma_y)
X = np.linspace(-size/2, size/2, size)
Y = np.linspace(-size/2, size/2, size)
result_x = gaussian_x(X).reshape(-1,1)
result_y = gaussian_y(Y).reshape(1,-1)
result_xy = np.dot(result_x, result_y)
# visualize bivariate normal probability distribution
import matplotlib.pyplot as plt
fig, axes = plt.subplots()
axes.imshow(result_xy)
plt.show()
</code></pre>
<p>Running the above code shows:
<a href="https://i.sstatic.net/H7pPg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H7pPg.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/sjhG7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sjhG7.png" alt="enter image description here" /></a>
As you can see, the center and angle of the ellipse do not correspond to the mean and angle of the BNPD.</p>
<p><em>Update:</em></p>
<p>As @Matt Pitkin suggested, I tried <code>scipy.stats.multivariate_normal</code> with the deduced covariance matrix, but from the plotted graph it seems the rotation angle is slightly smaller than 45°. The code is:</p>
<pre class="lang-py prettyprint-override"><code>phi = theta
semiMajorAxis = width_radius
semiMinorAxis = height_radius
varX1 = semiMajorAxis**2 * np.cos(phi)**2 + semiMinorAxis**2 * np.sin(phi)**2
varX2 = semiMajorAxis**2 * np.sin(phi)**2 + semiMinorAxis**2 * np.cos(phi)**2
cov12 = (semiMajorAxis**2 - semiMinorAxis**2) * np.sin(phi) * np.cos(phi)
mean = [mean_x, mean_y]
covariance = [[varX1, cov12],
[cov12,varX2]]
from scipy.stats import multivariate_normal
X, Y = np.meshgrid(X, Y)
xy = np.stack((X, Y), -1)
fxy = multivariate_normal.pdf(xy, mean=mean, cov=covariance)
fig, axes = plt.subplots()
axes.imshow(fxy)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/l7fI4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l7fI4.png" alt="enter image description here" /></a></p>
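<p>A sanity check on the update: <code>np.sin</code>/<code>np.cos</code> expect radians, while <code>phi = theta</code> is 45 in degrees, which would tilt the distribution off 45°. Converting first and verifying the covariance's principal axis with an eigendecomposition (semi-axes from the question; the 3-sigma scaling is omitted since it doesn't change the angle):</p>

```python
import numpy as np

theta_deg = 45
phi = np.radians(theta_deg)            # np.sin / np.cos expect radians
a, b = 120.0, 40.0                     # height_radius (major) / width_radius (minor)

cov = np.array([
    [a**2 * np.cos(phi)**2 + b**2 * np.sin(phi)**2,
     (a**2 - b**2) * np.sin(phi) * np.cos(phi)],
    [(a**2 - b**2) * np.sin(phi) * np.cos(phi),
     a**2 * np.sin(phi)**2 + b**2 * np.cos(phi)**2],
])

# the eigenvector of the largest eigenvalue is the major-axis direction
w, v = np.linalg.eigh(cov)
major = v[:, np.argmax(w)]
angle = np.degrees(np.arctan2(major[1], major[0])) % 180
print(round(angle, 6))  # 45.0
```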
|
<python><numpy><normal-distribution><ellipse><probability-distribution>
|
2023-02-22 12:39:43
| 1
| 700
|
Wade Wang
|
75,532,764
| 1,549,736
|
`conda-build` failing to populate working source directory?
|
<p>After updating the <em>Anaconda</em> installation on my Windows 11 Home Dell XPS-15 7590 laptop, my previously working <code>conda build</code> recipes are failing, all in the same manner:</p>
<pre><code>(base)
capnf@DESKTOP-G84ND7C MINGW64 ~/Documents/GitHub/PyBERT (master)
$ conda build --python=3.9 --numpy=1.23 -c dbanas -c defaults -c conda-forge conda.recipe/enable/
Adding in variants from internal_defaults
INFO:conda_build.variants:Adding in variants from internal_defaults
Adding in variants from config.variant
INFO:conda_build.variants:Adding in variants from config.variant
Attempting to finalize metadata for enable
INFO:conda_build.metadata:Attempting to finalize metadata for enable
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
BUILD START: ['enable-5.3.1-py39h2ed9b0f_2.tar.bz2']
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
## Package Plan ##
environment location: C:\Users\capnf\anaconda3\conda-bld\enable_1677067522896\_h_env
The following NEW packages will be INSTALLED:
blas: 1.0-mkl
{snip}
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
## Package Plan ##
environment location: C:\Users\capnf\anaconda3\conda-bld\enable_1677067522896\_build_env
The following NEW packages will be INSTALLED:
bzip2: 1.0.8-he774522_0
{snip}
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... WARNING conda.gateways.disk.create:make_menu(237): Environment name starts with underscore '_'. Skipping menu installation.
done
Using legacy MSVC compiler setup. This will be removed in conda-build 4.0. If this recipe does not use a compiler, this message is safe to ignore. Otherwise, use {{compiler('<language>')}} jinja2 in requirements/build.
WARNING:conda_build.windows:Using legacy MSVC compiler setup. This will be removed in conda-build 4.0. If this recipe does not use a compiler, this message is safe to ignore. Otherwise, use {{compiler('<language>')}} jinja2 in requirements/build.
The system cannot find the path specified.
ERROR: Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
source tree in: C:\Users\capnf\anaconda3\conda-bld\enable_1677067522896\work
{snip}
(base)
capnf@DESKTOP-G84ND7C MINGW64 ~/Documents/GitHub/PyBERT (master)
$ l ~/anaconda3/conda-bld/enable_1677067522896/work/
bld.bat
build_env_setup.bat
conda_build.bat
metadata_conda_debug.yaml
work/
(base)
capnf@DESKTOP-G84ND7C MINGW64 ~/Documents/GitHub/PyBERT (master)
$ l ~/anaconda3/conda-bld/enable_1677067522896/work/work/
(base)
capnf@DESKTOP-G84ND7C MINGW64 ~/Documents/GitHub/PyBERT (master)
$
</code></pre>
<p><strong>It appears that <code>conda-build</code> is failing to populate its working source directory.</strong></p>
<p>Is this a known pathology in the latest <code>conda-build</code> system?<br />
And, if so, are there any working hypotheses as to how it's being triggered?<br />
Is there a known temporary work-around?</p>
<p>Here's my complete <code>meta.yaml</code> file, for reference:</p>
<pre class="lang-yaml prettyprint-override"><code>{% set name = "enable" %}
{% set version = "5.3.1" %}
package:
name: "{{ name|lower }}"
version: "{{ version }}"
source:
git_url: https://github.com/enthought/{{name}}.git
git_rev: {{version}}
build:
number: 2
script: "{{ PYTHON }} -m pip install . --no-deps --ignore-installed -vv "
requirements:
build:
- setuptools
- git
- cmake
- swig
# - {{compiler('c')}}
# - {{ compiler('cxx') }}
# - {{ cdt('xorg-x11-devel') }} # [linux]
- vs2019_win-64
host:
- fonttools
- numpy
- pillow
- pip
- pyface >=7.4.2
- pyparsing
- python
- six
- traitsui
- Cython
run:
- fonttools
- numpy
- pillow
- pyface >=7.4.2
- pyparsing
- python
- six
- traitsui
test:
imports:
- enable
- enable.drawing
- enable.gadgets
- enable.layout
- enable.null
- enable.primitives
# - enable.pyglet_backend
- enable.qt4
- enable.savage
- enable.savage.compliance
- enable.savage.svg
- enable.savage.svg.backends
- enable.savage.svg.backends.kiva
- enable.savage.svg.backends.null
- enable.savage.svg.backends.wx
- enable.savage.svg.css
- enable.savage.svg.tests
- enable.savage.svg.tests.css
- enable.savage.trait_defs
- enable.savage.trait_defs.ui
- enable.savage.trait_defs.ui.qt4
- enable.savage.trait_defs.ui.wx
- enable.tests
- enable.tests.primitives
- enable.tests.qt4
- enable.tests.tools
# - enable.tests.tools.apptools
- enable.tests.wx
- enable.tools
- enable.tools.apptools
- enable.tools.pyface
- enable.tools.toolbars
- enable.trait_defs
- enable.trait_defs.ui
- enable.trait_defs.ui.qt4
- enable.trait_defs.ui.wx
# - enable.vtk_backend
- enable.wx
- kiva
- kiva.agg
- kiva.agg.tests
- kiva.fonttools
- kiva.fonttools.tests
- kiva.quartz
- kiva.tests
- kiva.tests.agg
- kiva.trait_defs
- kiva.trait_defs.ui
- kiva.trait_defs.ui.wx
about:
home: "https://github.com/enthought/enable/"
license: "BSD"
license_family: "BSD"
license_file: ""
summary: "low-level drawing and interaction"
doc_url: ""
dev_url: ""
extra:
recipe-maintainers:
- capn-freako
</code></pre>
|
<python><anaconda><conda><conda-build>
|
2023-02-22 12:35:58
| 0
| 2,018
|
David Banas
|
75,532,739
| 2,170,269
|
Incompatible `__iadd__` and `__add__` in mypy
|
<p>I'm writing some code for vectors and matrices where I want to type-check dimensions. I ran into a problem with type-checking <code>__add__</code> and <code>__iadd__</code>, though. With the simplified example below, <code>mypy</code> tells me that <code> Signatures of "__iadd__" and "__add__" are incompatible</code>. They have exactly the same signatures, though, so what am I doing wrong?</p>
<pre class="lang-py prettyprint-override"><code>
from __future__ import annotations
from typing import (
Generic,
Literal as L,
TypeVar,
overload,
assert_type
)
_D1 = TypeVar("_D1")
_D2 = TypeVar("_D2")
_D3 = TypeVar("_D3")
# TypeVarTuple is an experimental feature; this is a work-around
class Shape:
"""Class that works as a tag to indicate that we are specifying a shape."""
class Shape1D(Shape, Generic[_D1]): pass
class Shape2D(Shape, Generic[_D1,_D2]): pass
_Shape = TypeVar("_Shape", bound=Shape)
Scalar = int | float
class Array(Generic[_Shape]):
    @overload # Adding with the same shape
def __add__(self: Array[_Shape], other: Array[_Shape]) -> Array[_Shape]:
return Any # type: ignore
@overload # Adding with a scalar
def __add__(self: Array[_Shape], other: Scalar) -> Array[_Shape]:
return Any # type: ignore
def __add__(self, other) -> Array:
return self # Dummy implementation
    @overload # Adding with the same shape
def __iadd__(self: Array[_Shape], other: Array[_Shape]) -> Array[_Shape]:
return Any # type: ignore
@overload # Adding with a scalar
def __iadd__(self: Array[_Shape], other: Scalar) -> Array[_Shape]:
return Any # type: ignore
def __iadd__(self, other) -> Array:
return self # Dummy implementation
# Adding with a scalar
def __radd__(self: Array[_Shape], other: Scalar) -> Array[_Shape]:
return Any # type: ignore
A = Array[Shape2D[L[3],L[4]]]()
reveal_type(A + 1.0) ; assert_type(A + 1.0, Array[Shape2D[L[3],L[4]]])
reveal_type(1.0 + A) ; assert_type(1.0 + A, Array[Shape2D[L[3],L[4]]])
reveal_type(A + A) ; assert_type(A + A, Array[Shape2D[L[3],L[4]]])
A += 1.0
A += A
</code></pre>
<p><a href="https://mypy-play.net/?mypy=1.0.0&python=3.11&gist=db8381c90caf670070789fdca78fb6ff" rel="nofollow noreferrer">Get the code in a playground here.</a></p>
|
<python><python-typing><mypy>
|
2023-02-22 12:33:46
| 1
| 1,844
|
Thomas Mailund
|
75,532,661
| 3,909,896
|
FastAPI- customize the 404 error on missing parameter in url
|
<p>I have an API endpoint at <code>/devices/{id}</code>. When I call the API without an <code>id</code>, I get <strong>404 errors</strong> with the vague message <code>"Not found"</code> in the body.</p>
<p>Is there any way to customize the content / message of the 404 error in FastAPI when a parameter (in my case <code>id</code>) is not found/missing in the called URL?</p>
<pre><code>@app.get("/devices/{id}")
async def get_cellular_data_for_device_id(request: fastapi.Request, id: str):
print("doing something")
</code></pre>
<p>404 Error content:</p>
<pre><code> {
"detail": "Not Found"
}
</code></pre>
|
<python><fastapi>
|
2023-02-22 12:26:04
| 2
| 3,013
|
Cribber
|
75,532,498
| 3,224,522
|
Running multiple snakemake pipelines one after the other with one single script
|
<p>I was wondering if it is possible to run multiple Snakemake pipelines one after another in a row. I have 3 Snakemake pipelines; I would like the 1st one to finish, then the 2nd one to start automatically, and then the 3rd one to start as soon as the 2nd one finishes, inside a conda environment. The inputs and outputs might not necessarily be the same.</p>
<p>Pseudocode something like:</p>
<pre><code>snakemake -s Snakefile_pipeline_1 -j 15
snakemake -s Snakefile_pipeline_2 -j 40
conda activate some_env
snakemake -s Snakefile_pipeline_3 -j 10
</code></pre>
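<p>One way to sketch this (my assumption: a small driver script rather than a single combined Snakefile) is a Python wrapper that runs the pipelines back to back and stops the chain if one fails. <code>conda run -n &lt;env&gt;</code> is used instead of <code>conda activate</code>, since activation does not work in a non-interactive shell (assumes conda &gt;= 4.6):</p>

```python
import subprocess

def run_pipelines(cmds):
    """Run each command in sequence; check=True aborts the chain if one fails."""
    for cmd in cmds:
        subprocess.run(cmd, shell=True, check=True)

# Hypothetical driver for the three pipelines from the question
pipelines = [
    "snakemake -s Snakefile_pipeline_1 -j 15",
    "snakemake -s Snakefile_pipeline_2 -j 40",
    # `conda run -n <env>` executes a single command inside that environment
    "conda run -n some_env snakemake -s Snakefile_pipeline_3 -j 10",
]
# run_pipelines(pipelines)  # uncomment to actually run
```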
|
<python><snakemake>
|
2023-02-22 12:07:57
| 1
| 1,151
|
user3224522
|
75,532,436
| 17,530,552
|
How to interpolate the first and the last values using pandas.DataFrame.interpolate?
|
<p>I know that this question has been asked before, but the suggested solutions that I found do not work for me. Maybe I am trying to do something that is simply not possible, but let me explain.</p>
<p>I have a time-series <code>data</code> that has some values of <code>0</code>. I would like to interpolate the zeros in <code>data</code> using <code>pandas.DataFrame.interpolate</code>.</p>
<p>The code:</p>
<pre><code>import pandas as pd
import numpy as np
data = [0, -1.31527, -2.25448, -0.965348, -1.11168, -0.0506046, -0.605522,
2.01337, 0, 0, 2.41931, 0.821425, 0.402411, 0]
df = pd.DataFrame(data=data) # Data to pandas dataframe
df.replace(to_replace=0, value=np.nan, inplace=True) # Replace 0 by nan
ip = df.interpolate(method="nearest", order=3, limit=None,
limit_direction=None)
print(ip)
</code></pre>
<p>The result of <code>print(ip)</code>:</p>
<pre><code> 0
0 NaN
1 -1.315270
2 -2.254480
3 -0.965348
4 -1.111680
5 -0.050605
6 -0.605522
7 2.013370
8 2.013370
9 2.419310
10 2.419310
11 0.821425
12 0.402411
13 NaN
</code></pre>
<p><strong>The problem:</strong> Pandas does not interpolate the first and last value of <code>data</code>, but leaves them as <code>NaN</code>. I tried all the options of <code>pandas.DataFrame.interpolate</code>, forward and backward, but none of them interpolate the first and last zero of <code>data</code>. Is this simply impossible via Pandas, or am I doing something wrong?</p>
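<p>To illustrate the behaviour: <code>interpolate</code> with <code>method="nearest"</code> only fills NaNs <em>between</em> valid points, so the leading and trailing NaNs need an explicit back-/forward-fill. A hedged sketch of that workaround (my addition, not necessarily the only approach; <code>method="nearest"</code> requires SciPy):</p>

```python
import numpy as np
import pandas as pd

data = [0, -1.31527, -2.25448, -0.965348, -1.11168, -0.0506046, -0.605522,
        2.01337, 0, 0, 2.41931, 0.821425, 0.402411, 0]
df = pd.DataFrame(data=data).replace(0, np.nan)

# "nearest" only fills interior NaNs; extend the first/last valid
# values outward explicitly with bfill/ffill
ip = df.interpolate(method="nearest").bfill().ffill()
```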
|
<python><pandas><dataframe><interpolation>
|
2023-02-22 12:02:36
| 1
| 415
|
Philipp
|
75,532,382
| 9,403,794
|
Pandas df.apply seems not working correctly. Is df apply vectorized?
|
<p>As @jpp answered in <a href="https://stackoverflow.com/a/52674448/9403794">Performance of Pandas apply vs np.vectorize to create new column from existing columns</a>:</p>
<blockquote>
</blockquote>
<p>I will start by saying that the power of Pandas and NumPy arrays is derived from high-performance vectorised calculations on numeric arrays.1 The entire point of vectorised calculations is to avoid Python-level loops by moving calculations to highly optimised C code and utilising contiguous memory blocks.2</p>
<p>1 Numeric types include: int, float, datetime, bool, category. They exclude object dtype and can be held in contiguous memory blocks.</p>
<p>2 There are at least 2 reasons why NumPy operations are efficient versus Python:
Everything in Python is an object. This includes, unlike C, numbers. Python types therefore have an overhead which does not exist with native C types.
NumPy methods are usually C-based. In addition, optimised algorithms are used where possible.</p>
<blockquote>
</blockquote>
<p>Now I am confused, because I was thinking that <code>df.apply</code> is vectorized.
To me that means all rows are processed in parallel at the same time.</p>
<p>I wrote some simple code, and the execution time shows me that <code>df.apply()</code> executes like <code>df.iterrows()</code>:</p>
<pre><code>#!/usr/bin/env python3
# coding=utf-8
"""
File for testing parallel processing for pandas groupby, apply and cudf group apply
"""
import sys
import pandas as pd
import cudf as cf
import random
import time as t
amount_rows = 100
def f3(row):
row['d3'] = row['c2'] + 1
t.sleep(0.05)
return row
if __name__ == "__main__":
print(sys.version_info)
print("Pandas: ", pd.__version__)
print("CUDF: ", cf.__version__)
"Creating test data as dict"
l_key = []
for _ in range(amount_rows):
l_key.append(random.randint(0, 9))
d = {'c1': l_key, 'c2': list(range(amount_rows))}
"Creating Pandas DF from dict"
df = pd.DataFrame(d)
"Check if apply execute parallel"
t9 = t.time()
df3 = df.apply(f3, axis = 1 )
t10 = t.time()
diff4 = t10 - t9
print(f"df.apply( f3, axis=1 ) for {amount_rows} takes {round(diff4, 8)} seconds")
"ITERROWS"
aa = t.time()
for key, row in df.iterrows():
row['d3'] = row['c2'] + 1
t.sleep(0.05)
bb = t.time()
diff5 = bb - aa
print(f"df.iterrows( ) for {amount_rows} takes {round(diff5, 8)} seconds")
</code></pre>
<p>And result of executed code is:</p>
<pre><code>sys.version_info(major=3, minor=8, micro=0, releaselevel='final', serial=0)
Pandas: 1.5.3
CUDF: 22.12.0
sys.version_info(major=3, minor=8, micro=0, releaselevel='final', serial=0)
df.apply( f3, axis=1 ) for 100 takes 5.05231261 seconds
df.iterrows( ) for 100 takes 5.04581475 seconds
</code></pre>
<p>I expected the execution time for <code>df.apply</code> to be lower than 1 s, but it looks like <code>df.apply</code> executes row by row, not all rows at the same time.</p>
<p>Can someone help me to understand what is wrong with this code?</p>
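<p>For comparison (my addition): <code>df.apply(..., axis=1)</code> is a Python-level loop over rows, whereas direct column arithmetic is a single NumPy operation over the whole column. A sketch contrasting the two:</p>

```python
import time
import pandas as pd

df = pd.DataFrame({"c1": range(100_000), "c2": range(100_000)})

t0 = time.time()
df["d3_apply"] = df.apply(lambda row: row["c2"] + 1, axis=1)  # row-by-row Python loop
t1 = time.time()
df["d3_vec"] = df["c2"] + 1                                   # vectorised column op
t2 = time.time()

print(f"apply:      {t1 - t0:.4f} s")
print(f"vectorised: {t2 - t1:.4f} s")
```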
|
<python><pandas>
|
2023-02-22 11:57:02
| 0
| 309
|
luki
|
75,532,295
| 4,764,419
|
Snakemake expand zip some wildcards and expand full the other
|
<p>I have three lists</p>
<pre><code>BASES = [A,B,C]
CONTRASTS = [1,2,3]
ALLOUTPUTS = [outs1,outs2]
</code></pre>
<p>I want to zip together bases and contrasts, but fully expand all options from the output dir.</p>
<p>Desired output would be something like</p>
<pre><code>outs1/A-1_comparison.bed
outs1/B-2_comparison.bed
outs1/C-3_comparison.bed
outs2/A-1_comparison.bed
outs2/B-2_comparison.bed
outs2/C-3_comparison.bed
</code></pre>
<p>Currently this rule</p>
<pre><code>ruleAll:
input:
expand(os.path.join("{outputdir}","{bse}-{contrast}_comparison.bed"),zip, bse = BASES,contrast = CONTRASTS,outputdir = ALLOUTPUTS)
</code></pre>
<p>produces</p>
<pre><code>outs1/A-1_comparison.bed
outs1/B-2_comparison.bed
outs1/C-3_comparison.bed
</code></pre>
<p>So, is it possible to generate a partial zipping?</p>
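<p>Conceptually, the desired target list is a product of the zipped (base, contrast) pairs with the output dirs. A plain-Python sketch of that combination (my addition; Snakemake's nested <code>expand(expand(..., zip, ..., allow_missing=True), outputdir=...)</code> idiom should, if I recall correctly, produce the same list):</p>

```python
BASES = ["A", "B", "C"]
CONTRASTS = [1, 2, 3]
ALLOUTPUTS = ["outs1", "outs2"]

# zip bases with contrasts, then take the product with the output dirs
targets = [
    f"{out}/{b}-{c}_comparison.bed"
    for out in ALLOUTPUTS
    for b, c in zip(BASES, CONTRASTS)
]
```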
|
<python><snakemake>
|
2023-02-22 11:48:00
| 1
| 463
|
Al Bro
|
75,532,164
| 14,269,252
|
get a list of values from user in streamlit app
|
<p>I want to get a list of values from the user, but I have no idea how to do it.
I tried the code below, but it is not the correct way.</p>
<pre><code>import streamlit as st
collect_numbers = lambda x : [str(x)]
numbers = st.text_input("PLease enter numbers")
st.write(collect_numbers(numbers))
</code></pre>
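<p>A sketch of just the parsing step (my assumption: the user types comma-separated numbers). The Streamlit part stays the same; only the collector function changes:</p>

```python
# Parse a comma-separated string such as "1, 2, 3" into a list of numbers;
# non-numeric fragments are skipped rather than raising an error.
def collect_numbers(text):
    numbers = []
    for part in text.split(","):
        part = part.strip()
        try:
            numbers.append(float(part))
        except ValueError:
            continue
    return numbers

# In the app (assumed usage):
#   numbers = st.text_input("Please enter numbers")
#   st.write(collect_numbers(numbers))
```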
|
<python><streamlit>
|
2023-02-22 11:34:22
| 3
| 450
|
user14269252
|
75,532,043
| 15,906,357
|
How to get the correct info while extracting some particular key value from nested JSON
|
<p>I want to extract the task name and config corresponding to each task into new variable.</p>
<p>The code that I have shared is not giving me the desired output. Although it is extracting some info but it is not able to extract all the required details.</p>
<p>Here is the json:</p>
<pre><code>old = {
"tasks": [
{
"task_group_id": "Task_group_1",
"branch": [
{
"task_id": "Task_Name_1",
"code_file_path": "tasks/base_creation/final_base_logic.hql",
"language": "hive",
"config": {
"k1": "v1",
"Q1":"W1"
},
"sequence": 1,
"condition": "in_start_date in range [2021-10-01 , 2023-11-04]"
}
],
"default": {
"task_id": "Task_group_1_default",
"code_file_path": "tasks/base_creation/default_base_logic.hql",
"language": "hive",
"config": {}
}
},
{
"task_group_id": "Task_group_2",
"branch": [
{
"task_id": "Task_Name_2",
"code_file_path": "tasks/variables_creation/final_cas_logic.py",
"language": "pyspark",
"config": {
"k2": "v2"
},
"sequence": 1,
"condition": "in_start_date in range [2022-02-01 , 2023-11-04]"
},
{
"task_id": "Task_Name_3",
"code_file_path": "tasks/variables_creation/final_sor_logic.py",
"language": "pyspark",
"config": {
"k3": "v3"
},
"sequence": 2,
"condition": "in_start_date in range [2021-10-01 , 2022-01-31]"
}
],
"default": {
"task_id": "Task_group_2_default",
"code_file_path": "tasks/variables_creation/default_variables_logic.py",
"language": "pyspark",
"config": {}
}
}
],
"dependencies": " ['task_group_id_01_Name >> task_group_id_02_Name']"
}
</code></pre>
<p>Here is my code for extracting the info:</p>
<pre><code>o_mod = []
for grp in range(len(old['tasks'])):
for task_id in range(len(old['tasks'][grp]['branch'])):
o_mod.append({})
o_mod[grp]['task_id'] = old['tasks'][grp]['branch'][task_id]['task_id']
o_mod[grp]['config'] = old['tasks'][grp]['branch'][task_id]['config']
print(o_mod)
</code></pre>
<p>Here is the output which is wrong:</p>
<pre><code>[{'task_id': 'Task_Name_1', 'config': {'k1': 'v1', 'Q1': 'W1'}},
{'task_id': 'Task_Name_3', 'config': {'k3': 'v3'}},
{}]
</code></pre>
<p>I want output to look like this (Correct output):</p>
<pre><code>[{'task_id': 'Task_Name_1', 'config': {'k1': 'v1', 'Q1': 'W1'}},
{'task_id': 'Task_Name_2', 'config': {'k2': 'v2'}},
 {'task_id': 'Task_Name_3', 'config': {'k3': 'v3'}}]
</code></pre>
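<p>For reference (my addition), the nested loop collapses into a flat comprehension that appends one dict per branch entry, regardless of how branches are distributed across task groups. A sketch, assuming the structure shown in the JSON above:</p>

```python
# Hypothetical minimal structure mirroring the JSON in the question
old = {
    "tasks": [
        {"branch": [{"task_id": "Task_Name_1", "config": {"k1": "v1"}}]},
        {"branch": [{"task_id": "Task_Name_2", "config": {"k2": "v2"}},
                    {"task_id": "Task_Name_3", "config": {"k3": "v3"}}]},
    ]
}

# One output dict per branch entry, across all task groups
o_mod = [
    {"task_id": b["task_id"], "config": b["config"]}
    for grp in old["tasks"]
    for b in grp["branch"]
]
```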
|
<python><json><python-3.x><nested-json>
|
2023-02-22 11:23:26
| 2
| 377
|
XGB
|
75,531,834
| 1,738,522
|
Flask trying to return a single item from a list returns an error
|
<p>I have a Flask app which lists out my items:</p>
<pre><code>@bp.route('/explore')
def explore():
    posts = Post.query.order_by(Post.timestamp.desc())
return render_template('explore.html', title='Explore', posts=posts.items)
</code></pre>
<p>What I am trying to do is be able to put the post ID in the URL so then it only returns one specific post, with the ID determined by the URL. This is the code I wrote:</p>
<pre><code>@bp.route('/explore/<int:id>')
def item(id):
posts = Post.query.filter_by(id=id).first_or_404()
return render_template('item.html', posts=posts.items)
</code></pre>
<p>The error I get here is:</p>
<pre><code> AttributeError: 'Post' object has no attribute 'items'
</code></pre>
<p>When I remove <code>items</code> it doesn't work either. It's not clear to me why <code>items</code> would work for the <strong>explore</strong> feature but not for <strong>item</strong>. I've literally copied the working code, except that I'm trying to filter down to just one ID.</p>
|
<python><sqlalchemy><flask-sqlalchemy>
|
2023-02-22 11:03:49
| 0
| 12,563
|
Jimmy
|
75,531,642
| 2,057,969
|
When is it necessary to define __getattr__() and __setattr__() methods for a class?
|
<p>The official Python Programming FAQ advises the programmer to define the methods <a href="https://docs.python.org/3/reference/datamodel.html#object.__getattr__" rel="nofollow noreferrer"><code>object.__getattr__(self, name)</code></a> and <a href="https://docs.python.org/3/reference/datamodel.html#object.__setattr__" rel="nofollow noreferrer"><code>object.__setattr__(self, name, value)</code></a> under certain circumstances when implementing the OOP concept of <em>delegation</em>. However, I can't find good general advice on <strong>when defining these two methods is necessary, customary, or useful</strong>.</p>
<hr />
<p>The Python Programming FAQ's <a href="https://docs.python.org/3/faq/programming.html#what-is-delegation" rel="nofollow noreferrer">example</a> is the following code</p>
<pre><code>class UpperOut:
def __init__(self, outfile):
self._outfile = outfile
def write(self, s):
self._outfile.write(s.upper())
def __getattr__(self, name):
return getattr(self._outfile, name)
</code></pre>
<p>"to implement[] a class that behaves like a file but converts all written data to uppercase". The accompanying text also contains a vague note telling the programmer to define <code>__setattr__</code> in a careful way whenever attributes need to be set/modified (as opposed to just being read).</p>
<p>One question about this code is why their example doesn't use inheritance, which would presumably take care of what <code>__getattr__</code> does. However, as one commenter kindly pointed out, <code>file</code> doesn't exist as a distinct type in Python 3, which may point us to an answer: Maybe they wanted to illustrate pure delegation (that is: <strong>delegation without inheritance</strong>) as one use case of overwriting <code>__getattr__</code> and <code>__setattr__</code>. (If one uses plain inheritance, attributes are inherited by default and hence don't need to be accessed explicitly via calls to <code>__getattr__</code> or <code>__setattr__</code>.)</p>
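<p>The <code>__setattr__</code> counterpart that the FAQ only alludes to can be sketched like this (my illustration, not the FAQ's code). Note the <code>object.__setattr__</code> escape hatch in <code>__init__</code>: without it, assigning <code>self._inner</code> would invoke our own <code>__setattr__</code> and recurse forever:</p>

```python
class Delegator:
    """Delegate all attribute reads *and* writes to a wrapped object."""

    def __init__(self, inner):
        # Bypass our own __setattr__ so that _inner lands on the wrapper itself
        object.__setattr__(self, "_inner", inner)

    def __getattr__(self, name):
        # Only called when normal lookup on the wrapper fails
        return getattr(self._inner, name)

    def __setattr__(self, name, value):
        # Called on *every* assignment, so forward everything to the inner object
        setattr(self._inner, name, value)
```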
<p>For reference, a <a href="https://en.wikipedia.org/w/index.php?title=Delegation_(object-oriented_programming)&oldid=1117145858" rel="nofollow noreferrer">definition</a> of "delegation" from Wikipedia is the following:</p>
<blockquote>
<p>In object-oriented programming, <strong>delegation</strong> refers to evaluating a member [...] of one object [...] in the context of another [...] object [...]. Delegation can be done explicitly [...]; or implicitly, by the member lookup rules of the language [...]. Implicit delegation is the fundamental method for behavior reuse in prototype-based programming, corresponding to inheritance in class-based programming.</p>
</blockquote>
<p>Even though the two concepts are often discussed together, delegation is not the same as <a href="https://en.wikipedia.org/w/index.php?title=Inheritance_(object-oriented_programming)&oldid=1140506810" rel="nofollow noreferrer">inheritance</a>:</p>
<blockquote>
<p>The term "inheritance" is loosely used for both class-based and prototype-based programming, but in narrow use the term is reserved for class-based programming (one class <em>inherits from</em> another), with the corresponding technique in prototype-based programming being instead called <em>delegation</em> (one object <em>delegates to</em> another).</p>
</blockquote>
<p>Note that delegation is <a href="https://softwareengineering.stackexchange.com/a/381148/81063">uncommon</a> in Python.</p>
<hr />
<p>Note that <a href="https://stackoverflow.com/questions/19123707/why-use-setattr-and-getattr-built-ins">this question</a> on this site is about use cases of vanilla <code>getattr(object, name)</code> and <code>setattr(object, name, value)</code> and is not directly related.</p>
|
<python><python-3.x>
|
2023-02-22 10:48:39
| 1
| 1,878
|
Lover of Structure
|
75,531,538
| 9,644,712
|
Calculating time difference with different base years in pandas
|
<p>Let's assume I have the following data:</p>
<pre><code>d = {'origin': ['a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a'], 'destination': ['b', 'b', 'b', 'b', 'b', 'b', 'c', 'c', 'c', 'c', 'c', 'c'], 'year': [2000, 2001, 2002, 2003, 2004, 2005, 2000, 2001, 2002, 2003, 2004, 2005], 'value': [10, 17, 22, 7, 8, 14, 10, 2, 5, 7, 78, 23] }
data_frame = pd.DataFrame(data=d)
data_frame.set_index(['origin', 'destination'], inplace=True)
data_frame
</code></pre>
<p>What I want to achieve is the following. I want to calculate the time differences w.r.t column <code>value</code> for each <strong>origin-destination pair</strong> (given as an index) for two cases.</p>
<p>In the first case, I want the year 2000 as the base, so that the corresponding value is subtracted from the values in the following years (including 2000 itself). Once the year reaches 2003, the base year becomes 2003, and the subtraction continues from there.</p>
<p>If that is a little unclear, here is the final dataset I want to achieve:</p>
<pre><code>d = {'origin': ['a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a'], 'destination': ['b', 'b', 'b', 'b', 'b', 'b', 'c', 'c', 'c', 'c', 'c', 'c'], 'year': [2000, 2001, 2002, 2003, 2004, 2005, 2000, 2001, 2002, 2003, 2004, 2005], 'value': [10, 17, 22, 7, 8, 14, 10, 2, 5, 7, 78, 23], 'diff': [0, 7, 12, 0, 1, 7, 0, -8, -5, 0, 71, 16], }
data_frame = pd.DataFrame(data=d)
data_frame.set_index(['origin', 'destination'], inplace=True)
data_frame
</code></pre>
<p>For each origin-destination pair, the difference is calculated having the base year 2000 and then switches to 2003.</p>
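<p>A sketch of one way to get there (my addition, not from the question): derive a per-row base year, then subtract each group's first value with <code>groupby(...).transform("first")</code>. This relies on the rows being sorted by year within each pair, as in the example:</p>

```python
import numpy as np
import pandas as pd

d = {"origin": ["a"] * 12,
     "destination": ["b"] * 6 + ["c"] * 6,
     "year": [2000, 2001, 2002, 2003, 2004, 2005] * 2,
     "value": [10, 17, 22, 7, 8, 14, 10, 2, 5, 7, 78, 23]}
df = pd.DataFrame(d)

# Base year: 2000 for 2000-2002, 2003 from 2003 onwards
df["base"] = np.where(df["year"] < 2003, 2000, 2003)

# Subtract each (origin, destination, base)-group's first value
df["diff"] = df["value"] - df.groupby(
    ["origin", "destination", "base"])["value"].transform("first")
```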
<p>Thanks for your help</p>
|
<python><pandas>
|
2023-02-22 10:41:00
| 2
| 453
|
Avto Abashishvili
|
75,531,441
| 740,521
|
How to get hashlib to work on both Python2 and Python3
|
<p>This code works fine when using Python2, however when using Python3 it gives me this error:</p>
<blockquote>
<p>Checksum error: bob.tgz, Unicode-objects must be encoded before
hashing</p>
</blockquote>
<p>I then made some changes to use byte like objects in the sha256 object but then I started getting:</p>
<blockquote>
<p>Checksum error: bob.tgz, 'utf-8' codec can't decode byte 0x8b in
position 1: invalid start byte</p>
</blockquote>
<p>So my question is how do I get this code to work on both Python2 and Python3?</p>
<pre><code>import hashlib
filepath = "bob.tgz"
filepath_hashed = "bob.tgz.hashed"
thekey = "asdf1234"
try:
m = hashlib.sha256()
file_contents = None
with open(filepath, "r") as f_in:
file_contents = f_in.read()
m.update(file_contents)
m.update(thekey)
with open(filepath_hashed, "w") as f_out:
f_out.write(file_contents)
f_out.write("~~CHECKSUM~~%s" % m.hexdigest())
except Exception as e:
print("Checksum error: %s, %s" % (filepath, e))
</code></pre>
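<p>A sketch that should run on both versions (my rewrite of the snippet above, not tested on Python 2 here): read the file in binary mode so its bytes never go through a text codec, and encode the key only when it is a text string. On Python 2, <code>str</code> already is <code>bytes</code>, so it passes the <code>isinstance</code> check unchanged:</p>

```python
import hashlib

def hash_file_with_key(filepath, thekey):
    m = hashlib.sha256()
    # Binary mode: raw bytes on both Python 2 and 3, no utf-8 decoding of the .tgz
    with open(filepath, "rb") as f_in:
        m.update(f_in.read())
    # Python 3 str (and Python 2 unicode) must be encoded before hashing
    key_bytes = thekey if isinstance(thekey, bytes) else thekey.encode("utf-8")
    m.update(key_bytes)
    return m.hexdigest()
```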
|
<python><python-3.x><hashlib>
|
2023-02-22 10:33:57
| 0
| 1,206
|
user740521
|
75,531,381
| 13,076,839
|
EINVAL when connect to a unix domain socket in K8s container
|
<p>I want to connect to a container created by K8s using a unix domain socket. Here is my logic in the container:</p>
<pre><code>def main():
file_sock = tornado.netutil.bind_unix_socket("/test/f.sock")
client, address = file_sock.accept()
...
</code></pre>
<p>This container is started by K8s with the following configuration:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: base
image: <image repo>
volumeMounts:
- mountPath: /test/
name: socket-volume
volumes:
- name: socket-volume
emptyDir:
sizeLimit: 500Mi
</code></pre>
<p>Now I want to connect to this <code>f.sock</code> from outside. I need to get the path from <code>config.json</code> of that container.</p>
<p>To do that, I can use <code>runc list</code> to get the bundle, and <code>config.json</code> in the bundle has a property named <code>mounts</code>. The volume I use will map to a path inside <code>mounts</code> like:</p>
<pre><code>"mounts": [
{
"destination": "/test/",
"type": "bind",
"source": "/var/lib/kubelet/pods/854cad13-3cde-40f1-a058-dc069a850226/volumes/kubernetes.io~empty-dir/socket-volume",
"options": [
"rw",
"rbind",
"rprivate",
"bind"
]
},
]
</code></pre>
<p>After getting the path of <code>f.sock</code>, I can use C code to connect to it:</p>
<pre><code>int main() {
struct sockaddr_un_longer {
sa_family_t sun_family;
char sun_path[512];
};
int s, len, ret;
struct sockaddr_un_longer remote = { .sun_family = AF_UNIX };
if ((s = socket(AF_UNIX, SOCK_STREAM, 0)) == -1) {
printf("error when open socket\n");
return -1;
}
strcpy(remote.sun_path, sockPath);
printf("socket path: %s\n", remote.sun_path);
len = strlen(remote.sun_path) + sizeof(remote.sun_family);
printf("remote: %s\n", (char *)&remote);
if (connect(s, (struct sockaddr *)&remote, len) == -1) {
printf("errno is: %d\n", errno);
printf("error when connect socket\n");
return -1;
}
...
}
</code></pre>
<p>And this is the error I got:</p>
<pre><code>remote:
errno is: 22
error when connect socket
</code></pre>
<p>As far as I can tell, <code>22</code> means invalid argument (<code>EINVAL</code>). And if I run the Python code directly on my local machine, the connect error doesn't happen.</p>
<p>Anyone who can help me is awesome!</p>
|
<python><c><kubernetes><unix-socket>
|
2023-02-22 10:28:43
| 0
| 354
|
MrZ
|
75,531,326
| 3,102,638
|
vscode changing python interpreter leads to no module named debugpy
|
<p>Since I changed the python interpreter from the default one in my system, I am no longer able to debug python code.
When I hit F5, I see a loading bar in the "RUN and DEBUG" window, it loads for a few seconds then disappears and nothing more.</p>
<p>My vscode</p>
<blockquote>
<p>Version: 1.75.1 (user setup)
Commit: 441438abd1ac652551dbe4d408dfcec8a499b8bf
Date: 2023-02-08T21:32:34.589Z
Electron: 19.1.9
Chromium: 102.0.5005.194
Node.js: 16.14.2
V8: 10.2.154.23-electron.0
OS: Windows_NT x64 10.0.19044
Sandboxed: No</p>
</blockquote>
<p>Default python interpreter <code>3.6.8</code> , new one that I am trying <code>3.10.2</code></p>
<p>Note that I am using vscode server on a rhel8 from win10.
I tried with 2 different version of the python extension: <code>2022.8.1</code> and <code>2023.2.0</code></p>
<p>I also tried launching it from the command line and I got:
<code>No module named debugpy</code></p>
<p>From my understanding, debugpy is included in the Python extension; apparently after changing the interpreter it cannot be found anymore?</p>
<p>Thanks,
Andrea</p>
|
<python><python-3.x><visual-studio-code><debugging><vscode-extensions>
|
2023-02-22 10:24:29
| 1
| 400
|
a_bet
|
75,531,194
| 9,879,534
|
how to slice numpy array with an n*2 numpy index pythonicly?
|
<p>I don't know how to describe my question; if an answer already exists, please redirect me to it. My question is, like the code below:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
idx = np.array([[1,3],[5,7], [9,11]], dtype=np.int64)
data = np.arange(30).reshape(2, 15)
need_list = []
for i in range(idx.shape[0]):
need_list.append(data[:, idx[i, 0]:idx[i, 1]])
result = np.hstack(need_list)
'''The result would be
array([[ 1, 2, 5, 6, 9, 10],
[16, 17, 20, 21, 24, 25]])
'''
</code></pre>
<p>The code above works well for me, but I think it's a little verbose, and I wonder if there is any pythonic way to slice <code>data</code> by <code>idx</code> without using <code>need_list</code> and a for-loop?</p>
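<p>One shorter variant (a sketch, my addition): it still builds the column indices with a comprehension, but the slicing itself becomes a single fancy-indexing call, with no <code>hstack</code> of intermediate slices:</p>

```python
import numpy as np

idx = np.array([[1, 3], [5, 7], [9, 11]], dtype=np.int64)
data = np.arange(30).reshape(2, 15)

# Turn the (start, stop) pairs into one flat array of column indices,
# then pick all of them with a single fancy-index operation
cols = np.concatenate([np.arange(start, stop) for start, stop in idx])
result = data[:, cols]
```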
|
<python><numpy>
|
2023-02-22 10:13:49
| 3
| 365
|
Chuang Men
|
75,531,172
| 1,367,705
|
How to get all users in organization in GitHub using PyGithub? I'm getting github.GithubException.UnknownObjectException
|
<p>I would like to print all users in my organization in GitHub using PyGitHub.</p>
<p>The below snippet works; however, it returns a <code>PaginatedList</code>:</p>
<pre><code>python3 -c 'import os; from github import Github;g = Github(os.environ["TOKEN"]);
repo = g.get_repo("ORG/REPO");
organization = repo.organization;
members = organization.get_members(); print(members);'
</code></pre>
<p>However, when I want to print the <code>PaginatedList</code> I'm getting an error:</p>
<pre><code>python3 -c 'import os; from github import Github;g = Github(os.environ["TOKEN"]);
repo = g.get_repo("ORG/REPO"); organization = repo.organization;
members = organization.get_members(); print(members); [print(m) for m in members];'
</code></pre>
<p>And the error:</p>
<pre><code>Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "<string>", line 1, in <listcomp>
  File "/usr/local/lib/python3.10/dist-packages/PyGithub-1.55-py3.10.egg/github/PaginatedList.py", line 56, in __iter__
    newElements = self._grow()
  File "/usr/local/lib/python3.10/dist-packages/PyGithub-1.55-py3.10.egg/github/PaginatedList.py", line 67, in _grow
    newElements = self._fetchNextPage()
  File "/usr/local/lib/python3.10/dist-packages/PyGithub-1.55-py3.10.egg/github/PaginatedList.py", line 199, in _fetchNextPage
    headers, data = self.__requester.requestJsonAndCheck(
  File "/usr/local/lib/python3.10/dist-packages/PyGithub-1.55-py3.10.egg/github/Requester.py", line 353, in requestJsonAndCheck
    return self.__check(
  File "/usr/local/lib/python3.10/dist-packages/PyGithub-1.55-py3.10.egg/github/Requester.py", line 378, in __check
    raise self.__createException(status, responseHeaders, output)
github.GithubException.UnknownObjectException: 404 {"message": "Not Found", "documentation_url": "https://docs.github.com/rest"}
</code></pre>
<p>I know I can do:</p>
<pre><code>python3 -c 'import os; from github import Github;
g = Github(os.environ["TOKEN"]); organization = g.get_organization("ORG");
members = organization.get_members();
[print(member) for member in members]; '
</code></pre>
<p>But that's not the point; I need it done the way I showed above (which does not work).</p>
|
<python><python-3.x><github><pygithub>
|
2023-02-22 10:11:03
| 0
| 2,620
|
mazix
|
75,531,156
| 6,552,836
|
Scipy / Mystic taking too long to simplify expression
|
<p>I'm trying to use mystic to create a simplified expression of my constraints. I have an array of 200 elements. I'm first testing with a single constraint, which limits the sum of all the variables to between minimum and maximum limits, like this:</p>
<p><code>0 <= x0 + x1 + x2 + ....... x198 + x199 <= 20000</code></p>
<p>The issue is that this process takes too long (approximately 1 hour) to simplify even this single constraint; I haven't added the others yet. How can I resolve this?</p>
<pre><code>import mystic as my
import mystic.constraints  # makes my.constraints.and_ available
import mystic.symbolic as ms

min_lim = 0
max_lim = 20000

def constraint_func():
    variable_num = ['x' + str(i) for i in range(200)]
    constrain_eq = f'{min_lim} <= ' + ' + '.join(variable_num) + f' <= {max_lim}'
    return constrain_eq

eqn = ms.simplify(constraint_func(), all=True)
constrain = ms.generate_constraint(ms.generate_solvers(eqn), join=my.constraints.and_)
</code></pre>
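<p>If the only goal is this single box constraint on the sum, one alternative (a sketch, not mystic's or scipy's own API; the <code>project_sum</code> name is hypothetical) is to enforce it as a numeric projection instead of simplifying it symbolically, which avoids the expensive symbolic step entirely:</p>

```python
import numpy as np

def project_sum(x, lo=0.0, hi=20000.0):
    # Hypothetical helper: rescale a nonnegative vector so that
    # lo <= sum(x) <= hi. Not part of mystic or scipy.
    x = np.asarray(x, dtype=float)
    s = x.sum()
    if s > hi:
        return x * (hi / s)
    if 0 < s < lo:
        return x * (lo / s)
    return x

x = np.ones(200) * 150.0   # sum is 30000, above the upper limit
y = project_sum(x)         # rescaled so the sum falls back in range
```

<p>A function like this can be passed to a solver as its constraint transform, since it maps any candidate point onto the feasible set in O(n) time.</p>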
|
<python><scipy><scipy-optimize><mystic>
|
2023-02-22 10:09:32
| 1
| 439
|
star_it8293
|
75,531,132
| 14,720,380
|
How can I implement a default factory for a generic type in a Pydantic BaseClass?
|
<p>I want to create a generic base model where the field defaults to the default factory for that type:</p>
<pre><code>from typing import TypeVar, Generic, Optional

from pydantic import Field
from pydantic.generics import GenericModel

T = TypeVar("T")

class Material(GenericModel, Generic[T]):
    value: T = Field(default_factory=T)

Material[float](value=1)
Material[float](value="1 ")
Material[Optional[float]]()
</code></pre>
<p>However, this returns the error:</p>
<pre><code>Traceback (most recent call last):
  File "/home/tom/.config/JetBrains/PyCharm2022.2/scratches/tmp.py", line 14, in <module>
    Material[Optional[float]]()
  File "pydantic/main.py", line 340, in pydantic.main.BaseModel.__init__
  File "pydantic/main.py", line 1066, in pydantic.main.validate_model
  File "pydantic/fields.py", line 439, in pydantic.fields.ModelField.get_default
TypeError: 'TypeVar' object is not callable
</code></pre>
<p>Is it possible to have a default factory for a generic type?</p>
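<p>Pydantic aside, the underlying pattern (remember the concrete type chosen at parametrization time and call it as a zero-argument factory) can be sketched in plain Python; the names and approach below are illustrative, not pydantic's API:</p>

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Material(Generic[T]):
    _value_type = None  # filled in per concrete parametrization

    def __class_getitem__(cls, item):
        # Return a concrete subclass that remembers `item`, so the
        # constructor can use it as a zero-argument default factory.
        return type(cls.__name__, (cls,), {"_value_type": item})

    def __init__(self, value=None):
        if value is None and self._value_type is not None:
            value = self._value_type()  # e.g. float() -> 0.0
        self.value = value
```

<p>The key point is that the TypeVar itself is never callable; something has to capture the concrete class at <code>Material[float]</code> time, which is what the error in the traceback is about.</p>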
|
<python><generics><pydantic>
|
2023-02-22 10:07:11
| 0
| 6,623
|
Tom McLean
|
75,531,062
| 19,570,235
|
How to resolve attribute reference for inherited objects in Python?
|
<p>I would like to have proper Python typing for the setup I have created.</p>
<p>The issue is connected with class B, for which my IDE (PyCharm) reports an unresolved attribute reference.<br>However, this setup works fine at runtime.</p>
<pre class="lang-py prettyprint-override"><code>class ConfigA:
    def __init__(self):
        self.param1: int = 0

class ConfigB(ConfigA):
    def __init__(self):
        super().__init__()
        self.param2: int = 1

class A:
    def __init__(self, config: ConfigA):
        self.config: ConfigA = config
        self.do_basic_stuff()

    def do_basic_stuff(self):
        print(self.config.param1)

class B(A):
    def __init__(self, config: ConfigB):
        super().__init__(config)

    def do_advanced_stuff(self):
        # Unresolved attribute reference 'param2' for class 'ConfigA'
        print(self.config.param2)

if __name__ == "__main__":
    b = B(ConfigB())
    b.do_advanced_stuff()
</code></pre>
<p>Is there a way to set up the typing properly so that the IDE recognises that the object self.config is of the specialised ConfigB class?</p>
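<p>One conventional fix (a sketch) is to make A generic over its config type, so that B can declare it holds a ConfigB and the IDE can resolve param2:</p>

```python
from typing import Generic, TypeVar

class ConfigA:
    def __init__(self):
        self.param1: int = 0

class ConfigB(ConfigA):
    def __init__(self):
        super().__init__()
        self.param2: int = 1

# Bound the type variable so A still knows at least ConfigA's attributes.
C = TypeVar("C", bound=ConfigA)

class A(Generic[C]):
    def __init__(self, config: C):
        self.config: C = config

class B(A[ConfigB]):
    def do_advanced_stuff(self) -> int:
        # self.config is now typed as ConfigB, so param2 resolves.
        return self.config.param2

b = B(ConfigB())
```

<p>This keeps the runtime behaviour identical while giving the type checker the information it needs.</p>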
|
<python><python-typing>
|
2023-02-22 10:01:04
| 1
| 417
|
mlokos
|
75,531,014
| 6,281,366
|
enum like structure that represents a tree
|
<p>I want to represent a constant tree of strings, not a dynamic one.</p>
<p>for example:</p>
<p>structure -> house,tower</p>
<p>house -> green_house, yellow_house</p>
<p>tower -> small_tower, big_tower</p>
<p>where each of them is a string (house = 'house').</p>
<p>the goal is to be able to access the tree in such way:</p>
<pre><code>structure.house.yellow_house
</code></pre>
<p>which will then give me a string of their enum values:</p>
<pre><code>'structure.house.yellow_house'
</code></pre>
<p>What might be a good way to define such a structure?</p>
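<p>One lightweight sketch that meets the access pattern (not an Enum; the <code>_Node</code> name is hypothetical): make each node a str subclass that carries its children as attributes, so every node already is its dotted-path string:</p>

```python
class _Node(str):
    # Each node IS its dotted-path string; children hang off it as
    # attributes, giving structure.house.yellow_house-style access.
    def child(self, name):
        node = _Node(f"{self}.{name}")
        setattr(self, name, node)
        return node

structure = _Node("structure")
house = structure.child("house")
house.child("green_house")
house.child("yellow_house")
tower = structure.child("tower")
tower.child("small_tower")
tower.child("big_tower")
```

<p>Because nodes subclass str, they compare, print, and serialize like ordinary strings while still supporting attribute access.</p>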
|
<python><pydantic>
|
2023-02-22 09:57:27
| 1
| 827
|
tamirg
|
75,530,980
| 5,775,358
|
Polars subtract numpy 1xn array from n columns
|
<p>I am struggling with polars. I have a dataframe and a numpy array, and I would like to subtract the array from the dataframe's columns.</p>
<pre><code>import numpy as np
import pandas as pd
import polars as pl

df = pl.DataFrame(np.random.randn(6, 4), schema=['#', 'x', 'y', 'z'])
arr = np.array([-10, -20, -30])

df.select(
    pl.col(r'^(x|y|z)$')  # ^[xyz]$
).map_rows(
    lambda x: np.array(x) - arr
)
# ComputeError: expected tuple, got ndarray
</code></pre>
<p>But if I try to calculate the norm for example, then it works:</p>
<pre><code>df.select(
    pl.col(r'^(x|y|z)$')
).map_rows(
    lambda x: np.sum((np.array(x) - arr)**2)**0.5
)
shape: (6, 1)
┌───────────┐
│ map │
│ --- │
│ f64 │
╞═══════════╡
│ 38.242255 │
│ 37.239545 │
│ 38.07624 │
│ 36.688312 │
│ 38.419194 │
│ 36.262196 │
└───────────┘
# check if it is correct:
np.sum((df.to_pandas()[['x', 'y', 'z']].to_numpy() - arr)**2, axis=1) ** 0.5
>>> array([38.24225488, 37.23954478, 38.07623986, 36.68831161, 38.41919409,
36.2621962 ])
</code></pre>
<p>In pandas one can do it like this:</p>
<pre><code>df.to_pandas()[['x', 'y', 'z']] - arr
x y z
0 10.143819 21.875335 29.682364
1 10.360651 21.116404 28.871060
2 9.777666 20.846593 30.325185
3 9.394726 19.357053 29.716592
4 9.223525 21.618511 30.390805
5 9.751234 21.667080 27.393393
</code></pre>
<p>One way that works is to do it for each column separately, but that means a lot of repeated code, especially as the number of columns increases:</p>
<pre><code>df.select(
    pl.col('x') - arr[0], pl.col('y') - arr[1], pl.col('z') - arr[2]
)
</code></pre>
|
<python><dataframe><numpy><python-polars>
|
2023-02-22 09:54:57
| 5
| 2,406
|
3dSpatialUser
|