| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,855,653
| 1,167,359
|
Automate a running Visual Studio instance using COM to start debugging
|
<p>How do I use COM automation to "Start Debugging" and "Toggle Breakpoint" in Microsoft Visual Studio Community 2022 (64-bit) Version 17.3.6?</p>
<p><a href="https://i.sstatic.net/waM9m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/waM9m.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/OFAdJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OFAdJ.png" alt="enter image description here" /></a></p>
<p>I'm not sure what I need to call.
I tried oleview.exe to find the API, but I have no clue how to use it.</p>
<p>Error handling omitted in this code:</p>
<pre class="lang-py prettyprint-override"><code>import win32com.client as win32
gotoFile = "c:\\SomeFile.cpp"
gotoLine = 123
gotoChar = 12
breakpointFile = "c:\\SomeFile.cpp"
breakpointLine = 23
app = win32.GetActiveObject('VisualStudio.DTE')
app.MainWindow.Activate()
app.MainWindow.Visible = True
app.UserControl = True
app.ItemOperations.OpenFile(gotoFile)
app.ActiveDocument.Selection.MoveToLineAndOffset(gotoLine, gotoChar)
app.{Toggle breakpoint in breakpointFile on breakpointLine (F9)} # How?
app.{Start Debugging (F5)} # How?
</code></pre>
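<p>A sketch of what to try, not a verified answer: the DTE object exposes <code>ExecuteCommand</code>, which runs the same named commands that F9 and F5 are bound to, and a <code>Debugger</code> object with <code>Go()</code> and a <code>Breakpoints</code> collection. The command names below are taken from the EnvDTE documentation and are untested here:</p>

```python
# Assumed EnvDTE command names (the same commands F9/F5 invoke in the IDE).
DEBUG_TOGGLE_BREAKPOINT = "Debug.ToggleBreakpoint"
DEBUG_START = "Debug.Start"

def toggle_breakpoint_and_start(breakpoint_file, breakpoint_line):
    # Imported inside the function: win32com is Windows-only.
    import win32com.client as win32
    app = win32.GetActiveObject('VisualStudio.DTE')
    app.MainWindow.Activate()
    # Move the caret to the target line, then toggle a breakpoint there (F9).
    app.ItemOperations.OpenFile(breakpoint_file)
    app.ActiveDocument.Selection.GotoLine(breakpoint_line)
    app.ExecuteCommand(DEBUG_TOGGLE_BREAKPOINT)
    # Alternative route that skips the caret entirely (untested assumption):
    # app.Debugger.Breakpoints.Add(File=breakpoint_file, Line=breakpoint_line)
    # Start debugging (F5); app.Debugger.Go() should be equivalent.
    app.ExecuteCommand(DEBUG_START)

if __name__ == "__main__":
    toggle_breakpoint_and_start("c:\\SomeFile.cpp", 23)
```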
|
<python><visual-studio><com>
|
2024-01-21 16:20:36
| 1
| 874
|
Adamarla
|
77,855,632
| 10,975,692
|
fastAPI Depends does not resolve coroutine dependency when @alru_cache is used
|
<p>Let's say I have a class <code>Foo</code> that needs a running event loop when its constructor is called.
I want to use the <a href="https://fastapi.tiangolo.com/tutorial/dependencies/" rel="nofollow noreferrer">dependency injection system of fastAPI</a> to provide the same instance of the <code>Foo</code> class to my endpoints.</p>
<p>The <code>Foo</code> itself is not a singleton, so I create a getter function <code>get_foo()</code> that always returns the same instance.</p>
<p>So a minimal example looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from typing import Annotated

import uvicorn
from async_lru import alru_cache
from fastapi import FastAPI, Depends

app = FastAPI()


class Foo:
    def __init__(self, value):
        self.value = value

        async def _task() -> None:
            await asyncio.sleep(5)
            print(self.value)

        self._task = asyncio.create_task(_task())


@alru_cache  # makes it singleton-like
async def get_foo() -> Foo:
    return Foo(42)


@app.get("/")
async def root(foo: Annotated[Foo, Depends(get_foo)]):
    return {"message": foo.value}  # crashes here: AttributeError: 'coroutine' object has no attribute 'value'


if __name__ == '__main__':
    uvicorn.run(app=app, host="0.0.0.0", port=8000)
</code></pre>
<p>This crashes with <code>AttributeError: 'coroutine' object has no attribute 'value'</code>. Making <code>get_foo()</code> sync does not work either: <code>RuntimeError: no running event loop</code>.</p>
<p>If I remove the <code>@alru_cache</code> it works, but it is no longer a singleton-like object as desired.</p>
<p>What should I do to make it work as expected?</p>
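<p>A workaround sketch that avoids <code>@alru_cache</code> altogether (the likely culprit: FastAPI doesn't recognise the cache wrapper as a coroutine function, so it never awaits it): keep one instance behind a plain module-level cache guarded by an <code>asyncio.Lock</code>, so the dependency stays an ordinary coroutine function. <code>_foo_instance</code> and <code>_foo_lock</code> are illustrative names, and <code>Foo</code> is simplified here:</p>

```python
import asyncio

class Foo:
    def __init__(self, value):
        self.value = value

# Illustrative module-level singleton cache.
_foo_instance = None
_foo_lock = asyncio.Lock()

async def get_foo() -> Foo:
    """An ordinary coroutine function, so Depends(get_foo) awaits it as usual."""
    global _foo_instance
    async with _foo_lock:  # avoid racing two first calls into two instances
        if _foo_instance is None:
            _foo_instance = Foo(42)  # runs with the event loop already running
    return _foo_instance
```

<p>Every endpoint that declares <code>Depends(get_foo)</code> then receives the same, properly awaited <code>Foo</code> instance.</p>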
|
<python><python-asyncio><fastapi>
|
2024-01-21 16:16:00
| 3
| 1,500
|
DarkMath
|
77,855,437
| 525,865
|
A bs4 script gives back an empty list
|
<p>I have a list with around 50 entries for centers in Germany. These centers are public institutions with close ties to business. I want to create a list with each center's categories:</p>
<pre><code>Industry sectors:
Location:
Contact person:
</code></pre>
<p>The data can be found on this overview page:<br />
<a href="https://www.mittelstand-digital.de/MD/Redaktion/DE/artikel/Mittelstand-4-0/mittelstand-40-unternehmen.html" rel="nofollow noreferrer">https://www.mittelstand-digital.de/MD/Redaktion/DE/artikel/Mittelstand-4-0/mittelstand-40-unternehmen.html</a></p>
<p>The idea is to use a scraper written in Python with Beautiful Soup, and then write the data into a Calc spreadsheet via pandas.<br />
So I proceed like this:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd

# URL of the web page
url = "https://www.mittelstand-digital.de/MD/Redaktion/DE/Artikel/Mittelstand-4-0/mittelstand-40-kompetenzzentren.html"

# Fetch the page content
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

# Create empty lists for the data
themen_list = []
branchen_list = []
ort_list = []
ansprechpartner_list = []

# Extract each center's data and append it to the lists
for center in soup.find_all('div', class_='linkblock'):
    themen_list.append(center.find('h3').text.strip())
    branchen_list.append(center.find('p', class_='text').text.strip())
    ort_list.append(center.find('span', class_='ort').text.strip())
    ansprechpartner_list.append(center.find('span', class_='ansprechpartner').text.strip())

# Convert the data into a pandas DataFrame
data = {
    'Themen': themen_list,
    'Branchen': branchen_list,
    'Ort': ort_list,
    'Ansprechpartner': ansprechpartner_list
}
df = pd.DataFrame(data)
</code></pre>
<p>But this does not work at the moment; I only get back an empty list.</p>
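<p>A debugging sketch (the HTML snippet and class names below are invented for the demo, not taken from the real page): <code>find_all</code> returns an empty list, with no error, whenever the class name doesn't occur in the fetched HTML, which is the usual cause of this symptom. So first print <code>response.status_code</code>, inspect which classes the page actually uses, and make the extraction defensive so a missing field doesn't crash:</p>

```python
from bs4 import BeautifulSoup

# Tiny stand-in document; the real page's markup may use different classes.
html = """
<div class="linkblock"><h3>Zentrum A</h3><span class="ort">Berlin</span></div>
<div class="linkblock"><h3>Zentrum B</h3><span class="ort">Hamburg</span></div>
"""
soup = BeautifulSoup(html, 'html.parser')

# A class that doesn't exist silently yields [] -- the symptom in the question.
missing = soup.find_all('div', class_='no-such-class')

# Defensive extraction: skip fields that are absent instead of crashing on None.
rows = []
for center in soup.find_all('div', class_='linkblock'):
    h3 = center.find('h3')
    ort = center.find('span', class_='ort')
    rows.append({
        'Themen': h3.text.strip() if h3 else None,
        'Ort': ort.text.strip() if ort else None,
    })
```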
|
<python><web-scraping><beautifulsoup>
|
2024-01-21 15:23:01
| 1
| 1,223
|
zero
|
77,855,188
| 900,394
|
How to set up AWS S3 with Heroku?
|
<p>I followed <a href="https://devcenter.heroku.com/articles/s3" rel="nofollow noreferrer">this tutorial</a> to link my AWS S3 bucket with my Heroku app.</p>
<p>In my s3 bucket I have a zip file called <code>model.zip</code>. I need to use this model in my app (Python). So after linking the bucket successfully according to the tutorial, I did the following:</p>
<pre><code>s3 = boto3.client('s3')
s3.download_file('my-models-bucket', 'model.zip', 'model.zip')
</code></pre>
<p>But when I run it on Heroku, I get a 403 error at the <code>s3.download_file</code> line:</p>
<blockquote>
<p>botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden</p>
</blockquote>
<p>Do I need to specify the credentials in <code>boto3.client()</code>? I don't want to upload any credentials, so how should it be done? Or should I somehow use the environment variables I set up while linking s3 according to the tutorial? I.e. the tutorial says to do the following:</p>
<pre><code>heroku config:set AWS_ACCESS_KEY_ID=MY-ACCESS-ID AWS_SECRET_ACCESS_KEY=MY-ACCESS-KEY
Adding config vars and restarting app... done, v21
AWS_ACCESS_KEY_ID => MY-ACCESS-ID
AWS_SECRET_ACCESS_KEY => MY-ACCESS-KEY
</code></pre>
<p>Does that mean that I now have access to <code>AWS_ACCESS_KEY_ID</code> and <code>AWS_SECRET_ACCESS_KEY</code> in my Python code and I should read them using <code>os.environ.get("AWS_ACCESS_KEY_ID")</code> or something?</p>
<p>Or maybe there's something I should do on AWS?</p>
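<p>To the last question: yes, those config vars become ordinary environment variables on the dyno, and a sketch of both options follows. boto3's default credential chain already reads <code>AWS_ACCESS_KEY_ID</code> and <code>AWS_SECRET_ACCESS_KEY</code> from the environment, so a bare <code>boto3.client('s3')</code> normally picks them up without any code change; if the 403 persists, the IAM user likely lacks <code>s3:GetObject</code> on the bucket (a <code>HeadObject</code> 403 instead of 404 can also indicate missing <code>s3:ListBucket</code>). The values below are placeholders:</p>

```python
import os

# Pretend Heroku set these config vars (placeholder values for the demo).
os.environ['AWS_ACCESS_KEY_ID'] = 'MY-ACCESS-ID'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'MY-ACCESS-KEY'

def make_s3_client():
    """Passing the env vars explicitly is equivalent to boto3's default
    credential chain, which reads the same variables on its own."""
    import boto3  # third-party; imported here so the demo runs without it
    return boto3.client(
        's3',
        aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
        aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],
    )

key_id = os.environ.get('AWS_ACCESS_KEY_ID')
```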
|
<python><amazon-s3><heroku>
|
2024-01-21 14:21:17
| 1
| 5,382
|
Alaa M.
|
77,855,169
| 12,698,762
|
JAX python C callbacks
|
<p>Numba allows you to create C callbacks directly in Python with the
<code>@cfunc</code> decorator (<a href="https://numba.pydata.org/numba-doc/0.42.0/user/cfunc.html" rel="nofollow noreferrer">https://numba.pydata.org/numba-doc/0.42.0/user/cfunc.html</a>):</p>
<pre><code>@cfunc("float64(float64)")
def square(x):
    return x**2
</code></pre>
<p>To clarify, the resulting function is a pure C-function, which can then be called directly from C-code.</p>
<p>Is there an equivalent functionality available in JAX ( <a href="https://jax.readthedocs.io/en/latest/#" rel="nofollow noreferrer">https://jax.readthedocs.io/en/latest/#</a> )?</p>
<p>I have been searching for a while but couldn't find anything. I would appreciate any tips.</p>
|
<python><numba><jax>
|
2024-01-21 14:15:26
| 1
| 905
|
Adam
|
77,854,931
| 3,285,115
|
How to update multiple columns of a pandas DataFrame when the column labels are datetimes?
|
<p>I have a DataFrame that has columns as datetime objects, and I would like to update specific values for a specific index based on a list of <em>days</em>.</p>
<p>Here is an MRE that works:</p>
<pre><code>import pandas as pd
start, stop = "2023-12-01", "2023-12-31"
dates: pd.Index = pd.date_range(start, stop, freq="D")
december = pd.DataFrame(columns = dates, index = ['ft', 'pt'])
december.loc['ft', pd.to_datetime(['2023-12-04', '2023-12-06', '2023-12-11', '2023-12-12', '2023-12-27', '2023-12-28'])] = 'X'
</code></pre>
<p>In this example, I update the columns for the following dates:</p>
<pre><code>['2023-12-04', '2023-12-06', '2023-12-11', '2023-12-12', '2023-12-27', '2023-12-28']
</code></pre>
<p>But ideally I would pass a list of days like <code>[4, 6, 11, 12, 27, 28]</code> to the <code>.loc</code> call.</p>
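<p>A sketch of the day-list version: <code>.loc</code> has no notion of "day of month", so either build the <code>Timestamp</code> labels from the day numbers, or filter the existing <code>DatetimeIndex</code> with its <code>.day</code> attribute. The helper name <code>days_to_dates</code> is made up for the example:</p>

```python
import pandas as pd

start, stop = "2023-12-01", "2023-12-31"
dates = pd.date_range(start, stop, freq="D")
december = pd.DataFrame(columns=dates, index=['ft', 'pt'])

def days_to_dates(days, year=2023, month=12):
    """Illustrative helper: turn day-of-month numbers into column labels."""
    return [pd.Timestamp(year=year, month=month, day=d) for d in days]

december.loc['ft', days_to_dates([4, 6, 11, 12, 27, 28])] = 'X'

# Equivalent, selecting from the DatetimeIndex itself:
# cols = december.columns[december.columns.day.isin([4, 6, 11, 12, 27, 28])]
# december.loc['ft', cols] = 'X'
```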
|
<python><pandas>
|
2024-01-21 13:08:10
| 1
| 3,288
|
erasmortg
|
77,854,912
| 5,741,205
|
How to install dependencies that are installed in a tox-docker container, but NOT locally?
|
<p>I'd like to execute unit tests (configured in <code>tox.ini</code>) in a Docker container using the <a href="https://github.com/tox-dev/tox-docker" rel="nofollow noreferrer">tox-docker</a> plugin. The idea behind it is to be able to run unit tests that require the <code>cuda-python</code> module on a MacBook with an Apple M2 Pro.
My question is: how should I configure the corresponding section in <code>tox.ini</code> so that the requirements are installed in a Docker container, but not locally (because that would fail)?</p>
<p>Here is mentioned section in a <code>tox.ini</code>:</p>
<pre><code>[testenv:test-docker-env]
docker =
    py-3.11
skip_install = false
deps =
    # -r{toxinidir}/requirements/requirements.txt  # <- this fails when executed locally on MacBook!
    -r{toxinidir}/requirements/requirements-test.txt
commands =
    python -m pytest --cov=src \
        --cov-report term-missing \
        --cov-report xml:coverage-reports/coverage-BUILD_{env:BUILD_NUMBER:0}.xml \
        --junitxml=xunit-reports/xunit-result-BUILD_{env:BUILD_NUMBER:0}.xml \
        -vv \
        tests/

[docker:py-3.11]
image = python:3.11
</code></pre>
<p>How can I instruct <code>tox-docker</code> to install the required modules from <code>{toxinidir}/requirements/requirements.txt</code>, but not let <code>tox</code> install them locally?</p>
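<p>One possible direction, an assumption rather than a verified tox-docker feature: keep <code>deps</code> portable and move the platform-specific pins behind PEP 508 environment markers, so pip skips <code>cuda-python</code> automatically whenever the requirements are resolved on macOS:</p>

```ini
# requirements/requirements.txt -- illustrative sketch
numpy
cuda-python; sys_platform == "linux"   # skipped automatically on macOS
```

<p>Note that, as far as I understand, tox-docker starts the listed images as <em>services</em> alongside the test environment; the <code>deps</code> are still installed into the local tox virtualenv, so markers (or running tox itself inside a Linux container) are needed either way.</p>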
|
<python><docker><macos><tox>
|
2024-01-21 13:03:41
| 1
| 211,730
|
MaxU - stand with Ukraine
|
77,854,835
| 16,383,578
|
How to calculate the number of ways to choose Y items from at most X groups efficiently?
|
<p>I don't know how to describe this concisely. Say you have X groups of items, where every item in the same group is identical. All the groups have infinite size.</p>
<p>You want to choose exactly Y items from the X groups, completely at random; you can take them all from 1 group, from 2 groups, from 3 groups... up to all groups.</p>
<p>It is hard for me to describe so I will give a simple example. Say you want to choose 9 items from 3 groups, call these groups A, B, C.</p>
<p>You can choose (9A, 0B, 0C) or (0A, 9B, 0C) or (0A, 0B, 9C) or (1A, 2B, 6C)...</p>
<p>You can choose either from one group, from two groups, or from all groups.</p>
<p>The task is to find the number of distinct ways to partition the Y items, i.e. the sets of counts of items taken from the same groups; the composition doesn't matter.</p>
<p>To make it clearer, (9A, 0B, 0C), (0A, 9B, 0C) and (0A, 0B, 9C) are treated the same: they are all counted as (0, 0, 9), and there are 3 ways to obtain (0, 0, 9) and 54 ways to obtain (0, 1, 8).</p>
<p>I tried to do it using a smart way but I got the <em><strong>WRONG</strong></em> results:</p>
<pre><code>from collections import Counter
from itertools import product

groups = Counter()
for i in range(10):
    for k in range(j := 9 - i):
        groups[tuple(sorted((i, k, j - k)))] += 1
</code></pre>
<strike>
<pre>
Counter({(1, 2, 6): 6,
(1, 3, 5): 6,
(2, 3, 4): 6,
(0, 1, 8): 4,
(0, 2, 7): 4,
(0, 3, 6): 4,
(0, 4, 5): 4,
(1, 1, 7): 3,
(1, 4, 4): 3,
(2, 2, 5): 3,
(0, 0, 9): 1,
(3, 3, 3): 1})
</pre>
</strike>
<p>So I did it the bruteforce way and iterated through all the possibilities to get the <em><strong>CORRECT</strong></em> result:</p>
<pre><code>new_groups = Counter()
for item in product('abc', repeat=9):
    a = b = c = 0
    for e in item:
        if e == 'a':
            a += 1
        elif e == 'b':
            b += 1
        else:
            c += 1
    new_groups[tuple(sorted((a, b, c)))] += 1
</code></pre>
<pre><code>Counter({(2, 3, 4): 7560,
(1, 3, 5): 3024,
(2, 2, 5): 2268,
(1, 4, 4): 1890,
(3, 3, 3): 1680,
(1, 2, 6): 1512,
(0, 4, 5): 756,
(0, 3, 6): 504,
(0, 2, 7): 216,
(1, 1, 7): 216,
(0, 1, 8): 54,
(0, 0, 9): 3})
</code></pre>
<p>The result is correct, but this method is extremely inefficient: it runs in exponential time. For 9 items and 3 groups it takes 19683 iterations, and for 9 items and 4 groups, 262144 iterations!</p>
<p>What is a better method?</p>
<hr />
<p>Perhaps I didn't make it clear but I thought the code made it pretty clear what I was after.</p>
<p>Now I will state it explicitly, say you have Y balls and you want to throw all of them randomly into X containers. All balls are thrown into containers, no misses (this is mathematics).</p>
<p><em><strong>The task is to find all multisets of ball counts in the bins (which bin holds which count doesn't matter; the multisets are unordered, like Python <code>set</code>s), together with the number of ways each multiset can occur.</strong></em></p>
<p>The output should be the same as my correct but inefficient method's output. <em><strong>I am NOT trying to calculate just ONE number.</strong></em></p>
<hr />
<p>The correct output is the output from the bruteforce approach. The second one.</p>
<hr />
<p>I just wrote another piece of code using the bruteforce method to illustrate what the numbers mean.</p>
<pre><code>groups2 = {}
for item in product('abc', repeat=9):
    a = b = c = 0
    for e in item:
        if e == 'a':
            a += 1
        elif e == 'b':
            b += 1
        else:
            c += 1
    key = tuple(sorted((a, b, c)))
    if key in groups2:
        groups2[key].append(''.join(item))
    else:
        groups2[key] = [''.join(item)]
</code></pre>
<p>To answer why (0, 1, 8) is 54:</p>
<pre><code>In [43]: groups2[(0, 1, 8)]
Out[43]:
['aaaaaaaab',
'aaaaaaaac',
'aaaaaaaba',
'aaaaaaaca',
'aaaaaabaa',
'aaaaaacaa',
'aaaaabaaa',
'aaaaacaaa',
'aaaabaaaa',
'aaaacaaaa',
'aaabaaaaa',
'aaacaaaaa',
'aabaaaaaa',
'aacaaaaaa',
'abaaaaaaa',
'abbbbbbbb',
'acaaaaaaa',
'acccccccc',
'baaaaaaaa',
'babbbbbbb',
'bbabbbbbb',
'bbbabbbbb',
'bbbbabbbb',
'bbbbbabbb',
'bbbbbbabb',
'bbbbbbbab',
'bbbbbbbba',
'bbbbbbbbc',
'bbbbbbbcb',
'bbbbbbcbb',
'bbbbbcbbb',
'bbbbcbbbb',
'bbbcbbbbb',
'bbcbbbbbb',
'bcbbbbbbb',
'bcccccccc',
'caaaaaaaa',
'caccccccc',
'cbbbbbbbb',
'cbccccccc',
'ccacccccc',
'ccbcccccc',
'cccaccccc',
'cccbccccc',
'ccccacccc',
'ccccbcccc',
'cccccaccc',
'cccccbccc',
'ccccccacc',
'ccccccbcc',
'cccccccac',
'cccccccbc',
'cccccccca',
'ccccccccb']
</code></pre>
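<p>A polynomial-size sketch of a better method: enumerate the non-decreasing count tuples (integer partitions of Y into X non-negative parts) directly, and compute each tuple's count in closed form as a multinomial coefficient times the number of distinct ways to assign those counts to the X labelled groups. For Y=9 and X=3 this visits 12 tuples instead of 19683 sequences. Function names are illustrative:</p>

```python
from collections import Counter
from math import factorial

def partitions(total, parts, lo=0):
    """Yield non-decreasing tuples of `parts` non-negative ints summing to `total`."""
    if parts == 1:
        if total >= lo:
            yield (total,)
        return
    for first in range(lo, total // parts + 1):
        for rest in partitions(total - first, parts - 1, first):
            yield (first,) + rest

def count_ways(counts, total, groups):
    """Sequences realising `counts`: a multinomial coefficient times
    the distinct assignments of the counts to the labelled groups."""
    ways = factorial(total)
    for k in counts:
        ways //= factorial(k)
    assignments = factorial(groups)
    for mult in Counter(counts).values():
        assignments //= factorial(mult)
    return ways * assignments

def group_counts(total, groups):
    return Counter({p: count_ways(p, total, groups)
                    for p in partitions(total, groups)})
```

<p>For example, (0, 1, 8) gives 9!/(0!·1!·8!) = 9 arrangements of the balls times 3!/(1!·1!·1!) = 6 assignments of the counts to groups, i.e. 54, matching the brute force.</p>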
|
<python><math><combinatorics>
|
2024-01-21 12:41:46
| 3
| 3,930
|
Ξένη Γήινος
|
77,854,629
| 12,200,808
|
How to check out tensorflow 2.15.0.post1
|
<p>The <code>tensorflow</code> repository's tags only include "<code>v2.15.0 --> Nov 15, 2023</code>", but PyPI has "<code>tensorflow == 2.15.0.post1</code>"</p>
<p><a href="https://pypi.org/project/tensorflow/" rel="nofollow noreferrer">https://pypi.org/project/tensorflow/</a></p>
<pre><code>tensorflow == 2.15.0.post1 Released: Dec 6, 2023
</code></pre>
<p><a href="https://pypi.org/project/tensorflow/2.15.0/" rel="nofollow noreferrer">https://pypi.org/project/tensorflow/2.15.0/</a></p>
<pre><code>tensorflow == 2.15.0 Released: Nov 15, 2023
</code></pre>
<p><a href="https://github.com/tensorflow/tensorflow/tags" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/tags</a></p>
<pre><code>v2.15.0 --> Nov 15, 2023
</code></pre>
<p>How to check out the source code version of "<code>tensorflow == 2.15.0.post1</code>"?</p>
|
<python><tensorflow><ubuntu><github>
|
2024-01-21 11:37:16
| 1
| 1,900
|
stackbiz
|
77,854,619
| 2,459,153
|
Python os.py behaves like a package. Why?
|
<p>The following operation works</p>
<pre><code>from os.path import normcase
</code></pre>
<p>As far as I can tell, it shouldn't: <code>os.py</code> is not a directory, so it is not a package. Yet <code>os</code> does contain <code>path</code>, which is itself another module.</p>
<p>Furthermore</p>
<pre><code>>>> import os
>>> os.__package__
''
>>>
$ grep __package__ /usr/lib/python3.10/os.py
</code></pre>
<p>So the module <code>os.py</code> has a <code>__package__</code> attribute, but it doesn't declare this attribute anywhere in its file.</p>
<p>What dark magic is this?</p>
<p>The reason I care is that I'm going very deep into import mechanisms: I'm developing a context manager that takes care of <code>sys.path</code> while importing.</p>
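<p>The non-dark-magic answer, sketched below: <code>os.py</code> itself imports <code>posixpath</code> (or <code>ntpath</code>) as <code>path</code> and registers it in <code>sys.modules['os.path']</code>; the import system consults <code>sys.modules</code> before doing any filesystem lookup, so <code>from os.path import ...</code> resolves without <code>os</code> ever being a package. Any module can pull the same trick (<code>pkgless</code> below is a made-up name):</p>

```python
import os
import sys
import types

# os.py does (roughly): import posixpath as path; sys.modules['os.path'] = path
assert sys.modules['os.path'] is os.path

# Reproduce the trick with a synthetic module pair (names are illustrative):
pkgless = types.ModuleType("pkgless")
sub = types.ModuleType("pkgless.sub")
sub.value = 42
pkgless.sub = sub                     # attribute access: pkgless.sub
sys.modules["pkgless"] = pkgless
sys.modules["pkgless.sub"] = sub      # this is what makes `from pkgless.sub import ...` work

from pkgless.sub import value         # found via sys.modules, no filesystem lookup
```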
|
<python><import><module><package>
|
2024-01-21 11:35:38
| 0
| 326
|
Francis Cagney
|
77,854,431
| 1,543,290
|
How to set the quality of an mp3 Codec with PyAV
|
<p>I wrote some simple code to create an .mp3 file from any audio input, using PyAV. It's working (pasted at the end of the question). However, when using <code>ffmpeg</code> it's possible to set the quality of the .mp3 file, and I'd like to do this in my code as well. According to the <a href="https://ffmpeg.org/ffmpeg-codecs.html#libmp3lame-1" rel="nofollow noreferrer">ffmpeg documentation</a>:</p>
<blockquote>
<p>q (-V)</p>
<p>Set constant quality setting for VBR. This option is valid only using the ffmpeg command-line tool. For library interface users, use <code>global_quality</code>.</p>
</blockquote>
<p>So, the question is how do I use <code>global_quality</code> with PyAV?</p>
<p>I found it at <a href="https://pyav.org/docs/develop/development/includes.html" rel="nofollow noreferrer">PyAV documentation</a>, listed as <code>Wrapped C Types and Functions</code>, under <code>struct AVCodecContext</code>, but I still don't understand how to use this.</p>
<p>I tried creating an <code>AudioCodecContext</code> (which is the closest thing to AVCodecContext I found) with <code>c = av.Codec('mp3').create()</code>, but it doesn't seem to have a <code>global_quality</code> field.</p>
<p>My existing function:</p>
<pre class="lang-py prettyprint-override"><code>def encode_audio_to_mp3(input_file_path, output_file_path):
    input_container = av.open(input_file_path)
    output_container = av.open(output_file_path, 'w')
    output_stream = output_container.add_stream('mp3')
    for in_packet in input_container.demux():
        for in_frame in in_packet.decode():
            for out_packet in output_stream.encode(in_frame):
                output_container.mux(out_packet)
    # Flush stream
    for packet in output_stream.encode():
        output_container.mux(packet)
    output_container.close()
    input_container.close()
</code></pre>
|
<python><ffmpeg><mp3><codec><pyav>
|
2024-01-21 10:32:12
| 1
| 1,722
|
Zvika
|
77,854,397
| 11,748,924
|
Pandas resample signal series with its corresponding label
|
<p>I have this table with these columns: <strong>Seconds, Amplitude, Labels, Metadata</strong>.
Basically, it's an ECG signal.</p>
<p>You can download the csv here:
<a href="https://tmpfiles.org/3951223/question.csv" rel="nofollow noreferrer">https://tmpfiles.org/3951223/question.csv</a></p>
<p>As you see, the timestep is <strong>0.004</strong> seconds. How can I resample this to a new timestep, such as <strong>0.002</strong>, without destroying the other columns?</p>
<p>For example, <strong>label_encoding</strong> is intended as the machine-learning <em>y label</em> for a multiclass classification problem; it marks segmentation regions. Its unique values are (24, 1, 27).</p>
<p><strong>bound_or_peak</strong> is intended for displaying or plotting the region. It consists of 3 bits (the maximum value is 7). If the most significant bit is set, it marks the start of a region to plot (onset). If the second bit is set, it must be a peak of the ECG signal wave. If the least significant bit is set, it must be the end of a region to plot (offset).</p>
<p>Here is the table produced by this code:</p>
<pre><code>%load_ext google.colab.data_table
import numpy as np
import pandas as pd
# Create a NumPy matrix with row and column labels
matrix_data = signal.signal_arr
dtype_dict = {'seconds': float, 'amplitude': float, 'label_encoding': int, 'bound_or_peak': int}
# Convert the NumPy matrix to a Pandas DataFrame with labels
df = pd.DataFrame(matrix_data, columns=dtype_dict.keys()).astype(dtype_dict)
# Display the DataFrame
df[:250]
</code></pre>
<p>What I mean by <em>without destroying the other columns</em> is: after resampling, columns such as <strong>labels</strong> and <strong>bound_or_peak</strong> should stay aligned as in the original df, while <strong>amplitude</strong> should be interpolated, ideally linearly.</p>
<p>Actually, I have an idea: ignore the <strong>seconds</strong> column. That column can be compressed into a single value, the <strong>sampling frequency</strong>. Converting the timestep to a sampling frequency seems like a good idea: <strong>0.004</strong> means <strong>1/0.004</strong>, so the sampling frequency is <strong>250</strong> Hz.</p>
<p>Now the problem is how to resample or interpolate the amplitude to another sampling frequency without destroying the other columns.</p>
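<p>A sketch of the reindex-and-interpolate route, assuming the goal is: linear interpolation for <strong>amplitude</strong>, the region label carried over from the preceding original sample, and the marker bits left at 0 on inserted samples. The tiny <code>df</code> below is a stand-in with the same columns as the table:</p>

```python
import numpy as np
import pandas as pd

# Tiny stand-in for the real table (same columns; values made up for the demo).
df = pd.DataFrame({
    'seconds': [0.000, 0.004, 0.008],
    'amplitude': [0.035, 0.060, 0.065],
    'label_encoding': [0, 24, 24],
    'bound_or_peak': [0, 4, 0],
})

old_step, new_step = 0.004, 0.002
factor = int(round(old_step / new_step))            # 2: 250 Hz -> 500 Hz
# Build the new grid from sample counts, not running float sums, and round the
# time axis to dodge artifacts like 0.036000000000000004.
new_seconds = np.round(np.arange(factor * (len(df) - 1) + 1) * new_step, 9)

work = df.set_index(df['seconds'].round(9)).drop(columns='seconds')
full = work.reindex(np.union1d(work.index, new_seconds))
full['amplitude'] = full['amplitude'].interpolate(method='index')    # linear in time
full['label_encoding'] = full['label_encoding'].ffill().astype(int)  # carry region label
full['bound_or_peak'] = full['bound_or_peak'].fillna(0).astype(int)  # markers stay put
out = full.loc[new_seconds].rename_axis('seconds').reset_index()
```

<p><code>method='index'</code> makes the interpolation linear in actual time rather than in row position, so the same code also works for non-integer resampling ratios.</p>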
<p>Update:
As a commenter said, I should use a textual representation of the table instead of a picture:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>index</th>
<th>seconds</th>
<th>amplitude</th>
<th>label_encoding</th>
<th>bound_or_peak</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0.0</td>
<td>0.035</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>1</td>
<td>0.004</td>
<td>0.06</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>0.008</td>
<td>0.065</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>0.012</td>
<td>0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>4</td>
<td>0.016</td>
<td>0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>5</td>
<td>0.02</td>
<td>0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>6</td>
<td>0.024</td>
<td>0.065</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>7</td>
<td>0.028</td>
<td>0.065</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>8</td>
<td>0.032</td>
<td>0.065</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>9</td>
<td>0.036000000000000004</td>
<td>0.07</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>10</td>
<td>0.04</td>
<td>0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>11</td>
<td>0.044</td>
<td>0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>12</td>
<td>0.048</td>
<td>0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>13</td>
<td>0.052000000000000005</td>
<td>0.07</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>14</td>
<td>0.056</td>
<td>0.065</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>15</td>
<td>0.06</td>
<td>0.065</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>16</td>
<td>0.064</td>
<td>0.065</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>17</td>
<td>0.068</td>
<td>0.065</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>18</td>
<td>0.07200000000000001</td>
<td>0.065</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>19</td>
<td>0.076</td>
<td>0.06</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>20</td>
<td>0.08</td>
<td>0.055</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>21</td>
<td>0.084</td>
<td>0.04</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>22</td>
<td>0.088</td>
<td>0.03</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>23</td>
<td>0.092</td>
<td>0.015</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>24</td>
<td>0.096</td>
<td>0.0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>25</td>
<td>0.1</td>
<td>-0.01</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>26</td>
<td>0.10400000000000001</td>
<td>-0.02</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>27</td>
<td>0.108</td>
<td>-0.03</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>28</td>
<td>0.112</td>
<td>-0.04</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>29</td>
<td>0.116</td>
<td>-0.05</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>30</td>
<td>0.12</td>
<td>-0.06</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>31</td>
<td>0.124</td>
<td>-0.07</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>32</td>
<td>0.128</td>
<td>-0.08</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>33</td>
<td>0.132</td>
<td>-0.09</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>34</td>
<td>0.136</td>
<td>-0.095</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>35</td>
<td>0.14</td>
<td>-0.09</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>36</td>
<td>0.14400000000000002</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>37</td>
<td>0.148</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>38</td>
<td>0.152</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>39</td>
<td>0.156</td>
<td>-0.09</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>40</td>
<td>0.16</td>
<td>-0.095</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>41</td>
<td>0.164</td>
<td>-0.09</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>42</td>
<td>0.168</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>43</td>
<td>0.17200000000000001</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>44</td>
<td>0.176</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>45</td>
<td>0.18</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>46</td>
<td>0.184</td>
<td>-0.08</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>47</td>
<td>0.188</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>48</td>
<td>0.192</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>49</td>
<td>0.196</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>50</td>
<td>0.2</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>51</td>
<td>0.20400000000000001</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>52</td>
<td>0.20800000000000002</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>53</td>
<td>0.212</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>54</td>
<td>0.216</td>
<td>-0.07</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>55</td>
<td>0.22</td>
<td>-0.065</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>56</td>
<td>0.224</td>
<td>-0.06</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>57</td>
<td>0.228</td>
<td>-0.055</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>58</td>
<td>0.232</td>
<td>-0.055</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>59</td>
<td>0.23600000000000002</td>
<td>-0.055</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>60</td>
<td>0.24</td>
<td>-0.065</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>61</td>
<td>0.244</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>62</td>
<td>0.248</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>63</td>
<td>0.252</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>64</td>
<td>0.256</td>
<td>-0.07</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>65</td>
<td>0.26</td>
<td>-0.065</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>66</td>
<td>0.264</td>
<td>-0.06</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>67</td>
<td>0.268</td>
<td>-0.06</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>68</td>
<td>0.272</td>
<td>-0.07</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>69</td>
<td>0.276</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>70</td>
<td>0.28</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>71</td>
<td>0.28400000000000003</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>72</td>
<td>0.28800000000000003</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>73</td>
<td>0.292</td>
<td>-0.07</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>74</td>
<td>0.296</td>
<td>-0.06</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>75</td>
<td>0.3</td>
<td>-0.06</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>76</td>
<td>0.304</td>
<td>-0.07</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>77</td>
<td>0.308</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>78</td>
<td>0.312</td>
<td>-0.08</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>79</td>
<td>0.316</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>80</td>
<td>0.32</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>81</td>
<td>0.324</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>82</td>
<td>0.328</td>
<td>-0.08</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>83</td>
<td>0.332</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>84</td>
<td>0.336</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>85</td>
<td>0.34</td>
<td>-0.08</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>86</td>
<td>0.34400000000000003</td>
<td>-0.085</td>
<td>24</td>
<td>4</td>
</tr>
<tr>
<td>87</td>
<td>0.34800000000000003</td>
<td>-0.08</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>88</td>
<td>0.352</td>
<td>-0.075</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>89</td>
<td>0.356</td>
<td>-0.06</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>90</td>
<td>0.36</td>
<td>-0.045</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>91</td>
<td>0.364</td>
<td>-0.035</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>92</td>
<td>0.368</td>
<td>-0.025</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>93</td>
<td>0.372</td>
<td>-0.025</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>94</td>
<td>0.376</td>
<td>-0.025</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>95</td>
<td>0.38</td>
<td>-0.02</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>96</td>
<td>0.384</td>
<td>-0.015</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>97</td>
<td>0.388</td>
<td>-0.01</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>98</td>
<td>0.392</td>
<td>-0.005</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>99</td>
<td>0.396</td>
<td>0.005</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>100</td>
<td>0.4</td>
<td>0.02</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>101</td>
<td>0.404</td>
<td>0.035</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>102</td>
<td>0.40800000000000003</td>
<td>0.045</td>
<td>24</td>
<td>2</td>
</tr>
<tr>
<td>103</td>
<td>0.41200000000000003</td>
<td>0.05</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>104</td>
<td>0.41600000000000004</td>
<td>0.055</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>105</td>
<td>0.42</td>
<td>0.05</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>106</td>
<td>0.424</td>
<td>0.035</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>107</td>
<td>0.428</td>
<td>0.015</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>108</td>
<td>0.432</td>
<td>-0.005</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>109</td>
<td>0.436</td>
<td>-0.035</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>110</td>
<td>0.44</td>
<td>-0.05</td>
<td>24</td>
<td>0</td>
</tr>
<tr>
<td>111</td>
<td>0.444</td>
<td>-0.065</td>
<td>24</td>
<td>1</td>
</tr>
<tr>
<td>112</td>
<td>0.448</td>
<td>-0.08</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>113</td>
<td>0.452</td>
<td>-0.09</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>114</td>
<td>0.456</td>
<td>-0.095</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>115</td>
<td>0.46</td>
<td>-0.09</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>116</td>
<td>0.464</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>117</td>
<td>0.468</td>
<td>-0.09</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>118</td>
<td>0.47200000000000003</td>
<td>-0.095</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>119</td>
<td>0.47600000000000003</td>
<td>-0.095</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>120</td>
<td>0.48</td>
<td>-0.095</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>121</td>
<td>0.484</td>
<td>-0.1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>122</td>
<td>0.488</td>
<td>-0.105</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>123</td>
<td>0.492</td>
<td>-0.105</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>124</td>
<td>0.496</td>
<td>-0.105</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>125</td>
<td>0.5</td>
<td>-0.105</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>126</td>
<td>0.504</td>
<td>-0.115</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>127</td>
<td>0.508</td>
<td>-0.115</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>128</td>
<td>0.512</td>
<td>-0.11</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>129</td>
<td>0.516</td>
<td>-0.105</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>130</td>
<td>0.52</td>
<td>-0.105</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>131</td>
<td>0.524</td>
<td>-0.105</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>132</td>
<td>0.528</td>
<td>-0.095</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>133</td>
<td>0.532</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>134</td>
<td>0.536</td>
<td>-0.09</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>135</td>
<td>0.54</td>
<td>-0.095</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>136</td>
<td>0.544</td>
<td>-0.09</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>137</td>
<td>0.548</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>138</td>
<td>0.552</td>
<td>-0.08</td>
<td>1</td>
<td>4</td>
</tr>
<tr>
<td>139</td>
<td>0.556</td>
<td>-0.075</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>140</td>
<td>0.56</td>
<td>-0.08</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>141</td>
<td>0.5640000000000001</td>
<td>-0.07</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>142</td>
<td>0.5680000000000001</td>
<td>-0.025</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>143</td>
<td>0.5720000000000001</td>
<td>0.075</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>144</td>
<td>0.5760000000000001</td>
<td>0.25</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>145</td>
<td>0.58</td>
<td>0.54</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>146</td>
<td>0.584</td>
<td>0.96</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>147</td>
<td>0.588</td>
<td>1.41</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>148</td>
<td>0.592</td>
<td>1.885</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>149</td>
<td>0.596</td>
<td>1.735</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>150</td>
<td>0.6</td>
<td>1.09</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>151</td>
<td>0.604</td>
<td>0.35</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>152</td>
<td>0.608</td>
<td>-0.455</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>153</td>
<td>0.612</td>
<td>-0.725</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>154</td>
<td>0.616</td>
<td>-0.705</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>155</td>
<td>0.62</td>
<td>-0.54</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>156</td>
<td>0.624</td>
<td>-0.315</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>157</td>
<td>0.628</td>
<td>-0.195</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>158</td>
<td>0.632</td>
<td>-0.115</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>159</td>
<td>0.636</td>
<td>-0.09</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>160</td>
<td>0.64</td>
<td>-0.08</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>161</td>
<td>0.644</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>162</td>
<td>0.648</td>
<td>-0.08</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>163</td>
<td>0.652</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>164</td>
<td>0.656</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>165</td>
<td>0.66</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>166</td>
<td>0.664</td>
<td>-0.08</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>167</td>
<td>0.668</td>
<td>-0.08</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>168</td>
<td>0.672</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>169</td>
<td>0.676</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>170</td>
<td>0.68</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>171</td>
<td>0.684</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>172</td>
<td>0.6880000000000001</td>
<td>-0.065</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>173</td>
<td>0.6920000000000001</td>
<td>-0.07</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>174</td>
<td>0.6960000000000001</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>175</td>
<td>0.7000000000000001</td>
<td>-0.07</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>176</td>
<td>0.704</td>
<td>-0.065</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>177</td>
<td>0.708</td>
<td>-0.06</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>178</td>
<td>0.712</td>
<td>-0.055</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>179</td>
<td>0.716</td>
<td>-0.05</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>180</td>
<td>0.72</td>
<td>-0.045</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>181</td>
<td>0.724</td>
<td>-0.04</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>182</td>
<td>0.728</td>
<td>-0.035</td>
<td>27</td>
<td>4</td>
</tr>
<tr>
<td>183</td>
<td>0.732</td>
<td>-0.035</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>184</td>
<td>0.736</td>
<td>-0.035</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>185</td>
<td>0.74</td>
<td>-0.035</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>186</td>
<td>0.744</td>
<td>-0.035</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>187</td>
<td>0.748</td>
<td>-0.03</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>188</td>
<td>0.752</td>
<td>-0.02</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>189</td>
<td>0.756</td>
<td>-0.01</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>190</td>
<td>0.76</td>
<td>-0.005</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>191</td>
<td>0.764</td>
<td>0.0</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>192</td>
<td>0.768</td>
<td>0.005</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>193</td>
<td>0.772</td>
<td>0.005</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>194</td>
<td>0.776</td>
<td>0.005</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>195</td>
<td>0.78</td>
<td>0.01</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>196</td>
<td>0.784</td>
<td>0.025</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>197</td>
<td>0.788</td>
<td>0.04</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>198</td>
<td>0.792</td>
<td>0.045</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>199</td>
<td>0.796</td>
<td>0.05</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>200</td>
<td>0.8</td>
<td>0.055</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>201</td>
<td>0.804</td>
<td>0.055</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>202</td>
<td>0.808</td>
<td>0.055</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>203</td>
<td>0.812</td>
<td>0.06</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>204</td>
<td>0.8160000000000001</td>
<td>0.065</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>205</td>
<td>0.8200000000000001</td>
<td>0.07</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>206</td>
<td>0.8240000000000001</td>
<td>0.085</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>207</td>
<td>0.8280000000000001</td>
<td>0.1</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>208</td>
<td>0.8320000000000001</td>
<td>0.105</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>209</td>
<td>0.836</td>
<td>0.105</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>210</td>
<td>0.84</td>
<td>0.11</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>211</td>
<td>0.844</td>
<td>0.115</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>212</td>
<td>0.848</td>
<td>0.12</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>213</td>
<td>0.852</td>
<td>0.125</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>214</td>
<td>0.856</td>
<td>0.12</td>
<td>27</td>
<td>2</td>
</tr>
<tr>
<td>215</td>
<td>0.86</td>
<td>0.115</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>216</td>
<td>0.864</td>
<td>0.115</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>217</td>
<td>0.868</td>
<td>0.115</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>218</td>
<td>0.872</td>
<td>0.115</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>219</td>
<td>0.876</td>
<td>0.115</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>220</td>
<td>0.88</td>
<td>0.115</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>221</td>
<td>0.884</td>
<td>0.115</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>222</td>
<td>0.888</td>
<td>0.115</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>223</td>
<td>0.892</td>
<td>0.115</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>224</td>
<td>0.896</td>
<td>0.11</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>225</td>
<td>0.9</td>
<td>0.105</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>226</td>
<td>0.904</td>
<td>0.1</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>227</td>
<td>0.908</td>
<td>0.09</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>228</td>
<td>0.912</td>
<td>0.07</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>229</td>
<td>0.916</td>
<td>0.05</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>230</td>
<td>0.92</td>
<td>0.035</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>231</td>
<td>0.924</td>
<td>0.015</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>232</td>
<td>0.928</td>
<td>-0.005</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>233</td>
<td>0.932</td>
<td>-0.02</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>234</td>
<td>0.936</td>
<td>-0.03</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>235</td>
<td>0.9400000000000001</td>
<td>-0.04</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>236</td>
<td>0.9440000000000001</td>
<td>-0.05</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>237</td>
<td>0.9480000000000001</td>
<td>-0.055</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>238</td>
<td>0.9520000000000001</td>
<td>-0.06</td>
<td>27</td>
<td>0</td>
</tr>
<tr>
<td>239</td>
<td>0.9560000000000001</td>
<td>-0.07</td>
<td>27</td>
<td>1</td>
</tr>
<tr>
<td>240</td>
<td>0.96</td>
<td>-0.08</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>241</td>
<td>0.964</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>242</td>
<td>0.968</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>243</td>
<td>0.972</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>244</td>
<td>0.976</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>245</td>
<td>0.98</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>246</td>
<td>0.984</td>
<td>-0.08</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>247</td>
<td>0.988</td>
<td>-0.075</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>248</td>
<td>0.992</td>
<td>-0.08</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>249</td>
<td>0.996</td>
<td>-0.085</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table></div>
|
<python><pandas><dataframe><numpy><pandas-resample>
|
2024-01-21 10:19:00
| 1
| 1,252
|
Muhammad Ikhwan Perwira
|
77,854,336
| 5,798,365
|
Why None is printed when a slice of the list is sorted
|
<p>The following code</p>
<pre><code>L = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(L[2:8].sort(reverse=True))
</code></pre>
<p>prints <code>None</code> while I expect it to print <code>L = [1, 2, 8, 7, 6, 5, 4, 3, 9, 10]</code></p>
<ol>
<li><p>Why is it so?</p>
</li>
<li><p>If that's not the way to sort part of a list (a slice), what is, given that I need to sort it within the same list rather than create another one?</p>
</li>
</ol>
|
<python><sorting><slice>
|
2024-01-21 09:57:46
| 1
| 861
|
alekscooper
|
77,854,081
| 3,555,115
|
Filter lines based on specific event and create a dictionary
|
<p>I have a log file which contains timestamps and events</p>
<pre><code> Mon Jan 15 22:16:46 PST 2024 /vol/vol1 Sorting file records (120 entries)
Mon Jan 15 22:16:46 PST 2024 /vol/vol2 Sorting file records (120 entries)
Mon Jan 15 22:16:46 PST 2024 /vol/vol1 Sorting text records (120 entries)
Mon Jan 15 22:16:46 PST 2024 /vol/vol2 Sorting file records (120 entries)
Mon Jan 15 22:16:47 PST 2024 /vol/vol1 Sort (0 entries)
Mon Jan 15 22:16:47 PST 2024 /vol/vol1 Pass (0 entries)
Mon Jan 15 22:16:47 PST 2024 /vol/vol2 Sort (0 entries)
Mon Jan 15 22:16:47 PST 2024 /vol/vol2 Pass (0 entries)
**Mon Jan 15 22:16:47 PST 2024 /vol/vol1 ( Filetering start )**
Mon Jan 15 22:51:46 PST 2024 /vol/vol1 Sorting file records (121 entries)
Mon Jan 15 22:56:46 PST 2024 /vol/vol1 Sorting text records (122 entries)
**Mon Jan 15 22:56:47 PST 2024 /vol/vol2 ( Filetering start )**
Mon Jan 15 22:56:47 PST 2024 /vol/vol1 Sort (0 entries)
Mon Jan 15 22:57:47 PST 2024 /vol/vol1 Pass (0 entries)
</code></pre>
<p>I am only interested in parsing the lines that come after the "Filetering start" event for each /vol/* instance, and creating a dictionary of events keyed by instance name.
For example:</p>
<pre><code>For /vol/vol1:
Mon Jan 15 22:51:46 PST 2024 /vol/vol1 Sorting file records (121 entries)
Mon Jan 15 22:56:46 PST 2024 /vol/vol1 Sorting text records (122 entries)
For /vol/vol2:
No lines, since no events are logged after filtering start.
</code></pre>
<p>I tried to open the file and build the dictionary of events like this:</p>
<pre><code>r = {
    'vol': [],
    'Sorting file': [],
}
with open(file) as fh:
    for line in fh:
        if "Sorting file records" in line:
            parts = line.split()
            r['Sorting file'].append(parts[3])  # key must match the one defined above
            for i in parts:
                if "/vol/" in i:
                    r['vol'].append(i)
</code></pre>
<p>But the dictionary picks up events from before the filtering start as well, and I am not sure how to exclude those entries. Any suggestions, please?</p>
|
<python><dataframe><dictionary>
|
2024-01-21 08:27:02
| 1
| 750
|
user3555115
|
77,853,969
| 8,763,290
|
Why does timestamp wrap around for `libevdev` events?
|
<p>I've been playing around with <a href="https://python-libevdev.readthedocs.io/en/latest/" rel="nofollow noreferrer">python-libevdev</a> and noticed that event timestamps seem to wrap around. For example, when I ran the following script:</p>
<pre class="lang-py prettyprint-override"><code>import libevdev as ev

with open('/dev/input/by-path/my-event-kbd', 'rb') as fd:
    device = ev.Device(fd)
    device.grab()
    for e in device.events():
        if e.matches(ev.EV_KEY):
            if e.value == 0:
                print('release', e.usec)
            elif e.value == 1:
                print('press  ', e.usec)
</code></pre>
<p>I got:</p>
<pre><code>press 285047
release 898527
press 383116
release 828000
press 164149
release 295935
press 929637
release 65158
press 593924
release 750227
press 81770
release 210363
press 533884
release 652184
press 949230
release 81242
</code></pre>
<p>and so on. The <a href="https://python-libevdev.readthedocs.io/en/latest/libevdev.html#libevdev.event.InputEvent.usec" rel="nofollow noreferrer">docs</a> for <code>InputEvent.usec</code> just say:</p>
<blockquote>
<p>The timestamp, microseconds</p>
</blockquote>
<p>Can anyone explain why/how the values wrap around? How would I use these values to calculate the time elapsed between a key being pressed and then released?</p>
|
<python><linux><linux-kernel><keyboard><evdev>
|
2024-01-21 07:48:40
| 1
| 449
|
jth
|
77,853,940
| 8,243,936
|
Python socket connection was forcibly closed by the remote host, what is problem?
|
<p>I have some <em>GPS Tracker Devices</em> that send their state to the server when I send the <strong>"LOAD"</strong> command. Everything works and I get a response, <strong>but after 1 minute I get this error</strong>:</p>
<blockquote>
<p>An existing connection was forcibly closed by the remote host</p>
</blockquote>
<p>Can someone help me solve this?</p>
<p><strong>here is my server codes :</strong></p>
<pre><code>import datetime
import socket
import threading

host = "0.0.0.0"
port = 4018
connedcted_devices = set()
output_imei_list = set()


def get_clients(sock, conn):
    _msg = "LOAD"
    message = 'message ' + str(_msg)
    encodedMessage = bytes(message, 'utf-8')

    output_imei = conn.recv(1024)
    print(output_imei)
    if output_imei in output_imei_list:
        print("Connection already exists")
    else:
        output_imei_list.add(output_imei)
        # print(output_imei_list)
        connedcted_devices.add(conn)
        print("New connection added to cnn_list")
    conn.sendall(encodedMessage)


def receive_response(conn):
    # counter = 0
    while True:
        # if counter == 50:
        #     break
        # conn.settimeout(60)  # Set timeout to 60 seconds
        encodedAckText = conn.recv(1024)
        ackText = encodedAckText.decode('utf-8')
        splitedTextList = ackText.split(",")
        list_len = len(splitedTextList)
        if list_len > 0:
            print(splitedTextList)
            # counter += 1
            # print(counter)


def main():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.bind((host, port))
        print('socket binded')
        sock.listen()
        print('socket now listening')
        while True:
            conn, addr = sock.accept()
            get_clients(sock, conn)
            if len(connedcted_devices) > 0:
                for x in connedcted_devices:
                    threading.Thread(target=receive_response, args=(x,)).start()


if __name__ == '__main__':
    main()
</code></pre>
|
<python><sockets>
|
2024-01-21 07:34:07
| 0
| 750
|
NEBEZ
|
77,853,853
| 485,330
|
Error while importing python-whois on AWS Lambda
|
<p>I'm trying to run a script that uses <strong>python-whois</strong> and I'm getting an error. The package was previously installed locally using "pip3 install python-whois -t ." and then uploaded in the zip file to AWS Lambda. I'm using Python v3.12.</p>
<pre><code>import json
import whois

def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': event
    }
</code></pre>
<p><strong>The error message:</strong></p>
<pre><code>[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': No module named 'imp'
Traceback (most recent call last):
</code></pre>
|
<python><aws-lambda><whois>
|
2024-01-21 06:58:06
| 3
| 704
|
Andre
|
77,853,620
| 7,652,266
|
How to make the level of subpackage up when import subpackge in main script?
|
<p>I want to move a subpackage up a level so the main script can use a simpler import path. That is, I want to import sub2 directly as pack.sub2 rather than pack.sub1.sub2.
I made the following directory structure and modules.</p>
<p>pack/sub1/sub2/mod.py</p>
<p>pack/__init__.py</p>
<pre><code>from .sub1 import sub2
</code></pre>
<p>In Ipython,</p>
<pre><code>import pack
dir(pack)
import pack.sub1
import pack.sub2
</code></pre>
<p>Error message:</p>
<pre><code>No module named 'pack.sub2'
</code></pre>
<p>dir(pack) shows 'sub1' and 'sub2', but import pack.sub2 fails while import pack.sub1 works as expected. In other contexts, such as between subpackages inside a library, this might work, but here it does not. What is Python's underlying rule in this respect? And how can the main script import the subpackage via a shorter path?</p>
|
<python><relative-path>
|
2024-01-21 04:40:26
| 1
| 563
|
Joonho Park
|
77,853,378
| 2,910,279
|
Pandas dataframe aggregation on dynamic groups
|
<p>I have a dataframe like this:</p>
<pre><code>Date Value
2023-05-20 2
2023-05-22 4
2023-05-22 3
2023-05-24 5
</code></pre>
<p>There may be missing days, and also days with multiple entries.
How do I get a list of <em>all</em> days (between the earliest and latest given day) with the cumulative sum of values "so far"? In this example the result should look like:</p>
<pre><code>Date ValueCum
2023-05-20 2
2023-05-21 2
2023-05-22 9
2023-05-23 9
2023-05-24 14
</code></pre>
|
<python><pandas>
|
2024-01-21 01:59:15
| 1
| 1,451
|
Henning
|
77,853,361
| 7,824,238
|
Plotly express scatter with facet_col does not use correct colors - what I am doing wrong?
|
<p>The following code first shows one scatter plot with mixed positive and negative data:
negative data (around -100) at the bottom left, positive data (around +100) at the top right.</p>
<p><a href="https://i.sstatic.net/bkCHa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bkCHa.png" alt="plot with -100 and +100 data in one plot" /></a></p>
<p>The second plot uses the <code>facet_col</code> feature of the plotly express scatter function to separate positive and negative values in two subplots. Somehow the result seems wrong as the negative data uses the whole color palette from -100 until +100 (red AND blue) . I would have expected that the color of the points in the left subplot would only be red but not blue/red.</p>
<p><a href="https://i.sstatic.net/9yRmN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9yRmN.png" alt="plot with -100 and +100 data in two subplots" /></a></p>
<p><code>plotly.__version__= 5.18.0</code></p>
<p>What am I doing wrong?</p>
<pre><code>import pandas as pd
import plotly.express as px
import numpy as np
import plotly
print("plotly.__version__=", plotly.__version__)
np.random.seed(1)
# configuration for first array
mean0 = np.array([0., 0.])
cov0 = np.array([[1., 0.], [0., 1.]])
size0 = 10000
print("size0=", size0)
# configuration for second array
mean1 = np.array([10., 10.])
cov1 = np.array([[.5, 0.], [0., .5]])
size1 = 100
# build first array
vals0 = np.random.multivariate_normal(mean0, cov0, size0)
# append another column to the right of the array
vals0 = np.append(vals0, [[-1] for x in range(size0)], axis=1)
# fill new column with randomized data (negative values)
vals0[:, 2] = -100.0 + 0.2 * np.random.random(size0)
# build second array
vals1 = np.random.multivariate_normal(mean1, cov1, size1)
# append another column to the right of the array
vals1 = np.append(vals1, [[-1] for x in range(size1)], axis=1)
# fill new column with randomized data (positive values)
vals1[:, 2] = 100.0 - 0.2 * np.random.random(size1)
# combine first and second array
vals2 = np.append(vals0, vals1, axis=0)
# convert numpy array to pandas DataFrame
df = pd.DataFrame(vals2, columns=['x', 'y', 'z'])
df['type'] = df.z.apply(lambda z: 'negative' if z < 0 else "positive")
fig1 = px.scatter(df, x='x', y='y', color='z', color_continuous_scale=["red", "blue", ])
fig2 = px.scatter(df, x='x', y='y', color='z', facet_col='type', color_continuous_scale=["red", "blue", ])
fig1.show()
fig2.show()
</code></pre>
|
<python><plotly><plotly-express>
|
2024-01-21 01:52:06
| 1
| 420
|
7824238
|
77,853,347
| 1,667,018
|
Python generator yielding from nested non-generator function
|
<p>This is a dumb example based on a more complex thing that I want to do:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generator

def f() -> Generator[list[int], None, None]:
    result = list()

    result.append(1)
    if len(result) == 2:
        yield result
        result = list()

    result.append(2)
    if len(result) == 2:
        yield result
        result = list()

    result.append(3)
    if len(result) == 2:
        yield result
        result = list()

    result.append(4)
    if len(result) == 2:
        yield result
        result = list()

print(list(f()))
</code></pre>
<p>The point here is that this bit is copied multiple times:</p>
<pre class="lang-py prettyprint-override"><code>if len(result) == 2:
    yield result
    result = list()
</code></pre>
<p>Normally, I'd change it into something like this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generator

def f() -> Generator[list[int], None, None]:
    def add_one(value: int) -> None:
        nonlocal result
        result.append(value)
        if len(result) == 2:
            nonlocal_yield result
            result = list()

    result = list()
    add_one(1)
    add_one(2)
    add_one(3)
    add_one(4)

print(list(f()))
</code></pre>
<p>Obviously, <code>nonlocal_yield</code> is not a thing. Is there an elegant way to achieve this?</p>
<p>I know that I can just create the full list of results, i.e., <code>[[1, 2], [3, 4]]</code>, and then either return it or <code>yield</code> individual 2-element sublists. Something like this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generator

def f() -> list[list[int]]:
    def add_one(value: int) -> None:
        nonlocal current
        current.append(value)
        if len(current) == 2:
            result.append(current)
            current = list()

    result = list()
    current = list()
    add_one(1)
    add_one(2)
    add_one(3)
    add_one(4)
    return result

print(list(f()))
</code></pre>
<p>However, this defeats the purpose of a generator. I'll go for it in the absence of a better solution, but I'm curious whether there is a "pure" generator way to do it.</p>
|
<python><generator><nested-function>
|
2024-01-21 01:44:56
| 1
| 3,815
|
Vedran Šego
|
77,853,142
| 7,563,454
|
Conditional variable assignment with numpy array
|
<p>I'm migrating my Python-based engine to NumPy in hopes of better performance. Previously I used my own vector3 class, with which I could write conditional variable assignments in the following form: <code>settings["size"]</code> is the provided array if present in the settings dictionary, otherwise <code>0,0,0</code> is the fallback.</p>
<pre><code>self.size = "size" in settings and settings["size"] or vec3(0, 0, 0)
</code></pre>
<p>Now I am instead using:</p>
<pre><code>self.size = "size" in settings and settings["size"] or np.array([0, 0, 0])
</code></pre>
<p>But this introduces a new error:</p>
<blockquote>
<p>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</p>
</blockquote>
<p>What is the correct way of doing inline conditional assignments with numpy arrays, where the value is used if it exists and a default is assigned if not?</p>
|
<python><numpy>
|
2024-01-20 23:46:29
| 1
| 1,161
|
MirceaKitsune
|
77,853,003
| 15,452,898
|
Marking the results of code inside initial dataframe in python
|
<p>Currently I'm performing calculations on a database that contains information on transactions. It is a big dataset that consumes a lot of resources, and I have just run into the question of how to optimize my current solution.</p>
<p>My initial dataframe looks like this:</p>
<pre><code>Name ID ContractDate LoanSum Status
A ID1 2022-10-10 10 Closed
A ID1 2022-10-15 13 Active
A ID1 2022-10-30 20 Active
B ID2 2022-11-05 30 Active
C ID3 2022-12-10 40 Closed
C ID3 2022-12-12 43 Active
C ID3 2022-12-19 46 Active
D ID4 2022-12-10 10 Closed
D ID4 2022-12-12 30 Active
</code></pre>
<p>I have to create a dataframe that contains all loans issued to specific borrowers (grouped by ID) where the number of days between two loans (assigned to one unique ID) is less than 15 and the difference between loan sums issued to one specific borrower is less than or equal to 3.</p>
<p>My solution:</p>
<pre><code>from pyspark.sql import functions as f
from pyspark.sql import Window

df = spark.createDataFrame(data).toDF('Name','ID','ContractDate','LoanSum','Status')
df.show()

cols = df.columns
w = Window.partitionBy('ID').orderBy('ContractDate')

new_df = df.withColumn('PreviousContractDate', f.lag('ContractDate').over(w)) \
    .withColumn('PreviousLoanSum', f.lag('LoanSum').over(w)) \
    .withColumn('Target', f.expr('datediff(ContractDate, PreviousContractDate) < 15 and LoanSum - PreviousLoanSum <= 3')) \
    .withColumn('Target', f.col('Target') | f.lead('Target').over(w)) \
    .filter('Target == True') \
    .select(cols[0], *cols[1:])
+----+---+------------+-------+------+
|Name| ID|ContractDate|LoanSum|Status|
+----+---+------------+-------+------+
| A|ID1| 2022-10-10| 10|Closed|
| A|ID1| 2022-10-15| 13|Active|
| C|ID3| 2022-12-10| 40|Closed|
| C|ID3| 2022-12-12| 43|Active|
| C|ID3| 2022-12-19| 46|Active|
+----+---+------------+-------+------+
</code></pre>
<p>As you can see, my results are stored in a separate table. My next goal is to remove dataframe “new_df” from the initial dataframe “df” in order to work with the remaining rows.</p>
<p>If I use the obvious solution, the system is very slow, especially when I have to subtract dataframes one by one on each step:</p>
<pre><code>df_sub = df.subtract(new_df)
</code></pre>
<p>My question: is it possible (and if so, how) to avoid creating a new dataframe, and instead separate the rows of new_df inside the original dataframe df? Perhaps by marking those rows in a special way, e.g. by creating a new column, so the rows needed for further analysis can be filtered later?</p>
<p>Thank you in advance!</p>
|
<python><pyspark><subset><data-manipulation>
|
2024-01-20 22:38:10
| 1
| 333
|
lenpyspanacb
|
77,852,795
| 1,005,409
|
Cross-Environment ECDSA Signature Verification Fails Between React and Python
|
<p>I'm working on a project where I need to generate an ECDSA signature in a React application and then verify it in a Python backend. The signature generation and verification work within their respective environments, but the signature generated in React fails verification in Python. I'm using the elliptic library in React and the cryptography library in Python.</p>
<p>React Code (Signature Generation):</p>
<pre><code>import { useEffect } from 'react';
import { ec as EC } from 'elliptic';

const App = () => {
  useEffect(() => {
    const ec = new EC('secp256k1');

    const private_key_hex = "cf63f7ffe346cd800e431b34bdbd45f6aac3c2ac6055ac18195753aff9b9cce8";
    const message_hash_hex = "2f8fc7172db7dcbd71ec70c83263db33a54ff761b02a54480a8d07b9c633d651";

    const privateKey = ec.keyFromPrivate(private_key_hex, 'hex');
    const signature = privateKey.sign(message_hash_hex, 'hex', { canonical: true });
    const signature_der_hex = signature.toDER('hex');
    console.log("Signature (DER hex):", signature_der_hex);
  }, []);

  return <div><h1>ECDSA Signature in React</h1></div>;
};

export default App;
</code></pre>
<p>yielding:</p>
<pre><code>Signature (DER hex): 3045022100e5c678f346cdd180815912e580c27d9d70a4a2e71ab6cfb2bdaedfbf4cdf24cc02200bba6ae9b3bb25c886b5cc8549ac6796438f295e91320a1d705f17e25cb7199b
</code></pre>
<p>And here the part where I am trying to verify the signature in python:</p>
<pre><code>from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.backends import default_backend

def verify_signature(public_key_hex, signature_der_hex, message_hash_hex):
    public_key_bytes = bytes.fromhex(public_key_hex)
    try:
        public_key = ec.EllipticCurvePublicKey.from_encoded_point(ec.SECP256K1(), public_key_bytes)
    except Exception as e:
        print(f"Error loading public key: {e}")
        return False

    hashed_message = bytes.fromhex(message_hash_hex)
    try:
        signature_bytes = bytes.fromhex(signature_der_hex)
        public_key.verify(signature_bytes, hashed_message, ec.ECDSA(hashes.SHA256()))
        return True
    except Exception as e:
        print(f"Verification failed: {e}")
        return False

public_key_hex = "03aa4fcbf0792ce77a3be9415bf96bff99a832466f56ecba3bb8ce630877de2701"
# Replace with signature from React
signature_der_hex = "3045022100e5c678f346cdd180815912e580c27d9d70a4a2e71ab6cfb2bdaedfbf4cdf24cc02200bba6ae9b3bb25c886b5cc8549ac6796438f295e91320a1d705f17e25cb7199b"
message_hash_hex = "2f8fc7172db7dcbd71ec70c83263db33a54ff761b02a54480a8d07b9c633d651"

verification_result = verify_signature(public_key_hex, signature_der_hex, message_hash_hex)
print("Verification:", verification_result)
</code></pre>
<p><strong>Problem:</strong>
When I use the signature generated by the React app in the Python verification code, the verification fails. However, when I run the signing and verification process separately within each environment, it succeeds.</p>
<p><strong>What I've Tried:</strong></p>
<ul>
<li>Ensuring that the message hash is identical in both environments.</li>
<li>Converting the signature to DER format in React before sending it to Python (as per above code)</li>
<li>Aligning public key formats between React and Python.</li>
</ul>
<p>I suspect the issue might be with how the signature is encoded or with some nuances in how the libraries handle ECDSA operations. Any insights or suggestions on how to make these two environments compatible would be greatly appreciated.</p>
|
<javascript><python><reactjs><cryptography><ecdsa>
|
2024-01-20 21:19:50
| 1
| 6,037
|
Muppet
|
77,852,757
| 873,650
|
python interpreter incompatible with packages versions in requirements.txt
|
<p>I have a <code>requirements.txt</code> file, but my Python version is not compatible with the required package versions. What is the smartest way to upgrade the package versions without changing too much of the original setup?</p>
<p><strong>I am following a book that has those reqs and I'm afraid of new versions may lead to unexpected code behavior or incompatibilities.</strong></p>
<p>I have already upgraded pip.
I don't want to up/downgrade python itself.</p>
<p>requirements.txt not compatible with <code>python 3.10</code>:</p>
<pre><code>numpy==1.21.2
scipy==1.7.0
matplotlib==3.4.3
sklearn==1.0
pandas==1.3.2
</code></pre>
<p>Errors:</p>
<pre><code># pip install numpy==1.21.2
Collecting numpy==1.21.2
Downloading numpy-1.21.2.zip (10.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.3/10.3 MB 10.7 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: numpy
Building wheel for numpy (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for numpy (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [198 lines of output]
setup.py:63: RuntimeWarning: NumPy 1.21.2 may not yet support Python 3.10.
</code></pre>
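<p>One hedged way out, if sticking with Python 3.10: relax the exact pins to minimum-version ranges so pip can resolve releases that ship 3.10 wheels, and note that the PyPI name for scikit-learn is <code>scikit-learn</code>, not <code>sklearn</code> (the <code>sklearn</code> package is a deprecated placeholder). For example:</p>

```
numpy>=1.21.2
scipy>=1.7.0
matplotlib>=3.4.3
scikit-learn>=1.0
pandas>=1.3.2
```

<p>Minor releases within the same major version rarely change behavior enough to break a book's examples, but adding an upper bound (e.g. <code>numpy>=1.21.2,<2</code>) keeps the risk of surprises low.</p>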
|
<python><numpy>
|
2024-01-20 21:09:29
| 0
| 4,421
|
Fernando Fabreti
|
77,852,422
| 2,829,961
|
How to install and load polars in Python?
|
<p>Based on <a href="https://stackoverflow.com/a/74859637/2829961">this</a> stackoverflow answer, I installed <code>polars-lts-cpu</code> after uninstalling <code>polars</code> but I can't get it to work:</p>
<p><a href="https://i.sstatic.net/3OtNa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3OtNa.png" alt="enter image description here" /></a></p>
<p>How do I import <code>polars-lts-cpu</code>?</p>
<h2>Edit:</h2>
<p>I tried the suggestion in the comment but got the same issue:</p>
<pre><code>(homl3) C:\Users\umair>python -m pip install polars-lts-cpu
Requirement already satisfied: polars-lts-cpu in c:\users\umair\anaconda3\envs\homl3\lib\site-packages (0.20.5)
(homl3) C:\Users\umair>python
Python 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:00:38) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import polars as pl
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'polars'
</code></pre>
|
<python><python-polars>
|
2024-01-20 19:25:28
| 1
| 6,319
|
umair durrani
|
77,852,333
| 2,692,339
|
How to run multiple inferences in parallel on CPU?
|
<p>I've got some models implemented in PyTorch, where their performance is evaluated on a custom platform (wrapper around Pytorch, keeping overall interface).</p>
<p>This is really slow however: testing 10k CIFAR10 takes almost 30mins on a single CPU. My cloud farm has no GPU available, but is highly CPU-oriented with load of memory available. Thus I'm thinking about spawning multiple threads/processes to parallelize these inference tests.</p>
<p>I know this is not as trivial with Python due to GIL and Pytorch resource model; from some research I found <code>torch.multiprocessing.Pool</code>.</p>
<p>Is it the best way? How could I deploy say <code>N</code> inference tasks on <code>N</code> CPUs, and then collect the results into an array? I wonder whether some <code>torch.device</code> info must be handled or is done automatically.</p>
<p>Something like:</p>
<pre><code>for task in inference_tasks:
    p = spawn(process)
    accuracy = inference(model, p)
    ....
    # collect results
    results.append(accuracy)
</code></pre>
<p><strong>Edit:</strong> the inference predictions are all <strong>independent</strong> from each other. The <code>DataLoader</code> could be copied and fed to each process to do the inference, then collect all the results.</p>
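<p>As a hedged sketch of the pattern (not the author's code): shard the independent predictions across N worker processes and gather per-shard results. <code>run_inference</code> here is a hypothetical stand-in; a real PyTorch version would rebuild or deserialize the model inside each worker and evaluate its shard of the <code>DataLoader</code>.</p>

```python
import os
from concurrent.futures import ProcessPoolExecutor

def run_inference(chunk):
    # Hypothetical stand-in for per-shard model evaluation; a real version
    # would load the model here and return that shard's accuracy.
    return sum(chunk)

def run_parallel(data, n_workers=None):
    n_workers = n_workers or min(4, os.cpu_count() or 1)
    # Independent predictions -> one shard of the dataset per worker
    shards = [data[i::n_workers] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as ex:
        # map() preserves shard order, so results line up with shards
        return list(ex.map(run_inference, shards))

if __name__ == "__main__":
    results = run_parallel(list(range(100)))
    print(results)
```

<p>No explicit <code>torch.device</code> handling should be needed for CPU-only workers; each process simply runs its own inference on CPU.</p>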
|
<python><deep-learning><pytorch><inference>
|
2024-01-20 19:00:19
| 1
| 8,538
|
edmz
|
77,852,211
| 17,889,328
|
Pycharm+Protocols - how to silence spurious 'Getter should return or yield something'
|
<pre><code>class MyProtocol(Protocol):
@property
def my_property(self) -> str:
...
class MyClass(MyProtocol):
@property
def my_property(self) -> str:
# the actual implementation is here
</code></pre>
<p>PyCharm complains that the protocol getter doesn't return values, because it's a protocol....</p>
<p>same as <a href="https://stackoverflow.com/questions/69417586/how-to-define-a-property-as-part-of-a-protocol">here</a> but its been years no solution</p>
<p>also <a href="https://youtrack.jetbrains.com/issue/PY-40180/Getter-should-return-or-yield-something-warning-shown-for-Protocol-properties" rel="nofollow noreferrer">here</a> but 4 years with no solution</p>
<p>I'd like PyCharm to fix their product, but in the meantime does anyone know a <code># noqa</code>-style suppression code I can use?</p>
|
<python><pycharm><protocols><static-analysis>
|
2024-01-20 18:27:26
| 1
| 704
|
prosody
|
77,852,152
| 2,363,342
|
Generating 8th note tuplets using music21
|
<p>I'm using music21 to add a given sequence of notes to arbitrary tuplet divisions. For example, given "CEGBCAFD" in the time signature of 2/4, I want to create measures with quintuplets (e.g., 5 over 2) that go over that sequence.</p>
<p>Here's the python code I came up with to generate the tuplet:</p>
<pre><code>for i, note_name in enumerate(notes):
    # Create a note
    n = note.Note(note_name)
    # Set note duration to eighth and within a tuplet
    n.duration = duration.Duration('eighth')
    # Create tuplet if needed and add it to note's duration
    if tuplet_counter % rhythm == 0:
        tup = duration.Tuplet(numberNotesActual=rhythm, numberNotesNormal=2, type='quarter')
        tup.setDurationType('eighth')  # Set the tuplet's note type to eighth
        n.duration.appendTuplet(tup)
    # Add accent if the note is in the accents list
    if note_name in accents:
        n.articulations.append(articulations.Accent())
    # measure.makeBeams(inPlace=True)
    measure.append(n)
    tuplet_counter += 1
    # If we reach the rhythm count or the end of the notes, reset the counter and append the measure
    if tuplet_counter == rhythm or note_name == notes[-1]:
        part.append(measure)
        measure = stream.Measure()  # Create a new measure for the next group
        tuplet_counter = 0
        if (i + 1) < len(notes):  # If there are more notes, set the attributes for the new measure
            measure.append(key.Key(key_signature))
            measure.append(meter.TimeSignature(time_signature))

# Add the part to the score
s.append(part)
</code></pre>
<p>It does a fine job of creating 5 note tuplets, but the XML it generates makes them 16th notes for some reason, which creates a 10 over 2 feel rather than a 5 over 2 feel. How can I get it to give me 5 8th notes per measure in a single tuplet?</p>
<p>XML snippet:</p>
<pre><code> <note>
<pitch>
<step>C</step>
<octave>4</octave>
</pitch>
<duration>2016</duration>
<type>16th</type>
<time-modification>
<actual-notes>5</actual-notes>
<normal-notes>4</normal-notes>
<normal-type>16th</normal-type>
</time-modification>
<beam number="1">begin</beam>
</code></pre>
|
<python><music21><musicxml>
|
2024-01-20 18:09:44
| 0
| 1,109
|
Lee Hampton
|
77,852,109
| 630,556
|
Why does Ruff not show as many diagnostics as I expect?
|
<p>I have installed the Ruff extension and enabled it in VS Code, but it doesn't seem to be underlining my code at all and providing suggestions like my previous linters did.</p>
<p>I did a clean install of VS Code, so most of the Python/Ruff extension settings are default. Is there an additional step I need to take to get it to start underlining my code and providing recommendations?</p>
<p><a href="https://i.sstatic.net/6drKw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6drKw.png" alt="" /></a></p>
<p>It's highlighting the imports for not being used, but I would expect other things to be highlighted like the line length, the additional spaces at the end of the file, not having 2 spaces before function declaration, etc.</p>
<p>Here is the sample code as requested:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
print('kkkkkkkkjlskdjflksdjflsdjflkdsjflksdjflkjdslkfjsdlkjflsdjflsdjfldsjflsdkjflsdjflksdjflksdjflksdjflskdjflsdkjfklsdjkl')
def test_func(x):
    y=x+1
    return y
</code></pre>
|
<python><visual-studio-code><ruff>
|
2024-01-20 17:54:48
| 5
| 1,363
|
Nick Nelson
|
77,852,050
| 11,741,232
|
Forever loop in Blender scripting?
|
<p>If you add</p>
<pre class="lang-py prettyprint-override"><code>while True:
    print("hi")
</code></pre>
<p>to an otherwise functional script in Blender's scripting window and run it, Blender hangs.</p>
<p>This is undesirable because I'm trying to use Redis as a pub-sub broker, such that another script can send positions for Blender to display:</p>
<pre class="lang-py prettyprint-override"><code>bpy.ops.mesh.primitive_uv_sphere_add(radius=0.005, location=(0, 0, 0))
ball = bpy.context.object
ball.name = 'sphere'

for message in pubsub.listen():
    if message['type'] == 'message':
        position_str = message['data'].decode('utf-8')
        position = ast.literal_eval(position_str)
        print(f"Received position: {position}")
        ball.location = position
</code></pre>
<p>This forever loop also hangs Blender and doesn't move the ball as I want.</p>
<p>Is there any way to get this to work?</p>
<p>Update:</p>
<p>I tried doing it in a subprocess, it no longer hangs but it doesn't create a sphere either:</p>
<pre class="lang-py prettyprint-override"><code>def update_sphere_from_redis():
    r = redis.StrictRedis(host='localhost', port=6379, db=0)
    pubsub = r.pubsub()
    pubsub.subscribe('positions')

    bpy.ops.mesh.primitive_uv_sphere_add(radius=0.005, location=(0, 0, 0))
    ball = bpy.context.object
    ball.name = 'sphere'

    for message in pubsub.listen():
        if message['type'] == 'message':
            position_str = message['data'].decode('utf-8')
            position = ast.literal_eval(position_str)
            print(f"Received position: {position}")
            ball.location = position


process = multiprocessing.Process(target=update_sphere_from_redis)
# Start the process
process.start()
</code></pre>
|
<python><redis><blender>
|
2024-01-20 17:36:26
| 2
| 694
|
kevinlinxc
|
77,852,033
| 182,737
|
Enable ruff rules only on specific files
|
<p>I work on a large project and I'd slowly like to enfore pydocstyle using ruff. However, many files will fail on e.g. D103 "undocumented public function". I'd like to start with enforcing it on a few specific files, so I'd like to write something like</p>
<pre><code>select = ["D"]
[tool.ruff.ignore-except-per-file] # this config does not exist
# ignore D103 but on all files, except the ones that pass
"properly_formatted_module1.py" = ["D103"]
"properly_formatted_module2.py" = ["D103"]
</code></pre>
<p>I don't think this is possible; the only way I see is to explicitly write down ALL of the file names in a <code>[tool.ruff.extend-per-file-ignores]</code>. There are a few hunderd of them so that's not really nice to do.</p>
|
<python><ruff>
|
2024-01-20 17:31:14
| 2
| 1,229
|
Frank Meulenaar
|
77,851,947
| 16,728,369
|
Could not build wheels for Pillow, typed-ast, which is required to install pyproject.toml-based projects
|
<p>I'm trying to run a old project here is my requirements.txt</p>
<pre><code>asgiref==3.4.1
astroid==2.3.3
certifi==2019.11.28
chardet==3.0.4
Django==4.0.1
django-cleanup==5.2.0
idna==2.8
isort==4.3.21
lazy-object-proxy==1.4.3
mccabe==0.6.1
Pillow==9.0.0
requests==2.22.0
six==1.13.0
sqlparse==0.4.2
sslcommerz-python==0.0.7
typed-ast==1.4.0
tzdata==2021.5
urllib3==1.25.7
wrapt==1.11.2
</code></pre>
<p>After asking ChatGPT, I also downloaded the Windows Software Development Kit with Microsoft Visual C++ 2015-2022 Redistributable (x64, x86).
Here are the packages that already exist in my venv:</p>
<pre><code>Package Version
---------- -------
pip 23.3.2
setuptools 65.5.0
typed-ast 1.5.5
wheel 0.42.0
</code></pre>
<p>i tried to run both command</p>
<pre><code>pip install -r requirements.txt
pip install -r requirements.txt --use-pep517
</code></pre>
<p>but the error occurs</p>
<pre><code> error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.38.33130\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for typed-ast
Failed to build Pillow typed-ast
ERROR: Could not build wheels for Pillow, typed-ast, which is required to install pyproject.toml-based projects
</code></pre>
<p>ask me if any other info is needed.</p>
|
<python><pip><python-imaging-library><install.packages>
|
2024-01-20 17:08:40
| 1
| 469
|
Abu RayhaN
|
77,851,918
| 11,865,149
|
opencv- contourArea doesn't work as expected
|
<p>I followed <a href="https://pyimagesearch.com/2015/04/20/sorting-contours-using-python-and-opencv/" rel="nofollow noreferrer">this opencv guide</a> to sort the contours in the first image by size.</p>
<p>I calculated an edge map (the result is the middle image):</p>
<pre><code># load the image and initialize the accumulated edge image
image = cv2.imread("legos.png")
accumEdged = np.zeros(image.shape[:2], dtype="uint8")

# loop over the blue, green, and red channels, respectively
for chan in cv2.split(image):
    # blur the channel, extract edges from it, and accumulate the set
    # of edges for the image
    chan = cv2.medianBlur(chan, 11)
    edged = cv2.Canny(chan, 50, 200)
    accumEdged = cv2.bitwise_or(accumEdged, edged)

# show the accumulated edge map
cv2.imwrite("images/Edge_Map.png", accumEdged)
</code></pre>
<p>Then I calculated and drew some contours from it:</p>
<pre><code>def draw_contour(image, c, color):
    # compute the center of the contour area
    M = cv2.moments(c)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    # draw the contour and its area on the image
    cv2.drawContours(image, [c], -1, color, 2)
    cv2.putText(image, str(cv2.contourArea(c)), (cX - 10, cY - 15), cv2.FONT_HERSHEY_SIMPLEX, 0.65, color, 2)

# find contours in the accumulated image
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
sorted_cnts = sorted(cnts, key=cv2.contourArea, reverse=True)

draw_contour(image, sorted_cnts[0], (0, 0, 0))        # forth brick
draw_contour(image, sorted_cnts[4], (80, 0, 80))      # small circle
draw_contour(image, sorted_cnts[6], (255, 255, 255))  # third brick
</code></pre>
<p>the problem is that the contours are not sorted (the way I thought). I included the first, fifth and seventh contours along with their areas. The fifth (area 229) is clearly smaller than the seventh (area 130).</p>
<p>I would like to know what cv2.contourArea does in this case.</p>
<p><a href="https://i.sstatic.net/REXdW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/REXdW.png" alt="initial image" /></a>
<a href="https://i.sstatic.net/dApkP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dApkP.png" alt="edged image" /></a>
<a href="https://i.sstatic.net/8Ex26.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Ex26.png" alt="final image" /></a></p>
|
<python><opencv><computer-vision><canny-operator>
|
2024-01-20 17:02:26
| 1
| 393
|
benjamin
|
77,851,840
| 7,155,895
|
Frame that resizes by adding any widget inside
|
<p>I have a behavior that is happening, that I don't know how to find a solution to. The solution is probably right before my eyes, but I can't find it, despite my searches.</p>
<p>The problem is this, I have a frame with a certain size, in which I created a grid inside. Until that, there are no problems.</p>
<p><a href="https://i.sstatic.net/qJhNq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qJhNq.png" alt="enter image description here" /></a></p>
<p>The problem happens the moment I put something inside the grid, whatever it is (in this case, two buttons)</p>
<p><a href="https://i.sstatic.net/Y5Tuf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y5Tuf.png" alt="enter image description here" /></a></p>
<p>The code is this:</p>
<pre><code>import tkinter
from engine import th


class Eframe(tkinter.Frame):
    def __init__(self, master: any, height, width, **kwargs):
        super().__init__(master=master, **kwargs)
        self.height = height
        self.width = width
        self.EframeGen()

    def EframeGen(self):
        theme = th.FrameTheme
        self.configure(background=theme["bg"][0])
        self.configure(highlightbackground="gray28")
        self.configure(highlightthickness=3)
        self.configure(height=self.height)
        self.configure(width=self.width)


class file():
    def __init__(self, engine):
        self.engine = engine
        self.engine.updateClass.append(self)
        self.widget = []

    def frameFile(self):
        self.engine.buttonLoad.configure(state="disabled")
        self.height = self.engine.FrameOperation.winfo_height()
        self.width = self.engine.FrameOperation.winfo_width()
        self.MainFrame = Eframe(self.engine.FrameOperation, self.height, self.width)
        self.widget.append(self.MainFrame)
        self.MainFrame.pack(pady=(20, 10), padx=30)
        self.MainFrame.columnconfigure(0, weight=1)
        self.MainFrame.columnconfigure(1, weight=3)
        frame1 = tkinter.Button(self.MainFrame, background="white")
        frame1.grid(column=0, row=0, padx=5, pady=5)
        frame2 = tkinter.Button(self.MainFrame, background="red")
        frame2.grid(column=1, row=0, padx=5, pady=5)

    def Update(self):
        if len(self.widget) >= 1:
            for wid in self.widget:
                wid.update()
</code></pre>
<p>So, in itself, the posted code is "reproducible", if modified. To explain better:</p>
<p>engine = This is the parent class, where the main code of the program is.</p>
<p>self.engine.updateClass = It refers to a list that the parent class references to update all classes after every certain amount of time. To be clear, it calls the Update function of each class contained in the list.</p>
<p>self.engine.buttonLoad = This is the button that calls the frameFile function of this class, only to be disabled. Not necessary to reproduce the code.</p>
<p>self.engine.FrameOperation = It refers to the main frame in which the MainFrame is contained, which I'm having problems with. In theory, you can replace it with root.</p>
<p><a href="https://i.sstatic.net/yj2tq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yj2tq.png" alt="enter image description here" /></a></p>
<p>Can you help me with this problem? Which I'm sure has a simple solution, which I don't see though. If the code isn't right, I'll try to put it better so I can reproduce it.</p>
<p>Final note, I've modified the code several times, reading different systems to create the grid correctly, but it still comes out this way, so I definitely don't understand how it works yet.</p>
|
<python><tkinter><python-3.11>
|
2024-01-20 16:41:09
| 1
| 579
|
Rebagliati Loris
|
77,851,795
| 19,672,778
|
CNN outputs nan Loss after first epoch
|
<p>I was training my CNN model with my data and after i made some adjustments (i changed sigmoid activations into relu on convolutions and added learning rate decay function) i get NAN loss and NAN mae after first epoch. can anyone tell me what should i do?</p>
<pre class="lang-py prettyprint-override"><code>def scheduler(epoch, lr):
    return 1 / (1 + (1 / math.e) * epoch) * lr

lr_schedule = tf.keras.callbacks.LearningRateScheduler(scheduler)

with tpu_strategy.scope():
    modelCNN = tf.keras.Sequential()
    modelCNN.add(tf.keras.layers.Conv2D(100, (8, 8), strides=(2, 2), activation='relu', input_shape=(256, 256, 1)))
    modelCNN.add(tf.keras.layers.BatchNormalization())
    modelCNN.add(tf.keras.layers.MaxPooling2D(pool_size=(8, 8), strides=(3, 3)))
    modelCNN.add(tf.keras.layers.Conv2D(100, (4, 4), strides=(1, 1), activation='relu'))
    modelCNN.add(tf.keras.layers.BatchNormalization())
    modelCNN.add(tf.keras.layers.GlobalMaxPooling2D())
    modelCNN.add(tf.keras.layers.Dense(100, activation='relu'))
    modelCNN.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), loss='mean_absolute_error', metrics=['mae'])

modelCNN.fit(X_train, Y_train, epochs=96, validation_data=(X_test, Y_test), callbacks=[lr_schedule])
</code></pre>
<pre><code>Epoch 1/96 8127/8127 [==============================] - 232s 26ms/step - loss: 64.8304 - mae: 64.8304 - val_loss: nan - val_mae: nan - lr: 0.0100
Epoch 2/96 3448/8127 [===========>..................] - ETA: 1:12 - loss: nan - mae: nan
</code></pre>
<p><strong>UPDATE #1</strong>
Removing BatchNormalization Layers Fixed the problem , but the mae was decreasing too slow and incosistently. how can i target this problem?</p>
|
<python><tensorflow><deep-learning><conv-neural-network>
|
2024-01-20 16:26:19
| 1
| 319
|
NikoMolecule
|
77,851,693
| 10,037,034
|
how to change config file by using hydra?
|
<p>I was trying to use Hydra to manage my config files, but it won't change anything. For example, this is my code:</p>
<pre><code>from omegaconf import DictConfig, OmegaConf
import hydra


@hydra.main(version_base=None, config_path=".", config_name="config")
def main(config: DictConfig) -> None:
    print(OmegaConf.to_yaml(config))
    print(f"Output directory : {hydra.core.hydra_config.HydraConfig.get().runtime.output_dir}")


if __name__ == '__main__':
    main()
</code></pre>
<p>And I am trying to add to or change my config file. The config file is in the same folder.</p>
<pre><code>db:
driver: mysql
user: omry
password: secret
</code></pre>
<p>Command :</p>
<blockquote>
<p>python main.py db.user=root</p>
</blockquote>
<p>Result</p>
<pre><code>db:
driver: mysql
user: root
password: secret
</code></pre>
<p>But if I look at <code>config.yaml</code>, I can't see that it changed; it's still the same. How can I fix this?</p>
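<p>For what it's worth, this appears to be expected behavior: a command-line override like <code>db.user=root</code> changes the composed config in memory for that one run; the YAML file on disk is not rewritten. A pure-Python sketch of what the override does at runtime (names here are illustrative, not Hydra internals):</p>

```python
def apply_override(cfg: dict, dotted: str):
    # Minimal illustration of what `db.user=root` does: it mutates the
    # in-memory config for this run, not the YAML file on disk.
    path, value = dotted.split("=", 1)
    keys = path.split(".")
    node = cfg
    for k in keys[:-1]:
        node = node[k]
    node[keys[-1]] = value

cfg = {"db": {"driver": "mysql", "user": "omry", "password": "secret"}}
apply_override(cfg, "db.user=root")
print(cfg["db"]["user"])  # root
```

<p>If you actually want the change persisted, you would have to save the composed config back to a file yourself inside <code>main()</code>.</p>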
|
<python><config><fb-hydra>
|
2024-01-20 15:59:50
| 1
| 1,311
|
Sevval Kahraman
|
77,851,648
| 1,711,271
|
Given a dataframe with 3 columns, return a list of lists of all the unique values of 'C', for each distinct pair of 'A' and 'B'
|
<p>I have the following dataframe:</p>
<pre><code> foo bar baz
0 1234_312_GCD 1234 312
1 1234_312_GCD 1234 312
2 1234_312_GCD 1234 312
3 1234_312_GCD 1234 312
4 1234_312_GCD 1234 312
5 0777_666_lcm 0777 666
6 0777_666_lcm 0777 666
7 0777_666_lcm 0777 666
8 0777_666_lcm 0777 666
9 0777_666_lcm 0777 666
10 1234_312_lcm 1234 312
11 1234_312_lcm 1234 312
12 1234_312_lcm 1234 312
13 0777_666_GCD 0777 666
14 0777_666_GCD 0777 666
15 0777_666_GCD 0777 666
16 0777_666_GCD 0777 666
</code></pre>
<p>I want to return a list of lists, where each list contains all the distinct values of <code>foo</code>
corresponding to a pair of distinct values of <code>bar</code> and <code>baz</code>. For the example dataframe, this would be a list of two lists, each containing two strings:</p>
<pre><code>foo_list = [['1234_312_GCD', '1234_312_lcm'], [0777_666_lcm', '0777_666_GCD']]
</code></pre>
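<p>A hedged sketch of one way to get there with a groupby (column names follow the dataframe shown, <code>foo</code>/<code>bar</code>/<code>baz</code>, rather than the A/B/C of the title; the data below is a small subset for illustration):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "foo": ["1234_312_GCD", "1234_312_GCD", "0777_666_lcm",
            "1234_312_lcm", "0777_666_GCD"],
    "bar": ["1234", "1234", "0777", "1234", "0777"],
    "baz": ["312", "312", "666", "312", "666"],
})

foo_list = (
    df.groupby(["bar", "baz"], sort=False)["foo"]
      .unique()       # order-preserving uniques within each (bar, baz) group
      .apply(list)
      .tolist()
)
print(foo_list)
```

<p><code>sort=False</code> keeps the groups in first-appearance order rather than sorted key order.</p>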
|
<python><pandas><string><group-by>
|
2024-01-20 15:50:20
| 2
| 5,726
|
DeltaIV
|
77,851,504
| 1,469,954
|
Python unable to terminate spawned child processes by terminating parent process
|
<p>I have a master script (Python) from where I want to spawn several child scripts. The child scripts should run independently without bothering each other (just writing to a file in a loop). What I want - when the master script is terminated, so should all the spawned child processes.</p>
<p>However, even though <code>CTRL-C</code> kills the master process and control is returned to the command prompt, the child processes keep running; I can see the individual output files getting updated continually. Any idea on how to make this work?</p>
<p>Child script</p>
<pre><code>import sys
from time import sleep

arg = sys.argv[1]
sleepTime = int(sys.argv[2])

while True:
    with open(f"{arg}.txt", "w") as f:
        f.write(arg)
    sleep(sleepTime)
</code></pre>
<p>Master script</p>
<pre><code>import sys, subprocess, os, signal
from time import sleep

d = ['fool', 'gool', 'lool']

for i in range(3):
    try:
        subprocess.Popen(['python3', 'child.py', d[i], str(i + 3)], stdin=subprocess.PIPE)
    except KeyboardInterrupt:
        sys.exit(1)
</code></pre>
<p>What I run in command line to kick off everything:</p>
<pre><code>python3 master.py
</code></pre>
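<p>One common pattern (a sketch, not the only way): keep the <code>Popen</code> handles in the parent and terminate them on exit, registering the cleanup both with <code>atexit</code> and as a <code>SIGINT</code> handler so Ctrl-C tears the children down with the parent:</p>

```python
import atexit
import signal
import subprocess
import sys

children = []

def spawn(cmd):
    p = subprocess.Popen(cmd)
    children.append(p)
    return p

def cleanup(*_):
    # Terminate every child that is still running, then reap it
    for p in children:
        if p.poll() is None:
            p.terminate()
    for p in children:
        try:
            p.wait(timeout=5)
        except subprocess.TimeoutExpired:
            p.kill()

atexit.register(cleanup)
# Run cleanup on Ctrl-C as well, then exit with an error code
signal.signal(signal.SIGINT, lambda *a: (cleanup(), sys.exit(1)))

# Demo child: would sleep far longer than the parent lives
child = spawn([sys.executable, "-c", "import time; time.sleep(60)"])
cleanup()
```

<p>In the original script this would replace the bare <code>subprocess.Popen(...)</code> calls with <code>spawn(...)</code>.</p>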
|
<python><subprocess><popen><sigint>
|
2024-01-20 15:01:29
| 1
| 5,353
|
NedStarkOfWinterfell
|
77,851,446
| 10,859,114
|
missing data when recording audio on raspberry pi pico
|
<p>I wanted to record audio using a <code>max9814</code> and a <code>raspberry pi pico</code> with <code>micropython/circuitpython</code>, but some parts of the recorded data are lost, because the code tries to read samples and write them to a .wav file and it cannot handle both simultaneously.</p>
<p>the circuit is like below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>RPI PICO</th>
<th>MAX9814</th>
</tr>
</thead>
<tbody>
<tr>
<td>GP26</td>
<td>OUT</td>
</tr>
<tr>
<td>3v3</td>
<td>VDD</td>
</tr>
<tr>
<td>GND</td>
<td>GND</td>
</tr>
</tbody>
</table>
</div>
<p>my code is below:</p>
<pre class="lang-py prettyprint-override"><code>import board
import analogio
import time
import adafruit_wave
import struct
import storage

# storage.remount("/", readonly=False)

adc = analogio.AnalogIn(board.A0)
conversion_factor = 3.3 / 4096
buffer = []

f = adafruit_wave.open("audio.wav", "w")
f.setnchannels(1)
f.setsampwidth(2)
f.setframerate(8000)


def save(data):
    global f
    f.writeframes(bytearray(data))
    print("done")


try:
    while True:
        sample = adc.value * conversion_factor
        frame = bytearray(struct.pack('>H', int(sample)))
        buffer.extend(frame)
        if len(buffer) > 16000:
            save(buffer)
            buffer = []
            print("clear buffer")
except KeyboardInterrupt:
    print("closing")
    f.close()
</code></pre>
<p>How can I handle this?</p>
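<p>One direction (a sketch under strong assumptions, not Pico-specific code): decouple sampling from writing with two buffers, so the writer always flushes a buffer the sampler is no longer filling. On the Pico this would typically be paired with the second core, a timer interrupt, or DMA doing the sampling. Here <code>read_sample</code> and <code>write_block</code> are hypothetical stand-ins for <code>adc.value</code> and <code>f.writeframes</code>:</p>

```python
# Fake data source standing in for the ADC
samples = list(range(10))

def read_sample():
    return samples.pop(0) if samples else None

written = []

def write_block(block):
    # Stand-in for f.writeframes(bytearray(block))
    written.extend(block)

# Double buffering: keep sampling into `active` while the full buffer
# (`standby` after the swap) is flushed to storage.
active, standby = [], []
while True:
    s = read_sample()
    if s is None:
        break
    active.append(s)
    if len(active) >= 4:
        active, standby = standby, active  # swap buffers
        write_block(standby)               # flush the full one
        standby.clear()

write_block(active)  # flush the tail
print(written)
```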
|
<python><audio><micropython><raspberry-pi-pico><adafruit-circuitpython>
|
2024-01-20 14:41:40
| 0
| 470
|
Alirezaarabi
|
77,851,436
| 4,701,426
|
Any difference between using .pickle/pkl file extension vs no extension when working with pickle files with pandas?
|
<p>I have noticed that all three of these seem to produce the same result (<code>df</code> is a pandas dataframe):</p>
<pre><code>df.to_pickle('df')
df.to_pickle('df.pkl')
df.to_pickle('df.pickle')
</code></pre>
<p>I read <a href="https://stackoverflow.com/questions/42193963/file-extension-naming-p-vs-pkl-vs-pickle">here</a> that there is no difference between .pkl and .pickle but how about using no extension at all like the first one above? Acceptable?</p>
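<p>For what it's worth, the extension is purely cosmetic: pickle writes the same bytes regardless of the file name, so all three names round-trip identically. A quick stdlib-only check (a dict stands in for the dataframe so the sketch has no pandas dependency):</p>

```python
import os
import pickle
import tempfile

obj = {"a": [1, 2, 3]}
results = []
with tempfile.TemporaryDirectory() as d:
    for name in ("df", "df.pkl", "df.pickle"):
        path = os.path.join(d, name)
        with open(path, "wb") as fh:
            pickle.dump(obj, fh)       # same bytes, whatever the name
        with open(path, "rb") as fh:
            results.append(pickle.load(fh))
print(results)
```

<p>The extension only matters for humans and tooling that infer the format from the name.</p>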
|
<python><pandas><pickle>
|
2024-01-20 14:38:38
| 1
| 2,151
|
Saeed
|
77,851,377
| 2,663,442
|
How to make a histogram based on the second column of a large matrix and then calculate the sum efficiently?
|
<p>The matrix is like 4000 columns x 1e8 lines. Now I need to divide the 1e8 lines into 1000 bins based on the 2nd column and sum the lines in every bin. Finally, I would get 4000 columns x 1000 lines.</p>
<pre><code>aa = np.random.rand(10000000, 4000)
l, r = min(aa[:, 1]), max(aa[:, 1])
bins = np.linspace(l, r, 1001)
for bn in range(1000):
    ll, rr = bins[bn], bins[bn + 1]
    np.sum(aa[(aa[:, 1] > ll) & (aa[:, 1] < rr)], axis=0)  # one of the 1000 lines
</code></pre>
<p>I find the speed of the final step above is low.</p>
<p>Then I need to stack all the 1000 lines to generate the 1000 x 4000 matrix.</p>
<pre><code>@njit(parallel=True)
def sumgood(bins, dd):
    results = np.zeros((1000, 4000), dtype=np.float64)
    for bn in range(1000):
        l, r = bins[bn], bins[bn + 1]
        subset = dd[(dd[:, 1] > l) & (dd[:, 1] < r)]  # column > l and column < r
        sum_subset = np.sum(subset, axis=0)
        results[bn] = sum_subset
    return results

binmap = sumgood(bins, dd)

########################

for bn in range(1000):
    l, r = bins[bn], bins[bn + 1]
    exec(f'good{bn}=np.sum(dd[(dd[:,1]>l) & (dd[:,1]<r)],axis=0)')
binmap = np.concatenate([globals()[f'good{bn}'] for bn in range(1000)])
</code></pre>
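<p>A hedged alternative to the per-bin boolean masks (which scan all 1e8 rows once per bin): compute each row's bin index once with <code>np.searchsorted</code>, then accumulate with <code>np.add.at</code> in a single pass over the data. A sketch on a small matrix:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
aa = rng.random((1000, 5))          # small stand-in for the 1e8 x 4000 matrix
nbins = 10

edges = np.linspace(aa[:, 1].min(), aa[:, 1].max(), nbins + 1)
# Bin index of every row, computed once
idx = np.searchsorted(edges, aa[:, 1], side="right") - 1
idx = np.clip(idx, 0, nbins - 1)    # put the max value into the last bin

out = np.zeros((nbins, aa.shape[1]))
np.add.at(out, idx, aa)             # out[idx[i]] += aa[i], row by row
```

<p><code>np.add.at</code> is unbuffered and can itself be slow on huge inputs; sorting by <code>idx</code> and using <code>np.add.reduceat</code> is a common faster variant of the same idea.</p>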
|
<python><numpy><histogram>
|
2024-01-20 14:19:58
| 2
| 763
|
questionhang
|
77,851,371
| 1,028,270
|
Is "Type of "X" is partially unknown" a problem with this library or my own code?
|
<p>I'm using another user's package and getting this linting warning when importing it:</p>
<pre><code># Throws: Type of "task" is partially unknown
from invoke.tasks import task
</code></pre>
<p>Is this an issue with the package's code or is there something I'm expected to do in my code?</p>
<p><a href="https://github.com/pyinvoke/invoke/blob/506bf4e020c177a03cf4257a22969bad0845e4ee/invoke/tasks.py#L289" rel="nofollow noreferrer">Looking at that package</a>:</p>
<pre><code>def task(*args: Any, **kwargs: Any) -> Callable:
    klass: Type[Task] = kwargs.pop("klass", Task)
    # @task -- no options were (probably) given.
    if len(args) == 1 and callable(args[0]) and not isinstance(args[0], Task):
        return klass(args[0], **kwargs)
    # @task(pre, tasks, here)
    if args:
        if "pre" in kwargs:
            raise TypeError(
                "May not give *args and 'pre' kwarg simultaneously!"
            )
        kwargs["pre"] = args

    def inner(body: Callable) -> Task[T]:
        _task = klass(body, **kwargs)
        return _task

    # update_wrapper(inner, klass)
    return inner
</code></pre>
<p>Should they have done something like <code>-> Callable[Any]:</code>?</p>
<p>If it is an issue with the package is there anything I can do in my code apart from telling the type checker to ignore that line?</p>
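<p>On the last question: <code>Callable[Any]</code> isn't a valid form, but the annotation could carry more information. A <code>TypeVar</code> bound to <code>Callable</code> (or <code>ParamSpec</code> in newer typing) keeps the wrapped signature visible to the checker. A minimal sketch, not invoke's actual code:</p>

```python
from typing import Any, Callable, TypeVar

F = TypeVar("F", bound=Callable[..., Any])

def task(fn: F) -> F:
    # Real decorator logic elided; returning fn keeps the sketch runnable.
    # Because the return type is F, the checker still sees build's signature.
    return fn

@task
def build(n: int) -> int:
    return n * 2

print(build(3))
```

<p>Until the package tightens its annotations, suppressing the diagnostic at the import site (or lowering that rule's severity) is about all the consuming code can do.</p>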
|
<python><python-typing>
|
2024-01-20 14:18:49
| 2
| 32,280
|
red888
|
77,851,355
| 12,712,848
|
Routing in Azure Function in Python does not work
|
<p>I'm trying to modify a route name in my function. This is my code (just testing):</p>
<pre class="lang-py prettyprint-override"><code>import logging
import azure.functions as func
from azure.cosmos import CosmosClient
import json


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    return func.HttpResponse(f"Hello world", status_code=200)


# Use the @app.route decorator to define the endpoint for the function
app = func.FunctionApp()


@app.route(route="/search/testing")
def get_customers_by_filters(req: func.HttpRequest) -> func.HttpResponse:
    return main(req)
</code></pre>
<p>I have used <code>func start</code>, and the localhost URL is referencing the folder's name. So, instead of <code>http://localhost:3000/api/search/testing</code>, the URL is <code>http://localhost:3000/api/functiontest</code>. Am I using <code>@app.route()</code> correctly?</p>
<p>My function.json is the following:</p>
<pre class="lang-json prettyprint-override"><code>{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
</code></pre>
|
<python><azure><azure-functions>
|
2024-01-20 14:14:26
| 1
| 841
|
OK 400
|
77,851,112
| 3,402,296
|
Interpolate reciprocal function from a set of points
|
<p>Not sure whether to post this here or on Mathematics. Anyway my problem is that I have a set of points that describe a curve like the following</p>
<p><a href="https://i.sstatic.net/ecmv1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ecmv1.png" alt="enter image description here" /></a></p>
<p>The points end abruptly at <code>x=18</code> and I want to impute how the curve will continue later on. I reckon that the curve descends in a <code>1/x</code> fashion after reaching the initial spike. However I would like to find the exact function that approximates best my points.</p>
<p>Moreover I would like to put an end to the curve, i.e. I want to see the curve if it reaches 0 at <code>x=36</code> or <code>48</code> or <code>60</code> and obtain the equation accordingly.</p>
<p>I tried to solve this problem by using the <code>numpy.Polynomial.fit</code> function but with no luck (I believe that this is because the <code>Polynomial.fit</code> will not impute a reciprocal function). The results I obtained were not satisfactory as shown by the following picture (second degree polynomial). You can see that the curves do not reach the 0 at the desired x (36, 48 and 60) and they also show an abrupt step at the beginning of the interpolation.
<a href="https://i.sstatic.net/2NFNw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2NFNw.png" alt="second degree polynomials as computed by Polynomial.fit" /></a></p>
<p>Is there a method to do so?</p>
|
<python><math><polynomials>
|
2024-01-20 12:59:55
| 1
| 576
|
RDGuida
|
77,851,075
| 1,028,270
|
Unclear on how Pylance(reportOptionalMemberAccess) is useful
|
<p>I've been playing with strict type checking and there is something I don't fully understand.</p>
<p>Consider the following:</p>
<pre><code># mymeth() -> (Result | None)
aaa = derp.mymeth()
# get error: "some_prop" is not a known member of "None"
bbb = json.loads(aaa.some_prop)
return bbb
</code></pre>
<p>I can fix this by doing this:</p>
<pre><code>if aaa:
    bbb = json.loads(aaa.some_prop)
else:
    raise ValueError("mymeth() returned None, can't load as json")
return bbb
</code></pre>
<p>But what am I getting out of this? Am I doing this right? It feels like I'm making more work for myself. If <code>aaa</code> is <code>None</code> then <code>json.loads()</code> will raise itself and tell me that right? Or maybe that stack trace won't be as readable to the user as my own more specific exception?</p>
<p>I'm sure this rule is useful, I'm just looking for an example of why and how I should be properly utilizing it.</p>
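<p>One way to see the value: the check is what lets the type checker <em>narrow</em> the Optional away, and it lets the code fail with a message naming the real cause instead of a generic <code>AttributeError</code> on <code>None.some_prop</code>. A runnable sketch, where <code>mymeth</code> is a hypothetical stand-in for <code>derp.mymeth()</code> returning a JSON string directly:</p>

```python
import json
from typing import Optional

def mymeth() -> Optional[str]:
    # Hypothetical stand-in for derp.mymeth()
    return '{"some_prop": 1}'

aaa = mymeth()
if aaa is None:
    # Fail early, with a message that names the actual cause
    raise ValueError("mymeth() returned None, can't load as json")
# From here on, the checker narrows aaa from "str | None" to "str"
data = json.loads(aaa)
print(data)
```

<p>Without the check, the runtime error would surface one call deeper and one step later, with a stack trace that points at the symptom rather than the source.</p>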
|
<python><python-typing>
|
2024-01-20 12:50:43
| 1
| 32,280
|
red888
|
77,850,728
| 10,013,975
|
Passing keyword arguments with default values in a layered call
|
<p>I am working on a Python library where I want to give some flexibility to the user, and I am facing the challenge below.</p>
<p>Let's say I exposed the class <code>Client</code> to the user, which is defined like below:</p>
<pre><code>class Client:
    def __init__(self, path=None):
        self.file_obj = FileOpener(path=path)
</code></pre>
<p>And my <code>FileOpener</code> class is:</p>
<pre><code>class FileOpener:
    def __init__(self, path='my/default/path'):
        # My logic
        pass
</code></pre>
<p>In above code, the issue is, if the user initialized the <code>Client</code> object as <code>c = Client()</code>, my underlying <code>FileOpener</code> class receives <code>None</code> as the path. But I want that even if <code>None</code> is passed, it should pick the default path.</p>
<p>Here are few solutions for this problem that I came up with, but they are somewhat flawed:</p>
<ol start="0">
<li><p>The most basic approach - filter the arguments and pass only those which are not <code>None</code>. I don't want to put an if-else ladder or create any unnecessary <code>dict</code> to store the values. It's not maintainable/readable, and new coders can mess up easily.</p>
</li>
<li><p>I can define the default value in the <code>Client</code> class itself. If the user doesn't pass anything, it will receive the default value. The issue is that if I have multiple layers and later want to change the default value, I will need to change it everywhere. Hence this approach is not maintainable at all.</p>
</li>
<li><p>The second idea that came to my mind was using <code>**kwargs</code>, which I can pass easily to underlying objects. This will not pass <code>None</code> unless the user explicitly passes it. But now my user doesn't know what arguments my class takes. They will need to go through the docs every time and won't be able to take advantage of their code editor (for example, autocomplete in VS Code).</p>
</li>
</ol>
<p>I want to understand if there is any standard approach for resolving this issue, or any suggestions to get rid of the flaws in my approaches. I want a maintainable solution, so that anyone who comes to contribute in the future can understand it just by looking at the code and won't mess things up.</p>
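For what it's worth, a minimal sketch of the "keep the default in one place, use `None` as the sentinel for not-provided" variant of approach 0. This is purely illustrative (`DEFAULT_PATH` is my own name, not from the question):

```python
# The default lives only next to FileOpener; outer layers forward None
# to mean "use the default" and never need to know its actual value.
DEFAULT_PATH = 'my/default/path'

class FileOpener:
    def __init__(self, path=None):
        # None is the sentinel for "not provided"
        self.path = DEFAULT_PATH if path is None else path

class Client:
    def __init__(self, path=None):
        # Just forward; FileOpener resolves the default itself.
        self.file_obj = FileOpener(path=path)

print(Client().file_obj.path)          # my/default/path
print(Client('x/y').file_obj.path)     # x/y
```

Because only one layer interprets the sentinel, adding more layers later means forwarding `path=None` unchanged, with no default value duplicated anywhere.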
|
<python><python-3.x><design-patterns>
|
2024-01-20 10:53:56
| 2
| 433
|
Mukul Bindal
|
77,850,567
| 19,932,351
|
Wordpress REST API remove all posts and images (media) via Python
|
<p>I feel like several other people must have already accomplished this: writing a script that removes all sample data from a dev site using the WordPress REST API.
Right now I am automating posting images and posts, which works, but it leads to several hundred posts and images blowing up my database.
I'm looking for a script that does that; neither ChatGPT nor browsing the web has provided any good answers.</p>
<p>Any ideas where to look for scripts like that or how to write it yourself?</p>
|
<python><python-requests><wordpress-rest-api>
|
2024-01-20 10:08:12
| 1
| 662
|
Leon
|
77,850,507
| 7,862,953
|
webscraping python ssl certificate
|
<p>I'm practicing web scraping. Running the command</p>
<pre><code>result = requests.get(url)
</code></pre>
<p>I get</p>
<pre><code>SSLError: HTTPSConnectionPool(host='subslikescript.com', port=443): Max retries exceeded with url: /movie/Titanic-120338 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
</code></pre>
<p>Changing to <code>verify=False</code> instead produces a warning:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
url='https://subslikescript.com/movie/Titanic-120338'
result = requests.get(url, verify=False) # Add verify=False to disable certificate verification
</code></pre>
<p>Warning</p>
<pre><code>InsecureRequestWarning: Unverified HTTPS request is being made to host 'subslikescript.com'. Adding certificate verification is strongly advised
</code></pre>
<p>I tried updating <code>certifi</code>:</p>
<pre><code>python -m pip install certifi
</code></pre>
<p>No success. Can someone explain in simple English what is happening and what I need to do?</p>
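As background: the error means Python's `ssl` machinery could not build a chain from the server's certificate to a certificate authority it trusts, and `verify=False` simply switches that verification off. A small stdlib-only sketch (no network access needed) of where the default verification behaviour lives:

```python
import ssl

# A default client-side context both verifies the certificate chain and
# checks the hostname; requests does the equivalent using certifi's CA bundle.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# Where this interpreter looks for CA certificates by default:
paths = ssl.get_default_verify_paths()
print(paths.cafile or paths.capath)
```

If verification keeps failing, pointing requests at an explicit bundle (for example `requests.get(url, verify=certifi.where())`, or a corporate CA file) is safer than `verify=False`; whether that fixes this particular host depends on the environment.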
|
<python><ssl><web-scraping>
|
2024-01-20 09:47:12
| 0
| 537
|
zachi
|
77,850,212
| 11,861,874
|
Python One Page Report Based on Output of Function
|
<p>I am trying to prepare a report using Python. The first function produces some output; next, I run function 2 and get more output. At the end, I need all the outputs, one below the other, saved in .pdf or .doc format at one location, with a header/title for the report. It's a kind of one-page report with multiple outputs, which can be pandas data frames or pictures.</p>
<pre><code># Function 1 Output
|Columns|Values|
|-------|------|
| A | 100 |
| B | 200 |
| C | 300 |
|-------|------|
# Some other processing steps.
# Function 2 Output
|Columns|Values|
|-------|------|
| D | 400 |
| E | 500 |
| F | 600 |
|-------|------|
</code></pre>
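One way a stacked, one-page layout is often produced is with matplotlib's `PdfPages` backend. This is a sketch assuming matplotlib is installed; the file name and table contents are illustrative (the rows come from the sample output above):

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

out_path = "report.pdf"  # illustrative output location
with PdfPages(out_path) as pdf:
    # One A4 portrait page with a title row and one table per function output.
    fig, axes = plt.subplots(3, 1, figsize=(8.27, 11.69))
    axes[0].axis("off")
    axes[0].set_title("My One-Page Report")
    for ax, rows in zip(axes[1:], ([["A", 100], ["B", 200], ["C", 300]],
                                   [["D", 400], ["E", 500], ["F", 600]])):
        ax.axis("off")
        ax.table(cellText=rows, colLabels=["Columns", "Values"], loc="center")
    pdf.savefig(fig)
    plt.close(fig)
```

Images can be dropped onto further axes with `ax.imshow(...)`, and pandas frames can be converted with `df.values.tolist()` before being handed to `ax.table`.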
|
<python><pandas><matplotlib><report>
|
2024-01-20 07:52:39
| 2
| 645
|
Add
|
77,850,040
| 3,555,115
|
Filter rows based on column value condition and merge with other dataframe
|
<p>I have a dataframe</p>
<pre><code>df1 =
vol comp sort
vol1 10 20
vol2 10 20
df2 =
load vol counter_name counter_value
1 vol1 bytes_read 50
1 vol1 bytes_written 50
1 vol1 stage comp
1 vol2 bytes_read 50
1 vol2 stage sort
1 vol2 bytes_written 50
2 vol1 bytes_read 150
2 vol1 bytes_written 250
2 vol1 stage done
2 vol2 bytes_read 50
2 vol2 stage comp
2 vol2 bytes_written 1250
</code></pre>
<p>I need to add the "bytes_read" and "bytes_written" values from df2's "counter_name"/"counter_value" columns to df1, for each vol in the "vol" column whose stage is "done". When the stage is not "done", the value can just be NaN.</p>
<p>Output should be</p>
<pre><code> df1 =
vol comp sort bytes_read bytes_written
vol1 10 20 150 250
 vol2 10 20 NaN NaN
</code></pre>
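Not presented as the canonical answer, just one sketch of how this is commonly assembled with pandas (the frames below reproduce the sample data; `counter_value` is kept as strings because the column mixes numbers with stage names, and the sketch assumes at most one "done" load per vol, as in the sample):

```python
import pandas as pd

df1 = pd.DataFrame({"vol": ["vol1", "vol2"], "comp": [10, 10], "sort": [20, 20]})
df2 = pd.DataFrame({
    "load": [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
    "vol": ["vol1", "vol1", "vol1", "vol2", "vol2", "vol2",
            "vol1", "vol1", "vol1", "vol2", "vol2", "vol2"],
    "counter_name": ["bytes_read", "bytes_written", "stage"] * 4,
    "counter_value": ["50", "50", "comp", "50", "50", "sort",
                      "150", "250", "done", "50", "1250", "comp"],
})

# (load, vol) pairs whose stage reached "done"
done = df2.loc[(df2["counter_name"] == "stage")
               & (df2["counter_value"] == "done"), ["load", "vol"]]

# Keep only the byte counters belonging to those pairs, then widen them
bytes_wide = (
    df2.merge(done, on=["load", "vol"])
       .query("counter_name in ['bytes_read', 'bytes_written']")
       .pivot(index="vol", columns="counter_name", values="counter_value")
       .reset_index()
)

result = df1.merge(bytes_wide, on="vol", how="left")
print(result)
```

`vol2` never reaches "done", so the left merge leaves its byte columns as NaN; `pd.to_numeric` can convert the surviving string values if numbers are needed downstream.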
|
<python><pandas><dataframe>
|
2024-01-20 06:29:23
| 1
| 750
|
user3555115
|
77,850,020
| 8,176,763
|
dynamically select columns and values with sqlalchemy
|
<p>Say I have a table like:</p>
<pre><code>product market idea
A FG cool
C BG sad
T B super
</code></pre>
<p>In the frontend the user can select any value from product, market, or idea, and also any column to filter. So for example the user can select product = A and market = FG.</p>
<p>The goal is to take the columns and values the user selected and, using the SQLAlchemy ORM, dynamically generate a query such as:</p>
<pre><code>select * from my_table where product = A and market = FG
</code></pre>
<p>Here is my attempt:</p>
<pre><code>from sqlalchemy import create_engine, Column, String,and_,insert,select,Integer,delete
from sqlalchemy.orm import declarative_base
from sqlalchemy.orm import sessionmaker
# Define model
Base = declarative_base()
class MyTable(Base):
__tablename__ = 'my_table'
id = Column(Integer, primary_key=True)
product = Column(String)
market = Column(String)
idea = Column(String)
# Create a connection to database
engine = create_engine('postgresql+psycopg://postgres:mypass@localhost:5432/postgres')
Base.metadata.create_all(engine)
# Create a session
Session = sessionmaker(bind=engine)
session = Session()
# Bulk Insert Data
session.execute(
insert(MyTable),
[
{"id":1,"product":"BG","market":"JY","idea":"sad"},
{"id":2,"product":"GF","market":"HN","idea":"happy"},
]
)
session.commit()
def dynamic_query(selected_filters):
# Initialize the base query
# query = session.query(MyTable)
query = select(MyTable)
# Build the dynamic filters
filters = []
for key, value in selected_filters.items():
filters.append(getattr(MyTable, key) == value)
# Apply the filters to the query
if filters:
query = query.where(and_(*filters))
# Execute the query
results = session.scalars(query).all()
print(results)
return results
# Example usage
user_selected_filters = {'product': 'BG', 'market': 'HN'}
result = dynamic_query(user_selected_filters)
print(result)
</code></pre>
<p>The result is:</p>
<pre><code>[]
</code></pre>
<p>But it should contain all rows.</p>
|
<python><sqlalchemy>
|
2024-01-20 06:17:29
| 1
| 2,459
|
moth
|
77,850,003
| 10,015,739
|
Vector Coordinates and linear combination as an arrow will not display on the same plot
|
<p>I am attempting to graph the coordinates of two vectors (see Part 1 of Snippet 1) and the linear combination of the two vectors (see Part 2 of Snippet 1) on the same plot. However, in Snippet 1, only the arrow displays, and I am not sure why.</p>
<p>In Snippet 2, I was able to get both to display by reversing the order, but the arrow does not reach the end coordinate w = [2, 1] as expected. Can anyone advise how I can fix the arrow so that it lands on the coordinate (2, 1) (see the plot image below)?</p>
<p>Snippet 1: Only Arrow displays:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# Part 1: Plot points
x = [-2, 2, 2]
y = [-1, -1, 1]
plt.plot(x, y, color='r', linestyle='dashed', linewidth=2, marker='o', markerfacecolor='blue', markeredgecolor='blue')
plt.xlim(-3, 3)
plt.ylim(-2, 2)
y_range = np.arange(-2, 3)
x_range = np.arange(-3, 4)
plt.yticks(y_range)
plt.xticks(x_range)
# Part 2: Plot Vector
v = np.array([-2, -1])
w = np.array([2, 1])
# Creates axes of plot referenced 'ax'
ax = plt.axes()
# Plots red dot at origin (0,0)
ax.plot(*v, 'or')
ax.arrow(*v, *w, color='b', linewidth=2.5, head_width=0.30, head_length=0.35)
# Sets limit for plot for x-axis
plt.xlim(-3, 3)
# Set major ticks for x-axis
major_xticks = np.arange(-3, 4)
ax.set_xticks(major_xticks)
# Sets limit for plot for y-axis
plt.ylim(-2, 2)
# Set major ticks for y-axis
major_yticks = np.arange(-2, 3)
ax.set_yticks(major_yticks)
# Default Grid
plt.grid(visible=True, which='major')
plt.show()
</code></pre>
<p>Code Snippet 2: Both coordinates and arrow display, however, the head of the arrow does not reach the end coordinate w= [2, 1], as expected:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# Part 2: Plot Vector
v = np.array([-2, -1])
w = np.array([2, 1])
# Creates axes of plot referenced 'ax'
ax = plt.axes()
# Plots red dot at origin (0,0)
ax.plot(*v, 'or')
ax.arrow(*v, *w, color='b', linewidth=2.5, head_width=0.30, head_length=0.35)
# Sets limit for plot for x-axis
plt.xlim(-3, 3)
# Set major ticks for x-axis
major_xticks = np.arange(-3, 4)
ax.set_xticks(major_xticks)
# Sets limit for plot for y-axis
plt.ylim(-2, 2)
# Set major ticks for y-axis
major_yticks = np.arange(-2, 3)
ax.set_yticks(major_yticks)
# Part 1: Plot points
x = [-2, 2, 2]
y = [-1, -1, 1]
plt.plot(x, y, color='r', linestyle='dashed', linewidth=2, marker='o', markerfacecolor='blue', markeredgecolor='blue')
plt.xlim(-3, 3)
plt.ylim(-2, 2)
y_range = np.arange(-2, 3)
x_range = np.arange(-3, 4)
plt.yticks(y_range)
plt.xticks(x_range)
# Default Grid
plt.grid(visible=True, which='major')
plt.show()
</code></pre>
<p>Plot Image:
<a href="https://i.sstatic.net/oTCPv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTCPv.png" alt="enter image description here" /></a></p>
|
<python><numpy><matplotlib><linear-algebra>
|
2024-01-20 06:09:04
| 1
| 483
|
tlockhart
|
77,849,981
| 8,176,763
|
sqlalchemy returns instance of the class but it does not print correctly
|
<p>I have code like this:</p>
<pre><code>from sqlalchemy import create_engine, Column, String,and_,insert,select,Integer,delete
from sqlalchemy.orm import declarative_base
from sqlalchemy.orm import sessionmaker
# Define your model
Base = declarative_base()
class MyTable(Base):
__tablename__ = 'my_table'
id = Column(Integer, primary_key=True)
product = Column(String)
market = Column(String)
idea = Column(String)
# Create a connection to your database
engine = create_engine('postgresql+psycopg://postgres:2001@localhost:5432/postgres')
# MyTable.__table__.drop(engine)
Base.metadata.create_all(engine)
# Create a session
Session = sessionmaker(bind=engine)
session = Session()
session.execute(
insert(MyTable),
[
{"id":1,"product":"BG","market":"JY","idea":"sad"},
{"id":2,"product":"GF","market":"HN","idea":"happy"},
]
)
session.commit()
results = session.scalars(select(MyTable).where(MyTable.id==1)).all()
print(results)
</code></pre>
<p>The result is:</p>
<pre><code>[<__main__.MyTable object at 0x103818650>]
</code></pre>
<p>I was expecting the printed result to be:</p>
<pre><code>[MyTable(id=1,product='BG',market='JY',idea='sad')]
</code></pre>
|
<python><sqlalchemy>
|
2024-01-20 05:55:57
| 1
| 2,459
|
moth
|
77,849,974
| 12,200,808
|
How to show the dependency problem in the existing python environment
|
<p>I installed the python packages on Ubuntu by the following command:</p>
<pre><code># pip install --upgrade -r r.txt
</code></pre>
<p><strong>Here is the console output:</strong></p>
<pre><code>Installing collected packages: protobuf
ERROR: pip's legacy dependency resolver does not consider dependency conflicts when selecting packages. This behaviour is the source of the following dependency conflicts.
apache-beam 2.53.0 requires packaging>=22.0, but you'll have packaging 20.9 which is incompatible.
ortools 9.8.3296 requires absl-py>=2.0.0, but you'll have absl-py 1.4.0 which is incompatible.
ortools 9.8.3296 requires pandas>=2.0.0, but you'll have pandas 1.5.3 which is incompatible.
Successfully installed protobuf-4.25.1
#
</code></pre>
<p>After closing the console, I want to reopen the terminal to check what dependency problems exist in my system.</p>
<p><strong>Here is the expected output:</strong></p>
<pre><code># pip what_command_to_show_the_problem
ERROR: pip's legacy dependency resolver does not consider dependency conflicts when selecting packages. This behaviour is the source of the following dependency conflicts.
apache-beam 2.53.0 requires packaging>=22.0, but you'll have packaging 20.9 which is incompatible.
ortools 9.8.3296 requires absl-py>=2.0.0, but you'll have absl-py 1.4.0 which is incompatible.
ortools 9.8.3296 requires pandas>=2.0.0, but you'll have pandas 1.5.3 which is incompatible.
</code></pre>
<p>What command can be used to show the same dependency problems that exist in the system?</p>
|
<python><bash><ubuntu><pip>
|
2024-01-20 05:53:15
| 1
| 1,900
|
stackbiz
|
77,849,842
| 10,070,717
|
Python Paramiko - problems with sending "iperf -s" to Linux server
|
<p>I need to send the command <code>iperf -s</code> to a Linux server using a Python program. The command is not sent. The chronology of problems is as follows:</p>
<ol>
<li>I am sending the command <code>iperf -s</code> to the server. Nothing happens and the program freezes (I don't stop the program).</li>
<li>I use <code>netstat -tupln</code> to check the processes. An <code>iperf</code> process is created on the server, but nothing happens.</li>
<li>If I kill this process with <code>sudo kill -9 <PID></code>, the program prints the usual <code>iperf -s</code> output and exits.</li>
<li>If I check the processes again, the <code>iperf</code> process is not there.</li>
</ol>
<p>Here's my code:</p>
<pre><code>remote_server = "172.22.94.171"
client = paramiko.client.SSHClient()
username = "useruser"
password = "password"
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(remote_server, port=22, username=username, password=password)
stdin, stdout, stderr = client.exec_command('sudo iperf -s', get_pty=True)
stdin.write(password+'\n')
print(stdout.readlines())
</code></pre>
|
<python><ssh><paramiko>
|
2024-01-20 04:40:34
| 0
| 355
|
Кирилл Химов
|
77,849,780
| 9,108,781
|
ChromaDB query filtering by documents
|
<p>Can I run a query among a supplied list of documents, for example, by adding something like "where documents in supplied_doc_list"? I know those documents are in the collection; I kept track of them when I added them. Alternatively, is there a way to filter based on docID? As another alternative, can I create a subset of the collection for those documents and run a query in that subset? Thanks a lot!</p>
<pre><code>results = collection.query(
query_texts=["Doc1", "Doc2"],
n_results=1
)
</code></pre>
|
<python><chromadb>
|
2024-01-20 03:52:21
| 0
| 943
|
Victor Wang
|
77,849,394
| 18,476,381
|
Python SQL Alchemy Nested Object Insert
|
<p>I am using a Python and SQLAlchemy to perform CRUD operations on my oracle database.</p>
<p>I have the below models:</p>
<pre><code>class ServiceOrderModel(BaseModel):
service_order_id: int | None = None
service_order_number: str | None = None
service_type: str
service_order_items: Optional[List[ServiceOrderItemModel]] = None
class Config:
from_attributes = True
class ServiceOrderItemModel(BaseModel):
service_order_item_id: Optional[int] = None
service_order_id: Optional[int] = None
component_id: Optional[int] = None
component: Optional[ComponentModel] = None
total_qty: Optional[float]
unit_price: Optional[float]
service_order_items_received: Optional[List[ServiceOrderItemReceiveModel]] = None
class Config:
from_attributes = True
class ComponentModel(BaseModel):
component_id: Optional[int] = None
component_serial_number: str = None
component_name: Optional[str] = None
class Config:
from_attributes = True
</code></pre>
<p>When a user creates a service order they also have to create multiple items (One to many). Each item also has a (One to one) relationship with component.</p>
<p>When I have the ServiceOrderModel, what is the best way to perform an insert into the service_order, service_order_item, and component tables from this single object?</p>
<p>Would it be best to split it up? How would I do that, given that I would need the service_order_id generated by the service_order table to perform the insert into the item table? I have my table definitions below as well.</p>
<pre><code>class ServiceOrder(BaseModel):
__tablename__ = "service_order"
service_order_id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
service_order_number: Mapped[Optional[str]] = mapped_column(
String(42),
server_default=text(
"""
CASE "SERVICE_TYPE"
WHEN 'NEW' THEN 'SO'||TO_CHAR("SERVICE_ORDER_ID")
WHEN 'REPAIR' THEN 'RO'||TO_CHAR("SERVICE_ORDER_ID")
ELSE NULL
END
"""
),
)
service_type: Mapped[str] = mapped_column(String(50))
service_order_item: Mapped[List["ServiceOrderItem"]] = relationship(
"ServiceOrderItem", back_populates="service_order"
)
class ServiceOrderItem(BaseModel):
__tablename__ = "service_order_item"
service_order_item_id: Mapped[int] = mapped_column(
primary_key=True, autoincrement=True
)
component_id: Mapped[Optional[int]] = mapped_column(
ForeignKey("component.component_id")
)
service_order_id: Mapped[int] = mapped_column(
ForeignKey("service_order.service_order_id")
)
total_qty: Mapped[Optional[int]]
unit_price: Mapped[Optional[float]]
component: Mapped["Component"] = relationship(
"Component", back_populates="service_order_item"
)
service_order: Mapped["ServiceOrder"] = relationship(
"ServiceOrder", back_populates="service_order_item"
)
class Component(BaseModel):
__tablename__ = "component"
component_id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
component_name: Mapped[str] = mapped_column(String(500))
component_serial_number: Mapped[str] = mapped_column(String(250), unique=True)
service_order_item: Mapped[List["ServiceOrderItem"]] = relationship(
"ServiceOrderItem", back_populates="component"
)
</code></pre>
|
<python><sqlalchemy><pydantic>
|
2024-01-20 00:01:34
| 1
| 609
|
Masterstack8080
|
77,849,271
| 18,476,381
|
Pydantic init/validating model from dict is changing sub model to None
|
<p>I have a pydantic model like below:</p>
<pre><code>class ServiceOrderModel(BaseModel):
service_order_id: int | None = None
service_order_number: str | None = None
service_order_items: Optional[List[ServiceOrderItemModel]] = Field(
None, alias="service_order_item"
)
class Config:
from_attributes = True
</code></pre>
<p>I am trying to initialize this model from a dictionary:</p>
<pre><code>service_order_model = ServiceOrderModel(**service_order_dict)
</code></pre>
<p>In my service_order_dict I clearly have an item of "service_order_items" which is not empty and is a list of dicts which fit the model of ServiceOrderItemModel.</p>
<p>Right after I initialize service_order_model and check it, service_order_items is set to None... Has anyone run into this?</p>
<p>I am trying to use sqlalchemy to insert this nested object into an oracle database.</p>
|
<python><sqlalchemy><pydantic>
|
2024-01-19 23:18:57
| 1
| 609
|
Masterstack8080
|
77,849,133
| 146,073
|
SQLAlchemy bidirectional many-to-many raising ArgumentError
|
<p>I've tried to establish a many-to-many relationship between two tables, User and Product, through a third table, Cart. Since the cart has additional data for each row I'm trying to use the association object pattern, but clearly there's something fundamental I'm not understanding. Here are the model declarations:</p>
<pre class="lang-py prettyprint-override"><code>class Cart(BaseModel):
__tablename__ = 'cart'
product_id: Mapped[int] = Column(Integer, ForeignKey('product.id'), primary_key=True)
user_id: Mapped[int] = Column(Integer, ForeignKey('user.id'), primary_key=True)
selling_price: Mapped[Decimal] = mapped_column(Numeric(precision=6, scale=2), nullable=True)
quantity: Mapped[int]
user: Mapped["User"] = relationship("Users", back_populates="product")
product: Mapped["Product"] = relationship("Product", back_populates="user")
class User(BaseModel, UserMixin):
__tablename__ = "user"
id: Mapped[int] = mapped_column(Integer, primary_key=True)
firstname: Mapped[str] = mapped_column(String(20), unique=False, nullable=False)
lastname: Mapped[str] = mapped_column(String(20), unique=False, nullable=False)
email: Mapped[str] = mapped_column(String(120), unique=True, nullable=False)
password: Mapped[str] = mapped_column(String(60), nullable=False)
products: Mapped[list['Product']] = relationship("Cart", back_populates='user')
def add_to_cart(self, product_id: int):
item_to_add = Cart.insert().values(product_id=product_id, user_id=self.id)
self.session.execute(item_to_add)
self.session.commit()
flash('Your item has been added to your cart!', 'success')
def __repr__(self):
return f"User('{self.firstname}','{self.lastname}', '{self.email}','{self.id}')"
class Product(BaseModel):
__tablename__ = "product"
id: Mapped[int] = mapped_column(Integer, primary_key=True)
name: Mapped[str] = mapped_column(String(100), nullable=False)
price: Mapped[Decimal] = mapped_column(Numeric(precision=6, scale=2), nullable=True)
description: Mapped[str] = mapped_column(Text(), nullable=False)
users: Mapped[list['User']] = relationship("Cart", back_populates='product')
def __repr__(self):
return f"Product('{self.name}', '{self.price}')"
</code></pre>
<p>At one point in the code I execute this query:</p>
<pre class="lang-py prettyprint-override"><code> q = select(User).where(User.email == form.email.data)
user = conn.session.query(q).scalars().first()
</code></pre>
<p>Because the code is embedded in a web server the process doesn't terminate, but I see this traceback (<em>routes.py</em> is the file with the web view function in it) - I've wrapped the final line for reading convenience:</p>
<pre><code>Traceback (most recent call last):
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/flask/app.py", line 1478, in __call__
return self.wsgi_app(environ, start_response)
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/flask/app.py", line 1458, in wsgi_app
response = self.handle_exception(e)
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/flask/app.py", line 1455, in wsgi_app
response = self.full_dispatch_request()
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/flask/app.py", line 869, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/flask/app.py", line 867, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/flask/app.py", line 852, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/Users/sholden/Projects/Python/ecommerce/unwrap/routes.py", line 54, in login
user = conn.session.query(q).scalars().first()
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2912, in query
return self._query_cls(entities, self, **kwargs)
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 276, in __init__
self._set_entities(entities)
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 288, in _set_entities
self._raw_columns = [
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 289, in <listcomp>
coercions.expect(
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/sqlalchemy/sql/coercions.py", line 442, in expect
return impl._implicit_coercions(
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/sqlalchemy/sql/coercions.py", line 506, in _implicit_coercions
self._raise_for_expected(element, argname, resolved)
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/sqlalchemy/sql/coercions.py", line 1137, in _raise_for_expected
return super()._raise_for_expected(
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/sqlalchemy/sql/coercions.py", line 710, in _raise_for_expected
super()._raise_for_expected(
File "/Users/sholden/.venvs/ecommerce-3.10/lib/python3.10/site-packages/sqlalchemy/sql/coercions.py", line 535, in _raise_for_expected
raise exc.ArgumentError(msg, code=code) from err
sqlalchemy.exc.ArgumentError: Column expression, FROM clause, or other columns
clause element expected, got <sqlalchemy.sql.selectable.Select object at 0x104c04d90>.
To create a FROM clause from a <class 'sqlalchemy.sql.selectable.Select'> object, use the
.subquery() method. (Background on this error at https://sqlalche.me/e/20/89ve)
</code></pre>
<p>I've tried various combinations of arguments to the <code>relationship</code> calls, but the documentation and other information on SQLAlchemy, while voluminous, are conflicting and often unhelpful, and I've so far found no way to get the query to work.</p>
<p>What am I missing?</p>
|
<python><sqlalchemy><many-to-many><relationship>
|
2024-01-19 22:28:36
| 0
| 37,731
|
holdenweb
|
77,849,080
| 23,190,147
|
How to detect a mouse circling around something in python?
|
<p>I'm trying to detect when a user's mouse is circling around an object repeatedly, and take a screenshot of that object.</p>
<p>To make this simpler, I'll split it into two parts. First of all, I want to detect the mouse circling. This is easy, there are many python libraries for this. But I don't want the screenshot to be taken by accident, so I want the user's mouse to be circling for at least 5 seconds before we take the screenshot. For example:</p>
<pre><code>#import necessary libraries
def get_mouse_input():
pass
def get_coordinates():
pass
</code></pre>
<p>When I say <code>get_coordinates()</code>, I mean that I want to get the coordinates of the area that the mouse is circling around. That way, I can get a screenshot of that particular area.</p>
<p>Next, taking the screenshot is pretty straightforward: you simply use a screenshot module, perhaps crop the image, and you've got it.</p>
<p>The real problem for me is detecting when the user is circling around an area. I don't care if the user is circling around white space, or black space, what I care about is what the coordinates of the mouse are. The problem is, the mouse kind of makes the shape of a circle or ellipse when you circle it, and of course, user input can vary, so it makes it extremely hard to detect something like this.</p>
<p>The main question I'm asking here is: <em>Is this possible to do in python?</em>
And if so, how? This is just a fun project of mine, so I'm open to any suggestions.</p>
<p>UPDATE:</p>
<p>I've figured out how to get the mouse input, but the problem lies with using that input. It's easy to get the mouse position, but actually seeing if those coordinates lie in the shape of an ellipse or circle is difficult. For example, let's say that I have 4 coordinates which the mouse is repeatedly touching:</p>
<pre><code>import pyautogui
import time
while True:
cordslist1 = []
cordslist2 = []
cordslist3 = []
cordslist4 = []
for i in range(4):
cords1 = pyautogui.position()
time.sleep(0.5)
cords2 = pyautogui.position()
time.sleep(0.5)
cords3 = pyautogui.position()
time.sleep(0.5)
cords4 = pyautogui.position()
for j in range(4):
            exec(f"cordslist{i + 1}.append(cords{j + 1})")
cordslist = [cordslist1, cordslist2, cordslist3, cordslist4]
#do something with cordslist
</code></pre>
<p>and...now I have a huge list called <code>cordslist</code>, which was achieved using some questionable methods...and I don't know what to do with it. For example, say I have 4 coordinates: (10, 4), (100, 100), (200, 200), and (7, 8). How do I detect if that shape is an ellipse or circle? Then, how do I take a screenshot of those coordinates as well? I'm still a little stuck. I'm hoping there is a mathematical function that might help me with this, using the python module <code>math</code>, or just some basic for loops.</p>
<p>UPDATE:</p>
<p>I've solved this. I've found a much simpler solution than detecting the coordinates each time (using a python library) that works perfectly. To anybody else who tries this, know that doing it the way I tried to do it from the start is NOT a good idea.</p>
|
<python><image><mouse>
|
2024-01-19 22:14:48
| 0
| 450
|
5rod
|
77,849,062
| 2,182,636
|
Get Playwright to Wait Until Title Contains Specific Text
|
<p>I'm new to Playwright and I'm a little stuck. I'm scraping a site that has a lot of back-and-forth JavaScript. I want my parsing logic to wait until the title of the page is populated with a particular value, which would indicate that the page has loaded enough to parse.</p>
<p>Here are the relevant parts of my code.</p>
<pre class="lang-py prettyprint-override"><code> with sync_playwright() as pw_firefox:
browser = pw_firefox.firefox.launch(headless=True, timeout=self.timeout)
context = browser.new_context(viewport={"width": 1920, "height": 1080},
extra_http_headers=HEADERS,
strict_selectors=False)
page = context.new_page()
# Go to url and wait for the page to load
page.goto(final_url)
html = page.content()
</code></pre>
<p>Is there a way to use the <code>wait_for_selector()</code> or <code>wait_for_function()</code> methods to get Playwright to wait until the <code><title></code> is populated with a specific value?
Thanks!</p>
|
<python><web-scraping><playwright><playwright-python>
|
2024-01-19 22:10:09
| 2
| 586
|
cgivre
|
77,849,025
| 4,089,351
|
Avoiding DeprecationWarning when extracting indices to subset vectors
|
<p>General idea: I want to take a slice of a 3D surface plot of a function of two variables <code>f(x,y)</code> at a given <code>x = some value</code>. The problem is that I have to know the index where <code>x</code> assumes this value after creating <code>x</code> as a vector with <code>np.linspace</code>, for instance. Finding this index turns out to be doable thanks to another post on SO. What I can't do is use this index as it is returned to subset a different vector <code>Z</code>, because the index is returned as a 1-element list (or tuple), and I need an integer. When I use <code>int()</code> I get a warning:</p>
<pre><code>import numpy as np
lim = 10
x = np.linspace(-lim,lim,2000)
y = np.linspace(-lim,lim,2000)
X, Y = np.meshgrid(x, y)
Z = X**2 + Y**2
def find_nearest(array, value):
array = np.asarray(array)
idx = (np.abs(array - value)).argmin()
return array[idx]
idx = int(np.where(x == find_nearest(x,0))[0])
print(idx)
print(Z[idx,:].shape)
</code></pre>
<p>Output with warning:</p>
<pre><code><ipython-input-282-c2403341abda>:16: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
idx = int(np.where(x == find_nearest(x,0))[0])
1000
(2000,)
</code></pre>
<p>The shape of the resulting subset of <code>Z</code> is <code>(2000,)</code>, which allows me to plot it against <code>x</code>.</p>
<p>However, if instead I just extract the value returned by <code>np.where()</code> with <code>[0]</code>, the shape of the final subset of <code>Z</code>, i.e. <code>(1, 2000)</code>, will not allow me to plot it against <code>x</code>:</p>
<pre><code>idx1 = np.where(x == find_nearest(x,0))[0]
print(idx1)
print(Z[idx1,:].shape)
</code></pre>
<p>How can I extract the index corresponding to the value of <code>x</code> I want as an integer, and thus avoid the warning (see below)?</p>
<p><code>DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.) idx = int(np.where(x == find_nearest(x,0))[0])</code></p>
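For context, the warning comes from feeding <code>int()</code> a length-1 array: <code>np.where</code> returns a tuple of index arrays. A sketch of two common escapes, assuming only NumPy (<code>argmin</code> already yields a scalar index, and <code>.item()</code> extracts a plain Python int from a one-element array):

```python
import numpy as np

x = np.linspace(-10, 10, 2000)
value = 0

# Option 1: argmin over the distance is already a scalar integer index,
# so np.where is not needed at all.
idx = np.abs(x - value).argmin()

# Option 2: if np.where is used, pull the single element out explicitly
# with .item() instead of int(), avoiding the deprecated array-to-int cast.
idx2 = np.where(x == x[idx])[0].item()

print(idx, idx2, x[idx2])  # both name the same position
```

Either index then slices `Z[idx, :]` to the `(2000,)` shape needed for plotting against `x`.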
|
<python><numpy><subset>
|
2024-01-19 21:59:41
| 1
| 4,851
|
Antoni Parellada
|
77,848,945
| 1,028,270
|
How do I disable D100 and or C0111 with ruff?
|
<p>I use sphinx and at the top of my py files I have this:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
my-module
=========
Some notes about my module
"""
</code></pre>
<p>These warning are incorrect in the this context:</p>
<ul>
<li>1 blank line required between summary line and descriptionFlake8(D205)</li>
<li>First line should end with a periodFlake8(D400)</li>
</ul>
<p>I can't put a blank line above ======= because that breaks the headings and I also don't want to put a period after my heading. How can I ignore in just these cases?</p>
<p>This two posts seem relevant:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/46192576/how-can-i-fix-flake8-d100-missing-docstring-error-in-atom-editor">How can I fix `flake8 D100 — Missing docstring` error in atom editor</a></li>
<li><a href="https://stackoverflow.com/questions/7877522/how-do-i-disable-missing-docstring-warnings-at-a-file-level-in-pylint">How do I disable "missing docstring" warnings at a file-level in Pylint?</a></li>
</ul>
<p>In my project.toml this did nothing:</p>
<pre><code>[tool.ruff.lint]
ignore = ["D100"]
</code></pre>
<p>I can't even find out how to disable it.</p>
<p>I also opened a GitHub issue: <a href="https://github.com/astral-sh/ruff/issues/9583" rel="nofollow noreferrer">https://github.com/astral-sh/ruff/issues/9583</a></p>
<h1>Edit</h1>
<p>Maybe I'm not understanding how D100 works? It looks like it's been officially implemented as I see a check next to it: <a href="https://github.com/astral-sh/ruff/issues/970" rel="nofollow noreferrer">https://github.com/astral-sh/ruff/issues/970</a></p>
<p>Shouldn't ignoring D100 prevent docstring linting at the top of the files?</p>
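For what it's worth, a configuration sketch (the path is an assumption): one common gotcha is that ruff's pydocstyle (`D`) rules only fire when they are selected, so an `ignore` entry for a `D` code has no visible effect unless `"D"` appears in `select`; per-file ignores can then silence the heading-related codes only where the Sphinx-style docstrings live:

```toml
[tool.ruff.lint]
select = ["D"]            # pydocstyle rules are off unless selected
ignore = ["D100"]         # with "D" selected, this actually suppresses D100

[tool.ruff.lint.per-file-ignores]
"src/**/__init__.py" = ["D205", "D400"]  # hypothetical path with Sphinx headings
```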
|
<python><docstring><ruff>
|
2024-01-19 21:38:19
| 1
| 32,280
|
red888
|
77,848,932
| 3,555,115
|
Simplify dataframe by combining data from various columns together for same instance
|
<p>I have a dataframe after concatenting several dataframes</p>
<pre><code>df1 =
instance scan comp sort
0 A 23:15:12 NaN NaN
1 B 23:17:12 NaN NaN
2 C 23:16:12 NaN NaN
0 A NaN 23:19:32 NaN
1 B NaN 23:19:32 NaN
2 C NaN 23:43:23 NaN
0 A NaN NaN 23:45:32
1 B NaN NaN 23:45:26
2 C NaN NaN 23:45:12
</code></pre>
<p>I need to simplify the above and have all the columns for the same instance combined</p>
<pre><code>df2 =
instance scan comp sort
A 23:15:12 23:19:32 23:45:32
B 23:17:12 23:19:32 23:45:26
C 23:16:12 23:43:23 23:45:12
</code></pre>
<p>There can be 100's of such instances and many columns for each instance, such as scan, comp, sort, etc. I have tried <code>groupby(['instance'])</code>, but it doesn't seem to work and just results in an object like
<code>&lt;pandas.core.groupby.generic.DataFrameGroupBy object at 0x7f72f595f650&gt;</code>.</p>
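A minimal sketch (values copied from the question) of collapsing the concatenated frame: `groupby` alone is lazy, but an aggregation such as `GroupBy.first()` (which takes the first non-NaN value per column within each group) materialises the combined result:

```python
import pandas as pd
import numpy as np

df1 = pd.DataFrame({
    "instance": ["A", "B", "C", "A", "B", "C", "A", "B", "C"],
    "scan": ["23:15:12", "23:17:12", "23:16:12"] + [np.nan] * 6,
    "comp": [np.nan] * 3 + ["23:19:32", "23:19:32", "23:43:23"] + [np.nan] * 3,
    "sort": [np.nan] * 6 + ["23:45:32", "23:45:26", "23:45:12"],
})

# .first() picks the first non-NaN value per column within each group,
# which merges the three partial rows per instance into one full row
df2 = df1.groupby("instance", as_index=False).first()
print(df2)
```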
|
<python><pandas><dataframe>
|
2024-01-19 21:35:09
| 2
| 750
|
user3555115
|
77,848,763
| 1,182,299
|
Extract a text with python beautifulsoup from a TEI XML document
|
<p>I'm trying to extract text from a TEI XML document; I can't figure out how to get the other tags' names/identifiers, not just the text itself.</p>
<pre><code> <text>
<body>
<div xml:id="S00483.1">
<div xml:id="S00483.1.1">
<pb xml:id="S00483.1v" n="1v"/>
<cb xml:id="S00483.1vA" n="1vA"/>
<lb n="1"/>
<div xml:id="S00483.1.1.1">
<ab xml:id="S00483.1.1.1.1">
<w xml:id="S00483.1.1.1.1.1">מאמתי</w>
<w xml:id="S00483.1.1.1.1.2">קורין</w>
<w xml:id="S00483.1.1.1.1.3">את</w>
<w xml:id="S00483.1.1.1.1.4">שמע</w>
<w xml:id="S00483.1.1.1.1.5">בערבים</w>
<w xml:id="S00483.1.1.1.1.6">משעה</w>
<w xml:id="S00483.1.1.1.1.7">שהכהנים</w>
<lb n="2"/>
<w xml:id="S00483.1.1.1.1.8">נכנסין</w>
<w xml:id="S00483.1.1.1.1.9">לאכל</w>
<w xml:id="S00483.1.1.1.1.10">בתרומתן</w>
<w xml:id="S00483.1.1.1.1.11">עד</w>
<w xml:id="S00483.1.1.1.1.12">סוף</w>
<w xml:id="S00483.1.1.1.1.13">האשמורת</w>
<lb n="3"/>
<w xml:id="S00483.1.1.1.1.14">הראשנה</w>
<w xml:id="S00483.1.1.1.1.15">דברי</w>
<w xml:id="S00483.1.1.1.1.16">רבי</w>
<w xml:id="S00483.1.1.1.1.17">אליעזר</w>
<w xml:id="S00483.1.1.1.1.18">וחכמין</w>
<w xml:id="S00483.1.1.1.1.19">אומרין</w>
<w xml:id="S00483.1.1.1.1.20">עד</w>
<lb n="4"/>
<w xml:id="S00483.1.1.1.1.21">חצות</w>
<w xml:id="S00483.1.1.1.1.22">רבן</w>
<w xml:id="S00483.1.1.1.1.23">גמליאל</w>
<w xml:id="S00483.1.1.1.1.24">אומר</w>
<w xml:id="S00483.1.1.1.1.25">עד</w>
<w xml:id="S00483.1.1.1.1.26">שיעלה</w>
<w xml:id="S00483.1.1.1.1.27">עמוד</w>
<lb n="5"/>
<w xml:id="S00483.1.1.1.1.28">השחר</w>
<pc type="unitEnd" rend="null"/>
<milestone unit="MSMishnah" n="2"/>
<label>ב<am rend="null">׳</am>
</label>
<w xml:id="S00483.1.1.1.1.29">מעשה</w>
<w xml:id="S00483.1.1.1.1.30">שבאו</w>
<w xml:id="S00483.1.1.1.1.31">בניו</w>
<w xml:id="S00483.1.1.1.1.32">מבית</w>
<w xml:id="S00483.1.1.1.1.33">המשתה</w>
<lb n="6"/>
<w xml:id="S00483.1.1.1.1.34">אמרו</w>
<w xml:id="S00483.1.1.1.1.35">לו</w>
<w xml:id="S00483.1.1.1.1.36">לא</w>
<w xml:id="S00483.1.1.1.1.37">קרינו</w>
<w xml:id="S00483.1.1.1.1.38">את</w>
<w xml:id="S00483.1.1.1.1.39">שמע</w>
<w xml:id="S00483.1.1.1.1.40">אמר</w>
<w xml:id="S00483.1.1.1.1.41">להם</w>
<w xml:id="S00483.1.1.1.1.42">אם</w>
<w xml:id="S00483.1.1.1.1.43">לא</w>
<lb n="7"/>
<w xml:id="S00483.1.1.1.1.44">עלה</w>
<w xml:id="S00483.1.1.1.1.45">עמוד</w>
<w xml:id="S00483.1.1.1.1.46">השחר</w>
<w xml:id="S00483.1.1.1.1.47">מותרין</w>
<w xml:id="S00483.1.1.1.1.48">אתם</w>
<w xml:id="S00483.1.1.1.1.49">לקרות</w>
<pc type="unitEnd" rend="null"/>
<milestone unit="MSMishnah" n="3"/>
<label>ג<am rend="null">׳</am>
</label>
<w xml:id="S00483.1.1.1.1.50">ולא</w>
<lb n="8"/>
<w xml:id="S00483.1.1.1.1.51">זו</w>
<w xml:id="S00483.1.1.1.1.52">בלבד</w>
<w xml:id="S00483.1.1.1.1.53">אלא</w>
<w xml:id="S00483.1.1.1.1.54">כל</w>
<w xml:id="S00483.1.1.1.1.55">שאמרו</w>
<w xml:id="S00483.1.1.1.1.56">חכמים</w>
<w xml:id="S00483.1.1.1.1.57">עד</w>
<w xml:id="S00483.1.1.1.1.58">חצות</w>
<surplus reason="fill">מצ<am rend="null">׳</am>
</surplus>
<lb n="9"/>
<w xml:id="S00483.1.1.1.1.59">מצותן</w>
<w xml:id="S00483.1.1.1.1.60">עד</w>
<w xml:id="S00483.1.1.1.1.61">שיעלה</w>
<w xml:id="S00483.1.1.1.1.62">עמוד</w>
<w xml:id="S00483.1.1.1.1.63">השחר</w>
<pc type="unitEnd" rend="null"/>
<milestone unit="MSMishnah" n="4"/>
<label>ד<am rend="null">׳</am>
</label>
<w xml:id="S00483.1.1.1.1.64">הקטר</w>
<lb n="10"/>
<w xml:id="S00483.1.1.1.1.65">חלבים</w>
<w xml:id="S00483.1.1.1.1.66">ואיברין</w>
<delSpan spanTo="#d1e217" hand="#h2" rend="deletion-mark"/>
<w xml:id="S00483.1.1.1.1.67">ואכילת</w>
<w xml:id="S00483.1.1.1.1.68">פסחים</w>
<anchor xml:id="d1e217" type="del"/>
<w xml:id="S00483.1.1.1.1.69">מצותן</w>
<w xml:id="S00483.1.1.1.1.70">עד</w>
<lb n="11"/>
<w xml:id="S00483.1.1.1.1.71">שיעלה</w>
<w xml:id="S00483.1.1.1.1.72">עמוד</w>
<w xml:id="S00483.1.1.1.1.73">השחר</w>
<pc type="unitEnd" rend="null"/>
<milestone unit="MSMishnah" n="5"/>
<label>ה<am rend="null">׳</am>
</label>
<w xml:id="S00483.1.1.1.1.74">וכל</w>
<w xml:id="S00483.1.1.1.1.75">הנאכלין</w>
<w xml:id="S00483.1.1.1.1.76">ליום</w>
<lb n="12"/>
<w xml:id="S00483.1.1.1.1.77">אחד</w>
<w xml:id="S00483.1.1.1.1.78">מצותן</w>
<w xml:id="S00483.1.1.1.1.79">עד</w>
<w xml:id="S00483.1.1.1.1.80">שיעלה</w>
<w xml:id="S00483.1.1.1.1.81">עמוד</w>
<w xml:id="S00483.1.1.1.1.82">השחר</w>
<w xml:id="S00483.1.1.1.1.83">אם</w>
<w xml:id="S00483.1.1.1.1.84">כן</w>
<lb n="13"/>
<w xml:id="S00483.1.1.1.1.85">למה</w>
<w xml:id="S00483.1.1.1.1.86">אמרו</w>
<w xml:id="S00483.1.1.1.1.87">חכמים</w>
<w xml:id="S00483.1.1.1.1.88">עד</w>
<w xml:id="S00483.1.1.1.1.89">חצות</w>
<w xml:id="S00483.1.1.1.1.90">אלא</w>
<w xml:id="S00483.1.1.1.1.91">להרחיק</w>
<w xml:id="S00483.1.1.1.1.92">את</w>
<lb n="14"/>
<w xml:id="S00483.1.1.1.1.93">האדם</w>
<w xml:id="S00483.1.1.1.1.94">מן</w>
<w xml:id="S00483.1.1.1.1.95">העבירה</w>
<pc type="unitEnd" rend="null"/>
</ab>
<ab xml:id="S00483.1.1.1.2">
</code></pre>
<p>The desired output:</p>
<pre><code>S00483.1.1.1.1 מאמתי קורין את שמע בערבים משעה שהכהנים נכנסין לאכל בתרומתן עד סוף האשמורת הראשנה דברי רבי אליעזר וחכמין אומרין עד חצות רבן גמליאל אומר עד שיעלה עמוד השחר מעשה שבאו בניו מבית המשתה אמרו לו לא קרינו את שמע אמר להם אם לא עלה עמוד השחר מותרין אתם לקרות ולא זו בלבד אלא כל שאמרו חכמים עד חצות מצותן עד שיעלה עמוד השחר הקטר חלבים ואיברין ואכילת פסחים מצותן עד שיעלה עמוד השחר וכל הנאכלין ליום אחד מצותן עד שיעלה עמוד השחר אם כן למה אמרו חכמים עד חצות אלא להרחיק את האדם מן העבירה
S00483.1.1.1.2 מאמתי קורין את שמע בשחרים משיכירו בין תכלת ללבן רבי אליעזר אומר בין תכלת לכרתן וגומרה עד הנץ החמה יהושע עד שלש שעות שכן דרך בני מלכים לעמוד בשלש שעות הקורא מיכן והלך לא הפסיד כאדם שהוא קורא בתורה
S00483.1.1.1.3 בית שמי אומרין בערב כל אדם יטו ויקרו ובבקר יעמודו ובשכבך ובקומך ובית הלל כל אדם קורין כדרכן ובלכתך בדרך אם כן למה נאמר בשכבך ובקומך אלא בשעה שדרך בני אדם שוכבין ובשעה שדרך בני אדם עומדין אמר טרפון אני הייתי בא בדרך והטיתי לקרות כדברי בית שמי וסיכנתי בעצמי מפני הלסטים אמרו לו כדיי הייתה לחוב בעצמך שעברתה על דברי בית הלל
</code></pre>
<p>What I get so far:</p>
<pre><code>מאמתי קורין את שמע בערבים משעה שהכהנים נכנסין לאכל בתרומתן עד סוף האשמורת הראשנה דברי רבי אליעזר וחכמין אומרין עד חצות רבן גמליאל אומר עד שיעלה עמוד השחר מעשה שבאו בניו מבית המשתה אמרו לו לא קרינו את שמע אמר להם אם לא עלה עמוד השחר מותרין אתם לקרות ולא זו בלבד אלא כל שאמרו חכמים עד חצות מצותן עד שיעלה עמוד השחר הקטר חלבים ואיברין ואכילת פסחים מצותן עד שיעלה עמוד השחר וכל הנאכלין ליום אחד מצותן עד שיעלה עמוד השחר אם כן למה אמרו חכמים עד חצות אלא להרחיק את האדם מן העבירה
מאמתי קורין את שמע בשחרים משיכירו בין תכלת ללבן רבי אליעזר אומר בין תכלת לכרתן וגומרה עד הנץ החמה
יהושע
עד שלש שעות שכן דרך בני מלכים לעמוד בשלש שעות הקורא מיכן והלך לא הפסיד כאדם שהוא קורא בתורה
בית שמי אומרין בערב כל אדם יטו ויקרו ובבקר יעמודו
ובשכבך ובקומך ובית הלל
כל אדם קורין כדרכן
ובלכתך בדרך אם כן למה נאמר בשכבך ובקומך אלא בשעה שדרך בני אדם שוכבין ובשעה שדרך בני אדם עומדין אמר
טרפון אני הייתי בא בדרך והטיתי לקרות כדברי בית שמי וסיכנתי בעצמי מפני הלסטים אמרו לו כדיי הייתה לחוב בעצמך שעברתה על דברי בית הלל
</code></pre>
<p>My code:</p>
<pre><code>from lxml import etree # This is the only part of lxml we need
import pandas as pd #
from collections import Counter # A built-in tool for counting items
# First create a parser object. It will work faster if you tell it not to collect IDs.
parser = etree.XMLParser(collect_ids=False)
# Parse your XML file into a "tree" object
tree = etree.parse('S00483.xml', parser)
# Get the XML from the tree object
parma = tree.getroot()
# Create your nsmap for the 'tei' namespace
# You only need to do this once in your script,
# but you'll refer back to this variable throughout
# the rest of this notebook.
nsmap={'tei': 'http://www.tei-c.org/ns/1.0'}
all_line_tags = parma.findall(".//tei:ab", namespaces=nsmap)
for line in all_line_tags[:10]: #Only the first 20 lines
print(' '.join([w.text for w in line.findall(".//tei:w", namespaces=nsmap)]))
</code></pre>
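If the missing piece is printing each <code>ab</code>'s <code>xml:id</code> next to its joined <code>w</code> text, here is a stdlib sketch on a tiny stand-in fragment (the real file's structure is assumed to match):

```python
import xml.etree.ElementTree as ET

# tiny stand-in fragment (assumption: same structure as the real TEI file)
xml = '''<TEI xmlns="http://www.tei-c.org/ns/1.0"><text><body>
<ab xml:id="S00483.1.1.1.1"><w>aaa</w><label>skip me</label><w>bbb</w></ab>
<ab xml:id="S00483.1.1.1.2"><w>ccc</w></ab>
</body></text></TEI>'''

nsmap = {'tei': 'http://www.tei-c.org/ns/1.0'}
root = ET.fromstring(xml)

lines = []
for ab in root.findall('.//tei:ab', nsmap):
    # xml:id lives in the reserved XML namespace, hence the Clark notation
    ab_id = ab.get('{http://www.w3.org/XML/1998/namespace}id')
    words = ' '.join(w.text for w in ab.findall('.//tei:w', nsmap))
    lines.append(f'{ab_id} {words}')

print(lines[0])  # S00483.1.1.1.1 aaa bbb
```

Joining only the `<w>` descendants automatically skips `<label>`, `<surplus>`, and similar elements, which matches the desired output.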
|
<python><python-3.x><xml><beautifulsoup><tei>
|
2024-01-19 20:53:48
| 1
| 1,791
|
bsteo
|
77,848,741
| 5,594,008
|
dulwich error when running in docker-compose build poetry install
|
<p>I'm trying to run <code>docker-compose build</code>; the command is failing on the line</p>
<pre><code>RUN --mount=type=ssh poetry install $(if [ "$build_env" = 'production' ]; \
then echo '--only main --extras=production'; fi) --no-interaction --no-ansi
</code></pre>
<p>with the error message</p>
<pre><code>19.57 at /usr/local/venv/lib/python3.11/site-packages/dulwich/protocol.py:215 in read_pkt_line
19.58 211│
19.58 212│ try:
19.58 213│ sizestr = read(4)
19.58 214│ if not sizestr:
19.58 → 215│ raise HangupException
19.58 216│ size = int(sizestr, 16)
19.58 217│ if size == 0 or size == 1: # flush-pkt or delim-pkt
19.58 218│ if self.report_activity:
19.58 219│ self.report_activity(4, "read")
19.58
19.58 The following error occurred when trying to handle this error:
19.58
19.58
19.58 HangupException
19.58
19.58 git@github.com: Permission denied (publickey).
19.59
19.59 at /usr/local/venv/lib/python3.11/site-packages/dulwich/client.py:1154 in fetch_pack
19.61 1150│ with proto:
19.61 1151│ try:
19.61 1152│ refs, server_capabilities = read_pkt_refs(proto.read_pkt_seq())
19.61 1153│ except HangupException as exc:
19.61 → 1154│ raise _remote_error_from_stderr(stderr) from exc
19.61 1155│ (
19.61 1156│ negotiated_capabilities,
19.61 1157│ symrefs,
19.61 1158│ agent
</code></pre>
<p>It seems like this is some kind of SSH problem, but I can't understand what exactly. When I run <code>ssh -T git@github.com</code>, it returns valid output.</p>
|
<python><docker><ssh><dulwich>
|
2024-01-19 20:48:58
| 1
| 2,352
|
Headmaster
|
77,848,710
| 788,153
|
How to fix a column inplace after performing groupby in pandas
|
<p>My data has some issues that I want to fix as defined in the function <code>fixseq</code>. I am looking for an efficient way to fix column <code>B</code>. This column can't have negative numbers, and if it does, I need to shift the numbers so that they start from 0.</p>
<pre><code>def fixseq(tmp):
print(tmp)
if tmp['B'][0] < 0:
tmp['B'] = tmp['B'] + abs(tmp['B'][0])
return tmp
data = {
'id': ['A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B'],
'number': [-4, -3, -2, -1, 0, 1, 0, 1, 2],
}
df = pd.DataFrame(data)
df
</code></pre>
<p>Output:</p>
<pre><code>id number
0 A 0
1 A 1
2 A 2
3 A 3
4 A 4
5 A 5
6 B 0
7 B 1
8 B 2
</code></pre>
<p>I did try <code>groupby</code> followed by <code>apply</code> but kept getting errors. I tried <code>map</code> as well. I could process one group at a time, accumulate the results in a list, and finally concatenate them, but I do not want to do this heavy lifting. I am sure there is a better way to do it.</p>
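A vectorised sketch of the same idea (using the question's <code>number</code> column; the function's <code>B</code> is assumed to refer to the same data): <code>transform('first')</code> broadcasts each group's first value back over the group's rows, so the offset can be applied without <code>apply</code>:

```python
import pandas as pd

df = pd.DataFrame({
    "id": ["A"] * 6 + ["B"] * 3,
    "number": [-4, -3, -2, -1, 0, 1, 0, 1, 2],
})

# transform keeps the original shape, so the per-group first value can be
# subtracted in place; clip(upper=0) leaves non-negative starts untouched
first = df.groupby("id")["number"].transform("first")
df["number"] = df["number"] - first.clip(upper=0)

print(df)
```

Group A (starting at -4) is shifted up by 4 to start from 0, while group B (already starting at 0) is unchanged.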
<p>Thanks</p>
|
<python><pandas>
|
2024-01-19 20:41:53
| 2
| 2,762
|
learner
|
77,848,673
| 10,869,768
|
replicate with llama-index and streamlit - unable to find documents
|
<p>I am running the code below on a GPU via <code>streamlit run replicate_lama2.py</code>:</p>
<pre><code>import os
import traceback
import sys
import streamlit as st
os.environ["REPLICATE_API_TOKEN"] = "my_key"
from llama_index.llms import Replicate
llama2_7b_chat = "meta/llama-2-7b-chat:8e6975e5ed6174911a6ff3d60540dfd4844201974602551e10e9e87ab143d81e"
llm = Replicate(
model=llama2_7b_chat,
temperature=0.01,
additional_kwargs={"top_p": 1, "max_new_tokens": 300},
)
from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index import ServiceContext
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
service_context = ServiceContext.from_defaults(
llm=llm, embed_model=embed_model
)
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(
documents, service_context=service_context
)
#index = VectorStoreIndex.from_documents(documents)
# Get the query engine
query_engine = index.as_query_engine(streaming=True)
# Create a Streamlit web app
#st.title("LLM Query Interface")
query = st.text_input("Enter your query:")
submit_button = st.button("Submit")
if submit_button:
# Query the engine with the defined query
response = query_engine.query(query)
st.write("### Query Result:")
st.write(response)
</code></pre>
<p>My directory "data", which contains the files I want to query, is in the same directory as the replicate_lama2.py script. When I run this, streamlit opens the chat in a web browser (Firefox) and I can ask a question (which is definitely answerable from the documents in the /data dir), but instead of an answer I get the output in the attached screenshot, which says "No doc available". How do I make this work?<a href="https://i.sstatic.net/1rKsi.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1rKsi.jpg" alt="enter image description here" /></a></p>
|
<python><streamlit><replicate><llama><llama-index>
|
2024-01-19 20:33:11
| 1
| 429
|
anikaM
|
77,848,436
| 438,223
|
Simple RNN with more than one layer in Pytorch for squential prediction
|
<p>I have sequential time series data. At each time stamp, there is only one variable to observe (if my understanding is correct, this means the number of features = 1). I want to train a simple RNN with more than one layer to predict the next observation.</p>
<p>I created training data using a sliding window, with the window size set to 8. To give a concrete idea, below are my original data, training data and targets.</p>
<p><strong>Sample Data</strong></p>
<p><code>0.40 0.82 0.14 0.01 0.98 0.53 2.5 0.49 0.53 3.37 0.49</code></p>
<p><strong>Training Data</strong></p>
<pre><code>X =
0.40 0.82 0.14 0.01 0.98 0.53 2.5 0.49
0.82 0.14 0.01 0.98 0.53 2.5 0.49 0.53
0.14 0.01 0.98 0.53 2.5 0.49 0.53 3.37
</code></pre>
<p>corresponding targets are</p>
<pre><code>Y =
0.53
3.37
0.49
</code></pre>
<p>I set the batch size to 3. But it gives me an error saying</p>
<p><code>RuntimeError: input.size(-1) must be equal to input_size. Expected 8, got 1</code></p>
<pre><code>import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data
import numpy as np
X = np.array( [ [0.40, 0.82, 0.14, 0.01, 0.98, 0.53, 2.5, 0.49], [0.82, 0.14, 0.01, 0.98, 0.53, 2.5, 0.49, 0.53], [0.14, 0.01, 0.98, 0.53, 2.5, 0.49, 0.53, 3.37] ], dtype=np.float32)
Y = np.array([[0.53], [3.37], [0.49]], dtype=np.float32)
class RNNModel(nn.Module):
def __init__(self, input_sz, n_layers):
super(RNNModel, self).__init__()
self.hidden_dim = 3*input_sz
self.n_layers = n_layers
output_sz = 1
self.rnn = nn.RNN(input_sz, self.hidden_dim, num_layers=n_layers, batch_first=True)
self.linear = nn.Linear(self.hidden_dim, output_sz)
def forward(self,x):
batch_sz = x.size(0)
hidden = torch.zeros(self.n_layers, batch_sz, self.hidden_dim) #initialize n_layer*batch_sz number of hidden states of dimension hidden_dim)
out, hidden = self.rnn(x, hidden)
out = out.contiguous().view(-1, self.hidden_dim)
return out,hidden
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = RNNModel(8,2)
X = torch.tensor(X[:,:,np.newaxis])
Y = torch.tensor(Y[:,:,np.newaxis])
X = X.to(device)
Y = Y.to(device)
model = model.to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = nn.MSELoss()
loader = data.DataLoader(data.TensorDataset(X, Y), shuffle=False, batch_size=3)
n_epoch = 10
for epoch in range(n_epoch):
model.train()
for X_batch, Y_batch in loader:
Y_pred = model(X_batch)
loss = loss_fn(Y_pred,Y_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if epoch % 10 != 0:
continue
model.eval()
with torch.no_grad():
Y_pred = model(X)
train_rmse = np.sqrt(loss_fn(Y_pred,Y))
print("Epoch %d: train RMSE %.4f" % (epoch, train_rmse))
</code></pre>
<p>What am I doing wrong? Can anyone help me?</p>
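A sketch of the shape fix the error points at (the hidden size of 24 is an arbitrary assumption): with one observed variable per time step, <code>input_size</code> must be 1, and the window length of 8 becomes the sequence dimension rather than the feature dimension:

```python
import numpy as np
import torch
import torch.nn as nn

X = np.array([[0.40, 0.82, 0.14, 0.01, 0.98, 0.53, 2.5, 0.49],
              [0.82, 0.14, 0.01, 0.98, 0.53, 2.5, 0.49, 0.53],
              [0.14, 0.01, 0.98, 0.53, 2.5, 0.49, 0.53, 3.37]], dtype=np.float32)
X = torch.tensor(X).unsqueeze(-1)   # (batch=3, seq_len=8, features=1)

# input_size is the per-time-step feature count (1 here), not the window size
rnn = nn.RNN(input_size=1, hidden_size=24, num_layers=2, batch_first=True)
linear = nn.Linear(24, 1)

out, hidden = rnn(X)                # out: (3, 8, 24), one vector per time step
y_pred = linear(out[:, -1, :])      # predict the next value from the last step
print(y_pred.shape)                 # torch.Size([3, 1])
```

Taking only the last time step's output before the linear layer also gives one prediction per window, matching the shape of `Y`.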
|
<python><numpy><machine-learning><pytorch><recurrent-neural-network>
|
2024-01-19 19:36:58
| 3
| 1,596
|
Shew
|
77,848,403
| 11,150,561
|
TPOT best pipeline has no predict_proba() - how to prevent falling over?
|
<p>I am running 5-fold X-validation on a dataset using tpot from a Jupyter notebook:</p>
<pre><code>scores = []
preds = []
actual_labels = []
# Initialise the 5-fold cross-validation
kf = StratifiedKFold(n_splits=5,shuffle=True)
for train_index, test_index in kf.split(X, y):
# Generate the training and test partitions of X and Y for each iteration of CV
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
# TPOT is a AutoML system, that will automatically search for the best pipeline for the task
estimator = TPOTClassifier(generations=5, population_size=50, cv=5, random_state=42, verbosity=2, n_jobs=10)
#As TPOT is a AutoML system, it does its own process of tuning rather than using grid search
estimator.fit(X_train, y_train)
# Predicting the test data with the optimised models
predictions = estimator.predict(X_test)
score = metrics.f1_score(y_test, predictions)
scores.append(score)
# Extract the probabilities of predicting the 2nd class, which will use to generate the PR curve
probs = estimator.predict_proba(X_test)[:,1]
preds.extend(probs)
actual_labels.extend(y_test)
</code></pre>
<p>In one of the 5 runs the best pipeline is:</p>
<pre><code>Best pipeline: SGDClassifier(ZeroCount(input_matrix), alpha=0.001, eta0=0.01, fit_intercept=False, l1_ratio=0.0, learning_rate=invscaling, loss=squared_hinge, penalty=elasticnet, power_t=1.0)
</code></pre>
<p>Because the loss is 'squared hinge', it has no predict_proba() attribute and the whole process falls over. If I was to build the classifier by hand, I understand that I'd need to change the loss to e.g. 'modified_huber', but how can I prevent tpot from falling over because of this?
<a href="https://i.sstatic.net/Fz8Dj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fz8Dj.png" alt="enter image description here" /></a></p>
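One defensive sketch (not TPOT-specific, and the sigmoid mapping is a crude stand-in, not calibrated probabilities): check whether the fitted estimator exposes <code>predict_proba</code> and fall back to <code>decision_function</code> scores when it does not:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=200, random_state=0)
clf = SGDClassifier(loss="squared_hinge", random_state=0).fit(X, y)

# squared_hinge has no predict_proba; hasattr is False for it, so fall back
# to decision_function scores squashed through a sigmoid as pseudo-probabilities
if hasattr(clf, "predict_proba"):
    probs = clf.predict_proba(X)[:, 1]
else:
    scores = clf.decision_function(X)
    probs = 1.0 / (1.0 + np.exp(-scores))

print(probs.shape)  # (200,)
```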
|
<python><loss-function><tpot><sgdclassifier>
|
2024-01-19 19:28:14
| 0
| 389
|
Reader 123
|
77,848,382
| 11,618,586
|
labeling segments of cyclical data based on multiple conditions
|
<p>This is a follow-up on a <a href="https://stackoverflow.com/questions/77842667/labeling-segments-of-data-based-on-multiple-conditions">question</a> I posted that a member graciously provided the solution for. However, I had neglected to account for periodicity in my question.</p>
<p>I have a dataframe like so:</p>
<pre><code>data = {'ID':[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
'column1': [15, 16, 17, 14, 13, 5, 3, 2, 1.9, 1.2, 1, 0.8, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 2, 3, 4, 5, 6],
'column2': [10, 11, 12, 13, 13.5, 14, 14.5, 15, 16, 17, 18, 19, 20, 20, 20, 20, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10]
}
df=pd.DataFrame(data)
ID column1 column2
0 1 15.0 10.0
1 1 16.0 11.0
2 1 17.0 12.0
3 1 14.0 13.0
4 1 13.0 13.5
5 1 5.0 14.0
6 1 3.0 14.5
7 1 2.0 15.0
8 1 1.9 16.0
9 1 1.2 17.0
10 1 1.0 18.0
11 1 0.8 19.0
12 1 0.5 20.0
13 1 0.5 20.0
14 1 0.5 20.0
15 1 0.5 20.0
16 1 0.5 20.0
17 1 0.5 19.0
18 1 0.5 18.0
19 1 0.5 17.0
20 1 0.5 16.0
21 1 1.0 15.0
22 1 2.0 14.0
23 1 3.0 13.0
24 1 4.0 12.0
25 1 5.0 11.0
26 1 6.0 10.0
</code></pre>
<p>As can be seen, <code>column1</code> decreases and then increases, while <code>column2</code> increases and then decreases.
I want to create a label column based on the following conditions:</p>
<ol>
<li>In <code>column1</code>, from the beginning until just before we first encounter a
value <code><=2</code>, <strong>AND</strong> when <code>column2</code> is <code>>13</code>, label as <code>Pre_Start</code></li>
<li>When column 1 is decreasing <strong>AND</strong> <code>0.5 < column1 <= 2</code> <strong>AND</strong> column 2 is increasing <strong>AND</strong> <code>14 < column2 <= 19</code> label as <code>Start</code></li>
<li>When <code>column1 <= 0.5</code> <strong>AND</strong> <code>column2 >= 19</code> label as <code>Steady</code></li>
<li>When <code>column1 <= 0.5</code> <strong>AND</strong> column 2 is decreasing <strong>AND</strong> <code>14 < column2 < 19</code> label as <code>Ramp</code></li>
<li>When column 1 is increasing <strong>AND</strong> <code>column1 > 0.5</code> <strong>AND</strong> column 2 is decreasing <strong>AND</strong> <code>column2 < 19</code> label as <code>End</code></li>
</ol>
<p>I'm not sure if there is ironclad boolean logic that would account for the conditions without looking at whether a column is increasing or decreasing.</p>
<p>Using the method member Corralien provided, based on my question without addressing periodicity, I was able to get the resulting dataframe:</p>
<pre><code>def label_group(df):
c1 = df['column1']
c2 = df['column2']
conds = [(0.5 < c1) & (c1 <= 2) & (c2 > 14) & (c2 <= 19), # Start
(c1 <= 0.5) & (c2 >= 19), # Steady
(c1 <= 0.5) & (14 < c2) & (c2 < 19), # Ramp
(c1 > 0.5) & (c2 < 19)] # End
choices = ['Start', 'Steady', 'Ramp', 'End']
labels = np.select(condlist=conds, choicelist=choices)
labels = pd.Series(labels, index=df.index)
labels[:((c1 <= 2) & (c2 > 13)).argmax()] = 'Pre_Start'
return labels.to_frame()
df['Label'] = df.groupby('ID', group_keys=True).apply(label_group).droplevel('ID')
ID column1 column2 Label
0 1 15.0 10.0 Pre_Start
1 1 16.0 11.0 Pre_Start
2 1 17.0 12.0 Pre_Start
3 1 14.0 13.0 Pre_Start
4 1 13.0 13.5 Pre_Start
5 1 5.0 14.0 Pre_Start
6 1 3.0 14.5 Pre_Start
7 1 2.0 15.0 Start
8 1 1.9 16.0 Start
9 1 1.2 17.0 Start
10 1 1.0 18.0 Start
11 1 0.8 19.0 Start
12 1 0.5 20.0 Steady
13 1 0.5 20.0 Steady
14 1 0.5 20.0 Steady
15 1 0.5 20.0 Steady
16 1 0.5 20.0 Steady
17 1 0.5 19.0 Steady
18 1 0.5 18.0 Ramp
19 1 0.5 17.0 Ramp
20 1 0.5 16.0 Ramp
21 1 1.0 15.0 Start
22 1 2.0 14.0 End
23 1 3.0 13.0 End
24 1 4.0 12.0 End
25 1 5.0 11.0 End
26 1 6.0 10.0 End
</code></pre>
<p>As can be seen, row 21 satisfies the <code>Start</code> condition; however, it should be labeled <code>End</code>, and rows not satisfying any of the conditions should be labeled <code>other</code>.
I tried using the <code>shift()</code> function but to no avail.</p>
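For the increasing/decreasing part specifically, a small sketch (on a made-up mini-series) of the <code>diff()</code>-based direction test that could be added as extra boolean masks in the <code>np.select</code> conditions:

```python
import pandas as pd

# hypothetical mini-series standing in for column1
s = pd.Series([15.0, 14.0, 5.0, 2.0, 0.5, 0.5, 1.0, 3.0])

# diff() compares each row with the previous one; a negative diff means
# "decreasing here" (the first row's NaN compares as False either way)
decreasing = s.diff() < 0
increasing = s.diff() > 0

print(list(decreasing))
```

ANDing `increasing`/`decreasing` into the existing value-range conditions would distinguish the second pass through the same values (e.g. row 21) from the first.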
<p>Any help would be appreciated.</p>
|
<python><python-3.x><pandas>
|
2024-01-19 19:24:54
| 1
| 1,264
|
thentangler
|
77,848,348
| 5,672,025
|
Python instanceof on alias type
|
<p>I have the following types and type alias defined:</p>
<pre class="lang-py prettyprint-override"><code>@unique
class MyEnum(Enum):
A = 1
B = 2
C = 3
D = 4
class A:
pass
class B:
pass
MyAlias = MyEnum | A | B
MyAlias2 = MyAlias | str | int
@dataclass(slots=True)
class A:
type: MyAlias2
</code></pre>
<p>If I do <code>isinstance</code> check like so:</p>
<pre class="lang-py prettyprint-override"><code>a = A()
isinstance(a, MyAlias)
</code></pre>
<p>it returns <code>False</code>.</p>
<p>I understand that this happens because I redefine the class later. I do this because I first need to define the type <code>MyAlias</code> so I can later define <code>MyAlias2</code> and then use it as an attribute annotation inside class <code>A</code>. Obviously, since we're in Python, I could just not annotate the attributes of class <code>A</code> and make a constructor that accepts whatever; but since I'm thinking about portability in the future, and I also want the benefits of type hinting right now, I do it this way. I wonder what I can do to make <code>isinstance</code> report <code>True</code> in such a case, where there's a circular dependency between what is defined first.</p>
|
<python><types>
|
2024-01-19 19:17:36
| 2
| 1,981
|
Jorayen
|
77,848,299
| 2,711,953
|
operator.attrgetter cannot read attributes in a dataclass
|
<p>I have a module that holds a dataclass:</p>
<p>my_module.py</p>
<pre><code>@dataclass
class MyDataClass:
foo: str
bar: str
</code></pre>
<p>I want to use <code>operator.attrgetter</code> to read <code>MyDataClass.foo</code></p>
<pre><code>import package.my_module
from operator import attrgetter
obj = attrgetter('MyDataClass.foo')(package.my_module) # error!
</code></pre>
<p>attrgetter throws <code>AttributeError: type object 'MyDataClass' has no attribute 'foo'</code>.</p>
<p>Is there something else I should be using instead of <code>operator.attrgetter</code>?</p>
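A sketch of why the lookup fails and two ways around it: a dataclass field without a default exists only as a class annotation, so <code>MyDataClass.foo</code> is not an attribute of the class; it only appears on instances (or via <code>dataclasses.fields</code> for class-level introspection):

```python
from dataclasses import dataclass, fields
from operator import attrgetter

@dataclass
class MyDataClass:
    foo: str
    bar: str

# default-less fields are annotations only, not class attributes,
# so attrgetter must be pointed at an instance
obj = MyDataClass(foo="x", bar="y")
assert attrgetter("foo")(obj) == "x"

# to introspect the class itself, dataclasses.fields works instead
assert [f.name for f in fields(MyDataClass)] == ["foo", "bar"]
print("ok")
```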
|
<python>
|
2024-01-19 19:06:29
| 1
| 318
|
wO_o
|
77,848,276
| 10,178,227
|
linear regression output are nonsensical
|
<p>I have a dataset and am trying to fill in the missing values by utilizing a 2D regression to get the slope of the surrounding curves to approximate the missing value. I am not sure if this is the right approach here, but am open to other ideas. Here's my example:</p>
<pre><code>local_window = pd.DataFrame({102.5: {0.021917: 0.0007808776581961896,
0.030136: 0.0009108521507099643,
0.035616: 0.001109650616093018,
0.041095: 0.0013238862647034224,
0.060273: 0.0018552410055933753},
105.0: {0.021917: 0.0008955896980595855,
0.030136: 0.001003244315807649,
0.035616: 0.0011852612740301449,
0.041095: 0.0013952857530607904,
0.060273: 0.0018525880756980716},
107.5: {0.021917: np.nan,
0.030136: 0.0012354997955153118,
0.035616: 0.00140044893559622,
0.041095: 0.0015902024099268574,
0.060273: 0.001973254493672934}})
</code></pre>
<pre><code>def predict_nan_local(local_window):
if not local_window.isnull().values.any():
return local_window
# Extract x and y values for the local window
X_local = local_window.columns.values.copy()
y_local = local_window.index.values.copy()
# Create a meshgrid of x and y values
X_local, y_local = np.meshgrid(X_local, y_local)
# Flatten x and y for fitting the model
X_local_flat = X_local.flatten()
y_local_flat = y_local.flatten()
values_local_flat = local_window.values.flatten()
# Find indices of non-NaN values
non_nan_indices = ~np.isnan(values_local_flat)
# Filter out NaN values
X_local_flat_filtered = X_local_flat[non_nan_indices]
y_local_flat_filtered = y_local_flat[non_nan_indices]
values_local_flat_filtered = values_local_flat[non_nan_indices]
regressor = LinearRegression()
regressor.fit(np.column_stack((X_local_flat_filtered, y_local_flat_filtered)), values_local_flat_filtered)
nan_indices = np.argwhere(np.isnan(local_window.values))
X_nan = local_window.columns.values[nan_indices[:, 1]]
y_nan = local_window.index.values[nan_indices[:, 0]]
# Predict missing value
predicted_values = regressor.predict(np.column_stack((X_nan, y_nan)))
local_window.iloc[nan_indices[:, 0], nan_indices[:, 1]] = predicted_values
return local_window
</code></pre>
<p><a href="https://i.sstatic.net/B5UEd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B5UEd.png" alt="enter image description here" /></a></p>
<p>The output - as you can see - doesn't make a whole lot of sense. Is there anything I am missing?</p>
|
<python><regression>
|
2024-01-19 18:59:40
| 1
| 389
|
yungpadewon
|
77,848,197
| 79,125
|
How do I rewind subprocess's output to capture output on failure?
|
<p>What's a generic way that I can capture output from <code>subprocess</code> <strong>after</strong> reading its output fails?</p>
<p>When I get a UnicodeDecodeError I want to see the original content to reproduce, but I'm not always able to run the command again to get the output.</p>
<p>If I avoid <code>text</code>/<code>universal_newlines</code>, then I can move the failure out of subprocess and more into my code. I can wrap the process's <code>stdout</code> in a <code>TextIOWrapper</code> to control the encoding but how do I see the content that failed to decode? seek isn't working.</p>
<p>So far I have this:</p>
<pre><code>import io
import logging
import pprint
import shutil
import subprocess
import tempfile
def run_cmd(cmd):
ps = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=None, text=False)
data = None
# Decode with a wrapper instead of text to force utf-8 instead of local
# encoding (cp1252 on windows).
output = io.TextIOWrapper(ps.stdout, encoding="utf-8")
try:
data = output.read()
except UnicodeDecodeError:
with tempfile.NamedTemporaryFile(delete=False) as f:
logging.warning("UnicodeDecodeError.")
output.seek(0)
shutil.copyfileobj(output, f)
logging.error("UnicodeDecodeError. Wrote input to file: %s", f.name)
raise
return ps.wait(), data
pprint.pprint(run_cmd(["dir"])) # My command actually contacts a server and sometimes fails.
</code></pre>
<p>But the log file is always empty. Is there a reason why <code>seek</code> isn't applicable in this case? <a href="https://docs.python.org/3/library/io.html#io.IOBase.seekable" rel="nofollow noreferrer">It should raise an error if unsupported</a>.</p>
<p>My specific error is: <code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0x97 in position 337: invalid start byte</code>, but I'm hoping for a way I can tackle this problem every time I encounter it.</p>
<p><em>This question is about unicode errors from <code>subprocess</code> and not parsing a specific file like <a href="https://stackoverflow.com/questions/46180610/python-3-unicodedecodeerror-how-do-i-debug-unicodedecodeerror">Python 3 UnicodeDecodeError - How do I debug UnicodeDecodeError?</a></em></p>
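One alternative sketch (the dump filename is an assumption): since a pipe is not seekable, capture the raw bytes exactly once and decode them in a second step, keeping the undecodable bytes in hand for the error report:

```python
import subprocess
import sys

# a pipe cannot be rewound, so read the raw bytes once and decode afterwards;
# on a decode failure the original bytes are still available to dump
def run_cmd(cmd):
    ps = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    raw = ps.stdout.read()              # bytes survive any encoding problem
    ret = ps.wait()
    try:
        data = raw.decode("utf-8")
    except UnicodeDecodeError:
        with open("bad_output.bin", "wb") as f:  # hypothetical dump path
            f.write(raw)
        raise
    return ret, data

ret, data = run_cmd([sys.executable, "-c", "print('hello')"])
print(ret, data.strip())
```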
|
<python><io><subprocess>
|
2024-01-19 18:43:02
| 0
| 12,232
|
idbrii
|
77,848,047
| 1,938,410
|
cannot import name '_imaging' from 'PIL' when using Pillow on AWS Lambda
|
<p>I uploaded the Pillow package into AWS Lambda and added it as a layer, but the execution of the lambda function gives the <code>cannot import name '_imaging' from 'PIL' (/opt/python/PIL/__init__.py)</code> error.</p>
<p>What I have done:</p>
<ol>
<li><code>pip install pillow -t .</code> and put <code>PIL</code> and <code>pillow-10.2.0.dist-info</code> folder into folder <code>python</code> and make a zip, and upload the zip into Lambda function as a Layer. Selected runtime as Python 3.12.</li>
<li>In my lambda function, I added this layer and set function runtime as 3.12 as well.</li>
<li>included the code <code>from PIL import Image</code> and it gives me this error.</li>
</ol>
<p>I've done some searching, but all the solutions I've found are very old and not relevant. Any help is appreciated!</p>
|
<python><amazon-web-services><aws-lambda><python-imaging-library>
|
2024-01-19 18:12:33
| 1
| 507
|
SamTest
|
77,847,983
| 1,397,946
|
Processing requests in FastAPI sequentially while staying responsive
|
<p>My server exposes an API for resource-intensive rendering work. The job it does involves a GPU, and as such the server can handle only a single request at a time. Clients should submit a job and receive <code>201</code> - ACCEPTED - as a response immediately after. The processing can take up to a minute and there can be a few dozen requests scheduled.</p>
<p>Here's what I came up with, boiled to a minimal reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>import time
import asyncio
from fastapi import FastAPI, status
app = FastAPI()
fifo_queue = asyncio.Queue()
async def process_requests():
while True:
name = await fifo_queue.get() # Wait for a request from the queue
print(name)
time.sleep(10) # A RESOURCE INTENSIVE JOB THAT BLOCKS THE THREAD
fifo_queue.task_done() # Indicate that the request has been processed
@app.on_event("startup")
async def startup_event():
asyncio.create_task(process_requests()) # Start the request processing task
@app.get("/render")
async def render(name):
fifo_queue.put_nowait(name) # Add the request parameter to the queue
return status.HTTP_201_CREATED # Return a 201 status code
</code></pre>
<p>The problem with this approach is that the server does not stay responsive. After receiving the first request it is fully occupied with it and does not respond to further requests as I had hoped.</p>
<pre><code>curl http://127.0.0.1:8000/render\?name\=001
</code></pre>
<p>In this example simply replacing <code>time.sleep(10)</code> with <code>await asyncio.sleep(10)</code> solves the problem, but not in the real use case (though it possibly offers a clue as to what I am doing incorrectly).</p>
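<p>For reference, a minimal asyncio-only sketch (no FastAPI) of one way the blocking job could be offloaded to a worker thread with <code>asyncio.to_thread</code> so the event loop stays free — the names and timings here are illustrative only:</p>

```python
import asyncio
import time

async def worker(queue: asyncio.Queue, done: list):
    while True:
        name = await queue.get()
        # run the blocking job in a thread; the event loop keeps running
        await asyncio.to_thread(time.sleep, 0.2)
        done.append(name)
        queue.task_done()

async def main():
    queue = asyncio.Queue()
    done = []
    task = asyncio.create_task(worker(queue, done))
    queue.put_nowait("001")
    ticks = 0
    while not done:           # this loop only progresses if the loop is free
        await asyncio.sleep(0.01)
        ticks += 1
    task.cancel()
    return done, ticks

done, ticks = asyncio.run(main())
```

<p>With the blocking call inlined instead of wrapped in <code>to_thread</code>, the polling loop would not tick at all until the job finished.</p>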
<p>Any ideas?</p>
|
<python><concurrency><python-asyncio><fastapi>
|
2024-01-19 18:00:12
| 3
| 11,517
|
Lukasz Tracewski
|
77,847,952
| 1,711,271
|
Find a substring of 4 consecutive digits and a substring of 3 consecutive digits in a string
|
<p>I have strings of the type:</p>
<pre><code>1432_ott_457_blusp
312_fooob_bork_1234
broz_901_6453
kkhas_1781_LET_GROK_234
1781_234_kkhas
</code></pre>
<p>etc. In other words, each string contains multiple substrings delimited by <code>_</code>. The total number of substrings is variable. I look for a substring containing 4 digits, and another one containing 3 digits. As you can see, the two substrings can be located anywhere inside the string. A solution such as</p>
<pre><code>import re
three_digits = re.findall(r'\d{3}', s)  # s is the input string
</code></pre>
<p>doesn't work, because it will match both the 3 digits substring and the first 3 digits of the 4 digits one. A solution which assumes that both the 3 digits and the 4 digits substring exist is fine, but one that checks this precondition would be even better.</p>
|
<python><string><substring>
|
2024-01-19 17:51:03
| 3
| 5,726
|
DeltaIV
|
77,847,901
| 23,260,297
|
Group data from a dataframe into a separate dataframe
|
<p>I need to group rows in a dataframe on 3 conditions, and take each grouping and create a new dataframe. I am unfamiliar with the groupby function in pandas, and all the examples I have come across have been no help. Is groupby the right approach for this kind of solution, or is there a better way to achieve this?</p>
<p>My data looks like this:</p>
<pre class="lang-none prettyprint-override"><code>ID Date Product Type Cost Quantity
---- ----- ------- ------ ------ --------
1 1/1/2023 Bike Buy 10.00 1
2 1/1/2023 Bike Buy 10.00 1
3 1/1/2023 Bike Sell 10.00 1
4 1/1/2023 Bike Sell 10.00 1
5 1/2/2023 Car Sell 20.00 1
6 1/2/2023 Car Sell 20.00 1
7 1/3/2023 Truck Buy 30.00 1
8 1/3/2023 Truck Buy 30.00 1
</code></pre>
<p>My expected output needs to be like this, where each data structure holds data with the same products, same type, and on the same date:</p>
<pre class="lang-none prettyprint-override"><code>ID Date Product Type Cost Quantity
---- ----- ------- ------ ------ --------
1 1/1/2023 Bike Buy 10.00 1
2 1/1/2023 Bike Buy 10.00 1
ID Date Product Type Cost Quantity
---- ----- ------- ------ ------ --------
3 1/1/2023 Bike Sell 10.00 1
4 1/1/2023 Bike Sell 10.00 1
ID Date Product Type Cost Quantity
---- ----- ------- ------ ------ --------
5 1/2/2023 Car Sell 20.00 1
6 1/2/2023 Car Sell 20.00 1
ID Date Product Type Cost Quantity
---- ----- ------- ------ ------ --------
7 1/3/2023 Truck Buy 30.00 1
8 1/3/2023 Truck Buy 30.00 1
</code></pre>
<p>I am using Python pandas.</p>
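<p>A sketch of how grouping on the three keys yields the separate frames (column values as in the sample above; collecting them into a dict is just one option):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 2, 3, 4, 5, 6, 7, 8],
    "Date": ["1/1/2023"] * 4 + ["1/2/2023"] * 2 + ["1/3/2023"] * 2,
    "Product": ["Bike"] * 4 + ["Car"] * 2 + ["Truck"] * 2,
    "Type": ["Buy", "Buy", "Sell", "Sell", "Sell", "Sell", "Buy", "Buy"],
})

# one sub-DataFrame per (Date, Product, Type) combination
groups = {key: g for key, g in df.groupby(["Date", "Product", "Type"])}
```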
|
<python><pandas>
|
2024-01-19 17:41:36
| 1
| 2,185
|
iBeMeltin
|
77,847,816
| 13,642,459
|
Use dask to map over an array and return a dataframe
|
<p>I am using dask and zarr to operate over some very large images.</p>
<p>I have a pipeline set up that performs some transformations to these images and I would then like to measure properties of the image using the regionprops and regionprops_table functions from skimage. This takes a matrix as an input and returns a data frame. I can't use map_overlap as this requires a matrix to be returned, but I would like something semantically similar to this:</p>
<pre><code>import numpy as np
import dask.array as da
import pandas as pd
from skimage.measure import regionprops_table  # used below
mask = np.zeros((1000, 1000), dtype=int)
mask[100:200, 100:200] = 1
mask[300:400, 300:400] = 2
mask[500:600, 500:600] = 3
mask = da.from_array(mask, chunks=(200, 200))
def get_data_frame(mask):
res = regionprops_table(mask, properties=('label', 'area', 'eccentricity'))
df = pd.DataFrame(res)
return df
mask.map_overlap(get_data_frame, depth=50, boundary=None).compute()
</code></pre>
<p>It could return either a pandas data frame or a dask data frame, but I would like each chunk to be dealt with in parallel.</p>
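<p>Conceptually this is "map each chunk to a DataFrame, then concatenate". A dask-free sketch of that shape using a thread pool, with a toy per-chunk measurement standing in for skimage's <code>regionprops_table</code> (all names here are illustrative):</p>

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np
import pandas as pd

def measure_chunk(chunk: np.ndarray) -> pd.DataFrame:
    # stand-in for regionprops_table: pixel count per nonzero label
    labels, counts = np.unique(chunk[chunk > 0], return_counts=True)
    return pd.DataFrame({"label": labels, "area": counts})

mask = np.zeros((8, 8), dtype=int)
mask[0:4, 0:4] = 1
mask[4:8, 4:8] = 2

# chunk the array 4x4 and process the chunks in parallel
chunks = [mask[r:r + 4, c:c + 4] for r in (0, 4) for c in (0, 4)]
with ThreadPoolExecutor() as pool:
    frames = list(pool.map(measure_chunk, chunks))
result = pd.concat(frames, ignore_index=True)
```

<p>Within dask itself, a similar lazy pattern is usually expressed with <code>dask.delayed</code> on each block plus <code>dask.dataframe.from_delayed</code>.</p>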
|
<python><dask>
|
2024-01-19 17:24:26
| 2
| 456
|
Hugh Warden
|
77,847,798
| 1,473,517
|
How to find an optimal shape with sections missing
|
<p>Given an n by n matrix of integers, I want to find a part of the matrix that has the maximum sum with some restrictions. In the version I can solve, I am allowed to draw two lines and remove everything below and/or to the right. These can be one horizontal line and one diagonal line at 45 degrees (going up and right). For example with this 10 by 10 matrix:</p>
<pre><code> [[ 1, -3, -2, 2, -1, -3, 0, -2, 0, 0],
[-1, 3, 3, -3, 0, -1, 0, 0, -2, -2],
[-1, 0, -1, 0, 2, 1, 1, -3, 2, 1],
[-3, 1, -3, -1, 1, -3, -2, -1, -3, 1],
[ 1, -3, 1, -2, 2, 1, -3, 2, -3, 0],
[-1, -2, 0, -2, 2, -3, 3, -1, -1, 2],
[ 2, 2, -3, -1, 0, -1, 2, 0, 3, 0],
[-1, 3, -1, 1, -1, 0, 0, 3, -3, 0],
[ 3, 2, 1, 1, 2, 3, 0, 2, 0, -3],
[ 0, 3, 2, 0, -1, -2, 3, -3, -3, 1]]
</code></pre>
<p>An optimal sum is 3 which you get by the shaded area here:</p>
<p><a href="https://i.sstatic.net/E0xyu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E0xyu.png" alt="enter image description here" /></a></p>
<p>If <code>square</code> contains the 2d array then this code will find the location of the places where the bottom row should end to get the maximum sum:</p>
<pre><code>dim = square.shape[0]  # square holds the 2d array
max_sums = np.empty_like(square, dtype=np.int_)
max_sums[0] = np.cumsum(square[0])
for row_idx in range(1, dim):
cusum = np.cumsum(square[row_idx])
for col_idx in range(dim):
if col_idx < dim - 1:
max_sums[row_idx, col_idx] = cusum[col_idx] + max_sums[row_idx - 1, col_idx + 1]
else:
max_sums[row_idx, col_idx] = cusum[col_idx] + max_sums[row_idx - 1, col_idx]
maxes = np.argwhere(max_sums==max_sums.max()) # Finds all the locations of the max
print(f"The coordinates of the maximums are {maxes} with sum {np.max(max_sums)}")
</code></pre>
<h1>Problem</h1>
<p>I would like to be able to leave out regions of the matrix defined by consecutive rows when finding the maximum sum. In the simpler version that I can solve I am only allowed to leave out rows at the bottom of the matrix. That is one region in the terminology we are using. The added restriction is that I am only allowed to leave out two regions at most. For the example given above, let us say we leave out the first two rows and rows 3 to 5 (indexing from 0) and rerun the code above. We then get a sum of 26.</p>
<p>Excluded regions must exclude entire rows, not just parts of them.</p>
<p><a href="https://i.sstatic.net/SDSUC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SDSUC.png" alt="enter image description here" /></a></p>
<p>I would like to do this for much larger matrices. How can I solve this problem?</p>
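<p>One baseline worth stating explicitly: since exclusions are whole-row intervals and there are at most two of them, the intervals can be enumerated outright and the kept rows scored each time. In this sketch a trivial row-sum score stands in for the diagonal-cut computation above, and the helper name is hypothetical:</p>

```python
import itertools

import numpy as np

def best_with_two_exclusions(matrix: np.ndarray, score) -> float:
    n = matrix.shape[0]
    # all row intervals [a, b), plus one empty interval
    intervals = [(a, b) for a in range(n) for b in range(a + 1, n + 1)] + [(0, 0)]
    best = -np.inf
    for (a1, b1), (a2, b2) in itertools.product(intervals, repeat=2):
        if b1 > a2 and b2 > a1:
            continue  # overlapping non-empty intervals
        keep = np.ones(n, dtype=bool)
        keep[a1:b1] = False
        keep[a2:b2] = False
        best = max(best, score(matrix[keep]))
    return best

# toy score: plain sum of the kept entries (the real score would be the DP above)
m = np.array([[5.0], [-3.0], [2.0], [-4.0], [1.0]])
best = best_with_two_exclusions(m, lambda kept: kept.sum())
```

<p>This is O(n^4) interval pairs times the score cost, so it only serves as a correctness reference for larger, faster approaches.</p>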
|
<python><algorithm><performance><optimization>
|
2024-01-19 17:19:09
| 1
| 21,513
|
Simd
|
77,847,632
| 13,147,413
|
GroupBy + custom mode function in Polars
|
<p>I need to apply a "custom" mode function to a groupBy object.
I'm able to groupBy and apply mode:</p>
<pre><code> df.with_columns(pl.col("X").mode().over(['Y', 'Z']).name.prefix("mode_"))
</code></pre>
<p>and that kind of approach gives me this kind of error:</p>
<pre><code> ComputeError: the length of the window expression did not match that of the group
</code></pre>
<p>due to the fact that some values of the "X" col are null (can't drop them).</p>
<p>I'm heading towards a custom mode function that returns the mode whenever possible and None otherwise. Something like this:</p>
<pre><code> def custom_mode(x):
return x.mode().iloc[0] if not x.mode().empty else None
</code></pre>
<p>but I'm open to different and smarter approaches.</p>
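<p>For what it's worth, the null-safe fallback itself can be sketched in plain Python — a <code>Counter</code>-based mode that returns <code>None</code> when there is nothing to count. This is illustrative only; the real version would live inside a polars expression:</p>

```python
from collections import Counter

def safe_mode(values):
    # ignore nulls, then take the most common remaining value
    counted = Counter(v for v in values if v is not None)
    if not counted:
        return None
    return counted.most_common(1)[0][0]
```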
<p>Any help would be very much appreciated!</p>
|
<python><dataframe><python-polars>
|
2024-01-19 16:49:26
| 1
| 881
|
Alessandro Togni
|
77,847,612
| 4,436,572
|
Python not recognising Pyo3 class
|
<p>I am new to Rust and I was trying <code>pyo3</code> today. The following is my naive example:</p>
<pre><code>use pyo3::prelude::*;
#[pyclass]
struct TestClass {
symbol: i32,
}
#[pymethods]
impl TestClass {
#[new]
fn new(symbol: i32) -> PyResult<Self> {
Ok(TestClass { symbol })
}
#[staticmethod]
fn print_symbol(value: i32) -> PyResult<()> {
print!("print value passed in: {} \n", value);
Ok(())
}
}
/// A Python module implemented in Rust.
#[pymodule]
fn test_pyo3(_py: Python, m: &PyModule) -> PyResult<()> {
m.add_class::<TestClass>()?;
Ok(())
}
</code></pre>
<p>My <code>Cargo.toml</code> looks like this:</p>
<pre><code>[package]
name = "test_pyo3"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[lib]
name = "test_pyo3"
crate-type = ["cdylib"]
[dependencies]
pyo3 = { version = "0.20.0", features = ["extension-module"] }
</code></pre>
<p>In Python I was doing:</p>
<pre><code>from test_pyo3 import TestClass
t = TestClass(23)
t.print_symbol(50)
# output
# print value passed in: 50
</code></pre>
<p>In PyCharm, I was trying to check the Python implementation (if possible) for the <code>TestClass</code>; however I was getting: <code>Cannot find reference 'TestClass' in __init__.py</code>. I understand that there is no <code>__init__.py</code> here as there would be if I were coding in pure Python; however I was expecting <code>pyo3</code> to generate all this (or at least make the generated Python interface available).</p>
<p>Why can't Python find the reference for <code>TestClass</code> in this case?</p>
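<p>(For context: pyo3 builds a compiled extension module, not Python source, so there is no <code>__init__.py</code> for the IDE to read. One common workaround is a hand-written type stub next to the module — a hypothetical <code>test_pyo3.pyi</code> sketch matching the Rust code above:)</p>

```python
class TestClass:
    def __init__(self, symbol: int) -> None: ...
    @staticmethod
    def print_symbol(value: int) -> None: ...
```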
|
<python><rust><pyo3>
|
2024-01-19 16:46:29
| 0
| 1,288
|
stucash
|
77,847,460
| 1,711,271
|
split a column in multiple columns, but only keep two of them
|
<p>I have the following dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'AB': ['A1_B1_C1', 'A2_B2_C2']})
</code></pre>
<p>I want to split the column <code>AB</code> on the first two occurrences of delimiter <code>_</code>, but only keep the first two columns. In other words, the output has to be</p>
<pre><code> AB A B
0 A1_B1_C1 A1 B1
1 A2_B2_C2 A2 B2
</code></pre>
<p>Currently, I can do it with</p>
<pre><code>df[['A','B', 'C']]=df['AB'].str.split('_',n=2,expand=True)
df = df.drop(columns='C')
</code></pre>
<p>But it seems wasteful. Any options where I don't need to create a column that I then have to drop?</p>
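<p>One option that avoids the throwaway column is <code>str.extract</code> with two capture groups — equivalent here, assuming the first two fields never contain <code>_</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({"AB": ["A1_B1_C1", "A2_B2_C2"]})
# two capture groups -> two columns, no third column to drop
df[["A", "B"]] = df["AB"].str.extract(r"^([^_]+)_([^_]+)")
```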
|
<python><pandas><split>
|
2024-01-19 16:20:26
| 1
| 5,726
|
DeltaIV
|
77,847,217
| 1,028,270
|
How do I properly deal with console_scripts in a docker build
|
<p>I have an entrypoint in my setup.cfg:</p>
<pre><code>[options.entry_points]
console_scripts =
mycmd = my_proj.main:program.run
</code></pre>
<p>main.py</p>
<pre><code>from invoke import Collection, Program
from my_proj import jobs
program = Program(namespace=Collection.from_module(jobs), version="1.2.3")
</code></pre>
<p>This is my project structure:</p>
<pre><code>my_proj
main.py
jobs/
myjob.py
log_conf
logger.ini
logsetup.py
</code></pre>
<p>If I follow best practices in my Dockerfile it does not work:</p>
<pre><code>RUN pip install wheel
COPY setup.cfg setup.py pyproject.toml MANIFEST.in ./
RUN --mount=type=cache,target=/root/.cache \
pip install '.'
COPY ./my_proj ./my_proj
</code></pre>
<p>If I run mycmd in the container I get this error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/bin/mycmd", line 5, in <module>
from my_proj.main import program
ModuleNotFoundError: No module named 'my_proj'
</code></pre>
<p>I also tried just copying the entrypoint: <code>COPY my_proj/main.py ./my_proj/main.py</code></p>
<p>But then I get <code>pip install ImportError: cannot import name 'jobs' from 'myjob' (unknown location)</code></p>
<p>The only solution was to <code>COPY ./my_proj ./my_proj</code> <em>before</em> running install which stinks because now I need to pip install <em>every</em> time I rebuild.</p>
<p>I'm not sure if this is a pyinvoke issue or a pip issue, how do I avoid having to copy my entire project into the image before running <code>pip install</code>?</p>
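<p>(One commonly suggested shape, sketched here with assumed file names: install only the third-party dependencies in the cached layer, then copy the source and install the package itself with <code>--no-deps</code>, which is cheap to re-run:)</p>

```dockerfile
# cached layer: third-party deps only (a requirements.txt is assumed to exist)
COPY requirements.txt ./
RUN --mount=type=cache,target=/root/.cache \
    pip install -r requirements.txt

# cheap layer: the project itself, re-run on every source change
COPY setup.cfg setup.py pyproject.toml MANIFEST.in ./
COPY ./my_proj ./my_proj
RUN pip install --no-deps .
```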
|
<python><pip><pyinvoke>
|
2024-01-19 15:35:51
| 0
| 32,280
|
red888
|
77,847,091
| 2,343,977
|
MoviePy crossfaadein transition
|
<p>I'm trying to work with <a href="https://pypi.org/project/moviepy/" rel="nofollow noreferrer">python moviepy</a></p>
<p>I found a snippet online to apply a transition when I concatenate videos; however, it doesn't seem to work for me. What am I doing wrong?
This is an extract of the code:</p>
<pre><code>clip1 = VideoFileClip("out.mp4")
clip2 = VideoFileClip("out.mp4")
clips = [clip1, clip2]
fade_duration = 1
clips = [clip.crossfadein(fade_duration) for clip in clips]
final_clip = concatenate_videoclips(clips, padding = -fade_duration)
final_clip.write_videofile("final.mp4")
</code></pre>
<p>If I run the script I receive the error: <code>VideoFileClip don't have the method crossfadein</code></p>
|
<python><moviepy>
|
2024-01-19 15:16:33
| 1
| 642
|
Alessandro Candon
|
77,846,959
| 3,585,934
|
How to mock a third-pary class with pytest & mock
|
<p>my test code:</p>
<pre class="lang-py prettyprint-override"><code>from unittest.mock import patch
import pytest
from journalwatcher.journal import read_for_test
@patch("systemd.journal.Reader")
def test_read(Reader):
read_for_test(this_boot_only=True)
Reader.this_boot.assert_called_once()
</code></pre>
<p>which is executed by pytest.</p>
<p>My to-be-tested code</p>
<pre class="lang-py prettyprint-override"><code>import systemd.journal
def read_for_test(this_boot_only=False):
j = systemd.journal.Reader()
if this_boot_only:
j.this_boot()
</code></pre>
<p>Executing the code always gives the test error that <code>this_boot</code> was not called. Debugging reveals that <code>j</code> is a <code>MagicMock</code> instance, but a different one (<code>id</code> value) than <code>Reader</code>.</p>
<p>Obviously, I am doing something wrong here, but what?</p>
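<p>For comparison, a self-contained sketch showing that the patched class's <em>instance</em> is <code>Reader.return_value</code> — a fake <code>systemd.journal</code> module is constructed here because the real package may not be importable:</p>

```python
import sys
import types
from unittest.mock import patch

# stand-in module tree mimicking systemd.journal (assumption: real package absent)
systemd = types.ModuleType("systemd")
systemd.journal = types.ModuleType("systemd.journal")
class Reader:
    def this_boot(self):
        pass
systemd.journal.Reader = Reader
sys.modules["systemd"] = systemd
sys.modules["systemd.journal"] = systemd.journal

def read_for_test(this_boot_only=False):
    import systemd.journal
    j = systemd.journal.Reader()
    if this_boot_only:
        j.this_boot()

with patch("systemd.journal.Reader") as MockReader:
    read_for_test(this_boot_only=True)
    # the created instance is MockReader.return_value, not MockReader itself
    MockReader.return_value.this_boot.assert_called_once()
```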
|
<python><mocking><pytest>
|
2024-01-19 14:53:57
| 1
| 625
|
Horus
|
77,846,918
| 839,119
|
Sympy: hierarchically collect an equation with more than one factor expression?
|
<p>Can we collect/factor a symbolic equation using sympy with more than one factor expression (and hierarchically)?
I provide the following example: a long calculation resulted in the expanded equation <code>f</code>. In my computation, equations are larger and more complex.</p>
<pre><code>import sympy
a,b,m,n = sympy.symbols('a b m n')
f = -a**2*b*m**2 - a**2*b*m*n - a**2*b*m + a**2*m**2*n + a**2*m**2 + 2*a**2*m + a**2*n**2 - a*b*m - a*b*n + 5*a*m + a*n**2 + a*n + b*m**2 - 2*b*m*n - b*m - b*n**2 - m**2 + 2*m*n + n**2 + 1
g=sympy.collect(f,(a**2,1-b))
</code></pre>
<p>I expect a function (collect?) to hierarchically aggregate terms around <code>a**2</code> first, and then <code>(1-b)</code>, so that <code>f</code> is written as equation <code>g</code> below</p>
<pre><code>g = a**2 * ( (1-b) * (m+n*m+m**2)+m+n**2-m*n+m**2*n)+a*((1-b)*(m+n)+n**2+4*m)+(1-b) * (m+n**2-m**2+2*m*n) +1 - m
</code></pre>
<p>where the equation is factorized/grouped around a**2 then (1-b)</p>
<p>Of course, f = g, which can be verified in my example case:</p>
<pre><code>h = sympy.simplify(f-g)
print(h)
</code></pre>
<p><em>Edit:</em>
<strong>More explanations of the compact but efficient answer from <em>smichr</em></strong></p>
<pre><code>import sympy
a,b,m,n, y = sympy.symbols('a b m n y')
f1 = -a**2*b*m**2 - a**2*b*m*n - a**2*b*m + a**2*m**2*n + a**2*m**2 + 2*a**2*m + a**2*n**2 - a*b*m - a*b*n + 5*a*m + a*n**2 + a*n + b*m**2 - 2*b*m*n - b*m - b*n**2 - m**2 + 2*m*n + n**2 + 1
g1=sympy.collect(f1,(a**2,1-b))
f1_1 = f1.subs(b,1-y) # replace b with 1-y; the trick is that it seems easier for sympy to substitute one term of the original equation with an expression, since I wanted to factorise by y = 1-b <-> b = 1-y
f1_2 = f1_1.expand() # expand the products involving y to ease the collect below
f1_3 = sympy.collect(f1_2, a) # collect the expanded equation by a to get the factors a**2 and a
f1_4 = sympy.collect(f1_3,y) # collect the already-factorised-by-a equation by y; this is the trick to hierarchically collect by two factors
f1_5 = f1_4.subs(y,1-b) # change y back to its initial term 1-b
# last trick: to factorise by the exact factor (and not its powers), pass exact=True to sympy.collect
print(f1_5)
</code></pre>
<p>It results in:</p>
<pre><code>a**2 * [m**2*n - m*n + m + n**2 + (1 - b) * (m**2 + m*n + m)] +
... a * [4*m + n**2 + (1 - b)*(m + n)] +
... (1 - b)*(-m**2 + 2*m*n + m + n**2) + 1 - m
</code></pre>
<p>This is hierarchically factorised first by a (including a² and a) and then by (1-b). Thanks to smichr and Stack Overflow.</p>
|
<python><sympy>
|
2024-01-19 14:45:45
| 1
| 1,500
|
sol
|
77,846,901
| 21,244,591
|
Django locale .po files not showing up in locales folder
|
<p>I have been running into an issue setting up locales for my Django app. When I run the <code>django-admin</code>'s <code>makemessages</code> command no resulting <code>.po</code> files show up in my <code>{project_root}/locales</code> folder.</p>
<p>Here are the steps I followed to setup locales:</p>
<p><strong>Created locale/ folder in project root</strong></p>
<p><strong>Changes to settings.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from os.path import join as osjoin
MIDDLEWARE = [
...
'django.middleware.locale.LocaleMiddleware',
...
]
USE_I18N = True
LOCALE_PATHS = [
osjoin(BASE_DIR, 'locale'),
]
</code></pre>
<p><strong>Solved missing dependencies</strong>
Had to install <a href="https://mlocati.github.io/articles/gettext-iconv-windows.html" rel="noreferrer">GNU gettext (for windows)</a> and added it to system PATH.</p>
<p><strong>Commands I ran</strong></p>
<pre class="lang-bash prettyprint-override"><code>python manage.py makemessages -l fr
</code></pre>
<p>also tried:</p>
<pre class="lang-bash prettyprint-override"><code>django-admin makemessages -l fr
</code></pre>
<p>I also tried with the <code>fr_FR</code> tag, didn't change anything.</p>
<p>both result in this output message but no files added to <code>locales/</code>:</p>
<pre><code>processing locale fr
</code></pre>
<p>I also superstitiously tried to run the command as admin but didn't change much.</p>
<p>I am also aware of <a href="https://stackoverflow.com/questions/24611112/django-do-not-create-locale-po-file-for-my-project">these answers</a> to the same question (9 years ago) but the answers didn't help.</p>
|
<python><django><locale><gettext>
|
2024-01-19 14:43:54
| 3
| 366
|
Olivier Neve
|
77,846,896
| 10,966,677
|
Pandas conditional groupby: aggregation over partition using predicate
|
<p>I have had this task several times, so I am wondering whether it is possible to aggregate in an OVER PARTITION BY style while using a predicate.</p>
<p>There are dozens of examples on SO of <code>groupby</code> and window functions, and quite a few using a condition, but none where the condition depends on the values of the column (i.e. not based on a literal/constant value).</p>
<p>Consider this example dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data={
'id': ['0001', '0001', '0001', '0002', '0002', '0003', '0003', '0004'],
'days': [2, 5, 6, 3, 5, 4, 6, 8],
'amount': [100, 150, 150, 200, 100, 300, 250, 200]
})
print(df)
id days amount
0 0001 2 100
1 0001 5 150
2 0001 6 150
3 0002 3 200
4 0002 5 100
5 0003 4 300
6 0003 6 250
7 0004 8 200
</code></pre>
<p>I would like the window sum of <code>amount</code> grouped by a condition on <code>days</code>. This condition might be less-than-or-equal on <code>days</code>: given a number of <code>days</code>, calculate the sum of <code>amount</code> for all <code>days</code> up to the value of <code>days</code> in that row, i.e. <code>df['days'] <= i</code> for every row <code>i</code>.</p>
<p>I couldn't find any other solution other than using a loop:</p>
<pre><code>df['run_sum_le'] = 0
for i in df.sort_values('days', ascending=False)['days'].unique():
le = df['days'] <= i
df.loc[le, 'run_sum_le'] = df[le]['amount'].sum()
print(df)
id days amount run_sum_le
0 0001 2 100 100
1 0001 5 150 850
2 0001 6 150 1250
3 0002 3 200 300
4 0002 5 100 850
5 0003 4 300 600
6 0003 6 250 1250
7 0004 8 200 1450
</code></pre>
<p>Note that the correct ordering of the iterator is necessary.</p>
<p>Another example for more clarity: let us calculate the sum of <code>amount</code> using the condition greater equal and equal, i.e. sum <code>amount</code> using <code>df['days'] >= i</code> and <code>df['days'] == i</code> for every row <code>i</code>.</p>
<pre><code>df['run_sum_eq'] = 0
df['run_sum_ge'] = 0
for i in df.sort_values('days', ascending=True)['days'].unique():
eq = df['days'] == i
ge = df['days'] >= i
df.loc[eq, 'run_sum_eq'] = df[eq]['amount'].sum()
df.loc[ge, 'run_sum_ge'] = df[ge]['amount'].sum()
print(df)
id days amount run_sum_le run_sum_eq run_sum_ge
0 0001 2 100 100 100 1450
1 0001 5 150 850 250 850
2 0001 6 150 1250 400 600
3 0002 3 200 300 200 1350
4 0002 5 100 850 250 850
5 0003 4 300 600 300 1150
6 0003 6 250 1250 400 600
7 0004 8 200 1450 200 200
</code></pre>
<p>Note 1: the ordering in the iterator changed.
Note 2: this method for <code>eq = df['days'] == i</code> is overkill because it can be replaced by the one-liner:</p>
<pre><code>df['run_sum_eq'] = df['days'].map(df.groupby('days')['amount'].sum())
</code></pre>
<p>which is simply groping by <code>days</code>, i.e. it corresponds to a <em>regular</em> PARTITION BY calculation.</p>
<p>This last method is equivalent to:</p>
<pre><code>df['days'].map(df.groupby('days')['amount'].agg(pd.Series.sum))
</code></pre>
<p>which leads me to think that I could use a lambda in the <code>agg()</code> where I can include the predicate:</p>
<pre><code>df['days'].map(df.groupby('days')['amount'].agg([('amount', lambda val: ???)]))
</code></pre>
<p>and this is where I get stuck.</p>
|
<python><pandas><group-by>
|
2024-01-19 14:42:45
| 1
| 459
|
Domenico Spidy Tamburro
|
77,846,779
| 5,679,047
|
Dependent data validation without allowing arbitrary inputs when previous column is blank (Excel, xlsxwriter)
|
<p>I have a Python script that produces an Excel file with columns that can be used to input chapters, sections, and subsections from a book that SMEs can use to write citations. I need to be able to process these citations programmatically, so I use data validation to ensure that they enter the exact text that the program is expecting.</p>
<p>It works almost perfectly, but it has a flaw: the user can enter arbitrary input in the section column if the chapter column is unfilled, and in the subsection column if the section column is not filled. How can I modify this script so that if the chapter column is blank, the section column must be blank, and same for the subsection column if the section column is blank?</p>
<p>MWE follows:</p>
<pre><code>import xlsxwriter
from xlsxwriter.utility import xl_range_formula, xl_rowcol_to_cell
N = 10
chapters = {'chapter 1 (1)': ['section 1 (1.1)', 'section 2 (1.2)'],
            'chapter 2 (2)': ['section 1 (2.1)', 'section 2 (2.2)']}
all_sections = {'section 1 (1.1)': ['subsection 1 (1.1.1)', 'subsection 2 (1.1.2)'],
'section 2 (1.2)': ['subsection 1 (1.2.1)', 'subsection 2 (1.2.2)'],
'section 1 (2.1)': ['subsection 1 (2.1.1)', 'subsection 2 (2.1.2)'],
'section 2 (2.2)': ['subsection 1 (2.2.1)', 'subsection 2 (2.2.2)']}
key = 'Book'
workbook = xlsxwriter.Workbook('dependent_dv_mwe.xlsx')
main_worksheet = workbook.add_worksheet('Evaluation')
chapter_worksheet = workbook.add_worksheet('Chapters')
section_worksheet = workbook.add_worksheet('Sections')
keys_worksheet = workbook.add_worksheet('Keys')
chapter_start_col = 0
section_start_col = 0
chapter_max_row = 0
section_max_row = 0
keys_worksheet.write(0, 0, key)
for col, (chapter, sections) in enumerate(chapters.items()):
chapter_worksheet.write(0, chapter_start_col + col, chapter)
for j, section in enumerate(sections):
chapter_worksheet.write(j + 1, chapter_start_col + col, section)
chapter_max_row = max(chapter_max_row, j + 1)
workbook.define_name(key, xl_range_formula('Chapters',
0, chapter_start_col,
0, chapter_start_col + col))
chapter_start_col = chapter_start_col + col + 1
for col, (section, subsections) in enumerate(all_sections.items()):
section_worksheet.write(0, section_start_col + col, section)
for j, subsection in enumerate(subsections):
section_worksheet.write(j + 1, section_start_col + col, subsection)
section_max_row = max(section_max_row, j + 1)
section_start_col = section_start_col + col + 1
# main_worksheet.data_validation(
# 'A1', {'validate': 'list',
# 'source': '='
# + xl_range_formula('Keys', 0, 0, len(all_chapters), 0),
# 'ignore_blank': True}
# )
main_worksheet.write(xl_rowcol_to_cell(0, 0),
'Chapter')
main_worksheet.write(xl_rowcol_to_cell(0, 1),
'Section')
main_worksheet.write(xl_rowcol_to_cell(0, 2),
'Subsection')
for j in range(1, N + 1):
cell1 = xl_rowcol_to_cell(j, 0)
cell2 = xl_rowcol_to_cell(j, 1)
cell3 = xl_rowcol_to_cell(j, 2)
main_worksheet.data_validation(cell1, {'validate': 'list',
'source': '=%s' % key,
'ignore_blank': True})
main_worksheet.data_validation(
cell2,
{'validate': 'list',
'source': '=INDEX(%s,,MATCH(%s, %s, 0))'
% (xl_range_formula('Chapters', 1, 0,
chapter_max_row,
chapter_start_col),
cell1,
xl_range_formula('Chapters', 0, 0, 0,
chapter_start_col)),
'ignore_blank': True}
)
main_worksheet.data_validation(
cell3, {'validate': 'list',
'source': '=INDEX(%s,,MATCH(%s, %s, 0))'
% (xl_range_formula('Sections', 1, 0,
section_max_row,
section_start_col),
cell2,
xl_range_formula('Sections', 0, 0, 0,
section_start_col)),
'ignore_blank': True}
)
workbook.close()
</code></pre>
|
<python><excel><xlsxwriter>
|
2024-01-19 14:23:54
| 1
| 681
|
Zorgoth
|
77,846,721
| 893,254
|
How can I implement a match or if statement based on an enum's current value in Python?
|
<p>I have the following enum defined in some Python code:</p>
<pre><code>from enum import Enum
class TestEnum(Enum):
MY_VALUE=1
</code></pre>
<p>I want to implement a <code>__repr__</code> function for this enum.</p>
<ul>
<li>Actually, it doesn't have to be <code>__repr__</code>; this is just for convenience when using <code>str()</code> conversion.</li>
</ul>
<pre><code>class TestEnum(Enum):
    MY_VALUE=1

    def __repr__(self) -> str:
match self:
case TestEnum.MY_VALUE:
return 'my_value'
case _:
return 'INVALID'
test_enum = TestEnum.MY_VALUE
print(str(test_enum))
</code></pre>
<ul>
<li>The idea of using <code>match self:</code> comes from Rust; in Python it is valid syntax only from 3.10 onwards (structural pattern matching).</li>
</ul>
<p>This code outputs the following:</p>
<pre><code>TestEnum.MY_VALUE
</code></pre>
<p>Which is not what I was hoping for.</p>
<p>Can what I am trying to achieve be achieved with Python?</p>
|
<python><enums>
|
2024-01-19 14:15:26
| 0
| 18,579
|
user2138149
|
77,846,715
| 9,251,158
|
Monkey-patching OpenTimelineIO adapter to import Final Cut Pro XML
|
<p>I have several video projects from Final Cut Pro that I want to use in KdenLive. I found the OpenTimelineIO project and it would solve all my problems. I installed with</p>
<pre class="lang-bash prettyprint-override"><code>$ python3 -m pip install opentimelineio
...
$ python3 -m pip show opentimelineio
Name: OpenTimelineIO
Version: 0.15.0
</code></pre>
<p>I tried the sample code provided on <a href="https://github.com/AcademySoftwareFoundation/OpenTimelineIO" rel="nofollow noreferrer">GitHub</a>:</p>
<pre class="lang-py prettyprint-override"><code>import opentimelineio as otio
timeline = otio.adapters.read_from_file("/path/to/file.fcpxml")
for clip in timeline.find_clips():
print(clip.name, clip.duration())
</code></pre>
<p>and I get the error:</p>
<pre class="lang-bash prettyprint-override"><code> File "~/Library/Python/3.8/lib/python/site-packages/opentimelineio_contrib/adapters/fcpx_xml.py", line 998, in _format_id_for_clip
resource = self._compound_clip_by_id(
AttributeError: 'NoneType' object has no attribute 'find'
</code></pre>
<p>Following <a href="https://stackoverflow.com/questions/76725224/attributeerror-nonetype-object-has-no-attribute-find-when-converting-with">"AttributeError: 'NoneType' object has no attribute 'find'" when converting with OpenTimelineIO</a> , I monkey-patch the source code, changing around line 991:</p>
<pre class="lang-py prettyprint-override"><code>def _format_id_for_clip(self, clip, default_format):
if not clip.get("ref", None) or clip.tag == "gap":
return default_format
resource = self._asset_by_id(clip.get("ref"))
if resource is None:
resource = self._compound_clip_by_id(
clip.get("ref")
).find("sequence")
</code></pre>
<p>To:</p>
<pre class="lang-py prettyprint-override"><code>def _format_id_for_clip(self, clip, default_format):
if not clip.get("ref", None) or clip.tag == "gap":
return default_format
resource = self._asset_by_id(clip.get("ref"))
if resource is None:
resource = self._compound_clip_by_id(
clip.get("ref")
)
if resource is None:
return default_format
else:
resource = resource.find("sequence")
return resource.get("format", default_format)
</code></pre>
<p>Then I get another error:</p>
<pre class="lang-bash prettyprint-override"><code> File "/usr/local/lib/python3.11/site-packages/opentimelineio_contrib/adapters/fcpx_xml.py", line 1054, in _format_frame_duration
total, rate = media_format.get("frameDuration").split("/")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'split'
</code></pre>
<p>The offending lines are:</p>
<pre class="lang-py prettyprint-override"><code># --------------------
# time helpers
# --------------------
def _format_frame_duration(self, format_id):
media_format = self._format_by_id(format_id)
total, rate = media_format.get("frameDuration").split("/")
rate = rate.replace("s", "")
return total, rate
</code></pre>
<p>So I printed details on clips, and it seems most are <code>100/2500s</code>, so I return that:</p>
<pre class="lang-py prettyprint-override"><code>def _format_frame_duration(self, format_id):
media_format = self._format_by_id(format_id)
print(media_format)
print(dir(media_format))
try:
print(media_format.__dict__)
print(media_format.__dict__())
except AttributeError:
pass
print([attr for attr in dir(media_format) if attr[:2] + attr[-2:] != '____' and not callable(getattr(media_format,attr))])
print(media_format.attrib)
print(media_format.tag)
print(media_format.tail)
print(media_format.text)
if None is media_format.get("frameDuration"):
return "100", "2500"
total, rate = media_format.get("frameDuration").split("/")
rate = rate.replace("s", "")
return total, rate
</code></pre>
<p>And then that command runs, but the next throws an error:</p>
<pre class="lang-bash prettyprint-override"><code> for clip in timeline.find_clips():
^^^^^^^^^^^^^^^^^^^
AttributeError: 'opentimelineio._otio.SerializableCollection' object has no attribute 'find_clips'
</code></pre>
<p>I try running the import from Kdenlive and get this error:</p>
<pre class="lang-bash prettyprint-override"><code> File "/usr/local/lib/python3.11/dist-packages/opentimelineio_contrib/adapters/fcpx_xml.py", line 938, in _timing_clip
while clip.tag not in ("clip", "asset-clip", "ref-clip"):
^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'tag'
</code></pre>
<p><a href="https://www.dropbox.com/scl/fi/7xg18lh4a6ehy9k3f442a/Info.fcpxml?rlkey=v0o9b59lhn1mgl7qxnuw6tzsy&dl=0" rel="nofollow noreferrer">Here is a Dropbox link</a> to a complete FCP XML file that causes this error.</p>
<p>I <a href="https://github.com/AcademySoftwareFoundation/OpenTimelineIO/issues/1661" rel="nofollow noreferrer">submitted an issue</a> on GitHub about the second error that I monkey-patched, and it has had no activity for 3 months. These issues raise the bar quite a bit and require some knowledge of OpenTimelineIO, so I ask for help here.</p>
<p>How can I further monkey-patch the OpenTimelineIO code to convert this project to Kdenlive?</p>
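<p>For reference, the defensive <code>frameDuration</code> parse from my monkey-patch can be factored into a small standalone helper (a sketch using only <code>xml.etree</code>; the <code>100/2500s</code> fallback is my own assumption for a 25 fps project):</p>

```python
import xml.etree.ElementTree as ET

# Assumption: a 25 fps project, i.e. a "100/2500s" frame duration,
# as observed by printing the format elements above.
DEFAULT_FRAME_DURATION = ("100", "2500")

def safe_frame_duration(media_format):
    """Parse a frameDuration such as '100/2500s' into (total, rate).

    Returns DEFAULT_FRAME_DURATION when the format element is missing
    or carries no frameDuration attribute, instead of raising.
    """
    if media_format is None:
        return DEFAULT_FRAME_DURATION
    value = media_format.get("frameDuration")
    if value is None:
        return DEFAULT_FRAME_DURATION
    total, rate = value.split("/")
    return total, rate.rstrip("s")

fmt = ET.fromstring('<format id="r1" frameDuration="1001/30000s"/>')
print(safe_frame_duration(fmt))   # ('1001', '30000')
print(safe_frame_duration(None))  # ('100', '2500')
```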
|
<python><python-3.x><xml><finalcut>
|
2024-01-19 14:14:30
| 2
| 4,642
|
ginjaemocoes
|
77,846,541
| 16,607,067
|
Tags are not being saved when using django-parler
|
<p>I am using <code>django-parler</code> for translation and <code>django-taggit</code> for adding tags. But when I put the tags field inside the translated fields (so the tags can differ per language), the tags are not saved on the admin page.<br />
<strong>models.py</strong></p>
<pre class="lang-py prettyprint-override"><code>class News(models.Model):
translations = TranslatedFields(
title=models.CharField(max_length=255),
content=RichTextUploadingField(),
tags=TaggableManager(),
slug=models.SlugField(max_length=255, db_index=True),
)
category = models.ForeignKey(Category)
</code></pre>
|
<python><django><django-taggit><django-parler>
|
2024-01-19 13:45:02
| 1
| 439
|
mirodil
|
77,846,493
| 10,413,428
|
Best Practice to install a temporary QTranslator for subpart of QApplication
|
<p>I use the following code to setup translation when setting up my PySide6 application:</p>
<pre class="lang-py prettyprint-override"><code>import sys

from PySide6.QtCore import QTranslator
from PySide6.QtWidgets import QApplication

app = QApplication(sys.argv)
translator_ui = QTranslator()
if translator_ui.load(f"lang_{config.language.ui}", "translations"):
    app.installTranslator(translator_ui)
</code></pre>
<p>Now I have the requirement that a PDF report generated by the application can be exported in a different language than the UI. So I marked all strings in the report generator as translatable and continued.</p>
<p>After this I came up with the following solution for the "temporary language switch".</p>
<pre class="lang-py prettyprint-override"><code>existing_app = QApplication.instance()
if existing_app:
existing_app.removeTranslator(translator_ui)
translator_report = QTranslator()
if translator_report.load(f"lang_{config.language.report}", "translations"):
existing_app.installTranslator(translator_report)
report.generate_document()
report.save_document()
translator_ui = QTranslator()
if translator_ui.load(f"lang_{config.language.ui}", "translations"):
existing_app.installTranslator(translator_ui)
</code></pre>
<p>This works fine, but I am not sure whether this is a valid approach for switching the whole application's translator just for the generation of a report...</p>
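<p>If the pattern is valid, I would probably wrap it in a small context manager so the report translator cannot leak when <code>generate_document()</code> raises (a sketch; <code>app</code> stands for the running <code>QApplication</code> instance):</p>

```python
from contextlib import contextmanager

@contextmanager
def temporary_translator(app, translator):
    """Install `translator` for the duration of the block, then remove it.

    Qt searches translators in reverse installation order, so while this
    translator is installed it shadows the UI translator automatically.
    """
    app.installTranslator(translator)
    try:
        yield translator
    finally:
        app.removeTranslator(translator)
```

<p>Usage would be <code>with temporary_translator(app, translator_report): report.generate_document()</code>. Because the most recently installed translator is consulted first, removing and reinstalling the UI translator may not even be necessary.</p>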
|
<python><qt><pyside6><qt6>
|
2024-01-19 13:37:51
| 1
| 405
|
sebwr
|
77,846,470
| 6,501,203
|
Polars pairwise sum of array column
|
<p>I just got started with Polars (Python) so this may be an ignorant question. I have a DataFrame in which one of the columns (<code>series</code>) contains a numpy array of length 18. I would like to group by the <code>group</code> column and do a pairwise-sum aggregation on the series column, but I can't figure out a good way to do that in Polars. I can, of course, just use <code>map_elements</code> and <code>np.sum</code> the arrays (as in the example) but I'm hoping there is a way to optimize it.</p>
<p>Here is my current implementation which achieves the desired effect but I don't think it is optimal because it uses map_elements. Is there a polars expression that achieve the same thing or is this the best I can do (without learning Rust, which I will someday)?</p>
<pre><code>import polars as pl
import numpy as np
data = [
{'group': 1,
'series': np.array([ 2398, 2590, 3000, 3731, 3986, 4603, 4146, 4325, 6068,
6028, 7486, 7759, 8323, 8961, 9598, 10236, 10873, 11511])},
{'group': 1,
'series': np.array([ 2398, 2590, 3000, 3731, 3986, 4603, 4146, 4325, 6068,
6028, 7486, 7759, 8323, 8961, 9598, 10236, 10873, 11511])},
{'group': 2,
'series': np.array([1132, 1269, 1452, 1687, 1389, 1655, 1532, 1661, 1711, 1528, 1582,
1638, 1603, 1600, 1597, 1594, 1591, 1588])},
{'group': 3,
'series': np.array([ 2802, 3065, 3811, 4823, 4571, 4817, 4668, 5110, 6920,
7131, 10154, 11138, 11699, 12840, 13981, 15123, 16264, 17405])},
]
df = pl.DataFrame(data)
# this performs the desired aggregation (pairwise sum of 'series' arrays)
# sums first two rows together (group 1), leaves others unchanged
df.group_by('group').agg(
pl.col('series').map_elements(lambda x: np.sum(x.to_list(), axis=0))
).to_dicts()
'''
desired output
group series
i64 object
2 [1132 1269 1452 1687 1389 1655 1532 1661 1711 1528 1582 1638 1603 1600
1597 1594 1591 1588]
1 [ 4796 5180 6000 7462 7972 9206 8292 8650 12136 12056 14972 15518
16646 17922 19196 20472 21746 23022]
3 [ 2802 3065 3811 4823 4571 4817 4668 5110 6920 7131 10154 11138
11699 12840 13981 15123 16264 17405]
'''
</code></pre>
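<p>To pin down the semantics (not as the optimized solution I am looking for): the aggregation is just an element-wise sum of equal-length sequences, i.e. the pure-Python equivalent of the <code>np.sum(..., axis=0)</code> call above:</p>

```python
def pairwise_sum(arrays):
    """Element-wise sum of equal-length sequences: [[1, 2], [10, 20]] -> [11, 22]."""
    return [sum(values) for values in zip(*arrays)]

print(pairwise_sum([[2398, 2590], [2398, 2590]]))  # [4796, 5180]
```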
<p>Thank you in advance for any help.</p>
|
<python><dataframe><python-polars>
|
2024-01-19 13:35:04
| 2
| 370
|
Alec Daling
|
77,846,457
| 1,581,090
|
How can I fix Python video playing on Ubuntu using multiprocessing?
|
<p>On <a href="https://en.wikipedia.org/wiki/Ubuntu_version_history#Ubuntu_22.04_LTS_(Jammy_Jellyfish)" rel="nofollow noreferrer">Ubuntu 22.04</a> (Jammy Jellyfish), I am trying to play a video clip using the following code:</p>
<pre><code>import sys
import multiprocessing
from moviepy.editor import VideoFileClip
video_path = sys.argv[1]
def process_play_video(clip: VideoFileClip, video_fps: int):
clip.preview(fps=video_fps)
clip = VideoFileClip(str(video_path))
video_fps = 15
play_directly = True
if play_directly:
process_play_video(clip, video_fps)
else:
process1 = multiprocessing.Process(target=process_play_video,
args=(clip, video_fps))
process1.start()
process1.join()
clip.close()
</code></pre>
<p>I have a video clip with the following video format:</p>
<pre class="lang-none prettyprint-override"><code>Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'output_video.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.76.100
Duration: 00:10:00.03, start: 0.000000, bitrate: 2155 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 1954 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
Metadata:
handler_name : VideoHandler
vendor_id : [0][0][0][0]
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 192 kb/s (default)
Metadata:
handler_name : SoundHandler
vendor_id : [0][0][0][0]
</code></pre>
<p>which can be replayed without problems when I play the video directly (<code>play_directly = True</code>). However, when I start it in its own process (<code>play_directly = False</code>), playing the same video file does not work: I only see the first frame and nothing else happens.</p>
<p>I also have a different video file that can be played in both ways.</p>
<p>How can I figure out why this is happening and how can I fix it? It has to work with <code>moviepy.editor.VideoFileClip</code>, and it has to work with <code>multiprocessing</code> as shown in the code example.</p>
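<p>One variant I have not fully tested is to open the clip inside the child process instead of passing the <code>VideoFileClip</code> object across, so no ffmpeg reader or preview state created in the parent is inherited by the child:</p>

```python
import multiprocessing
import sys

def play_video(video_path, video_fps):
    # Import and construct the clip in the child, after the fork/spawn,
    # so the child owns all of its reader and preview state.
    from moviepy.editor import VideoFileClip
    clip = VideoFileClip(str(video_path))
    try:
        clip.preview(fps=video_fps)
    finally:
        clip.close()

def main(video_path):
    # Only the path (a plain string) crosses the process boundary.
    process = multiprocessing.Process(target=play_video, args=(video_path, 15))
    process.start()
    process.join()

if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv[1])
```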
|
<python><video><multiprocessing>
|
2024-01-19 13:32:57
| 0
| 45,023
|
Alex
|
77,846,449
| 9,462,829
|
Get perimeter out of irregular group of shapes
|
<p>I'm working on groups of shapes (think groups of city blocks) that are almost contiguous, like this:</p>
<p><a href="https://i.sstatic.net/wMY7o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wMY7o.png" alt="enter image description here" /></a></p>
<p>What I'd like to do is to get the shape of the outer perimeter of this group. With more regular forms, this was almost working:</p>
<pre><code>per_df = gpd.GeoSeries(df.geometry.unary_union.convex_hull).boundary
</code></pre>
<p>Bringing something like this:</p>
<p><a href="https://i.sstatic.net/d89Rq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d89Rq.png" alt="enter image description here" /></a></p>
<p>But on an irregular shape it brings this:</p>
<p><a href="https://i.sstatic.net/yk74p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yk74p.png" alt="enter image description here" /></a></p>
<p>Is there some way to "fuse" or join my shapes into one so it is easier to calculate its perimeter/boundary?</p>
<p>Here's a simpler reproducible example:</p>
<pre><code>import geopandas as gpd
from shapely.geometry import Polygon

p1 = Polygon([(0, 0), (10, 0), (10, 9.8), (0, 9.8)])
p2 = Polygon([(10.2, 10), (20, 10), (20, 20), (10.2, 20)])
p3 = Polygon([(10, 10), (9.8, 20), (0, 10)])
df = gpd.GeoDataFrame(geometry=[p1, p2, p3])
per_df = gpd.GeoSeries(df.geometry.unary_union.convex_hull).boundary
ax = df.plot()
</code></pre>
<p>I'd like to get the perimeter of this group of shapes, even though there's a bit of separation between them:
<a href="https://i.sstatic.net/dfzJ8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dfzJ8.png" alt="enter image description here" /></a></p>
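<p>To make the "fusing" I am imagining concrete, here is the reproducible example with a buffer/union/de-buffer step, which merges shapes whose gaps are narrower than twice the buffer distance (<code>eps = 0.5</code> is an assumed upper bound on the gap width):</p>

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

p1 = Polygon([(0, 0), (10, 0), (10, 9.8), (0, 9.8)])
p2 = Polygon([(10.2, 10), (20, 10), (20, 20), (10.2, 20)])
p3 = Polygon([(10, 10), (9.8, 20), (0, 10)])

eps = 0.5  # assumption: every gap between blocks is narrower than 2 * eps
fused = unary_union([g.buffer(eps) for g in (p1, p2, p3)]).buffer(-eps)

print(fused.geom_type)       # a single Polygon once the gaps have closed
perimeter = fused.exterior   # outer boundary; interior holes are ignored
```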
<p>Thanks!</p>
|
<python><geopandas><shapely>
|
2024-01-19 13:32:09
| 3
| 6,148
|
Juan C
|
77,846,394
| 9,872,200
|
Python DB data pull loop
|
<p>I want to loop through a script that pulls data from a database one month at a time, where I am currently updating the date parameter (input) and the output file names by hand. I do not know how to structure the loop so it iterates through the script based on lists of dates/file names. Due to the size and nature of the data pulls, it does not make sense to pull everything at once and split it out later in the code.</p>
<pre><code>import pandas as pd
import numpy as np
import datetime
import openpyxl
print(datetime.datetime.now())
month = '2022-01-31' # update date here, .xlsx filename and sheet name from jan to feb to march, etc and then the corresponding datetime from 2022-01-31 to 2022-02-28, etc, etc
# I want to iterate through a series of lists as solution:
#potential variable lists:
#month = ['2022-01-31', '2022-02-28', '2022-03-31'] etc.. this populates db/sql injection with each loop through
#month_file_name = ['jan','feb','mar'] etc, this populates filenames with each loop through
sql = pd.read_sql('''select col1, col2, col3 from schema.table where table.asofdate = to_date(:month, 'YYYY-MM-DD')''',sql_engine, params={'month':month})
writer = pd.ExcelWriter(r'\\desktop\output\output_jan.xlsx', engine = 'xlsxwriter')
sql.to_excel(writer, sheet_name='jan_recon', index = False)
writer.close()
print(datetime.datetime.now())
</code></pre>
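<p>To make the looping structure concrete without the database, this is the shape I am imagining, with the pull and write steps injected as callables (in the real script <code>pull</code> would wrap <code>pd.read_sql</code> and <code>write</code> would wrap <code>pd.ExcelWriter</code>):</p>

```python
def run_monthly_pulls(months, file_names, pull, write):
    """Run one data pull per (month, file name) pair.

    `pull(month)` returns a data frame; `write(frame, path, sheet)`
    persists it. Injecting both keeps the loop itself testable.
    """
    for month, name in zip(months, file_names):
        frame = pull(month)
        write(frame, rf"\\desktop\output\output_{name}.xlsx", f"{name}_recon")

months = ["2022-01-31", "2022-02-28", "2022-03-31"]
file_names = ["jan", "feb", "mar"]
```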
|
<python><sql><pandas><database>
|
2024-01-19 13:23:50
| 1
| 513
|
John
|